--- a/FEATURES Mon Oct 19 14:33:59 2009 +0200
+++ b/FEATURES Wed Jan 13 00:04:47 2010 +0100
@@ -4,64 +4,76 @@
$Id$
+vim: spelllang=en spell
+
-------------------------------------------------------------------------------
General Features:
-* Runs as kernel module for Linux 2.6.
+* EtherCAT master implementation conforming to IEC/PAS 62407.
+ - Runs as kernel module for Linux 2.6.
+ - Multiple masters possible on one machine.
-* Comes with EtherCAT-capable network driver for serveral network interface
- cards.
- - Interrupt-less network driver operation.
- - Easy implementation of additional network drivers through common device
- interface of the master.
- - Runs even with PCMCIA cards.
+* EtherCAT-capable versions of standard Linux drivers for widespread
+ Ethernet devices.
+ - Interrupt-less operation of Ethernet devices.
+ - Easy implementation of additional Ethernet drivers through common device
+ interface.
+ - Operation possible with any device supported by the standard drivers,
+ including PCMCIA devices.
-* Supports multiple EtherCAT masters on one machine.
+* Supports any realtime environment through its independent architecture.
+ - RTAI, Xenomai, etc.
+ - Operation possible without any realtime extension at all.
-* Supports any realtime extension through independent architecture.
- - RTAI, IPIPE, ADEOS, etc.
- - Runs well even without realtime extension.
-
-* Common kernel interface for realtime modules using EtherCAT functionality.
- - Synchronous transmission and reception of EtherCAT frames.
+* Common API for realtime applications in kernel space (usage sketch below).
+ - Requesting and releasing masters.
+ - Dynamic slave configuration, even for slaves that are offline.
+ - Detailed configuration of the slaves' PDOs and SDOs.
+ - Creation of process data domains (see below). Registration of PDO entries
+ for exchange within a domain.
+ - Monitoring the states of masters, slave configurations and domains.
+ - SDO handlers for application-triggered CoE transfers (see below).
- Avoidance of unnecessary copy operations for process data.
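The following minimal sketch illustrates this kernel-space API: requesting a
master, creating a domain, configuring a slave and registering one PDO entry.
The vendor ID, product code and PDO entry index are placeholders, not a real
bus configuration:

    /* Minimal application sketch (placeholder identifiers). */
    #include <linux/module.h>
    #include <linux/errno.h>
    #include "ecrt.h"

    static ec_master_t *master;
    static ec_domain_t *domain;
    static ec_slave_config_t *sc;
    static unsigned int off_out; /* offset of the entry in the domain image */

    static int __init app_init(void)
    {
        int ret;

        if (!(master = ecrt_request_master(0))) /* first master */
            return -EBUSY;
        if (!(domain = ecrt_master_create_domain(master)))
            goto out_release;
        /* alias 0, ring position 0; placeholder vendor ID / product code */
        if (!(sc = ecrt_master_slave_config(master, 0, 0,
                        0x00000002, 0x044c2c52)))
            goto out_release;
        /* register one PDO entry (placeholder 0x7000:01); the return value
         * is an offset into the domain's process data, not a pointer */
        ret = ecrt_slave_config_reg_pdo_entry(sc, 0x7000, 0x01, domain, NULL);
        if (ret < 0)
            goto out_release;
        off_out = ret;
        if (ecrt_master_activate(master))
            goto out_release;
        return 0;

    out_release:
        ecrt_release_master(master);
        return -EIO;
    }

    static void __exit app_exit(void)
    {
        ecrt_release_master(master);
    }

    module_init(app_init);
    module_exit(app_exit);
    MODULE_LICENSE("GPL");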
* Separating slave groups through domains.
- Handling of multiple slave groups with different sampling rates.
- Automatic calculation of process data mapping, FMMU- and sync manager
configuration within the domains.
+  - Process data exchange can be monitored per domain (see the sketch below).
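A cyclic task exchanging the process data of one domain could, as a sketch
(assuming the master, domain and offset from the sketch above), look like
this:

    /* One cycle of process data exchange for a single domain. */
    #include <linux/types.h>
    #include "ecrt.h"

    void cyclic_task(ec_master_t *master, ec_domain_t *domain,
            unsigned int off_out)
    {
        ec_domain_state_t ds;
        uint8_t *pd;

        ecrt_master_receive(master);     /* process received frames */
        ecrt_domain_process(domain);     /* evaluate the domain's datagrams */

        ecrt_domain_state(domain, &ds);  /* per-domain exchange monitoring */

        pd = ecrt_domain_data(domain);   /* start of the process data image */
        EC_WRITE_U8(pd + off_out, 0x0f); /* write an arbitrary output value */

        ecrt_domain_queue(domain);       /* re-queue the domain's datagrams */
        ecrt_master_send(master);        /* send all queued datagrams */
    }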
* Master finite state machine (FSM).
- - Bus monitoring during realtime operation.
- - Automatic reconfiguration of slaves on bus power failure during realtime
- operation.
- - Setting slave states during realtime operation.
+ - The same state machine runs both in idle mode and in realtime operation.
+ - Bus monitoring: Slave states are read cyclically. Automatic scanning of the
+ bus after a topology change.
+  - Automatic configuration of slaves if an application-layer state change is
+ requested.
-* Special IDLE mode, when master is not in use.
- - Automatic scanning of slaves upon topology changes.
- - Bus visualisation and EoE processing without realtime process connected.
+* Implementation of the CANopen over EtherCAT (CoE) mailbox protocol.
+ - Configuration of CoE-capable slaves.
+ - SDO information service (dictionary listing).
+  - SDO transfers via both the application interface and the command-line
+    tool (see the sketch below).
-* Implementation of the CANopen-over-EtherCAT (CoE) protocol.
- - Configuration of CoE-capable slaves via Sdo interface.
- - Sdo information service (dictionary listing).
- - Sdo access via the realtime interface.
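The SDO handlers mentioned above can be used roughly as follows; the object
index 0x2000:01, its size and the timeout are placeholders, and the sketch
assumes the SDO request calls of the 1.4 application interface:

    /* Application-triggered CoE transfer via an SDO request (sketch). */
    #include <linux/errno.h>
    #include "ecrt.h"

    static ec_sdo_request_t *req;

    /* Create the request on the slave configuration before activation. */
    int setup_sdo_request(ec_slave_config_t *sc)
    {
        req = ecrt_slave_config_create_sdo_request(sc, 0x2000, 0x01, 2);
        if (!req)
            return -ENOMEM;
        ecrt_sdo_request_timeout(req, 500); /* ms */
        return 0;
    }

    /* Poll the request from the cyclic task and trigger reads. */
    void poll_sdo(void)
    {
        switch (ecrt_sdo_request_state(req)) {
            case EC_REQUEST_BUSY:
                break; /* transfer still in progress */
            case EC_REQUEST_SUCCESS:
                /* result is available via ecrt_sdo_request_data(req) */
                ecrt_sdo_request_read(req); /* schedule the next read */
                break;
            default: /* unused or error: start / retry the read */
                ecrt_sdo_request_read(req);
                break;
        }
    }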
+* Implementation of the Ethernet over EtherCAT (EoE) mailbox protocol.
+ - Virtual network interface for any EoE-capable slave.
+  - Both a switched and a routed EoE network architecture are natively
+    supported and configurable with standard tools.
-* Implementation of the Ethernet-over-EtherCAT (EoE) protocol.
- - Creates virtual network devices that are automatically coupled to
- EoE-capable slaves.
- - Thus natively supports either a switched or a routed EoE network
- architecture with standard GNU/Linux tools.
+* Userspace command-line tool 'ethercat' (example invocation below).
+ - Detailed information about master, slaves, domains and bus configuration.
+ - Reading/Writing alias addresses.
+ - Listing slave configurations.
+ - Viewing process data.
+ - SDO download/upload; listing SDO dictionaries.
+ - Slave SII (EEPROM) access.
+ - Controlling application-layer states.
+ - Generation of slave description XML from existing slaves.
-* User space interface via a command-line tool 'ethercat'.
- - Detailed information about master, slaves and the bus configuration.
- - Slave SII reading and writing.
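For illustration, the slaves on the bus can be listed with the 'slaves'
sub-command; the other items above map to sub-commands in the same way (see
the tool's built-in help for names and options):

$ ethercat slaves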
+* Seamless integration in any GNU/Linux distribution.
+ - "Linux Standard Base"-compatible init script for master control.
+ - Master and Ethernet device configuration via sysconfig file.
-* Seamless integration in your favourite GNU/Linux distibution.
- - Master and network device configuration via sysconfig files.
- - "Linux standard base"-compatible init script for master control.
-
-* Virtual read-only network interface for debugging purposes and for
- monitoring the EtherCAT traffic (through Wireshark, or others).
+* Virtual read-only network interface for debugging and traffic monitoring
+ purposes (using Wireshark, etc.). No additional hardware necessary.
-------------------------------------------------------------------------------
--- a/INSTALL Mon Oct 19 14:33:59 2009 +0200
+++ b/INSTALL Wed Jan 13 00:04:47 2010 +0100
@@ -4,38 +4,39 @@
$Id$
+vim: spelllang=en spell tw=78
+
-------------------------------------------------------------------------------
Building and installing
=======================
-The build and installation procedure is described in section 2.1 in the
-documentation available from http://etherlab.org/en/ethercat.
+The complete build and installation procedure is described in the respective
+section of the documentation available from http://etherlab.org/en/ethercat.
-------------------------------------------------------------------------------
For the impatient: The procedure mainly consists of calling
$ ./configure
-$ make
-$ make modules
+$ make all modules
-(and as root)
+... and as root:
-# make install
-# make modules_install
+# make modules_install install
# depmod
-...and linking the init script and copying the sysconfig file from $PREFIX/etc
+... and linking the init script and copying the sysconfig file from $PREFIX/etc
to the appropriate locations and customizing the sysconfig file.
# ln -s ${PREFIX}/etc/init.d/ethercat /etc/init.d/ethercat
# cp ${PREFIX}/etc/sysconfig/ethercat /etc/sysconfig/ethercat
# vi /etc/sysconfig/ethercat
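Customizing the sysconfig file typically means selecting the Ethernet
device(s) the master shall use and the driver modules to load, for example
(the MAC address below is only a placeholder for the device to be used):

MASTER0_DEVICE="00:0e:0c:12:34:56"
DEVICE_MODULES="8139too"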
-The EtherCAT character device will be created with mode 0660 and group root by
-default. If you want to give normal users reading access, create a udev rule
-like this:
+Make sure that the 'udev' package is installed, so that the EtherCAT
+character devices are created automatically. The character devices will be
+created with mode 0660 and group root by default. If you want to give normal
+users reading access, create a udev rule like this:
# echo KERNEL==\"EtherCAT[0-9]*\", MODE=\"0664\" > /etc/udev/rules.d/99-EtherCAT.rules
--- a/Kbuild.in Mon Oct 19 14:33:59 2009 +0200
+++ b/Kbuild.in Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
@@ -46,10 +39,12 @@
script \
tool
+noinst_HEADERS = \
+ globals.h
+
EXTRA_DIST = \
- Doxyfile \
+ Doxyfile.in \
FEATURES \
- globals.h \
Kbuild.in \
README.EoE
@@ -71,7 +66,7 @@
svn2cl $(srcdir)
@SVNREV=`svnversion $(srcdir)` && \
$(MAKE) dist-bzip2 \
- distdir=$(PACKAGE)-$(VERSION)-$(BRANCH)-r$${SVNREV}
+ distdir=$(PACKAGE)-$(VERSION)-r$${SVNREV}
dist-hook:
if which svnversion >/dev/null 2>&1; then \
--- a/NEWS Mon Oct 19 14:33:59 2009 +0200
+++ b/NEWS Wed Jan 13 00:04:47 2010 +0100
@@ -2,10 +2,61 @@
$Id$
--------------------------------------------------------------------------------
+vim: spelllang=en spell
+
+-------------------------------------------------------------------------------
+
+Changes in version 1.4.1:
+
+* Fixed seg_size parameter when processing a CoE Upload Segment Response.
+* Added octet_string as an SDO data type.
+* Minor: Fixed datagram clean up.
+* Added r8169 driver for kernel 2.6.24 and 2.6.28.
+* Module symbol versions file for ec_master.ko is installed to
+ prefix/modules/ec_master.symvers.
Changes in version 1.4.0:
+* Fixed race condition in jiffy-based frame timeout calculation.
+* Fixed race condition concerning the ec_slave_config_state->operational flag.
+* Fixed wrong calculation of the expected working counter if the process data
+  of a domain spans several datagrams.
+* Fixed a kernel oops when a slave configuration is detached while the actual
+ configuration is in progress.
+* Fixed typo in logging output.
+* Removed 'bashisms' from init script ('function' keyword).
+* Fixed bug in e1000 drivers. Memory was allocated when sending the first
+ frame.
+* Modified licence headers to avoid conflicts with the GPL.
+* Restricted licence to GPLv2 only.
+* Fixed spelling of 'PDO', 'SDO' (all uppercase) and 'xx over EtherCAT'
+ (without hyphens).
+* Made domain pointer parameter of ecrt_domain_size() const.
+
+Changes in version 1.4.0-rc3:
+
+* Ported the master thread to the kthread interface.
+* Added missing semaphore up() in an ioctl(). In rare cases, the master
+ semaphore was not released.
+* Fixed duplicate display of supported mailbox protocols in the 'slaves'
+  command.
+* The SDO information service is only queried if the slave has the
+  corresponding SII bit set.
+* Added some missing header files in the command-line-tool code.
+* Removed unstable e100, forcedeth, and r8169 drivers.
+
+Changes in version 1.4.0-rc2:
+
+* Fixed a deadlock-causing race condition in thread signaling that occurred
+  when the master thread had no opportunity to run, but was to be killed
+  immediately after creation.
+* Added missing up() calls; their absence caused a semaphore not to be
+  released in some rare cases.
+* Minor fixes.
+* Removed some deprecated files.
+
+Changes in version 1.4.0-rc1:
+
* Realtime interface changes:
- Replaced ec_slave_t with ec_slave_config_t, separating the bus
configuration from the actual slaves. Therefore, renamed
@@ -20,18 +71,18 @@
offers the possibility to use a shared-memory region. Therefore,
added the domain methods ecrt_domain_size() and
ecrt_domain_external_memory().
- - Pdo entry registration functions do not return a process data pointer,
+ - PDO entry registration functions do not return a process data pointer,
but an offset in the domain's process data. In addition, an optional bit
position can be requested. This was necessary for the external domain
memory. An additional advantage is, that the returned offset is
immediately valid. If the domain's process data is allocated internally,
the start address can be retrieved with ecrt_domain_data().
- Replaced ecrt_slave_pdo_mapping/add/clear() with
- ecrt_slave_config_pdo_assign_add() to add a Pdo to a sync manager's Pdo
- assignment and ecrt_slave_config_pdo_mapping_add() to add a Pdo entry to a
- Pdo's mapping. ecrt_slave_config_pdos() is a convenience function
+ ecrt_slave_config_pdo_assign_add() to add a PDO to a sync manager's PDO
+ assignment and ecrt_slave_config_pdo_mapping_add() to add a PDO entry to a
+ PDO's mapping. ecrt_slave_config_pdos() is a convenience function
for both, that uses the new data types ec_pdo_info_t and
- ec_pdo_entry_info_t. Pdo entries, that are mapped with these functions
+ ec_pdo_entry_info_t. PDO entries, that are mapped with these functions
can now immediately be registered, even if the bus is offline.
- Renamed ec_bus_status_t, ec_master_status_t to ec_bus_state_t and
ec_master_state_t, respectively. Renamed ecrt_master_get_status() to
@@ -39,15 +90,15 @@
- Added ec_domain_state_t and ec_wc_state_t for a new output parameter
of ecrt_domain_state(). The domain state object does now contain
information, if the process data was exchanged completely.
- - Former "Pdo registration" meant Pdo entry registration in fact, therefore
+ - Former "PDO registration" meant PDO entry registration in fact, therefore
renamed ec_pdo_reg_t to ec_pdo_entry_reg_t and ecrt_domain_register_pdo()
to ecrt_slave_config_reg_pdo_entry().
- Removed ecrt_domain_register_pdo_range(), because it's functionality can
- be reached by specifying an explicit Pdo assignment/mapping and
- registering the mapped Pdo entries.
- - Added an Sdo access interface, working with Sdo requests. These can be
+ be reached by specifying an explicit PDO assignment/mapping and
+ registering the mapped PDO entries.
+ - Added an SDO access interface, working with SDO requests. These can be
scheduled for reading and writing during realtime operation.
- - Exported ecrt_slave_config_sdo(), the generic Sdo configuration function.
+ - Exported ecrt_slave_config_sdo(), the generic SDO configuration function.
- Removed the bus_state and bus_tainted flags from ec_master_state_t.
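As a sketch of the renamed registration interface above, several PDO entries
can also be registered in one go with an ec_pdo_entry_reg_t list; the vendor
IDs, product codes and entry indices are placeholders, and the list call
ecrt_domain_reg_pdo_entry_list() is assumed to be available:

    /* Register a list of PDO entries for one domain (sketch). */
    #include "ecrt.h"

    static unsigned int off_dig_out, off_ana_in;

    static const ec_pdo_entry_reg_t domain_regs[] = {
        /* alias, position, vendor ID, product code, index, subindex,
         * offset variable (optional bit position omitted) */
        {0, 0, 0x00000002, 0x044c2c52, 0x7000, 0x01, &off_dig_out},
        {0, 1, 0x00000002, 0x0bc03052, 0x6000, 0x11, &off_ana_in},
        {} /* list terminator */
    };

    int register_entries(ec_domain_t *domain)
    {
        return ecrt_domain_reg_pdo_entry_list(domain, domain_regs);
    }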
* Device interface changes:
- Moved device output parameter of ecdev_offer() to return value.
@@ -60,10 +111,10 @@
- Set the master's debug level.
- Show domain information.
- Show master information.
- - List Pdo assignment/mapping.
- - Write an Sdo entry.
- - List Sdo dictionaries.
- - Read an Sdo entry.
+ - List PDO assignment/mapping.
+ - Write an SDO entry.
+ - List SDO dictionaries.
+ - Read an SDO entry.
- Output a slave's SII contents.
- Write slave's SII contents.
- Show slaves.
@@ -72,11 +123,11 @@
* Removed include/ecdb.h.
* Using the timestamp counter is now optional (configure --enable-cycles),
because it is only available on Intel architectures.
-* Sdo dictionaries will now also be fetched in operation mode.
+* SDO dictionaries will now also be fetched in operation mode.
* SII write requests will now also be processed in operation mode.
-* Mapping of Pdo entries is now supported.
-* Current Pdo assignment/mapping is now read via CoE during bus scan, using
- direct Sdo access, independent of the dictionary.
+* Mapping of PDO entries is now supported.
+* Current PDO assignment/mapping is now read via CoE during bus scan, using
+ direct SDO access, independent of the dictionary.
* Network driver news:
- Added 8139too driver for kernel 2.6.22, thanks to Erwin Burgstaller.
- Added 8139too driver for kernel 2.6.23, thanks to Richard Hacker.
@@ -98,8 +149,8 @@
* Added support for slaves that do not support the LRW datagram type. Separate
domains have to be used for inputs and output.
* CoE implementation:
- - Use expedites transfer type for Sdos <= 4 byte (thanks to J. Mohre).
- - Allow gaps in Pdo mapping (thanks to R. Roesch).
+  - Use expedited transfer type for SDOs <= 4 bytes (thanks to J. Mohre).
+ - Allow gaps in PDO mapping (thanks to R. Roesch).
- Added some transfer timeouts.
- Ansynchronous handling of Emergency requests.
- Bugfixes.
--- a/README Mon Oct 19 14:33:59 2009 +0200
+++ b/README Wed Jan 13 00:04:47 2010 +0100
@@ -4,6 +4,10 @@
$Id$
+vim: spelllang=en spell tw=78
+
+-------------------------------------------------------------------------------
+
Contents:
1) General Information
2) Requirements
@@ -62,38 +66,31 @@
mode and EoE).
To avoid frame timeouts, deactivating DMA access for hard drives is
-recommented (hdparm -d0 <DEV>).
+recommended (hdparm -d0 <DEV>).
-------------------------------------------------------------------------------
5) License
==========
-Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
This file is part of the IgH EtherCAT Master.
-The IgH EtherCAT Master is free software; you can redistribute it
-and/or modify it under the terms of the GNU General Public License
-as published by the Free Software Foundation; either version 2 of the
-License, or (at your option) any later version.
+The IgH EtherCAT Master is free software; you can redistribute it and/or
+modify it under the terms of the GNU General Public License version 2, as
+published by the Free Software Foundation.
-The IgH EtherCAT Master is distributed in the hope that it will be
-useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
+The IgH EtherCAT Master is distributed in the hope that it will be useful, but
+WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+details.
-You should have received a copy of the GNU General Public License
-along with the IgH EtherCAT Master; if not, write to the Free Software
-Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+You should have received a copy of the GNU General Public License along with
+the IgH EtherCAT Master; if not, write to the Free Software Foundation, Inc.,
+51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-The right to use EtherCAT Technology is granted and comes free of
-charge under condition of compatibility of product made by
-Licensee. People intending to distribute/sell products based on the
-code, have to sign an agreement to guarantee that products using
-software based on IgH EtherCAT master stay compatible with the actual
-EtherCAT specification (which are released themselves as an open
-standard) as the (only) precondition to have the right to use EtherCAT
-Technology, IP and trade marks.
+Using the EtherCAT technology and brand is permitted in compliance with the
+industrial property and similar rights of Beckhoff Automation GmbH.
-------------------------------------------------------------------------------
--- a/README.EoE Mon Oct 19 14:33:59 2009 +0200
+++ b/README.EoE Wed Jan 13 00:04:47 2010 +0100
@@ -2,8 +2,12 @@
$Id$
+vim: spelllang=en spell tw=78
+
+-------------------------------------------------------------------------------
+
This file shall give additional information on how to set up a network
-environment with Ethernet-over-EtherCAT devices.
+environment with Ethernet over EtherCAT devices.
A virtual network interface will appear for every EoE-capable slave. The
interface naming scheme is either eoeXsY, where X is the master index and Y is
--- a/TODO Mon Oct 19 14:33:59 2009 +0200
+++ b/TODO Wed Jan 13 00:04:47 2010 +0100
@@ -6,11 +6,6 @@
-------------------------------------------------------------------------------
-Version 1.4.0:
-
-* Update documentation.
-* Check for possible race condition in jiffy-based frame timeout calculation.
-
Future issues:
* Distributed clocks.
@@ -28,12 +23,12 @@
* Interface/buffers for asynchronous domain IO.
* Make scanning and configuration run parallel (each).
* File access over EtherCAT (FoE).
+* Vendor-specific over EtherCAT (VoE).
* ethercat tool:
- Data type abbreviations.
- Add a -n (numeric) switch.
- Check for unwanted options.
-* Segmented Sdo downloads.
-* Get original driver for r8169.
+* Segmented SDO downloads.
Smaller issues:
--- a/configure.ac Mon Oct 19 14:33:59 2009 +0200
+++ b/configure.ac Wed Jan 13 00:04:47 2010 +0100
@@ -3,7 +3,7 @@
#------------------------------------------------------------------------------
AC_PREREQ(2.59)
-AC_INIT([ethercat],[1.4.0-rc1],[fp@igh-essen.com])
+AC_INIT([ethercat],[1.4.1],[fp@igh-essen.com])
AC_CONFIG_AUX_DIR([autoconf])
AM_INIT_AUTOMAKE([-Wall -Werror dist-bzip2])
AC_PREFIX_DEFAULT([/opt/etherlab])
@@ -14,11 +14,6 @@
# Global
#------------------------------------------------------------------------------
-branch=trunk
-
-AC_DEFINE_UNQUOTED(BRANCH, ["$branch"], [Subversion branch])
-AC_SUBST(BRANCH, [$branch])
-
AC_PROG_CXX
#------------------------------------------------------------------------------
@@ -134,116 +129,6 @@
AC_SUBST(KERNEL_8139TOO,[$kernel8139too])
#------------------------------------------------------------------------------
-# e100 driver
-#------------------------------------------------------------------------------
-
-AC_ARG_ENABLE([e100],
- AS_HELP_STRING([--enable-e100],
- [Enable e100 driver]),
- [
- case "${enableval}" in
- yes) enablee100=1
- ;;
- no) enablee100=0
- ;;
- *) AC_MSG_ERROR([Invalid value for --enable-e100])
- ;;
- esac
- ],
- [enablee100=0] # disabled by default
-)
-
-AM_CONDITIONAL(ENABLE_E100, test "x$enablee100" = "x1")
-AC_SUBST(ENABLE_E100,[$enablee100])
-
-AC_ARG_WITH([e100-kernel],
- AC_HELP_STRING(
- [--with-e100-kernel=<X.Y.Z>],
- [e100 kernel (only if differing)]
- ),
- [
- kernele100=[$withval]
- ],
- [
- kernele100=$linuxversion
- ]
-)
-
-if test "x${enablee100}" = "x1"; then
- AC_MSG_CHECKING([for kernel for e100 driver])
-
- kernels=`ls -1 ${srcdir}/devices/ | grep -oE "^e100-.*-" | cut -d "-" -f 2 | uniq`
- found=0
- for k in $kernels; do
- if test "$kernele100" = "$k"; then
- found=1
- fi
- done
- if test $found -ne 1; then
- AC_MSG_ERROR([kernel $kernele100 not available for e100 driver!])
- fi
-
- AC_MSG_RESULT([$kernele100])
-fi
-
-AC_SUBST(KERNEL_E100,[$kernele100])
-
-#------------------------------------------------------------------------------
-# forcedeth driver
-#------------------------------------------------------------------------------
-
-AC_ARG_ENABLE([forcedeth],
- AS_HELP_STRING([--enable-forcedeth],
- [Enable forcedeth driver]),
- [
- case "${enableval}" in
- yes) enableforcedeth=1
- ;;
- no) enableforcedeth=0
- ;;
- *) AC_MSG_ERROR([Invalid value for --enable-forcedeth])
- ;;
- esac
- ],
- [enableforcedeth=0] # disabled by default!
-)
-
-AM_CONDITIONAL(ENABLE_FORCEDETH, test "x$enableforcedeth" = "x1")
-AC_SUBST(ENABLE_FORCEDETH,[$enableforcedeth])
-
-AC_ARG_WITH([forcedeth-kernel],
- AC_HELP_STRING(
- [--with-forcedeth-kernel=<X.Y.Z>],
- [forcedeth kernel (only if differing)]
- ),
- [
- kernelforcedeth=[$withval]
- ],
- [
- kernelforcedeth=$linuxversion
- ]
-)
-
-if test "x${enableforcedeth}" = "x1"; then
- AC_MSG_CHECKING([for kernel for forcedeth driver])
-
- kernels=`ls -1 ${srcdir}/devices/ | grep -oE "^forcedeth-.*-" | cut -d "-" -f 2 | uniq`
- found=0
- for k in $kernels; do
- if test "$kernelforcedeth" = "$k"; then
- found=1
- fi
- done
- if test $found -ne 1; then
- AC_MSG_ERROR([kernel $kernelforcedeth not available for forcedeth driver!])
- fi
-
- AC_MSG_RESULT([$kernelforcedeth])
-fi
-
-AC_SUBST(KERNEL_FORCEDETH,[$kernelforcedeth])
-
-#------------------------------------------------------------------------------
# e1000 driver
#------------------------------------------------------------------------------
@@ -307,19 +192,19 @@
[Enable r8169 driver]),
[
case "${enableval}" in
- yes) enabler8169=1
- ;;
- no) enabler8169=0
+ yes) enable_r8169=1
+ ;;
+ no) enable_r8169=0
;;
*) AC_MSG_ERROR([Invalid value for --enable-r8169])
;;
esac
],
- [enabler8169=0] # disabled by default
-)
-
-AM_CONDITIONAL(ENABLE_R8169, test "x$enabler8169" = "x1")
-AC_SUBST(ENABLE_R8169,[$enabler8169])
+ [enable_r8169=0] # disabled by default
+)
+
+AM_CONDITIONAL(ENABLE_R8169, test "x$enable_r8169" = "x1")
+AC_SUBST(ENABLE_R8169,[$enable_r8169])
AC_ARG_WITH([r8169-kernel],
AC_HELP_STRING(
@@ -327,31 +212,31 @@
[r8169 kernel (only if differing)]
),
[
- kernelr8169=[$withval]
- ],
- [
- kernelr8169=$linuxversion
- ]
-)
-
-if test "x${enabler8169}" = "x1"; then
+ kernel_r8169=[$withval]
+ ],
+ [
+ kernel_r8169=$linuxversion
+ ]
+)
+
+if test "x${enable_r8169}" = "x1"; then
AC_MSG_CHECKING([for kernel for r8169 driver])
kernels=`ls -1 ${srcdir}/devices/ | grep -oE "^r8169-.*-" | cut -d "-" -f 2 | uniq`
found=0
for k in $kernels; do
- if test "$kernelr8169" = "$k"; then
+ if test "$kernel_r8169" = "$k"; then
found=1
fi
done
if test $found -ne 1; then
- AC_MSG_ERROR([kernel $kernelr8169 not available for r8169 driver!])
+ AC_MSG_ERROR([kernel $kernel_r8169 not available for r8169 driver!])
fi
- AC_MSG_RESULT([$kernelr8169])
-fi
-
-AC_SUBST(KERNEL_R8169,[$kernelr8169])
+ AC_MSG_RESULT([$kernel_r8169])
+fi
+
+AC_SUBST(KERNEL_R8169,[$kernel_r8169])
#------------------------------------------------------------------------------
# RTAI path (optional)
@@ -434,7 +319,7 @@
fi
#------------------------------------------------------------------------------
-# Ethernet-over-EtherCAT support
+# Ethernet over EtherCAT support
#------------------------------------------------------------------------------
AC_ARG_ENABLE([eoe],
--- a/devices/8139too-2.6.13-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/8139too-2.6.13-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/8139too-2.6.17-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/8139too-2.6.17-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/8139too-2.6.18-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/8139too-2.6.18-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/8139too-2.6.19-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/8139too-2.6.19-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/8139too-2.6.22-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/8139too-2.6.22-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/8139too-2.6.23-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/8139too-2.6.23-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/8139too-2.6.24-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/8139too-2.6.24-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/Kbuild.in Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/Kbuild.in Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
# vim: syntax=make
#
@@ -46,20 +39,6 @@
CFLAGS_$(EC_8139TOO_OBJ) = -DSVNREV=$(REV)
endif
-ifeq (@ENABLE_E100@,1)
- EC_E100_OBJ := e100-@KERNEL_E100@-ethercat.o
- obj-m += ec_e100.o
- ec_e100-objs := $(EC_E100_OBJ)
- CFLAGS_$(EC_E100_OBJ) = -DSVNREV=$(REV)
-endif
-
-ifeq (@ENABLE_FORCEDETH@,1)
- EC_FORCEDETH_OBJ := forcedeth-@KERNEL_FORCEDETH@-ethercat.o
- obj-m += ec_forcedeth.o
- ec_forcedeth-objs := $(EC_FORCEDETH_OBJ)
- CFLAGS_$(EC_FORCEDETH_OBJ) = -DSVNREV=$(REV)
-endif
-
ifeq (@ENABLE_E1000@,1)
obj-m += e1000/
endif
--- a/devices/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
@@ -51,13 +44,11 @@
8139too-2.6.24-ethercat.c \
8139too-2.6.24-orig.c \
Kbuild.in \
- e100-2.6.18-ethercat.c \
- e100-2.6.18-orig.c \
ecdev.h \
- forcedeth-2.6.17-ethercat.c \
- forcedeth-2.6.17-orig.c \
- forcedeth-2.6.19-ethercat.c \
- forcedeth-2.6.19-orig.c
+ r8169-2.6.24-ethercat.c \
+ r8169-2.6.24-orig.c \
+ r8169-2.6.28-ethercat.c \
+ r8169-2.6.28-orig.c
BUILT_SOURCES = \
Kbuild
@@ -70,15 +61,9 @@
if ENABLE_8139TOO
cp $(srcdir)/ec_8139too.ko $(DESTDIR)$(LINUX_MOD_PATH)
endif
-if ENABLE_E100
- cp $(srcdir)/ec_e100.ko $(DESTDIR)$(LINUX_MOD_PATH)
-endif
if ENABLE_E1000
$(MAKE) -C e1000 modules_install
endif
-if ENABLE_FORCEDETH
- cp $(srcdir)/ec_forcedeth.ko $(DESTDIR)$(LINUX_MOD_PATH)
-endif
if ENABLE_R8169
cp $(srcdir)/ec_r8169.ko $(DESTDIR)$(LINUX_MOD_PATH)
endif
--- a/devices/e100-2.6.18-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,3074 +0,0 @@
-/******************************************************************************
- *
- * $Id$
- *
- * Copyright (C) 2007 Florian Pose, Ingenieurgemeinschaft IgH
- *
- * This file is part of the IgH EtherCAT Master.
- *
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- *
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
- *
- *****************************************************************************/
-
-/**
- \file
- EtherCAT driver for e100-compatible NICs.
-*/
-
-/* Former documentation: */
-
-/*******************************************************************************
-
- Copyright(c) 1999 - 2005 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-
-*******************************************************************************/
-
-/*
- * e100.c: Intel(R) PRO/100 ethernet driver
- *
- * (Re)written 2003 by scott.feldman@intel.com. Based loosely on
- * original e100 driver, but better described as a munging of
- * e100, e1000, eepro100, tg3, 8139cp, and other drivers.
- *
- * References:
- * Intel 8255x 10/100 Mbps Ethernet Controller Family,
- * Open Source Software Developers Manual,
- * http://sourceforge.net/projects/e1000
- *
- *
- * Theory of Operation
- *
- * I. General
- *
- * The driver supports Intel(R) 10/100 Mbps PCI Fast Ethernet
- * controller family, which includes the 82557, 82558, 82559, 82550,
- * 82551, and 82562 devices. 82558 and greater controllers
- * integrate the Intel 82555 PHY. The controllers are used in
- * server and client network interface cards, as well as in
- * LAN-On-Motherboard (LOM), CardBus, MiniPCI, and ICHx
- * configurations. 8255x supports a 32-bit linear addressing
- * mode and operates at 33Mhz PCI clock rate.
- *
- * II. Driver Operation
- *
- * Memory-mapped mode is used exclusively to access the device's
- * shared-memory structure, the Control/Status Registers (CSR). All
- * setup, configuration, and control of the device, including queuing
- * of Tx, Rx, and configuration commands is through the CSR.
- * cmd_lock serializes accesses to the CSR command register. cb_lock
- * protects the shared Command Block List (CBL).
- *
- * 8255x is highly MII-compliant and all access to the PHY go
- * through the Management Data Interface (MDI). Consequently, the
- * driver leverages the mii.c library shared with other MII-compliant
- * devices.
- *
- * Big- and Little-Endian byte order as well as 32- and 64-bit
- * archs are supported. Weak-ordered memory and non-cache-coherent
- * archs are supported.
- *
- * III. Transmit
- *
- * A Tx skb is mapped and hangs off of a TCB. TCBs are linked
- * together in a fixed-size ring (CBL) thus forming the flexible mode
- * memory structure. A TCB marked with the suspend-bit indicates
- * the end of the ring. The last TCB processed suspends the
- * controller, and the controller can be restarted by issuing a CU
- * resume command to continue from the suspend point, or a CU start
- * command to start at a given position in the ring.
- *
- * Non-Tx commands (config, multicast setup, etc) are linked
- * into the CBL ring along with Tx commands. The common structure
- * used for both Tx and non-Tx commands is the Command Block (CB).
- *
- * cb_to_use is the next CB to use for queuing a command; cb_to_clean
- * is the next CB to check for completion; cb_to_send is the first
- * CB to start on in case of a previous failure to resume. CB clean
- * up happens in interrupt context in response to a CU interrupt.
- * cbs_avail keeps track of number of free CB resources available.
- *
- * Hardware padding of short packets to minimum packet size is
- * enabled. 82557 pads with 7Eh, while the later controllers pad
- * with 00h.
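- *
- *      A rough sketch (not the literal driver code, but mirroring
- *      e100_exec_cb() below) of queuing a new command into the CBL and
- *      kicking the CU, following the suspend-bit ordering:
- *
- *              cb = cb_to_use; cb_to_use = cb->next; cbs_avail--;
- *              prepare(cb);                    // fill the TCB or non-Tx CB
- *              cb->command |= cb_s;            // suspend at the new tail
- *              wmb();
- *              cb->prev->command &= ~cb_s;     // let the CU run past prev
- *              e100_exec_cmd(nic, nic->cuc_cmd, cb->dma_addr); // CU start/resume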
- *
- * IV. Receive
- *
- * The Receive Frame Area (RFA) comprises a ring of Receive Frame
- * Descriptors (RFD) + data buffer, thus forming the simplified mode
- * memory structure. Rx skbs are allocated to contain both the RFD
- * and the data buffer, but the RFD is pulled off before the skb is
- * indicated. The data buffer is aligned such that encapsulated
- * protocol headers are u32-aligned. Since the RFD is part of the
- * mapped shared memory, and completion status is contained within
- * the RFD, the RFD must be dma_sync'ed to maintain a consistent
- * view from software and hardware.
- *
- * Under typical operation, the receive unit (RU) is started once,
- * and the controller happily fills RFDs as frames arrive. If
- * replacement RFDs cannot be allocated, or the RU goes non-active,
- * the RU must be restarted. Frame arrival generates an interrupt,
- * and Rx indication and re-allocation happen in the same context,
- * therefore no locking is required. A software-generated interrupt
- * is issued from the watchdog to recover from a failed allocation
- * scenario where all Rx resources have been indicated and none
- * replaced.
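- *
- *      A rough sketch of indicating one received frame, mirroring
- *      e100_rx_indicate() below (not the literal driver code):
- *
- *              pci_dma_sync_single_for_cpu(...);       // make RFD status visible
- *              if (!(rfd->status & cb_complete))
- *                      return;                         // nothing to indicate yet
- *              pci_unmap_single(...);                  // release the data buffer
- *              skb_reserve(skb, sizeof(struct rfd));   // pull the RFD off
- *              skb_put(skb, actual_size);
- *              netif_receive_skb(skb);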
- *
- * V. Miscellaneous
- *
- * VLAN offloading of tagging, stripping and filtering is not
- * supported, but the driver will accommodate the extra 4-byte VLAN tag
- * for processing by upper layers. Tx/Rx Checksum offloading is not
- * supported. Tx Scatter/Gather is not supported. Jumbo Frames are
- * not supported (hardware limitation).
- *
- * MagicPacket(tm) WoL support is enabled/disabled via ethtool.
- *
- * Thanks to JC (jchapman@katalix.com) for helping with
- * testing/troubleshooting the development driver.
- *
- * TODO:
- * o several entry points race with dev->close
- * o check for tx-no-resources/stop Q races with tx clean/wake Q
- *
- * FIXES:
- * 2005/12/02 - Michael O'Donnell <Michael.ODonnell at stratus dot com>
- * - Stratus87247: protect MDI control register manipulations
- */
-
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/slab.h>
-#include <linux/delay.h>
-#include <linux/init.h>
-#include <linux/pci.h>
-#include <linux/dma-mapping.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/mii.h>
-#include <linux/if_vlan.h>
-#include <linux/skbuff.h>
-#include <linux/ethtool.h>
-#include <linux/string.h>
-#include <asm/unaligned.h>
-
-// EtherCAT includes
-#include "../globals.h"
-#include "ecdev.h"
-
-#define DRV_NAME "ec_e100"
-#define DRV_EXT "-NAPI"
-#define DRV_VERSION "3.5.10-k2"DRV_EXT
-#define DRV_DESCRIPTION "EtherCAT-capable Intel(R) PRO/100 Network Driver"
-#define PFX DRV_NAME ": "
-
-#define E100_WATCHDOG_PERIOD (2 * HZ)
-#define E100_NAPI_WEIGHT 16
-
-MODULE_DESCRIPTION(DRV_DESCRIPTION);
-MODULE_AUTHOR("Florian Pose <fp@igh-essen.com>");
-MODULE_LICENSE("GPL");
-MODULE_VERSION(DRV_VERSION ", master " EC_MASTER_VERSION);
-
-// EtherCAT variables
-static int ec_device_index = -1;
-static int ec_device_master_index = 0;
-struct net_device *e100_ec_netdev = NULL;
-unsigned int e100_device_index = 0;
-
-// EtherCAT module parameters
-module_param(ec_device_index, int, 0);
-module_param(ec_device_master_index, int, 0);
-MODULE_PARM_DESC(ec_device_index,
- "Index of the device reserved for EtherCAT.");
-MODULE_PARM_DESC(ec_device_master_index,
- "Index of the EtherCAT master to register the device.");
-
-void e100_ec_poll(struct net_device *);
-
-static int debug = 3;
-static int eeprom_bad_csum_allow = 0;
-module_param(debug, int, 0);
-module_param(eeprom_bad_csum_allow, int, 0);
-MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
-MODULE_PARM_DESC(eeprom_bad_csum_allow, "Allow bad eeprom checksums");
-#define DPRINTK(nlevel, klevel, fmt, args...) \
- (void)((NETIF_MSG_##nlevel & nic->msg_enable) && \
- printk(KERN_##klevel PFX "%s: %s: " fmt, nic->netdev->name, \
- __FUNCTION__ , ## args))
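-
-/* Example (hypothetical call site): with NETIF_MSG_HW set in
- * nic->msg_enable, the following expands to a printk() at KERN_ERR
- * level, prefixed with "ec_e100: <ifname>: <function>: ":
- *
- *   DPRINTK(HW, ERR, "Self-test failed: result=0x%08X\n", result);
- */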
-
-#define INTEL_8255X_ETHERNET_DEVICE(device_id, ich) {\
- PCI_VENDOR_ID_INTEL, device_id, PCI_ANY_ID, PCI_ANY_ID, \
- PCI_CLASS_NETWORK_ETHERNET << 8, 0xFFFF00, ich }
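-/* For example, INTEL_8255X_ETHERNET_DEVICE(0x1229, 0) expands to
- * { PCI_VENDOR_ID_INTEL, 0x1229, PCI_ANY_ID, PCI_ANY_ID,
- *   PCI_CLASS_NETWORK_ETHERNET << 8, 0xFFFF00, 0 },
- * i.e. it matches any 8255x-class Ethernet function with that device ID
- * and carries the ICH flag in driver_data. */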
-static struct pci_device_id e100_id_table[] = {
- INTEL_8255X_ETHERNET_DEVICE(0x1029, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1030, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1031, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1032, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1033, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1034, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1038, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1039, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103A, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103B, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103C, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103D, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103E, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x1050, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1051, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1052, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1053, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1054, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1055, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1056, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1057, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1059, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1064, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1065, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1066, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1067, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1068, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1069, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x106A, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x106B, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1091, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1092, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1093, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1094, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1095, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1209, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1229, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x2449, 2),
- INTEL_8255X_ETHERNET_DEVICE(0x2459, 2),
- INTEL_8255X_ETHERNET_DEVICE(0x245D, 2),
- INTEL_8255X_ETHERNET_DEVICE(0x27DC, 7),
- { 0, }
-};
-// prevent the module from being loaded automatically
-//MODULE_DEVICE_TABLE(pci, e100_id_table);
-
-enum mac {
- mac_82557_D100_A = 0,
- mac_82557_D100_B = 1,
- mac_82557_D100_C = 2,
- mac_82558_D101_A4 = 4,
- mac_82558_D101_B0 = 5,
- mac_82559_D101M = 8,
- mac_82559_D101S = 9,
- mac_82550_D102 = 12,
- mac_82550_D102_C = 13,
- mac_82551_E = 14,
- mac_82551_F = 15,
- mac_82551_10 = 16,
- mac_unknown = 0xFF,
-};
-
-enum phy {
- phy_100a = 0x000003E0,
- phy_100c = 0x035002A8,
- phy_82555_tx = 0x015002A8,
- phy_nsc_tx = 0x5C002000,
- phy_82562_et = 0x033002A8,
- phy_82562_em = 0x032002A8,
- phy_82562_ek = 0x031002A8,
- phy_82562_eh = 0x017002A8,
- phy_unknown = 0xFFFFFFFF,
-};
-
-/* CSR (Control/Status Registers) */
-struct csr {
- struct {
- u8 status;
- u8 stat_ack;
- u8 cmd_lo;
- u8 cmd_hi;
- u32 gen_ptr;
- } scb;
- u32 port;
- u16 flash_ctrl;
- u8 eeprom_ctrl_lo;
- u8 eeprom_ctrl_hi;
- u32 mdi_ctrl;
- u32 rx_dma_count;
-};
-
-enum scb_status {
- rus_ready = 0x10,
- rus_mask = 0x3C,
-};
-
-enum ru_state {
- RU_SUSPENDED = 0,
- RU_RUNNING = 1,
- RU_UNINITIALIZED = -1,
-};
-
-enum scb_stat_ack {
- stat_ack_not_ours = 0x00,
- stat_ack_sw_gen = 0x04,
- stat_ack_rnr = 0x10,
- stat_ack_cu_idle = 0x20,
- stat_ack_frame_rx = 0x40,
- stat_ack_cu_cmd_done = 0x80,
- stat_ack_not_present = 0xFF,
- stat_ack_rx = (stat_ack_sw_gen | stat_ack_rnr | stat_ack_frame_rx),
- stat_ack_tx = (stat_ack_cu_idle | stat_ack_cu_cmd_done),
-};
-
-enum scb_cmd_hi {
- irq_mask_none = 0x00,
- irq_mask_all = 0x01,
- irq_sw_gen = 0x02,
-};
-
-enum scb_cmd_lo {
- cuc_nop = 0x00,
- ruc_start = 0x01,
- ruc_load_base = 0x06,
- cuc_start = 0x10,
- cuc_resume = 0x20,
- cuc_dump_addr = 0x40,
- cuc_dump_stats = 0x50,
- cuc_load_base = 0x60,
- cuc_dump_reset = 0x70,
-};
-
-enum cuc_dump {
- cuc_dump_complete = 0x0000A005,
- cuc_dump_reset_complete = 0x0000A007,
-};
-
-enum port {
- software_reset = 0x0000,
- selftest = 0x0001,
- selective_reset = 0x0002,
-};
-
-enum eeprom_ctrl_lo {
- eesk = 0x01,
- eecs = 0x02,
- eedi = 0x04,
- eedo = 0x08,
-};
-
-enum mdi_ctrl {
- mdi_write = 0x04000000,
- mdi_read = 0x08000000,
- mdi_ready = 0x10000000,
-};
-
-enum eeprom_op {
- op_write = 0x05,
- op_read = 0x06,
- op_ewds = 0x10,
- op_ewen = 0x13,
-};
-
-enum eeprom_offsets {
- eeprom_cnfg_mdix = 0x03,
- eeprom_id = 0x0A,
- eeprom_config_asf = 0x0D,
- eeprom_smbus_addr = 0x90,
-};
-
-enum eeprom_cnfg_mdix {
- eeprom_mdix_enabled = 0x0080,
-};
-
-enum eeprom_id {
- eeprom_id_wol = 0x0020,
-};
-
-enum eeprom_config_asf {
- eeprom_asf = 0x8000,
- eeprom_gcl = 0x4000,
-};
-
-enum cb_status {
- cb_complete = 0x8000,
- cb_ok = 0x2000,
-};
-
-enum cb_command {
- cb_nop = 0x0000,
- cb_iaaddr = 0x0001,
- cb_config = 0x0002,
- cb_multi = 0x0003,
- cb_tx = 0x0004,
- cb_ucode = 0x0005,
- cb_dump = 0x0006,
- cb_tx_sf = 0x0008,
- cb_cid = 0x1f00,
- cb_i = 0x2000,
- cb_s = 0x4000,
- cb_el = 0x8000,
-};
-
-struct rfd {
- u16 status;
- u16 command;
- u32 link;
- u32 rbd;
- u16 actual_size;
- u16 size;
-};
-
-struct rx {
- struct rx *next, *prev;
- struct sk_buff *skb;
- dma_addr_t dma_addr;
-};
-
-#if defined(__BIG_ENDIAN_BITFIELD)
-#define X(a,b) b,a
-#else
-#define X(a,b) a,b
-#endif
-struct config {
-/*0*/ u8 X(byte_count:6, pad0:2);
-/*1*/ u8 X(X(rx_fifo_limit:4, tx_fifo_limit:3), pad1:1);
-/*2*/ u8 adaptive_ifs;
-/*3*/ u8 X(X(X(X(mwi_enable:1, type_enable:1), read_align_enable:1),
- term_write_cache_line:1), pad3:4);
-/*4*/ u8 X(rx_dma_max_count:7, pad4:1);
-/*5*/ u8 X(tx_dma_max_count:7, dma_max_count_enable:1);
-/*6*/ u8 X(X(X(X(X(X(X(late_scb_update:1, direct_rx_dma:1),
- tno_intr:1), cna_intr:1), standard_tcb:1), standard_stat_counter:1),
- rx_discard_overruns:1), rx_save_bad_frames:1);
-/*7*/ u8 X(X(X(X(X(rx_discard_short_frames:1, tx_underrun_retry:2),
- pad7:2), rx_extended_rfd:1), tx_two_frames_in_fifo:1),
- tx_dynamic_tbd:1);
-/*8*/ u8 X(X(mii_mode:1, pad8:6), csma_disabled:1);
-/*9*/ u8 X(X(X(X(X(rx_tcpudp_checksum:1, pad9:3), vlan_arp_tco:1),
- link_status_wake:1), arp_wake:1), mcmatch_wake:1);
-/*10*/ u8 X(X(X(pad10:3, no_source_addr_insertion:1), preamble_length:2),
- loopback:2);
-/*11*/ u8 X(linear_priority:3, pad11:5);
-/*12*/ u8 X(X(linear_priority_mode:1, pad12:3), ifs:4);
-/*13*/ u8 ip_addr_lo;
-/*14*/ u8 ip_addr_hi;
-/*15*/ u8 X(X(X(X(X(X(X(promiscuous_mode:1, broadcast_disabled:1),
- wait_after_win:1), pad15_1:1), ignore_ul_bit:1), crc_16_bit:1),
- pad15_2:1), crs_or_cdt:1);
-/*16*/ u8 fc_delay_lo;
-/*17*/ u8 fc_delay_hi;
-/*18*/ u8 X(X(X(X(X(rx_stripping:1, tx_padding:1), rx_crc_transfer:1),
- rx_long_ok:1), fc_priority_threshold:3), pad18:1);
-/*19*/ u8 X(X(X(X(X(X(X(addr_wake:1, magic_packet_disable:1),
- fc_disable:1), fc_restop:1), fc_restart:1), fc_reject:1),
- full_duplex_force:1), full_duplex_pin:1);
-/*20*/ u8 X(X(X(pad20_1:5, fc_priority_location:1), multi_ia:1), pad20_2:1);
-/*21*/ u8 X(X(pad21_1:3, multicast_all:1), pad21_2:4);
-/*22*/ u8 X(X(rx_d102_mode:1, rx_vlan_drop:1), pad22:6);
- u8 pad_d102[9];
-};
-
-#define E100_MAX_MULTICAST_ADDRS 64
-struct multi {
- u16 count;
- u8 addr[E100_MAX_MULTICAST_ADDRS * ETH_ALEN + 2/*pad*/];
-};
-
-/* Important: keep total struct u32-aligned */
-#define UCODE_SIZE 134
-struct cb {
- u16 status;
- u16 command;
- u32 link;
- union {
- u8 iaaddr[ETH_ALEN];
- u32 ucode[UCODE_SIZE];
- struct config config;
- struct multi multi;
- struct {
- u32 tbd_array;
- u16 tcb_byte_count;
- u8 threshold;
- u8 tbd_count;
- struct {
- u32 buf_addr;
- u16 size;
- u16 eol;
- } tbd;
- } tcb;
- u32 dump_buffer_addr;
- } u;
- struct cb *next, *prev;
- dma_addr_t dma_addr;
- struct sk_buff *skb;
-};
-
-enum loopback {
- lb_none = 0, lb_mac = 1, lb_phy = 3,
-};
-
-struct stats {
- u32 tx_good_frames, tx_max_collisions, tx_late_collisions,
- tx_underruns, tx_lost_crs, tx_deferred, tx_single_collisions,
- tx_multiple_collisions, tx_total_collisions;
- u32 rx_good_frames, rx_crc_errors, rx_alignment_errors,
- rx_resource_errors, rx_overrun_errors, rx_cdt_errors,
- rx_short_frame_errors;
- u32 fc_xmt_pause, fc_rcv_pause, fc_rcv_unsupported;
- u16 xmt_tco_frames, rcv_tco_frames;
- u32 complete;
-};
-
-struct mem {
- struct {
- u32 signature;
- u32 result;
- } selftest;
- struct stats stats;
- u8 dump_buf[596];
-};
-
-struct param_range {
- u32 min;
- u32 max;
- u32 count;
-};
-
-struct params {
- struct param_range rfds;
- struct param_range cbs;
-};
-
-struct nic {
- /* Begin: frequently used values: keep adjacent for cache effect */
- u32 msg_enable ____cacheline_aligned;
- struct net_device *netdev;
- struct pci_dev *pdev;
-
- struct rx *rxs ____cacheline_aligned;
- struct rx *rx_to_use;
- struct rx *rx_to_clean;
- struct rfd blank_rfd;
- enum ru_state ru_running;
-
- spinlock_t cb_lock ____cacheline_aligned;
- spinlock_t cmd_lock;
- struct csr __iomem *csr;
- enum scb_cmd_lo cuc_cmd;
- unsigned int cbs_avail;
- struct cb *cbs;
- struct cb *cb_to_use;
- struct cb *cb_to_send;
- struct cb *cb_to_clean;
- u16 tx_command;
- /* End: frequently used values: keep adjacent for cache effect */
-
- enum {
- ich = (1 << 0),
- promiscuous = (1 << 1),
- multicast_all = (1 << 2),
- wol_magic = (1 << 3),
- ich_10h_workaround = (1 << 4),
- } flags ____cacheline_aligned;
-
- enum mac mac;
- enum phy phy;
- struct params params;
- struct net_device_stats net_stats;
- struct timer_list watchdog;
- struct timer_list blink_timer;
- struct mii_if_info mii;
- struct work_struct tx_timeout_task;
- enum loopback loopback;
-
- struct mem *mem;
- dma_addr_t dma_addr;
-
- dma_addr_t cbs_dma_addr;
- u8 adaptive_ifs;
- u8 tx_threshold;
- u32 tx_frames;
- u32 tx_collisions;
- u32 tx_deferred;
- u32 tx_single_collisions;
- u32 tx_multiple_collisions;
- u32 tx_fc_pause;
- u32 tx_tco_frames;
-
- u32 rx_fc_pause;
- u32 rx_fc_unsupported;
- u32 rx_tco_frames;
- u32 rx_over_length_errors;
-
- u8 rev_id;
- u16 leds;
- u16 eeprom_wc;
- u16 eeprom[256];
- spinlock_t mdio_lock;
-
- u8 ethercat;
- ec_device_t *ecdev;
-};
-
-static inline void e100_write_flush(struct nic *nic)
-{
- /* Flush previous PCI writes through intermediate bridges
- * by doing a benign read */
- (void)readb(&nic->csr->scb.status);
-}
-
-static void e100_enable_irq(struct nic *nic)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&nic->cmd_lock, flags);
- writeb(irq_mask_none, &nic->csr->scb.cmd_hi);
- e100_write_flush(nic);
- spin_unlock_irqrestore(&nic->cmd_lock, flags);
-}
-
-static void e100_disable_irq(struct nic *nic)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&nic->cmd_lock, flags);
- writeb(irq_mask_all, &nic->csr->scb.cmd_hi);
- e100_write_flush(nic);
- spin_unlock_irqrestore(&nic->cmd_lock, flags);
-}
-
-static void e100_hw_reset(struct nic *nic)
-{
- /* Put CU and RU into idle with a selective reset to get
- * device off of PCI bus */
- writel(selective_reset, &nic->csr->port);
- e100_write_flush(nic); udelay(20);
-
- /* Now fully reset device */
- writel(software_reset, &nic->csr->port);
- e100_write_flush(nic); udelay(20);
-
- /* Mask off our interrupt line - it's unmasked after reset */
- e100_disable_irq(nic);
-}
-
-static int e100_self_test(struct nic *nic)
-{
- u32 dma_addr = nic->dma_addr + offsetof(struct mem, selftest);
-
- /* Passing the self-test is a pretty good indication
- * that the device can DMA to/from host memory */
-
- nic->mem->selftest.signature = 0;
- nic->mem->selftest.result = 0xFFFFFFFF;
-
- writel(selftest | dma_addr, &nic->csr->port);
- e100_write_flush(nic);
- /* Wait 10 msec for self-test to complete */
- msleep(10);
-
- /* Interrupts are enabled after self-test */
- e100_disable_irq(nic);
-
- /* Check results of self-test */
- if(nic->mem->selftest.result != 0) {
- DPRINTK(HW, ERR, "Self-test failed: result=0x%08X\n",
- nic->mem->selftest.result);
- return -ETIMEDOUT;
- }
- if(nic->mem->selftest.signature == 0) {
- DPRINTK(HW, ERR, "Self-test failed: timed out\n");
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-static void e100_eeprom_write(struct nic *nic, u16 addr_len, u16 addr, u16 data)
-{
- u32 cmd_addr_data[3];
- u8 ctrl;
- int i, j;
-
- /* Three cmds: write/erase enable, write data, write/erase disable */
- cmd_addr_data[0] = op_ewen << (addr_len - 2);
- cmd_addr_data[1] = (((op_write << addr_len) | addr) << 16) |
- cpu_to_le16(data);
- cmd_addr_data[2] = op_ewds << (addr_len - 2);
-
- /* Bit-bang cmds to write word to eeprom */
- for(j = 0; j < 3; j++) {
-
- /* Chip select */
- writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- for(i = 31; i >= 0; i--) {
- ctrl = (cmd_addr_data[j] & (1 << i)) ?
- eecs | eedi : eecs;
- writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
- }
- /* Wait 10 msec for cmd to complete */
- msleep(10);
-
- /* Chip deselect */
- writeb(0, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
- }
-};
-
-/* General technique stolen from the eepro100 driver - very clever */
-static u16 e100_eeprom_read(struct nic *nic, u16 *addr_len, u16 addr)
-{
- u32 cmd_addr_data;
- u16 data = 0;
- u8 ctrl;
- int i;
-
- cmd_addr_data = ((op_read << *addr_len) | addr) << 16;
-
- /* Chip select */
- writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- /* Bit-bang to read word from eeprom */
- for(i = 31; i >= 0; i--) {
- ctrl = (cmd_addr_data & (1 << i)) ? eecs | eedi : eecs;
- writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- /* Eeprom drives a dummy zero to EEDO after receiving
- * complete address. Use this to adjust addr_len. */
- ctrl = readb(&nic->csr->eeprom_ctrl_lo);
- if(!(ctrl & eedo) && i > 16) {
- *addr_len -= (i - 16);
- i = 17;
- }
-
- data = (data << 1) | (ctrl & eedo ? 1 : 0);
- }
-
- /* Chip deselect */
- writeb(0, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- return le16_to_cpu(data);
-};
-
-/* Load entire EEPROM image into driver cache and validate checksum */
-static int e100_eeprom_load(struct nic *nic)
-{
- u16 addr, addr_len = 8, checksum = 0;
-
- /* Try reading with an 8-bit addr len to discover actual addr len */
- e100_eeprom_read(nic, &addr_len, 0);
- nic->eeprom_wc = 1 << addr_len;
-
- for(addr = 0; addr < nic->eeprom_wc; addr++) {
- nic->eeprom[addr] = e100_eeprom_read(nic, &addr_len, addr);
- if(addr < nic->eeprom_wc - 1)
- checksum += cpu_to_le16(nic->eeprom[addr]);
- }
-
- /* The checksum, stored in the last word, is calculated such that
- * the sum of words should be 0xBABA */
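- /* For example (hypothetical values): if the other words sum to
- * 0x1234, the stored checksum word is 0xBABA - 0x1234 = 0xA886. */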
- checksum = le16_to_cpu(0xBABA - checksum);
- if(checksum != nic->eeprom[nic->eeprom_wc - 1]) {
- DPRINTK(PROBE, ERR, "EEPROM corrupted\n");
- if (!eeprom_bad_csum_allow)
- return -EAGAIN;
- }
-
- return 0;
-}
-
-/* Save (portion of) driver EEPROM cache to device and update checksum */
-static int e100_eeprom_save(struct nic *nic, u16 start, u16 count)
-{
- u16 addr, addr_len = 8, checksum = 0;
-
- /* Try reading with an 8-bit addr len to discover actual addr len */
- e100_eeprom_read(nic, &addr_len, 0);
- nic->eeprom_wc = 1 << addr_len;
-
- if(start + count >= nic->eeprom_wc)
- return -EINVAL;
-
- for(addr = start; addr < start + count; addr++)
- e100_eeprom_write(nic, addr_len, addr, nic->eeprom[addr]);
-
- /* The checksum, stored in the last word, is calculated such that
- * the sum of words should be 0xBABA */
- for(addr = 0; addr < nic->eeprom_wc - 1; addr++)
- checksum += cpu_to_le16(nic->eeprom[addr]);
- nic->eeprom[nic->eeprom_wc - 1] = le16_to_cpu(0xBABA - checksum);
- e100_eeprom_write(nic, addr_len, nic->eeprom_wc - 1,
- nic->eeprom[nic->eeprom_wc - 1]);
-
- return 0;
-}
-
-#define E100_WAIT_SCB_TIMEOUT 20000 /* we might have to wait 100ms!!! */
-#define E100_WAIT_SCB_FAST 20 /* delay like the old code */
-static int e100_exec_cmd(struct nic *nic, u8 cmd, dma_addr_t dma_addr)
-{
- unsigned long flags = 0;
- unsigned int i;
- int err = 0;
-
- if (!nic->ethercat)
- spin_lock_irqsave(&nic->cmd_lock, flags);
-
- /* Previous command is accepted when SCB clears */
- for(i = 0; i < E100_WAIT_SCB_TIMEOUT; i++) {
- if(likely(!readb(&nic->csr->scb.cmd_lo)))
- break;
- cpu_relax();
- if(unlikely(i > E100_WAIT_SCB_FAST))
- udelay(5);
- }
- if(unlikely(i == E100_WAIT_SCB_TIMEOUT)) {
- err = -EAGAIN;
- goto err_unlock;
- }
-
- if(unlikely(cmd != cuc_resume))
- writel(dma_addr, &nic->csr->scb.gen_ptr);
- writeb(cmd, &nic->csr->scb.cmd_lo);
-
-err_unlock:
- if (!nic->ethercat)
- spin_unlock_irqrestore(&nic->cmd_lock, flags);
-
- return err;
-}
-
-static int e100_exec_cb(struct nic *nic, struct sk_buff *skb,
- void (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *))
-{
- struct cb *cb;
- unsigned long flags = 0;
- int err = 0;
-
- if (!nic->ethercat)
- spin_lock_irqsave(&nic->cb_lock, flags);
-
- if(unlikely(!nic->cbs_avail)) {
- err = -ENOMEM;
- goto err_unlock;
- }
-
- cb = nic->cb_to_use;
- nic->cb_to_use = cb->next;
- nic->cbs_avail--;
- cb->skb = skb;
-
- if(unlikely(!nic->cbs_avail))
- err = -ENOSPC;
-
- cb_prepare(nic, cb, skb);
-
- /* Order is important otherwise we'll be in a race with h/w:
- * set S-bit in current first, then clear S-bit in previous. */
- cb->command |= cpu_to_le16(cb_s);
- wmb();
- cb->prev->command &= cpu_to_le16(~cb_s);
-
- while(nic->cb_to_send != nic->cb_to_use) {
- if(unlikely(e100_exec_cmd(nic, nic->cuc_cmd,
- nic->cb_to_send->dma_addr))) {
- /* Ok, here's where things get sticky. It's
- * possible that we can't schedule the command
- * because the controller is too busy, so
- * let's just queue the command and try again
- * when another command is scheduled. */
- if(err == -ENOSPC) {
- //request a reset
- if (!nic->ethercat)
- schedule_work(&nic->tx_timeout_task);
- }
- break;
- } else {
- nic->cuc_cmd = cuc_resume;
- nic->cb_to_send = nic->cb_to_send->next;
- }
- }
-
-err_unlock:
- if (!nic->ethercat)
- spin_unlock_irqrestore(&nic->cb_lock, flags);
-
- return err;
-}
-
-static u16 mdio_ctrl(struct nic *nic, u32 addr, u32 dir, u32 reg, u16 data)
-{
- u32 data_out = 0;
- unsigned int i;
- unsigned long flags;
-
-
- /*
- * Stratus87247: we shouldn't be writing the MDI control
- * register until the Ready bit shows True. Also, since
- * manipulation of the MDI control registers is a multi-step
- * procedure it should be done under lock.
- */
- spin_lock_irqsave(&nic->mdio_lock, flags);
- for (i = 100; i; --i) {
- if (readl(&nic->csr->mdi_ctrl) & mdi_ready)
- break;
- udelay(20);
- }
- if (unlikely(!i)) {
- printk("e100.mdio_ctrl(%s) won't go Ready\n",
- nic->netdev->name );
- spin_unlock_irqrestore(&nic->mdio_lock, flags);
- return 0; /* No way to indicate timeout error */
- }
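- /* MDI control register layout, as implied by the enum mdi_ctrl values
- * above: data in bits 15:0, PHY register in bits 20:16, PHY address in
- * bits 25:21, read/write opcode in bits 27:26, ready flag in bit 28. */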
- writel((reg << 16) | (addr << 21) | dir | data, &nic->csr->mdi_ctrl);
-
- for (i = 0; i < 100; i++) {
- udelay(20);
- if ((data_out = readl(&nic->csr->mdi_ctrl)) & mdi_ready)
- break;
- }
- spin_unlock_irqrestore(&nic->mdio_lock, flags);
- DPRINTK(HW, DEBUG,
- "%s:addr=%d, reg=%d, data_in=0x%04X, data_out=0x%04X\n",
- dir == mdi_read ? "READ" : "WRITE", addr, reg, data, data_out);
- return (u16)data_out;
-}
-
-static int mdio_read(struct net_device *netdev, int addr, int reg)
-{
- return mdio_ctrl(netdev_priv(netdev), addr, mdi_read, reg, 0);
-}
-
-static void mdio_write(struct net_device *netdev, int addr, int reg, int data)
-{
- mdio_ctrl(netdev_priv(netdev), addr, mdi_write, reg, data);
-}
-
-static void e100_get_defaults(struct nic *nic)
-{
- struct param_range rfds = { .min = 16, .max = 256, .count = 256 };
- struct param_range cbs = { .min = 64, .max = 256, .count = 128 };
-
- pci_read_config_byte(nic->pdev, PCI_REVISION_ID, &nic->rev_id);
- /* MAC type is encoded as rev ID; exception: ICH is treated as 82559 */
- nic->mac = (nic->flags & ich) ? mac_82559_D101M : nic->rev_id;
- if(nic->mac == mac_unknown)
- nic->mac = mac_82557_D100_A;
-
- nic->params.rfds = rfds;
- nic->params.cbs = cbs;
-
- /* Quadwords to DMA into FIFO before starting frame transmit */
- nic->tx_threshold = 0xE0;
-
- /* no interrupt for every tx completion, delay = 256us if not 557*/
- nic->tx_command = cpu_to_le16(cb_tx | cb_tx_sf |
- ((nic->mac >= mac_82558_D101_A4) ? cb_cid : cb_i));
-
- /* Template for a freshly allocated RFD */
- nic->blank_rfd.command = cpu_to_le16(cb_el);
- nic->blank_rfd.rbd = 0xFFFFFFFF;
- nic->blank_rfd.size = cpu_to_le16(VLAN_ETH_FRAME_LEN);
-
- /* MII setup */
- nic->mii.phy_id_mask = 0x1F;
- nic->mii.reg_num_mask = 0x1F;
- nic->mii.dev = nic->netdev;
- nic->mii.mdio_read = mdio_read;
- nic->mii.mdio_write = mdio_write;
-}
-
-static void e100_configure(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
- struct config *config = &cb->u.config;
- u8 *c = (u8 *)config;
-
- cb->command = cpu_to_le16(cb_config);
-
- memset(config, 0, sizeof(struct config));
-
- config->byte_count = 0x16; /* bytes in this struct */
- config->rx_fifo_limit = 0x8; /* bytes in FIFO before DMA */
- config->direct_rx_dma = 0x1; /* reserved */
- config->standard_tcb = 0x1; /* 1=standard, 0=extended */
- config->standard_stat_counter = 0x1; /* 1=standard, 0=extended */
- config->rx_discard_short_frames = 0x1; /* 1=discard, 0=pass */
- config->tx_underrun_retry = 0x3; /* # of underrun retries */
- config->mii_mode = 0x1; /* 1=MII mode, 0=503 mode */
- config->pad10 = 0x6;
- config->no_source_addr_insertion = 0x1; /* 1=no, 0=yes */
- config->preamble_length = 0x2; /* 0=1, 1=3, 2=7, 3=15 bytes */
- config->ifs = 0x6; /* x16 = inter frame spacing */
- config->ip_addr_hi = 0xF2; /* ARP IP filter - not used */
- config->pad15_1 = 0x1;
- config->pad15_2 = 0x1;
- config->crs_or_cdt = 0x0; /* 0=CRS only, 1=CRS or CDT */
- config->fc_delay_hi = 0x40; /* time delay for fc frame */
- config->tx_padding = 0x1; /* 1=pad short frames */
- config->fc_priority_threshold = 0x7; /* 7=priority fc disabled */
- config->pad18 = 0x1;
- config->full_duplex_pin = 0x1; /* 1=examine FDX# pin */
- config->pad20_1 = 0x1F;
- config->fc_priority_location = 0x1; /* 1=byte#31, 0=byte#19 */
- config->pad21_1 = 0x5;
-
- config->adaptive_ifs = nic->adaptive_ifs;
- config->loopback = nic->loopback;
-
- if(nic->mii.force_media && nic->mii.full_duplex)
- config->full_duplex_force = 0x1; /* 1=force, 0=auto */
-
- if(nic->flags & promiscuous || nic->loopback) {
- config->rx_save_bad_frames = 0x1; /* 1=save, 0=discard */
- config->rx_discard_short_frames = 0x0; /* 1=discard, 0=save */
- config->promiscuous_mode = 0x1; /* 1=on, 0=off */
- }
-
- if(nic->flags & multicast_all)
- config->multicast_all = 0x1; /* 1=accept, 0=no */
-
- /* disable WoL when up */
- if (nic->ethercat ||
- (netif_running(nic->netdev) || !(nic->flags & wol_magic)))
- config->magic_packet_disable = 0x1; /* 1=off, 0=on */
-
- if(nic->mac >= mac_82558_D101_A4) {
- config->fc_disable = 0x1; /* 1=Tx fc off, 0=Tx fc on */
- config->mwi_enable = 0x1; /* 1=enable, 0=disable */
- config->standard_tcb = 0x0; /* 1=standard, 0=extended */
- config->rx_long_ok = 0x1; /* 1=VLANs ok, 0=standard */
- if(nic->mac >= mac_82559_D101M)
- config->tno_intr = 0x1; /* TCO stats enable */
- else
- config->standard_stat_counter = 0x0;
- }
-
- DPRINTK(HW, DEBUG, "[00-07]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- c[0], c[1], c[2], c[3], c[4], c[5], c[6], c[7]);
- DPRINTK(HW, DEBUG, "[08-15]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- c[8], c[9], c[10], c[11], c[12], c[13], c[14], c[15]);
- DPRINTK(HW, DEBUG, "[16-23]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- c[16], c[17], c[18], c[19], c[20], c[21], c[22], c[23]);
-}
-
-/********************************************************/
-/* Micro code for 8086:1229 Rev 8 */
-/********************************************************/
-
-/* Parameter values for the D101M B-step */
-#define D101M_CPUSAVER_TIMER_DWORD 78
-#define D101M_CPUSAVER_BUNDLE_DWORD 65
-#define D101M_CPUSAVER_MIN_SIZE_DWORD 126
-
-#define D101M_B_RCVBUNDLE_UCODE \
-{\
-0x00550215, 0xFFFF0437, 0xFFFFFFFF, 0x06A70789, 0xFFFFFFFF, 0x0558FFFF, \
-0x000C0001, 0x00101312, 0x000C0008, 0x00380216, \
-0x0010009C, 0x00204056, 0x002380CC, 0x00380056, \
-0x0010009C, 0x00244C0B, 0x00000800, 0x00124818, \
-0x00380438, 0x00000000, 0x00140000, 0x00380555, \
-0x00308000, 0x00100662, 0x00100561, 0x000E0408, \
-0x00134861, 0x000C0002, 0x00103093, 0x00308000, \
-0x00100624, 0x00100561, 0x000E0408, 0x00100861, \
-0x000C007E, 0x00222C21, 0x000C0002, 0x00103093, \
-0x00380C7A, 0x00080000, 0x00103090, 0x00380C7A, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x00244C2D, 0x00010004, 0x00041000, \
-0x003A0437, 0x00044010, 0x0038078A, 0x00000000, \
-0x00100099, 0x00206C7A, 0x0010009C, 0x00244C48, \
-0x00130824, 0x000C0001, 0x00101213, 0x00260C75, \
-0x00041000, 0x00010004, 0x00130826, 0x000C0006, \
-0x002206A8, 0x0013C926, 0x00101313, 0x003806A8, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00080600, 0x00101B10, 0x00050004, 0x00100826, \
-0x00101210, 0x00380C34, 0x00000000, 0x00000000, \
-0x0021155B, 0x00100099, 0x00206559, 0x0010009C, \
-0x00244559, 0x00130836, 0x000C0000, 0x00220C62, \
-0x000C0001, 0x00101B13, 0x00229C0E, 0x00210C0E, \
-0x00226C0E, 0x00216C0E, 0x0022FC0E, 0x00215C0E, \
-0x00214C0E, 0x00380555, 0x00010004, 0x00041000, \
-0x00278C67, 0x00040800, 0x00018100, 0x003A0437, \
-0x00130826, 0x000C0001, 0x00220559, 0x00101313, \
-0x00380559, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00130831, 0x0010090B, 0x00124813, \
-0x000CFF80, 0x002606AB, 0x00041000, 0x00010004, \
-0x003806A8, 0x00000000, 0x00000000, 0x00000000, \
-}
-
-/********************************************************/
-/* Micro code for 8086:1229 Rev 9 */
-/********************************************************/
-
-/* Parameter values for the D101S */
-#define D101S_CPUSAVER_TIMER_DWORD 78
-#define D101S_CPUSAVER_BUNDLE_DWORD 67
-#define D101S_CPUSAVER_MIN_SIZE_DWORD 128
-
-#define D101S_RCVBUNDLE_UCODE \
-{\
-0x00550242, 0xFFFF047E, 0xFFFFFFFF, 0x06FF0818, 0xFFFFFFFF, 0x05A6FFFF, \
-0x000C0001, 0x00101312, 0x000C0008, 0x00380243, \
-0x0010009C, 0x00204056, 0x002380D0, 0x00380056, \
-0x0010009C, 0x00244F8B, 0x00000800, 0x00124818, \
-0x0038047F, 0x00000000, 0x00140000, 0x003805A3, \
-0x00308000, 0x00100610, 0x00100561, 0x000E0408, \
-0x00134861, 0x000C0002, 0x00103093, 0x00308000, \
-0x00100624, 0x00100561, 0x000E0408, 0x00100861, \
-0x000C007E, 0x00222FA1, 0x000C0002, 0x00103093, \
-0x00380F90, 0x00080000, 0x00103090, 0x00380F90, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x00244FAD, 0x00010004, 0x00041000, \
-0x003A047E, 0x00044010, 0x00380819, 0x00000000, \
-0x00100099, 0x00206FFD, 0x0010009A, 0x0020AFFD, \
-0x0010009C, 0x00244FC8, 0x00130824, 0x000C0001, \
-0x00101213, 0x00260FF7, 0x00041000, 0x00010004, \
-0x00130826, 0x000C0006, 0x00220700, 0x0013C926, \
-0x00101313, 0x00380700, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00080600, 0x00101B10, 0x00050004, 0x00100826, \
-0x00101210, 0x00380FB6, 0x00000000, 0x00000000, \
-0x002115A9, 0x00100099, 0x002065A7, 0x0010009A, \
-0x0020A5A7, 0x0010009C, 0x002445A7, 0x00130836, \
-0x000C0000, 0x00220FE4, 0x000C0001, 0x00101B13, \
-0x00229F8E, 0x00210F8E, 0x00226F8E, 0x00216F8E, \
-0x0022FF8E, 0x00215F8E, 0x00214F8E, 0x003805A3, \
-0x00010004, 0x00041000, 0x00278FE9, 0x00040800, \
-0x00018100, 0x003A047E, 0x00130826, 0x000C0001, \
-0x002205A7, 0x00101313, 0x003805A7, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00130831, \
-0x0010090B, 0x00124813, 0x000CFF80, 0x00260703, \
-0x00041000, 0x00010004, 0x00380700 \
-}
-
-/********************************************************/
-/* Micro code for the 8086:1229 Rev F/10 */
-/********************************************************/
-
-/* Parameter values for the D102 E-step */
-#define D102_E_CPUSAVER_TIMER_DWORD 42
-#define D102_E_CPUSAVER_BUNDLE_DWORD 54
-#define D102_E_CPUSAVER_MIN_SIZE_DWORD 46
-
-#define D102_E_RCVBUNDLE_UCODE \
-{\
-0x007D028F, 0x0E4204F9, 0x14ED0C85, 0x14FA14E9, 0x0EF70E36, 0x1FFF1FFF, \
-0x00E014B9, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014BD, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014D5, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014C1, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014C8, 0x00000000, 0x00000000, 0x00000000, \
-0x00200600, 0x00E014EE, 0x00000000, 0x00000000, \
-0x0030FF80, 0x00940E46, 0x00038200, 0x00102000, \
-0x00E00E43, 0x00000000, 0x00000000, 0x00000000, \
-0x00300006, 0x00E014FB, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00906E41, 0x00800E3C, 0x00E00E39, 0x00000000, \
-0x00906EFD, 0x00900EFD, 0x00E00EF8, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-}
-
-static void e100_setup_ucode(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
-/* *INDENT-OFF* */
- static struct {
- u32 ucode[UCODE_SIZE + 1];
- u8 mac;
- u8 timer_dword;
- u8 bundle_dword;
- u8 min_size_dword;
- } ucode_opts[] = {
- { D101M_B_RCVBUNDLE_UCODE,
- mac_82559_D101M,
- D101M_CPUSAVER_TIMER_DWORD,
- D101M_CPUSAVER_BUNDLE_DWORD,
- D101M_CPUSAVER_MIN_SIZE_DWORD },
- { D101S_RCVBUNDLE_UCODE,
- mac_82559_D101S,
- D101S_CPUSAVER_TIMER_DWORD,
- D101S_CPUSAVER_BUNDLE_DWORD,
- D101S_CPUSAVER_MIN_SIZE_DWORD },
- { D102_E_RCVBUNDLE_UCODE,
- mac_82551_F,
- D102_E_CPUSAVER_TIMER_DWORD,
- D102_E_CPUSAVER_BUNDLE_DWORD,
- D102_E_CPUSAVER_MIN_SIZE_DWORD },
- { D102_E_RCVBUNDLE_UCODE,
- mac_82551_10,
- D102_E_CPUSAVER_TIMER_DWORD,
- D102_E_CPUSAVER_BUNDLE_DWORD,
- D102_E_CPUSAVER_MIN_SIZE_DWORD },
- { {0}, 0, 0, 0, 0}
- }, *opts;
-/* *INDENT-ON* */
-
-/*************************************************************************
-* CPUSaver parameters
-*
-* All CPUSaver parameters are 16-bit literals that are part of a
-* "move immediate value" instruction. By changing the value of
-* the literal in the instruction before the code is loaded, the
-* driver can change the algorithm.
-*
-* INTDELAY - This loads the dead-man timer with its initial value.
-* When this timer expires the interrupt is asserted, and the
-* timer is reset each time a new packet is received. (see
-* BUNDLEMAX below to set the limit on number of chained packets)
-* The current default is 0x600 or 1536. Experiments show that
-* the value should probably stay within the 0x200 - 0x1000 range.
-*
-* BUNDLEMAX -
-* This sets the maximum number of frames that will be bundled. In
-* some situations, such as the TCP windowing algorithm, it may be
-* better to limit the growth of the bundle size than let it go as
-* high as it can, because that could cause too much added latency.
-* The default is six, because this is the number of packets in the
-* default TCP window size. A value of 1 would make CPUSaver indicate
-* an interrupt for every frame received. If you do not want to put
-* a limit on the bundle size, set this value to xFFFF.
-*
-* BUNDLESMALL -
-* This contains a bit-mask describing the minimum size frame that
-* will be bundled. The default masks the lower 7 bits, which means
-* that any frame less than 128 bytes in length will not be bundled,
-* but will instead immediately generate an interrupt. This does
-* not affect the current bundle in any way. Any frame that is 128
-* bytes or larger will be bundled normally. This feature is meant
-* to provide immediate indication of ACK frames in a TCP environment.
-* Customers were seeing poor performance when a machine with CPUSaver
-* enabled was sending but not receiving. The delay introduced when
-* the ACKs were received was enough to reduce total throughput, because
-* the sender would sit idle until the ACK was finally seen.
-*
-* The current default is 0xFF80, which masks out the lower 7 bits.
-* This means that any frame which is x7F (127) bytes or smaller
-* will cause an immediate interrupt. Because this value must be a
-* bit mask, there are only a few valid values that can be used. To
-* turn this feature off, the driver can write the value xFFFF to the
-* lower word of this instruction (in the same way that the other
-* parameters are used). Likewise, a value of 0xF800 (2047) would
-* cause an interrupt to be generated for every frame, because all
-* standard Ethernet frames are <= 2047 bytes in length.
-*************************************************************************/
-
-/* if you wish to disable the ucode functionality, while maintaining the
- * workarounds it provides, set the following defines to:
- * BUNDLESMALL 0
- * BUNDLEMAX 1
- * INTDELAY 1
- */
-#define BUNDLESMALL 1
-#define BUNDLEMAX (u16)6
-#define INTDELAY (u16)1536 /* 0x600 */
-
- /* do not load u-code for ICH devices */
- if (nic->flags & ich)
- goto noloaducode;
-
- /* Search for ucode match against h/w rev_id */
- for (opts = ucode_opts; opts->mac; opts++) {
- int i;
- u32 *ucode = opts->ucode;
- if (nic->mac != opts->mac)
- continue;
-
- /* Insert user-tunable settings */
- ucode[opts->timer_dword] &= 0xFFFF0000;
- ucode[opts->timer_dword] |= INTDELAY;
- ucode[opts->bundle_dword] &= 0xFFFF0000;
- ucode[opts->bundle_dword] |= BUNDLEMAX;
- ucode[opts->min_size_dword] &= 0xFFFF0000;
- ucode[opts->min_size_dword] |= (BUNDLESMALL) ? 0xFFFF : 0xFF80;
-
- for (i = 0; i < UCODE_SIZE; i++)
- cb->u.ucode[i] = cpu_to_le32(ucode[i]);
- cb->command = cpu_to_le16(cb_ucode | cb_el);
- return;
- }
-
-noloaducode:
- cb->command = cpu_to_le16(cb_nop | cb_el);
-}
-
-static inline int e100_exec_cb_wait(struct nic *nic, struct sk_buff *skb,
- void (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *))
-{
- int err = 0, counter = 50;
- struct cb *cb = nic->cb_to_clean;
-
- if ((err = e100_exec_cb(nic, NULL, e100_setup_ucode)))
- DPRINTK(PROBE,ERR, "ucode cmd failed with error %d\n", err);
-
- /* must restart cuc */
- nic->cuc_cmd = cuc_start;
-
- /* wait for completion */
- e100_write_flush(nic);
- udelay(10);
-
- /* wait for possibly (ouch) 500ms */
- while (!(cb->status & cpu_to_le16(cb_complete))) {
- msleep(10);
- if (!--counter) break;
- }
-
- /* ack any interrupts, something could have been set */
- writeb(~0, &nic->csr->scb.stat_ack);
-
- /* if the command failed, or is not OK, notify and return */
- if (!counter || !(cb->status & cpu_to_le16(cb_ok))) {
- DPRINTK(PROBE,ERR, "ucode load failed\n");
- err = -EPERM;
- }
-
- return err;
-}
-
-static void e100_setup_iaaddr(struct nic *nic, struct cb *cb,
- struct sk_buff *skb)
-{
- cb->command = cpu_to_le16(cb_iaaddr);
- memcpy(cb->u.iaaddr, nic->netdev->dev_addr, ETH_ALEN);
-}
-
-static void e100_dump(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
- cb->command = cpu_to_le16(cb_dump);
- cb->u.dump_buffer_addr = cpu_to_le32(nic->dma_addr +
- offsetof(struct mem, dump_buf));
-}
-
-#define NCONFIG_AUTO_SWITCH 0x0080
-#define MII_NSC_CONG MII_RESV1
-#define NSC_CONG_ENABLE 0x0100
-#define NSC_CONG_TXREADY 0x0400
-#define ADVERTISE_FC_SUPPORTED 0x0400
-static int e100_phy_init(struct nic *nic)
-{
- struct net_device *netdev = nic->netdev;
- u32 addr;
- u16 bmcr, stat, id_lo, id_hi, cong;
-
- /* Discover phy addr by searching addrs in order {1,0,2,..., 31} */
- for(addr = 0; addr < 32; addr++) {
- nic->mii.phy_id = (addr == 0) ? 1 : (addr == 1) ? 0 : addr;
- bmcr = mdio_read(netdev, nic->mii.phy_id, MII_BMCR);
- stat = mdio_read(netdev, nic->mii.phy_id, MII_BMSR);
- stat = mdio_read(netdev, nic->mii.phy_id, MII_BMSR);
- if(!((bmcr == 0xFFFF) || ((stat == 0) && (bmcr == 0))))
- break;
- }
- DPRINTK(HW, DEBUG, "phy_addr = %d\n", nic->mii.phy_id);
- if(addr == 32)
- return -EAGAIN;
-
- /* Select the phy and isolate the rest */
- for(addr = 0; addr < 32; addr++) {
- if(addr != nic->mii.phy_id) {
- mdio_write(netdev, addr, MII_BMCR, BMCR_ISOLATE);
- } else {
- bmcr = mdio_read(netdev, addr, MII_BMCR);
- mdio_write(netdev, addr, MII_BMCR,
- bmcr & ~BMCR_ISOLATE);
- }
- }
-
- /* Get phy ID */
- id_lo = mdio_read(netdev, nic->mii.phy_id, MII_PHYSID1);
- id_hi = mdio_read(netdev, nic->mii.phy_id, MII_PHYSID2);
- nic->phy = (u32)id_hi << 16 | (u32)id_lo;
- DPRINTK(HW, DEBUG, "phy ID = 0x%08X\n", nic->phy);
-
- /* Handle National tx phys */
-#define NCS_PHY_MODEL_MASK 0xFFF0FFFF
- if((nic->phy & NCS_PHY_MODEL_MASK) == phy_nsc_tx) {
- /* Disable congestion control */
- cong = mdio_read(netdev, nic->mii.phy_id, MII_NSC_CONG);
- cong |= NSC_CONG_TXREADY;
- cong &= ~NSC_CONG_ENABLE;
- mdio_write(netdev, nic->mii.phy_id, MII_NSC_CONG, cong);
- }
-
- if((nic->mac >= mac_82550_D102) || ((nic->flags & ich) &&
- (mdio_read(netdev, nic->mii.phy_id, MII_TPISTATUS) & 0x8000))) {
- /* enable/disable MDI/MDI-X auto-switching.
- MDI/MDI-X auto-switching is disabled for 82551ER/QM chips */
- if((nic->mac == mac_82551_E) || (nic->mac == mac_82551_F) ||
- (nic->mac == mac_82551_10) || (nic->mii.force_media) ||
- !(nic->eeprom[eeprom_cnfg_mdix] & eeprom_mdix_enabled))
- mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG, 0);
- else
- mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG, NCONFIG_AUTO_SWITCH);
- }
-
- return 0;
-}
-
-static int e100_hw_init(struct nic *nic)
-{
- int err;
-
- e100_hw_reset(nic);
-
- DPRINTK(HW, ERR, "e100_hw_init\n");
- if(!in_interrupt() && (err = e100_self_test(nic)))
- return err;
-
- if((err = e100_phy_init(nic)))
- return err;
- if((err = e100_exec_cmd(nic, cuc_load_base, 0)))
- return err;
- if((err = e100_exec_cmd(nic, ruc_load_base, 0)))
- return err;
- if ((err = e100_exec_cb_wait(nic, NULL, e100_setup_ucode)))
- return err;
- if((err = e100_exec_cb(nic, NULL, e100_configure)))
- return err;
- if((err = e100_exec_cb(nic, NULL, e100_setup_iaaddr)))
- return err;
- if((err = e100_exec_cmd(nic, cuc_dump_addr,
- nic->dma_addr + offsetof(struct mem, stats))))
- return err;
- if((err = e100_exec_cmd(nic, cuc_dump_reset, 0)))
- return err;
-
- e100_disable_irq(nic);
-
- return 0;
-}
-
-static void e100_multi(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
- struct net_device *netdev = nic->netdev;
- struct dev_mc_list *list = netdev->mc_list;
- u16 i, count = min(netdev->mc_count, E100_MAX_MULTICAST_ADDRS);
-
- cb->command = cpu_to_le16(cb_multi);
- cb->u.multi.count = cpu_to_le16(count * ETH_ALEN);
- for(i = 0; list && i < count; i++, list = list->next)
- memcpy(&cb->u.multi.addr[i*ETH_ALEN], &list->dmi_addr,
- ETH_ALEN);
-}
-
-static void e100_set_multicast_list(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- DPRINTK(HW, DEBUG, "mc_count=%d, flags=0x%04X\n",
- netdev->mc_count, netdev->flags);
-
- if(netdev->flags & IFF_PROMISC)
- nic->flags |= promiscuous;
- else
- nic->flags &= ~promiscuous;
-
- if(netdev->flags & IFF_ALLMULTI ||
- netdev->mc_count > E100_MAX_MULTICAST_ADDRS)
- nic->flags |= multicast_all;
- else
- nic->flags &= ~multicast_all;
-
- e100_exec_cb(nic, NULL, e100_configure);
- e100_exec_cb(nic, NULL, e100_multi);
-}
-
-static void e100_update_stats(struct nic *nic)
-{
- struct net_device_stats *ns = &nic->net_stats;
- struct stats *s = &nic->mem->stats;
- u32 *complete = (nic->mac < mac_82558_D101_A4) ? &s->fc_xmt_pause :
- (nic->mac < mac_82559_D101M) ? (u32 *)&s->xmt_tco_frames :
- &s->complete;
-
- /* Device's stats reporting may take several microseconds to
- * complete, so we're always waiting for results of the
- * previous command. */
-
- if(*complete == le32_to_cpu(cuc_dump_reset_complete)) {
- *complete = 0;
- nic->tx_frames = le32_to_cpu(s->tx_good_frames);
- nic->tx_collisions = le32_to_cpu(s->tx_total_collisions);
- ns->tx_aborted_errors += le32_to_cpu(s->tx_max_collisions);
- ns->tx_window_errors += le32_to_cpu(s->tx_late_collisions);
- ns->tx_carrier_errors += le32_to_cpu(s->tx_lost_crs);
- ns->tx_fifo_errors += le32_to_cpu(s->tx_underruns);
- ns->collisions += nic->tx_collisions;
- ns->tx_errors += le32_to_cpu(s->tx_max_collisions) +
- le32_to_cpu(s->tx_lost_crs);
- ns->rx_length_errors += le32_to_cpu(s->rx_short_frame_errors) +
- nic->rx_over_length_errors;
- ns->rx_crc_errors += le32_to_cpu(s->rx_crc_errors);
- ns->rx_frame_errors += le32_to_cpu(s->rx_alignment_errors);
- ns->rx_over_errors += le32_to_cpu(s->rx_overrun_errors);
- ns->rx_fifo_errors += le32_to_cpu(s->rx_overrun_errors);
- ns->rx_missed_errors += le32_to_cpu(s->rx_resource_errors);
- ns->rx_errors += le32_to_cpu(s->rx_crc_errors) +
- le32_to_cpu(s->rx_alignment_errors) +
- le32_to_cpu(s->rx_short_frame_errors) +
- le32_to_cpu(s->rx_cdt_errors);
- nic->tx_deferred += le32_to_cpu(s->tx_deferred);
- nic->tx_single_collisions +=
- le32_to_cpu(s->tx_single_collisions);
- nic->tx_multiple_collisions +=
- le32_to_cpu(s->tx_multiple_collisions);
- if(nic->mac >= mac_82558_D101_A4) {
- nic->tx_fc_pause += le32_to_cpu(s->fc_xmt_pause);
- nic->rx_fc_pause += le32_to_cpu(s->fc_rcv_pause);
- nic->rx_fc_unsupported +=
- le32_to_cpu(s->fc_rcv_unsupported);
- if(nic->mac >= mac_82559_D101M) {
- nic->tx_tco_frames +=
- le16_to_cpu(s->xmt_tco_frames);
- nic->rx_tco_frames +=
- le16_to_cpu(s->rcv_tco_frames);
- }
- }
- }
-
-
- if(e100_exec_cmd(nic, cuc_dump_reset, 0))
- DPRINTK(TX_ERR, DEBUG, "exec cuc_dump_reset failed\n");
-}
-
-static void e100_adjust_adaptive_ifs(struct nic *nic, int speed, int duplex)
-{
- /* Adjust inter-frame-spacing (IFS) between two transmits if
- * we're getting collisions on a half-duplex connection. */
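- /* Roughly, mirroring the code below: once more than 100 (10 Mbps) or
- * 1000 (100 Mbps) frames have been sent, the IFS is raised in steps of
- * 5 (up to 60) while collisions exceed tx_frames/32; it is lowered
- * again in steps of 5 once fewer frames than that were sent. Any
- * change is pushed to hardware with a new configure command. */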
-
- if(duplex == DUPLEX_HALF) {
- u32 prev = nic->adaptive_ifs;
- u32 min_frames = (speed == SPEED_100) ? 1000 : 100;
-
- if((nic->tx_frames / 32 < nic->tx_collisions) &&
- (nic->tx_frames > min_frames)) {
- if(nic->adaptive_ifs < 60)
- nic->adaptive_ifs += 5;
- } else if (nic->tx_frames < min_frames) {
- if(nic->adaptive_ifs >= 5)
- nic->adaptive_ifs -= 5;
- }
- if(nic->adaptive_ifs != prev)
- e100_exec_cb(nic, NULL, e100_configure);
- }
-}
-
-static void e100_watchdog(unsigned long data)
-{
- struct nic *nic = (struct nic *)data;
- struct ethtool_cmd cmd;
-
- DPRINTK(TIMER, DEBUG, "right now = %ld\n", jiffies);
-
- /* mii library handles link maintenance tasks */
-
- if (nic->ethercat) {
- ecdev_set_link(nic->ecdev, mii_link_ok(&nic->mii) ? 1 : 0);
- goto finish;
- }
-
- mii_ethtool_gset(&nic->mii, &cmd);
-
- if(mii_link_ok(&nic->mii) && !netif_carrier_ok(nic->netdev)) {
- DPRINTK(LINK, INFO, "link up, %sMbps, %s-duplex\n",
- cmd.speed == SPEED_100 ? "100" : "10",
- cmd.duplex == DUPLEX_FULL ? "full" : "half");
- } else if(!mii_link_ok(&nic->mii) && netif_carrier_ok(nic->netdev)) {
- DPRINTK(LINK, INFO, "link down\n");
- }
-
- mii_check_link(&nic->mii);
-
- /* Software generated interrupt to recover from (rare) Rx
- * allocation failure.
- * Unfortunately we have to use a spinlock to not re-enable interrupts
- * accidentally, due to hardware that shares a register between the
- * interrupt mask bit and the SW Interrupt generation bit */
- spin_lock_irq(&nic->cmd_lock);
- writeb(readb(&nic->csr->scb.cmd_hi) | irq_sw_gen,&nic->csr->scb.cmd_hi);
- e100_write_flush(nic);
- spin_unlock_irq(&nic->cmd_lock);
-
- e100_update_stats(nic);
- e100_adjust_adaptive_ifs(nic, cmd.speed, cmd.duplex);
-
- if(nic->mac <= mac_82557_D100_C)
- /* Issue a multicast command to work around a 557 lockup */
- e100_set_multicast_list(nic->netdev);
-
- if(nic->flags & ich && cmd.speed==SPEED_10 && cmd.duplex==DUPLEX_HALF)
- /* Need SW workaround for ICH[x] 10Mbps/half duplex Tx hang. */
- nic->flags |= ich_10h_workaround;
- else
- nic->flags &= ~ich_10h_workaround;
-
-finish:
- mod_timer(&nic->watchdog, jiffies + E100_WATCHDOG_PERIOD);
-}
-
-static void e100_xmit_prepare(struct nic *nic, struct cb *cb,
- struct sk_buff *skb)
-{
- cb->command = nic->tx_command;
- /* interrupt every 16 packets regardless of delay */
- if((nic->cbs_avail & ~15) == nic->cbs_avail)
- cb->command |= cpu_to_le16(cb_i);
- cb->u.tcb.tbd_array = cb->dma_addr + offsetof(struct cb, u.tcb.tbd);
- cb->u.tcb.tcb_byte_count = 0;
- cb->u.tcb.threshold = nic->tx_threshold;
- cb->u.tcb.tbd_count = 1;
- cb->u.tcb.tbd.buf_addr = cpu_to_le32(pci_map_single(nic->pdev,
- skb->data, skb->len, PCI_DMA_TODEVICE));
- /* check for mapping failure? */
- cb->u.tcb.tbd.size = cpu_to_le16(skb->len);
-}
-
-static int e100_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- int err;
-
- if(nic->flags & ich_10h_workaround) {
- /* SW workaround for ICH[x] 10Mbps/half duplex Tx hang.
- Issue a NOP command followed by a 1us delay before
- issuing the Tx command. */
- if(e100_exec_cmd(nic, cuc_nop, 0))
- DPRINTK(TX_ERR, DEBUG, "exec cuc_nop failed\n");
- udelay(1);
- }
-
- err = e100_exec_cb(nic, skb, e100_xmit_prepare);
-
- switch(err) {
- case -ENOSPC:
- /* We queued the skb, but now we're out of space. */
- DPRINTK(TX_ERR, DEBUG, "No space for CB\n");
- if (!nic->ethercat)
- netif_stop_queue(netdev);
- break;
- case -ENOMEM:
- /* This is a hard error - log it. */
- DPRINTK(TX_ERR, DEBUG, "Out of Tx resources, returning skb\n");
- if (!nic->ethercat)
- netif_stop_queue(netdev);
- return 1;
- }
-
- netdev->trans_start = jiffies;
- return 0;
-}
-
-static int e100_tx_clean(struct nic *nic)
-{
- struct cb *cb;
- int tx_cleaned = 0;
-
- printk(KERN_DEBUG DRV_NAME " tx_clean(%X)\n", (unsigned) nic); // FIXME
-
- if (!nic->cb_to_clean) { // FIXME
- printk(KERN_WARNING DRV_NAME "cb_to_clean is NULL!\n");
- return 0;
- }
-
- if (!nic->ethercat)
- spin_lock(&nic->cb_lock);
-
- DPRINTK(TX_DONE, DEBUG, "cb->status = 0x%04X\n",
- nic->cb_to_clean->status);
-
- /* Clean CBs marked complete */
- for(cb = nic->cb_to_clean;
- cb->status & cpu_to_le16(cb_complete);
- cb = nic->cb_to_clean = cb->next) {
- if(likely(cb->skb != NULL)) {
- nic->net_stats.tx_packets++;
- nic->net_stats.tx_bytes += cb->skb->len;
-
- pci_unmap_single(nic->pdev,
- le32_to_cpu(cb->u.tcb.tbd.buf_addr),
- le16_to_cpu(cb->u.tcb.tbd.size),
- PCI_DMA_TODEVICE);
- if (!nic->ethercat)
- dev_kfree_skb_any(cb->skb);
- cb->skb = NULL;
- tx_cleaned = 1;
- }
- cb->status = 0;
- nic->cbs_avail++;
- }
-
- if (!nic->ethercat) {
- spin_unlock(&nic->cb_lock);
-
- /* Recover from running out of Tx resources in xmit_frame */
- if(unlikely(tx_cleaned && netif_queue_stopped(nic->netdev)))
- netif_wake_queue(nic->netdev);
- }
-
- return tx_cleaned;
-}
-
-static void e100_clean_cbs(struct nic *nic)
-{
- if(nic->cbs) {
- while(nic->cbs_avail != nic->params.cbs.count) {
- struct cb *cb = nic->cb_to_clean;
- if(cb->skb) {
- pci_unmap_single(nic->pdev,
- le32_to_cpu(cb->u.tcb.tbd.buf_addr),
- le16_to_cpu(cb->u.tcb.tbd.size),
- PCI_DMA_TODEVICE);
- dev_kfree_skb(cb->skb);
- }
- nic->cb_to_clean = nic->cb_to_clean->next;
- nic->cbs_avail++;
- }
- pci_free_consistent(nic->pdev,
- sizeof(struct cb) * nic->params.cbs.count,
- nic->cbs, nic->cbs_dma_addr);
- nic->cbs = NULL;
- nic->cbs_avail = 0;
- }
- nic->cuc_cmd = cuc_start;
- nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean =
- nic->cbs;
-}
-
-static int e100_alloc_cbs(struct nic *nic)
-{
- struct cb *cb;
- unsigned int i, count = nic->params.cbs.count;
-
- nic->cuc_cmd = cuc_start;
- nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean = NULL;
- nic->cbs_avail = 0;
-
- nic->cbs = pci_alloc_consistent(nic->pdev,
- sizeof(struct cb) * count, &nic->cbs_dma_addr);
- if(!nic->cbs)
- return -ENOMEM;
-
- for(cb = nic->cbs, i = 0; i < count; cb++, i++) {
- cb->next = (i + 1 < count) ? cb + 1 : nic->cbs;
- cb->prev = (i == 0) ? nic->cbs + count - 1 : cb - 1;
-
- cb->dma_addr = nic->cbs_dma_addr + i * sizeof(struct cb);
- cb->link = cpu_to_le32(nic->cbs_dma_addr +
- ((i+1) % count) * sizeof(struct cb));
- cb->skb = NULL;
- }
-
- nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean = nic->cbs;
- nic->cbs_avail = count;
-
- return 0;
-}
-
-static inline void e100_start_receiver(struct nic *nic, struct rx *rx)
-{
- if(!nic->rxs) return;
- if(RU_SUSPENDED != nic->ru_running) return;
-
- /* handle init time starts */
- if(!rx) rx = nic->rxs;
-
- /* (Re)start RU if suspended or idle and RFA is non-NULL */
- if(rx->skb) {
- e100_exec_cmd(nic, ruc_start, rx->dma_addr);
- nic->ru_running = RU_RUNNING;
- }
-}
-
-#define RFD_BUF_LEN (sizeof(struct rfd) + VLAN_ETH_FRAME_LEN)
-static int e100_rx_alloc_skb(struct nic *nic, struct rx *rx)
-{
- if(!(rx->skb = dev_alloc_skb(RFD_BUF_LEN + NET_IP_ALIGN)))
- return -ENOMEM;
-
- /* Align, init, and map the RFD. */
- rx->skb->dev = nic->netdev;
- skb_reserve(rx->skb, NET_IP_ALIGN);
- memcpy(rx->skb->data, &nic->blank_rfd, sizeof(struct rfd));
- rx->dma_addr = pci_map_single(nic->pdev, rx->skb->data,
- RFD_BUF_LEN, PCI_DMA_BIDIRECTIONAL);
-
- if(pci_dma_mapping_error(rx->dma_addr)) {
- dev_kfree_skb_any(rx->skb);
- rx->skb = NULL;
- rx->dma_addr = 0;
- return -ENOMEM;
- }
-
- /* Link the RFD to end of RFA by linking previous RFD to
- * this one, and clearing EL bit of previous. */
- if(rx->prev->skb) {
- struct rfd *prev_rfd = (struct rfd *)rx->prev->skb->data;
- put_unaligned(cpu_to_le32(rx->dma_addr),
- (u32 *)&prev_rfd->link);
- wmb();
- prev_rfd->command &= ~cpu_to_le16(cb_el);
- pci_dma_sync_single_for_device(nic->pdev, rx->prev->dma_addr,
- sizeof(struct rfd), PCI_DMA_TODEVICE);
- }
-
- return 0;
-}
-
-static int e100_rx_indicate(struct nic *nic, struct rx *rx,
- unsigned int *work_done, unsigned int work_to_do)
-{
- struct sk_buff *skb = rx->skb;
- struct rfd *rfd = (struct rfd *)skb->data;
- u16 rfd_status, actual_size;
-
- if(unlikely(work_done && *work_done >= work_to_do))
- return -EAGAIN;
-
- /* Need to sync before taking a peek at cb_complete bit */
- pci_dma_sync_single_for_cpu(nic->pdev, rx->dma_addr,
- sizeof(struct rfd), PCI_DMA_FROMDEVICE);
- rfd_status = le16_to_cpu(rfd->status);
-
- DPRINTK(RX_STATUS, DEBUG, "status=0x%04X\n", rfd_status);
-
- /* If data isn't ready, nothing to indicate */
- if(unlikely(!(rfd_status & cb_complete)))
- return -ENODATA;
-
- /* Get actual data size */
- actual_size = le16_to_cpu(rfd->actual_size) & 0x3FFF;
- if(unlikely(actual_size > RFD_BUF_LEN - sizeof(struct rfd)))
- actual_size = RFD_BUF_LEN - sizeof(struct rfd);
-
- /* Get data */
- pci_unmap_single(nic->pdev, rx->dma_addr,
- RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
-
- /* this allows for a fast restart without re-enabling interrupts */
- if(le16_to_cpu(rfd->command) & cb_el)
- nic->ru_running = RU_SUSPENDED;
-
- /* Pull off the RFD and put the actual data (minus eth hdr) */
- skb_reserve(skb, sizeof(struct rfd));
- skb_put(skb, actual_size);
- skb->protocol = eth_type_trans(skb, nic->netdev);
-
- if(unlikely(!(rfd_status & cb_ok))) {
- /* Don't indicate if hardware indicates errors */
- if (!nic->ethercat)
- dev_kfree_skb_any(skb);
- } else if(actual_size > ETH_DATA_LEN + VLAN_ETH_HLEN) {
- /* Don't indicate oversized frames */
- nic->rx_over_length_errors++;
- if (!nic->ethercat)
- dev_kfree_skb_any(skb);
- } else {
- nic->net_stats.rx_packets++;
- nic->net_stats.rx_bytes += actual_size;
- nic->netdev->last_rx = jiffies;
- if (!nic->ethercat)
- netif_receive_skb(skb);
- else {
- //ecdev_receive(e100_ec_dev, &rx_ring[ring_offset + 4], pkt_size);
- }
- if(work_done)
- (*work_done)++;
- }
-
- rx->skb = NULL;
-
- return 0;
-}
-
-static void e100_rx_clean(struct nic *nic, unsigned int *work_done,
- unsigned int work_to_do)
-{
- struct rx *rx;
- int restart_required = 0;
- struct rx *rx_to_start = NULL;
-
- /* are we already rnr? then pay attention!!! this ensures that
- * the state machine progression never allows a start with a
- * partially cleaned list, avoiding a race between hardware
- * and rx_to_clean when in NAPI mode */
- if(RU_SUSPENDED == nic->ru_running)
- restart_required = 1;
-
- /* Indicate newly arrived packets */
- for(rx = nic->rx_to_clean; rx->skb; rx = nic->rx_to_clean = rx->next) {
- int err = e100_rx_indicate(nic, rx, work_done, work_to_do);
- if(-EAGAIN == err) {
- /* hit quota so have more work to do, restart once
- * cleanup is complete */
- restart_required = 0;
- break;
- } else if(-ENODATA == err)
- break; /* No more to clean */
- }
-
- /* save our starting point as the place we'll restart the receiver */
- if(restart_required)
- rx_to_start = nic->rx_to_clean;
-
- /* Alloc new skbs to refill list */
- for(rx = nic->rx_to_use; !rx->skb; rx = nic->rx_to_use = rx->next) {
- if(unlikely(e100_rx_alloc_skb(nic, rx)))
- break; /* Better luck next time (see watchdog) */
- }
-
- if(restart_required) {
- // ack the rnr?
- writeb(stat_ack_rnr, &nic->csr->scb.stat_ack);
- e100_start_receiver(nic, rx_to_start);
- if(work_done)
- (*work_done)++;
- }
-}
-
-static void e100_rx_clean_list(struct nic *nic)
-{
- struct rx *rx;
- unsigned int i, count = nic->params.rfds.count;
-
- nic->ru_running = RU_UNINITIALIZED;
-
- if(nic->rxs) {
- for(rx = nic->rxs, i = 0; i < count; rx++, i++) {
- if(rx->skb) {
- pci_unmap_single(nic->pdev, rx->dma_addr,
- RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
- dev_kfree_skb(rx->skb); // FIXME
- }
- }
- kfree(nic->rxs);
- nic->rxs = NULL;
- }
-
- nic->rx_to_use = nic->rx_to_clean = NULL;
-}
-
-static int e100_rx_alloc_list(struct nic *nic)
-{
- struct rx *rx;
- unsigned int i, count = nic->params.rfds.count;
-
- nic->rx_to_use = nic->rx_to_clean = NULL;
- nic->ru_running = RU_UNINITIALIZED;
-
- if(!(nic->rxs = kmalloc(sizeof(struct rx) * count, GFP_ATOMIC)))
- return -ENOMEM;
- memset(nic->rxs, 0, sizeof(struct rx) * count);
-
- for(rx = nic->rxs, i = 0; i < count; rx++, i++) {
- rx->next = (i + 1 < count) ? rx + 1 : nic->rxs;
- rx->prev = (i == 0) ? nic->rxs + count - 1 : rx - 1;
- if(e100_rx_alloc_skb(nic, rx)) {
- e100_rx_clean_list(nic);
- return -ENOMEM;
- }
- }
-
- nic->rx_to_use = nic->rx_to_clean = nic->rxs;
- nic->ru_running = RU_SUSPENDED;
-
- return 0;
-}
-
-static irqreturn_t e100_intr(int irq, void *dev_id, struct pt_regs *regs)
-{
- struct net_device *netdev = dev_id;
- struct nic *nic = netdev_priv(netdev);
- u8 stat_ack = readb(&nic->csr->scb.stat_ack);
-
- DPRINTK(INTR, DEBUG, "stat_ack = 0x%02X\n", stat_ack);
-
- if(stat_ack == stat_ack_not_ours || /* Not our interrupt */
- stat_ack == stat_ack_not_present) /* Hardware is ejected */
- return IRQ_NONE;
-
- /* Ack interrupt(s) */
- writeb(stat_ack, &nic->csr->scb.stat_ack);
-
- /* We hit Receive No Resource (RNR); restart RU after cleaning */
- if(stat_ack & stat_ack_rnr)
- nic->ru_running = RU_SUSPENDED;
-
- if(!nic->ethercat && likely(netif_rx_schedule_prep(netdev))) {
- e100_disable_irq(nic);
- __netif_rx_schedule(netdev);
- }
-
- return IRQ_HANDLED;
-}
-
-void e100_ec_poll(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- static unsigned int cleaned = 0;
-
- cleaned += e100_tx_clean(nic);
-
- if (cleaned >= 1000) {
- printk(KERN_INFO DRV_NAME " %u frames sent.\n", cleaned);
- cleaned = 0;
- }
-}
-
-static int e100_poll(struct net_device *netdev, int *budget)
-{
- struct nic *nic = netdev_priv(netdev);
- unsigned int work_to_do = min(netdev->quota, *budget);
- unsigned int work_done = 0;
- int tx_cleaned;
-
- e100_rx_clean(nic, &work_done, work_to_do);
- tx_cleaned = e100_tx_clean(nic);
-
- /* If no Rx and Tx cleanup work was done, exit polling mode. */
- if(!nic->ethercat &&
- ((!tx_cleaned && (work_done == 0)) || !netif_running(netdev))) {
- netif_rx_complete(netdev);
- e100_enable_irq(nic);
- return 0;
- }
-
- *budget -= work_done;
- netdev->quota -= work_done;
-
- return 1;
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void e100_netpoll(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- if (nic->ethercat)
- return;
-
- e100_disable_irq(nic);
- e100_intr(nic->pdev->irq, netdev, NULL);
- e100_tx_clean(nic);
- e100_enable_irq(nic);
-}
-#endif
-
-static struct net_device_stats *e100_get_stats(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return &nic->net_stats;
-}
-
-static int e100_set_mac_address(struct net_device *netdev, void *p)
-{
- struct nic *nic = netdev_priv(netdev);
- struct sockaddr *addr = p;
-
- if (!is_valid_ether_addr(addr->sa_data))
- return -EADDRNOTAVAIL;
-
- memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
- e100_exec_cb(nic, NULL, e100_setup_iaaddr);
-
- return 0;
-}
-
-static int e100_change_mtu(struct net_device *netdev, int new_mtu)
-{
- if(new_mtu < ETH_ZLEN || new_mtu > ETH_DATA_LEN)
- return -EINVAL;
- netdev->mtu = new_mtu;
- return 0;
-}
-
-#ifdef CONFIG_PM
-static int e100_asf(struct nic *nic)
-{
- /* ASF can be enabled from eeprom */
- return((nic->pdev->device >= 0x1050) && (nic->pdev->device <= 0x1057) &&
- (nic->eeprom[eeprom_config_asf] & eeprom_asf) &&
- !(nic->eeprom[eeprom_config_asf] & eeprom_gcl) &&
- ((nic->eeprom[eeprom_smbus_addr] & 0xFF) != 0xFE));
-}
-#endif
-
-static int e100_up(struct nic *nic)
-{
- int err;
-
- if((err = e100_rx_alloc_list(nic)))
- return err;
- if((err = e100_alloc_cbs(nic)))
- goto err_rx_clean_list;
- if((err = e100_hw_init(nic)))
- goto err_clean_cbs;
- if (!nic->ethercat) {
- e100_set_multicast_list(nic->netdev);
- e100_start_receiver(nic, NULL); // FIXME
- }
- mod_timer(&nic->watchdog, jiffies);
- if (!nic->ethercat) {
- if((err = request_irq(nic->pdev->irq, e100_intr, IRQF_SHARED,
- nic->netdev->name, nic->netdev)))
- goto err_no_irq;
- netif_wake_queue(nic->netdev);
- netif_poll_enable(nic->netdev);
- /* enable ints _after_ enabling poll, preventing a race between
- * disable ints+schedule */
- e100_enable_irq(nic);
- }
- return 0;
-
-err_no_irq:
- del_timer_sync(&nic->watchdog);
-err_clean_cbs:
- e100_clean_cbs(nic);
-err_rx_clean_list:
- e100_rx_clean_list(nic);
- return err;
-}
-
-static void e100_down(struct nic *nic)
-{
- if (!nic->ethercat) {
- /* wait here for poll to complete */
- netif_poll_disable(nic->netdev);
- netif_stop_queue(nic->netdev);
- }
- e100_hw_reset(nic);
- if (!nic->ethercat)
- free_irq(nic->pdev->irq, nic->netdev);
- del_timer_sync(&nic->watchdog);
- if (!nic->ethercat)
- netif_carrier_off(nic->netdev);
- e100_clean_cbs(nic);
- e100_rx_clean_list(nic);
-}
-
-static void e100_tx_timeout(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- /* Reset outside of interrupt context, to avoid request_irq
- * in interrupt context */
- schedule_work(&nic->tx_timeout_task); // FIXME
-}
-
-static void e100_tx_timeout_task(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- DPRINTK(TX_ERR, DEBUG, "scb.status=0x%02X\n",
- readb(&nic->csr->scb.status));
- e100_down(netdev_priv(netdev));
- e100_up(netdev_priv(netdev));
-}
-
-static int e100_loopback_test(struct nic *nic, enum loopback loopback_mode)
-{
- int err;
- struct sk_buff *skb;
-
- /* Use driver resources to perform internal MAC or PHY
- * loopback test. A single packet is prepared and transmitted
- * in loopback mode, and the test passes if the received
- * packet compares byte-for-byte to the transmitted packet. */
-
- if((err = e100_rx_alloc_list(nic)))
- return err;
- if((err = e100_alloc_cbs(nic)))
- goto err_clean_rx;
-
- /* ICH PHY loopback is broken so do MAC loopback instead */
- if(nic->flags & ich && loopback_mode == lb_phy)
- loopback_mode = lb_mac;
-
- nic->loopback = loopback_mode;
- if((err = e100_hw_init(nic)))
- goto err_loopback_none;
-
- if(loopback_mode == lb_phy)
- mdio_write(nic->netdev, nic->mii.phy_id, MII_BMCR,
- BMCR_LOOPBACK);
-
- e100_start_receiver(nic, NULL);
-
- if(!(skb = dev_alloc_skb(ETH_DATA_LEN))) {
- err = -ENOMEM;
- goto err_loopback_none;
- }
- skb_put(skb, ETH_DATA_LEN);
- memset(skb->data, 0xFF, ETH_DATA_LEN);
- e100_xmit_frame(skb, nic->netdev);
-
- msleep(10);
-
- pci_dma_sync_single_for_cpu(nic->pdev, nic->rx_to_clean->dma_addr,
- RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
-
- if(memcmp(nic->rx_to_clean->skb->data + sizeof(struct rfd),
- skb->data, ETH_DATA_LEN))
- err = -EAGAIN;
-
-err_loopback_none:
- mdio_write(nic->netdev, nic->mii.phy_id, MII_BMCR, 0);
- nic->loopback = lb_none;
- e100_clean_cbs(nic);
- e100_hw_reset(nic);
-err_clean_rx:
- e100_rx_clean_list(nic);
- return err;
-}
-
-#define MII_LED_CONTROL 0x1B
-static void e100_blink_led(unsigned long data)
-{
- struct nic *nic = (struct nic *)data;
- enum led_state {
- led_on = 0x01,
- led_off = 0x04,
- led_on_559 = 0x05,
- led_on_557 = 0x07,
- };
-
- nic->leds = (nic->leds & led_on) ? led_off :
- (nic->mac < mac_82559_D101M) ? led_on_557 : led_on_559;
- mdio_write(nic->netdev, nic->mii.phy_id, MII_LED_CONTROL, nic->leds);
- mod_timer(&nic->blink_timer, jiffies + HZ / 4);
-}
-
-static int e100_get_settings(struct net_device *netdev, struct ethtool_cmd *cmd)
-{
- struct nic *nic = netdev_priv(netdev);
- return mii_ethtool_gset(&nic->mii, cmd);
-}
-
-static int e100_set_settings(struct net_device *netdev, struct ethtool_cmd *cmd)
-{
- struct nic *nic = netdev_priv(netdev);
- int err;
-
- mdio_write(netdev, nic->mii.phy_id, MII_BMCR, BMCR_RESET);
- err = mii_ethtool_sset(&nic->mii, cmd);
- e100_exec_cb(nic, NULL, e100_configure);
-
- return err;
-}
-
-static void e100_get_drvinfo(struct net_device *netdev,
- struct ethtool_drvinfo *info)
-{
- struct nic *nic = netdev_priv(netdev);
- strcpy(info->driver, DRV_NAME);
- strcpy(info->version, DRV_VERSION);
- strcpy(info->fw_version, "N/A");
- strcpy(info->bus_info, pci_name(nic->pdev));
-}
-
-static int e100_get_regs_len(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-#define E100_PHY_REGS 0x1C
-#define E100_REGS_LEN 1 + E100_PHY_REGS + \
- sizeof(nic->mem->dump_buf) / sizeof(u32)
- return E100_REGS_LEN * sizeof(u32);
-}
-
-static void e100_get_regs(struct net_device *netdev,
- struct ethtool_regs *regs, void *p)
-{
- struct nic *nic = netdev_priv(netdev);
- u32 *buff = p;
- int i;
-
- regs->version = (1 << 24) | nic->rev_id;
- buff[0] = readb(&nic->csr->scb.cmd_hi) << 24 |
- readb(&nic->csr->scb.cmd_lo) << 16 |
- readw(&nic->csr->scb.status);
- for(i = E100_PHY_REGS; i >= 0; i--)
- buff[1 + E100_PHY_REGS - i] =
- mdio_read(netdev, nic->mii.phy_id, i);
- memset(nic->mem->dump_buf, 0, sizeof(nic->mem->dump_buf));
- e100_exec_cb(nic, NULL, e100_dump);
- msleep(10);
- memcpy(&buff[2 + E100_PHY_REGS], nic->mem->dump_buf,
- sizeof(nic->mem->dump_buf));
-}
-
-static void e100_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
-{
- struct nic *nic = netdev_priv(netdev);
- wol->supported = (nic->mac >= mac_82558_D101_A4) ? WAKE_MAGIC : 0;
- wol->wolopts = (nic->flags & wol_magic) ? WAKE_MAGIC : 0;
-}
-
-static int e100_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
-{
- struct nic *nic = netdev_priv(netdev);
-
- if(wol->wolopts != WAKE_MAGIC && wol->wolopts != 0)
- return -EOPNOTSUPP;
-
- if(wol->wolopts)
- nic->flags |= wol_magic;
- else
- nic->flags &= ~wol_magic;
-
- e100_exec_cb(nic, NULL, e100_configure);
-
- return 0;
-}
-
-static u32 e100_get_msglevel(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return nic->msg_enable;
-}
-
-static void e100_set_msglevel(struct net_device *netdev, u32 value)
-{
- struct nic *nic = netdev_priv(netdev);
- nic->msg_enable = value;
-}
-
-static int e100_nway_reset(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return mii_nway_restart(&nic->mii);
-}
-
-static u32 e100_get_link(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return mii_link_ok(&nic->mii);
-}
-
-static int e100_get_eeprom_len(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return nic->eeprom_wc << 1;
-}
-
-#define E100_EEPROM_MAGIC 0x1234
-static int e100_get_eeprom(struct net_device *netdev,
- struct ethtool_eeprom *eeprom, u8 *bytes)
-{
- struct nic *nic = netdev_priv(netdev);
-
- eeprom->magic = E100_EEPROM_MAGIC;
- memcpy(bytes, &((u8 *)nic->eeprom)[eeprom->offset], eeprom->len);
-
- return 0;
-}
-
-static int e100_set_eeprom(struct net_device *netdev,
- struct ethtool_eeprom *eeprom, u8 *bytes)
-{
- struct nic *nic = netdev_priv(netdev);
-
- if(eeprom->magic != E100_EEPROM_MAGIC)
- return -EINVAL;
-
- memcpy(&((u8 *)nic->eeprom)[eeprom->offset], bytes, eeprom->len);
-
- return e100_eeprom_save(nic, eeprom->offset >> 1,
- (eeprom->len >> 1) + 1);
-}
-
-static void e100_get_ringparam(struct net_device *netdev,
- struct ethtool_ringparam *ring)
-{
- struct nic *nic = netdev_priv(netdev);
- struct param_range *rfds = &nic->params.rfds;
- struct param_range *cbs = &nic->params.cbs;
-
- ring->rx_max_pending = rfds->max;
- ring->tx_max_pending = cbs->max;
- ring->rx_mini_max_pending = 0;
- ring->rx_jumbo_max_pending = 0;
- ring->rx_pending = rfds->count;
- ring->tx_pending = cbs->count;
- ring->rx_mini_pending = 0;
- ring->rx_jumbo_pending = 0;
-}
-
-static int e100_set_ringparam(struct net_device *netdev,
- struct ethtool_ringparam *ring)
-{
- struct nic *nic = netdev_priv(netdev);
- struct param_range *rfds = &nic->params.rfds;
- struct param_range *cbs = &nic->params.cbs;
-
- if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
- return -EINVAL;
-
- if(netif_running(netdev))
- e100_down(nic);
- rfds->count = max(ring->rx_pending, rfds->min);
- rfds->count = min(rfds->count, rfds->max);
- cbs->count = max(ring->tx_pending, cbs->min);
- cbs->count = min(cbs->count, cbs->max);
- DPRINTK(DRV, INFO, "Ring Param settings: rx: %d, tx %d\n",
- rfds->count, cbs->count);
- if(netif_running(netdev))
- e100_up(nic);
-
- return 0;
-}
-
-static const char e100_gstrings_test[][ETH_GSTRING_LEN] = {
- "Link test (on/offline)",
- "Eeprom test (on/offline)",
- "Self test (offline)",
- "Mac loopback (offline)",
- "Phy loopback (offline)",
-};
-#define E100_TEST_LEN sizeof(e100_gstrings_test) / ETH_GSTRING_LEN
-
-static int e100_diag_test_count(struct net_device *netdev)
-{
- return E100_TEST_LEN;
-}
-
-static void e100_diag_test(struct net_device *netdev,
- struct ethtool_test *test, u64 *data)
-{
- struct ethtool_cmd cmd;
- struct nic *nic = netdev_priv(netdev);
- int i, err;
-
- memset(data, 0, E100_TEST_LEN * sizeof(u64));
- data[0] = !mii_link_ok(&nic->mii);
- data[1] = e100_eeprom_load(nic);
- if(test->flags & ETH_TEST_FL_OFFLINE) {
-
- /* save speed, duplex & autoneg settings */
- err = mii_ethtool_gset(&nic->mii, &cmd);
-
- if(netif_running(netdev))
- e100_down(nic);
- data[2] = e100_self_test(nic);
- data[3] = e100_loopback_test(nic, lb_mac);
- data[4] = e100_loopback_test(nic, lb_phy);
-
- /* restore speed, duplex & autoneg settings */
- err = mii_ethtool_sset(&nic->mii, &cmd);
-
- if(netif_running(netdev))
- e100_up(nic);
- }
- for(i = 0; i < E100_TEST_LEN; i++)
- test->flags |= data[i] ? ETH_TEST_FL_FAILED : 0;
-
- msleep_interruptible(4 * 1000);
-}
-
-static int e100_phys_id(struct net_device *netdev, u32 data)
-{
- struct nic *nic = netdev_priv(netdev);
-
- if(!data || data > (u32)(MAX_SCHEDULE_TIMEOUT / HZ))
- data = (u32)(MAX_SCHEDULE_TIMEOUT / HZ);
- mod_timer(&nic->blink_timer, jiffies);
- msleep_interruptible(data * 1000);
- del_timer_sync(&nic->blink_timer);
- mdio_write(netdev, nic->mii.phy_id, MII_LED_CONTROL, 0);
-
- return 0;
-}
-
-static const char e100_gstrings_stats[][ETH_GSTRING_LEN] = {
- "rx_packets", "tx_packets", "rx_bytes", "tx_bytes", "rx_errors",
- "tx_errors", "rx_dropped", "tx_dropped", "multicast", "collisions",
- "rx_length_errors", "rx_over_errors", "rx_crc_errors",
- "rx_frame_errors", "rx_fifo_errors", "rx_missed_errors",
- "tx_aborted_errors", "tx_carrier_errors", "tx_fifo_errors",
- "tx_heartbeat_errors", "tx_window_errors",
- /* device-specific stats */
- "tx_deferred", "tx_single_collisions", "tx_multi_collisions",
- "tx_flow_control_pause", "rx_flow_control_pause",
- "rx_flow_control_unsupported", "tx_tco_packets", "rx_tco_packets",
-};
-#define E100_NET_STATS_LEN 21
-#define E100_STATS_LEN sizeof(e100_gstrings_stats) / ETH_GSTRING_LEN
-
-static int e100_get_stats_count(struct net_device *netdev)
-{
- return E100_STATS_LEN;
-}
-
-static void e100_get_ethtool_stats(struct net_device *netdev,
- struct ethtool_stats *stats, u64 *data)
-{
- struct nic *nic = netdev_priv(netdev);
- int i;
-
- for(i = 0; i < E100_NET_STATS_LEN; i++)
- data[i] = ((unsigned long *)&nic->net_stats)[i];
-
- data[i++] = nic->tx_deferred;
- data[i++] = nic->tx_single_collisions;
- data[i++] = nic->tx_multiple_collisions;
- data[i++] = nic->tx_fc_pause;
- data[i++] = nic->rx_fc_pause;
- data[i++] = nic->rx_fc_unsupported;
- data[i++] = nic->tx_tco_frames;
- data[i++] = nic->rx_tco_frames;
-}
-
-static void e100_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
-{
- switch(stringset) {
- case ETH_SS_TEST:
- memcpy(data, *e100_gstrings_test, sizeof(e100_gstrings_test));
- break;
- case ETH_SS_STATS:
- memcpy(data, *e100_gstrings_stats, sizeof(e100_gstrings_stats));
- break;
- }
-}
-
-static struct ethtool_ops e100_ethtool_ops = {
- .get_settings = e100_get_settings,
- .set_settings = e100_set_settings,
- .get_drvinfo = e100_get_drvinfo,
- .get_regs_len = e100_get_regs_len,
- .get_regs = e100_get_regs,
- .get_wol = e100_get_wol,
- .set_wol = e100_set_wol,
- .get_msglevel = e100_get_msglevel,
- .set_msglevel = e100_set_msglevel,
- .nway_reset = e100_nway_reset,
- .get_link = e100_get_link,
- .get_eeprom_len = e100_get_eeprom_len,
- .get_eeprom = e100_get_eeprom,
- .set_eeprom = e100_set_eeprom,
- .get_ringparam = e100_get_ringparam,
- .set_ringparam = e100_set_ringparam,
- .self_test_count = e100_diag_test_count,
- .self_test = e100_diag_test,
- .get_strings = e100_get_strings,
- .phys_id = e100_phys_id,
- .get_stats_count = e100_get_stats_count,
- .get_ethtool_stats = e100_get_ethtool_stats,
- .get_perm_addr = ethtool_op_get_perm_addr,
-};
-
-static int e100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
-{
- struct nic *nic = netdev_priv(netdev);
-
- return generic_mii_ioctl(&nic->mii, if_mii(ifr), cmd, NULL);
-}
-
-static int e100_alloc(struct nic *nic)
-{
- nic->mem = pci_alloc_consistent(nic->pdev, sizeof(struct mem),
- &nic->dma_addr);
- return nic->mem ? 0 : -ENOMEM;
-}
-
-static void e100_free(struct nic *nic)
-{
- if(nic->mem) {
- pci_free_consistent(nic->pdev, sizeof(struct mem),
- nic->mem, nic->dma_addr);
- nic->mem = NULL;
- }
-}
-
-static int e100_open(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- int err = 0;
-
- if (!nic->ethercat)
- netif_carrier_off(netdev);
- if((err = e100_up(nic)))
- DPRINTK(IFUP, ERR, "Cannot open interface, aborting.\n");
- return err;
-}
-
-static int e100_close(struct net_device *netdev)
-{
- e100_down(netdev_priv(netdev));
- return 0;
-}
-
-static int __devinit e100_probe(struct pci_dev *pdev,
- const struct pci_device_id *ent)
-{
- struct net_device *netdev;
- struct nic *nic;
- int err;
-
- if(!(netdev = alloc_etherdev(sizeof(struct nic)))) {
- if(((1 << debug) - 1) & NETIF_MSG_PROBE)
- printk(KERN_ERR PFX "Etherdev alloc failed, abort.\n");
- return -ENOMEM;
- }
-
- netdev->open = e100_open;
- netdev->stop = e100_close;
- netdev->hard_start_xmit = e100_xmit_frame;
- netdev->get_stats = e100_get_stats;
- netdev->set_multicast_list = e100_set_multicast_list;
- netdev->set_mac_address = e100_set_mac_address;
- netdev->change_mtu = e100_change_mtu;
- netdev->do_ioctl = e100_do_ioctl;
- SET_ETHTOOL_OPS(netdev, &e100_ethtool_ops);
- netdev->tx_timeout = e100_tx_timeout;
- netdev->watchdog_timeo = E100_WATCHDOG_PERIOD;
- netdev->poll = e100_poll;
- netdev->weight = E100_NAPI_WEIGHT;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- netdev->poll_controller = e100_netpoll;
-#endif
- strcpy(netdev->name, pci_name(pdev));
-
- nic = netdev_priv(netdev);
- nic->netdev = netdev;
- nic->pdev = pdev;
- nic->msg_enable = (1 << debug) - 1;
- pci_set_drvdata(pdev, netdev);
-
- if (e100_device_index++ == ec_device_index) {
- nic->ethercat = 1;
- e100_ec_netdev = netdev;
- }
- else {
- nic->ethercat = 0;
- }
- nic->ecdev = NULL;
-
- if((err = pci_enable_device(pdev))) {
- DPRINTK(PROBE, ERR, "Cannot enable PCI device, aborting.\n");
- goto err_out_free_dev;
- }
-
- if(!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
- DPRINTK(PROBE, ERR, "Cannot find proper PCI device "
- "base address, aborting.\n");
- err = -ENODEV;
- goto err_out_disable_pdev;
- }
-
- if((err = pci_request_regions(pdev, DRV_NAME))) {
- DPRINTK(PROBE, ERR, "Cannot obtain PCI resources, aborting.\n");
- goto err_out_disable_pdev;
- }
-
- if((err = pci_set_dma_mask(pdev, DMA_32BIT_MASK))) {
- DPRINTK(PROBE, ERR, "No usable DMA configuration, aborting.\n");
- goto err_out_free_res;
- }
-
- SET_MODULE_OWNER(netdev);
- SET_NETDEV_DEV(netdev, &pdev->dev);
-
- nic->csr = ioremap(pci_resource_start(pdev, 0), sizeof(struct csr));
- if(!nic->csr) {
- DPRINTK(PROBE, ERR, "Cannot map device registers, aborting.\n");
- err = -ENOMEM;
- goto err_out_free_res;
- }
-
- if(ent->driver_data)
- nic->flags |= ich;
- else
- nic->flags &= ~ich;
-
- e100_get_defaults(nic);
-
- /* locks must be initialized before calling hw_reset */
- spin_lock_init(&nic->cb_lock);
- spin_lock_init(&nic->cmd_lock);
- spin_lock_init(&nic->mdio_lock);
-
- /* Reset the device before pci_set_master() in case device is in some
- * funky state and has an interrupt pending - hint: we don't have the
- * interrupt handler registered yet. */
- e100_hw_reset(nic);
-
- pci_set_master(pdev);
-
- init_timer(&nic->watchdog);
- nic->watchdog.function = e100_watchdog;
- nic->watchdog.data = (unsigned long)nic;
- init_timer(&nic->blink_timer);
- nic->blink_timer.function = e100_blink_led;
- nic->blink_timer.data = (unsigned long)nic;
-
- INIT_WORK(&nic->tx_timeout_task,
- (void (*)(void *))e100_tx_timeout_task, netdev);
-
- if((err = e100_alloc(nic))) {
- DPRINTK(PROBE, ERR, "Cannot alloc driver memory, aborting.\n");
- goto err_out_iounmap;
- }
-
- if((err = e100_eeprom_load(nic)))
- goto err_out_free;
-
- e100_phy_init(nic);
-
- memcpy(netdev->dev_addr, nic->eeprom, ETH_ALEN);
- memcpy(netdev->perm_addr, nic->eeprom, ETH_ALEN);
- if(!is_valid_ether_addr(netdev->perm_addr)) {
- DPRINTK(PROBE, ERR, "Invalid MAC address from "
- "EEPROM, aborting.\n");
- err = -EAGAIN;
- goto err_out_free;
- }
-
- /* Wol magic packet can be enabled from eeprom */
- if((nic->mac >= mac_82558_D101_A4) &&
- (nic->eeprom[eeprom_id] & eeprom_id_wol))
- nic->flags |= wol_magic;
-
- /* ack any pending wake events, disable PME */
- err = pci_enable_wake(pdev, 0, 0);
- if (err)
- DPRINTK(PROBE, ERR, "Error clearing wake event\n");
-
- if (!nic->ethercat) {
- strcpy(netdev->name, "eth%d");
- if((err = register_netdev(netdev))) {
- DPRINTK(PROBE, ERR, "Cannot register net device, aborting.\n");
- goto err_out_free;
- }
- }
- else {
- strcpy(netdev->name, "ec0");
- }
-
- DPRINTK(PROBE, INFO, "addr 0x%llx, irq %d, "
- "MAC addr %02X:%02X:%02X:%02X:%02X:%02X\n",
- (unsigned long long)pci_resource_start(pdev, 0), pdev->irq,
- netdev->dev_addr[0], netdev->dev_addr[1], netdev->dev_addr[2],
- netdev->dev_addr[3], netdev->dev_addr[4], netdev->dev_addr[5]);
-
- return 0;
-
-err_out_free:
- e100_free(nic);
-err_out_iounmap:
- iounmap(nic->csr);
-err_out_free_res:
- pci_release_regions(pdev);
-err_out_disable_pdev:
- pci_disable_device(pdev);
-err_out_free_dev:
- pci_set_drvdata(pdev, NULL);
- free_netdev(netdev);
- return err;
-}
-
-static void __devexit e100_remove(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
-
- if(netdev) {
- struct nic *nic = netdev_priv(netdev);
- if (!nic->ethercat)
- unregister_netdev(netdev);
- e100_free(nic);
- iounmap(nic->csr);
- free_netdev(netdev);
- pci_release_regions(pdev);
- pci_disable_device(pdev);
- pci_set_drvdata(pdev, NULL);
- }
-}
-
-#ifdef CONFIG_PM
-static int e100_suspend(struct pci_dev *pdev, pm_message_t state)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
- int retval;
-
- if (nic->ethercat || netif_running(netdev))
- e100_down(nic);
- e100_hw_reset(nic);
- if (!nic->ethercat)
- netif_device_detach(netdev);
-
- pci_save_state(pdev);
- retval = pci_enable_wake(pdev, pci_choose_state(pdev, state),
- nic->flags & (wol_magic | e100_asf(nic)));
- if (retval)
- DPRINTK(PROBE,ERR, "Error enabling wake\n");
- pci_disable_device(pdev);
- retval = pci_set_power_state(pdev, pci_choose_state(pdev, state));
- if (retval)
- DPRINTK(PROBE,ERR, "Error %d setting power state\n", retval);
-
- return 0;
-}
-
-static int e100_resume(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
- int retval;
-
- retval = pci_set_power_state(pdev, PCI_D0);
- if (retval)
- DPRINTK(PROBE,ERR, "Error waking adapter\n");
- pci_restore_state(pdev);
- /* ack any pending wake events, disable PME */
- retval = pci_enable_wake(pdev, 0, 0);
- if (retval)
- DPRINTK(PROBE,ERR, "Error clearing wake events\n");
-
- if (!nic->ethercat)
- netif_device_attach(netdev);
- if (nic->ethercat || netif_running(netdev))
- e100_up(nic);
-
- return 0;
-}
-#endif
-
-
-static void e100_shutdown(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
- int retval;
-
-#ifdef CONFIG_PM
- retval = pci_enable_wake(pdev, 0, nic->flags & (wol_magic | e100_asf(nic)));
-#else
- retval = pci_enable_wake(pdev, 0, nic->flags & (wol_magic));
-#endif
- if (retval)
- DPRINTK(PROBE,ERR, "Error enabling wake\n");
-}
-
-/* ------------------ PCI Error Recovery infrastructure -------------- */
-/**
- * e100_io_error_detected - called when PCI error is detected.
- * @pdev: Pointer to PCI device
- * @state: The current pci connection state
- */
-static pci_ers_result_t e100_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
-
-	/* Similar to calling e100_down(), but avoids adapter I/O. */
- netdev->stop(netdev);
-
- if (!nic->ethercat) {
- /* Detach; put netif into state similar to hotplug unplug. */
- netif_poll_enable(netdev);
- netif_device_detach(netdev);
- }
-
- /* Request a slot reset. */
- return PCI_ERS_RESULT_NEED_RESET;
-}
-
-/**
- * e100_io_slot_reset - called after the pci bus has been reset.
- * @pdev: Pointer to PCI device
- *
- * Restart the card from scratch.
- */
-static pci_ers_result_t e100_io_slot_reset(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
-
- if (pci_enable_device(pdev)) {
- printk(KERN_ERR "e100: Cannot re-enable PCI device after reset.\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
- pci_set_master(pdev);
-
- /* Only one device per card can do a reset */
- if (0 != PCI_FUNC(pdev->devfn))
- return PCI_ERS_RESULT_RECOVERED;
- e100_hw_reset(nic);
- e100_phy_init(nic);
-
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-/**
- * e100_io_resume - resume normal operations
- * @pdev: Pointer to PCI device
- *
- * Resume normal operations after an error recovery
- * sequence has been completed.
- */
-static void e100_io_resume(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
-
- /* ack any pending wake events, disable PME */
- pci_enable_wake(pdev, 0, 0);
-
- if (!nic->ethercat)
- netif_device_attach(netdev);
- if (nic->ethercat || netif_running(netdev)) {
- e100_open(netdev);
- mod_timer(&nic->watchdog, jiffies);
- }
-}
-
-static struct pci_error_handlers e100_err_handler = {
- .error_detected = e100_io_error_detected,
- .slot_reset = e100_io_slot_reset,
- .resume = e100_io_resume,
-};
-
-static struct pci_driver e100_driver = {
- .name = DRV_NAME,
- .id_table = e100_id_table,
- .probe = e100_probe,
- .remove = __devexit_p(e100_remove),
-#ifdef CONFIG_PM
- .suspend = e100_suspend,
- .resume = e100_resume,
-#endif
- .shutdown = e100_shutdown,
- .err_handler = &e100_err_handler,
-};
-
-static int __init e100_init_module(void)
-{
- struct nic *nic;
-
- printk(KERN_INFO DRV_NAME " " DRV_DESCRIPTION " " DRV_VERSION
- ", master " EC_MASTER_VERSION "\n");
- printk(KERN_INFO DRV_NAME " ec_device_index is %i\n", ec_device_index);
-
- if (pci_module_init(&e100_driver) < 0) {
- printk(KERN_ERR DRV_NAME " Failed to init PCI module.\n");
- goto out_return;
- }
-
- if (e100_ec_netdev) {
- nic = netdev_priv(e100_ec_netdev);
- printk(KERN_INFO DRV_NAME " Registering EtherCAT device...\n");
- if (!(nic->ecdev = ecdev_register(ec_device_master_index,
- e100_ec_netdev, e100_ec_poll, THIS_MODULE))) {
- printk(KERN_ERR DRV_NAME " Failed to register EtherCAT device!\n");
- goto out_pci;
- }
- printk(KERN_INFO DRV_NAME " Opening EtherCAT device...\n");
- if (ecdev_open(nic->ecdev)) {
- printk(KERN_ERR DRV_NAME " Failed to open EtherCAT device!\n");
- goto out_unregister;
- }
-
- printk(KERN_INFO DRV_NAME " EtherCAT device ready.\n");
- } else {
- printk(KERN_WARNING DRV_NAME " No EtherCAT device registered!\n");
- }
-
- return 0;
-
-out_unregister:
- printk(KERN_INFO DRV_NAME " Unregistering EtherCAT device...\n");
- ecdev_unregister(ec_device_master_index, nic->ecdev);
-out_pci:
- pci_unregister_driver(&e100_driver);
-out_return:
- return -1;
-}
-
-static void __exit e100_cleanup_module(void)
-{
- printk(KERN_INFO DRV_NAME " Cleaning up module...\n");
-
- if (e100_ec_netdev) {
- struct nic *nic = netdev_priv(e100_ec_netdev);
- printk(KERN_INFO DRV_NAME " Closing EtherCAT device...\n");
- ecdev_close(nic->ecdev);
- printk(KERN_INFO DRV_NAME " Unregistering EtherCAT device...\n");
- ecdev_unregister(ec_device_master_index, nic->ecdev);
- }
-
- pci_unregister_driver(&e100_driver);
-
- printk(KERN_INFO DRV_NAME " module cleaned up.\n");
-}
-
-module_init(e100_init_module);
-module_exit(e100_cleanup_module);
--- a/devices/e100-2.6.18-orig.c Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,2889 +0,0 @@
-/*******************************************************************************
-
-
- Copyright(c) 1999 - 2005 Intel Corporation. All rights reserved.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 2 of the License, or (at your option)
- any later version.
-
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc., 59
- Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-
- The full GNU General Public License is included in this distribution in the
- file called LICENSE.
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-
-*******************************************************************************/
-
-/*
- * e100.c: Intel(R) PRO/100 ethernet driver
- *
- * (Re)written 2003 by scott.feldman@intel.com. Based loosely on
- * original e100 driver, but better described as a munging of
- * e100, e1000, eepro100, tg3, 8139cp, and other drivers.
- *
- * References:
- * Intel 8255x 10/100 Mbps Ethernet Controller Family,
- * Open Source Software Developers Manual,
- * http://sourceforge.net/projects/e1000
- *
- *
- * Theory of Operation
- *
- * I. General
- *
- * The driver supports Intel(R) 10/100 Mbps PCI Fast Ethernet
- * controller family, which includes the 82557, 82558, 82559, 82550,
- * 82551, and 82562 devices. 82558 and greater controllers
- * integrate the Intel 82555 PHY. The controllers are used in
- * server and client network interface cards, as well as in
- * LAN-On-Motherboard (LOM), CardBus, MiniPCI, and ICHx
- * configurations. 8255x supports a 32-bit linear addressing
- * mode and operates at a 33 MHz PCI clock rate.
- *
- * II. Driver Operation
- *
- * Memory-mapped mode is used exclusively to access the device's
- * shared-memory structure, the Control/Status Registers (CSR). All
- * setup, configuration, and control of the device, including queuing
- * of Tx, Rx, and configuration commands is through the CSR.
- * cmd_lock serializes accesses to the CSR command register. cb_lock
- * protects the shared Command Block List (CBL).
- *
- * 8255x is highly MII-compliant and all accesses to the PHY go
- * through the Management Data Interface (MDI). Consequently, the
- * driver leverages the mii.c library shared with other MII-compliant
- * devices.
- *
- * Big- and Little-Endian byte order as well as 32- and 64-bit
- * archs are supported. Weak-ordered memory and non-cache-coherent
- * archs are supported.
- *
- * III. Transmit
- *
- * A Tx skb is mapped and hangs off of a TCB. TCBs are linked
- * together in a fixed-size ring (CBL) thus forming the flexible mode
- * memory structure. A TCB marked with the suspend-bit indicates
- * the end of the ring. The last TCB processed suspends the
- * controller, and the controller can be restarted by issuing a CU
- * resume command to continue from the suspend point, or a CU start
- * command to start at a given position in the ring.
- *
- * Non-Tx commands (config, multicast setup, etc) are linked
- * into the CBL ring along with Tx commands. The common structure
- * used for both Tx and non-Tx commands is the Command Block (CB).
- *
- * cb_to_use is the next CB to use for queuing a command; cb_to_clean
- * is the next CB to check for completion; cb_to_send is the first
- * CB to start on in case of a previous failure to resume. CB clean
- * up happens in interrupt context in response to a CU interrupt.
- * cbs_avail keeps track of number of free CB resources available.
- *
- * Hardware padding of short packets to minimum packet size is
- * enabled. 82557 pads with 7Eh, while the later controllers pad
- * with 00h.
- *
- * IV. Receive
- *
- * The Receive Frame Area (RFA) comprises a ring of Receive Frame
- * Descriptors (RFD) + data buffer, thus forming the simplified mode
- * memory structure. Rx skbs are allocated to contain both the RFD
- * and the data buffer, but the RFD is pulled off before the skb is
- * indicated. The data buffer is aligned such that encapsulated
- * protocol headers are u32-aligned. Since the RFD is part of the
- * mapped shared memory, and completion status is contained within
- * the RFD, the RFD must be dma_sync'ed to maintain a consistent
- * view from software and hardware.
- *
- * Under typical operation, the receive unit (RU) is started once,
- * and the controller happily fills RFDs as frames arrive. If
- * replacement RFDs cannot be allocated, or the RU goes non-active,
- * the RU must be restarted. Frame arrival generates an interrupt,
- * and Rx indication and re-allocation happen in the same context,
- * therefore no locking is required. A software-generated interrupt
- * is generated from the watchdog to recover from a failed allocation
- * scenario where all Rx resources have been indicated and none re-
- * placed.
- *
- * V. Miscellaneous
- *
- * VLAN offloading of tagging, stripping and filtering is not
- * supported, but the driver will accommodate the extra 4-byte VLAN tag
- * for processing by upper layers. Tx/Rx Checksum offloading is not
- * supported. Tx Scatter/Gather is not supported. Jumbo Frames are
- * not supported (hardware limitation).
- *
- * MagicPacket(tm) WoL support is enabled/disabled via ethtool.
- *
- * Thanks to JC (jchapman@katalix.com) for helping with
- * testing/troubleshooting the development driver.
- *
- * TODO:
- * o several entry points race with dev->close
- * o check for tx-no-resources/stop Q races with tx clean/wake Q
- *
- * FIXES:
- * 2005/12/02 - Michael O'Donnell <Michael.ODonnell at stratus dot com>
- * - Stratus87247: protect MDI control register manipulations
- */
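/*
 * Editor's note: the command-block bookkeeping described in section III
 * above (and the similar circular Rx RFA of section IV) is easier to follow
 * with a tiny model.  The stand-alone sketch below is not part of the driver
 * or of this patch; all names (demo_cb, demo_queue, ...) are made up.  It
 * only illustrates how a fixed-size circular list, a "next to use" pointer,
 * a "next to clean" pointer and a free counter interact as commands are
 * queued and later completed.
 */
#include <stdio.h>

#define DEMO_RING_SIZE 4

struct demo_cb {
	int busy;			/* queued, not yet completed by "hardware" */
	struct demo_cb *next;
};

static struct demo_cb demo_ring[DEMO_RING_SIZE];
static struct demo_cb *demo_to_use;	/* next CB to queue a command into */
static struct demo_cb *demo_to_clean;	/* next CB to check for completion */
static int demo_avail = DEMO_RING_SIZE;

static void demo_init(void)
{
	int i;

	for (i = 0; i < DEMO_RING_SIZE; i++)
		demo_ring[i].next = &demo_ring[(i + 1) % DEMO_RING_SIZE];
	demo_to_use = demo_to_clean = &demo_ring[0];
}

static int demo_queue(void)
{
	if (!demo_avail)
		return -1;		/* ring full; caller must retry later */
	demo_to_use->busy = 1;
	demo_to_use = demo_to_use->next;
	demo_avail--;
	return 0;
}

static void demo_clean(void)
{
	/* commands complete in order, so reclaim from the clean pointer on */
	while (demo_avail < DEMO_RING_SIZE && demo_to_clean->busy) {
		demo_to_clean->busy = 0;
		demo_to_clean = demo_to_clean->next;
		demo_avail++;
	}
}

int main(void)
{
	demo_init();
	demo_queue();
	demo_queue();
	printf("free CBs after queuing two commands: %d\n", demo_avail); /* 2 */
	demo_clean();
	printf("free CBs after cleanup: %d\n", demo_avail);		 /* 4 */
	return 0;
}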
-
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/slab.h>
-#include <linux/delay.h>
-#include <linux/init.h>
-#include <linux/pci.h>
-#include <linux/dma-mapping.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/mii.h>
-#include <linux/if_vlan.h>
-#include <linux/skbuff.h>
-#include <linux/ethtool.h>
-#include <linux/string.h>
-#include <asm/unaligned.h>
-
-
-#define DRV_NAME "e100"
-#define DRV_EXT "-NAPI"
-#define DRV_VERSION "3.5.10-k2"DRV_EXT
-#define DRV_DESCRIPTION "Intel(R) PRO/100 Network Driver"
-#define DRV_COPYRIGHT "Copyright(c) 1999-2005 Intel Corporation"
-#define PFX DRV_NAME ": "
-
-#define E100_WATCHDOG_PERIOD (2 * HZ)
-#define E100_NAPI_WEIGHT 16
-
-MODULE_DESCRIPTION(DRV_DESCRIPTION);
-MODULE_AUTHOR(DRV_COPYRIGHT);
-MODULE_LICENSE("GPL");
-MODULE_VERSION(DRV_VERSION);
-
-static int debug = 3;
-static int eeprom_bad_csum_allow = 0;
-module_param(debug, int, 0);
-module_param(eeprom_bad_csum_allow, int, 0);
-MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
-MODULE_PARM_DESC(eeprom_bad_csum_allow, "Allow bad eeprom checksums");
-#define DPRINTK(nlevel, klevel, fmt, args...) \
- (void)((NETIF_MSG_##nlevel & nic->msg_enable) && \
- printk(KERN_##klevel PFX "%s: %s: " fmt, nic->netdev->name, \
- __FUNCTION__ , ## args))
-
-#define INTEL_8255X_ETHERNET_DEVICE(device_id, ich) {\
- PCI_VENDOR_ID_INTEL, device_id, PCI_ANY_ID, PCI_ANY_ID, \
- PCI_CLASS_NETWORK_ETHERNET << 8, 0xFFFF00, ich }
-static struct pci_device_id e100_id_table[] = {
- INTEL_8255X_ETHERNET_DEVICE(0x1029, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1030, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1031, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1032, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1033, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1034, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1038, 3),
- INTEL_8255X_ETHERNET_DEVICE(0x1039, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103A, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103B, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103C, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103D, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x103E, 4),
- INTEL_8255X_ETHERNET_DEVICE(0x1050, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1051, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1052, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1053, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1054, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1055, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1056, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1057, 5),
- INTEL_8255X_ETHERNET_DEVICE(0x1059, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1064, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1065, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1066, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1067, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1068, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1069, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x106A, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x106B, 6),
- INTEL_8255X_ETHERNET_DEVICE(0x1091, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1092, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1093, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1094, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1095, 7),
- INTEL_8255X_ETHERNET_DEVICE(0x1209, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x1229, 0),
- INTEL_8255X_ETHERNET_DEVICE(0x2449, 2),
- INTEL_8255X_ETHERNET_DEVICE(0x2459, 2),
- INTEL_8255X_ETHERNET_DEVICE(0x245D, 2),
- INTEL_8255X_ETHERNET_DEVICE(0x27DC, 7),
- { 0, }
-};
-MODULE_DEVICE_TABLE(pci, e100_id_table);
-
-enum mac {
- mac_82557_D100_A = 0,
- mac_82557_D100_B = 1,
- mac_82557_D100_C = 2,
- mac_82558_D101_A4 = 4,
- mac_82558_D101_B0 = 5,
- mac_82559_D101M = 8,
- mac_82559_D101S = 9,
- mac_82550_D102 = 12,
- mac_82550_D102_C = 13,
- mac_82551_E = 14,
- mac_82551_F = 15,
- mac_82551_10 = 16,
- mac_unknown = 0xFF,
-};
-
-enum phy {
- phy_100a = 0x000003E0,
- phy_100c = 0x035002A8,
- phy_82555_tx = 0x015002A8,
- phy_nsc_tx = 0x5C002000,
- phy_82562_et = 0x033002A8,
- phy_82562_em = 0x032002A8,
- phy_82562_ek = 0x031002A8,
- phy_82562_eh = 0x017002A8,
- phy_unknown = 0xFFFFFFFF,
-};
-
-/* CSR (Control/Status Registers) */
-struct csr {
- struct {
- u8 status;
- u8 stat_ack;
- u8 cmd_lo;
- u8 cmd_hi;
- u32 gen_ptr;
- } scb;
- u32 port;
- u16 flash_ctrl;
- u8 eeprom_ctrl_lo;
- u8 eeprom_ctrl_hi;
- u32 mdi_ctrl;
- u32 rx_dma_count;
-};
-
-enum scb_status {
- rus_ready = 0x10,
- rus_mask = 0x3C,
-};
-
-enum ru_state {
- RU_SUSPENDED = 0,
- RU_RUNNING = 1,
- RU_UNINITIALIZED = -1,
-};
-
-enum scb_stat_ack {
- stat_ack_not_ours = 0x00,
- stat_ack_sw_gen = 0x04,
- stat_ack_rnr = 0x10,
- stat_ack_cu_idle = 0x20,
- stat_ack_frame_rx = 0x40,
- stat_ack_cu_cmd_done = 0x80,
- stat_ack_not_present = 0xFF,
- stat_ack_rx = (stat_ack_sw_gen | stat_ack_rnr | stat_ack_frame_rx),
- stat_ack_tx = (stat_ack_cu_idle | stat_ack_cu_cmd_done),
-};
-
-enum scb_cmd_hi {
- irq_mask_none = 0x00,
- irq_mask_all = 0x01,
- irq_sw_gen = 0x02,
-};
-
-enum scb_cmd_lo {
- cuc_nop = 0x00,
- ruc_start = 0x01,
- ruc_load_base = 0x06,
- cuc_start = 0x10,
- cuc_resume = 0x20,
- cuc_dump_addr = 0x40,
- cuc_dump_stats = 0x50,
- cuc_load_base = 0x60,
- cuc_dump_reset = 0x70,
-};
-
-enum cuc_dump {
- cuc_dump_complete = 0x0000A005,
- cuc_dump_reset_complete = 0x0000A007,
-};
-
-enum port {
- software_reset = 0x0000,
- selftest = 0x0001,
- selective_reset = 0x0002,
-};
-
-enum eeprom_ctrl_lo {
- eesk = 0x01,
- eecs = 0x02,
- eedi = 0x04,
- eedo = 0x08,
-};
-
-enum mdi_ctrl {
- mdi_write = 0x04000000,
- mdi_read = 0x08000000,
- mdi_ready = 0x10000000,
-};
-
-enum eeprom_op {
- op_write = 0x05,
- op_read = 0x06,
- op_ewds = 0x10,
- op_ewen = 0x13,
-};
-
-enum eeprom_offsets {
- eeprom_cnfg_mdix = 0x03,
- eeprom_id = 0x0A,
- eeprom_config_asf = 0x0D,
- eeprom_smbus_addr = 0x90,
-};
-
-enum eeprom_cnfg_mdix {
- eeprom_mdix_enabled = 0x0080,
-};
-
-enum eeprom_id {
- eeprom_id_wol = 0x0020,
-};
-
-enum eeprom_config_asf {
- eeprom_asf = 0x8000,
- eeprom_gcl = 0x4000,
-};
-
-enum cb_status {
- cb_complete = 0x8000,
- cb_ok = 0x2000,
-};
-
-enum cb_command {
- cb_nop = 0x0000,
- cb_iaaddr = 0x0001,
- cb_config = 0x0002,
- cb_multi = 0x0003,
- cb_tx = 0x0004,
- cb_ucode = 0x0005,
- cb_dump = 0x0006,
- cb_tx_sf = 0x0008,
- cb_cid = 0x1f00,
- cb_i = 0x2000,
- cb_s = 0x4000,
- cb_el = 0x8000,
-};
-
-struct rfd {
- u16 status;
- u16 command;
- u32 link;
- u32 rbd;
- u16 actual_size;
- u16 size;
-};
-
-struct rx {
- struct rx *next, *prev;
- struct sk_buff *skb;
- dma_addr_t dma_addr;
-};
-
-#if defined(__BIG_ENDIAN_BITFIELD)
-#define X(a,b) b,a
-#else
-#define X(a,b) a,b
-#endif
-struct config {
-/*0*/ u8 X(byte_count:6, pad0:2);
-/*1*/ u8 X(X(rx_fifo_limit:4, tx_fifo_limit:3), pad1:1);
-/*2*/ u8 adaptive_ifs;
-/*3*/ u8 X(X(X(X(mwi_enable:1, type_enable:1), read_align_enable:1),
- term_write_cache_line:1), pad3:4);
-/*4*/ u8 X(rx_dma_max_count:7, pad4:1);
-/*5*/ u8 X(tx_dma_max_count:7, dma_max_count_enable:1);
-/*6*/ u8 X(X(X(X(X(X(X(late_scb_update:1, direct_rx_dma:1),
- tno_intr:1), cna_intr:1), standard_tcb:1), standard_stat_counter:1),
- rx_discard_overruns:1), rx_save_bad_frames:1);
-/*7*/ u8 X(X(X(X(X(rx_discard_short_frames:1, tx_underrun_retry:2),
- pad7:2), rx_extended_rfd:1), tx_two_frames_in_fifo:1),
- tx_dynamic_tbd:1);
-/*8*/ u8 X(X(mii_mode:1, pad8:6), csma_disabled:1);
-/*9*/ u8 X(X(X(X(X(rx_tcpudp_checksum:1, pad9:3), vlan_arp_tco:1),
- link_status_wake:1), arp_wake:1), mcmatch_wake:1);
-/*10*/ u8 X(X(X(pad10:3, no_source_addr_insertion:1), preamble_length:2),
- loopback:2);
-/*11*/ u8 X(linear_priority:3, pad11:5);
-/*12*/ u8 X(X(linear_priority_mode:1, pad12:3), ifs:4);
-/*13*/ u8 ip_addr_lo;
-/*14*/ u8 ip_addr_hi;
-/*15*/ u8 X(X(X(X(X(X(X(promiscuous_mode:1, broadcast_disabled:1),
- wait_after_win:1), pad15_1:1), ignore_ul_bit:1), crc_16_bit:1),
- pad15_2:1), crs_or_cdt:1);
-/*16*/ u8 fc_delay_lo;
-/*17*/ u8 fc_delay_hi;
-/*18*/ u8 X(X(X(X(X(rx_stripping:1, tx_padding:1), rx_crc_transfer:1),
- rx_long_ok:1), fc_priority_threshold:3), pad18:1);
-/*19*/ u8 X(X(X(X(X(X(X(addr_wake:1, magic_packet_disable:1),
- fc_disable:1), fc_restop:1), fc_restart:1), fc_reject:1),
- full_duplex_force:1), full_duplex_pin:1);
-/*20*/ u8 X(X(X(pad20_1:5, fc_priority_location:1), multi_ia:1), pad20_2:1);
-/*21*/ u8 X(X(pad21_1:3, multicast_all:1), pad21_2:4);
-/*22*/ u8 X(X(rx_d102_mode:1, rx_vlan_drop:1), pad22:6);
- u8 pad_d102[9];
-};
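/*
 * Editor's note: a stand-alone demonstration (not part of the driver or of
 * this patch) of why the X() macro above swaps bit-field declaration order.
 * With little-endian bit-field allocation (the common case on x86), the
 * field declared first occupies the least significant bits; kernels built
 * with __BIG_ENDIAN_BITFIELD allocate from the most significant end, so X()
 * reverses the declaration order to keep the byte layout the hardware
 * expects.  The struct and values below are made up.
 */
#include <stdio.h>

struct demo_byte {
	unsigned char lo:4;	/* declared first: low nibble on LE bit-fields */
	unsigned char hi:4;
};

int main(void)
{
	union {
		struct demo_byte d;
		unsigned char raw;
	} u;

	u.d.lo = 0x2;
	u.d.hi = 0x7;
	/* prints 0x72 with little-endian bit-field allocation */
	printf("raw byte = 0x%02X\n", u.raw);
	return 0;
}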
-
-#define E100_MAX_MULTICAST_ADDRS 64
-struct multi {
- u16 count;
- u8 addr[E100_MAX_MULTICAST_ADDRS * ETH_ALEN + 2/*pad*/];
-};
-
-/* Important: keep total struct u32-aligned */
-#define UCODE_SIZE 134
-struct cb {
- u16 status;
- u16 command;
- u32 link;
- union {
- u8 iaaddr[ETH_ALEN];
- u32 ucode[UCODE_SIZE];
- struct config config;
- struct multi multi;
- struct {
- u32 tbd_array;
- u16 tcb_byte_count;
- u8 threshold;
- u8 tbd_count;
- struct {
- u32 buf_addr;
- u16 size;
- u16 eol;
- } tbd;
- } tcb;
- u32 dump_buffer_addr;
- } u;
- struct cb *next, *prev;
- dma_addr_t dma_addr;
- struct sk_buff *skb;
-};
-
-enum loopback {
- lb_none = 0, lb_mac = 1, lb_phy = 3,
-};
-
-struct stats {
- u32 tx_good_frames, tx_max_collisions, tx_late_collisions,
- tx_underruns, tx_lost_crs, tx_deferred, tx_single_collisions,
- tx_multiple_collisions, tx_total_collisions;
- u32 rx_good_frames, rx_crc_errors, rx_alignment_errors,
- rx_resource_errors, rx_overrun_errors, rx_cdt_errors,
- rx_short_frame_errors;
- u32 fc_xmt_pause, fc_rcv_pause, fc_rcv_unsupported;
- u16 xmt_tco_frames, rcv_tco_frames;
- u32 complete;
-};
-
-struct mem {
- struct {
- u32 signature;
- u32 result;
- } selftest;
- struct stats stats;
- u8 dump_buf[596];
-};
-
-struct param_range {
- u32 min;
- u32 max;
- u32 count;
-};
-
-struct params {
- struct param_range rfds;
- struct param_range cbs;
-};
-
-struct nic {
- /* Begin: frequently used values: keep adjacent for cache effect */
- u32 msg_enable ____cacheline_aligned;
- struct net_device *netdev;
- struct pci_dev *pdev;
-
- struct rx *rxs ____cacheline_aligned;
- struct rx *rx_to_use;
- struct rx *rx_to_clean;
- struct rfd blank_rfd;
- enum ru_state ru_running;
-
- spinlock_t cb_lock ____cacheline_aligned;
- spinlock_t cmd_lock;
- struct csr __iomem *csr;
- enum scb_cmd_lo cuc_cmd;
- unsigned int cbs_avail;
- struct cb *cbs;
- struct cb *cb_to_use;
- struct cb *cb_to_send;
- struct cb *cb_to_clean;
- u16 tx_command;
- /* End: frequently used values: keep adjacent for cache effect */
-
- enum {
- ich = (1 << 0),
- promiscuous = (1 << 1),
- multicast_all = (1 << 2),
- wol_magic = (1 << 3),
- ich_10h_workaround = (1 << 4),
- } flags ____cacheline_aligned;
-
- enum mac mac;
- enum phy phy;
- struct params params;
- struct net_device_stats net_stats;
- struct timer_list watchdog;
- struct timer_list blink_timer;
- struct mii_if_info mii;
- struct work_struct tx_timeout_task;
- enum loopback loopback;
-
- struct mem *mem;
- dma_addr_t dma_addr;
-
- dma_addr_t cbs_dma_addr;
- u8 adaptive_ifs;
- u8 tx_threshold;
- u32 tx_frames;
- u32 tx_collisions;
- u32 tx_deferred;
- u32 tx_single_collisions;
- u32 tx_multiple_collisions;
- u32 tx_fc_pause;
- u32 tx_tco_frames;
-
- u32 rx_fc_pause;
- u32 rx_fc_unsupported;
- u32 rx_tco_frames;
- u32 rx_over_length_errors;
-
- u8 rev_id;
- u16 leds;
- u16 eeprom_wc;
- u16 eeprom[256];
- spinlock_t mdio_lock;
-};
-
-static inline void e100_write_flush(struct nic *nic)
-{
- /* Flush previous PCI writes through intermediate bridges
- * by doing a benign read */
- (void)readb(&nic->csr->scb.status);
-}
-
-static void e100_enable_irq(struct nic *nic)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&nic->cmd_lock, flags);
- writeb(irq_mask_none, &nic->csr->scb.cmd_hi);
- e100_write_flush(nic);
- spin_unlock_irqrestore(&nic->cmd_lock, flags);
-}
-
-static void e100_disable_irq(struct nic *nic)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&nic->cmd_lock, flags);
- writeb(irq_mask_all, &nic->csr->scb.cmd_hi);
- e100_write_flush(nic);
- spin_unlock_irqrestore(&nic->cmd_lock, flags);
-}
-
-static void e100_hw_reset(struct nic *nic)
-{
- /* Put CU and RU into idle with a selective reset to get
- * device off of PCI bus */
- writel(selective_reset, &nic->csr->port);
- e100_write_flush(nic); udelay(20);
-
- /* Now fully reset device */
- writel(software_reset, &nic->csr->port);
- e100_write_flush(nic); udelay(20);
-
- /* Mask off our interrupt line - it's unmasked after reset */
- e100_disable_irq(nic);
-}
-
-static int e100_self_test(struct nic *nic)
-{
- u32 dma_addr = nic->dma_addr + offsetof(struct mem, selftest);
-
- /* Passing the self-test is a pretty good indication
- * that the device can DMA to/from host memory */
-
- nic->mem->selftest.signature = 0;
- nic->mem->selftest.result = 0xFFFFFFFF;
-
- writel(selftest | dma_addr, &nic->csr->port);
- e100_write_flush(nic);
- /* Wait 10 msec for self-test to complete */
- msleep(10);
-
- /* Interrupts are enabled after self-test */
- e100_disable_irq(nic);
-
- /* Check results of self-test */
- if(nic->mem->selftest.result != 0) {
- DPRINTK(HW, ERR, "Self-test failed: result=0x%08X\n",
- nic->mem->selftest.result);
- return -ETIMEDOUT;
- }
- if(nic->mem->selftest.signature == 0) {
- DPRINTK(HW, ERR, "Self-test failed: timed out\n");
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-static void e100_eeprom_write(struct nic *nic, u16 addr_len, u16 addr, u16 data)
-{
- u32 cmd_addr_data[3];
- u8 ctrl;
- int i, j;
-
- /* Three cmds: write/erase enable, write data, write/erase disable */
- cmd_addr_data[0] = op_ewen << (addr_len - 2);
- cmd_addr_data[1] = (((op_write << addr_len) | addr) << 16) |
- cpu_to_le16(data);
- cmd_addr_data[2] = op_ewds << (addr_len - 2);
-
- /* Bit-bang cmds to write word to eeprom */
- for(j = 0; j < 3; j++) {
-
- /* Chip select */
- writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- for(i = 31; i >= 0; i--) {
- ctrl = (cmd_addr_data[j] & (1 << i)) ?
- eecs | eedi : eecs;
- writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
- }
- /* Wait 10 msec for cmd to complete */
- msleep(10);
-
- /* Chip deselect */
- writeb(0, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
- }
-};
-
-/* General technique stolen from the eepro100 driver - very clever */
-static u16 e100_eeprom_read(struct nic *nic, u16 *addr_len, u16 addr)
-{
- u32 cmd_addr_data;
- u16 data = 0;
- u8 ctrl;
- int i;
-
- cmd_addr_data = ((op_read << *addr_len) | addr) << 16;
-
- /* Chip select */
- writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- /* Bit-bang to read word from eeprom */
- for(i = 31; i >= 0; i--) {
- ctrl = (cmd_addr_data & (1 << i)) ? eecs | eedi : eecs;
- writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- /* Eeprom drives a dummy zero to EEDO after receiving
- * complete address. Use this to adjust addr_len. */
- ctrl = readb(&nic->csr->eeprom_ctrl_lo);
- if(!(ctrl & eedo) && i > 16) {
- *addr_len -= (i - 16);
- i = 17;
- }
-
- data = (data << 1) | (ctrl & eedo ? 1 : 0);
- }
-
- /* Chip deselect */
- writeb(0, &nic->csr->eeprom_ctrl_lo);
- e100_write_flush(nic); udelay(4);
-
- return le16_to_cpu(data);
-};
-
-/* Load entire EEPROM image into driver cache and validate checksum */
-static int e100_eeprom_load(struct nic *nic)
-{
- u16 addr, addr_len = 8, checksum = 0;
-
- /* Try reading with an 8-bit addr len to discover actual addr len */
- e100_eeprom_read(nic, &addr_len, 0);
- nic->eeprom_wc = 1 << addr_len;
-
- for(addr = 0; addr < nic->eeprom_wc; addr++) {
- nic->eeprom[addr] = e100_eeprom_read(nic, &addr_len, addr);
- if(addr < nic->eeprom_wc - 1)
- checksum += cpu_to_le16(nic->eeprom[addr]);
- }
-
- /* The checksum, stored in the last word, is calculated such that
- * the sum of words should be 0xBABA */
- checksum = le16_to_cpu(0xBABA - checksum);
- if(checksum != nic->eeprom[nic->eeprom_wc - 1]) {
- DPRINTK(PROBE, ERR, "EEPROM corrupted\n");
- if (!eeprom_bad_csum_allow)
- return -EAGAIN;
- }
-
- return 0;
-}
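/*
 * Editor's note: a minimal stand-alone illustration (not driver code, not
 * part of this patch) of the checksum rule used above and in
 * e100_eeprom_save() below: the last EEPROM word is chosen so that the
 * 16-bit sum of all words equals 0xBABA.  The word values are made up.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t eeprom[4] = { 0x1234, 0xABCD, 0x0042, 0x0000 };
	uint16_t sum = 0;
	int i;

	for (i = 0; i < 3; i++)		/* sum of all words but the last */
		sum += eeprom[i];
	eeprom[3] = 0xBABA - sum;	/* stored checksum word */

	sum = 0;
	for (i = 0; i < 4; i++)		/* now include the checksum word */
		sum += eeprom[i];
	printf("sum of all words = 0x%04X\n", sum);	/* prints 0xBABA */
	return 0;
}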
-
-/* Save (portion of) driver EEPROM cache to device and update checksum */
-static int e100_eeprom_save(struct nic *nic, u16 start, u16 count)
-{
- u16 addr, addr_len = 8, checksum = 0;
-
- /* Try reading with an 8-bit addr len to discover actual addr len */
- e100_eeprom_read(nic, &addr_len, 0);
- nic->eeprom_wc = 1 << addr_len;
-
- if(start + count >= nic->eeprom_wc)
- return -EINVAL;
-
- for(addr = start; addr < start + count; addr++)
- e100_eeprom_write(nic, addr_len, addr, nic->eeprom[addr]);
-
- /* The checksum, stored in the last word, is calculated such that
- * the sum of words should be 0xBABA */
- for(addr = 0; addr < nic->eeprom_wc - 1; addr++)
- checksum += cpu_to_le16(nic->eeprom[addr]);
- nic->eeprom[nic->eeprom_wc - 1] = le16_to_cpu(0xBABA - checksum);
- e100_eeprom_write(nic, addr_len, nic->eeprom_wc - 1,
- nic->eeprom[nic->eeprom_wc - 1]);
-
- return 0;
-}
-
-#define E100_WAIT_SCB_TIMEOUT 20000 /* we might have to wait 100ms!!! */
-#define E100_WAIT_SCB_FAST 20 /* delay like the old code */
-static int e100_exec_cmd(struct nic *nic, u8 cmd, dma_addr_t dma_addr)
-{
- unsigned long flags;
- unsigned int i;
- int err = 0;
-
- spin_lock_irqsave(&nic->cmd_lock, flags);
-
- /* Previous command is accepted when SCB clears */
- for(i = 0; i < E100_WAIT_SCB_TIMEOUT; i++) {
- if(likely(!readb(&nic->csr->scb.cmd_lo)))
- break;
- cpu_relax();
- if(unlikely(i > E100_WAIT_SCB_FAST))
- udelay(5);
- }
- if(unlikely(i == E100_WAIT_SCB_TIMEOUT)) {
- err = -EAGAIN;
- goto err_unlock;
- }
-
- if(unlikely(cmd != cuc_resume))
- writel(dma_addr, &nic->csr->scb.gen_ptr);
- writeb(cmd, &nic->csr->scb.cmd_lo);
-
-err_unlock:
- spin_unlock_irqrestore(&nic->cmd_lock, flags);
-
- return err;
-}
-
-static int e100_exec_cb(struct nic *nic, struct sk_buff *skb,
- void (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *))
-{
- struct cb *cb;
- unsigned long flags;
- int err = 0;
-
- spin_lock_irqsave(&nic->cb_lock, flags);
-
- if(unlikely(!nic->cbs_avail)) {
- err = -ENOMEM;
- goto err_unlock;
- }
-
- cb = nic->cb_to_use;
- nic->cb_to_use = cb->next;
- nic->cbs_avail--;
- cb->skb = skb;
-
- if(unlikely(!nic->cbs_avail))
- err = -ENOSPC;
-
- cb_prepare(nic, cb, skb);
-
- /* Order is important otherwise we'll be in a race with h/w:
- * set S-bit in current first, then clear S-bit in previous. */
- cb->command |= cpu_to_le16(cb_s);
- wmb();
- cb->prev->command &= cpu_to_le16(~cb_s);
-
- while(nic->cb_to_send != nic->cb_to_use) {
- if(unlikely(e100_exec_cmd(nic, nic->cuc_cmd,
- nic->cb_to_send->dma_addr))) {
- /* Ok, here's where things get sticky. It's
- * possible that we can't schedule the command
- * because the controller is too busy, so
- * let's just queue the command and try again
- * when another command is scheduled. */
- if(err == -ENOSPC) {
- //request a reset
- schedule_work(&nic->tx_timeout_task);
- }
- break;
- } else {
- nic->cuc_cmd = cuc_resume;
- nic->cb_to_send = nic->cb_to_send->next;
- }
- }
-
-err_unlock:
- spin_unlock_irqrestore(&nic->cb_lock, flags);
-
- return err;
-}
-
-static u16 mdio_ctrl(struct nic *nic, u32 addr, u32 dir, u32 reg, u16 data)
-{
- u32 data_out = 0;
- unsigned int i;
- unsigned long flags;
-
-
- /*
- * Stratus87247: we shouldn't be writing the MDI control
- * register until the Ready bit shows True. Also, since
- * manipulation of the MDI control registers is a multi-step
- * procedure it should be done under lock.
- */
- spin_lock_irqsave(&nic->mdio_lock, flags);
- for (i = 100; i; --i) {
- if (readl(&nic->csr->mdi_ctrl) & mdi_ready)
- break;
- udelay(20);
- }
- if (unlikely(!i)) {
- printk("e100.mdio_ctrl(%s) won't go Ready\n",
- nic->netdev->name );
- spin_unlock_irqrestore(&nic->mdio_lock, flags);
- return 0; /* No way to indicate timeout error */
- }
- writel((reg << 16) | (addr << 21) | dir | data, &nic->csr->mdi_ctrl);
-
- for (i = 0; i < 100; i++) {
- udelay(20);
- if ((data_out = readl(&nic->csr->mdi_ctrl)) & mdi_ready)
- break;
- }
- spin_unlock_irqrestore(&nic->mdio_lock, flags);
- DPRINTK(HW, DEBUG,
- "%s:addr=%d, reg=%d, data_in=0x%04X, data_out=0x%04X\n",
- dir == mdi_read ? "READ" : "WRITE", addr, reg, data, data_out);
- return (u16)data_out;
-}
-
-static int mdio_read(struct net_device *netdev, int addr, int reg)
-{
- return mdio_ctrl(netdev_priv(netdev), addr, mdi_read, reg, 0);
-}
-
-static void mdio_write(struct net_device *netdev, int addr, int reg, int data)
-{
- mdio_ctrl(netdev_priv(netdev), addr, mdi_write, reg, data);
-}
-
-static void e100_get_defaults(struct nic *nic)
-{
- struct param_range rfds = { .min = 16, .max = 256, .count = 256 };
- struct param_range cbs = { .min = 64, .max = 256, .count = 128 };
-
- pci_read_config_byte(nic->pdev, PCI_REVISION_ID, &nic->rev_id);
- /* MAC type is encoded as rev ID; exception: ICH is treated as 82559 */
- nic->mac = (nic->flags & ich) ? mac_82559_D101M : nic->rev_id;
- if(nic->mac == mac_unknown)
- nic->mac = mac_82557_D100_A;
-
- nic->params.rfds = rfds;
- nic->params.cbs = cbs;
-
- /* Quadwords to DMA into FIFO before starting frame transmit */
- nic->tx_threshold = 0xE0;
-
-	/* no interrupt for every tx completion, delay = 256us if not 557 */
- nic->tx_command = cpu_to_le16(cb_tx | cb_tx_sf |
- ((nic->mac >= mac_82558_D101_A4) ? cb_cid : cb_i));
-
- /* Template for a freshly allocated RFD */
- nic->blank_rfd.command = cpu_to_le16(cb_el);
- nic->blank_rfd.rbd = 0xFFFFFFFF;
- nic->blank_rfd.size = cpu_to_le16(VLAN_ETH_FRAME_LEN);
-
- /* MII setup */
- nic->mii.phy_id_mask = 0x1F;
- nic->mii.reg_num_mask = 0x1F;
- nic->mii.dev = nic->netdev;
- nic->mii.mdio_read = mdio_read;
- nic->mii.mdio_write = mdio_write;
-}
-
-static void e100_configure(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
- struct config *config = &cb->u.config;
- u8 *c = (u8 *)config;
-
- cb->command = cpu_to_le16(cb_config);
-
- memset(config, 0, sizeof(struct config));
-
- config->byte_count = 0x16; /* bytes in this struct */
- config->rx_fifo_limit = 0x8; /* bytes in FIFO before DMA */
- config->direct_rx_dma = 0x1; /* reserved */
- config->standard_tcb = 0x1; /* 1=standard, 0=extended */
- config->standard_stat_counter = 0x1; /* 1=standard, 0=extended */
- config->rx_discard_short_frames = 0x1; /* 1=discard, 0=pass */
- config->tx_underrun_retry = 0x3; /* # of underrun retries */
- config->mii_mode = 0x1; /* 1=MII mode, 0=503 mode */
- config->pad10 = 0x6;
- config->no_source_addr_insertion = 0x1; /* 1=no, 0=yes */
- config->preamble_length = 0x2; /* 0=1, 1=3, 2=7, 3=15 bytes */
- config->ifs = 0x6; /* x16 = inter frame spacing */
- config->ip_addr_hi = 0xF2; /* ARP IP filter - not used */
- config->pad15_1 = 0x1;
- config->pad15_2 = 0x1;
- config->crs_or_cdt = 0x0; /* 0=CRS only, 1=CRS or CDT */
- config->fc_delay_hi = 0x40; /* time delay for fc frame */
- config->tx_padding = 0x1; /* 1=pad short frames */
- config->fc_priority_threshold = 0x7; /* 7=priority fc disabled */
- config->pad18 = 0x1;
- config->full_duplex_pin = 0x1; /* 1=examine FDX# pin */
- config->pad20_1 = 0x1F;
- config->fc_priority_location = 0x1; /* 1=byte#31, 0=byte#19 */
- config->pad21_1 = 0x5;
-
- config->adaptive_ifs = nic->adaptive_ifs;
- config->loopback = nic->loopback;
-
- if(nic->mii.force_media && nic->mii.full_duplex)
- config->full_duplex_force = 0x1; /* 1=force, 0=auto */
-
- if(nic->flags & promiscuous || nic->loopback) {
- config->rx_save_bad_frames = 0x1; /* 1=save, 0=discard */
- config->rx_discard_short_frames = 0x0; /* 1=discard, 0=save */
- config->promiscuous_mode = 0x1; /* 1=on, 0=off */
- }
-
- if(nic->flags & multicast_all)
- config->multicast_all = 0x1; /* 1=accept, 0=no */
-
- /* disable WoL when up */
- if(netif_running(nic->netdev) || !(nic->flags & wol_magic))
- config->magic_packet_disable = 0x1; /* 1=off, 0=on */
-
- if(nic->mac >= mac_82558_D101_A4) {
- config->fc_disable = 0x1; /* 1=Tx fc off, 0=Tx fc on */
- config->mwi_enable = 0x1; /* 1=enable, 0=disable */
- config->standard_tcb = 0x0; /* 1=standard, 0=extended */
- config->rx_long_ok = 0x1; /* 1=VLANs ok, 0=standard */
- if(nic->mac >= mac_82559_D101M)
- config->tno_intr = 0x1; /* TCO stats enable */
- else
- config->standard_stat_counter = 0x0;
- }
-
- DPRINTK(HW, DEBUG, "[00-07]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- c[0], c[1], c[2], c[3], c[4], c[5], c[6], c[7]);
- DPRINTK(HW, DEBUG, "[08-15]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- c[8], c[9], c[10], c[11], c[12], c[13], c[14], c[15]);
- DPRINTK(HW, DEBUG, "[16-23]=%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- c[16], c[17], c[18], c[19], c[20], c[21], c[22], c[23]);
-}
-
-/********************************************************/
-/* Micro code for 8086:1229 Rev 8 */
-/********************************************************/
-
-/* Parameter values for the D101M B-step */
-#define D101M_CPUSAVER_TIMER_DWORD 78
-#define D101M_CPUSAVER_BUNDLE_DWORD 65
-#define D101M_CPUSAVER_MIN_SIZE_DWORD 126
-
-#define D101M_B_RCVBUNDLE_UCODE \
-{\
-0x00550215, 0xFFFF0437, 0xFFFFFFFF, 0x06A70789, 0xFFFFFFFF, 0x0558FFFF, \
-0x000C0001, 0x00101312, 0x000C0008, 0x00380216, \
-0x0010009C, 0x00204056, 0x002380CC, 0x00380056, \
-0x0010009C, 0x00244C0B, 0x00000800, 0x00124818, \
-0x00380438, 0x00000000, 0x00140000, 0x00380555, \
-0x00308000, 0x00100662, 0x00100561, 0x000E0408, \
-0x00134861, 0x000C0002, 0x00103093, 0x00308000, \
-0x00100624, 0x00100561, 0x000E0408, 0x00100861, \
-0x000C007E, 0x00222C21, 0x000C0002, 0x00103093, \
-0x00380C7A, 0x00080000, 0x00103090, 0x00380C7A, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x00244C2D, 0x00010004, 0x00041000, \
-0x003A0437, 0x00044010, 0x0038078A, 0x00000000, \
-0x00100099, 0x00206C7A, 0x0010009C, 0x00244C48, \
-0x00130824, 0x000C0001, 0x00101213, 0x00260C75, \
-0x00041000, 0x00010004, 0x00130826, 0x000C0006, \
-0x002206A8, 0x0013C926, 0x00101313, 0x003806A8, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00080600, 0x00101B10, 0x00050004, 0x00100826, \
-0x00101210, 0x00380C34, 0x00000000, 0x00000000, \
-0x0021155B, 0x00100099, 0x00206559, 0x0010009C, \
-0x00244559, 0x00130836, 0x000C0000, 0x00220C62, \
-0x000C0001, 0x00101B13, 0x00229C0E, 0x00210C0E, \
-0x00226C0E, 0x00216C0E, 0x0022FC0E, 0x00215C0E, \
-0x00214C0E, 0x00380555, 0x00010004, 0x00041000, \
-0x00278C67, 0x00040800, 0x00018100, 0x003A0437, \
-0x00130826, 0x000C0001, 0x00220559, 0x00101313, \
-0x00380559, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00130831, 0x0010090B, 0x00124813, \
-0x000CFF80, 0x002606AB, 0x00041000, 0x00010004, \
-0x003806A8, 0x00000000, 0x00000000, 0x00000000, \
-}
-
-/********************************************************/
-/* Micro code for 8086:1229 Rev 9 */
-/********************************************************/
-
-/* Parameter values for the D101S */
-#define D101S_CPUSAVER_TIMER_DWORD 78
-#define D101S_CPUSAVER_BUNDLE_DWORD 67
-#define D101S_CPUSAVER_MIN_SIZE_DWORD 128
-
-#define D101S_RCVBUNDLE_UCODE \
-{\
-0x00550242, 0xFFFF047E, 0xFFFFFFFF, 0x06FF0818, 0xFFFFFFFF, 0x05A6FFFF, \
-0x000C0001, 0x00101312, 0x000C0008, 0x00380243, \
-0x0010009C, 0x00204056, 0x002380D0, 0x00380056, \
-0x0010009C, 0x00244F8B, 0x00000800, 0x00124818, \
-0x0038047F, 0x00000000, 0x00140000, 0x003805A3, \
-0x00308000, 0x00100610, 0x00100561, 0x000E0408, \
-0x00134861, 0x000C0002, 0x00103093, 0x00308000, \
-0x00100624, 0x00100561, 0x000E0408, 0x00100861, \
-0x000C007E, 0x00222FA1, 0x000C0002, 0x00103093, \
-0x00380F90, 0x00080000, 0x00103090, 0x00380F90, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x0010009C, 0x00244FAD, 0x00010004, 0x00041000, \
-0x003A047E, 0x00044010, 0x00380819, 0x00000000, \
-0x00100099, 0x00206FFD, 0x0010009A, 0x0020AFFD, \
-0x0010009C, 0x00244FC8, 0x00130824, 0x000C0001, \
-0x00101213, 0x00260FF7, 0x00041000, 0x00010004, \
-0x00130826, 0x000C0006, 0x00220700, 0x0013C926, \
-0x00101313, 0x00380700, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00080600, 0x00101B10, 0x00050004, 0x00100826, \
-0x00101210, 0x00380FB6, 0x00000000, 0x00000000, \
-0x002115A9, 0x00100099, 0x002065A7, 0x0010009A, \
-0x0020A5A7, 0x0010009C, 0x002445A7, 0x00130836, \
-0x000C0000, 0x00220FE4, 0x000C0001, 0x00101B13, \
-0x00229F8E, 0x00210F8E, 0x00226F8E, 0x00216F8E, \
-0x0022FF8E, 0x00215F8E, 0x00214F8E, 0x003805A3, \
-0x00010004, 0x00041000, 0x00278FE9, 0x00040800, \
-0x00018100, 0x003A047E, 0x00130826, 0x000C0001, \
-0x002205A7, 0x00101313, 0x003805A7, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00130831, \
-0x0010090B, 0x00124813, 0x000CFF80, 0x00260703, \
-0x00041000, 0x00010004, 0x00380700 \
-}
-
-/********************************************************/
-/* Micro code for the 8086:1229 Rev F/10 */
-/********************************************************/
-
-/* Parameter values for the D102 E-step */
-#define D102_E_CPUSAVER_TIMER_DWORD 42
-#define D102_E_CPUSAVER_BUNDLE_DWORD 54
-#define D102_E_CPUSAVER_MIN_SIZE_DWORD 46
-
-#define D102_E_RCVBUNDLE_UCODE \
-{\
-0x007D028F, 0x0E4204F9, 0x14ED0C85, 0x14FA14E9, 0x0EF70E36, 0x1FFF1FFF, \
-0x00E014B9, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014BD, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014D5, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014C1, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00E014C8, 0x00000000, 0x00000000, 0x00000000, \
-0x00200600, 0x00E014EE, 0x00000000, 0x00000000, \
-0x0030FF80, 0x00940E46, 0x00038200, 0x00102000, \
-0x00E00E43, 0x00000000, 0x00000000, 0x00000000, \
-0x00300006, 0x00E014FB, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00906E41, 0x00800E3C, 0x00E00E39, 0x00000000, \
-0x00906EFD, 0x00900EFD, 0x00E00EF8, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-0x00000000, 0x00000000, 0x00000000, 0x00000000, \
-}
-
-static void e100_setup_ucode(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
-/* *INDENT-OFF* */
- static struct {
- u32 ucode[UCODE_SIZE + 1];
- u8 mac;
- u8 timer_dword;
- u8 bundle_dword;
- u8 min_size_dword;
- } ucode_opts[] = {
- { D101M_B_RCVBUNDLE_UCODE,
- mac_82559_D101M,
- D101M_CPUSAVER_TIMER_DWORD,
- D101M_CPUSAVER_BUNDLE_DWORD,
- D101M_CPUSAVER_MIN_SIZE_DWORD },
- { D101S_RCVBUNDLE_UCODE,
- mac_82559_D101S,
- D101S_CPUSAVER_TIMER_DWORD,
- D101S_CPUSAVER_BUNDLE_DWORD,
- D101S_CPUSAVER_MIN_SIZE_DWORD },
- { D102_E_RCVBUNDLE_UCODE,
- mac_82551_F,
- D102_E_CPUSAVER_TIMER_DWORD,
- D102_E_CPUSAVER_BUNDLE_DWORD,
- D102_E_CPUSAVER_MIN_SIZE_DWORD },
- { D102_E_RCVBUNDLE_UCODE,
- mac_82551_10,
- D102_E_CPUSAVER_TIMER_DWORD,
- D102_E_CPUSAVER_BUNDLE_DWORD,
- D102_E_CPUSAVER_MIN_SIZE_DWORD },
- { {0}, 0, 0, 0, 0}
- }, *opts;
-/* *INDENT-ON* */
-
-/*************************************************************************
-* CPUSaver parameters
-*
-* All CPUSaver parameters are 16-bit literals that are part of a
-* "move immediate value" instruction. By changing the value of
-* the literal in the instruction before the code is loaded, the
-* driver can change the algorithm.
-*
-* INTDELAY - This loads the dead-man timer with its initial value.
-* When this timer expires the interrupt is asserted, and the
-* timer is reset each time a new packet is received. (see
-* BUNDLEMAX below to set the limit on number of chained packets)
-* The current default is 0x600 or 1536. Experiments show that
-* the value should probably stay within the 0x200 - 0x1000 range.
-*
-* BUNDLEMAX -
-* This sets the maximum number of frames that will be bundled. In
-* some situations, such as the TCP windowing algorithm, it may be
-* better to limit the growth of the bundle size than let it go as
-* high as it can, because that could cause too much added latency.
-* The default is six, because this is the number of packets in the
-* default TCP window size. A value of 1 would make CPUSaver indicate
-* an interrupt for every frame received. If you do not want to put
-* a limit on the bundle size, set this value to 0xFFFF.
-*
-* BUNDLESMALL -
-* This contains a bit-mask describing the minimum size frame that
-* will be bundled. The default masks the lower 7 bits, which means
-* that any frame less than 128 bytes in length will not be bundled,
-* but will instead immediately generate an interrupt. This does
-* not affect the current bundle in any way. Any frame that is 128
-* bytes or large will be bundled normally. This feature is meant
-* to provide immediate indication of ACK frames in a TCP environment.
-* Customers were seeing poor performance when a machine with CPUSaver
-* enabled was sending but not receiving. The delay introduced when
-* the ACKs were received was enough to reduce total throughput, because
-* the sender would sit idle until the ACK was finally seen.
-*
-* The current default is 0xFF80, which masks out the lower 7 bits.
-* This means that any frame which is 0x7F (127) bytes or smaller
-* will cause an immediate interrupt. Because this value must be a
-* bit mask, there are only a few valid values that can be used. To
-* turn this feature off, the driver can write the value 0xFFFF to the
-* lower word of this instruction (in the same way that the other
-* parameters are used). Likewise, a value of 0xF800 (2047) would
-* cause an interrupt to be generated for every frame, because all
-* standard Ethernet frames are <= 2047 bytes in length.
-*************************************************************************/
-
-/* if you wish to disable the ucode functionality, while maintaining the
- * workarounds it provides, set the following defines to:
- * BUNDLESMALL 0
- * BUNDLEMAX 1
- * INTDELAY 1
- */
-#define BUNDLESMALL 1
-#define BUNDLEMAX (u16)6
-#define INTDELAY (u16)1536 /* 0x600 */
-
- /* do not load u-code for ICH devices */
- if (nic->flags & ich)
- goto noloaducode;
-
- /* Search for ucode match against h/w rev_id */
- for (opts = ucode_opts; opts->mac; opts++) {
- int i;
- u32 *ucode = opts->ucode;
- if (nic->mac != opts->mac)
- continue;
-
- /* Insert user-tunable settings */
- ucode[opts->timer_dword] &= 0xFFFF0000;
- ucode[opts->timer_dword] |= INTDELAY;
- ucode[opts->bundle_dword] &= 0xFFFF0000;
- ucode[opts->bundle_dword] |= BUNDLEMAX;
- ucode[opts->min_size_dword] &= 0xFFFF0000;
- ucode[opts->min_size_dword] |= (BUNDLESMALL) ? 0xFFFF : 0xFF80;
-
- for (i = 0; i < UCODE_SIZE; i++)
- cb->u.ucode[i] = cpu_to_le32(ucode[i]);
- cb->command = cpu_to_le16(cb_ucode | cb_el);
- return;
- }
-
-noloaducode:
- cb->command = cpu_to_le16(cb_nop | cb_el);
-}
-
-static inline int e100_exec_cb_wait(struct nic *nic, struct sk_buff *skb,
- void (*cb_prepare)(struct nic *, struct cb *, struct sk_buff *))
-{
- int err = 0, counter = 50;
- struct cb *cb = nic->cb_to_clean;
-
- if ((err = e100_exec_cb(nic, NULL, e100_setup_ucode)))
- DPRINTK(PROBE,ERR, "ucode cmd failed with error %d\n", err);
-
- /* must restart cuc */
- nic->cuc_cmd = cuc_start;
-
- /* wait for completion */
- e100_write_flush(nic);
- udelay(10);
-
- /* wait for possibly (ouch) 500ms */
- while (!(cb->status & cpu_to_le16(cb_complete))) {
- msleep(10);
- if (!--counter) break;
- }
-
-	/* ack any interrupts, something could have been set */
- writeb(~0, &nic->csr->scb.stat_ack);
-
- /* if the command failed, or is not OK, notify and return */
- if (!counter || !(cb->status & cpu_to_le16(cb_ok))) {
- DPRINTK(PROBE,ERR, "ucode load failed\n");
- err = -EPERM;
- }
-
- return err;
-}
-
-static void e100_setup_iaaddr(struct nic *nic, struct cb *cb,
- struct sk_buff *skb)
-{
- cb->command = cpu_to_le16(cb_iaaddr);
- memcpy(cb->u.iaaddr, nic->netdev->dev_addr, ETH_ALEN);
-}
-
-static void e100_dump(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
- cb->command = cpu_to_le16(cb_dump);
- cb->u.dump_buffer_addr = cpu_to_le32(nic->dma_addr +
- offsetof(struct mem, dump_buf));
-}
-
-#define NCONFIG_AUTO_SWITCH 0x0080
-#define MII_NSC_CONG MII_RESV1
-#define NSC_CONG_ENABLE 0x0100
-#define NSC_CONG_TXREADY 0x0400
-#define ADVERTISE_FC_SUPPORTED 0x0400
-static int e100_phy_init(struct nic *nic)
-{
- struct net_device *netdev = nic->netdev;
- u32 addr;
- u16 bmcr, stat, id_lo, id_hi, cong;
-
- /* Discover phy addr by searching addrs in order {1,0,2,..., 31} */
- for(addr = 0; addr < 32; addr++) {
- nic->mii.phy_id = (addr == 0) ? 1 : (addr == 1) ? 0 : addr;
- bmcr = mdio_read(netdev, nic->mii.phy_id, MII_BMCR);
- stat = mdio_read(netdev, nic->mii.phy_id, MII_BMSR);
- stat = mdio_read(netdev, nic->mii.phy_id, MII_BMSR);
- if(!((bmcr == 0xFFFF) || ((stat == 0) && (bmcr == 0))))
- break;
- }
- DPRINTK(HW, DEBUG, "phy_addr = %d\n", nic->mii.phy_id);
- if(addr == 32)
- return -EAGAIN;
-
-	/* Select the phy and isolate the rest */
- for(addr = 0; addr < 32; addr++) {
- if(addr != nic->mii.phy_id) {
- mdio_write(netdev, addr, MII_BMCR, BMCR_ISOLATE);
- } else {
- bmcr = mdio_read(netdev, addr, MII_BMCR);
- mdio_write(netdev, addr, MII_BMCR,
- bmcr & ~BMCR_ISOLATE);
- }
- }
-
- /* Get phy ID */
- id_lo = mdio_read(netdev, nic->mii.phy_id, MII_PHYSID1);
- id_hi = mdio_read(netdev, nic->mii.phy_id, MII_PHYSID2);
- nic->phy = (u32)id_hi << 16 | (u32)id_lo;
- DPRINTK(HW, DEBUG, "phy ID = 0x%08X\n", nic->phy);
-
- /* Handle National tx phys */
-#define NCS_PHY_MODEL_MASK 0xFFF0FFFF
- if((nic->phy & NCS_PHY_MODEL_MASK) == phy_nsc_tx) {
- /* Disable congestion control */
- cong = mdio_read(netdev, nic->mii.phy_id, MII_NSC_CONG);
- cong |= NSC_CONG_TXREADY;
- cong &= ~NSC_CONG_ENABLE;
- mdio_write(netdev, nic->mii.phy_id, MII_NSC_CONG, cong);
- }
-
- if((nic->mac >= mac_82550_D102) || ((nic->flags & ich) &&
- (mdio_read(netdev, nic->mii.phy_id, MII_TPISTATUS) & 0x8000))) {
- /* enable/disable MDI/MDI-X auto-switching.
- MDI/MDI-X auto-switching is disabled for 82551ER/QM chips */
- if((nic->mac == mac_82551_E) || (nic->mac == mac_82551_F) ||
- (nic->mac == mac_82551_10) || (nic->mii.force_media) ||
- !(nic->eeprom[eeprom_cnfg_mdix] & eeprom_mdix_enabled))
- mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG, 0);
- else
- mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG, NCONFIG_AUTO_SWITCH);
- }
-
- return 0;
-}
-
-static int e100_hw_init(struct nic *nic)
-{
- int err;
-
- e100_hw_reset(nic);
-
- DPRINTK(HW, ERR, "e100_hw_init\n");
- if(!in_interrupt() && (err = e100_self_test(nic)))
- return err;
-
- if((err = e100_phy_init(nic)))
- return err;
- if((err = e100_exec_cmd(nic, cuc_load_base, 0)))
- return err;
- if((err = e100_exec_cmd(nic, ruc_load_base, 0)))
- return err;
- if ((err = e100_exec_cb_wait(nic, NULL, e100_setup_ucode)))
- return err;
- if((err = e100_exec_cb(nic, NULL, e100_configure)))
- return err;
- if((err = e100_exec_cb(nic, NULL, e100_setup_iaaddr)))
- return err;
- if((err = e100_exec_cmd(nic, cuc_dump_addr,
- nic->dma_addr + offsetof(struct mem, stats))))
- return err;
- if((err = e100_exec_cmd(nic, cuc_dump_reset, 0)))
- return err;
-
- e100_disable_irq(nic);
-
- return 0;
-}
-
-static void e100_multi(struct nic *nic, struct cb *cb, struct sk_buff *skb)
-{
- struct net_device *netdev = nic->netdev;
- struct dev_mc_list *list = netdev->mc_list;
- u16 i, count = min(netdev->mc_count, E100_MAX_MULTICAST_ADDRS);
-
- cb->command = cpu_to_le16(cb_multi);
- cb->u.multi.count = cpu_to_le16(count * ETH_ALEN);
- for(i = 0; list && i < count; i++, list = list->next)
- memcpy(&cb->u.multi.addr[i*ETH_ALEN], &list->dmi_addr,
- ETH_ALEN);
-}
-
-static void e100_set_multicast_list(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- DPRINTK(HW, DEBUG, "mc_count=%d, flags=0x%04X\n",
- netdev->mc_count, netdev->flags);
-
- if(netdev->flags & IFF_PROMISC)
- nic->flags |= promiscuous;
- else
- nic->flags &= ~promiscuous;
-
- if(netdev->flags & IFF_ALLMULTI ||
- netdev->mc_count > E100_MAX_MULTICAST_ADDRS)
- nic->flags |= multicast_all;
- else
- nic->flags &= ~multicast_all;
-
- e100_exec_cb(nic, NULL, e100_configure);
- e100_exec_cb(nic, NULL, e100_multi);
-}
-
-static void e100_update_stats(struct nic *nic)
-{
- struct net_device_stats *ns = &nic->net_stats;
- struct stats *s = &nic->mem->stats;
- u32 *complete = (nic->mac < mac_82558_D101_A4) ? &s->fc_xmt_pause :
- (nic->mac < mac_82559_D101M) ? (u32 *)&s->xmt_tco_frames :
- &s->complete;
-
- /* Device's stats reporting may take several microseconds to
-	 * complete, so we're always waiting for results of the
- * previous command. */
-
- if(*complete == le32_to_cpu(cuc_dump_reset_complete)) {
- *complete = 0;
- nic->tx_frames = le32_to_cpu(s->tx_good_frames);
- nic->tx_collisions = le32_to_cpu(s->tx_total_collisions);
- ns->tx_aborted_errors += le32_to_cpu(s->tx_max_collisions);
- ns->tx_window_errors += le32_to_cpu(s->tx_late_collisions);
- ns->tx_carrier_errors += le32_to_cpu(s->tx_lost_crs);
- ns->tx_fifo_errors += le32_to_cpu(s->tx_underruns);
- ns->collisions += nic->tx_collisions;
- ns->tx_errors += le32_to_cpu(s->tx_max_collisions) +
- le32_to_cpu(s->tx_lost_crs);
- ns->rx_length_errors += le32_to_cpu(s->rx_short_frame_errors) +
- nic->rx_over_length_errors;
- ns->rx_crc_errors += le32_to_cpu(s->rx_crc_errors);
- ns->rx_frame_errors += le32_to_cpu(s->rx_alignment_errors);
- ns->rx_over_errors += le32_to_cpu(s->rx_overrun_errors);
- ns->rx_fifo_errors += le32_to_cpu(s->rx_overrun_errors);
- ns->rx_missed_errors += le32_to_cpu(s->rx_resource_errors);
- ns->rx_errors += le32_to_cpu(s->rx_crc_errors) +
- le32_to_cpu(s->rx_alignment_errors) +
- le32_to_cpu(s->rx_short_frame_errors) +
- le32_to_cpu(s->rx_cdt_errors);
- nic->tx_deferred += le32_to_cpu(s->tx_deferred);
- nic->tx_single_collisions +=
- le32_to_cpu(s->tx_single_collisions);
- nic->tx_multiple_collisions +=
- le32_to_cpu(s->tx_multiple_collisions);
- if(nic->mac >= mac_82558_D101_A4) {
- nic->tx_fc_pause += le32_to_cpu(s->fc_xmt_pause);
- nic->rx_fc_pause += le32_to_cpu(s->fc_rcv_pause);
- nic->rx_fc_unsupported +=
- le32_to_cpu(s->fc_rcv_unsupported);
- if(nic->mac >= mac_82559_D101M) {
- nic->tx_tco_frames +=
- le16_to_cpu(s->xmt_tco_frames);
- nic->rx_tco_frames +=
- le16_to_cpu(s->rcv_tco_frames);
- }
- }
- }
-
-
- if(e100_exec_cmd(nic, cuc_dump_reset, 0))
- DPRINTK(TX_ERR, DEBUG, "exec cuc_dump_reset failed\n");
-}
-
-static void e100_adjust_adaptive_ifs(struct nic *nic, int speed, int duplex)
-{
- /* Adjust inter-frame-spacing (IFS) between two transmits if
- * we're getting collisions on a half-duplex connection. */
-
- if(duplex == DUPLEX_HALF) {
- u32 prev = nic->adaptive_ifs;
- u32 min_frames = (speed == SPEED_100) ? 1000 : 100;
-
- if((nic->tx_frames / 32 < nic->tx_collisions) &&
- (nic->tx_frames > min_frames)) {
- if(nic->adaptive_ifs < 60)
- nic->adaptive_ifs += 5;
- } else if (nic->tx_frames < min_frames) {
- if(nic->adaptive_ifs >= 5)
- nic->adaptive_ifs -= 5;
- }
- if(nic->adaptive_ifs != prev)
- e100_exec_cb(nic, NULL, e100_configure);
- }
-}
-
-static void e100_watchdog(unsigned long data)
-{
- struct nic *nic = (struct nic *)data;
- struct ethtool_cmd cmd;
-
- DPRINTK(TIMER, DEBUG, "right now = %ld\n", jiffies);
-
- /* mii library handles link maintenance tasks */
-
- mii_ethtool_gset(&nic->mii, &cmd);
-
- if(mii_link_ok(&nic->mii) && !netif_carrier_ok(nic->netdev)) {
- DPRINTK(LINK, INFO, "link up, %sMbps, %s-duplex\n",
- cmd.speed == SPEED_100 ? "100" : "10",
- cmd.duplex == DUPLEX_FULL ? "full" : "half");
- } else if(!mii_link_ok(&nic->mii) && netif_carrier_ok(nic->netdev)) {
- DPRINTK(LINK, INFO, "link down\n");
- }
-
- mii_check_link(&nic->mii);
-
- /* Software generated interrupt to recover from (rare) Rx
- * allocation failure.
-	 * Unfortunately, we have to use a spinlock to not re-enable interrupts
- * accidentally, due to hardware that shares a register between the
- * interrupt mask bit and the SW Interrupt generation bit */
- spin_lock_irq(&nic->cmd_lock);
- writeb(readb(&nic->csr->scb.cmd_hi) | irq_sw_gen,&nic->csr->scb.cmd_hi);
- e100_write_flush(nic);
- spin_unlock_irq(&nic->cmd_lock);
-
- e100_update_stats(nic);
- e100_adjust_adaptive_ifs(nic, cmd.speed, cmd.duplex);
-
- if(nic->mac <= mac_82557_D100_C)
-		/* Issue a multicast command to work around a 557 lockup */
- e100_set_multicast_list(nic->netdev);
-
- if(nic->flags & ich && cmd.speed==SPEED_10 && cmd.duplex==DUPLEX_HALF)
- /* Need SW workaround for ICH[x] 10Mbps/half duplex Tx hang. */
- nic->flags |= ich_10h_workaround;
- else
- nic->flags &= ~ich_10h_workaround;
-
- mod_timer(&nic->watchdog, jiffies + E100_WATCHDOG_PERIOD);
-}
-
-static void e100_xmit_prepare(struct nic *nic, struct cb *cb,
- struct sk_buff *skb)
-{
- cb->command = nic->tx_command;
- /* interrupt every 16 packets regardless of delay */
- if((nic->cbs_avail & ~15) == nic->cbs_avail)
- cb->command |= cpu_to_le16(cb_i);
- cb->u.tcb.tbd_array = cb->dma_addr + offsetof(struct cb, u.tcb.tbd);
- cb->u.tcb.tcb_byte_count = 0;
- cb->u.tcb.threshold = nic->tx_threshold;
- cb->u.tcb.tbd_count = 1;
- cb->u.tcb.tbd.buf_addr = cpu_to_le32(pci_map_single(nic->pdev,
- skb->data, skb->len, PCI_DMA_TODEVICE));
- /* check for mapping failure? */
- cb->u.tcb.tbd.size = cpu_to_le16(skb->len);
-}
-
-static int e100_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- int err;
-
- if(nic->flags & ich_10h_workaround) {
- /* SW workaround for ICH[x] 10Mbps/half duplex Tx hang.
- Issue a NOP command followed by a 1us delay before
- issuing the Tx command. */
- if(e100_exec_cmd(nic, cuc_nop, 0))
- DPRINTK(TX_ERR, DEBUG, "exec cuc_nop failed\n");
- udelay(1);
- }
-
- err = e100_exec_cb(nic, skb, e100_xmit_prepare);
-
- switch(err) {
- case -ENOSPC:
- /* We queued the skb, but now we're out of space. */
- DPRINTK(TX_ERR, DEBUG, "No space for CB\n");
- netif_stop_queue(netdev);
- break;
- case -ENOMEM:
- /* This is a hard error - log it. */
- DPRINTK(TX_ERR, DEBUG, "Out of Tx resources, returning skb\n");
- netif_stop_queue(netdev);
- return 1;
- }
-
- netdev->trans_start = jiffies;
- return 0;
-}
-
-static int e100_tx_clean(struct nic *nic)
-{
- struct cb *cb;
- int tx_cleaned = 0;
-
- spin_lock(&nic->cb_lock);
-
- DPRINTK(TX_DONE, DEBUG, "cb->status = 0x%04X\n",
- nic->cb_to_clean->status);
-
- /* Clean CBs marked complete */
- for(cb = nic->cb_to_clean;
- cb->status & cpu_to_le16(cb_complete);
- cb = nic->cb_to_clean = cb->next) {
- if(likely(cb->skb != NULL)) {
- nic->net_stats.tx_packets++;
- nic->net_stats.tx_bytes += cb->skb->len;
-
- pci_unmap_single(nic->pdev,
- le32_to_cpu(cb->u.tcb.tbd.buf_addr),
- le16_to_cpu(cb->u.tcb.tbd.size),
- PCI_DMA_TODEVICE);
- dev_kfree_skb_any(cb->skb);
- cb->skb = NULL;
- tx_cleaned = 1;
- }
- cb->status = 0;
- nic->cbs_avail++;
- }
-
- spin_unlock(&nic->cb_lock);
-
- /* Recover from running out of Tx resources in xmit_frame */
- if(unlikely(tx_cleaned && netif_queue_stopped(nic->netdev)))
- netif_wake_queue(nic->netdev);
-
- return tx_cleaned;
-}
-
-static void e100_clean_cbs(struct nic *nic)
-{
- if(nic->cbs) {
- while(nic->cbs_avail != nic->params.cbs.count) {
- struct cb *cb = nic->cb_to_clean;
- if(cb->skb) {
- pci_unmap_single(nic->pdev,
- le32_to_cpu(cb->u.tcb.tbd.buf_addr),
- le16_to_cpu(cb->u.tcb.tbd.size),
- PCI_DMA_TODEVICE);
- dev_kfree_skb(cb->skb);
- }
- nic->cb_to_clean = nic->cb_to_clean->next;
- nic->cbs_avail++;
- }
- pci_free_consistent(nic->pdev,
- sizeof(struct cb) * nic->params.cbs.count,
- nic->cbs, nic->cbs_dma_addr);
- nic->cbs = NULL;
- nic->cbs_avail = 0;
- }
- nic->cuc_cmd = cuc_start;
- nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean =
- nic->cbs;
-}
-
-static int e100_alloc_cbs(struct nic *nic)
-{
- struct cb *cb;
- unsigned int i, count = nic->params.cbs.count;
-
- nic->cuc_cmd = cuc_start;
- nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean = NULL;
- nic->cbs_avail = 0;
-
- nic->cbs = pci_alloc_consistent(nic->pdev,
- sizeof(struct cb) * count, &nic->cbs_dma_addr);
- if(!nic->cbs)
- return -ENOMEM;
-
- for(cb = nic->cbs, i = 0; i < count; cb++, i++) {
- cb->next = (i + 1 < count) ? cb + 1 : nic->cbs;
- cb->prev = (i == 0) ? nic->cbs + count - 1 : cb - 1;
-
- cb->dma_addr = nic->cbs_dma_addr + i * sizeof(struct cb);
- cb->link = cpu_to_le32(nic->cbs_dma_addr +
- ((i+1) % count) * sizeof(struct cb));
- cb->skb = NULL;
- }
-
- nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean = nic->cbs;
- nic->cbs_avail = count;
-
- return 0;
-}
-
-static inline void e100_start_receiver(struct nic *nic, struct rx *rx)
-{
- if(!nic->rxs) return;
- if(RU_SUSPENDED != nic->ru_running) return;
-
- /* handle init time starts */
- if(!rx) rx = nic->rxs;
-
- /* (Re)start RU if suspended or idle and RFA is non-NULL */
- if(rx->skb) {
- e100_exec_cmd(nic, ruc_start, rx->dma_addr);
- nic->ru_running = RU_RUNNING;
- }
-}
-
-#define RFD_BUF_LEN (sizeof(struct rfd) + VLAN_ETH_FRAME_LEN)
-static int e100_rx_alloc_skb(struct nic *nic, struct rx *rx)
-{
- if(!(rx->skb = dev_alloc_skb(RFD_BUF_LEN + NET_IP_ALIGN)))
- return -ENOMEM;
-
- /* Align, init, and map the RFD. */
- rx->skb->dev = nic->netdev;
- skb_reserve(rx->skb, NET_IP_ALIGN);
- memcpy(rx->skb->data, &nic->blank_rfd, sizeof(struct rfd));
- rx->dma_addr = pci_map_single(nic->pdev, rx->skb->data,
- RFD_BUF_LEN, PCI_DMA_BIDIRECTIONAL);
-
- if(pci_dma_mapping_error(rx->dma_addr)) {
- dev_kfree_skb_any(rx->skb);
- rx->skb = NULL;
- rx->dma_addr = 0;
- return -ENOMEM;
- }
-
- /* Link the RFD to end of RFA by linking previous RFD to
- * this one, and clearing EL bit of previous. */
- if(rx->prev->skb) {
- struct rfd *prev_rfd = (struct rfd *)rx->prev->skb->data;
- put_unaligned(cpu_to_le32(rx->dma_addr),
- (u32 *)&prev_rfd->link);
- wmb();
- prev_rfd->command &= ~cpu_to_le16(cb_el);
- pci_dma_sync_single_for_device(nic->pdev, rx->prev->dma_addr,
- sizeof(struct rfd), PCI_DMA_TODEVICE);
- }
-
- return 0;
-}
-
-static int e100_rx_indicate(struct nic *nic, struct rx *rx,
- unsigned int *work_done, unsigned int work_to_do)
-{
- struct sk_buff *skb = rx->skb;
- struct rfd *rfd = (struct rfd *)skb->data;
- u16 rfd_status, actual_size;
-
- if(unlikely(work_done && *work_done >= work_to_do))
- return -EAGAIN;
-
- /* Need to sync before taking a peek at cb_complete bit */
- pci_dma_sync_single_for_cpu(nic->pdev, rx->dma_addr,
- sizeof(struct rfd), PCI_DMA_FROMDEVICE);
- rfd_status = le16_to_cpu(rfd->status);
-
- DPRINTK(RX_STATUS, DEBUG, "status=0x%04X\n", rfd_status);
-
- /* If data isn't ready, nothing to indicate */
- if(unlikely(!(rfd_status & cb_complete)))
- return -ENODATA;
-
- /* Get actual data size */
- actual_size = le16_to_cpu(rfd->actual_size) & 0x3FFF;
- if(unlikely(actual_size > RFD_BUF_LEN - sizeof(struct rfd)))
- actual_size = RFD_BUF_LEN - sizeof(struct rfd);
-
- /* Get data */
- pci_unmap_single(nic->pdev, rx->dma_addr,
- RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
-
- /* this allows for a fast restart without re-enabling interrupts */
- if(le16_to_cpu(rfd->command) & cb_el)
- nic->ru_running = RU_SUSPENDED;
-
- /* Pull off the RFD and put the actual data (minus eth hdr) */
- skb_reserve(skb, sizeof(struct rfd));
- skb_put(skb, actual_size);
- skb->protocol = eth_type_trans(skb, nic->netdev);
-
- if(unlikely(!(rfd_status & cb_ok))) {
- /* Don't indicate if hardware indicates errors */
- dev_kfree_skb_any(skb);
- } else if(actual_size > ETH_DATA_LEN + VLAN_ETH_HLEN) {
- /* Don't indicate oversized frames */
- nic->rx_over_length_errors++;
- dev_kfree_skb_any(skb);
- } else {
- nic->net_stats.rx_packets++;
- nic->net_stats.rx_bytes += actual_size;
- nic->netdev->last_rx = jiffies;
- netif_receive_skb(skb);
- if(work_done)
- (*work_done)++;
- }
-
- rx->skb = NULL;
-
- return 0;
-}
-
-static void e100_rx_clean(struct nic *nic, unsigned int *work_done,
- unsigned int work_to_do)
-{
- struct rx *rx;
- int restart_required = 0;
- struct rx *rx_to_start = NULL;
-
-	/* Are we already in RNR? Then pay attention: this ensures that
- * the state machine progression never allows a start with a
- * partially cleaned list, avoiding a race between hardware
- * and rx_to_clean when in NAPI mode */
- if(RU_SUSPENDED == nic->ru_running)
- restart_required = 1;
-
- /* Indicate newly arrived packets */
- for(rx = nic->rx_to_clean; rx->skb; rx = nic->rx_to_clean = rx->next) {
- int err = e100_rx_indicate(nic, rx, work_done, work_to_do);
- if(-EAGAIN == err) {
- /* hit quota so have more work to do, restart once
- * cleanup is complete */
- restart_required = 0;
- break;
- } else if(-ENODATA == err)
- break; /* No more to clean */
- }
-
- /* save our starting point as the place we'll restart the receiver */
- if(restart_required)
- rx_to_start = nic->rx_to_clean;
-
- /* Alloc new skbs to refill list */
- for(rx = nic->rx_to_use; !rx->skb; rx = nic->rx_to_use = rx->next) {
- if(unlikely(e100_rx_alloc_skb(nic, rx)))
- break; /* Better luck next time (see watchdog) */
- }
-
- if(restart_required) {
- // ack the rnr?
- writeb(stat_ack_rnr, &nic->csr->scb.stat_ack);
- e100_start_receiver(nic, rx_to_start);
- if(work_done)
- (*work_done)++;
- }
-}
-
-static void e100_rx_clean_list(struct nic *nic)
-{
- struct rx *rx;
- unsigned int i, count = nic->params.rfds.count;
-
- nic->ru_running = RU_UNINITIALIZED;
-
- if(nic->rxs) {
- for(rx = nic->rxs, i = 0; i < count; rx++, i++) {
- if(rx->skb) {
- pci_unmap_single(nic->pdev, rx->dma_addr,
- RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
- dev_kfree_skb(rx->skb);
- }
- }
- kfree(nic->rxs);
- nic->rxs = NULL;
- }
-
- nic->rx_to_use = nic->rx_to_clean = NULL;
-}
-
-static int e100_rx_alloc_list(struct nic *nic)
-{
- struct rx *rx;
- unsigned int i, count = nic->params.rfds.count;
-
- nic->rx_to_use = nic->rx_to_clean = NULL;
- nic->ru_running = RU_UNINITIALIZED;
-
- if(!(nic->rxs = kmalloc(sizeof(struct rx) * count, GFP_ATOMIC)))
- return -ENOMEM;
- memset(nic->rxs, 0, sizeof(struct rx) * count);
-
- for(rx = nic->rxs, i = 0; i < count; rx++, i++) {
- rx->next = (i + 1 < count) ? rx + 1 : nic->rxs;
- rx->prev = (i == 0) ? nic->rxs + count - 1 : rx - 1;
- if(e100_rx_alloc_skb(nic, rx)) {
- e100_rx_clean_list(nic);
- return -ENOMEM;
- }
- }
-
- nic->rx_to_use = nic->rx_to_clean = nic->rxs;
- nic->ru_running = RU_SUSPENDED;
-
- return 0;
-}
-
-static irqreturn_t e100_intr(int irq, void *dev_id, struct pt_regs *regs)
-{
- struct net_device *netdev = dev_id;
- struct nic *nic = netdev_priv(netdev);
- u8 stat_ack = readb(&nic->csr->scb.stat_ack);
-
- DPRINTK(INTR, DEBUG, "stat_ack = 0x%02X\n", stat_ack);
-
- if(stat_ack == stat_ack_not_ours || /* Not our interrupt */
- stat_ack == stat_ack_not_present) /* Hardware is ejected */
- return IRQ_NONE;
-
- /* Ack interrupt(s) */
- writeb(stat_ack, &nic->csr->scb.stat_ack);
-
- /* We hit Receive No Resource (RNR); restart RU after cleaning */
- if(stat_ack & stat_ack_rnr)
- nic->ru_running = RU_SUSPENDED;
-
- if(likely(netif_rx_schedule_prep(netdev))) {
- e100_disable_irq(nic);
- __netif_rx_schedule(netdev);
- }
-
- return IRQ_HANDLED;
-}
-
-static int e100_poll(struct net_device *netdev, int *budget)
-{
- struct nic *nic = netdev_priv(netdev);
- unsigned int work_to_do = min(netdev->quota, *budget);
- unsigned int work_done = 0;
- int tx_cleaned;
-
- e100_rx_clean(nic, &work_done, work_to_do);
- tx_cleaned = e100_tx_clean(nic);
-
- /* If no Rx and Tx cleanup work was done, exit polling mode. */
- if((!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
- netif_rx_complete(netdev);
- e100_enable_irq(nic);
- return 0;
- }
-
- *budget -= work_done;
- netdev->quota -= work_done;
-
- return 1;
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void e100_netpoll(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- e100_disable_irq(nic);
- e100_intr(nic->pdev->irq, netdev, NULL);
- e100_tx_clean(nic);
- e100_enable_irq(nic);
-}
-#endif
-
-static struct net_device_stats *e100_get_stats(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return &nic->net_stats;
-}
-
-static int e100_set_mac_address(struct net_device *netdev, void *p)
-{
- struct nic *nic = netdev_priv(netdev);
- struct sockaddr *addr = p;
-
- if (!is_valid_ether_addr(addr->sa_data))
- return -EADDRNOTAVAIL;
-
- memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
- e100_exec_cb(nic, NULL, e100_setup_iaaddr);
-
- return 0;
-}
-
-static int e100_change_mtu(struct net_device *netdev, int new_mtu)
-{
- if(new_mtu < ETH_ZLEN || new_mtu > ETH_DATA_LEN)
- return -EINVAL;
- netdev->mtu = new_mtu;
- return 0;
-}
-
-#ifdef CONFIG_PM
-static int e100_asf(struct nic *nic)
-{
- /* ASF can be enabled from eeprom */
- return((nic->pdev->device >= 0x1050) && (nic->pdev->device <= 0x1057) &&
- (nic->eeprom[eeprom_config_asf] & eeprom_asf) &&
- !(nic->eeprom[eeprom_config_asf] & eeprom_gcl) &&
- ((nic->eeprom[eeprom_smbus_addr] & 0xFF) != 0xFE));
-}
-#endif
-
-static int e100_up(struct nic *nic)
-{
- int err;
-
- if((err = e100_rx_alloc_list(nic)))
- return err;
- if((err = e100_alloc_cbs(nic)))
- goto err_rx_clean_list;
- if((err = e100_hw_init(nic)))
- goto err_clean_cbs;
- e100_set_multicast_list(nic->netdev);
- e100_start_receiver(nic, NULL);
- mod_timer(&nic->watchdog, jiffies);
- if((err = request_irq(nic->pdev->irq, e100_intr, IRQF_SHARED,
- nic->netdev->name, nic->netdev)))
- goto err_no_irq;
- netif_wake_queue(nic->netdev);
- netif_poll_enable(nic->netdev);
- /* enable ints _after_ enabling poll, preventing a race between
- * disable ints+schedule */
- e100_enable_irq(nic);
- return 0;
-
-err_no_irq:
- del_timer_sync(&nic->watchdog);
-err_clean_cbs:
- e100_clean_cbs(nic);
-err_rx_clean_list:
- e100_rx_clean_list(nic);
- return err;
-}
-
-static void e100_down(struct nic *nic)
-{
- /* wait here for poll to complete */
- netif_poll_disable(nic->netdev);
- netif_stop_queue(nic->netdev);
- e100_hw_reset(nic);
- free_irq(nic->pdev->irq, nic->netdev);
- del_timer_sync(&nic->watchdog);
- netif_carrier_off(nic->netdev);
- e100_clean_cbs(nic);
- e100_rx_clean_list(nic);
-}
-
-static void e100_tx_timeout(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- /* Reset outside of interrupt context, to avoid request_irq
- * in interrupt context */
- schedule_work(&nic->tx_timeout_task);
-}
-
-static void e100_tx_timeout_task(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-
- DPRINTK(TX_ERR, DEBUG, "scb.status=0x%02X\n",
- readb(&nic->csr->scb.status));
- e100_down(netdev_priv(netdev));
- e100_up(netdev_priv(netdev));
-}
-
-static int e100_loopback_test(struct nic *nic, enum loopback loopback_mode)
-{
- int err;
- struct sk_buff *skb;
-
- /* Use driver resources to perform internal MAC or PHY
- * loopback test. A single packet is prepared and transmitted
- * in loopback mode, and the test passes if the received
- * packet compares byte-for-byte to the transmitted packet. */
-
- if((err = e100_rx_alloc_list(nic)))
- return err;
- if((err = e100_alloc_cbs(nic)))
- goto err_clean_rx;
-
- /* ICH PHY loopback is broken so do MAC loopback instead */
- if(nic->flags & ich && loopback_mode == lb_phy)
- loopback_mode = lb_mac;
-
- nic->loopback = loopback_mode;
- if((err = e100_hw_init(nic)))
- goto err_loopback_none;
-
- if(loopback_mode == lb_phy)
- mdio_write(nic->netdev, nic->mii.phy_id, MII_BMCR,
- BMCR_LOOPBACK);
-
- e100_start_receiver(nic, NULL);
-
- if(!(skb = dev_alloc_skb(ETH_DATA_LEN))) {
- err = -ENOMEM;
- goto err_loopback_none;
- }
- skb_put(skb, ETH_DATA_LEN);
- memset(skb->data, 0xFF, ETH_DATA_LEN);
- e100_xmit_frame(skb, nic->netdev);
-
- msleep(10);
-
- pci_dma_sync_single_for_cpu(nic->pdev, nic->rx_to_clean->dma_addr,
- RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
-
- if(memcmp(nic->rx_to_clean->skb->data + sizeof(struct rfd),
- skb->data, ETH_DATA_LEN))
- err = -EAGAIN;
-
-err_loopback_none:
- mdio_write(nic->netdev, nic->mii.phy_id, MII_BMCR, 0);
- nic->loopback = lb_none;
- e100_clean_cbs(nic);
- e100_hw_reset(nic);
-err_clean_rx:
- e100_rx_clean_list(nic);
- return err;
-}
-
-#define MII_LED_CONTROL 0x1B
-static void e100_blink_led(unsigned long data)
-{
- struct nic *nic = (struct nic *)data;
- enum led_state {
- led_on = 0x01,
- led_off = 0x04,
- led_on_559 = 0x05,
- led_on_557 = 0x07,
- };
-
- nic->leds = (nic->leds & led_on) ? led_off :
- (nic->mac < mac_82559_D101M) ? led_on_557 : led_on_559;
- mdio_write(nic->netdev, nic->mii.phy_id, MII_LED_CONTROL, nic->leds);
- mod_timer(&nic->blink_timer, jiffies + HZ / 4);
-}
-
-static int e100_get_settings(struct net_device *netdev, struct ethtool_cmd *cmd)
-{
- struct nic *nic = netdev_priv(netdev);
- return mii_ethtool_gset(&nic->mii, cmd);
-}
-
-static int e100_set_settings(struct net_device *netdev, struct ethtool_cmd *cmd)
-{
- struct nic *nic = netdev_priv(netdev);
- int err;
-
- mdio_write(netdev, nic->mii.phy_id, MII_BMCR, BMCR_RESET);
- err = mii_ethtool_sset(&nic->mii, cmd);
- e100_exec_cb(nic, NULL, e100_configure);
-
- return err;
-}
-
-static void e100_get_drvinfo(struct net_device *netdev,
- struct ethtool_drvinfo *info)
-{
- struct nic *nic = netdev_priv(netdev);
- strcpy(info->driver, DRV_NAME);
- strcpy(info->version, DRV_VERSION);
- strcpy(info->fw_version, "N/A");
- strcpy(info->bus_info, pci_name(nic->pdev));
-}
-
-static int e100_get_regs_len(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
-#define E100_PHY_REGS 0x1C
-#define E100_REGS_LEN 1 + E100_PHY_REGS + \
- sizeof(nic->mem->dump_buf) / sizeof(u32)
- return E100_REGS_LEN * sizeof(u32);
-}
-
-static void e100_get_regs(struct net_device *netdev,
- struct ethtool_regs *regs, void *p)
-{
- struct nic *nic = netdev_priv(netdev);
- u32 *buff = p;
- int i;
-
- regs->version = (1 << 24) | nic->rev_id;
- buff[0] = readb(&nic->csr->scb.cmd_hi) << 24 |
- readb(&nic->csr->scb.cmd_lo) << 16 |
- readw(&nic->csr->scb.status);
- for(i = E100_PHY_REGS; i >= 0; i--)
- buff[1 + E100_PHY_REGS - i] =
- mdio_read(netdev, nic->mii.phy_id, i);
- memset(nic->mem->dump_buf, 0, sizeof(nic->mem->dump_buf));
- e100_exec_cb(nic, NULL, e100_dump);
- msleep(10);
- memcpy(&buff[2 + E100_PHY_REGS], nic->mem->dump_buf,
- sizeof(nic->mem->dump_buf));
-}
-
-static void e100_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
-{
- struct nic *nic = netdev_priv(netdev);
- wol->supported = (nic->mac >= mac_82558_D101_A4) ? WAKE_MAGIC : 0;
- wol->wolopts = (nic->flags & wol_magic) ? WAKE_MAGIC : 0;
-}
-
-static int e100_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
-{
- struct nic *nic = netdev_priv(netdev);
-
- if(wol->wolopts != WAKE_MAGIC && wol->wolopts != 0)
- return -EOPNOTSUPP;
-
- if(wol->wolopts)
- nic->flags |= wol_magic;
- else
- nic->flags &= ~wol_magic;
-
- e100_exec_cb(nic, NULL, e100_configure);
-
- return 0;
-}
-
-static u32 e100_get_msglevel(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return nic->msg_enable;
-}
-
-static void e100_set_msglevel(struct net_device *netdev, u32 value)
-{
- struct nic *nic = netdev_priv(netdev);
- nic->msg_enable = value;
-}
-
-static int e100_nway_reset(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return mii_nway_restart(&nic->mii);
-}
-
-static u32 e100_get_link(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return mii_link_ok(&nic->mii);
-}
-
-static int e100_get_eeprom_len(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- return nic->eeprom_wc << 1;
-}
-
-#define E100_EEPROM_MAGIC 0x1234
-static int e100_get_eeprom(struct net_device *netdev,
- struct ethtool_eeprom *eeprom, u8 *bytes)
-{
- struct nic *nic = netdev_priv(netdev);
-
- eeprom->magic = E100_EEPROM_MAGIC;
- memcpy(bytes, &((u8 *)nic->eeprom)[eeprom->offset], eeprom->len);
-
- return 0;
-}
-
-static int e100_set_eeprom(struct net_device *netdev,
- struct ethtool_eeprom *eeprom, u8 *bytes)
-{
- struct nic *nic = netdev_priv(netdev);
-
- if(eeprom->magic != E100_EEPROM_MAGIC)
- return -EINVAL;
-
- memcpy(&((u8 *)nic->eeprom)[eeprom->offset], bytes, eeprom->len);
-
- return e100_eeprom_save(nic, eeprom->offset >> 1,
- (eeprom->len >> 1) + 1);
-}
-
-static void e100_get_ringparam(struct net_device *netdev,
- struct ethtool_ringparam *ring)
-{
- struct nic *nic = netdev_priv(netdev);
- struct param_range *rfds = &nic->params.rfds;
- struct param_range *cbs = &nic->params.cbs;
-
- ring->rx_max_pending = rfds->max;
- ring->tx_max_pending = cbs->max;
- ring->rx_mini_max_pending = 0;
- ring->rx_jumbo_max_pending = 0;
- ring->rx_pending = rfds->count;
- ring->tx_pending = cbs->count;
- ring->rx_mini_pending = 0;
- ring->rx_jumbo_pending = 0;
-}
-
-static int e100_set_ringparam(struct net_device *netdev,
- struct ethtool_ringparam *ring)
-{
- struct nic *nic = netdev_priv(netdev);
- struct param_range *rfds = &nic->params.rfds;
- struct param_range *cbs = &nic->params.cbs;
-
- if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
- return -EINVAL;
-
- if(netif_running(netdev))
- e100_down(nic);
- rfds->count = max(ring->rx_pending, rfds->min);
- rfds->count = min(rfds->count, rfds->max);
- cbs->count = max(ring->tx_pending, cbs->min);
- cbs->count = min(cbs->count, cbs->max);
- DPRINTK(DRV, INFO, "Ring Param settings: rx: %d, tx %d\n",
- rfds->count, cbs->count);
- if(netif_running(netdev))
- e100_up(nic);
-
- return 0;
-}
-
-static const char e100_gstrings_test[][ETH_GSTRING_LEN] = {
- "Link test (on/offline)",
- "Eeprom test (on/offline)",
- "Self test (offline)",
- "Mac loopback (offline)",
- "Phy loopback (offline)",
-};
-#define E100_TEST_LEN sizeof(e100_gstrings_test) / ETH_GSTRING_LEN
-
-static int e100_diag_test_count(struct net_device *netdev)
-{
- return E100_TEST_LEN;
-}
-
-static void e100_diag_test(struct net_device *netdev,
- struct ethtool_test *test, u64 *data)
-{
- struct ethtool_cmd cmd;
- struct nic *nic = netdev_priv(netdev);
- int i, err;
-
- memset(data, 0, E100_TEST_LEN * sizeof(u64));
- data[0] = !mii_link_ok(&nic->mii);
- data[1] = e100_eeprom_load(nic);
- if(test->flags & ETH_TEST_FL_OFFLINE) {
-
- /* save speed, duplex & autoneg settings */
- err = mii_ethtool_gset(&nic->mii, &cmd);
-
- if(netif_running(netdev))
- e100_down(nic);
- data[2] = e100_self_test(nic);
- data[3] = e100_loopback_test(nic, lb_mac);
- data[4] = e100_loopback_test(nic, lb_phy);
-
- /* restore speed, duplex & autoneg settings */
- err = mii_ethtool_sset(&nic->mii, &cmd);
-
- if(netif_running(netdev))
- e100_up(nic);
- }
- for(i = 0; i < E100_TEST_LEN; i++)
- test->flags |= data[i] ? ETH_TEST_FL_FAILED : 0;
-
- msleep_interruptible(4 * 1000);
-}
-
-static int e100_phys_id(struct net_device *netdev, u32 data)
-{
- struct nic *nic = netdev_priv(netdev);
-
- if(!data || data > (u32)(MAX_SCHEDULE_TIMEOUT / HZ))
- data = (u32)(MAX_SCHEDULE_TIMEOUT / HZ);
- mod_timer(&nic->blink_timer, jiffies);
- msleep_interruptible(data * 1000);
- del_timer_sync(&nic->blink_timer);
- mdio_write(netdev, nic->mii.phy_id, MII_LED_CONTROL, 0);
-
- return 0;
-}
-
-static const char e100_gstrings_stats[][ETH_GSTRING_LEN] = {
- "rx_packets", "tx_packets", "rx_bytes", "tx_bytes", "rx_errors",
- "tx_errors", "rx_dropped", "tx_dropped", "multicast", "collisions",
- "rx_length_errors", "rx_over_errors", "rx_crc_errors",
- "rx_frame_errors", "rx_fifo_errors", "rx_missed_errors",
- "tx_aborted_errors", "tx_carrier_errors", "tx_fifo_errors",
- "tx_heartbeat_errors", "tx_window_errors",
- /* device-specific stats */
- "tx_deferred", "tx_single_collisions", "tx_multi_collisions",
- "tx_flow_control_pause", "rx_flow_control_pause",
- "rx_flow_control_unsupported", "tx_tco_packets", "rx_tco_packets",
-};
-#define E100_NET_STATS_LEN 21
-#define E100_STATS_LEN sizeof(e100_gstrings_stats) / ETH_GSTRING_LEN
-
-static int e100_get_stats_count(struct net_device *netdev)
-{
- return E100_STATS_LEN;
-}
-
-static void e100_get_ethtool_stats(struct net_device *netdev,
- struct ethtool_stats *stats, u64 *data)
-{
- struct nic *nic = netdev_priv(netdev);
- int i;
-
- for(i = 0; i < E100_NET_STATS_LEN; i++)
- data[i] = ((unsigned long *)&nic->net_stats)[i];
-
- data[i++] = nic->tx_deferred;
- data[i++] = nic->tx_single_collisions;
- data[i++] = nic->tx_multiple_collisions;
- data[i++] = nic->tx_fc_pause;
- data[i++] = nic->rx_fc_pause;
- data[i++] = nic->rx_fc_unsupported;
- data[i++] = nic->tx_tco_frames;
- data[i++] = nic->rx_tco_frames;
-}
-
-static void e100_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
-{
- switch(stringset) {
- case ETH_SS_TEST:
- memcpy(data, *e100_gstrings_test, sizeof(e100_gstrings_test));
- break;
- case ETH_SS_STATS:
- memcpy(data, *e100_gstrings_stats, sizeof(e100_gstrings_stats));
- break;
- }
-}
-
-static struct ethtool_ops e100_ethtool_ops = {
- .get_settings = e100_get_settings,
- .set_settings = e100_set_settings,
- .get_drvinfo = e100_get_drvinfo,
- .get_regs_len = e100_get_regs_len,
- .get_regs = e100_get_regs,
- .get_wol = e100_get_wol,
- .set_wol = e100_set_wol,
- .get_msglevel = e100_get_msglevel,
- .set_msglevel = e100_set_msglevel,
- .nway_reset = e100_nway_reset,
- .get_link = e100_get_link,
- .get_eeprom_len = e100_get_eeprom_len,
- .get_eeprom = e100_get_eeprom,
- .set_eeprom = e100_set_eeprom,
- .get_ringparam = e100_get_ringparam,
- .set_ringparam = e100_set_ringparam,
- .self_test_count = e100_diag_test_count,
- .self_test = e100_diag_test,
- .get_strings = e100_get_strings,
- .phys_id = e100_phys_id,
- .get_stats_count = e100_get_stats_count,
- .get_ethtool_stats = e100_get_ethtool_stats,
- .get_perm_addr = ethtool_op_get_perm_addr,
-};
-
-static int e100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
-{
- struct nic *nic = netdev_priv(netdev);
-
- return generic_mii_ioctl(&nic->mii, if_mii(ifr), cmd, NULL);
-}
-
-static int e100_alloc(struct nic *nic)
-{
- nic->mem = pci_alloc_consistent(nic->pdev, sizeof(struct mem),
- &nic->dma_addr);
- return nic->mem ? 0 : -ENOMEM;
-}
-
-static void e100_free(struct nic *nic)
-{
- if(nic->mem) {
- pci_free_consistent(nic->pdev, sizeof(struct mem),
- nic->mem, nic->dma_addr);
- nic->mem = NULL;
- }
-}
-
-static int e100_open(struct net_device *netdev)
-{
- struct nic *nic = netdev_priv(netdev);
- int err = 0;
-
- netif_carrier_off(netdev);
- if((err = e100_up(nic)))
- DPRINTK(IFUP, ERR, "Cannot open interface, aborting.\n");
- return err;
-}
-
-static int e100_close(struct net_device *netdev)
-{
- e100_down(netdev_priv(netdev));
- return 0;
-}
-
-static int __devinit e100_probe(struct pci_dev *pdev,
- const struct pci_device_id *ent)
-{
- struct net_device *netdev;
- struct nic *nic;
- int err;
-
- if(!(netdev = alloc_etherdev(sizeof(struct nic)))) {
- if(((1 << debug) - 1) & NETIF_MSG_PROBE)
- printk(KERN_ERR PFX "Etherdev alloc failed, abort.\n");
- return -ENOMEM;
- }
-
- netdev->open = e100_open;
- netdev->stop = e100_close;
- netdev->hard_start_xmit = e100_xmit_frame;
- netdev->get_stats = e100_get_stats;
- netdev->set_multicast_list = e100_set_multicast_list;
- netdev->set_mac_address = e100_set_mac_address;
- netdev->change_mtu = e100_change_mtu;
- netdev->do_ioctl = e100_do_ioctl;
- SET_ETHTOOL_OPS(netdev, &e100_ethtool_ops);
- netdev->tx_timeout = e100_tx_timeout;
- netdev->watchdog_timeo = E100_WATCHDOG_PERIOD;
- netdev->poll = e100_poll;
- netdev->weight = E100_NAPI_WEIGHT;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- netdev->poll_controller = e100_netpoll;
-#endif
- strcpy(netdev->name, pci_name(pdev));
-
- nic = netdev_priv(netdev);
- nic->netdev = netdev;
- nic->pdev = pdev;
- nic->msg_enable = (1 << debug) - 1;
- pci_set_drvdata(pdev, netdev);
-
- if((err = pci_enable_device(pdev))) {
- DPRINTK(PROBE, ERR, "Cannot enable PCI device, aborting.\n");
- goto err_out_free_dev;
- }
-
- if(!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
- DPRINTK(PROBE, ERR, "Cannot find proper PCI device "
- "base address, aborting.\n");
- err = -ENODEV;
- goto err_out_disable_pdev;
- }
-
- if((err = pci_request_regions(pdev, DRV_NAME))) {
- DPRINTK(PROBE, ERR, "Cannot obtain PCI resources, aborting.\n");
- goto err_out_disable_pdev;
- }
-
- if((err = pci_set_dma_mask(pdev, DMA_32BIT_MASK))) {
- DPRINTK(PROBE, ERR, "No usable DMA configuration, aborting.\n");
- goto err_out_free_res;
- }
-
- SET_MODULE_OWNER(netdev);
- SET_NETDEV_DEV(netdev, &pdev->dev);
-
- nic->csr = ioremap(pci_resource_start(pdev, 0), sizeof(struct csr));
- if(!nic->csr) {
- DPRINTK(PROBE, ERR, "Cannot map device registers, aborting.\n");
- err = -ENOMEM;
- goto err_out_free_res;
- }
-
- if(ent->driver_data)
- nic->flags |= ich;
- else
- nic->flags &= ~ich;
-
- e100_get_defaults(nic);
-
- /* locks must be initialized before calling hw_reset */
- spin_lock_init(&nic->cb_lock);
- spin_lock_init(&nic->cmd_lock);
- spin_lock_init(&nic->mdio_lock);
-
- /* Reset the device before pci_set_master() in case device is in some
- * funky state and has an interrupt pending - hint: we don't have the
- * interrupt handler registered yet. */
- e100_hw_reset(nic);
-
- pci_set_master(pdev);
-
- init_timer(&nic->watchdog);
- nic->watchdog.function = e100_watchdog;
- nic->watchdog.data = (unsigned long)nic;
- init_timer(&nic->blink_timer);
- nic->blink_timer.function = e100_blink_led;
- nic->blink_timer.data = (unsigned long)nic;
-
- INIT_WORK(&nic->tx_timeout_task,
- (void (*)(void *))e100_tx_timeout_task, netdev);
-
- if((err = e100_alloc(nic))) {
- DPRINTK(PROBE, ERR, "Cannot alloc driver memory, aborting.\n");
- goto err_out_iounmap;
- }
-
- if((err = e100_eeprom_load(nic)))
- goto err_out_free;
-
- e100_phy_init(nic);
-
- memcpy(netdev->dev_addr, nic->eeprom, ETH_ALEN);
- memcpy(netdev->perm_addr, nic->eeprom, ETH_ALEN);
- if(!is_valid_ether_addr(netdev->perm_addr)) {
- DPRINTK(PROBE, ERR, "Invalid MAC address from "
- "EEPROM, aborting.\n");
- err = -EAGAIN;
- goto err_out_free;
- }
-
- /* Wol magic packet can be enabled from eeprom */
- if((nic->mac >= mac_82558_D101_A4) &&
- (nic->eeprom[eeprom_id] & eeprom_id_wol))
- nic->flags |= wol_magic;
-
- /* ack any pending wake events, disable PME */
- err = pci_enable_wake(pdev, 0, 0);
- if (err)
- DPRINTK(PROBE, ERR, "Error clearing wake event\n");
-
- strcpy(netdev->name, "eth%d");
- if((err = register_netdev(netdev))) {
- DPRINTK(PROBE, ERR, "Cannot register net device, aborting.\n");
- goto err_out_free;
- }
-
- DPRINTK(PROBE, INFO, "addr 0x%llx, irq %d, "
- "MAC addr %02X:%02X:%02X:%02X:%02X:%02X\n",
- (unsigned long long)pci_resource_start(pdev, 0), pdev->irq,
- netdev->dev_addr[0], netdev->dev_addr[1], netdev->dev_addr[2],
- netdev->dev_addr[3], netdev->dev_addr[4], netdev->dev_addr[5]);
-
- return 0;
-
-err_out_free:
- e100_free(nic);
-err_out_iounmap:
- iounmap(nic->csr);
-err_out_free_res:
- pci_release_regions(pdev);
-err_out_disable_pdev:
- pci_disable_device(pdev);
-err_out_free_dev:
- pci_set_drvdata(pdev, NULL);
- free_netdev(netdev);
- return err;
-}
-
-static void __devexit e100_remove(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
-
- if(netdev) {
- struct nic *nic = netdev_priv(netdev);
- unregister_netdev(netdev);
- e100_free(nic);
- iounmap(nic->csr);
- free_netdev(netdev);
- pci_release_regions(pdev);
- pci_disable_device(pdev);
- pci_set_drvdata(pdev, NULL);
- }
-}
-
-#ifdef CONFIG_PM
-static int e100_suspend(struct pci_dev *pdev, pm_message_t state)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
- int retval;
-
- if(netif_running(netdev))
- e100_down(nic);
- e100_hw_reset(nic);
- netif_device_detach(netdev);
-
- pci_save_state(pdev);
- retval = pci_enable_wake(pdev, pci_choose_state(pdev, state),
- nic->flags & (wol_magic | e100_asf(nic)));
- if (retval)
- DPRINTK(PROBE,ERR, "Error enabling wake\n");
- pci_disable_device(pdev);
- retval = pci_set_power_state(pdev, pci_choose_state(pdev, state));
- if (retval)
- DPRINTK(PROBE,ERR, "Error %d setting power state\n", retval);
-
- return 0;
-}
-
-static int e100_resume(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
- int retval;
-
- retval = pci_set_power_state(pdev, PCI_D0);
- if (retval)
- DPRINTK(PROBE,ERR, "Error waking adapter\n");
- pci_restore_state(pdev);
- /* ack any pending wake events, disable PME */
- retval = pci_enable_wake(pdev, 0, 0);
- if (retval)
- DPRINTK(PROBE,ERR, "Error clearing wake events\n");
-
- netif_device_attach(netdev);
- if(netif_running(netdev))
- e100_up(nic);
-
- return 0;
-}
-#endif
-
-
-static void e100_shutdown(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
- int retval;
-
-#ifdef CONFIG_PM
- retval = pci_enable_wake(pdev, 0, nic->flags & (wol_magic | e100_asf(nic)));
-#else
- retval = pci_enable_wake(pdev, 0, nic->flags & (wol_magic));
-#endif
- if (retval)
- DPRINTK(PROBE,ERR, "Error enabling wake\n");
-}
-
-/* ------------------ PCI Error Recovery infrastructure -------------- */
-/**
- * e100_io_error_detected - called when PCI error is detected.
- * @pdev: Pointer to PCI device
- * @state: The current pci connection state
- */
-static pci_ers_result_t e100_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
-
- /* Similar to calling e100_down(), but avoids adapter I/O. */
- netdev->stop(netdev);
-
- /* Detach; put netif into state similar to hotplug unplug. */
- netif_poll_enable(netdev);
- netif_device_detach(netdev);
-
- /* Request a slot reset. */
- return PCI_ERS_RESULT_NEED_RESET;
-}
-
-/**
- * e100_io_slot_reset - called after the pci bus has been reset.
- * @pdev: Pointer to PCI device
- *
- * Restart the card from scratch.
- */
-static pci_ers_result_t e100_io_slot_reset(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
-
- if (pci_enable_device(pdev)) {
- printk(KERN_ERR "e100: Cannot re-enable PCI device after reset.\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
- pci_set_master(pdev);
-
- /* Only one device per card can do a reset */
- if (0 != PCI_FUNC(pdev->devfn))
- return PCI_ERS_RESULT_RECOVERED;
- e100_hw_reset(nic);
- e100_phy_init(nic);
-
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-/**
- * e100_io_resume - resume normal operations
- * @pdev: Pointer to PCI device
- *
- * Resume normal operations after an error recovery
- * sequence has been completed.
- */
-static void e100_io_resume(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct nic *nic = netdev_priv(netdev);
-
- /* ack any pending wake events, disable PME */
- pci_enable_wake(pdev, 0, 0);
-
- netif_device_attach(netdev);
- if (netif_running(netdev)) {
- e100_open(netdev);
- mod_timer(&nic->watchdog, jiffies);
- }
-}
-
-static struct pci_error_handlers e100_err_handler = {
- .error_detected = e100_io_error_detected,
- .slot_reset = e100_io_slot_reset,
- .resume = e100_io_resume,
-};
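
The three callbacks collected in e100_err_handler above implement the kernel's staged PCI error-recovery protocol: error_detected() quiesces the device and requests a slot reset, slot_reset() re-initialises the hardware once the bus has been reset, and resume() restarts normal traffic. A compact sketch of the expected call order, using simplified stand-in types (an illustration of the sequence only, not the real error-recovery core):

enum ers { ERS_NEED_RESET, ERS_RECOVERED, ERS_DISCONNECT };

struct err_handlers {
        enum ers (*error_detected)(void);   /* stop I/O, request reset */
        enum ers (*slot_reset)(void);       /* re-init after bus reset */
        void     (*resume)(void);           /* restart normal operation */
};

static void recover(const struct err_handlers *h)
{
        enum ers st = h->error_detected();
        if (st == ERS_NEED_RESET)
                st = h->slot_reset();
        if (st == ERS_RECOVERED)
                h->resume();
}

static enum ers dummy_detect(void) { return ERS_NEED_RESET; }
static enum ers dummy_reset(void)  { return ERS_RECOVERED; }
static void dummy_resume(void)     { }

int main(void)
{
        struct err_handlers h = { dummy_detect, dummy_reset, dummy_resume };
        recover(&h);
        return 0;
}
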
-
-static struct pci_driver e100_driver = {
- .name = DRV_NAME,
- .id_table = e100_id_table,
- .probe = e100_probe,
- .remove = __devexit_p(e100_remove),
-#ifdef CONFIG_PM
- .suspend = e100_suspend,
- .resume = e100_resume,
-#endif
- .shutdown = e100_shutdown,
- .err_handler = &e100_err_handler,
-};
-
-static int __init e100_init_module(void)
-{
- if(((1 << debug) - 1) & NETIF_MSG_DRV) {
- printk(KERN_INFO PFX "%s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
- printk(KERN_INFO PFX "%s\n", DRV_COPYRIGHT);
- }
- return pci_module_init(&e100_driver);
-}
-
-static void __exit e100_cleanup_module(void)
-{
- pci_unregister_driver(&e100_driver);
-}
-
-module_init(e100_init_module);
-module_exit(e100_cleanup_module);
--- a/devices/e1000/Kbuild.in Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/e1000/Kbuild.in Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
# vim: syntax=make
#
--- a/devices/e1000/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/e1000/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/devices/e1000/e1000_main-2.6.13-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/e1000/e1000_main-2.6.13-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -24,6 +24,8 @@
Linux NICS <linux.nics@intel.com>
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ vim: noexpandtab
+
*******************************************************************************/
#include "e1000-2.6.13-ethercat.h"
--- a/devices/e1000/e1000_main-2.6.18-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/e1000/e1000_main-2.6.18-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -25,6 +25,8 @@
e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ vim: noexpandtab
+
*******************************************************************************/
#include "e1000-2.6.18-ethercat.h"
@@ -465,8 +467,14 @@
* next_to_use != next_to_clean */
for (i = 0; i < adapter->num_rx_queues; i++) {
struct e1000_rx_ring *ring = &adapter->rx_ring[i];
- adapter->alloc_rx_buf(adapter, ring,
- E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring,
+ E1000_DESC_UNUSED(ring));
+ }
}
adapter->tx_queue_len = netdev->tx_queue_len;
@@ -2170,7 +2178,14 @@
/* No need to loop, because 82542 supports only 1 queue */
struct e1000_rx_ring *ring = &adapter->rx_ring[0];
e1000_configure_rx(adapter);
- adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ }
+
}
}
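
For context on the hunks above (the same change recurs in each e1000 variant in this series): E1000_DESC_UNUSED() in the stock driver reports one descriptor less than the ring can hold, so the producer index never catches up with the consumer index; that spare slot is the "last ring element unallocated" mentioned in the comment, and the EtherCAT branch instead requests ring->count buffers so the ring is filled completely. A hedged user-space paraphrase of that accounting, assuming the usual next_to_use/next_to_clean ring indices (the exact upstream macro wording may differ between kernel versions):

#include <stdio.h>

struct ring { unsigned count, next_to_use, next_to_clean; };

/* Paraphrase of E1000_DESC_UNUSED(): free slots, minus the one entry
 * held back to distinguish a full ring from an empty one. */
static unsigned desc_unused(const struct ring *r)
{
        unsigned base = (r->next_to_clean > r->next_to_use) ? 0 : r->count;
        return base + r->next_to_clean - r->next_to_use - 1;
}

int main(void)
{
        struct ring r = { .count = 256, .next_to_use = 0, .next_to_clean = 0 };
        printf("%u\n", desc_unused(&r));   /* empty 256-entry ring -> 255 */
        return 0;
}
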
--- a/devices/e1000/e1000_main-2.6.20-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/e1000/e1000_main-2.6.20-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -23,6 +23,8 @@
Linux NICS <linux.nics@intel.com>
e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+ vim: noexpandtab
*******************************************************************************/
@@ -557,8 +559,14 @@
* next_to_use != next_to_clean */
for (i = 0; i < adapter->num_rx_queues; i++) {
struct e1000_rx_ring *ring = &adapter->rx_ring[i];
- adapter->alloc_rx_buf(adapter, ring,
- E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring,
+ E1000_DESC_UNUSED(ring));
+ }
}
adapter->tx_queue_len = netdev->tx_queue_len;
@@ -2395,7 +2403,14 @@
/* No need to loop, because 82542 supports only 1 queue */
struct e1000_rx_ring *ring = &adapter->rx_ring[0];
e1000_configure_rx(adapter);
- adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ }
+
}
}
@@ -3856,11 +3871,11 @@
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
int i;
+
+ if (adapter->ecdev) {
#ifdef CONFIG_E1000_NAPI
- int ec_work_done = 0;
-#endif
-
- if (adapter->ecdev) {
+ int ec_work_done = 0;
+#endif
for (i = 0; i < E1000_MAX_INTR; i++)
#ifdef CONFIG_E1000_NAPI
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring,
@@ -3959,9 +3974,6 @@
struct e1000_hw *hw = &adapter->hw;
uint32_t rctl, icr = E1000_READ_REG(hw, ICR);
int i;
-#ifdef CONFIG_E1000_NAPI
- int ec_work_done = 0;
-#endif
if (unlikely(!icr))
return IRQ_NONE; /* Not our interrupt */
@@ -3999,6 +4011,9 @@
}
if (adapter->ecdev) {
+#ifdef CONFIG_E1000_NAPI
+ int ec_work_done = 0;
+#endif
for (i = 0; i < E1000_MAX_INTR; i++)
#ifdef CONFIG_E1000_NAPI
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring,
--- a/devices/e1000/e1000_main-2.6.22-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/e1000/e1000_main-2.6.22-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -24,6 +24,8 @@
e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ vim: noexpandtab
+
*******************************************************************************/
#include "e1000-2.6.22-ethercat.h"
@@ -540,8 +542,14 @@
* next_to_use != next_to_clean */
for (i = 0; i < adapter->num_rx_queues; i++) {
struct e1000_rx_ring *ring = &adapter->rx_ring[i];
- adapter->alloc_rx_buf(adapter, ring,
- E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring,
+ E1000_DESC_UNUSED(ring));
+ }
}
adapter->tx_queue_len = netdev->tx_queue_len;
@@ -2396,7 +2404,14 @@
/* No need to loop, because 82542 supports only 1 queue */
struct e1000_rx_ring *ring = &adapter->rx_ring[0];
e1000_configure_rx(adapter);
- adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ }
+
}
}
@@ -3836,12 +3851,12 @@
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
int i;
+ uint32_t icr = E1000_READ_REG(hw, ICR);
+
+ if (adapter->ecdev) {
#ifdef CONFIG_E1000_NAPI
- int ec_work_done = 0;
+ int ec_work_done = 0;
#endif
- uint32_t icr = E1000_READ_REG(hw, ICR);
-
- if (adapter->ecdev) {
for (i = 0; i < E1000_MAX_INTR; i++)
#ifdef CONFIG_E1000_NAPI
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring,
@@ -3916,9 +3931,6 @@
struct e1000_hw *hw = &adapter->hw;
uint32_t rctl, icr = E1000_READ_REG(hw, ICR);
int i;
-#ifdef CONFIG_E1000_NAPI
- int ec_work_done = 0;
-#endif
if (unlikely(!icr))
return IRQ_NONE; /* Not our interrupt */
@@ -3956,6 +3968,9 @@
}
if (adapter->ecdev) {
+#ifdef CONFIG_E1000_NAPI
+ int ec_work_done = 0;
+#endif
for (i = 0; i < E1000_MAX_INTR; i++)
#ifdef CONFIG_E1000_NAPI
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring,
--- a/devices/e1000/e1000_main-2.6.24-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/e1000/e1000_main-2.6.24-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -24,6 +24,8 @@
e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ vim: noexpandtab
+
*******************************************************************************/
#include "e1000-2.6.24-ethercat.h"
@@ -541,8 +543,14 @@
* next_to_use != next_to_clean */
for (i = 0; i < adapter->num_rx_queues; i++) {
struct e1000_rx_ring *ring = &adapter->rx_ring[i];
- adapter->alloc_rx_buf(adapter, ring,
- E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring,
+ E1000_DESC_UNUSED(ring));
+ }
}
adapter->tx_queue_len = netdev->tx_queue_len;
@@ -2394,7 +2402,14 @@
/* No need to loop, because 82542 supports only 1 queue */
struct e1000_rx_ring *ring = &adapter->rx_ring[0];
e1000_configure_rx(adapter);
- adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ if (adapter->ecdev) {
+ /* fill rx ring completely! */
+ adapter->alloc_rx_buf(adapter, ring, ring->count);
+ } else {
+ /* this one leaves the last ring element unallocated! */
+ adapter->alloc_rx_buf(adapter, ring, E1000_DESC_UNUSED(ring));
+ }
+
}
}
@@ -3833,10 +3848,12 @@
struct net_device *netdev = data;
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
- int ec_work_done = 0;
int i;
if (adapter->ecdev) {
+#ifdef CONFIG_E1000_NAPI
+ int ec_work_done = 0;
+#endif
for (i = 0; i < E1000_MAX_INTR; i++)
#ifdef CONFIG_E1000_NAPI
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring,
@@ -3913,7 +3930,6 @@
struct e1000_hw *hw = &adapter->hw;
uint32_t rctl, icr = E1000_READ_REG(hw, ICR);
int i;
- int ec_work_done = 0;
if (unlikely(!icr))
return IRQ_NONE; /* Not our interrupt */
@@ -3951,6 +3967,9 @@
}
if (adapter->ecdev) {
+#ifdef CONFIG_E1000_NAPI
+ int ec_work_done = 0;
+#endif
for (i = 0; i < E1000_MAX_INTR; i++)
#ifdef CONFIG_E1000_NAPI
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring,
--- a/devices/ecdev.h Mon Oct 19 14:33:59 2009 +0200
+++ b/devices/ecdev.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/devices/forcedeth-2.6.17-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,3506 +0,0 @@
-/*
- * forcedeth: Ethernet driver for NVIDIA nForce media access controllers.
- *
- * Note: This driver is a cleanroom reimplementation based on reverse
- * engineered documentation written by Carl-Daniel Hailfinger
- * and Andrew de Quincey. It's neither supported nor endorsed
- * by NVIDIA Corp. Use at your own risk.
- *
- * NVIDIA, nForce and other NVIDIA marks are trademarks or registered
- * trademarks of NVIDIA Corporation in the United States and other
- * countries.
- *
- * Copyright (C) 2003,4,5 Manfred Spraul
- * Copyright (C) 2004 Andrew de Quincey (wol support)
- * Copyright (C) 2004 Carl-Daniel Hailfinger (invalid MAC handling, insane
- * IRQ rate fixes, bigendian fixes, cleanups, verification)
- * Copyright (c) 2004 NVIDIA Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
- * Changelog:
- * 0.01: 05 Oct 2003: First release that compiles without warnings.
- * 0.02: 05 Oct 2003: Fix bug for nv_drain_tx: do not try to free NULL skbs.
- * Check all PCI BARs for the register window.
- * udelay added to mii_rw.
- * 0.03: 06 Oct 2003: Initialize dev->irq.
- * 0.04: 07 Oct 2003: Initialize np->lock, reduce handled irqs, add printks.
- * 0.05: 09 Oct 2003: printk removed again, irq status print tx_timeout.
- * 0.06: 10 Oct 2003: MAC Address read updated, pff flag generation updated,
- * irq mask updated
- * 0.07: 14 Oct 2003: Further irq mask updates.
- * 0.08: 20 Oct 2003: rx_desc.Length initialization added, nv_alloc_rx refill
- * added into irq handler, NULL check for drain_ring.
- * 0.09: 20 Oct 2003: Basic link speed irq implementation. Only handle the
- * requested interrupt sources.
- * 0.10: 20 Oct 2003: First cleanup for release.
- * 0.11: 21 Oct 2003: hexdump for tx added, rx buffer sizes increased.
- * MAC Address init fix, set_multicast cleanup.
- * 0.12: 23 Oct 2003: Cleanups for release.
- * 0.13: 25 Oct 2003: Limit for concurrent tx packets increased to 10.
- * Set link speed correctly. start rx before starting
- * tx (nv_start_rx sets the link speed).
- * 0.14: 25 Oct 2003: Nic-dependent irq mask.
- * 0.15: 08 Nov 2003: fix smp deadlock with set_multicast_list during
- * open.
- * 0.16: 15 Nov 2003: include file cleanup for ppc64, rx buffer size
- * increased to 1628 bytes.
- * 0.17: 16 Nov 2003: undo rx buffer size increase. Subtract 1 from
- * the tx length.
- * 0.18: 17 Nov 2003: fix oops due to late initialization of dev_stats
- * 0.19: 29 Nov 2003: Handle RxNoBuf, detect & handle invalid mac
- * addresses, really stop rx if already running
- * in nv_start_rx, clean up a bit.
- * 0.20: 07 Dec 2003: alloc fixes
- * 0.21: 12 Jan 2004: additional alloc fix, nic polling fix.
- * 0.22: 19 Jan 2004: reprogram timer to a sane rate, avoid lockup
- * on close.
- * 0.23: 26 Jan 2004: various small cleanups
- * 0.24: 27 Feb 2004: make driver even less anonymous in backtraces
- * 0.25: 09 Mar 2004: wol support
- * 0.26: 03 Jun 2004: netdriver specific annotation, sparse-related fixes
- * 0.27: 19 Jun 2004: Gigabit support, new descriptor rings,
- * added CK804/MCP04 device IDs, code fixes
- * for registers, link status and other minor fixes.
- * 0.28: 21 Jun 2004: Big cleanup, making driver mostly endian safe
- * 0.29: 31 Aug 2004: Add backup timer for link change notification.
- * 0.30: 25 Sep 2004: rx checksum support for nf 250 Gb. Add rx reset
- * into nv_close, otherwise reenabling for wol can
- * cause DMA to kfree'd memory.
- * 0.31: 14 Nov 2004: ethtool support for getting/setting link
- * capabilities.
- * 0.32: 16 Apr 2005: RX_ERROR4 handling added.
- * 0.33: 16 May 2005: Support for MCP51 added.
- * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics.
- * 0.35: 26 Jun 2005: Support for MCP55 added.
- * 0.36: 28 Jun 2005: Add jumbo frame support.
- * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list
- * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of
- * per-packet flags.
- * 0.39: 18 Jul 2005: Add 64bit descriptor support.
- * 0.40: 19 Jul 2005: Add support for mac address change.
- * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead
- * of nv_remove
- * 0.42: 06 Aug 2005: Fix lack of link speed initialization
- * in the second (and later) nv_open call
- * 0.43: 10 Aug 2005: Add support for tx checksum.
- * 0.44: 20 Aug 2005: Add support for scatter gather and segmentation.
- * 0.45: 18 Sep 2005: Remove nv_stop/start_rx from every link check
- * 0.46: 20 Oct 2005: Add irq optimization modes.
- * 0.47: 26 Oct 2005: Add phyaddr 0 in phy scan.
- * 0.48: 24 Dec 2005: Disable TSO, bugfix for pci_map_single
- * 0.49: 10 Dec 2005: Fix tso for large buffers.
- * 0.50: 20 Jan 2006: Add 8021pq tagging support.
- * 0.51: 20 Jan 2006: Add 64bit consistent memory allocation for rings.
- * 0.52: 20 Jan 2006: Add MSI/MSIX support.
- * 0.53: 19 Mar 2006: Fix init from low power mode and add hw reset.
- * 0.54: 21 Mar 2006: Fix spin locks for multi irqs and cleanup.
- *
- * Known bugs:
- * We suspect that on some hardware no TX done interrupts are generated.
- * This means recovery from netif_stop_queue only happens if the hw timer
- * interrupt fires (100 times/second, configurable with NVREG_POLL_DEFAULT)
- * and the timer is active in the IRQMask, or if a rx packet arrives by chance.
- * If your hardware reliably generates tx done interrupts, then you can remove
- * DEV_NEED_TIMERIRQ from the driver_data flags.
- * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few
- * superfluous timer interrupts from the nic.
- */
-#define FORCEDETH_VERSION "0.54"
-#define DRV_NAME "forcedeth"
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/delay.h>
-#include <linux/spinlock.h>
-#include <linux/ethtool.h>
-#include <linux/timer.h>
-#include <linux/skbuff.h>
-#include <linux/mii.h>
-#include <linux/random.h>
-#include <linux/init.h>
-#include <linux/if_vlan.h>
-#include <linux/dma-mapping.h>
-
-#include <asm/irq.h>
-#include <asm/io.h>
-#include <asm/uaccess.h>
-#include <asm/system.h>
-
-#include "../globals.h"
-#include "ecdev.h"
-
-#if 0
-#define dprintk printk
-#else
-#define dprintk(x...) do { } while (0)
-#endif
-
-
-/*
- * Hardware access:
- */
-
-#define DEV_NEED_TIMERIRQ 0x0001 /* set the timer irq flag in the irq mask */
-#define DEV_NEED_LINKTIMER 0x0002 /* poll link settings. Relies on the timer irq */
-#define DEV_HAS_LARGEDESC 0x0004 /* device supports jumbo frames and needs packet format 2 */
-#define DEV_HAS_HIGH_DMA 0x0008 /* device supports 64bit dma */
-#define DEV_HAS_CHECKSUM 0x0010 /* device supports tx and rx checksum offloads */
-#define DEV_HAS_VLAN 0x0020 /* device supports vlan tagging and striping */
-#define DEV_HAS_MSI 0x0040 /* device supports MSI */
-#define DEV_HAS_MSI_X 0x0080 /* device supports MSI-X */
-#define DEV_HAS_POWER_CNTRL 0x0100 /* device supports power savings */
-
-enum {
- NvRegIrqStatus = 0x000,
-#define NVREG_IRQSTAT_MIIEVENT 0x040
-#define NVREG_IRQSTAT_MASK 0x1ff
- NvRegIrqMask = 0x004,
-#define NVREG_IRQ_RX_ERROR 0x0001
-#define NVREG_IRQ_RX 0x0002
-#define NVREG_IRQ_RX_NOBUF 0x0004
-#define NVREG_IRQ_TX_ERR 0x0008
-#define NVREG_IRQ_TX_OK 0x0010
-#define NVREG_IRQ_TIMER 0x0020
-#define NVREG_IRQ_LINK 0x0040
-#define NVREG_IRQ_RX_FORCED 0x0080
-#define NVREG_IRQ_TX_FORCED 0x0100
-#define NVREG_IRQMASK_THROUGHPUT 0x00df
-#define NVREG_IRQMASK_CPU 0x0040
-#define NVREG_IRQ_TX_ALL (NVREG_IRQ_TX_ERR|NVREG_IRQ_TX_OK|NVREG_IRQ_TX_FORCED)
-#define NVREG_IRQ_RX_ALL (NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_RX_FORCED)
-#define NVREG_IRQ_OTHER (NVREG_IRQ_TIMER|NVREG_IRQ_LINK)
-
-#define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \
- NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \
- NVREG_IRQ_TX_FORCED))
-
- NvRegUnknownSetupReg6 = 0x008,
-#define NVREG_UNKSETUP6_VAL 3
-
-/*
- * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic
- * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms
- */
- NvRegPollingInterval = 0x00c,
-#define NVREG_POLL_DEFAULT_THROUGHPUT 970
-#define NVREG_POLL_DEFAULT_CPU 13
- NvRegMSIMap0 = 0x020,
- NvRegMSIMap1 = 0x024,
- NvRegMSIIrqMask = 0x030,
-#define NVREG_MSI_VECTOR_0_ENABLED 0x01
- NvRegMisc1 = 0x080,
-#define NVREG_MISC1_HD 0x02
-#define NVREG_MISC1_FORCE 0x3b0f3c
-
- NvRegMacReset = 0x3c,
-#define NVREG_MAC_RESET_ASSERT 0x0F3
- NvRegTransmitterControl = 0x084,
-#define NVREG_XMITCTL_START 0x01
- NvRegTransmitterStatus = 0x088,
-#define NVREG_XMITSTAT_BUSY 0x01
-
- NvRegPacketFilterFlags = 0x8c,
-#define NVREG_PFF_ALWAYS 0x7F0008
-#define NVREG_PFF_PROMISC 0x80
-#define NVREG_PFF_MYADDR 0x20
-
- NvRegOffloadConfig = 0x90,
-#define NVREG_OFFLOAD_HOMEPHY 0x601
-#define NVREG_OFFLOAD_NORMAL RX_NIC_BUFSIZE
- NvRegReceiverControl = 0x094,
-#define NVREG_RCVCTL_START 0x01
- NvRegReceiverStatus = 0x98,
-#define NVREG_RCVSTAT_BUSY 0x01
-
- NvRegRandomSeed = 0x9c,
-#define NVREG_RNDSEED_MASK 0x00ff
-#define NVREG_RNDSEED_FORCE 0x7f00
-#define NVREG_RNDSEED_FORCE2 0x2d00
-#define NVREG_RNDSEED_FORCE3 0x7400
-
- NvRegUnknownSetupReg1 = 0xA0,
-#define NVREG_UNKSETUP1_VAL 0x16070f
- NvRegUnknownSetupReg2 = 0xA4,
-#define NVREG_UNKSETUP2_VAL 0x16
- NvRegMacAddrA = 0xA8,
- NvRegMacAddrB = 0xAC,
- NvRegMulticastAddrA = 0xB0,
-#define NVREG_MCASTADDRA_FORCE 0x01
- NvRegMulticastAddrB = 0xB4,
- NvRegMulticastMaskA = 0xB8,
- NvRegMulticastMaskB = 0xBC,
-
- NvRegPhyInterface = 0xC0,
-#define PHY_RGMII 0x10000000
-
- NvRegTxRingPhysAddr = 0x100,
- NvRegRxRingPhysAddr = 0x104,
- NvRegRingSizes = 0x108,
-#define NVREG_RINGSZ_TXSHIFT 0
-#define NVREG_RINGSZ_RXSHIFT 16
- NvRegUnknownTransmitterReg = 0x10c,
- NvRegLinkSpeed = 0x110,
-#define NVREG_LINKSPEED_FORCE 0x10000
-#define NVREG_LINKSPEED_10 1000
-#define NVREG_LINKSPEED_100 100
-#define NVREG_LINKSPEED_1000 50
-#define NVREG_LINKSPEED_MASK (0xFFF)
- NvRegUnknownSetupReg5 = 0x130,
-#define NVREG_UNKSETUP5_BIT31 (1<<31)
- NvRegUnknownSetupReg3 = 0x13c,
-#define NVREG_UNKSETUP3_VAL1 0x200010
- NvRegTxRxControl = 0x144,
-#define NVREG_TXRXCTL_KICK 0x0001
-#define NVREG_TXRXCTL_BIT1 0x0002
-#define NVREG_TXRXCTL_BIT2 0x0004
-#define NVREG_TXRXCTL_IDLE 0x0008
-#define NVREG_TXRXCTL_RESET 0x0010
-#define NVREG_TXRXCTL_RXCHECK 0x0400
-#define NVREG_TXRXCTL_DESC_1 0
-#define NVREG_TXRXCTL_DESC_2 0x02100
-#define NVREG_TXRXCTL_DESC_3 0x02200
-#define NVREG_TXRXCTL_VLANSTRIP 0x00040
-#define NVREG_TXRXCTL_VLANINS 0x00080
- NvRegTxRingPhysAddrHigh = 0x148,
- NvRegRxRingPhysAddrHigh = 0x14C,
- NvRegMIIStatus = 0x180,
-#define NVREG_MIISTAT_ERROR 0x0001
-#define NVREG_MIISTAT_LINKCHANGE 0x0008
-#define NVREG_MIISTAT_MASK 0x000f
-#define NVREG_MIISTAT_MASK2 0x000f
- NvRegUnknownSetupReg4 = 0x184,
-#define NVREG_UNKSETUP4_VAL 8
-
- NvRegAdapterControl = 0x188,
-#define NVREG_ADAPTCTL_START 0x02
-#define NVREG_ADAPTCTL_LINKUP 0x04
-#define NVREG_ADAPTCTL_PHYVALID 0x40000
-#define NVREG_ADAPTCTL_RUNNING 0x100000
-#define NVREG_ADAPTCTL_PHYSHIFT 24
- NvRegMIISpeed = 0x18c,
-#define NVREG_MIISPEED_BIT8 (1<<8)
-#define NVREG_MIIDELAY 5
- NvRegMIIControl = 0x190,
-#define NVREG_MIICTL_INUSE 0x08000
-#define NVREG_MIICTL_WRITE 0x00400
-#define NVREG_MIICTL_ADDRSHIFT 5
- NvRegMIIData = 0x194,
- NvRegWakeUpFlags = 0x200,
-#define NVREG_WAKEUPFLAGS_VAL 0x7770
-#define NVREG_WAKEUPFLAGS_BUSYSHIFT 24
-#define NVREG_WAKEUPFLAGS_ENABLESHIFT 16
-#define NVREG_WAKEUPFLAGS_D3SHIFT 12
-#define NVREG_WAKEUPFLAGS_D2SHIFT 8
-#define NVREG_WAKEUPFLAGS_D1SHIFT 4
-#define NVREG_WAKEUPFLAGS_D0SHIFT 0
-#define NVREG_WAKEUPFLAGS_ACCEPT_MAGPAT 0x01
-#define NVREG_WAKEUPFLAGS_ACCEPT_WAKEUPPAT 0x02
-#define NVREG_WAKEUPFLAGS_ACCEPT_LINKCHANGE 0x04
-#define NVREG_WAKEUPFLAGS_ENABLE 0x1111
-
- NvRegPatternCRC = 0x204,
- NvRegPatternMask = 0x208,
- NvRegPowerCap = 0x268,
-#define NVREG_POWERCAP_D3SUPP (1<<30)
-#define NVREG_POWERCAP_D2SUPP (1<<26)
-#define NVREG_POWERCAP_D1SUPP (1<<25)
- NvRegPowerState = 0x26c,
-#define NVREG_POWERSTATE_POWEREDUP 0x8000
-#define NVREG_POWERSTATE_VALID 0x0100
-#define NVREG_POWERSTATE_MASK 0x0003
-#define NVREG_POWERSTATE_D0 0x0000
-#define NVREG_POWERSTATE_D1 0x0001
-#define NVREG_POWERSTATE_D2 0x0002
-#define NVREG_POWERSTATE_D3 0x0003
- NvRegVlanControl = 0x300,
-#define NVREG_VLANCONTROL_ENABLE 0x2000
- NvRegMSIXMap0 = 0x3e0,
- NvRegMSIXMap1 = 0x3e4,
- NvRegMSIXIrqStatus = 0x3f0,
-
- NvRegPowerState2 = 0x600,
-#define NVREG_POWERSTATE2_POWERUP_MASK 0x0F11
-#define NVREG_POWERSTATE2_POWERUP_REV_A3 0x0001
-};
-
-/* Big endian: should work, but is untested */
-struct ring_desc {
- u32 PacketBuffer;
- u32 FlagLen;
-};
-
-struct ring_desc_ex {
- u32 PacketBufferHigh;
- u32 PacketBufferLow;
- u32 TxVlan;
- u32 FlagLen;
-};
-
-typedef union _ring_type {
- struct ring_desc* orig;
- struct ring_desc_ex* ex;
-} ring_type;
-
-#define FLAG_MASK_V1 0xffff0000
-#define FLAG_MASK_V2 0xffffc000
-#define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1)
-#define LEN_MASK_V2 (0xffffffff ^ FLAG_MASK_V2)
-
-#define NV_TX_LASTPACKET (1<<16)
-#define NV_TX_RETRYERROR (1<<19)
-#define NV_TX_FORCED_INTERRUPT (1<<24)
-#define NV_TX_DEFERRED (1<<26)
-#define NV_TX_CARRIERLOST (1<<27)
-#define NV_TX_LATECOLLISION (1<<28)
-#define NV_TX_UNDERFLOW (1<<29)
-#define NV_TX_ERROR (1<<30)
-#define NV_TX_VALID (1<<31)
-
-#define NV_TX2_LASTPACKET (1<<29)
-#define NV_TX2_RETRYERROR (1<<18)
-#define NV_TX2_FORCED_INTERRUPT (1<<30)
-#define NV_TX2_DEFERRED (1<<25)
-#define NV_TX2_CARRIERLOST (1<<26)
-#define NV_TX2_LATECOLLISION (1<<27)
-#define NV_TX2_UNDERFLOW (1<<28)
-/* error and valid are the same for both */
-#define NV_TX2_ERROR (1<<30)
-#define NV_TX2_VALID (1<<31)
-#define NV_TX2_TSO (1<<28)
-#define NV_TX2_TSO_SHIFT 14
-#define NV_TX2_TSO_MAX_SHIFT 14
-#define NV_TX2_TSO_MAX_SIZE (1<<NV_TX2_TSO_MAX_SHIFT)
-#define NV_TX2_CHECKSUM_L3 (1<<27)
-#define NV_TX2_CHECKSUM_L4 (1<<26)
-
-#define NV_TX3_VLAN_TAG_PRESENT (1<<18)
-
-#define NV_RX_DESCRIPTORVALID (1<<16)
-#define NV_RX_MISSEDFRAME (1<<17)
-#define NV_RX_SUBSTRACT1 (1<<18)
-#define NV_RX_ERROR1 (1<<23)
-#define NV_RX_ERROR2 (1<<24)
-#define NV_RX_ERROR3 (1<<25)
-#define NV_RX_ERROR4 (1<<26)
-#define NV_RX_CRCERR (1<<27)
-#define NV_RX_OVERFLOW (1<<28)
-#define NV_RX_FRAMINGERR (1<<29)
-#define NV_RX_ERROR (1<<30)
-#define NV_RX_AVAIL (1<<31)
-
-#define NV_RX2_CHECKSUMMASK (0x1C000000)
-#define NV_RX2_CHECKSUMOK1 (0x10000000)
-#define NV_RX2_CHECKSUMOK2 (0x14000000)
-#define NV_RX2_CHECKSUMOK3 (0x18000000)
-#define NV_RX2_DESCRIPTORVALID (1<<29)
-#define NV_RX2_SUBSTRACT1 (1<<25)
-#define NV_RX2_ERROR1 (1<<18)
-#define NV_RX2_ERROR2 (1<<19)
-#define NV_RX2_ERROR3 (1<<20)
-#define NV_RX2_ERROR4 (1<<21)
-#define NV_RX2_CRCERR (1<<22)
-#define NV_RX2_OVERFLOW (1<<23)
-#define NV_RX2_FRAMINGERR (1<<24)
-/* error and avail are the same for both */
-#define NV_RX2_ERROR (1<<30)
-#define NV_RX2_AVAIL (1<<31)
-
-#define NV_RX3_VLAN_TAG_PRESENT (1<<16)
-#define NV_RX3_VLAN_TAG_MASK (0x0000FFFF)
-
-/* Miscellaneous hardware-related defines: */
-#define NV_PCI_REGSZ_VER1 0x270
-#define NV_PCI_REGSZ_VER2 0x604
-
-/* various timeout delays: all in usec */
-#define NV_TXRX_RESET_DELAY 4
-#define NV_TXSTOP_DELAY1 10
-#define NV_TXSTOP_DELAY1MAX 500000
-#define NV_TXSTOP_DELAY2 100
-#define NV_RXSTOP_DELAY1 10
-#define NV_RXSTOP_DELAY1MAX 500000
-#define NV_RXSTOP_DELAY2 100
-#define NV_SETUP5_DELAY 5
-#define NV_SETUP5_DELAYMAX 50000
-#define NV_POWERUP_DELAY 5
-#define NV_POWERUP_DELAYMAX 5000
-#define NV_MIIBUSY_DELAY 50
-#define NV_MIIPHY_DELAY 10
-#define NV_MIIPHY_DELAYMAX 10000
-#define NV_MAC_RESET_DELAY 64
-
-#define NV_WAKEUPPATTERNS 5
-#define NV_WAKEUPMASKENTRIES 4
-
-/* General driver defaults */
-#define NV_WATCHDOG_TIMEO (5*HZ)
-
-#define RX_RING 128
-#define TX_RING 256
-/*
- * If your nic mysteriously hangs then try to reduce the limits
- * to 1/0: It might be required to set NV_TX_LASTPACKET in the
- * last valid ring entry. But this would be impossible to
- * implement - probably a disassembly error.
- */
-#define TX_LIMIT_STOP 255
-#define TX_LIMIT_START 254
-
-/* rx/tx mac addr + type + vlan + align + slack*/
-#define NV_RX_HEADERS (64)
-/* even more slack. */
-#define NV_RX_ALLOC_PAD (64)
-
-/* maximum mtu size */
-#define NV_PKTLIMIT_1 ETH_DATA_LEN /* hard limit not known */
-#define NV_PKTLIMIT_2 9100 /* Actual limit according to NVidia: 9202 */
-
-#define OOM_REFILL (1+HZ/20)
-#define POLL_WAIT (1+HZ/100)
-#define LINK_TIMEOUT (3*HZ)
-
-/*
- * desc_ver values:
- * The nic supports three different descriptor types:
- * - DESC_VER_1: Original
- * - DESC_VER_2: support for jumbo frames.
- * - DESC_VER_3: 64-bit format.
- */
-#define DESC_VER_1 1
-#define DESC_VER_2 2
-#define DESC_VER_3 3
-
-/* PHY defines */
-#define PHY_OUI_MARVELL 0x5043
-#define PHY_OUI_CICADA 0x03f1
-#define PHYID1_OUI_MASK 0x03ff
-#define PHYID1_OUI_SHFT 6
-#define PHYID2_OUI_MASK 0xfc00
-#define PHYID2_OUI_SHFT 10
-#define PHY_INIT1 0x0f000
-#define PHY_INIT2 0x0e00
-#define PHY_INIT3 0x01000
-#define PHY_INIT4 0x0200
-#define PHY_INIT5 0x0004
-#define PHY_INIT6 0x02000
-#define PHY_GIGABIT 0x0100
-
-#define PHY_TIMEOUT 0x1
-#define PHY_ERROR 0x2
-
-#define PHY_100 0x1
-#define PHY_1000 0x2
-#define PHY_HALF 0x100
-
-/* FIXME: MII defines that should be added to <linux/mii.h> */
-#define MII_1000BT_CR 0x09
-#define MII_1000BT_SR 0x0a
-#define ADVERTISE_1000FULL 0x0200
-#define ADVERTISE_1000HALF 0x0100
-#define LPA_1000FULL 0x0800
-#define LPA_1000HALF 0x0400
-
-/* MSI/MSI-X defines */
-#define NV_MSI_X_MAX_VECTORS 8
-#define NV_MSI_X_VECTORS_MASK 0x000f
-#define NV_MSI_CAPABLE 0x0010
-#define NV_MSI_X_CAPABLE 0x0020
-#define NV_MSI_ENABLED 0x0040
-#define NV_MSI_X_ENABLED 0x0080
-
-#define NV_MSI_X_VECTOR_ALL 0x0
-#define NV_MSI_X_VECTOR_RX 0x0
-#define NV_MSI_X_VECTOR_TX 0x1
-#define NV_MSI_X_VECTOR_OTHER 0x2
-
-/*
- * SMP locking:
- * All hardware access under dev->priv->lock, except the performance
- * critical parts:
- * - rx is (pseudo-) lockless: it relies on the single-threading provided
- * by the arch code for interrupts.
- * - tx setup is lockless: it relies on dev->xmit_lock. Actual submission
- * needs dev->priv->lock :-(
- * - set_multicast_list: preparation lockless, relies on dev->xmit_lock.
- */
-
-/* in dev: base, irq */
-struct fe_priv {
- spinlock_t lock;
-
- /* General data:
- * Locking: spin_lock(&np->lock); */
- struct net_device_stats stats;
- int in_shutdown;
- u32 linkspeed;
- int duplex;
- int autoneg;
- int fixed_mode;
- int phyaddr;
- int wolenabled;
- unsigned int phy_oui;
- u16 gigabit;
-
- /* General data: RO fields */
- dma_addr_t ring_addr;
- struct pci_dev *pci_dev;
- u32 orig_mac[2];
- u32 irqmask;
- u32 desc_ver;
- u32 txrxctl_bits;
- u32 vlanctl_bits;
- u32 driver_data;
- u32 register_size;
-
- void __iomem *base;
-
- /* rx specific fields.
- * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- ring_type rx_ring;
- unsigned int cur_rx, refill_rx;
- struct sk_buff *rx_skbuff[RX_RING];
- dma_addr_t rx_dma[RX_RING];
- unsigned int rx_buf_sz;
- unsigned int pkt_limit;
- struct timer_list oom_kick;
- struct timer_list nic_poll;
- u32 nic_poll_irq;
-
- /* media detection workaround.
- * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- int need_linktimer;
- unsigned long link_timeout;
- /*
- * tx specific fields.
- */
- ring_type tx_ring;
- unsigned int next_tx, nic_tx;
- struct sk_buff *tx_skbuff[TX_RING];
- dma_addr_t tx_dma[TX_RING];
- unsigned int tx_dma_len[TX_RING];
- u32 tx_flags;
-
- /* vlan fields */
- struct vlan_group *vlangrp;
-
- /* msi/msi-x fields */
- u32 msi_flags;
- struct msix_entry msi_x_entry[NV_MSI_X_MAX_VECTORS];
-
- ec_device_t *ecdev;
-};
-
-/*
- * Maximum number of loops until we assume that a bit in the irq mask
- * is stuck. Overridable with module param.
- */
-static int max_interrupt_work = 5;
-
-/*
- * Optimization can be either throughput mode or CPU mode
- *
- * Throughput Mode: Every tx and rx packet will generate an interrupt.
- * CPU Mode: Interrupts are controlled by a timer.
- */
-#define NV_OPTIMIZATION_MODE_THROUGHPUT 0
-#define NV_OPTIMIZATION_MODE_CPU 1
-static int optimization_mode = NV_OPTIMIZATION_MODE_THROUGHPUT;
-
-/*
- * Poll interval for timer irq
- *
- * This interval determines how frequently an interrupt is generated.
- * This value is determined by [(time_in_micro_secs * 100) / (2^10)]
- * Min = 0, and Max = 65535
- */
-static int poll_interval = -1;
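
Working the formula quoted in the comment above through once: the register value is (time_in_micro_secs * 100) / 2^10, so a value of 97 corresponds to roughly 1 ms, which matches the NVREG_POLL_DEFAULT note earlier in this file; the 0..65535 range suggests a 16-bit register. A small sketch of the conversion in both directions (the helper names are illustrative, not part of the driver):

#include <stdio.h>

static unsigned us_to_reg(unsigned us)  { return us * 100 / 1024; }
static unsigned reg_to_us(unsigned reg) { return reg * 1024 / 100; }

int main(void)
{
        printf("1000 us -> reg %u\n", us_to_reg(1000)); /* 97 */
        printf("reg 97  -> %u us\n", reg_to_us(97));    /* 993 us, ~1 ms */
        return 0;
}
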
-
-/*
- * Disable MSI interrupts
- */
-static int disable_msi = 0;
-
-/*
- * Disable MSIX interrupts
- */
-static int disable_msix = 0;
-
-static int board_idx = -1;
-
-static inline struct fe_priv *get_nvpriv(struct net_device *dev)
-{
- return netdev_priv(dev);
-}
-
-static inline u8 __iomem *get_hwbase(struct net_device *dev)
-{
- return ((struct fe_priv *)netdev_priv(dev))->base;
-}
-
-static inline void pci_push(u8 __iomem *base)
-{
- /* force out pending posted writes */
- readl(base);
-}
-
-static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v)
-{
- return le32_to_cpu(prd->FlagLen)
- & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2);
-}
-
-static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v)
-{
- return le32_to_cpu(prd->FlagLen) & LEN_MASK_V2;
-}
-
-static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target,
- int delay, int delaymax, const char *msg)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- pci_push(base);
- do {
- udelay(delay);
- delaymax -= delay;
- if (delaymax < 0) {
- if (msg)
- printk(msg);
- return 1;
- }
- } while ((readl(base + offset) & mask) != target);
- return 0;
-}
-
-#define NV_SETUP_RX_RING 0x01
-#define NV_SETUP_TX_RING 0x02
-
-static void setup_hw_rings(struct net_device *dev, int rxtx_flags)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + RX_RING*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr);
- }
- } else {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr) >> 32), base + NvRegRxRingPhysAddrHigh);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + RX_RING*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr + RX_RING*sizeof(struct ring_desc_ex)) >> 32), base + NvRegTxRingPhysAddrHigh);
- }
- }
-}
-
-static int using_multi_irqs(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!(np->msi_flags & NV_MSI_X_ENABLED) ||
- ((np->msi_flags & NV_MSI_X_ENABLED) &&
- ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1)))
- return 0;
- else
- return 1;
-}
-
-static void nv_enable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-static void nv_disable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-/* In MSIX mode, a write to irqmask behaves as XOR */
-static void nv_enable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- writel(mask, base + NvRegIrqMask);
-}
-
-static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- writel(mask, base + NvRegIrqMask);
- } else {
- if (np->msi_flags & NV_MSI_ENABLED)
- writel(0, base + NvRegMSIIrqMask);
- writel(0, base + NvRegIrqMask);
- }
-}
-
-#define MII_READ (-1)
-/* mii_rw: read/write a register on the PHY.
- *
- * Caller must guarantee serialization
- */
-static int mii_rw(struct net_device *dev, int addr, int miireg, int value)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 reg;
- int retval;
-
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
-
- reg = readl(base + NvRegMIIControl);
- if (reg & NVREG_MIICTL_INUSE) {
- writel(NVREG_MIICTL_INUSE, base + NvRegMIIControl);
- udelay(NV_MIIBUSY_DELAY);
- }
-
- reg = (addr << NVREG_MIICTL_ADDRSHIFT) | miireg;
- if (value != MII_READ) {
- writel(value, base + NvRegMIIData);
- reg |= NVREG_MIICTL_WRITE;
- }
- writel(reg, base + NvRegMIIControl);
-
- if (reg_delay(dev, NvRegMIIControl, NVREG_MIICTL_INUSE, 0,
- NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d timed out.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else if (value != MII_READ) {
- /* it was a write operation - fewer failures are detectable */
- dprintk(KERN_DEBUG "%s: mii_rw wrote 0x%x to reg %d at PHY %d\n",
- dev->name, value, miireg, addr);
- retval = 0;
- } else if (readl(base + NvRegMIIStatus) & NVREG_MIISTAT_ERROR) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d failed.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else {
- retval = readl(base + NvRegMIIData);
- dprintk(KERN_DEBUG "%s: mii_rw read from reg %d at PHY %d: 0x%x.\n",
- dev->name, miireg, addr, retval);
- }
-
- return retval;
-}
-
-static int phy_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 miicontrol;
- unsigned int tries = 0;
-
- miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- miicontrol |= BMCR_RESET;
- if (mii_rw(dev, np->phyaddr, MII_BMCR, miicontrol)) {
- return -1;
- }
-
- /* wait for 500ms */
- msleep(500);
-
- /* must wait till reset is deasserted */
- while (miicontrol & BMCR_RESET) {
- msleep(10);
- miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- /* FIXME: 100 tries seem excessive */
- if (tries++ > 100)
- return -1;
- }
- return 0;
-}
-
-static int phy_init(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000,reg;
-
- /* set advertise register */
- reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL|0x800|0x400);
- if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) {
- printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* get phy interface type */
- phyinterface = readl(base + NvRegPhyInterface);
-
- /* see if gigabit phy */
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- if (mii_status & PHY_GIGABIT) {
- np->gigabit = PHY_GIGABIT;
- mii_control_1000 = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- mii_control_1000 &= ~ADVERTISE_1000HALF;
- if (phyinterface & PHY_RGMII)
- mii_control_1000 |= ADVERTISE_1000FULL;
- else
- mii_control_1000 &= ~ADVERTISE_1000FULL;
-
- if (mii_rw(dev, np->phyaddr, MII_1000BT_CR, mii_control_1000)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- else
- np->gigabit = 0;
-
- /* reset the phy */
- if (phy_reset(dev)) {
- printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* phy vendor specific configuration */
- if ((np->phy_oui == PHY_OUI_CICADA) && (phyinterface & PHY_RGMII) ) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_RESV1, MII_READ);
- phy_reserved &= ~(PHY_INIT1 | PHY_INIT2);
- phy_reserved |= (PHY_INIT3 | PHY_INIT4);
- if (mii_rw(dev, np->phyaddr, MII_RESV1, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- phy_reserved = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
- phy_reserved |= PHY_INIT5;
- if (mii_rw(dev, np->phyaddr, MII_NCONFIG, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- if (np->phy_oui == PHY_OUI_CICADA) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_SREVISION, MII_READ);
- phy_reserved |= PHY_INIT6;
- if (mii_rw(dev, np->phyaddr, MII_SREVISION, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
-
- /* restart auto negotiation */
- mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE);
- if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) {
- return PHY_ERROR;
- }
-
- return 0;
-}
-
-static void nv_start_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_rx\n", dev->name);
- /* Already running? Stop it. */
- if (readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) {
- writel(0, base + NvRegReceiverControl);
- pci_push(base);
- }
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
- writel(NVREG_RCVCTL_START, base + NvRegReceiverControl);
- dprintk(KERN_DEBUG "%s: nv_start_rx to duplex %d, speed 0x%08x.\n",
- dev->name, np->duplex, np->linkspeed);
- pci_push(base);
-}
-
-static void nv_stop_rx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_rx\n", dev->name);
- writel(0, base + NvRegReceiverControl);
- reg_delay(dev, NvRegReceiverStatus, NVREG_RCVSTAT_BUSY, 0,
- NV_RXSTOP_DELAY1, NV_RXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_rx: ReceiverStatus remained busy");
-
- udelay(NV_RXSTOP_DELAY2);
- writel(0, base + NvRegLinkSpeed);
-}
-
-static void nv_start_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_tx\n", dev->name);
- writel(NVREG_XMITCTL_START, base + NvRegTransmitterControl);
- pci_push(base);
-}
-
-static void nv_stop_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_tx\n", dev->name);
- writel(0, base + NvRegTransmitterControl);
- reg_delay(dev, NvRegTransmitterStatus, NVREG_XMITSTAT_BUSY, 0,
- NV_TXSTOP_DELAY1, NV_TXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_tx: TransmitterStatus remained busy");
-
- udelay(NV_TXSTOP_DELAY2);
- writel(0, base + NvRegUnknownTransmitterReg);
-}
-
-static void nv_txrx_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_txrx_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- udelay(NV_TXRX_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-static void nv_mac_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_mac_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- writel(NVREG_MAC_RESET_ASSERT, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(0, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-/*
- * nv_get_stats: dev->get_stats function
- * Get latest stats value from the nic.
- * Called with read_lock(&dev_base_lock) held for read -
- * only synchronized against unregister_netdevice.
- */
-static struct net_device_stats *nv_get_stats(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- /* It seems that the nic always generates interrupts and doesn't
- * accumulate errors internally. Thus the current values in np->stats
- * are already up to date.
- */
- return &np->stats;
-}
-
-/*
- * nv_alloc_rx: fill rx ring entries.
- * Return 1 if the allocations for the skbs failed and the
- * rx engine is without Available descriptors
- */
-static int nv_alloc_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int refill_rx = np->refill_rx;
- int nr;
-
- while (np->cur_rx != refill_rx) {
- struct sk_buff *skb;
-
- nr = refill_rx % RX_RING;
- if (np->rx_skbuff[nr] == NULL) {
-
- skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
- if (!skb)
- break;
-
- skb->dev = dev;
- np->rx_skbuff[nr] = skb;
- } else {
- skb = np->rx_skbuff[nr];
- }
- np->rx_dma[nr] = pci_map_single(np->pci_dev, skb->data,
- skb->end-skb->data, PCI_DMA_FROMDEVICE);
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->rx_dma[nr]);
- wmb();
- np->rx_ring.orig[nr].FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL);
- } else {
- np->rx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->rx_dma[nr]) >> 32;
- np->rx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->rx_dma[nr]) & 0x0FFFFFFFF;
- wmb();
- np->rx_ring.ex[nr].FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL);
- }
- dprintk(KERN_DEBUG "%s: nv_alloc_rx: Packet %d marked as Available\n",
- dev->name, refill_rx);
- refill_rx++;
- }
- np->refill_rx = refill_rx;
- if (np->cur_rx - refill_rx == RX_RING)
- return 1;
- return 0;
-}
-
-static void nv_do_rx_refill(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
-}
-
-static void nv_init_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->cur_rx = RX_RING;
- np->refill_rx = 0;
- for (i = 0; i < RX_RING; i++)
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].FlagLen = 0;
- else
- np->rx_ring.ex[i].FlagLen = 0;
-}
-
-static void nv_init_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->next_tx = np->nic_tx = 0;
- for (i = 0; i < TX_RING; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].FlagLen = 0;
- else
- np->tx_ring.ex[i].FlagLen = 0;
- np->tx_skbuff[i] = NULL;
- np->tx_dma[i] = 0;
- }
-}
-
-static int nv_init_ring(struct net_device *dev)
-{
- nv_init_tx(dev);
- nv_init_rx(dev);
- return nv_alloc_rx(dev);
-}
-
-static int nv_release_txskb(struct net_device *dev, unsigned int skbnr)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- dprintk(KERN_INFO "%s: nv_release_txskb for skbnr %d\n",
- dev->name, skbnr);
-
- if (np->tx_dma[skbnr]) {
- pci_unmap_page(np->pci_dev, np->tx_dma[skbnr],
- np->tx_dma_len[skbnr],
- PCI_DMA_TODEVICE);
- np->tx_dma[skbnr] = 0;
- }
-
- if (np->tx_skbuff[skbnr]) {
- if (!np->ecdev) dev_kfree_skb_any(np->tx_skbuff[skbnr]);
- np->tx_skbuff[skbnr] = NULL;
- return 1;
- } else {
- return 0;
- }
-}
-
-static void nv_drain_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int i;
-
- for (i = 0; i < TX_RING; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].FlagLen = 0;
- else
- np->tx_ring.ex[i].FlagLen = 0;
- if (nv_release_txskb(dev, i))
- np->stats.tx_dropped++;
- }
-}
-
-static void nv_drain_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
- for (i = 0; i < RX_RING; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].FlagLen = 0;
- else
- np->rx_ring.ex[i].FlagLen = 0;
- wmb();
- if (np->rx_skbuff[i]) {
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
- if (!np->ecdev) dev_kfree_skb(np->rx_skbuff[i]);
- np->rx_skbuff[i] = NULL;
- }
- }
-}
-
-static void drain_ring(struct net_device *dev)
-{
- nv_drain_tx(dev);
- nv_drain_rx(dev);
-}
-
-/*
- * nv_start_xmit: dev->hard_start_xmit function
- * Called with dev->xmit_lock held.
- */
-static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 tx_flags = 0;
- u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
- unsigned int fragments = skb_shinfo(skb)->nr_frags;
- unsigned int nr = (np->next_tx - 1) % TX_RING;
- unsigned int start_nr = np->next_tx % TX_RING;
- unsigned int i;
- u32 offset = 0;
- u32 bcnt;
- u32 size = skb->len-skb->data_len;
- u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- u32 tx_flags_vlan = 0;
-
- /* add fragments to entries count */
- for (i = 0; i < fragments; i++) {
- entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
- ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- }
-
- if (!np->ecdev) {
- spin_lock_irq(&np->lock);
-
- if ((np->next_tx - np->nic_tx + entries - 1) > TX_LIMIT_STOP) {
- spin_unlock_irq(&np->lock);
- netif_stop_queue(dev);
- return NETDEV_TX_BUSY;
- }
- }
-
- /* setup the header buffer */
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % TX_RING;
-
- np->tx_dma[nr] = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- tx_flags = np->tx_flags;
- offset += bcnt;
- size -= bcnt;
- } while(size);
-
- /* setup the fragments */
- for (i = 0; i < fragments; i++) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- u32 size = frag->size;
- offset = 0;
-
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % TX_RING;
-
- np->tx_dma[nr] = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- offset += bcnt;
- size -= bcnt;
- } while (size);
- }
-
- /* set last fragment flag */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].FlagLen |= cpu_to_le32(tx_flags_extra);
- } else {
- np->tx_ring.ex[nr].FlagLen |= cpu_to_le32(tx_flags_extra);
- }
-
- np->tx_skbuff[nr] = skb;
-
-#ifdef NETIF_F_TSO
- if (skb_shinfo(skb)->tso_size)
- tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->tso_size << NV_TX2_TSO_SHIFT);
- else
-#endif
- tx_flags_extra = (skb->ip_summed == CHECKSUM_HW ? (NV_TX2_CHECKSUM_L3|NV_TX2_CHECKSUM_L4) : 0);
-
- /* vlan tag */
- if (np->vlangrp && vlan_tx_tag_present(skb)) {
- tx_flags_vlan = NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb);
- }
-
- /* set tx flags */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[start_nr].FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra);
- } else {
- np->tx_ring.ex[start_nr].TxVlan = cpu_to_le32(tx_flags_vlan);
- np->tx_ring.ex[start_nr].FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra);
- }
-
- dprintk(KERN_DEBUG "%s: nv_start_xmit: packet %d (entries %d) queued for transmission. tx_flags_extra: %x\n",
- dev->name, np->next_tx, entries, tx_flags_extra);
- {
- int j;
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)skb->data)[j]);
- }
- dprintk("\n");
- }
-
- np->next_tx += entries;
-
- dev->trans_start = jiffies;
- if (!np->ecdev) spin_unlock_irq(&np->lock);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(get_hwbase(dev));
- return NETDEV_TX_OK;
-}
-
-/*
- * nv_tx_done: check for completed packets, release the skbs.
- *
- * Caller must own np->lock.
- */
-static void nv_tx_done(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 Flags;
- unsigned int i;
- struct sk_buff *skb;
-
- while (np->nic_tx != np->next_tx) {
- i = np->nic_tx % TX_RING;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- Flags = le32_to_cpu(np->tx_ring.orig[i].FlagLen);
- else
- Flags = le32_to_cpu(np->tx_ring.ex[i].FlagLen);
-
- dprintk(KERN_DEBUG "%s: nv_tx_done: looking at packet %d, Flags 0x%x.\n",
- dev->name, np->nic_tx, Flags);
- if (Flags & NV_TX_VALID)
- break;
- if (np->desc_ver == DESC_VER_1) {
- if (Flags & NV_TX_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (Flags & (NV_TX_RETRYERROR|NV_TX_CARRIERLOST|NV_TX_LATECOLLISION|
- NV_TX_UNDERFLOW|NV_TX_ERROR)) {
- if (Flags & NV_TX_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (Flags & NV_TX_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- } else {
- if (Flags & NV_TX2_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (Flags & (NV_TX2_RETRYERROR|NV_TX2_CARRIERLOST|NV_TX2_LATECOLLISION|
- NV_TX2_UNDERFLOW|NV_TX2_ERROR)) {
- if (Flags & NV_TX2_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (Flags & NV_TX2_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- }
- nv_release_txskb(dev, i);
- np->nic_tx++;
- }
- if (!np->ecdev && np->next_tx - np->nic_tx < TX_LIMIT_START)
- netif_wake_queue(dev);
-}
-
-/*
- * nv_tx_timeout: dev->tx_timeout function
- * Called with dev->xmit_lock held.
- */
-static void nv_tx_timeout(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 status;
-
- if (np->msi_flags & NV_MSI_X_ENABLED)
- status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- else
- status = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
-
- printk(KERN_INFO "%s: Got tx_timeout. irq: %08x\n", dev->name, status);
-
- {
- int i;
-
- printk(KERN_INFO "%s: Ring at %lx: next %d nic %d\n",
- dev->name, (unsigned long)np->ring_addr,
- np->next_tx, np->nic_tx);
- printk(KERN_INFO "%s: Dumping tx registers\n", dev->name);
- for (i=0;i<=np->register_size;i+= 32) {
- printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
- i,
- readl(base + i + 0), readl(base + i + 4),
- readl(base + i + 8), readl(base + i + 12),
- readl(base + i + 16), readl(base + i + 20),
- readl(base + i + 24), readl(base + i + 28));
- }
- printk(KERN_INFO "%s: Dumping tx ring\n", dev->name);
- for (i=0;i<TX_RING;i+= 4) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.orig[i].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i].FlagLen),
- le32_to_cpu(np->tx_ring.orig[i+1].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i+1].FlagLen),
- le32_to_cpu(np->tx_ring.orig[i+2].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i+2].FlagLen),
- le32_to_cpu(np->tx_ring.orig[i+3].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i+3].FlagLen));
- } else {
- printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.ex[i].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i].FlagLen),
- le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i+1].FlagLen),
- le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i+2].FlagLen),
- le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i+3].FlagLen));
- }
- }
- }
-
- if (!np->ecdev) spin_lock_irq(&np->lock);
-
- /* 1) stop tx engine */
- nv_stop_tx(dev);
-
- /* 2) check that the packets were not sent already: */
- nv_tx_done(dev);
-
- /* 3) if there are dead entries: clear everything */
- if (np->next_tx != np->nic_tx) {
- printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name);
- nv_drain_tx(dev);
- np->next_tx = np->nic_tx = 0;
- setup_hw_rings(dev, NV_SETUP_TX_RING);
- if (!np->ecdev) netif_wake_queue(dev);
- }
-
- /* 4) restart tx engine */
- nv_start_tx(dev);
- if (!np->ecdev) spin_unlock_irq(&np->lock);
-}
-
-/*
- * Called when the nic notices a mismatch between the actual data len on the
- * wire and the len indicated in the 802 header
- */
-static int nv_getlen(struct net_device *dev, void *packet, int datalen)
-{
- int hdrlen; /* length of the 802 header */
- int protolen; /* length as stored in the proto field */
-
- /* 1) calculate len according to header */
- if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == __constant_htons(ETH_P_8021Q)) {
- protolen = ntohs( ((struct vlan_ethhdr *)packet)->h_vlan_encapsulated_proto );
- hdrlen = VLAN_HLEN;
- } else {
- protolen = ntohs( ((struct ethhdr *)packet)->h_proto);
- hdrlen = ETH_HLEN;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: datalen %d, protolen %d, hdrlen %d\n",
- dev->name, datalen, protolen, hdrlen);
- if (protolen > ETH_DATA_LEN)
- return datalen; /* Value in proto field not a len, no checks possible */
-
- protolen += hdrlen;
- /* consistency checks: */
- if (datalen > ETH_ZLEN) {
- if (datalen >= protolen) {
- /* more data on wire than in 802 header, trim off
- * additional data.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, protolen);
- return protolen;
- } else {
- /* less data on wire than mentioned in header.
- * Discard the packet.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding long packet.\n",
- dev->name);
- return -1;
- }
- } else {
- /* short packet. Accept only if 802 values are also short */
- if (protolen > ETH_ZLEN) {
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding short packet.\n",
- dev->name);
- return -1;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, datalen);
- return datalen;
- }
-}
-
-static void nv_rx_process(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 Flags;
- u32 vlanflags = 0;
-
-
- for (;;) {
- struct sk_buff *skb;
- int len;
- int i;
- if (np->cur_rx - np->refill_rx >= RX_RING)
- break; /* we scanned the whole ring - do not continue */
-
- i = np->cur_rx % RX_RING;
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- Flags = le32_to_cpu(np->rx_ring.orig[i].FlagLen);
- len = nv_descr_getlength(&np->rx_ring.orig[i], np->desc_ver);
- } else {
- Flags = le32_to_cpu(np->rx_ring.ex[i].FlagLen);
- len = nv_descr_getlength_ex(&np->rx_ring.ex[i], np->desc_ver);
- vlanflags = le32_to_cpu(np->rx_ring.ex[i].PacketBufferLow);
- }
-
- dprintk(KERN_DEBUG "%s: nv_rx_process: looking at packet %d, Flags 0x%x.\n",
- dev->name, np->cur_rx, Flags);
-
- if (Flags & NV_RX_AVAIL)
- break; /* still owned by hardware, */
-
- /*
- * the packet is for us - immediately tear down the pci mapping.
- * TODO: check if a prefetch of the first cacheline improves
- * the performance.
- */
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
-
- {
- int j;
- dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",Flags);
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)np->rx_skbuff[i]->data)[j]);
- }
- dprintk("\n");
- }
- /* look at what we actually got: */
- if (np->desc_ver == DESC_VER_1) {
- if (!(Flags & NV_RX_DESCRIPTORVALID))
- goto next_pkt;
-
- if (Flags & NV_RX_ERROR) {
- if (Flags & NV_RX_MISSEDFRAME) {
- np->stats.rx_missed_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & (NV_RX_ERROR1|NV_RX_ERROR2|NV_RX_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors. */
- if (Flags & NV_RX_FRAMINGERR) {
- if (Flags & NV_RX_SUBSTRACT1) {
- len--;
- }
- }
- }
- } else {
- if (!(Flags & NV_RX2_DESCRIPTORVALID))
- goto next_pkt;
-
- if (Flags & NV_RX2_ERROR) {
- if (Flags & (NV_RX2_ERROR1|NV_RX2_ERROR2|NV_RX2_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX2_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX2_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX2_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors */
- if (Flags & NV_RX2_FRAMINGERR) {
- if (Flags & NV_RX2_SUBSTRACT1) {
- len--;
- }
- }
- }
- Flags &= NV_RX2_CHECKSUMMASK;
- if (Flags == NV_RX2_CHECKSUMOK1 ||
- Flags == NV_RX2_CHECKSUMOK2 ||
- Flags == NV_RX2_CHECKSUMOK3) {
- dprintk(KERN_DEBUG "%s: hw checksum hit!.\n", dev->name);
- np->rx_skbuff[i]->ip_summed = CHECKSUM_UNNECESSARY;
- } else {
- dprintk(KERN_DEBUG "%s: hwchecksum miss!.\n", dev->name);
- }
- }
- if (np->ecdev) {
- ecdev_receive(np->ecdev, np->rx_skbuff[i]->data, len);
- }
- else {
- /* got a valid packet - forward it to the network core */
- skb = np->rx_skbuff[i];
- np->rx_skbuff[i] = NULL;
-
- skb_put(skb, len);
- skb->protocol = eth_type_trans(skb, dev);
- dprintk(KERN_DEBUG "%s: nv_rx_process: packet %d with %d bytes, proto %d accepted.\n",
- dev->name, np->cur_rx, len, skb->protocol);
- if (np->vlangrp && (vlanflags & NV_RX3_VLAN_TAG_PRESENT)) {
- vlan_hwaccel_rx(skb, np->vlangrp, vlanflags & NV_RX3_VLAN_TAG_MASK);
- } else {
- netif_rx(skb);
- }
- }
- dev->last_rx = jiffies;
- np->stats.rx_packets++;
- np->stats.rx_bytes += len;
-next_pkt:
- np->cur_rx++;
- }
-}
-
-static void set_bufsize(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (dev->mtu <= ETH_DATA_LEN)
- np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS;
- else
- np->rx_buf_sz = dev->mtu + NV_RX_HEADERS;
-}
-
-/*
- * nv_change_mtu: dev->change_mtu function
- * Called with dev_base_lock held for read.
- */
-static int nv_change_mtu(struct net_device *dev, int new_mtu)
-{
- struct fe_priv *np = netdev_priv(dev);
- int old_mtu;
-
- if (new_mtu < 64 || new_mtu > np->pkt_limit)
- return -EINVAL;
-
- old_mtu = dev->mtu;
- dev->mtu = new_mtu;
-
- /* return early if the buffer sizes will not change */
- if (old_mtu <= ETH_DATA_LEN && new_mtu <= ETH_DATA_LEN)
- return 0;
- if (old_mtu == new_mtu)
- return 0;
-
- /* synchronized against open : rtnl_lock() held by caller */
- if (netif_running(dev)) {
- u8 __iomem *base = get_hwbase(dev);
- /*
- * It seems that the nic preloads valid ring entries into an
- * internal buffer. The procedure for flushing everything is
- * guessed; there is probably a simpler approach.
- * Changing the MTU is a rare event, so it shouldn't matter.
- */
- nv_disable_irq(dev);
- spin_lock_bh(&dev->xmit_lock);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- /* reinit driver view of the rx queue */
- nv_init_rx(dev);
- nv_init_tx(dev);
- /* alloc new rx buffers */
- set_bufsize(dev);
- if (nv_alloc_rx(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((RX_RING-1) << NVREG_RINGSZ_RXSHIFT) + ((TX_RING-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
-
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
- spin_unlock(&np->lock);
- spin_unlock_bh(&dev->xmit_lock);
- nv_enable_irq(dev);
- }
- return 0;
-}
-
-static void nv_copy_mac_to_hw(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 mac[2];
-
- mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
- (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
- mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
-
- writel(mac[0], base + NvRegMacAddrA);
- writel(mac[1], base + NvRegMacAddrB);
-}
-
-/*
- * nv_set_mac_address: dev->set_mac_address function
- * Called with rtnl_lock() held.
- */
-static int nv_set_mac_address(struct net_device *dev, void *addr)
-{
- struct fe_priv *np = netdev_priv(dev);
- struct sockaddr *macaddr = (struct sockaddr*)addr;
-
- if(!is_valid_ether_addr(macaddr->sa_data))
- return -EADDRNOTAVAIL;
-
- /* synchronized against open : rtnl_lock() held by caller */
- memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN);
-
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- spin_lock_irq(&np->lock);
-
- /* stop rx engine */
- nv_stop_rx(dev);
-
- /* set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* restart rx engine */
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
- spin_unlock_bh(&dev->xmit_lock);
- } else {
- nv_copy_mac_to_hw(dev);
- }
- return 0;
-}
-
-/*
- * nv_set_multicast: dev->set_multicast function
- * Called with dev->xmit_lock held.
- */
-static void nv_set_multicast(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 addr[2];
- u32 mask[2];
- u32 pff;
-
- memset(addr, 0, sizeof(addr));
- memset(mask, 0, sizeof(mask));
-
- if (dev->flags & IFF_PROMISC) {
- printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n", dev->name);
- pff = NVREG_PFF_PROMISC;
- } else {
- pff = NVREG_PFF_MYADDR;
-
- if (dev->flags & IFF_ALLMULTI || dev->mc_list) {
- u32 alwaysOff[2];
- u32 alwaysOn[2];
-
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0xffffffff;
- if (dev->flags & IFF_ALLMULTI) {
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0;
- } else {
- struct dev_mc_list *walk;
-
- walk = dev->mc_list;
- while (walk != NULL) {
- u32 a, b;
- a = le32_to_cpu(*(u32 *) walk->dmi_addr);
- b = le16_to_cpu(*(u16 *) (&walk->dmi_addr[4]));
- alwaysOn[0] &= a;
- alwaysOff[0] &= ~a;
- alwaysOn[1] &= b;
- alwaysOff[1] &= ~b;
- walk = walk->next;
- }
- }
- addr[0] = alwaysOn[0];
- addr[1] = alwaysOn[1];
- mask[0] = alwaysOn[0] | alwaysOff[0];
- mask[1] = alwaysOn[1] | alwaysOff[1];
- }
- }
- addr[0] |= NVREG_MCASTADDRA_FORCE;
- pff |= NVREG_PFF_ALWAYS;
- spin_lock_irq(&np->lock);
- nv_stop_rx(dev);
- writel(addr[0], base + NvRegMulticastAddrA);
- writel(addr[1], base + NvRegMulticastAddrB);
- writel(mask[0], base + NvRegMulticastMaskA);
- writel(mask[1], base + NvRegMulticastMaskB);
- writel(pff, base + NvRegPacketFilterFlags);
- dprintk(KERN_INFO "%s: reconfiguration for multicast lists.\n",
- dev->name);
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
-}
-
-/**
- * nv_update_linkspeed: Setup the MAC according to the link partner
- * @dev: Network device to be configured
- *
- * The function queries the PHY and checks if there is a link partner.
- * If yes, then it sets up the MAC accordingly. Otherwise, the MAC is
- * set to 10 MBit HD.
- *
- * The function returns 0 if there is no link partner and 1 if there is
- * a good link partner.
- */
-static int nv_update_linkspeed(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int adv, lpa;
- int newls = np->linkspeed;
- int newdup = np->duplex;
- int mii_status;
- int retval = 0;
- u32 control_1000, status_1000, phyreg;
-
- /* BMSR_LSTATUS is latched, read it twice:
- * we want the current value.
- */
- mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
-
- if (!(mii_status & BMSR_LSTATUS)) {
- dprintk(KERN_DEBUG "%s: no link detected by phy - falling back to 10HD.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- goto set_speed;
- }
-
- if (np->autoneg == 0) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: autoneg off, PHY set to 0x%04x.\n",
- dev->name, np->fixed_mode);
- if (np->fixed_mode & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (np->fixed_mode & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (np->fixed_mode & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
- retval = 1;
- goto set_speed;
- }
- /* check auto negotiation is complete */
- if (!(mii_status & BMSR_ANEGCOMPLETE)) {
- /* still in autonegotiation - configure nic for 10 MBit HD and wait. */
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- dprintk(KERN_DEBUG "%s: autoneg not completed - falling back to 10HD.\n", dev->name);
- goto set_speed;
- }
-
- retval = 1;
- if (np->gigabit == PHY_GIGABIT) {
- control_1000 = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- status_1000 = mii_rw(dev, np->phyaddr, MII_1000BT_SR, MII_READ);
-
- if ((control_1000 & ADVERTISE_1000FULL) &&
- (status_1000 & LPA_1000FULL)) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: GBit ethernet detected.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_1000;
- newdup = 1;
- goto set_speed;
- }
- }
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ);
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: PHY advertises 0x%04x, lpa 0x%04x.\n",
- dev->name, adv, lpa);
-
- /* FIXME: handle parallel detection properly */
- lpa = lpa & adv;
- if (lpa & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (lpa & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (lpa & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else if (lpa & LPA_10HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- } else {
- dprintk(KERN_DEBUG "%s: bad ability %04x - falling back to 10HD.\n", dev->name, lpa);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
-
-set_speed:
- if (np->duplex == newdup && np->linkspeed == newls)
- return retval;
-
- dprintk(KERN_INFO "%s: changing link setting from %d/%d to %d/%d.\n",
- dev->name, np->linkspeed, np->duplex, newls, newdup);
-
- np->duplex = newdup;
- np->linkspeed = newls;
-
- if (np->gigabit == PHY_GIGABIT) {
- phyreg = readl(base + NvRegRandomSeed);
- phyreg &= ~(0x3FF00);
- if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10)
- phyreg |= NVREG_RNDSEED_FORCE3;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100)
- phyreg |= NVREG_RNDSEED_FORCE2;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_1000)
- phyreg |= NVREG_RNDSEED_FORCE;
- writel(phyreg, base + NvRegRandomSeed);
- }
-
- phyreg = readl(base + NvRegPhyInterface);
- phyreg &= ~(PHY_HALF|PHY_100|PHY_1000);
- if (np->duplex == 0)
- phyreg |= PHY_HALF;
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_100)
- phyreg |= PHY_100;
- else if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- phyreg |= PHY_1000;
- writel(phyreg, base + NvRegPhyInterface);
-
- writel(NVREG_MISC1_FORCE | ( np->duplex ? 0 : NVREG_MISC1_HD),
- base + NvRegMisc1);
- pci_push(base);
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
-
- return retval;
-}
-
-static void nv_linkchange(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->ecdev) {
- int link = nv_update_linkspeed(dev);
- ecdev_set_link(np->ecdev, link);
- return;
- }
-
- if (nv_update_linkspeed(dev)) {
- if (!netif_carrier_ok(dev)) {
- netif_carrier_on(dev);
- printk(KERN_INFO "%s: link up.\n", dev->name);
- nv_start_rx(dev);
- }
- } else {
- if (netif_carrier_ok(dev)) {
- netif_carrier_off(dev);
- printk(KERN_INFO "%s: link down.\n", dev->name);
- nv_stop_rx(dev);
- }
- }
-}
-
-static void nv_link_irq(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 miistat;
-
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "%s: link change irq, status 0x%x.\n", dev->name, miistat);
-
- if (miistat & (NVREG_MIISTAT_LINKCHANGE))
- nv_linkchange(dev);
- dprintk(KERN_DEBUG "%s: link change notification done.\n", dev->name);
-}
-
-static irqreturn_t nv_nic_irq(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq\n", dev->name);
-
- for (i=0; ; i++) {
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- } else {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
- }
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- if (!np->ecdev) spin_lock(&np->lock);
- nv_tx_done(dev);
- if (!np->ecdev) spin_unlock(&np->lock);
-
- nv_rx_process(dev);
- if (nv_alloc_rx(dev)) {
- spin_lock(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock(&np->lock);
- }
-
- if (events & NVREG_IRQ_LINK) {
- if (!np->ecdev) spin_lock(&np->lock);
- nv_link_irq(dev);
- if (!np->ecdev) spin_unlock(&np->lock);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- if (!np->ecdev) spin_lock(&np->lock);
- nv_linkchange(dev);
- if (!np->ecdev) spin_unlock(&np->lock);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock(&np->lock);
- /* disable interrupts on the nic */
- if (!(np->msi_flags & NV_MSI_X_ENABLED))
- writel(0, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq = np->irqmask;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock(&np->lock);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_tx(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL;
- writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: tx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- if (!np->ecdev) spin_lock_irq(&np->lock);
- nv_tx_done(dev);
- if (!np->ecdev) spin_unlock_irq(&np->lock);
-
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock_irq(&np->lock);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_TX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock_irq(&np->lock);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_rx(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL;
- writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: rx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- nv_rx_process(dev);
- if (nv_alloc_rx(dev) && !np->ecdev) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
-
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock_irq(&np->lock);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_RX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock_irq(&np->lock);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_other(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER;
- writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- if (events & NVREG_IRQ_LINK) {
- if (!np->ecdev) spin_lock_irq(&np->lock);
- nv_link_irq(dev);
- if (!np->ecdev) spin_unlock_irq(&np->lock);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- if (!np->ecdev) spin_lock_irq(&np->lock);
- nv_linkchange(dev);
- if (!np->ecdev) spin_unlock_irq(&np->lock);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock_irq(&np->lock);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_OTHER, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_OTHER;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock_irq(&np->lock);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-void ec_poll(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (!using_multi_irqs(dev)) {
- nv_nic_irq((int) 0, dev, (struct pt_regs *) NULL);
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- nv_nic_irq_rx((int) 0, dev, (struct pt_regs *) NULL);
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- nv_nic_irq_tx((int) 0, dev, (struct pt_regs *) NULL);
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- nv_nic_irq_other((int) 0, dev, (struct pt_regs *) NULL);
- }
- }
-}
-
-static void nv_do_nic_poll(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 mask = 0;
-
- /*
- * First disable irq(s) and then
- * reenable interrupts on the nic; we have to do this before calling
- * nv_nic_irq because that may decide to do otherwise
- */
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- mask = np->irqmask;
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- mask |= NVREG_IRQ_RX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- mask |= NVREG_IRQ_TX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- mask |= NVREG_IRQ_OTHER;
- }
- }
- np->nic_poll_irq = 0;
-
- /* FIXME: Do we need synchronize_irq(dev->irq) here? */
-
- writel(mask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!using_multi_irqs(dev)) {
- nv_nic_irq((int) 0, (void *) data, (struct pt_regs *) NULL);
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- nv_nic_irq_rx((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- nv_nic_irq_tx((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- nv_nic_irq_other((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
- }
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void nv_poll_controller(struct net_device *dev)
-{
- nv_do_nic_poll((unsigned long) dev);
-}
-#endif
-
-static void nv_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
-{
- struct fe_priv *np = netdev_priv(dev);
- strcpy(info->driver, "forcedeth");
- strcpy(info->version, FORCEDETH_VERSION);
- strcpy(info->bus_info, pci_name(np->pci_dev));
-}
-
-static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- wolinfo->supported = WAKE_MAGIC;
-
- spin_lock_irq(&np->lock);
- if (np->wolenabled)
- wolinfo->wolopts = WAKE_MAGIC;
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- spin_lock_irq(&np->lock);
- if (wolinfo->wolopts == 0) {
- writel(0, base + NvRegWakeUpFlags);
- np->wolenabled = 0;
- }
- if (wolinfo->wolopts & WAKE_MAGIC) {
- writel(NVREG_WAKEUPFLAGS_ENABLE, base + NvRegWakeUpFlags);
- np->wolenabled = 1;
- }
- spin_unlock_irq(&np->lock);
- return 0;
-}
-
-static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
- int adv;
-
- spin_lock_irq(&np->lock);
- ecmd->port = PORT_MII;
- if (!netif_running(dev)) {
- /* We do not track link speed / duplex setting if the
- * interface is disabled. Force a link check */
- nv_update_linkspeed(dev);
- }
- switch(np->linkspeed & (NVREG_LINKSPEED_MASK)) {
- case NVREG_LINKSPEED_10:
- ecmd->speed = SPEED_10;
- break;
- case NVREG_LINKSPEED_100:
- ecmd->speed = SPEED_100;
- break;
- case NVREG_LINKSPEED_1000:
- ecmd->speed = SPEED_1000;
- break;
- }
- ecmd->duplex = DUPLEX_HALF;
- if (np->duplex)
- ecmd->duplex = DUPLEX_FULL;
-
- ecmd->autoneg = np->autoneg;
-
- ecmd->advertising = ADVERTISED_MII;
- if (np->autoneg) {
- ecmd->advertising |= ADVERTISED_Autoneg;
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- } else {
- adv = np->fixed_mode;
- }
- if (adv & ADVERTISE_10HALF)
- ecmd->advertising |= ADVERTISED_10baseT_Half;
- if (adv & ADVERTISE_10FULL)
- ecmd->advertising |= ADVERTISED_10baseT_Full;
- if (adv & ADVERTISE_100HALF)
- ecmd->advertising |= ADVERTISED_100baseT_Half;
- if (adv & ADVERTISE_100FULL)
- ecmd->advertising |= ADVERTISED_100baseT_Full;
- if (np->autoneg && np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- if (adv & ADVERTISE_1000FULL)
- ecmd->advertising |= ADVERTISED_1000baseT_Full;
- }
-
- ecmd->supported = (SUPPORTED_Autoneg |
- SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
- SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
- SUPPORTED_MII);
- if (np->gigabit == PHY_GIGABIT)
- ecmd->supported |= SUPPORTED_1000baseT_Full;
-
- ecmd->phy_address = np->phyaddr;
- ecmd->transceiver = XCVR_EXTERNAL;
-
- /* ignore maxtxpkt, maxrxpkt for now */
- spin_unlock_irq(&np->lock);
- return 0;
-}
-
-static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (ecmd->port != PORT_MII)
- return -EINVAL;
- if (ecmd->transceiver != XCVR_EXTERNAL)
- return -EINVAL;
- if (ecmd->phy_address != np->phyaddr) {
- /* TODO: support switching between multiple phys. Should be
- * trivial, but not enabled due to lack of test hardware. */
- return -EINVAL;
- }
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- u32 mask;
-
- mask = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
- ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full;
- if (np->gigabit == PHY_GIGABIT)
- mask |= ADVERTISED_1000baseT_Full;
-
- if ((ecmd->advertising & mask) == 0)
- return -EINVAL;
-
- } else if (ecmd->autoneg == AUTONEG_DISABLE) {
- /* Note: autonegotiation disable, speed 1000 intentionally
- * forbidden - no one should need that. */
-
- if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100)
- return -EINVAL;
- if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL)
- return -EINVAL;
- } else {
- return -EINVAL;
- }
-
- spin_lock_irq(&np->lock);
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- int adv, bmcr;
-
- np->autoneg = 1;
-
- /* advertise only what has been requested */
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4);
- if (ecmd->advertising & ADVERTISED_10baseT_Half)
- adv |= ADVERTISE_10HALF;
- if (ecmd->advertising & ADVERTISED_10baseT_Full)
- adv |= ADVERTISE_10FULL;
- if (ecmd->advertising & ADVERTISED_100baseT_Half)
- adv |= ADVERTISE_100HALF;
- if (ecmd->advertising & ADVERTISED_100baseT_Full)
- adv |= ADVERTISE_100FULL;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- if (ecmd->advertising & ADVERTISED_1000baseT_Full)
- adv |= ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_1000BT_CR, adv);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
-
- } else {
- int adv, bmcr;
-
- np->autoneg = 0;
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4);
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_10HALF;
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_10FULL;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_100HALF;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_100FULL;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
- np->fixed_mode = adv;
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_1000BT_CR, adv);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_FULLDPLX);
- if (adv & (ADVERTISE_10FULL|ADVERTISE_100FULL))
- bmcr |= BMCR_FULLDPLX;
- if (adv & (ADVERTISE_100HALF|ADVERTISE_100FULL))
- bmcr |= BMCR_SPEED100;
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
-
- if (netif_running(dev)) {
- /* Wait a bit and then reconfigure the nic. */
- udelay(10);
- nv_linkchange(dev);
- }
- }
- spin_unlock_irq(&np->lock);
-
- return 0;
-}
-
-#define FORCEDETH_REGS_VER 1
-
-static int nv_get_regs_len(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- return np->register_size;
-}
-
-static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 *rbuf = buf;
- int i;
-
- regs->version = FORCEDETH_REGS_VER;
- spin_lock_irq(&np->lock);
- for (i = 0;i <= np->register_size/sizeof(u32); i++)
- rbuf[i] = readl(base + i*sizeof(u32));
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_nway_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int ret;
-
- spin_lock_irq(&np->lock);
- if (np->autoneg) {
- int bmcr;
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
-
- ret = 0;
- } else {
- ret = -EINVAL;
- }
- spin_unlock_irq(&np->lock);
-
- return ret;
-}
-
-#ifdef NETIF_F_TSO
-static int nv_set_tso(struct net_device *dev, u32 value)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if ((np->driver_data & DEV_HAS_CHECKSUM))
- return ethtool_op_set_tso(dev, value);
- else
- return value ? -EOPNOTSUPP : 0;
-}
-#endif
-
-static struct ethtool_ops ops = {
- .get_drvinfo = nv_get_drvinfo,
- .get_link = ethtool_op_get_link,
- .get_wol = nv_get_wol,
- .set_wol = nv_set_wol,
- .get_settings = nv_get_settings,
- .set_settings = nv_set_settings,
- .get_regs_len = nv_get_regs_len,
- .get_regs = nv_get_regs,
- .nway_reset = nv_nway_reset,
- .get_perm_addr = ethtool_op_get_perm_addr,
-#ifdef NETIF_F_TSO
- .get_tso = ethtool_op_get_tso,
- .set_tso = nv_set_tso
-#endif
-};
-
-static void nv_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- spin_lock_irq(&np->lock);
-
- /* save vlan group */
- np->vlangrp = grp;
-
- if (grp) {
- /* enable vlan on MAC */
- np->txrxctl_bits |= NVREG_TXRXCTL_VLANSTRIP | NVREG_TXRXCTL_VLANINS;
- } else {
- /* disable vlan on MAC */
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANSTRIP;
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANINS;
- }
-
- writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
-
- spin_unlock_irq(&np->lock);
-};
-
-static void nv_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid)
-{
- /* nothing to do */
-};
-
-static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask)
-{
- u8 __iomem *base = get_hwbase(dev);
- int i;
- u32 msixmap = 0;
-
- /* Each interrupt bit can be mapped to an MSIX vector (4 bits).
- * MSIXMap0 represents the first 8 interrupts and MSIXMap1 represents
- * the remaining 8 interrupts.
- */
- for (i = 0; i < 8; i++) {
- if ((irqmask >> i) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap0) | msixmap, base + NvRegMSIXMap0);
-
- msixmap = 0;
- for (i = 0; i < 8; i++) {
- if ((irqmask >> (i + 8)) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap1) | msixmap, base + NvRegMSIXMap1);
-}
-
-static int nv_request_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int i;
-
- if (np->msi_flags & NV_MSI_X_CAPABLE) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- np->msi_x_entry[i].entry = i;
- }
- if ((ret = pci_enable_msix(np->pci_dev, np->msi_x_entry, (np->msi_flags & NV_MSI_X_VECTORS_MASK))) == 0) {
- np->msi_flags |= NV_MSI_X_ENABLED;
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) {
- /* Request irq for rx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
- /* Request irq for tx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_rx;
- }
- /* Request irq for link and timer handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_tx;
- }
- /* map interrupts to their respective vector */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER);
- } else {
- /* Request irq for all interrupts */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- }
- }
- }
- if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) {
- if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
- np->msi_flags |= NV_MSI_ENABLED;
- if (request_irq(np->pci_dev->irq, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msi(np->pci_dev);
- np->msi_flags &= ~NV_MSI_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIMap0);
- writel(0, base + NvRegMSIMap1);
- /* enable msi vector 0 */
- writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask);
- }
- }
- if (ret != 0) {
- if (request_irq(np->pci_dev->irq, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0)
- goto out_err;
- }
-
- return 0;
-out_free_tx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev);
-out_free_rx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev);
-out_err:
- return 1;
-}
-
-static void nv_free_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- int i;
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- free_irq(np->msi_x_entry[i].vector, dev);
- }
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- } else {
- free_irq(np->pci_dev->irq, dev);
- if (np->msi_flags & NV_MSI_ENABLED) {
- pci_disable_msi(np->pci_dev);
- np->msi_flags &= ~NV_MSI_ENABLED;
- }
- }
-}
-
-static int nv_open(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int oom, i;
-
- dprintk(KERN_DEBUG "nv_open: begin\n");
-
- /* 1) erase previous misconfiguration */
- if (np->driver_data & DEV_HAS_POWER_CNTRL)
- nv_mac_reset(dev);
- /* 4.1-1: stop adapter: ignored, 4.3 seems to be overkill */
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(0, base + NvRegPacketFilterFlags);
-
- writel(0, base + NvRegTransmitterControl);
- writel(0, base + NvRegReceiverControl);
-
- writel(0, base + NvRegAdapterControl);
-
- /* 2) initialize descriptor rings */
- set_bufsize(dev);
- oom = nv_init_ring(dev);
-
- writel(0, base + NvRegLinkSpeed);
- writel(0, base + NvRegUnknownTransmitterReg);
- nv_txrx_reset(dev);
- writel(0, base + NvRegUnknownSetupReg6);
-
- np->in_shutdown = 0;
-
- /* 3) set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* 4) give hw rings */
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((RX_RING-1) << NVREG_RINGSZ_RXSHIFT) + ((TX_RING-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
-
- /* 5) continue setup */
- writel(np->linkspeed, base + NvRegLinkSpeed);
- writel(NVREG_UNKSETUP3_VAL1, base + NvRegUnknownSetupReg3);
- writel(np->txrxctl_bits, base + NvRegTxRxControl);
- writel(np->vlanctl_bits, base + NvRegVlanControl);
- pci_push(base);
- writel(NVREG_TXRXCTL_BIT1|np->txrxctl_bits, base + NvRegTxRxControl);
- reg_delay(dev, NvRegUnknownSetupReg5, NVREG_UNKSETUP5_BIT31, NVREG_UNKSETUP5_BIT31,
- NV_SETUP5_DELAY, NV_SETUP5_DELAYMAX,
- KERN_INFO "open: SetupReg5, Bit 31 remained off\n");
-
- writel(0, base + NvRegUnknownSetupReg4);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
-
- /* 6) continue setup */
- writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1);
- writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus);
- writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags);
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
-
- writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus);
- get_random_bytes(&i, sizeof(i));
- writel(NVREG_RNDSEED_FORCE | (i&NVREG_RNDSEED_MASK), base + NvRegRandomSeed);
- writel(NVREG_UNKSETUP1_VAL, base + NvRegUnknownSetupReg1);
- writel(NVREG_UNKSETUP2_VAL, base + NvRegUnknownSetupReg2);
- if (poll_interval == -1) {
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT)
- writel(NVREG_POLL_DEFAULT_THROUGHPUT, base + NvRegPollingInterval);
- else
- writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
- }
- else
- writel(poll_interval & 0xFFFF, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
- writel((np->phyaddr << NVREG_ADAPTCTL_PHYSHIFT)|NVREG_ADAPTCTL_PHYVALID|NVREG_ADAPTCTL_RUNNING,
- base + NvRegAdapterControl);
- writel(NVREG_MIISPEED_BIT8|NVREG_MIIDELAY, base + NvRegMIISpeed);
- writel(NVREG_UNKSETUP4_VAL, base + NvRegUnknownSetupReg4);
- writel(NVREG_WAKEUPFLAGS_VAL, base + NvRegWakeUpFlags);
-
- i = readl(base + NvRegPowerState);
- if ( (i & NVREG_POWERSTATE_POWEREDUP) == 0)
- writel(NVREG_POWERSTATE_POWEREDUP|i, base + NvRegPowerState);
-
- pci_push(base);
- udelay(10);
- writel(readl(base + NvRegPowerState) | NVREG_POWERSTATE_VALID, base + NvRegPowerState);
-
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- pci_push(base);
-
- if (!np->ecdev) {
- if (nv_request_irq(dev)) {
- goto out_drain;
- }
-
- /* ask for interrupts */
- nv_enable_hw_interrupts(dev, np->irqmask);
-
- spin_lock_irq(&np->lock);
- }
-
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags);
- /* One manual link speed update: Interrupts are enabled, future link
- * speed changes cause interrupts and are handled by nv_link_irq().
- */
- {
- u32 miistat;
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat);
- }
- /* set linkspeed to invalid value, thus force nv_update_linkspeed
- * to init hw */
- np->linkspeed = 0;
- ret = nv_update_linkspeed(dev);
- nv_start_rx(dev);
- nv_start_tx(dev);
-
- if (np->ecdev) {
- ecdev_set_link(np->ecdev, ret);
- }
- else {
- netif_start_queue(dev);
- if (ret) {
- netif_carrier_on(dev);
- } else {
- printk("%s: no link during initialization.\n", dev->name);
- netif_carrier_off(dev);
- }
- if (oom)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
-
- return 0;
-out_drain:
- drain_ring(dev);
- return ret;
-}
-
-static int nv_close(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base;
-
- if (!np->ecdev) {
- spin_lock_irq(&np->lock);
- np->in_shutdown = 1;
- spin_unlock_irq(&np->lock);
- synchronize_irq(dev->irq);
-
- del_timer_sync(&np->oom_kick);
- del_timer_sync(&np->nic_poll);
-
- netif_stop_queue(dev);
- spin_lock_irq(&np->lock);
- }
-
- nv_stop_tx(dev);
- nv_stop_rx(dev);
- nv_txrx_reset(dev);
-
- base = get_hwbase(dev);
-
- if (!np->ecdev) {
- /* disable interrupts on the nic or we will lock up */
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- dprintk(KERN_INFO "%s: Irqmask is zero again\n", dev->name);
-
- spin_unlock_irq(&np->lock);
-
- nv_free_irq(dev);
- }
-
- drain_ring(dev);
-
- if (np->wolenabled)
- nv_start_rx(dev);
-
- /* special op: write back the misordered MAC address - otherwise
- * the next nv_probe would see a wrong address.
- */
- writel(np->orig_mac[0], base + NvRegMacAddrA);
- writel(np->orig_mac[1], base + NvRegMacAddrB);
-
- /* FIXME: power down nic */
-
- return 0;
-}
-
-static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
-{
- struct net_device *dev;
- struct fe_priv *np;
- unsigned long addr;
- u8 __iomem *base;
- int err, i;
- u32 powerstate;
-
- board_idx++;
-
- dev = alloc_etherdev(sizeof(struct fe_priv));
- err = -ENOMEM;
- if (!dev)
- goto out;
-
- np = netdev_priv(dev);
- np->pci_dev = pci_dev;
- spin_lock_init(&np->lock);
- SET_MODULE_OWNER(dev);
- SET_NETDEV_DEV(dev, &pci_dev->dev);
-
- init_timer(&np->oom_kick);
- np->oom_kick.data = (unsigned long) dev;
- np->oom_kick.function = &nv_do_rx_refill; /* timer handler */
- init_timer(&np->nic_poll);
- np->nic_poll.data = (unsigned long) dev;
- np->nic_poll.function = &nv_do_nic_poll; /* timer handler */
-
- err = pci_enable_device(pci_dev);
- if (err) {
- printk(KERN_INFO "forcedeth: pci_enable_dev failed (%d) for device %s\n",
- err, pci_name(pci_dev));
- goto out_free;
- }
-
- pci_set_master(pci_dev);
-
- err = pci_request_regions(pci_dev, DRV_NAME);
- if (err < 0)
- goto out_disable;
-
- if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL))
- np->register_size = NV_PCI_REGSZ_VER2;
- else
- np->register_size = NV_PCI_REGSZ_VER1;
-
- err = -EINVAL;
- addr = 0;
- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- dprintk(KERN_DEBUG "%s: resource %d start %p len %ld flags 0x%08lx.\n",
- pci_name(pci_dev), i, (void*)pci_resource_start(pci_dev, i),
- pci_resource_len(pci_dev, i),
- pci_resource_flags(pci_dev, i));
- if (pci_resource_flags(pci_dev, i) & IORESOURCE_MEM &&
- pci_resource_len(pci_dev, i) >= np->register_size) {
- addr = pci_resource_start(pci_dev, i);
- break;
- }
- }
- if (i == DEVICE_COUNT_RESOURCE) {
- printk(KERN_INFO "forcedeth: Couldn't find register window for device %s.\n",
- pci_name(pci_dev));
- goto out_relreg;
- }
-
- /* copy of driver data */
- np->driver_data = id->driver_data;
-
- /* handle different descriptor versions */
- if (id->driver_data & DEV_HAS_HIGH_DMA) {
- /* packet format 3: supports 40-bit addressing */
- np->desc_ver = DESC_VER_3;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_3;
- if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n",
- pci_name(pci_dev));
- } else {
- dev->features |= NETIF_F_HIGHDMA;
- printk(KERN_INFO "forcedeth: using HIGHDMA\n");
- }
- if (pci_set_consistent_dma_mask(pci_dev, 0x0000007fffffffffULL)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA (consistent) failed for device %s.\n",
- pci_name(pci_dev));
- }
- } else if (id->driver_data & DEV_HAS_LARGEDESC) {
- /* packet format 2: supports jumbo frames */
- np->desc_ver = DESC_VER_2;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_2;
- } else {
- /* original packet format */
- np->desc_ver = DESC_VER_1;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_1;
- }
-
- np->pkt_limit = NV_PKTLIMIT_1;
- if (id->driver_data & DEV_HAS_LARGEDESC)
- np->pkt_limit = NV_PKTLIMIT_2;
-
- if (id->driver_data & DEV_HAS_CHECKSUM) {
- np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
- dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;
-#ifdef NETIF_F_TSO
- dev->features |= NETIF_F_TSO;
-#endif
- }
-
- np->vlanctl_bits = 0;
- if (id->driver_data & DEV_HAS_VLAN) {
- np->vlanctl_bits = NVREG_VLANCONTROL_ENABLE;
- dev->features |= NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX;
- dev->vlan_rx_register = nv_vlan_rx_register;
- dev->vlan_rx_kill_vid = nv_vlan_rx_kill_vid;
- }
-
- np->msi_flags = 0;
- if ((id->driver_data & DEV_HAS_MSI) && !disable_msi) {
- np->msi_flags |= NV_MSI_CAPABLE;
- }
- if ((id->driver_data & DEV_HAS_MSI_X) && !disable_msix) {
- np->msi_flags |= NV_MSI_X_CAPABLE;
- }
-
- err = -ENOMEM;
- np->base = ioremap(addr, np->register_size);
- if (!np->base)
- goto out_relreg;
- dev->base_addr = (unsigned long)np->base;
-
- dev->irq = pci_dev->irq;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc) * (RX_RING + TX_RING),
- &np->ring_addr);
- if (!np->rx_ring.orig)
- goto out_unmap;
- np->tx_ring.orig = &np->rx_ring.orig[RX_RING];
- } else {
- np->rx_ring.ex = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc_ex) * (RX_RING + TX_RING),
- &np->ring_addr);
- if (!np->rx_ring.ex)
- goto out_unmap;
- np->tx_ring.ex = &np->rx_ring.ex[RX_RING];
- }
-
- dev->open = nv_open;
- dev->stop = nv_close;
- dev->hard_start_xmit = nv_start_xmit;
- dev->get_stats = nv_get_stats;
- dev->change_mtu = nv_change_mtu;
- dev->set_mac_address = nv_set_mac_address;
- dev->set_multicast_list = nv_set_multicast;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = nv_poll_controller;
-#endif
- SET_ETHTOOL_OPS(dev, &ops);
- dev->tx_timeout = nv_tx_timeout;
- dev->watchdog_timeo = NV_WATCHDOG_TIMEO;
-
- pci_set_drvdata(pci_dev, dev);
-
- /* read the mac address */
- base = get_hwbase(dev);
- np->orig_mac[0] = readl(base + NvRegMacAddrA);
- np->orig_mac[1] = readl(base + NvRegMacAddrB);
-
- dev->dev_addr[0] = (np->orig_mac[1] >> 8) & 0xff;
- dev->dev_addr[1] = (np->orig_mac[1] >> 0) & 0xff;
- dev->dev_addr[2] = (np->orig_mac[0] >> 24) & 0xff;
- dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff;
- dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff;
- dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff;
- memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
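	/* A minimal sketch (hypothetical address, not read from any real NIC)
	 * of the byte order implied by the shifts above: with
	 * NvRegMacAddrB = 0x00000004 and NvRegMacAddrA = 0x4b123456, the code
	 * yields the station address 00:04:4b:12:34:56. */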
-
- if (!is_valid_ether_addr(dev->perm_addr)) {
- /*
- * Bad mac address. At least one bios sets the mac address
- * to 01:23:45:67:89:ab
- */
- printk(KERN_ERR "%s: Invalid Mac address detected: %02x:%02x:%02x:%02x:%02x:%02x\n",
- pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
- printk(KERN_ERR "Please complain to your hardware vendor. Switching to a random MAC.\n");
- dev->dev_addr[0] = 0x00;
- dev->dev_addr[1] = 0x00;
- dev->dev_addr[2] = 0x6c;
- get_random_bytes(&dev->dev_addr[3], 3);
- }
-
- dprintk(KERN_DEBUG "%s: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
-
- /* disable WOL */
- writel(0, base + NvRegWakeUpFlags);
- np->wolenabled = 0;
-
- if (id->driver_data & DEV_HAS_POWER_CNTRL) {
- u8 revision_id;
- pci_read_config_byte(pci_dev, PCI_REVISION_ID, &revision_id);
-
- /* take phy and nic out of low power mode */
- powerstate = readl(base + NvRegPowerState2);
- powerstate &= ~NVREG_POWERSTATE2_POWERUP_MASK;
- if ((id->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 ||
- id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) &&
- revision_id >= 0xA3)
- powerstate |= NVREG_POWERSTATE2_POWERUP_REV_A3;
- writel(powerstate, base + NvRegPowerState2);
- }
-
- if (np->desc_ver == DESC_VER_1) {
- np->tx_flags = NV_TX_VALID;
- } else {
- np->tx_flags = NV_TX2_VALID;
- }
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) {
- np->irqmask = NVREG_IRQMASK_THROUGHPUT;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0003;
- } else {
- np->irqmask = NVREG_IRQMASK_CPU;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0001;
- }
-
- if (id->driver_data & DEV_NEED_TIMERIRQ)
- np->irqmask |= NVREG_IRQ_TIMER;
- if (id->driver_data & DEV_NEED_LINKTIMER) {
- dprintk(KERN_INFO "%s: link timer on.\n", pci_name(pci_dev));
- np->need_linktimer = 1;
- np->link_timeout = jiffies + LINK_TIMEOUT;
- } else {
- dprintk(KERN_INFO "%s: link timer off.\n", pci_name(pci_dev));
- np->need_linktimer = 0;
- }
-
- /* find a suitable phy */
- for (i = 1; i <= 32; i++) {
- int id1, id2;
- int phyaddr = i & 0x1F;
-
- spin_lock_irq(&np->lock);
- id1 = mii_rw(dev, phyaddr, MII_PHYSID1, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id1 < 0 || id1 == 0xffff)
- continue;
- spin_lock_irq(&np->lock);
- id2 = mii_rw(dev, phyaddr, MII_PHYSID2, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id2 < 0 || id2 == 0xffff)
- continue;
-
- id1 = (id1 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT;
- id2 = (id2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT;
- dprintk(KERN_DEBUG "%s: open: Found PHY %04x:%04x at address %d.\n",
- pci_name(pci_dev), id1, id2, phyaddr);
- np->phyaddr = phyaddr;
- np->phy_oui = id1 | id2;
- break;
- }
- if (i == 33) {
- printk(KERN_INFO "%s: open: Could not find a valid PHY.\n",
- pci_name(pci_dev));
- goto out_freering;
- }
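	/* Worked example of the OUI assembly above (sketch only; the register
	 * values are assumed, typical Marvell IDs): with MII_PHYSID1 = 0x0141
	 * and MII_PHYSID2 = 0x0cc2,
	 *   ((0x0141 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT) = 0x5040
	 *   ((0x0cc2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT) = 0x0003
	 * so np->phy_oui becomes 0x5043, i.e. the driver's PHY_OUI_MARVELL
	 * value. */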
-
- /* reset it */
- phy_init(dev);
-
- /* set default link speed settings */
- np->linkspeed = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- np->duplex = 0;
- np->autoneg = 1;
-
- // offer device to EtherCAT master module
- np->ecdev = ecdev_offer(dev, ec_poll, THIS_MODULE);
- if (np->ecdev) {
- if (ecdev_open(np->ecdev)) {
- ecdev_withdraw(np->ecdev);
- goto out_freering;
- }
- } else {
- err = register_netdev(dev);
- if (err) {
- printk(KERN_INFO "forcedeth: unable to register netdev: %d\n", err);
- goto out_freering;
- }
- }
- printk(KERN_INFO "%s: forcedeth.c: subsystem: %05x:%04x bound to %s\n",
- dev->name, pci_dev->subsystem_vendor, pci_dev->subsystem_device,
- pci_name(pci_dev));
-
- return 0;
-
-out_freering:
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING),
- np->rx_ring.orig, np->ring_addr);
- else
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (RX_RING + TX_RING),
- np->rx_ring.ex, np->ring_addr);
- pci_set_drvdata(pci_dev, NULL);
-out_unmap:
- iounmap(get_hwbase(dev));
-out_relreg:
- pci_release_regions(pci_dev);
-out_disable:
- pci_disable_device(pci_dev);
-out_free:
- free_netdev(dev);
-out:
- return err;
-}
-
-static void __devexit nv_remove(struct pci_dev *pci_dev)
-{
- struct net_device *dev = pci_get_drvdata(pci_dev);
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->ecdev) {
- ecdev_close(np->ecdev);
- ecdev_withdraw(np->ecdev);
- }
- else {
- unregister_netdev(dev);
- }
-
- /* free all structures */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING), np->rx_ring.orig, np->ring_addr);
- else
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (RX_RING + TX_RING), np->rx_ring.ex, np->ring_addr);
- iounmap(get_hwbase(dev));
- pci_release_regions(pci_dev);
- pci_disable_device(pci_dev);
- free_netdev(dev);
- pci_set_drvdata(pci_dev, NULL);
-}
-
-static struct pci_device_id pci_tbl[] = {
- { /* nForce Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce2 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_2),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_3),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_4),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_5),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_6),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_7),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_8),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_9),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_10),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_11),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_12),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_13),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_14),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL,
- },
- {0,},
-};
-
-static struct pci_driver driver = {
- .name = "forcedeth",
- .id_table = pci_tbl,
- .probe = nv_probe,
- .remove = __devexit_p(nv_remove),
-};
-
-
-static int __init init_nic(void)
-{
- printk(KERN_INFO "forcedeth: EtherCAT-capable nForce ethernet driver."
- " Version %s, master %s.\n",
- FORCEDETH_VERSION, EC_MASTER_VERSION);
- return pci_module_init(&driver);
-}
-
-static void __exit exit_nic(void)
-{
- pci_unregister_driver(&driver);
-}
-
-module_param(max_interrupt_work, int, 0);
-MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt");
-module_param(optimization_mode, int, 0);
-MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer.");
-module_param(poll_interval, int, 0);
-MODULE_PARM_DESC(poll_interval, "Interval determines how frequently the timer interrupt is generated by [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535.");
-module_param(disable_msi, int, 0);
-MODULE_PARM_DESC(disable_msi, "Disable MSI interrupts by setting to 1.");
-module_param(disable_msix, int, 0);
-MODULE_PARM_DESC(disable_msix, "Disable MSIX interrupts by setting to 1.");
-
-MODULE_AUTHOR("Dipl.-Ing. (FH) Florian Pose <fp@igh-essen.com>");
-MODULE_DESCRIPTION("EtherCAT-capable nForce ethernet driver");
-MODULE_LICENSE("GPL");
-
-//MODULE_DEVICE_TABLE(pci, pci_tbl); // prevent auto-loading
-
-module_init(init_nic);
-module_exit(exit_nic);
--- a/devices/forcedeth-2.6.17-orig.c Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,3423 +0,0 @@
-/*
- * forcedeth: Ethernet driver for NVIDIA nForce media access controllers.
- *
- * Note: This driver is a cleanroom reimplementation based on reverse
- * engineered documentation written by Carl-Daniel Hailfinger
- * and Andrew de Quincey. It's neither supported nor endorsed
- * by NVIDIA Corp. Use at your own risk.
- *
- * NVIDIA, nForce and other NVIDIA marks are trademarks or registered
- * trademarks of NVIDIA Corporation in the United States and other
- * countries.
- *
- * Copyright (C) 2003,4,5 Manfred Spraul
- * Copyright (C) 2004 Andrew de Quincey (wol support)
- * Copyright (C) 2004 Carl-Daniel Hailfinger (invalid MAC handling, insane
- * IRQ rate fixes, bigendian fixes, cleanups, verification)
- * Copyright (c) 2004 NVIDIA Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
- * Changelog:
- * 0.01: 05 Oct 2003: First release that compiles without warnings.
- * 0.02: 05 Oct 2003: Fix bug for nv_drain_tx: do not try to free NULL skbs.
- * Check all PCI BARs for the register window.
- * udelay added to mii_rw.
- * 0.03: 06 Oct 2003: Initialize dev->irq.
- * 0.04: 07 Oct 2003: Initialize np->lock, reduce handled irqs, add printks.
- * 0.05: 09 Oct 2003: printk removed again, irq status print tx_timeout.
- * 0.06: 10 Oct 2003: MAC Address read updated, pff flag generation updated,
- * irq mask updated
- * 0.07: 14 Oct 2003: Further irq mask updates.
- * 0.08: 20 Oct 2003: rx_desc.Length initialization added, nv_alloc_rx refill
- * added into irq handler, NULL check for drain_ring.
- * 0.09: 20 Oct 2003: Basic link speed irq implementation. Only handle the
- * requested interrupt sources.
- * 0.10: 20 Oct 2003: First cleanup for release.
- * 0.11: 21 Oct 2003: hexdump for tx added, rx buffer sizes increased.
- * MAC Address init fix, set_multicast cleanup.
- * 0.12: 23 Oct 2003: Cleanups for release.
- * 0.13: 25 Oct 2003: Limit for concurrent tx packets increased to 10.
- * Set link speed correctly. start rx before starting
- * tx (nv_start_rx sets the link speed).
- * 0.14: 25 Oct 2003: Nic dependent irq mask.
- * 0.15: 08 Nov 2003: fix smp deadlock with set_multicast_list during
- * open.
- * 0.16: 15 Nov 2003: include file cleanup for ppc64, rx buffer size
- * increased to 1628 bytes.
- * 0.17: 16 Nov 2003: undo rx buffer size increase. Subtract 1 from
- * the tx length.
- * 0.18: 17 Nov 2003: fix oops due to late initialization of dev_stats
- * 0.19: 29 Nov 2003: Handle RxNoBuf, detect & handle invalid mac
- * addresses, really stop rx if already running
- * in nv_start_rx, clean up a bit.
- * 0.20: 07 Dec 2003: alloc fixes
- * 0.21: 12 Jan 2004: additional alloc fix, nic polling fix.
- * 0.22: 19 Jan 2004: reprogram timer to a sane rate, avoid lockup
- * on close.
- * 0.23: 26 Jan 2004: various small cleanups
- * 0.24: 27 Feb 2004: make driver even less anonymous in backtraces
- * 0.25: 09 Mar 2004: wol support
- * 0.26: 03 Jun 2004: netdriver specific annotation, sparse-related fixes
- * 0.27: 19 Jun 2004: Gigabit support, new descriptor rings,
- * added CK804/MCP04 device IDs, code fixes
- * for registers, link status and other minor fixes.
- * 0.28: 21 Jun 2004: Big cleanup, making driver mostly endian safe
- * 0.29: 31 Aug 2004: Add backup timer for link change notification.
- * 0.30: 25 Sep 2004: rx checksum support for nf 250 Gb. Add rx reset
- * into nv_close, otherwise reenabling for wol can
- * cause DMA to kfree'd memory.
- * 0.31: 14 Nov 2004: ethtool support for getting/setting link
- * capabilities.
- * 0.32: 16 Apr 2005: RX_ERROR4 handling added.
- * 0.33: 16 May 2005: Support for MCP51 added.
- * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics.
- * 0.35: 26 Jun 2005: Support for MCP55 added.
- * 0.36: 28 Jun 2005: Add jumbo frame support.
- * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list
- * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of
- * per-packet flags.
- * 0.39: 18 Jul 2005: Add 64bit descriptor support.
- * 0.40: 19 Jul 2005: Add support for mac address change.
- * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead
- * of nv_remove
- * 0.42: 06 Aug 2005: Fix lack of link speed initialization
- * in the second (and later) nv_open call
- * 0.43: 10 Aug 2005: Add support for tx checksum.
- * 0.44: 20 Aug 2005: Add support for scatter gather and segmentation.
- * 0.45: 18 Sep 2005: Remove nv_stop/start_rx from every link check
- * 0.46: 20 Oct 2005: Add irq optimization modes.
- * 0.47: 26 Oct 2005: Add phyaddr 0 in phy scan.
- * 0.48: 24 Dec 2005: Disable TSO, bugfix for pci_map_single
- * 0.49: 10 Dec 2005: Fix tso for large buffers.
- * 0.50: 20 Jan 2006: Add 8021pq tagging support.
- * 0.51: 20 Jan 2006: Add 64bit consistent memory allocation for rings.
- * 0.52: 20 Jan 2006: Add MSI/MSIX support.
- * 0.53: 19 Mar 2006: Fix init from low power mode and add hw reset.
- * 0.54: 21 Mar 2006: Fix spin locks for multi irqs and cleanup.
- *
- * Known bugs:
- * We suspect that on some hardware no TX done interrupts are generated.
- * This means recovery from netif_stop_queue only happens if the hw timer
- * interrupt fires (100 times/second, configurable with NVREG_POLL_DEFAULT)
- * and the timer is active in the IRQMask, or if a rx packet arrives by chance.
- * If your hardware reliably generates tx done interrupts, then you can remove
- * DEV_NEED_TIMERIRQ from the driver_data flags.
- * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few
- * superfluous timer interrupts from the nic.
- */
-#define FORCEDETH_VERSION "0.54"
-#define DRV_NAME "forcedeth"
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/delay.h>
-#include <linux/spinlock.h>
-#include <linux/ethtool.h>
-#include <linux/timer.h>
-#include <linux/skbuff.h>
-#include <linux/mii.h>
-#include <linux/random.h>
-#include <linux/init.h>
-#include <linux/if_vlan.h>
-#include <linux/dma-mapping.h>
-
-#include <asm/irq.h>
-#include <asm/io.h>
-#include <asm/uaccess.h>
-#include <asm/system.h>
-
-#if 0
-#define dprintk printk
-#else
-#define dprintk(x...) do { } while (0)
-#endif
-
-
-/*
- * Hardware access:
- */
-
-#define DEV_NEED_TIMERIRQ 0x0001 /* set the timer irq flag in the irq mask */
-#define DEV_NEED_LINKTIMER 0x0002 /* poll link settings. Relies on the timer irq */
-#define DEV_HAS_LARGEDESC 0x0004 /* device supports jumbo frames and needs packet format 2 */
-#define DEV_HAS_HIGH_DMA 0x0008 /* device supports 64bit dma */
-#define DEV_HAS_CHECKSUM 0x0010 /* device supports tx and rx checksum offloads */
-#define DEV_HAS_VLAN 0x0020 /* device supports vlan tagging and striping */
-#define DEV_HAS_MSI 0x0040 /* device supports MSI */
-#define DEV_HAS_MSI_X 0x0080 /* device supports MSI-X */
-#define DEV_HAS_POWER_CNTRL 0x0100 /* device supports power savings */
-
-enum {
- NvRegIrqStatus = 0x000,
-#define NVREG_IRQSTAT_MIIEVENT 0x040
-#define NVREG_IRQSTAT_MASK 0x1ff
- NvRegIrqMask = 0x004,
-#define NVREG_IRQ_RX_ERROR 0x0001
-#define NVREG_IRQ_RX 0x0002
-#define NVREG_IRQ_RX_NOBUF 0x0004
-#define NVREG_IRQ_TX_ERR 0x0008
-#define NVREG_IRQ_TX_OK 0x0010
-#define NVREG_IRQ_TIMER 0x0020
-#define NVREG_IRQ_LINK 0x0040
-#define NVREG_IRQ_RX_FORCED 0x0080
-#define NVREG_IRQ_TX_FORCED 0x0100
-#define NVREG_IRQMASK_THROUGHPUT 0x00df
-#define NVREG_IRQMASK_CPU 0x0040
-#define NVREG_IRQ_TX_ALL (NVREG_IRQ_TX_ERR|NVREG_IRQ_TX_OK|NVREG_IRQ_TX_FORCED)
-#define NVREG_IRQ_RX_ALL (NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_RX_FORCED)
-#define NVREG_IRQ_OTHER (NVREG_IRQ_TIMER|NVREG_IRQ_LINK)
-
-#define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \
- NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \
- NVREG_IRQ_TX_FORCED))
-
- NvRegUnknownSetupReg6 = 0x008,
-#define NVREG_UNKSETUP6_VAL 3
-
-/*
- * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic
- * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms
- */
- NvRegPollingInterval = 0x00c,
-#define NVREG_POLL_DEFAULT_THROUGHPUT 970
-#define NVREG_POLL_DEFAULT_CPU 13
- NvRegMSIMap0 = 0x020,
- NvRegMSIMap1 = 0x024,
- NvRegMSIIrqMask = 0x030,
-#define NVREG_MSI_VECTOR_0_ENABLED 0x01
- NvRegMisc1 = 0x080,
-#define NVREG_MISC1_HD 0x02
-#define NVREG_MISC1_FORCE 0x3b0f3c
-
- NvRegMacReset = 0x3c,
-#define NVREG_MAC_RESET_ASSERT 0x0F3
- NvRegTransmitterControl = 0x084,
-#define NVREG_XMITCTL_START 0x01
- NvRegTransmitterStatus = 0x088,
-#define NVREG_XMITSTAT_BUSY 0x01
-
- NvRegPacketFilterFlags = 0x8c,
-#define NVREG_PFF_ALWAYS 0x7F0008
-#define NVREG_PFF_PROMISC 0x80
-#define NVREG_PFF_MYADDR 0x20
-
- NvRegOffloadConfig = 0x90,
-#define NVREG_OFFLOAD_HOMEPHY 0x601
-#define NVREG_OFFLOAD_NORMAL RX_NIC_BUFSIZE
- NvRegReceiverControl = 0x094,
-#define NVREG_RCVCTL_START 0x01
- NvRegReceiverStatus = 0x98,
-#define NVREG_RCVSTAT_BUSY 0x01
-
- NvRegRandomSeed = 0x9c,
-#define NVREG_RNDSEED_MASK 0x00ff
-#define NVREG_RNDSEED_FORCE 0x7f00
-#define NVREG_RNDSEED_FORCE2 0x2d00
-#define NVREG_RNDSEED_FORCE3 0x7400
-
- NvRegUnknownSetupReg1 = 0xA0,
-#define NVREG_UNKSETUP1_VAL 0x16070f
- NvRegUnknownSetupReg2 = 0xA4,
-#define NVREG_UNKSETUP2_VAL 0x16
- NvRegMacAddrA = 0xA8,
- NvRegMacAddrB = 0xAC,
- NvRegMulticastAddrA = 0xB0,
-#define NVREG_MCASTADDRA_FORCE 0x01
- NvRegMulticastAddrB = 0xB4,
- NvRegMulticastMaskA = 0xB8,
- NvRegMulticastMaskB = 0xBC,
-
- NvRegPhyInterface = 0xC0,
-#define PHY_RGMII 0x10000000
-
- NvRegTxRingPhysAddr = 0x100,
- NvRegRxRingPhysAddr = 0x104,
- NvRegRingSizes = 0x108,
-#define NVREG_RINGSZ_TXSHIFT 0
-#define NVREG_RINGSZ_RXSHIFT 16
- NvRegUnknownTransmitterReg = 0x10c,
- NvRegLinkSpeed = 0x110,
-#define NVREG_LINKSPEED_FORCE 0x10000
-#define NVREG_LINKSPEED_10 1000
-#define NVREG_LINKSPEED_100 100
-#define NVREG_LINKSPEED_1000 50
-#define NVREG_LINKSPEED_MASK (0xFFF)
- NvRegUnknownSetupReg5 = 0x130,
-#define NVREG_UNKSETUP5_BIT31 (1<<31)
- NvRegUnknownSetupReg3 = 0x13c,
-#define NVREG_UNKSETUP3_VAL1 0x200010
- NvRegTxRxControl = 0x144,
-#define NVREG_TXRXCTL_KICK 0x0001
-#define NVREG_TXRXCTL_BIT1 0x0002
-#define NVREG_TXRXCTL_BIT2 0x0004
-#define NVREG_TXRXCTL_IDLE 0x0008
-#define NVREG_TXRXCTL_RESET 0x0010
-#define NVREG_TXRXCTL_RXCHECK 0x0400
-#define NVREG_TXRXCTL_DESC_1 0
-#define NVREG_TXRXCTL_DESC_2 0x02100
-#define NVREG_TXRXCTL_DESC_3 0x02200
-#define NVREG_TXRXCTL_VLANSTRIP 0x00040
-#define NVREG_TXRXCTL_VLANINS 0x00080
- NvRegTxRingPhysAddrHigh = 0x148,
- NvRegRxRingPhysAddrHigh = 0x14C,
- NvRegMIIStatus = 0x180,
-#define NVREG_MIISTAT_ERROR 0x0001
-#define NVREG_MIISTAT_LINKCHANGE 0x0008
-#define NVREG_MIISTAT_MASK 0x000f
-#define NVREG_MIISTAT_MASK2 0x000f
- NvRegUnknownSetupReg4 = 0x184,
-#define NVREG_UNKSETUP4_VAL 8
-
- NvRegAdapterControl = 0x188,
-#define NVREG_ADAPTCTL_START 0x02
-#define NVREG_ADAPTCTL_LINKUP 0x04
-#define NVREG_ADAPTCTL_PHYVALID 0x40000
-#define NVREG_ADAPTCTL_RUNNING 0x100000
-#define NVREG_ADAPTCTL_PHYSHIFT 24
- NvRegMIISpeed = 0x18c,
-#define NVREG_MIISPEED_BIT8 (1<<8)
-#define NVREG_MIIDELAY 5
- NvRegMIIControl = 0x190,
-#define NVREG_MIICTL_INUSE 0x08000
-#define NVREG_MIICTL_WRITE 0x00400
-#define NVREG_MIICTL_ADDRSHIFT 5
- NvRegMIIData = 0x194,
- NvRegWakeUpFlags = 0x200,
-#define NVREG_WAKEUPFLAGS_VAL 0x7770
-#define NVREG_WAKEUPFLAGS_BUSYSHIFT 24
-#define NVREG_WAKEUPFLAGS_ENABLESHIFT 16
-#define NVREG_WAKEUPFLAGS_D3SHIFT 12
-#define NVREG_WAKEUPFLAGS_D2SHIFT 8
-#define NVREG_WAKEUPFLAGS_D1SHIFT 4
-#define NVREG_WAKEUPFLAGS_D0SHIFT 0
-#define NVREG_WAKEUPFLAGS_ACCEPT_MAGPAT 0x01
-#define NVREG_WAKEUPFLAGS_ACCEPT_WAKEUPPAT 0x02
-#define NVREG_WAKEUPFLAGS_ACCEPT_LINKCHANGE 0x04
-#define NVREG_WAKEUPFLAGS_ENABLE 0x1111
-
- NvRegPatternCRC = 0x204,
- NvRegPatternMask = 0x208,
- NvRegPowerCap = 0x268,
-#define NVREG_POWERCAP_D3SUPP (1<<30)
-#define NVREG_POWERCAP_D2SUPP (1<<26)
-#define NVREG_POWERCAP_D1SUPP (1<<25)
- NvRegPowerState = 0x26c,
-#define NVREG_POWERSTATE_POWEREDUP 0x8000
-#define NVREG_POWERSTATE_VALID 0x0100
-#define NVREG_POWERSTATE_MASK 0x0003
-#define NVREG_POWERSTATE_D0 0x0000
-#define NVREG_POWERSTATE_D1 0x0001
-#define NVREG_POWERSTATE_D2 0x0002
-#define NVREG_POWERSTATE_D3 0x0003
- NvRegVlanControl = 0x300,
-#define NVREG_VLANCONTROL_ENABLE 0x2000
- NvRegMSIXMap0 = 0x3e0,
- NvRegMSIXMap1 = 0x3e4,
- NvRegMSIXIrqStatus = 0x3f0,
-
- NvRegPowerState2 = 0x600,
-#define NVREG_POWERSTATE2_POWERUP_MASK 0x0F11
-#define NVREG_POWERSTATE2_POWERUP_REV_A3 0x0001
-};
-
-/* Big endian: should work, but is untested */
-struct ring_desc {
- u32 PacketBuffer;
- u32 FlagLen;
-};
-
-struct ring_desc_ex {
- u32 PacketBufferHigh;
- u32 PacketBufferLow;
- u32 TxVlan;
- u32 FlagLen;
-};
-
-typedef union _ring_type {
- struct ring_desc* orig;
- struct ring_desc_ex* ex;
-} ring_type;
-
-#define FLAG_MASK_V1 0xffff0000
-#define FLAG_MASK_V2 0xffffc000
-#define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1)
-#define LEN_MASK_V2 (0xffffffff ^ FLAG_MASK_V2)
-
-#define NV_TX_LASTPACKET (1<<16)
-#define NV_TX_RETRYERROR (1<<19)
-#define NV_TX_FORCED_INTERRUPT (1<<24)
-#define NV_TX_DEFERRED (1<<26)
-#define NV_TX_CARRIERLOST (1<<27)
-#define NV_TX_LATECOLLISION (1<<28)
-#define NV_TX_UNDERFLOW (1<<29)
-#define NV_TX_ERROR (1<<30)
-#define NV_TX_VALID (1<<31)
-
-#define NV_TX2_LASTPACKET (1<<29)
-#define NV_TX2_RETRYERROR (1<<18)
-#define NV_TX2_FORCED_INTERRUPT (1<<30)
-#define NV_TX2_DEFERRED (1<<25)
-#define NV_TX2_CARRIERLOST (1<<26)
-#define NV_TX2_LATECOLLISION (1<<27)
-#define NV_TX2_UNDERFLOW (1<<28)
-/* error and valid are the same for both */
-#define NV_TX2_ERROR (1<<30)
-#define NV_TX2_VALID (1<<31)
-#define NV_TX2_TSO (1<<28)
-#define NV_TX2_TSO_SHIFT 14
-#define NV_TX2_TSO_MAX_SHIFT 14
-#define NV_TX2_TSO_MAX_SIZE (1<<NV_TX2_TSO_MAX_SHIFT)
-#define NV_TX2_CHECKSUM_L3 (1<<27)
-#define NV_TX2_CHECKSUM_L4 (1<<26)
-
-#define NV_TX3_VLAN_TAG_PRESENT (1<<18)
-
-#define NV_RX_DESCRIPTORVALID (1<<16)
-#define NV_RX_MISSEDFRAME (1<<17)
-#define NV_RX_SUBSTRACT1 (1<<18)
-#define NV_RX_ERROR1 (1<<23)
-#define NV_RX_ERROR2 (1<<24)
-#define NV_RX_ERROR3 (1<<25)
-#define NV_RX_ERROR4 (1<<26)
-#define NV_RX_CRCERR (1<<27)
-#define NV_RX_OVERFLOW (1<<28)
-#define NV_RX_FRAMINGERR (1<<29)
-#define NV_RX_ERROR (1<<30)
-#define NV_RX_AVAIL (1<<31)
-
-#define NV_RX2_CHECKSUMMASK (0x1C000000)
-#define NV_RX2_CHECKSUMOK1 (0x10000000)
-#define NV_RX2_CHECKSUMOK2 (0x14000000)
-#define NV_RX2_CHECKSUMOK3 (0x18000000)
-#define NV_RX2_DESCRIPTORVALID (1<<29)
-#define NV_RX2_SUBSTRACT1 (1<<25)
-#define NV_RX2_ERROR1 (1<<18)
-#define NV_RX2_ERROR2 (1<<19)
-#define NV_RX2_ERROR3 (1<<20)
-#define NV_RX2_ERROR4 (1<<21)
-#define NV_RX2_CRCERR (1<<22)
-#define NV_RX2_OVERFLOW (1<<23)
-#define NV_RX2_FRAMINGERR (1<<24)
-/* error and avail are the same for both */
-#define NV_RX2_ERROR (1<<30)
-#define NV_RX2_AVAIL (1<<31)
-
-#define NV_RX3_VLAN_TAG_PRESENT (1<<16)
-#define NV_RX3_VLAN_TAG_MASK (0x0000FFFF)
-
-/* Miscellaneous hardware-related defines: */
-#define NV_PCI_REGSZ_VER1 0x270
-#define NV_PCI_REGSZ_VER2 0x604
-
-/* various timeout delays: all in usec */
-#define NV_TXRX_RESET_DELAY 4
-#define NV_TXSTOP_DELAY1 10
-#define NV_TXSTOP_DELAY1MAX 500000
-#define NV_TXSTOP_DELAY2 100
-#define NV_RXSTOP_DELAY1 10
-#define NV_RXSTOP_DELAY1MAX 500000
-#define NV_RXSTOP_DELAY2 100
-#define NV_SETUP5_DELAY 5
-#define NV_SETUP5_DELAYMAX 50000
-#define NV_POWERUP_DELAY 5
-#define NV_POWERUP_DELAYMAX 5000
-#define NV_MIIBUSY_DELAY 50
-#define NV_MIIPHY_DELAY 10
-#define NV_MIIPHY_DELAYMAX 10000
-#define NV_MAC_RESET_DELAY 64
-
-#define NV_WAKEUPPATTERNS 5
-#define NV_WAKEUPMASKENTRIES 4
-
-/* General driver defaults */
-#define NV_WATCHDOG_TIMEO (5*HZ)
-
-#define RX_RING 128
-#define TX_RING 256
-/*
- * If your nic mysteriously hangs then try to reduce the limits
- * to 1/0: It might be required to set NV_TX_LASTPACKET in the
- * last valid ring entry. But this would be impossible to
- * implement - probably a disassembly error.
- */
-#define TX_LIMIT_STOP 255
-#define TX_LIMIT_START 254
-
-/* rx/tx mac addr + type + vlan + align + slack*/
-#define NV_RX_HEADERS (64)
-/* even more slack. */
-#define NV_RX_ALLOC_PAD (64)
-
-/* maximum mtu size */
-#define NV_PKTLIMIT_1 ETH_DATA_LEN /* hard limit not known */
-#define NV_PKTLIMIT_2 9100 /* Actual limit according to NVidia: 9202 */
-
-#define OOM_REFILL (1+HZ/20)
-#define POLL_WAIT (1+HZ/100)
-#define LINK_TIMEOUT (3*HZ)
-
-/*
- * desc_ver values:
- * The nic supports three different descriptor types:
- * - DESC_VER_1: Original
- * - DESC_VER_2: support for jumbo frames.
- * - DESC_VER_3: 64-bit format.
- */
-#define DESC_VER_1 1
-#define DESC_VER_2 2
-#define DESC_VER_3 3
-
-/* PHY defines */
-#define PHY_OUI_MARVELL 0x5043
-#define PHY_OUI_CICADA 0x03f1
-#define PHYID1_OUI_MASK 0x03ff
-#define PHYID1_OUI_SHFT 6
-#define PHYID2_OUI_MASK 0xfc00
-#define PHYID2_OUI_SHFT 10
-#define PHY_INIT1 0x0f000
-#define PHY_INIT2 0x0e00
-#define PHY_INIT3 0x01000
-#define PHY_INIT4 0x0200
-#define PHY_INIT5 0x0004
-#define PHY_INIT6 0x02000
-#define PHY_GIGABIT 0x0100
-
-#define PHY_TIMEOUT 0x1
-#define PHY_ERROR 0x2
-
-#define PHY_100 0x1
-#define PHY_1000 0x2
-#define PHY_HALF 0x100
-
-/* FIXME: MII defines that should be added to <linux/mii.h> */
-#define MII_1000BT_CR 0x09
-#define MII_1000BT_SR 0x0a
-#define ADVERTISE_1000FULL 0x0200
-#define ADVERTISE_1000HALF 0x0100
-#define LPA_1000FULL 0x0800
-#define LPA_1000HALF 0x0400
-
-/* MSI/MSI-X defines */
-#define NV_MSI_X_MAX_VECTORS 8
-#define NV_MSI_X_VECTORS_MASK 0x000f
-#define NV_MSI_CAPABLE 0x0010
-#define NV_MSI_X_CAPABLE 0x0020
-#define NV_MSI_ENABLED 0x0040
-#define NV_MSI_X_ENABLED 0x0080
-
-#define NV_MSI_X_VECTOR_ALL 0x0
-#define NV_MSI_X_VECTOR_RX 0x0
-#define NV_MSI_X_VECTOR_TX 0x1
-#define NV_MSI_X_VECTOR_OTHER 0x2
-
-/*
- * SMP locking:
- * All hardware access under dev->priv->lock, except the performance
- * critical parts:
- * - rx is (pseudo-) lockless: it relies on the single-threading provided
- * by the arch code for interrupts.
- * - tx setup is lockless: it relies on dev->xmit_lock. Actual submission
- * needs dev->priv->lock :-(
- * - set_multicast_list: preparation lockless, relies on dev->xmit_lock.
- */
-
-/* in dev: base, irq */
-struct fe_priv {
- spinlock_t lock;
-
- /* General data:
- * Locking: spin_lock(&np->lock); */
- struct net_device_stats stats;
- int in_shutdown;
- u32 linkspeed;
- int duplex;
- int autoneg;
- int fixed_mode;
- int phyaddr;
- int wolenabled;
- unsigned int phy_oui;
- u16 gigabit;
-
- /* General data: RO fields */
- dma_addr_t ring_addr;
- struct pci_dev *pci_dev;
- u32 orig_mac[2];
- u32 irqmask;
- u32 desc_ver;
- u32 txrxctl_bits;
- u32 vlanctl_bits;
- u32 driver_data;
- u32 register_size;
-
- void __iomem *base;
-
- /* rx specific fields.
- * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- ring_type rx_ring;
- unsigned int cur_rx, refill_rx;
- struct sk_buff *rx_skbuff[RX_RING];
- dma_addr_t rx_dma[RX_RING];
- unsigned int rx_buf_sz;
- unsigned int pkt_limit;
- struct timer_list oom_kick;
- struct timer_list nic_poll;
- u32 nic_poll_irq;
-
- /* media detection workaround.
- * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- int need_linktimer;
- unsigned long link_timeout;
- /*
- * tx specific fields.
- */
- ring_type tx_ring;
- unsigned int next_tx, nic_tx;
- struct sk_buff *tx_skbuff[TX_RING];
- dma_addr_t tx_dma[TX_RING];
- unsigned int tx_dma_len[TX_RING];
- u32 tx_flags;
-
- /* vlan fields */
- struct vlan_group *vlangrp;
-
- /* msi/msi-x fields */
- u32 msi_flags;
- struct msix_entry msi_x_entry[NV_MSI_X_MAX_VECTORS];
-};
-
-/*
- * Maximum number of loops until we assume that a bit in the irq mask
- * is stuck. Overridable with module param.
- */
-static int max_interrupt_work = 5;
-
-/*
- * Optimization can be either throughput mode or CPU mode
- *
- * Throughput Mode: Every tx and rx packet will generate an interrupt.
- * CPU Mode: Interrupts are controlled by a timer.
- */
-#define NV_OPTIMIZATION_MODE_THROUGHPUT 0
-#define NV_OPTIMIZATION_MODE_CPU 1
-static int optimization_mode = NV_OPTIMIZATION_MODE_THROUGHPUT;
-
-/*
- * Poll interval for timer irq
- *
- * This interval determines how frequently an interrupt is generated.
- * This value is determined by [(time_in_micro_secs * 100) / (2^10)]
- * Min = 0, and Max = 65535
- */
-static int poll_interval = -1;
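/*
 * Sanity check of the formula above (sketch only): a desired period of
 * 1000 microseconds gives (1000 * 100) / (1 << 10) = 97, matching the
 * "NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms" note
 * at NvRegPollingInterval; the throughput default of 970 works out to
 * roughly 9.9 ms, i.e. the ~100 timer interrupts per second mentioned
 * under "Known bugs".
 */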
-
-/*
- * Disable MSI interrupts
- */
-static int disable_msi = 0;
-
-/*
- * Disable MSIX interrupts
- */
-static int disable_msix = 0;
-
-static inline struct fe_priv *get_nvpriv(struct net_device *dev)
-{
- return netdev_priv(dev);
-}
-
-static inline u8 __iomem *get_hwbase(struct net_device *dev)
-{
- return ((struct fe_priv *)netdev_priv(dev))->base;
-}
-
-static inline void pci_push(u8 __iomem *base)
-{
- /* force out pending posted writes */
- readl(base);
-}
-
-static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v)
-{
- return le32_to_cpu(prd->FlagLen)
- & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2);
-}
-
-static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v)
-{
- return le32_to_cpu(prd->FlagLen) & LEN_MASK_V2;
-}
-
-static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target,
- int delay, int delaymax, const char *msg)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- pci_push(base);
- do {
- udelay(delay);
- delaymax -= delay;
- if (delaymax < 0) {
- if (msg)
- printk(msg);
- return 1;
- }
- } while ((readl(base + offset) & mask) != target);
- return 0;
-}
-
-#define NV_SETUP_RX_RING 0x01
-#define NV_SETUP_TX_RING 0x02
-
-static void setup_hw_rings(struct net_device *dev, int rxtx_flags)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + RX_RING*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr);
- }
- } else {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr) >> 32), base + NvRegRxRingPhysAddrHigh);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + RX_RING*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr + RX_RING*sizeof(struct ring_desc_ex)) >> 32), base + NvRegTxRingPhysAddrHigh);
- }
- }
-}
-
-static int using_multi_irqs(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!(np->msi_flags & NV_MSI_X_ENABLED) ||
- ((np->msi_flags & NV_MSI_X_ENABLED) &&
- ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1)))
- return 0;
- else
- return 1;
-}
-
-static void nv_enable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-static void nv_disable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-/* In MSIX mode, a write to irqmask behaves as XOR */
-static void nv_enable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- writel(mask, base + NvRegIrqMask);
-}
-
-static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- writel(mask, base + NvRegIrqMask);
- } else {
- if (np->msi_flags & NV_MSI_ENABLED)
- writel(0, base + NvRegMSIIrqMask);
- writel(0, base + NvRegIrqMask);
- }
-}
-
-#define MII_READ (-1)
-/* mii_rw: read/write a register on the PHY.
- *
- * Caller must guarantee serialization
- */
-static int mii_rw(struct net_device *dev, int addr, int miireg, int value)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 reg;
- int retval;
-
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
-
- reg = readl(base + NvRegMIIControl);
- if (reg & NVREG_MIICTL_INUSE) {
- writel(NVREG_MIICTL_INUSE, base + NvRegMIIControl);
- udelay(NV_MIIBUSY_DELAY);
- }
-
- reg = (addr << NVREG_MIICTL_ADDRSHIFT) | miireg;
- if (value != MII_READ) {
- writel(value, base + NvRegMIIData);
- reg |= NVREG_MIICTL_WRITE;
- }
- writel(reg, base + NvRegMIIControl);
-
- if (reg_delay(dev, NvRegMIIControl, NVREG_MIICTL_INUSE, 0,
- NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d timed out.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else if (value != MII_READ) {
- /* it was a write operation - fewer failures are detectable */
- dprintk(KERN_DEBUG "%s: mii_rw wrote 0x%x to reg %d at PHY %d\n",
- dev->name, value, miireg, addr);
- retval = 0;
- } else if (readl(base + NvRegMIIStatus) & NVREG_MIISTAT_ERROR) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d failed.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else {
- retval = readl(base + NvRegMIIData);
- dprintk(KERN_DEBUG "%s: mii_rw read from reg %d at PHY %d: 0x%x.\n",
- dev->name, miireg, addr, retval);
- }
-
- return retval;
-}
-
-static int phy_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 miicontrol;
- unsigned int tries = 0;
-
- miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- miicontrol |= BMCR_RESET;
- if (mii_rw(dev, np->phyaddr, MII_BMCR, miicontrol)) {
- return -1;
- }
-
- /* wait for 500ms */
- msleep(500);
-
- /* must wait till reset is deasserted */
- while (miicontrol & BMCR_RESET) {
- msleep(10);
- miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- /* FIXME: 100 tries seem excessive */
- if (tries++ > 100)
- return -1;
- }
- return 0;
-}
-
-static int phy_init(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000,reg;
-
- /* set advertise register */
- reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL|0x800|0x400);
- if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) {
- printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* get phy interface type */
- phyinterface = readl(base + NvRegPhyInterface);
-
- /* see if gigabit phy */
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- if (mii_status & PHY_GIGABIT) {
- np->gigabit = PHY_GIGABIT;
- mii_control_1000 = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- mii_control_1000 &= ~ADVERTISE_1000HALF;
- if (phyinterface & PHY_RGMII)
- mii_control_1000 |= ADVERTISE_1000FULL;
- else
- mii_control_1000 &= ~ADVERTISE_1000FULL;
-
- if (mii_rw(dev, np->phyaddr, MII_1000BT_CR, mii_control_1000)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- else
- np->gigabit = 0;
-
- /* reset the phy */
- if (phy_reset(dev)) {
- printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* phy vendor specific configuration */
- if ((np->phy_oui == PHY_OUI_CICADA) && (phyinterface & PHY_RGMII) ) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_RESV1, MII_READ);
- phy_reserved &= ~(PHY_INIT1 | PHY_INIT2);
- phy_reserved |= (PHY_INIT3 | PHY_INIT4);
- if (mii_rw(dev, np->phyaddr, MII_RESV1, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- phy_reserved = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
- phy_reserved |= PHY_INIT5;
- if (mii_rw(dev, np->phyaddr, MII_NCONFIG, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- if (np->phy_oui == PHY_OUI_CICADA) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_SREVISION, MII_READ);
- phy_reserved |= PHY_INIT6;
- if (mii_rw(dev, np->phyaddr, MII_SREVISION, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
-
- /* restart auto negotiation */
- mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE);
- if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) {
- return PHY_ERROR;
- }
-
- return 0;
-}
-
-static void nv_start_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_rx\n", dev->name);
- /* Already running? Stop it. */
- if (readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) {
- writel(0, base + NvRegReceiverControl);
- pci_push(base);
- }
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
- writel(NVREG_RCVCTL_START, base + NvRegReceiverControl);
- dprintk(KERN_DEBUG "%s: nv_start_rx to duplex %d, speed 0x%08x.\n",
- dev->name, np->duplex, np->linkspeed);
- pci_push(base);
-}
-
-static void nv_stop_rx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_rx\n", dev->name);
- writel(0, base + NvRegReceiverControl);
- reg_delay(dev, NvRegReceiverStatus, NVREG_RCVSTAT_BUSY, 0,
- NV_RXSTOP_DELAY1, NV_RXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_rx: ReceiverStatus remained busy");
-
- udelay(NV_RXSTOP_DELAY2);
- writel(0, base + NvRegLinkSpeed);
-}
-
-static void nv_start_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_tx\n", dev->name);
- writel(NVREG_XMITCTL_START, base + NvRegTransmitterControl);
- pci_push(base);
-}
-
-static void nv_stop_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_tx\n", dev->name);
- writel(0, base + NvRegTransmitterControl);
- reg_delay(dev, NvRegTransmitterStatus, NVREG_XMITSTAT_BUSY, 0,
- NV_TXSTOP_DELAY1, NV_TXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_tx: TransmitterStatus remained busy");
-
- udelay(NV_TXSTOP_DELAY2);
- writel(0, base + NvRegUnknownTransmitterReg);
-}
-
-static void nv_txrx_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_txrx_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- udelay(NV_TXRX_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-static void nv_mac_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_mac_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- writel(NVREG_MAC_RESET_ASSERT, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(0, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-/*
- * nv_get_stats: dev->get_stats function
- * Get latest stats value from the nic.
- * Called with read_lock(&dev_base_lock) held for read -
- * only synchronized against unregister_netdevice.
- */
-static struct net_device_stats *nv_get_stats(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- /* It seems that the nic always generates interrupts and doesn't
- * accumulate errors internally. Thus the current values in np->stats
- * are already up to date.
- */
- return &np->stats;
-}
-
-/*
- * nv_alloc_rx: fill rx ring entries.
- * Return 1 if the allocations for the skbs failed and the
- * rx engine is without Available descriptors
- */
-static int nv_alloc_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int refill_rx = np->refill_rx;
- int nr;
-
- while (np->cur_rx != refill_rx) {
- struct sk_buff *skb;
-
- nr = refill_rx % RX_RING;
- if (np->rx_skbuff[nr] == NULL) {
-
- skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
- if (!skb)
- break;
-
- skb->dev = dev;
- np->rx_skbuff[nr] = skb;
- } else {
- skb = np->rx_skbuff[nr];
- }
- np->rx_dma[nr] = pci_map_single(np->pci_dev, skb->data,
- skb->end-skb->data, PCI_DMA_FROMDEVICE);
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->rx_dma[nr]);
- wmb();
- np->rx_ring.orig[nr].FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL);
- } else {
- np->rx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->rx_dma[nr]) >> 32;
- np->rx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->rx_dma[nr]) & 0x0FFFFFFFF;
- wmb();
- np->rx_ring.ex[nr].FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL);
- }
- dprintk(KERN_DEBUG "%s: nv_alloc_rx: Packet %d marked as Available\n",
- dev->name, refill_rx);
- refill_rx++;
- }
- np->refill_rx = refill_rx;
- if (np->cur_rx - refill_rx == RX_RING)
- return 1;
- return 0;
-}
-
-static void nv_do_rx_refill(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
-}
-
-static void nv_init_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->cur_rx = RX_RING;
- np->refill_rx = 0;
- for (i = 0; i < RX_RING; i++)
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].FlagLen = 0;
- else
- np->rx_ring.ex[i].FlagLen = 0;
-}
-
-static void nv_init_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->next_tx = np->nic_tx = 0;
- for (i = 0; i < TX_RING; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].FlagLen = 0;
- else
- np->tx_ring.ex[i].FlagLen = 0;
- np->tx_skbuff[i] = NULL;
- np->tx_dma[i] = 0;
- }
-}
-
-static int nv_init_ring(struct net_device *dev)
-{
- nv_init_tx(dev);
- nv_init_rx(dev);
- return nv_alloc_rx(dev);
-}
-
-static int nv_release_txskb(struct net_device *dev, unsigned int skbnr)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- dprintk(KERN_INFO "%s: nv_release_txskb for skbnr %d\n",
- dev->name, skbnr);
-
- if (np->tx_dma[skbnr]) {
- pci_unmap_page(np->pci_dev, np->tx_dma[skbnr],
- np->tx_dma_len[skbnr],
- PCI_DMA_TODEVICE);
- np->tx_dma[skbnr] = 0;
- }
-
- if (np->tx_skbuff[skbnr]) {
- dev_kfree_skb_any(np->tx_skbuff[skbnr]);
- np->tx_skbuff[skbnr] = NULL;
- return 1;
- } else {
- return 0;
- }
-}
-
-static void nv_drain_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int i;
-
- for (i = 0; i < TX_RING; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].FlagLen = 0;
- else
- np->tx_ring.ex[i].FlagLen = 0;
- if (nv_release_txskb(dev, i))
- np->stats.tx_dropped++;
- }
-}
-
-static void nv_drain_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
- for (i = 0; i < RX_RING; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].FlagLen = 0;
- else
- np->rx_ring.ex[i].FlagLen = 0;
- wmb();
- if (np->rx_skbuff[i]) {
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
- dev_kfree_skb(np->rx_skbuff[i]);
- np->rx_skbuff[i] = NULL;
- }
- }
-}
-
-static void drain_ring(struct net_device *dev)
-{
- nv_drain_tx(dev);
- nv_drain_rx(dev);
-}
-
-/*
- * nv_start_xmit: dev->hard_start_xmit function
- * Called with dev->xmit_lock held.
- */
-static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 tx_flags = 0;
- u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
- unsigned int fragments = skb_shinfo(skb)->nr_frags;
- unsigned int nr = (np->next_tx - 1) % TX_RING;
- unsigned int start_nr = np->next_tx % TX_RING;
- unsigned int i;
- u32 offset = 0;
- u32 bcnt;
- u32 size = skb->len-skb->data_len;
- u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- u32 tx_flags_vlan = 0;
-
- /* add fragments to entries count */
- for (i = 0; i < fragments; i++) {
- entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
- ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- }
-
- spin_lock_irq(&np->lock);
-
- if ((np->next_tx - np->nic_tx + entries - 1) > TX_LIMIT_STOP) {
- spin_unlock_irq(&np->lock);
- netif_stop_queue(dev);
- return NETDEV_TX_BUSY;
- }
-
- /* setup the header buffer */
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % TX_RING;
-
- np->tx_dma[nr] = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- tx_flags = np->tx_flags;
- offset += bcnt;
- size -= bcnt;
- } while(size);
-
- /* setup the fragments */
- for (i = 0; i < fragments; i++) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- u32 size = frag->size;
- offset = 0;
-
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % TX_RING;
-
- np->tx_dma[nr] = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].PacketBuffer = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].PacketBufferHigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].PacketBufferLow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- offset += bcnt;
- size -= bcnt;
- } while (size);
- }
-
- /* set last fragment flag */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].FlagLen |= cpu_to_le32(tx_flags_extra);
- } else {
- np->tx_ring.ex[nr].FlagLen |= cpu_to_le32(tx_flags_extra);
- }
-
- np->tx_skbuff[nr] = skb;
-
-#ifdef NETIF_F_TSO
- if (skb_shinfo(skb)->tso_size)
- tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->tso_size << NV_TX2_TSO_SHIFT);
- else
-#endif
- tx_flags_extra = (skb->ip_summed == CHECKSUM_HW ? (NV_TX2_CHECKSUM_L3|NV_TX2_CHECKSUM_L4) : 0);
-
- /* vlan tag */
- if (np->vlangrp && vlan_tx_tag_present(skb)) {
- tx_flags_vlan = NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb);
- }
-
- /* set tx flags */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[start_nr].FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra);
- } else {
- np->tx_ring.ex[start_nr].TxVlan = cpu_to_le32(tx_flags_vlan);
- np->tx_ring.ex[start_nr].FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra);
- }
-
- dprintk(KERN_DEBUG "%s: nv_start_xmit: packet %d (entries %d) queued for transmission. tx_flags_extra: %x\n",
- dev->name, np->next_tx, entries, tx_flags_extra);
- {
- int j;
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)skb->data)[j]);
- }
- dprintk("\n");
- }
-
- np->next_tx += entries;
-
- dev->trans_start = jiffies;
- spin_unlock_irq(&np->lock);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(get_hwbase(dev));
- return NETDEV_TX_OK;
-}
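/*
 * Descriptor accounting in nv_start_xmit, as a quick worked example
 * (illustrative only): with NV_TX2_TSO_MAX_SIZE = 16384, a 20000 byte
 * linear area needs entries = (20000 >> 14) + 1 = 2 descriptors; each
 * fragment contributes its own ceil(frag->size / 16384) entries, and the
 * queue is stopped once next_tx - nic_tx + entries - 1 would exceed
 * TX_LIMIT_STOP (255).
 */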
-
-/*
- * nv_tx_done: check for completed packets, release the skbs.
- *
- * Caller must own np->lock.
- */
-static void nv_tx_done(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 Flags;
- unsigned int i;
- struct sk_buff *skb;
-
- while (np->nic_tx != np->next_tx) {
- i = np->nic_tx % TX_RING;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- Flags = le32_to_cpu(np->tx_ring.orig[i].FlagLen);
- else
- Flags = le32_to_cpu(np->tx_ring.ex[i].FlagLen);
-
- dprintk(KERN_DEBUG "%s: nv_tx_done: looking at packet %d, Flags 0x%x.\n",
- dev->name, np->nic_tx, Flags);
- if (Flags & NV_TX_VALID)
- break;
- if (np->desc_ver == DESC_VER_1) {
- if (Flags & NV_TX_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (Flags & (NV_TX_RETRYERROR|NV_TX_CARRIERLOST|NV_TX_LATECOLLISION|
- NV_TX_UNDERFLOW|NV_TX_ERROR)) {
- if (Flags & NV_TX_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (Flags & NV_TX_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- } else {
- if (Flags & NV_TX2_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (Flags & (NV_TX2_RETRYERROR|NV_TX2_CARRIERLOST|NV_TX2_LATECOLLISION|
- NV_TX2_UNDERFLOW|NV_TX2_ERROR)) {
- if (Flags & NV_TX2_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (Flags & NV_TX2_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- }
- nv_release_txskb(dev, i);
- np->nic_tx++;
- }
- if (np->next_tx - np->nic_tx < TX_LIMIT_START)
- netif_wake_queue(dev);
-}
-
-/*
- * nv_tx_timeout: dev->tx_timeout function
- * Called with dev->xmit_lock held.
- */
-static void nv_tx_timeout(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 status;
-
- if (np->msi_flags & NV_MSI_X_ENABLED)
- status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- else
- status = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
-
- printk(KERN_INFO "%s: Got tx_timeout. irq: %08x\n", dev->name, status);
-
- {
- int i;
-
- printk(KERN_INFO "%s: Ring at %lx: next %d nic %d\n",
- dev->name, (unsigned long)np->ring_addr,
- np->next_tx, np->nic_tx);
- printk(KERN_INFO "%s: Dumping tx registers\n", dev->name);
- for (i=0;i<=np->register_size;i+= 32) {
- printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
- i,
- readl(base + i + 0), readl(base + i + 4),
- readl(base + i + 8), readl(base + i + 12),
- readl(base + i + 16), readl(base + i + 20),
- readl(base + i + 24), readl(base + i + 28));
- }
- printk(KERN_INFO "%s: Dumping tx ring\n", dev->name);
- for (i=0;i<TX_RING;i+= 4) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.orig[i].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i].FlagLen),
- le32_to_cpu(np->tx_ring.orig[i+1].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i+1].FlagLen),
- le32_to_cpu(np->tx_ring.orig[i+2].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i+2].FlagLen),
- le32_to_cpu(np->tx_ring.orig[i+3].PacketBuffer),
- le32_to_cpu(np->tx_ring.orig[i+3].FlagLen));
- } else {
- printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.ex[i].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i].FlagLen),
- le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i+1].FlagLen),
- le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i+2].FlagLen),
- le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferHigh),
- le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferLow),
- le32_to_cpu(np->tx_ring.ex[i+3].FlagLen));
- }
- }
- }
-
- spin_lock_irq(&np->lock);
-
- /* 1) stop tx engine */
- nv_stop_tx(dev);
-
- /* 2) check that the packets were not sent already: */
- nv_tx_done(dev);
-
- /* 3) if there are dead entries: clear everything */
- if (np->next_tx != np->nic_tx) {
- printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name);
- nv_drain_tx(dev);
- np->next_tx = np->nic_tx = 0;
- setup_hw_rings(dev, NV_SETUP_TX_RING);
- netif_wake_queue(dev);
- }
-
- /* 4) restart tx engine */
- nv_start_tx(dev);
- spin_unlock_irq(&np->lock);
-}
-
-/*
- * Called when the nic notices a mismatch between the actual data len on the
- * wire and the len indicated in the 802 header
- */
-static int nv_getlen(struct net_device *dev, void *packet, int datalen)
-{
- int hdrlen; /* length of the 802 header */
- int protolen; /* length as stored in the proto field */
-
- /* 1) calculate len according to header */
- if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == __constant_htons(ETH_P_8021Q)) {
- protolen = ntohs( ((struct vlan_ethhdr *)packet)->h_vlan_encapsulated_proto );
- hdrlen = VLAN_HLEN;
- } else {
- protolen = ntohs( ((struct ethhdr *)packet)->h_proto);
- hdrlen = ETH_HLEN;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: datalen %d, protolen %d, hdrlen %d\n",
- dev->name, datalen, protolen, hdrlen);
- if (protolen > ETH_DATA_LEN)
- return datalen; /* Value in proto field not a len, no checks possible */
-
- protolen += hdrlen;
- /* consistency checks: */
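-	/* Illustrative example (not part of the original comments): a frame
-	 * with datalen = 80 whose 802.3 length field claims 60 bytes yields
-	 * protolen = 60 + ETH_HLEN = 74; since 80 >= 74, the trailing bytes
-	 * are trimmed and 74 is returned. */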
- if (datalen > ETH_ZLEN) {
- if (datalen >= protolen) {
-			/* more data on wire than in 802 header, trim off
-			 * the additional data.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, protolen);
- return protolen;
- } else {
- /* less data on wire than mentioned in header.
- * Discard the packet.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding long packet.\n",
- dev->name);
- return -1;
- }
- } else {
- /* short packet. Accept only if 802 values are also short */
- if (protolen > ETH_ZLEN) {
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding short packet.\n",
- dev->name);
- return -1;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, datalen);
- return datalen;
- }
-}
-
-static void nv_rx_process(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 Flags;
- u32 vlanflags = 0;
-
-
- for (;;) {
- struct sk_buff *skb;
- int len;
- int i;
- if (np->cur_rx - np->refill_rx >= RX_RING)
- break; /* we scanned the whole ring - do not continue */
-
- i = np->cur_rx % RX_RING;
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- Flags = le32_to_cpu(np->rx_ring.orig[i].FlagLen);
- len = nv_descr_getlength(&np->rx_ring.orig[i], np->desc_ver);
- } else {
- Flags = le32_to_cpu(np->rx_ring.ex[i].FlagLen);
- len = nv_descr_getlength_ex(&np->rx_ring.ex[i], np->desc_ver);
- vlanflags = le32_to_cpu(np->rx_ring.ex[i].PacketBufferLow);
- }
-
- dprintk(KERN_DEBUG "%s: nv_rx_process: looking at packet %d, Flags 0x%x.\n",
- dev->name, np->cur_rx, Flags);
-
- if (Flags & NV_RX_AVAIL)
- break; /* still owned by hardware, */
-
- /*
- * the packet is for us - immediately tear down the pci mapping.
- * TODO: check if a prefetch of the first cacheline improves
- * the performance.
- */
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
-
- {
- int j;
- dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",Flags);
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)np->rx_skbuff[i]->data)[j]);
- }
- dprintk("\n");
- }
- /* look at what we actually got: */
- if (np->desc_ver == DESC_VER_1) {
- if (!(Flags & NV_RX_DESCRIPTORVALID))
- goto next_pkt;
-
- if (Flags & NV_RX_ERROR) {
- if (Flags & NV_RX_MISSEDFRAME) {
- np->stats.rx_missed_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & (NV_RX_ERROR1|NV_RX_ERROR2|NV_RX_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors. */
- if (Flags & NV_RX_FRAMINGERR) {
- if (Flags & NV_RX_SUBSTRACT1) {
- len--;
- }
- }
- }
- } else {
- if (!(Flags & NV_RX2_DESCRIPTORVALID))
- goto next_pkt;
-
- if (Flags & NV_RX2_ERROR) {
- if (Flags & (NV_RX2_ERROR1|NV_RX2_ERROR2|NV_RX2_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX2_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX2_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (Flags & NV_RX2_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors */
- if (Flags & NV_RX2_FRAMINGERR) {
- if (Flags & NV_RX2_SUBSTRACT1) {
- len--;
- }
- }
- }
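-			/* Hardware rx checksum result: any of the three OK
-			 * encodings means the stack can skip its own check. */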
- Flags &= NV_RX2_CHECKSUMMASK;
- if (Flags == NV_RX2_CHECKSUMOK1 ||
- Flags == NV_RX2_CHECKSUMOK2 ||
- Flags == NV_RX2_CHECKSUMOK3) {
- dprintk(KERN_DEBUG "%s: hw checksum hit!.\n", dev->name);
- np->rx_skbuff[i]->ip_summed = CHECKSUM_UNNECESSARY;
- } else {
- dprintk(KERN_DEBUG "%s: hwchecksum miss!.\n", dev->name);
- }
- }
- /* got a valid packet - forward it to the network core */
- skb = np->rx_skbuff[i];
- np->rx_skbuff[i] = NULL;
-
- skb_put(skb, len);
- skb->protocol = eth_type_trans(skb, dev);
- dprintk(KERN_DEBUG "%s: nv_rx_process: packet %d with %d bytes, proto %d accepted.\n",
- dev->name, np->cur_rx, len, skb->protocol);
- if (np->vlangrp && (vlanflags & NV_RX3_VLAN_TAG_PRESENT)) {
- vlan_hwaccel_rx(skb, np->vlangrp, vlanflags & NV_RX3_VLAN_TAG_MASK);
- } else {
- netif_rx(skb);
- }
- dev->last_rx = jiffies;
- np->stats.rx_packets++;
- np->stats.rx_bytes += len;
-next_pkt:
- np->cur_rx++;
- }
-}
-
-static void set_bufsize(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (dev->mtu <= ETH_DATA_LEN)
- np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS;
- else
- np->rx_buf_sz = dev->mtu + NV_RX_HEADERS;
-}
-
-/*
- * nv_change_mtu: dev->change_mtu function
- * Called with dev_base_lock held for read.
- */
-static int nv_change_mtu(struct net_device *dev, int new_mtu)
-{
- struct fe_priv *np = netdev_priv(dev);
- int old_mtu;
-
- if (new_mtu < 64 || new_mtu > np->pkt_limit)
- return -EINVAL;
-
- old_mtu = dev->mtu;
- dev->mtu = new_mtu;
-
- /* return early if the buffer sizes will not change */
- if (old_mtu <= ETH_DATA_LEN && new_mtu <= ETH_DATA_LEN)
- return 0;
- if (old_mtu == new_mtu)
- return 0;
-
- /* synchronized against open : rtnl_lock() held by caller */
- if (netif_running(dev)) {
- u8 __iomem *base = get_hwbase(dev);
- /*
- * It seems that the nic preloads valid ring entries into an
- * internal buffer. The procedure for flushing everything is
-		 * guessed; there is probably a simpler approach.
-		 * Changing the MTU is a rare event, so it shouldn't matter.
- */
- nv_disable_irq(dev);
- spin_lock_bh(&dev->xmit_lock);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- /* reinit driver view of the rx queue */
- nv_init_rx(dev);
- nv_init_tx(dev);
- /* alloc new rx buffers */
- set_bufsize(dev);
- if (nv_alloc_rx(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((RX_RING-1) << NVREG_RINGSZ_RXSHIFT) + ((TX_RING-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
-
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
- spin_unlock(&np->lock);
- spin_unlock_bh(&dev->xmit_lock);
- nv_enable_irq(dev);
- }
- return 0;
-}
-
-static void nv_copy_mac_to_hw(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 mac[2];
-
- mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
- (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
- mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
-
- writel(mac[0], base + NvRegMacAddrA);
- writel(mac[1], base + NvRegMacAddrB);
-}
-
-/*
- * nv_set_mac_address: dev->set_mac_address function
- * Called with rtnl_lock() held.
- */
-static int nv_set_mac_address(struct net_device *dev, void *addr)
-{
- struct fe_priv *np = netdev_priv(dev);
- struct sockaddr *macaddr = (struct sockaddr*)addr;
-
- if(!is_valid_ether_addr(macaddr->sa_data))
- return -EADDRNOTAVAIL;
-
- /* synchronized against open : rtnl_lock() held by caller */
- memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN);
-
- if (netif_running(dev)) {
- spin_lock_bh(&dev->xmit_lock);
- spin_lock_irq(&np->lock);
-
- /* stop rx engine */
- nv_stop_rx(dev);
-
- /* set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* restart rx engine */
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
- spin_unlock_bh(&dev->xmit_lock);
- } else {
- nv_copy_mac_to_hw(dev);
- }
- return 0;
-}
-
-/*
- * nv_set_multicast: dev->set_multicast function
- * Called with dev->xmit_lock held.
- */
-static void nv_set_multicast(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 addr[2];
- u32 mask[2];
- u32 pff;
-
- memset(addr, 0, sizeof(addr));
- memset(mask, 0, sizeof(mask));
-
- if (dev->flags & IFF_PROMISC) {
- printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n", dev->name);
- pff = NVREG_PFF_PROMISC;
- } else {
- pff = NVREG_PFF_MYADDR;
-
- if (dev->flags & IFF_ALLMULTI || dev->mc_list) {
- u32 alwaysOff[2];
- u32 alwaysOn[2];
-
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0xffffffff;
- if (dev->flags & IFF_ALLMULTI) {
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0;
- } else {
- struct dev_mc_list *walk;
-
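-				/* Derive one address/mask pair matching every
-				 * list entry: alwaysOn collects the bits set in
-				 * all addresses, alwaysOff the bits clear in all
-				 * of them; the mask below covers the bits that
-				 * are constant across the list. */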
- walk = dev->mc_list;
- while (walk != NULL) {
- u32 a, b;
- a = le32_to_cpu(*(u32 *) walk->dmi_addr);
- b = le16_to_cpu(*(u16 *) (&walk->dmi_addr[4]));
- alwaysOn[0] &= a;
- alwaysOff[0] &= ~a;
- alwaysOn[1] &= b;
- alwaysOff[1] &= ~b;
- walk = walk->next;
- }
- }
- addr[0] = alwaysOn[0];
- addr[1] = alwaysOn[1];
- mask[0] = alwaysOn[0] | alwaysOff[0];
- mask[1] = alwaysOn[1] | alwaysOff[1];
- }
- }
- addr[0] |= NVREG_MCASTADDRA_FORCE;
- pff |= NVREG_PFF_ALWAYS;
- spin_lock_irq(&np->lock);
- nv_stop_rx(dev);
- writel(addr[0], base + NvRegMulticastAddrA);
- writel(addr[1], base + NvRegMulticastAddrB);
- writel(mask[0], base + NvRegMulticastMaskA);
- writel(mask[1], base + NvRegMulticastMaskB);
- writel(pff, base + NvRegPacketFilterFlags);
- dprintk(KERN_INFO "%s: reconfiguration for multicast lists.\n",
- dev->name);
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
-}
-
-/**
- * nv_update_linkspeed: Setup the MAC according to the link partner
- * @dev: Network device to be configured
- *
- * The function queries the PHY and checks if there is a link partner.
- * If yes, then it sets up the MAC accordingly. Otherwise, the MAC is
- * set to 10 MBit HD.
- *
- * The function returns 0 if there is no link partner and 1 if there is
- * a good link partner.
- */
-static int nv_update_linkspeed(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int adv, lpa;
- int newls = np->linkspeed;
- int newdup = np->duplex;
- int mii_status;
- int retval = 0;
- u32 control_1000, status_1000, phyreg;
-
- /* BMSR_LSTATUS is latched, read it twice:
- * we want the current value.
- */
- mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
-
- if (!(mii_status & BMSR_LSTATUS)) {
- dprintk(KERN_DEBUG "%s: no link detected by phy - falling back to 10HD.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- goto set_speed;
- }
-
- if (np->autoneg == 0) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: autoneg off, PHY set to 0x%04x.\n",
- dev->name, np->fixed_mode);
- if (np->fixed_mode & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (np->fixed_mode & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (np->fixed_mode & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
- retval = 1;
- goto set_speed;
- }
- /* check auto negotiation is complete */
- if (!(mii_status & BMSR_ANEGCOMPLETE)) {
- /* still in autonegotiation - configure nic for 10 MBit HD and wait. */
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- dprintk(KERN_DEBUG "%s: autoneg not completed - falling back to 10HD.\n", dev->name);
- goto set_speed;
- }
-
- retval = 1;
- if (np->gigabit == PHY_GIGABIT) {
- control_1000 = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- status_1000 = mii_rw(dev, np->phyaddr, MII_1000BT_SR, MII_READ);
-
- if ((control_1000 & ADVERTISE_1000FULL) &&
- (status_1000 & LPA_1000FULL)) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: GBit ethernet detected.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_1000;
- newdup = 1;
- goto set_speed;
- }
- }
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ);
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: PHY advertises 0x%04x, lpa 0x%04x.\n",
- dev->name, adv, lpa);
-
- /* FIXME: handle parallel detection properly */
- lpa = lpa & adv;
- if (lpa & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (lpa & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (lpa & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else if (lpa & LPA_10HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- } else {
- dprintk(KERN_DEBUG "%s: bad ability %04x - falling back to 10HD.\n", dev->name, lpa);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
-
-set_speed:
- if (np->duplex == newdup && np->linkspeed == newls)
- return retval;
-
- dprintk(KERN_INFO "%s: changing link setting from %d/%d to %d/%d.\n",
- dev->name, np->linkspeed, np->duplex, newls, newdup);
-
- np->duplex = newdup;
- np->linkspeed = newls;
-
- if (np->gigabit == PHY_GIGABIT) {
- phyreg = readl(base + NvRegRandomSeed);
- phyreg &= ~(0x3FF00);
- if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10)
- phyreg |= NVREG_RNDSEED_FORCE3;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100)
- phyreg |= NVREG_RNDSEED_FORCE2;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_1000)
- phyreg |= NVREG_RNDSEED_FORCE;
- writel(phyreg, base + NvRegRandomSeed);
- }
-
- phyreg = readl(base + NvRegPhyInterface);
- phyreg &= ~(PHY_HALF|PHY_100|PHY_1000);
- if (np->duplex == 0)
- phyreg |= PHY_HALF;
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_100)
- phyreg |= PHY_100;
- else if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- phyreg |= PHY_1000;
- writel(phyreg, base + NvRegPhyInterface);
-
- writel(NVREG_MISC1_FORCE | ( np->duplex ? 0 : NVREG_MISC1_HD),
- base + NvRegMisc1);
- pci_push(base);
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
-
- return retval;
-}
-
-static void nv_linkchange(struct net_device *dev)
-{
- if (nv_update_linkspeed(dev)) {
- if (!netif_carrier_ok(dev)) {
- netif_carrier_on(dev);
- printk(KERN_INFO "%s: link up.\n", dev->name);
- nv_start_rx(dev);
- }
- } else {
- if (netif_carrier_ok(dev)) {
- netif_carrier_off(dev);
- printk(KERN_INFO "%s: link down.\n", dev->name);
- nv_stop_rx(dev);
- }
- }
-}
-
-static void nv_link_irq(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 miistat;
-
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "%s: link change irq, status 0x%x.\n", dev->name, miistat);
-
- if (miistat & (NVREG_MIISTAT_LINKCHANGE))
- nv_linkchange(dev);
- dprintk(KERN_DEBUG "%s: link change notification done.\n", dev->name);
-}
-
-static irqreturn_t nv_nic_irq(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq\n", dev->name);
-
- for (i=0; ; i++) {
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- } else {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
- }
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- spin_lock(&np->lock);
- nv_tx_done(dev);
- spin_unlock(&np->lock);
-
- nv_rx_process(dev);
- if (nv_alloc_rx(dev)) {
- spin_lock(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock(&np->lock);
- }
-
- if (events & NVREG_IRQ_LINK) {
- spin_lock(&np->lock);
- nv_link_irq(dev);
- spin_unlock(&np->lock);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- spin_lock(&np->lock);
- nv_linkchange(dev);
- spin_unlock(&np->lock);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
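-		/* Loop guard: if too many events arrive back to back, mask the
-		 * interrupt and let the nic_poll timer continue the work. */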
- if (i > max_interrupt_work) {
- spin_lock(&np->lock);
- /* disable interrupts on the nic */
- if (!(np->msi_flags & NV_MSI_X_ENABLED))
- writel(0, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq = np->irqmask;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i);
- spin_unlock(&np->lock);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_tx(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL;
- writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: tx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- spin_lock_irq(&np->lock);
- nv_tx_done(dev);
- spin_unlock_irq(&np->lock);
-
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- spin_lock_irq(&np->lock);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_TX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i);
- spin_unlock_irq(&np->lock);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_rx(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL;
- writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: rx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- nv_rx_process(dev);
- if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
-
- if (i > max_interrupt_work) {
- spin_lock_irq(&np->lock);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_RX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i);
- spin_unlock_irq(&np->lock);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_other(int foo, void *data, struct pt_regs *regs)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER;
- writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- if (events & NVREG_IRQ_LINK) {
- spin_lock_irq(&np->lock);
- nv_link_irq(dev);
- spin_unlock_irq(&np->lock);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- spin_lock_irq(&np->lock);
- nv_linkchange(dev);
- spin_unlock_irq(&np->lock);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- spin_lock_irq(&np->lock);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_OTHER, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_OTHER;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i);
- spin_unlock_irq(&np->lock);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static void nv_do_nic_poll(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 mask = 0;
-
- /*
-	 * First disable the irq line(s) and only then re-enable interrupts on
-	 * the nic; this has to happen before calling nv_nic_irq below, because
-	 * that handler may decide to mask them again.
- */
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- mask = np->irqmask;
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- mask |= NVREG_IRQ_RX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- mask |= NVREG_IRQ_TX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- mask |= NVREG_IRQ_OTHER;
- }
- }
- np->nic_poll_irq = 0;
-
- /* FIXME: Do we need synchronize_irq(dev->irq) here? */
-
- writel(mask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!using_multi_irqs(dev)) {
- nv_nic_irq((int) 0, (void *) data, (struct pt_regs *) NULL);
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- nv_nic_irq_rx((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- nv_nic_irq_tx((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- nv_nic_irq_other((int) 0, (void *) data, (struct pt_regs *) NULL);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
- }
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void nv_poll_controller(struct net_device *dev)
-{
- nv_do_nic_poll((unsigned long) dev);
-}
-#endif
-
-static void nv_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
-{
- struct fe_priv *np = netdev_priv(dev);
- strcpy(info->driver, "forcedeth");
- strcpy(info->version, FORCEDETH_VERSION);
- strcpy(info->bus_info, pci_name(np->pci_dev));
-}
-
-static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- wolinfo->supported = WAKE_MAGIC;
-
- spin_lock_irq(&np->lock);
- if (np->wolenabled)
- wolinfo->wolopts = WAKE_MAGIC;
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- spin_lock_irq(&np->lock);
- if (wolinfo->wolopts == 0) {
- writel(0, base + NvRegWakeUpFlags);
- np->wolenabled = 0;
- }
- if (wolinfo->wolopts & WAKE_MAGIC) {
- writel(NVREG_WAKEUPFLAGS_ENABLE, base + NvRegWakeUpFlags);
- np->wolenabled = 1;
- }
- spin_unlock_irq(&np->lock);
- return 0;
-}
-
-static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
- int adv;
-
- spin_lock_irq(&np->lock);
- ecmd->port = PORT_MII;
- if (!netif_running(dev)) {
- /* We do not track link speed / duplex setting if the
- * interface is disabled. Force a link check */
- nv_update_linkspeed(dev);
- }
- switch(np->linkspeed & (NVREG_LINKSPEED_MASK)) {
- case NVREG_LINKSPEED_10:
- ecmd->speed = SPEED_10;
- break;
- case NVREG_LINKSPEED_100:
- ecmd->speed = SPEED_100;
- break;
- case NVREG_LINKSPEED_1000:
- ecmd->speed = SPEED_1000;
- break;
- }
- ecmd->duplex = DUPLEX_HALF;
- if (np->duplex)
- ecmd->duplex = DUPLEX_FULL;
-
- ecmd->autoneg = np->autoneg;
-
- ecmd->advertising = ADVERTISED_MII;
- if (np->autoneg) {
- ecmd->advertising |= ADVERTISED_Autoneg;
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- } else {
- adv = np->fixed_mode;
- }
- if (adv & ADVERTISE_10HALF)
- ecmd->advertising |= ADVERTISED_10baseT_Half;
- if (adv & ADVERTISE_10FULL)
- ecmd->advertising |= ADVERTISED_10baseT_Full;
- if (adv & ADVERTISE_100HALF)
- ecmd->advertising |= ADVERTISED_100baseT_Half;
- if (adv & ADVERTISE_100FULL)
- ecmd->advertising |= ADVERTISED_100baseT_Full;
- if (np->autoneg && np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- if (adv & ADVERTISE_1000FULL)
- ecmd->advertising |= ADVERTISED_1000baseT_Full;
- }
-
- ecmd->supported = (SUPPORTED_Autoneg |
- SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
- SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
- SUPPORTED_MII);
- if (np->gigabit == PHY_GIGABIT)
- ecmd->supported |= SUPPORTED_1000baseT_Full;
-
- ecmd->phy_address = np->phyaddr;
- ecmd->transceiver = XCVR_EXTERNAL;
-
- /* ignore maxtxpkt, maxrxpkt for now */
- spin_unlock_irq(&np->lock);
- return 0;
-}
-
-static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (ecmd->port != PORT_MII)
- return -EINVAL;
- if (ecmd->transceiver != XCVR_EXTERNAL)
- return -EINVAL;
- if (ecmd->phy_address != np->phyaddr) {
- /* TODO: support switching between multiple phys. Should be
- * trivial, but not enabled due to lack of test hardware. */
- return -EINVAL;
- }
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- u32 mask;
-
- mask = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
- ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full;
- if (np->gigabit == PHY_GIGABIT)
- mask |= ADVERTISED_1000baseT_Full;
-
- if ((ecmd->advertising & mask) == 0)
- return -EINVAL;
-
- } else if (ecmd->autoneg == AUTONEG_DISABLE) {
-		/* Note: with autonegotiation disabled, forcing speed 1000 is
-		 * intentionally not allowed - no one should need that. */
-
- if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100)
- return -EINVAL;
- if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL)
- return -EINVAL;
- } else {
- return -EINVAL;
- }
-
- spin_lock_irq(&np->lock);
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- int adv, bmcr;
-
- np->autoneg = 1;
-
- /* advertise only what has been requested */
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4);
- if (ecmd->advertising & ADVERTISED_10baseT_Half)
- adv |= ADVERTISE_10HALF;
- if (ecmd->advertising & ADVERTISED_10baseT_Full)
- adv |= ADVERTISE_10FULL;
- if (ecmd->advertising & ADVERTISED_100baseT_Half)
- adv |= ADVERTISE_100HALF;
- if (ecmd->advertising & ADVERTISED_100baseT_Full)
- adv |= ADVERTISE_100FULL;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- if (ecmd->advertising & ADVERTISED_1000baseT_Full)
- adv |= ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_1000BT_CR, adv);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
-
- } else {
- int adv, bmcr;
-
- np->autoneg = 0;
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4);
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_10HALF;
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_10FULL;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_100HALF;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_100FULL;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
- np->fixed_mode = adv;
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_1000BT_CR, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_1000BT_CR, adv);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
-		bmcr &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_FULLDPLX);
- if (adv & (ADVERTISE_10FULL|ADVERTISE_100FULL))
- bmcr |= BMCR_FULLDPLX;
- if (adv & (ADVERTISE_100HALF|ADVERTISE_100FULL))
- bmcr |= BMCR_SPEED100;
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
-
- if (netif_running(dev)) {
- /* Wait a bit and then reconfigure the nic. */
- udelay(10);
- nv_linkchange(dev);
- }
- }
- spin_unlock_irq(&np->lock);
-
- return 0;
-}
-
-#define FORCEDETH_REGS_VER 1
-
-static int nv_get_regs_len(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- return np->register_size;
-}
-
-static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 *rbuf = buf;
- int i;
-
- regs->version = FORCEDETH_REGS_VER;
- spin_lock_irq(&np->lock);
-	for (i = 0; i < np->register_size/sizeof(u32); i++)
- rbuf[i] = readl(base + i*sizeof(u32));
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_nway_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int ret;
-
- spin_lock_irq(&np->lock);
- if (np->autoneg) {
- int bmcr;
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
-
- ret = 0;
- } else {
- ret = -EINVAL;
- }
- spin_unlock_irq(&np->lock);
-
- return ret;
-}
-
-#ifdef NETIF_F_TSO
-static int nv_set_tso(struct net_device *dev, u32 value)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if ((np->driver_data & DEV_HAS_CHECKSUM))
- return ethtool_op_set_tso(dev, value);
- else
- return value ? -EOPNOTSUPP : 0;
-}
-#endif
-
-static struct ethtool_ops ops = {
- .get_drvinfo = nv_get_drvinfo,
- .get_link = ethtool_op_get_link,
- .get_wol = nv_get_wol,
- .set_wol = nv_set_wol,
- .get_settings = nv_get_settings,
- .set_settings = nv_set_settings,
- .get_regs_len = nv_get_regs_len,
- .get_regs = nv_get_regs,
- .nway_reset = nv_nway_reset,
- .get_perm_addr = ethtool_op_get_perm_addr,
-#ifdef NETIF_F_TSO
- .get_tso = ethtool_op_get_tso,
- .set_tso = nv_set_tso
-#endif
-};
-
-static void nv_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- spin_lock_irq(&np->lock);
-
- /* save vlan group */
- np->vlangrp = grp;
-
- if (grp) {
- /* enable vlan on MAC */
- np->txrxctl_bits |= NVREG_TXRXCTL_VLANSTRIP | NVREG_TXRXCTL_VLANINS;
- } else {
- /* disable vlan on MAC */
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANSTRIP;
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANINS;
- }
-
- writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
-
- spin_unlock_irq(&np->lock);
-};
-
-static void nv_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid)
-{
- /* nothing to do */
-};
-
-static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask)
-{
- u8 __iomem *base = get_hwbase(dev);
- int i;
- u32 msixmap = 0;
-
-	/* Each interrupt bit can be mapped to an MSIX vector (4 bits).
- * MSIXMap0 represents the first 8 interrupts and MSIXMap1 represents
- * the remaining 8 interrupts.
- */
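-	/* Illustrative example (not from the original source): mapping vector 2
-	 * to irqmask 0x00000021 (bits 0 and 5) yields msixmap = 0x00200002 for
-	 * MSIXMap0: nibble 0 and nibble 5 each hold the vector number 2. */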
- for (i = 0; i < 8; i++) {
- if ((irqmask >> i) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap0) | msixmap, base + NvRegMSIXMap0);
-
- msixmap = 0;
- for (i = 0; i < 8; i++) {
- if ((irqmask >> (i + 8)) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap1) | msixmap, base + NvRegMSIXMap1);
-}
-
-static int nv_request_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int i;
-
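-	/* Interrupt setup falls back in stages: MSI-X (split rx/tx/other
-	 * vectors in throughput mode, otherwise one shared vector), then MSI,
-	 * then the legacy INTx line. */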
- if (np->msi_flags & NV_MSI_X_CAPABLE) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- np->msi_x_entry[i].entry = i;
- }
- if ((ret = pci_enable_msix(np->pci_dev, np->msi_x_entry, (np->msi_flags & NV_MSI_X_VECTORS_MASK))) == 0) {
- np->msi_flags |= NV_MSI_X_ENABLED;
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) {
- /* Request irq for rx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
- /* Request irq for tx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_rx;
- }
- /* Request irq for link and timer handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_tx;
- }
- /* map interrupts to their respective vector */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER);
- } else {
- /* Request irq for all interrupts */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- }
- }
- }
- if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) {
- if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
- np->msi_flags |= NV_MSI_ENABLED;
- if (request_irq(np->pci_dev->irq, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msi(np->pci_dev);
- np->msi_flags &= ~NV_MSI_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIMap0);
- writel(0, base + NvRegMSIMap1);
- /* enable msi vector 0 */
- writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask);
- }
- }
- if (ret != 0) {
- if (request_irq(np->pci_dev->irq, &nv_nic_irq, SA_SHIRQ, dev->name, dev) != 0)
- goto out_err;
- }
-
- return 0;
-out_free_tx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev);
-out_free_rx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev);
-out_err:
- return 1;
-}
-
-static void nv_free_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- int i;
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- free_irq(np->msi_x_entry[i].vector, dev);
- }
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- } else {
- free_irq(np->pci_dev->irq, dev);
- if (np->msi_flags & NV_MSI_ENABLED) {
- pci_disable_msi(np->pci_dev);
- np->msi_flags &= ~NV_MSI_ENABLED;
- }
- }
-}
-
-static int nv_open(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int oom, i;
-
- dprintk(KERN_DEBUG "nv_open: begin\n");
-
- /* 1) erase previous misconfiguration */
- if (np->driver_data & DEV_HAS_POWER_CNTRL)
- nv_mac_reset(dev);
- /* 4.1-1: stop adapter: ignored, 4.3 seems to be overkill */
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(0, base + NvRegPacketFilterFlags);
-
- writel(0, base + NvRegTransmitterControl);
- writel(0, base + NvRegReceiverControl);
-
- writel(0, base + NvRegAdapterControl);
-
- /* 2) initialize descriptor rings */
- set_bufsize(dev);
- oom = nv_init_ring(dev);
-
- writel(0, base + NvRegLinkSpeed);
- writel(0, base + NvRegUnknownTransmitterReg);
- nv_txrx_reset(dev);
- writel(0, base + NvRegUnknownSetupReg6);
-
- np->in_shutdown = 0;
-
- /* 3) set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* 4) give hw rings */
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((RX_RING-1) << NVREG_RINGSZ_RXSHIFT) + ((TX_RING-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
-
- /* 5) continue setup */
- writel(np->linkspeed, base + NvRegLinkSpeed);
- writel(NVREG_UNKSETUP3_VAL1, base + NvRegUnknownSetupReg3);
- writel(np->txrxctl_bits, base + NvRegTxRxControl);
- writel(np->vlanctl_bits, base + NvRegVlanControl);
- pci_push(base);
- writel(NVREG_TXRXCTL_BIT1|np->txrxctl_bits, base + NvRegTxRxControl);
- reg_delay(dev, NvRegUnknownSetupReg5, NVREG_UNKSETUP5_BIT31, NVREG_UNKSETUP5_BIT31,
- NV_SETUP5_DELAY, NV_SETUP5_DELAYMAX,
- KERN_INFO "open: SetupReg5, Bit 31 remained off\n");
-
- writel(0, base + NvRegUnknownSetupReg4);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
-
- /* 6) continue setup */
- writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1);
- writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus);
- writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags);
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
-
- writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus);
- get_random_bytes(&i, sizeof(i));
- writel(NVREG_RNDSEED_FORCE | (i&NVREG_RNDSEED_MASK), base + NvRegRandomSeed);
- writel(NVREG_UNKSETUP1_VAL, base + NvRegUnknownSetupReg1);
- writel(NVREG_UNKSETUP2_VAL, base + NvRegUnknownSetupReg2);
- if (poll_interval == -1) {
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT)
- writel(NVREG_POLL_DEFAULT_THROUGHPUT, base + NvRegPollingInterval);
- else
- writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
- }
- else
- writel(poll_interval & 0xFFFF, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
- writel((np->phyaddr << NVREG_ADAPTCTL_PHYSHIFT)|NVREG_ADAPTCTL_PHYVALID|NVREG_ADAPTCTL_RUNNING,
- base + NvRegAdapterControl);
- writel(NVREG_MIISPEED_BIT8|NVREG_MIIDELAY, base + NvRegMIISpeed);
- writel(NVREG_UNKSETUP4_VAL, base + NvRegUnknownSetupReg4);
- writel(NVREG_WAKEUPFLAGS_VAL, base + NvRegWakeUpFlags);
-
- i = readl(base + NvRegPowerState);
- if ( (i & NVREG_POWERSTATE_POWEREDUP) == 0)
- writel(NVREG_POWERSTATE_POWEREDUP|i, base + NvRegPowerState);
-
- pci_push(base);
- udelay(10);
- writel(readl(base + NvRegPowerState) | NVREG_POWERSTATE_VALID, base + NvRegPowerState);
-
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- pci_push(base);
-
- if (nv_request_irq(dev)) {
- goto out_drain;
- }
-
- /* ask for interrupts */
- nv_enable_hw_interrupts(dev, np->irqmask);
-
- spin_lock_irq(&np->lock);
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags);
- /* One manual link speed update: Interrupts are enabled, future link
- * speed changes cause interrupts and are handled by nv_link_irq().
- */
- {
- u32 miistat;
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat);
- }
- /* set linkspeed to invalid value, thus force nv_update_linkspeed
- * to init hw */
- np->linkspeed = 0;
- ret = nv_update_linkspeed(dev);
- nv_start_rx(dev);
- nv_start_tx(dev);
- netif_start_queue(dev);
- if (ret) {
- netif_carrier_on(dev);
- } else {
-		printk(KERN_INFO "%s: no link during initialization.\n", dev->name);
- netif_carrier_off(dev);
- }
- if (oom)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
-
- return 0;
-out_drain:
- drain_ring(dev);
- return ret;
-}
-
-static int nv_close(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base;
-
- spin_lock_irq(&np->lock);
- np->in_shutdown = 1;
- spin_unlock_irq(&np->lock);
- synchronize_irq(dev->irq);
-
- del_timer_sync(&np->oom_kick);
- del_timer_sync(&np->nic_poll);
-
- netif_stop_queue(dev);
- spin_lock_irq(&np->lock);
- nv_stop_tx(dev);
- nv_stop_rx(dev);
- nv_txrx_reset(dev);
-
- /* disable interrupts on the nic or we will lock up */
- base = get_hwbase(dev);
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- dprintk(KERN_INFO "%s: Irqmask is zero again\n", dev->name);
-
- spin_unlock_irq(&np->lock);
-
- nv_free_irq(dev);
-
- drain_ring(dev);
-
- if (np->wolenabled)
- nv_start_rx(dev);
-
- /* special op: write back the misordered MAC address - otherwise
- * the next nv_probe would see a wrong address.
- */
- writel(np->orig_mac[0], base + NvRegMacAddrA);
- writel(np->orig_mac[1], base + NvRegMacAddrB);
-
- /* FIXME: power down nic */
-
- return 0;
-}
-
-static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
-{
- struct net_device *dev;
- struct fe_priv *np;
- unsigned long addr;
- u8 __iomem *base;
- int err, i;
- u32 powerstate;
-
- dev = alloc_etherdev(sizeof(struct fe_priv));
- err = -ENOMEM;
- if (!dev)
- goto out;
-
- np = netdev_priv(dev);
- np->pci_dev = pci_dev;
- spin_lock_init(&np->lock);
- SET_MODULE_OWNER(dev);
- SET_NETDEV_DEV(dev, &pci_dev->dev);
-
- init_timer(&np->oom_kick);
- np->oom_kick.data = (unsigned long) dev;
- np->oom_kick.function = &nv_do_rx_refill; /* timer handler */
- init_timer(&np->nic_poll);
- np->nic_poll.data = (unsigned long) dev;
- np->nic_poll.function = &nv_do_nic_poll; /* timer handler */
-
- err = pci_enable_device(pci_dev);
- if (err) {
- printk(KERN_INFO "forcedeth: pci_enable_dev failed (%d) for device %s\n",
- err, pci_name(pci_dev));
- goto out_free;
- }
-
- pci_set_master(pci_dev);
-
- err = pci_request_regions(pci_dev, DRV_NAME);
- if (err < 0)
- goto out_disable;
-
- if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL))
- np->register_size = NV_PCI_REGSZ_VER2;
- else
- np->register_size = NV_PCI_REGSZ_VER1;
-
- err = -EINVAL;
- addr = 0;
- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- dprintk(KERN_DEBUG "%s: resource %d start %p len %ld flags 0x%08lx.\n",
- pci_name(pci_dev), i, (void*)pci_resource_start(pci_dev, i),
- pci_resource_len(pci_dev, i),
- pci_resource_flags(pci_dev, i));
- if (pci_resource_flags(pci_dev, i) & IORESOURCE_MEM &&
- pci_resource_len(pci_dev, i) >= np->register_size) {
- addr = pci_resource_start(pci_dev, i);
- break;
- }
- }
- if (i == DEVICE_COUNT_RESOURCE) {
- printk(KERN_INFO "forcedeth: Couldn't find register window for device %s.\n",
- pci_name(pci_dev));
- goto out_relreg;
- }
-
- /* copy of driver data */
- np->driver_data = id->driver_data;
-
- /* handle different descriptor versions */
- if (id->driver_data & DEV_HAS_HIGH_DMA) {
- /* packet format 3: supports 40-bit addressing */
- np->desc_ver = DESC_VER_3;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_3;
- if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n",
- pci_name(pci_dev));
- } else {
- dev->features |= NETIF_F_HIGHDMA;
- printk(KERN_INFO "forcedeth: using HIGHDMA\n");
- }
- if (pci_set_consistent_dma_mask(pci_dev, 0x0000007fffffffffULL)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA (consistent) failed for device %s.\n",
- pci_name(pci_dev));
- }
- } else if (id->driver_data & DEV_HAS_LARGEDESC) {
- /* packet format 2: supports jumbo frames */
- np->desc_ver = DESC_VER_2;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_2;
- } else {
- /* original packet format */
- np->desc_ver = DESC_VER_1;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_1;
- }
-
- np->pkt_limit = NV_PKTLIMIT_1;
- if (id->driver_data & DEV_HAS_LARGEDESC)
- np->pkt_limit = NV_PKTLIMIT_2;
-
- if (id->driver_data & DEV_HAS_CHECKSUM) {
- np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
- dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;
-#ifdef NETIF_F_TSO
- dev->features |= NETIF_F_TSO;
-#endif
- }
-
- np->vlanctl_bits = 0;
- if (id->driver_data & DEV_HAS_VLAN) {
- np->vlanctl_bits = NVREG_VLANCONTROL_ENABLE;
- dev->features |= NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX;
- dev->vlan_rx_register = nv_vlan_rx_register;
- dev->vlan_rx_kill_vid = nv_vlan_rx_kill_vid;
- }
-
- np->msi_flags = 0;
- if ((id->driver_data & DEV_HAS_MSI) && !disable_msi) {
- np->msi_flags |= NV_MSI_CAPABLE;
- }
- if ((id->driver_data & DEV_HAS_MSI_X) && !disable_msix) {
- np->msi_flags |= NV_MSI_X_CAPABLE;
- }
-
- err = -ENOMEM;
- np->base = ioremap(addr, np->register_size);
- if (!np->base)
- goto out_relreg;
- dev->base_addr = (unsigned long)np->base;
-
- dev->irq = pci_dev->irq;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc) * (RX_RING + TX_RING),
- &np->ring_addr);
- if (!np->rx_ring.orig)
- goto out_unmap;
- np->tx_ring.orig = &np->rx_ring.orig[RX_RING];
- } else {
- np->rx_ring.ex = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc_ex) * (RX_RING + TX_RING),
- &np->ring_addr);
- if (!np->rx_ring.ex)
- goto out_unmap;
- np->tx_ring.ex = &np->rx_ring.ex[RX_RING];
- }
-
- dev->open = nv_open;
- dev->stop = nv_close;
- dev->hard_start_xmit = nv_start_xmit;
- dev->get_stats = nv_get_stats;
- dev->change_mtu = nv_change_mtu;
- dev->set_mac_address = nv_set_mac_address;
- dev->set_multicast_list = nv_set_multicast;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = nv_poll_controller;
-#endif
- SET_ETHTOOL_OPS(dev, &ops);
- dev->tx_timeout = nv_tx_timeout;
- dev->watchdog_timeo = NV_WATCHDOG_TIMEO;
-
- pci_set_drvdata(pci_dev, dev);
-
- /* read the mac address */
- base = get_hwbase(dev);
- np->orig_mac[0] = readl(base + NvRegMacAddrA);
- np->orig_mac[1] = readl(base + NvRegMacAddrB);
-
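-	/* The registers as set up by the BIOS hold the MAC address in reversed
-	 * byte order compared to how nv_copy_mac_to_hw() programs it; undo that
-	 * here. nv_close() writes the original values back so a subsequent
-	 * probe sees the same registers. */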
- dev->dev_addr[0] = (np->orig_mac[1] >> 8) & 0xff;
- dev->dev_addr[1] = (np->orig_mac[1] >> 0) & 0xff;
- dev->dev_addr[2] = (np->orig_mac[0] >> 24) & 0xff;
- dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff;
- dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff;
- dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff;
- memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
-
- if (!is_valid_ether_addr(dev->perm_addr)) {
- /*
- * Bad mac address. At least one bios sets the mac address
- * to 01:23:45:67:89:ab
- */
- printk(KERN_ERR "%s: Invalid Mac address detected: %02x:%02x:%02x:%02x:%02x:%02x\n",
- pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
- printk(KERN_ERR "Please complain to your hardware vendor. Switching to a random MAC.\n");
- dev->dev_addr[0] = 0x00;
- dev->dev_addr[1] = 0x00;
- dev->dev_addr[2] = 0x6c;
- get_random_bytes(&dev->dev_addr[3], 3);
- }
-
- dprintk(KERN_DEBUG "%s: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
-
- /* disable WOL */
- writel(0, base + NvRegWakeUpFlags);
- np->wolenabled = 0;
-
- if (id->driver_data & DEV_HAS_POWER_CNTRL) {
- u8 revision_id;
- pci_read_config_byte(pci_dev, PCI_REVISION_ID, &revision_id);
-
- /* take phy and nic out of low power mode */
- powerstate = readl(base + NvRegPowerState2);
- powerstate &= ~NVREG_POWERSTATE2_POWERUP_MASK;
- if ((id->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 ||
- id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) &&
- revision_id >= 0xA3)
- powerstate |= NVREG_POWERSTATE2_POWERUP_REV_A3;
- writel(powerstate, base + NvRegPowerState2);
- }
-
- if (np->desc_ver == DESC_VER_1) {
- np->tx_flags = NV_TX_VALID;
- } else {
- np->tx_flags = NV_TX2_VALID;
- }
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) {
- np->irqmask = NVREG_IRQMASK_THROUGHPUT;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0003;
- } else {
- np->irqmask = NVREG_IRQMASK_CPU;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0001;
- }
-
- if (id->driver_data & DEV_NEED_TIMERIRQ)
- np->irqmask |= NVREG_IRQ_TIMER;
- if (id->driver_data & DEV_NEED_LINKTIMER) {
- dprintk(KERN_INFO "%s: link timer on.\n", pci_name(pci_dev));
- np->need_linktimer = 1;
- np->link_timeout = jiffies + LINK_TIMEOUT;
- } else {
- dprintk(KERN_INFO "%s: link timer off.\n", pci_name(pci_dev));
- np->need_linktimer = 0;
- }
-
- /* find a suitable phy */
- for (i = 1; i <= 32; i++) {
- int id1, id2;
- int phyaddr = i & 0x1F;
-
- spin_lock_irq(&np->lock);
- id1 = mii_rw(dev, phyaddr, MII_PHYSID1, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id1 < 0 || id1 == 0xffff)
- continue;
- spin_lock_irq(&np->lock);
- id2 = mii_rw(dev, phyaddr, MII_PHYSID2, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id2 < 0 || id2 == 0xffff)
- continue;
-
- id1 = (id1 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT;
- id2 = (id2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT;
- dprintk(KERN_DEBUG "%s: open: Found PHY %04x:%04x at address %d.\n",
- pci_name(pci_dev), id1, id2, phyaddr);
- np->phyaddr = phyaddr;
- np->phy_oui = id1 | id2;
- break;
- }
- if (i == 33) {
- printk(KERN_INFO "%s: open: Could not find a valid PHY.\n",
- pci_name(pci_dev));
- goto out_freering;
- }
-
- /* reset it */
- phy_init(dev);
-
- /* set default link speed settings */
- np->linkspeed = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- np->duplex = 0;
- np->autoneg = 1;
-
- err = register_netdev(dev);
- if (err) {
- printk(KERN_INFO "forcedeth: unable to register netdev: %d\n", err);
- goto out_freering;
- }
- printk(KERN_INFO "%s: forcedeth.c: subsystem: %05x:%04x bound to %s\n",
- dev->name, pci_dev->subsystem_vendor, pci_dev->subsystem_device,
- pci_name(pci_dev));
-
- return 0;
-
-out_freering:
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING),
- np->rx_ring.orig, np->ring_addr);
- else
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (RX_RING + TX_RING),
- np->rx_ring.ex, np->ring_addr);
- pci_set_drvdata(pci_dev, NULL);
-out_unmap:
- iounmap(get_hwbase(dev));
-out_relreg:
- pci_release_regions(pci_dev);
-out_disable:
- pci_disable_device(pci_dev);
-out_free:
- free_netdev(dev);
-out:
- return err;
-}
-
-static void __devexit nv_remove(struct pci_dev *pci_dev)
-{
- struct net_device *dev = pci_get_drvdata(pci_dev);
- struct fe_priv *np = netdev_priv(dev);
-
- unregister_netdev(dev);
-
- /* free all structures */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (RX_RING + TX_RING), np->rx_ring.orig, np->ring_addr);
- else
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (RX_RING + TX_RING), np->rx_ring.ex, np->ring_addr);
- iounmap(get_hwbase(dev));
- pci_release_regions(pci_dev);
- pci_disable_device(pci_dev);
- free_netdev(dev);
- pci_set_drvdata(pci_dev, NULL);
-}
-
-static struct pci_device_id pci_tbl[] = {
- { /* nForce Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce2 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_2),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_3),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_4),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_5),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_6),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_7),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_8),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_9),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_10),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_11),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_12),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_13),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_14),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL,
- },
- {0,},
-};
-
-static struct pci_driver driver = {
- .name = "forcedeth",
- .id_table = pci_tbl,
- .probe = nv_probe,
- .remove = __devexit_p(nv_remove),
-};
-
-
-static int __init init_nic(void)
-{
- printk(KERN_INFO "forcedeth.c: Reverse Engineered nForce ethernet driver. Version %s.\n", FORCEDETH_VERSION);
- return pci_module_init(&driver);
-}
-
-static void __exit exit_nic(void)
-{
- pci_unregister_driver(&driver);
-}
-
-module_param(max_interrupt_work, int, 0);
-MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt");
-module_param(optimization_mode, int, 0);
-MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer.");
-module_param(poll_interval, int, 0);
-MODULE_PARM_DESC(poll_interval, "Interval determines how frequently the timer interrupt is generated, following [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535.");
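-/* Illustrative reading of the formula above (not part of the original source):
- * a poll_interval of 100 corresponds to roughly 100 * 1024 / 100 = 1024
- * microseconds between timer interrupts. */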
-module_param(disable_msi, int, 0);
-MODULE_PARM_DESC(disable_msi, "Disable MSI interrupts by setting to 1.");
-module_param(disable_msix, int, 0);
-MODULE_PARM_DESC(disable_msix, "Disable MSIX interrupts by setting to 1.");
-
-MODULE_AUTHOR("Manfred Spraul <manfred@colorfullife.com>");
-MODULE_DESCRIPTION("Reverse Engineered nForce ethernet driver");
-MODULE_LICENSE("GPL");
-
-MODULE_DEVICE_TABLE(pci, pci_tbl);
-
-module_init(init_nic);
-module_exit(exit_nic);
--- a/devices/forcedeth-2.6.19-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,4832 +0,0 @@
-/*
- * forcedeth: Ethernet driver for NVIDIA nForce media access controllers.
- *
- * Note: This driver is a cleanroom reimplementation based on reverse
- * engineered documentation written by Carl-Daniel Hailfinger
- * and Andrew de Quincey. It's neither supported nor endorsed
- * by NVIDIA Corp. Use at your own risk.
- *
- * NVIDIA, nForce and other NVIDIA marks are trademarks or registered
- * trademarks of NVIDIA Corporation in the United States and other
- * countries.
- *
- * Copyright (C) 2003,4,5 Manfred Spraul
- * Copyright (C) 2004 Andrew de Quincey (wol support)
- * Copyright (C) 2004 Carl-Daniel Hailfinger (invalid MAC handling, insane
- * IRQ rate fixes, bigendian fixes, cleanups, verification)
- * Copyright (c) 2004 NVIDIA Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
- * Changelog:
- * 0.01: 05 Oct 2003: First release that compiles without warnings.
- * 0.02: 05 Oct 2003: Fix bug for nv_drain_tx: do not try to free NULL skbs.
- * Check all PCI BARs for the register window.
- * udelay added to mii_rw.
- * 0.03: 06 Oct 2003: Initialize dev->irq.
- * 0.04: 07 Oct 2003: Initialize np->lock, reduce handled irqs, add printks.
- * 0.05: 09 Oct 2003: printk removed again, irq status print tx_timeout.
- * 0.06: 10 Oct 2003: MAC Address read updated, pff flag generation updated,
- * irq mask updated
- * 0.07: 14 Oct 2003: Further irq mask updates.
- * 0.08: 20 Oct 2003: rx_desc.Length initialization added, nv_alloc_rx refill
- * added into irq handler, NULL check for drain_ring.
- * 0.09: 20 Oct 2003: Basic link speed irq implementation. Only handle the
- * requested interrupt sources.
- * 0.10: 20 Oct 2003: First cleanup for release.
- * 0.11: 21 Oct 2003: hexdump for tx added, rx buffer sizes increased.
- * MAC Address init fix, set_multicast cleanup.
- * 0.12: 23 Oct 2003: Cleanups for release.
- * 0.13: 25 Oct 2003: Limit for concurrent tx packets increased to 10.
- * Set link speed correctly. start rx before starting
- * tx (nv_start_rx sets the link speed).
- * 0.14: 25 Oct 2003: Nic dependent irq mask.
- * 0.15: 08 Nov 2003: fix smp deadlock with set_multicast_list during
- * open.
- * 0.16: 15 Nov 2003: include file cleanup for ppc64, rx buffer size
- * increased to 1628 bytes.
- * 0.17: 16 Nov 2003: undo rx buffer size increase. Subtract 1 from
- * the tx length.
- * 0.18: 17 Nov 2003: fix oops due to late initialization of dev_stats
- * 0.19: 29 Nov 2003: Handle RxNoBuf, detect & handle invalid mac
- * addresses, really stop rx if already running
- * in nv_start_rx, clean up a bit.
- * 0.20: 07 Dec 2003: alloc fixes
- * 0.21: 12 Jan 2004: additional alloc fix, nic polling fix.
- * 0.22: 19 Jan 2004: reprogram timer to a sane rate, avoid lockup
- * on close.
- * 0.23: 26 Jan 2004: various small cleanups
- * 0.24: 27 Feb 2004: make driver even less anonymous in backtraces
- * 0.25: 09 Mar 2004: wol support
- * 0.26: 03 Jun 2004: netdriver specific annotation, sparse-related fixes
- * 0.27: 19 Jun 2004: Gigabit support, new descriptor rings,
- * added CK804/MCP04 device IDs, code fixes
- * for registers, link status and other minor fixes.
- * 0.28: 21 Jun 2004: Big cleanup, making driver mostly endian safe
- * 0.29: 31 Aug 2004: Add backup timer for link change notification.
- * 0.30: 25 Sep 2004: rx checksum support for nf 250 Gb. Add rx reset
- * into nv_close, otherwise reenabling for wol can
- * cause DMA to kfree'd memory.
- * 0.31: 14 Nov 2004: ethtool support for getting/setting link
- * capabilities.
- * 0.32: 16 Apr 2005: RX_ERROR4 handling added.
- * 0.33: 16 May 2005: Support for MCP51 added.
- * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics.
- * 0.35: 26 Jun 2005: Support for MCP55 added.
- * 0.36: 28 Jun 2005: Add jumbo frame support.
- * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list
- * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of
- * per-packet flags.
- * 0.39: 18 Jul 2005: Add 64bit descriptor support.
- * 0.40: 19 Jul 2005: Add support for mac address change.
- * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead
- * of nv_remove
- * 0.42: 06 Aug 2005: Fix lack of link speed initialization
- * in the second (and later) nv_open call
- * 0.43: 10 Aug 2005: Add support for tx checksum.
- * 0.44: 20 Aug 2005: Add support for scatter gather and segmentation.
- * 0.45: 18 Sep 2005: Remove nv_stop/start_rx from every link check
- * 0.46: 20 Oct 2005: Add irq optimization modes.
- * 0.47: 26 Oct 2005: Add phyaddr 0 in phy scan.
- * 0.48: 24 Dec 2005: Disable TSO, bugfix for pci_map_single
- * 0.49: 10 Dec 2005: Fix tso for large buffers.
- * 0.50: 20 Jan 2006: Add 8021pq tagging support.
- * 0.51: 20 Jan 2006: Add 64bit consistent memory allocation for rings.
- * 0.52: 20 Jan 2006: Add MSI/MSIX support.
- * 0.53: 19 Mar 2006: Fix init from low power mode and add hw reset.
- * 0.54: 21 Mar 2006: Fix spin locks for multi irqs and cleanup.
- * 0.55: 22 Mar 2006: Add flow control (pause frame).
- * 0.56: 22 Mar 2006: Additional ethtool config and moduleparam support.
- * 0.57: 14 May 2006: Mac address set in probe/remove and order corrections.
- *
- * Known bugs:
- * We suspect that on some hardware no TX done interrupts are generated.
- * This means recovery from netif_stop_queue only happens if the hw timer
- * interrupt fires (100 times/second, configurable with NVREG_POLL_DEFAULT)
- * and the timer is active in the IRQMask, or if a rx packet arrives by chance.
- * If your hardware reliably generates tx done interrupts, then you can remove
- * DEV_NEED_TIMERIRQ from the driver_data flags.
- * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few
- * superfluous timer interrupts from the nic.
- */
-#ifdef CONFIG_FORCEDETH_NAPI
-#define DRIVERNAPI "-NAPI"
-#else
-#define DRIVERNAPI
-#endif
-#define FORCEDETH_VERSION "0.57"
-#define DRV_NAME "forcedeth"
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/delay.h>
-#include <linux/spinlock.h>
-#include <linux/ethtool.h>
-#include <linux/timer.h>
-#include <linux/skbuff.h>
-#include <linux/mii.h>
-#include <linux/random.h>
-#include <linux/init.h>
-#include <linux/if_vlan.h>
-#include <linux/dma-mapping.h>
-
-#include <asm/irq.h>
-#include <asm/io.h>
-#include <asm/uaccess.h>
-#include <asm/system.h>
-
-#include "../globals.h"
-#include "ecdev.h"
-
-#if 0
-#define dprintk printk
-#else
-#define dprintk(x...) do { } while (0)
-#endif
-
-
-/*
- * Hardware access:
- */
-
-#define DEV_NEED_TIMERIRQ 0x0001 /* set the timer irq flag in the irq mask */
-#define DEV_NEED_LINKTIMER 0x0002 /* poll link settings. Relies on the timer irq */
-#define DEV_HAS_LARGEDESC 0x0004 /* device supports jumbo frames and needs packet format 2 */
-#define DEV_HAS_HIGH_DMA 0x0008 /* device supports 64bit dma */
-#define DEV_HAS_CHECKSUM 0x0010 /* device supports tx and rx checksum offloads */
-#define DEV_HAS_VLAN 0x0020 /* device supports vlan tagging and striping */
-#define DEV_HAS_MSI 0x0040 /* device supports MSI */
-#define DEV_HAS_MSI_X 0x0080 /* device supports MSI-X */
-#define DEV_HAS_POWER_CNTRL 0x0100 /* device supports power savings */
-#define DEV_HAS_PAUSEFRAME_TX 0x0200 /* device supports tx pause frames */
-#define DEV_HAS_STATISTICS 0x0400 /* device supports hw statistics */
-#define DEV_HAS_TEST_EXTENDED 0x0800 /* device supports extended diagnostic test */
-
-enum {
- NvRegIrqStatus = 0x000,
-#define NVREG_IRQSTAT_MIIEVENT 0x040
-#define NVREG_IRQSTAT_MASK 0x1ff
- NvRegIrqMask = 0x004,
-#define NVREG_IRQ_RX_ERROR 0x0001
-#define NVREG_IRQ_RX 0x0002
-#define NVREG_IRQ_RX_NOBUF 0x0004
-#define NVREG_IRQ_TX_ERR 0x0008
-#define NVREG_IRQ_TX_OK 0x0010
-#define NVREG_IRQ_TIMER 0x0020
-#define NVREG_IRQ_LINK 0x0040
-#define NVREG_IRQ_RX_FORCED 0x0080
-#define NVREG_IRQ_TX_FORCED 0x0100
-#define NVREG_IRQMASK_THROUGHPUT 0x00df
-#define NVREG_IRQMASK_CPU 0x0040
-#define NVREG_IRQ_TX_ALL (NVREG_IRQ_TX_ERR|NVREG_IRQ_TX_OK|NVREG_IRQ_TX_FORCED)
-#define NVREG_IRQ_RX_ALL (NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_RX_FORCED)
-#define NVREG_IRQ_OTHER (NVREG_IRQ_TIMER|NVREG_IRQ_LINK)
-
-#define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \
- NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \
- NVREG_IRQ_TX_FORCED))
-
- NvRegUnknownSetupReg6 = 0x008,
-#define NVREG_UNKSETUP6_VAL 3
-
-/*
- * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic
- * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms
- */
- NvRegPollingInterval = 0x00c,
-#define NVREG_POLL_DEFAULT_THROUGHPUT 970
-#define NVREG_POLL_DEFAULT_CPU 13
- NvRegMSIMap0 = 0x020,
- NvRegMSIMap1 = 0x024,
- NvRegMSIIrqMask = 0x030,
-#define NVREG_MSI_VECTOR_0_ENABLED 0x01
- NvRegMisc1 = 0x080,
-#define NVREG_MISC1_PAUSE_TX 0x01
-#define NVREG_MISC1_HD 0x02
-#define NVREG_MISC1_FORCE 0x3b0f3c
-
- NvRegMacReset = 0x3c,
-#define NVREG_MAC_RESET_ASSERT 0x0F3
- NvRegTransmitterControl = 0x084,
-#define NVREG_XMITCTL_START 0x01
- NvRegTransmitterStatus = 0x088,
-#define NVREG_XMITSTAT_BUSY 0x01
-
- NvRegPacketFilterFlags = 0x8c,
-#define NVREG_PFF_PAUSE_RX 0x08
-#define NVREG_PFF_ALWAYS 0x7F0000
-#define NVREG_PFF_PROMISC 0x80
-#define NVREG_PFF_MYADDR 0x20
-#define NVREG_PFF_LOOPBACK 0x10
-
- NvRegOffloadConfig = 0x90,
-#define NVREG_OFFLOAD_HOMEPHY 0x601
-#define NVREG_OFFLOAD_NORMAL RX_NIC_BUFSIZE
- NvRegReceiverControl = 0x094,
-#define NVREG_RCVCTL_START 0x01
- NvRegReceiverStatus = 0x98,
-#define NVREG_RCVSTAT_BUSY 0x01
-
- NvRegRandomSeed = 0x9c,
-#define NVREG_RNDSEED_MASK 0x00ff
-#define NVREG_RNDSEED_FORCE 0x7f00
-#define NVREG_RNDSEED_FORCE2 0x2d00
-#define NVREG_RNDSEED_FORCE3 0x7400
-
- NvRegTxDeferral = 0xA0,
-#define NVREG_TX_DEFERRAL_DEFAULT 0x15050f
-#define NVREG_TX_DEFERRAL_RGMII_10_100 0x16070f
-#define NVREG_TX_DEFERRAL_RGMII_1000 0x14050f
- NvRegRxDeferral = 0xA4,
-#define NVREG_RX_DEFERRAL_DEFAULT 0x16
- NvRegMacAddrA = 0xA8,
- NvRegMacAddrB = 0xAC,
- NvRegMulticastAddrA = 0xB0,
-#define NVREG_MCASTADDRA_FORCE 0x01
- NvRegMulticastAddrB = 0xB4,
- NvRegMulticastMaskA = 0xB8,
- NvRegMulticastMaskB = 0xBC,
-
- NvRegPhyInterface = 0xC0,
-#define PHY_RGMII 0x10000000
-
- NvRegTxRingPhysAddr = 0x100,
- NvRegRxRingPhysAddr = 0x104,
- NvRegRingSizes = 0x108,
-#define NVREG_RINGSZ_TXSHIFT 0
-#define NVREG_RINGSZ_RXSHIFT 16
- NvRegTransmitPoll = 0x10c,
-#define NVREG_TRANSMITPOLL_MAC_ADDR_REV 0x00008000
- NvRegLinkSpeed = 0x110,
-#define NVREG_LINKSPEED_FORCE 0x10000
-#define NVREG_LINKSPEED_10 1000
-#define NVREG_LINKSPEED_100 100
-#define NVREG_LINKSPEED_1000 50
-#define NVREG_LINKSPEED_MASK (0xFFF)
- NvRegUnknownSetupReg5 = 0x130,
-#define NVREG_UNKSETUP5_BIT31 (1<<31)
- NvRegTxWatermark = 0x13c,
-#define NVREG_TX_WM_DESC1_DEFAULT 0x0200010
-#define NVREG_TX_WM_DESC2_3_DEFAULT 0x1e08000
-#define NVREG_TX_WM_DESC2_3_1000 0xfe08000
- NvRegTxRxControl = 0x144,
-#define NVREG_TXRXCTL_KICK 0x0001
-#define NVREG_TXRXCTL_BIT1 0x0002
-#define NVREG_TXRXCTL_BIT2 0x0004
-#define NVREG_TXRXCTL_IDLE 0x0008
-#define NVREG_TXRXCTL_RESET 0x0010
-#define NVREG_TXRXCTL_RXCHECK 0x0400
-#define NVREG_TXRXCTL_DESC_1 0
-#define NVREG_TXRXCTL_DESC_2 0x02100
-#define NVREG_TXRXCTL_DESC_3 0x02200
-#define NVREG_TXRXCTL_VLANSTRIP 0x00040
-#define NVREG_TXRXCTL_VLANINS 0x00080
- NvRegTxRingPhysAddrHigh = 0x148,
- NvRegRxRingPhysAddrHigh = 0x14C,
- NvRegTxPauseFrame = 0x170,
-#define NVREG_TX_PAUSEFRAME_DISABLE 0x1ff0080
-#define NVREG_TX_PAUSEFRAME_ENABLE 0x0c00030
- NvRegMIIStatus = 0x180,
-#define NVREG_MIISTAT_ERROR 0x0001
-#define NVREG_MIISTAT_LINKCHANGE 0x0008
-#define NVREG_MIISTAT_MASK 0x000f
-#define NVREG_MIISTAT_MASK2 0x000f
- NvRegUnknownSetupReg4 = 0x184,
-#define NVREG_UNKSETUP4_VAL 8
-
- NvRegAdapterControl = 0x188,
-#define NVREG_ADAPTCTL_START 0x02
-#define NVREG_ADAPTCTL_LINKUP 0x04
-#define NVREG_ADAPTCTL_PHYVALID 0x40000
-#define NVREG_ADAPTCTL_RUNNING 0x100000
-#define NVREG_ADAPTCTL_PHYSHIFT 24
- NvRegMIISpeed = 0x18c,
-#define NVREG_MIISPEED_BIT8 (1<<8)
-#define NVREG_MIIDELAY 5
- NvRegMIIControl = 0x190,
-#define NVREG_MIICTL_INUSE 0x08000
-#define NVREG_MIICTL_WRITE 0x00400
-#define NVREG_MIICTL_ADDRSHIFT 5
- NvRegMIIData = 0x194,
- NvRegWakeUpFlags = 0x200,
-#define NVREG_WAKEUPFLAGS_VAL 0x7770
-#define NVREG_WAKEUPFLAGS_BUSYSHIFT 24
-#define NVREG_WAKEUPFLAGS_ENABLESHIFT 16
-#define NVREG_WAKEUPFLAGS_D3SHIFT 12
-#define NVREG_WAKEUPFLAGS_D2SHIFT 8
-#define NVREG_WAKEUPFLAGS_D1SHIFT 4
-#define NVREG_WAKEUPFLAGS_D0SHIFT 0
-#define NVREG_WAKEUPFLAGS_ACCEPT_MAGPAT 0x01
-#define NVREG_WAKEUPFLAGS_ACCEPT_WAKEUPPAT 0x02
-#define NVREG_WAKEUPFLAGS_ACCEPT_LINKCHANGE 0x04
-#define NVREG_WAKEUPFLAGS_ENABLE 0x1111
-
- NvRegPatternCRC = 0x204,
- NvRegPatternMask = 0x208,
- NvRegPowerCap = 0x268,
-#define NVREG_POWERCAP_D3SUPP (1<<30)
-#define NVREG_POWERCAP_D2SUPP (1<<26)
-#define NVREG_POWERCAP_D1SUPP (1<<25)
- NvRegPowerState = 0x26c,
-#define NVREG_POWERSTATE_POWEREDUP 0x8000
-#define NVREG_POWERSTATE_VALID 0x0100
-#define NVREG_POWERSTATE_MASK 0x0003
-#define NVREG_POWERSTATE_D0 0x0000
-#define NVREG_POWERSTATE_D1 0x0001
-#define NVREG_POWERSTATE_D2 0x0002
-#define NVREG_POWERSTATE_D3 0x0003
- NvRegTxCnt = 0x280,
- NvRegTxZeroReXmt = 0x284,
- NvRegTxOneReXmt = 0x288,
- NvRegTxManyReXmt = 0x28c,
- NvRegTxLateCol = 0x290,
- NvRegTxUnderflow = 0x294,
- NvRegTxLossCarrier = 0x298,
- NvRegTxExcessDef = 0x29c,
- NvRegTxRetryErr = 0x2a0,
- NvRegRxFrameErr = 0x2a4,
- NvRegRxExtraByte = 0x2a8,
- NvRegRxLateCol = 0x2ac,
- NvRegRxRunt = 0x2b0,
- NvRegRxFrameTooLong = 0x2b4,
- NvRegRxOverflow = 0x2b8,
- NvRegRxFCSErr = 0x2bc,
- NvRegRxFrameAlignErr = 0x2c0,
- NvRegRxLenErr = 0x2c4,
- NvRegRxUnicast = 0x2c8,
- NvRegRxMulticast = 0x2cc,
- NvRegRxBroadcast = 0x2d0,
- NvRegTxDef = 0x2d4,
- NvRegTxFrame = 0x2d8,
- NvRegRxCnt = 0x2dc,
- NvRegTxPause = 0x2e0,
- NvRegRxPause = 0x2e4,
- NvRegRxDropFrame = 0x2e8,
- NvRegVlanControl = 0x300,
-#define NVREG_VLANCONTROL_ENABLE 0x2000
- NvRegMSIXMap0 = 0x3e0,
- NvRegMSIXMap1 = 0x3e4,
- NvRegMSIXIrqStatus = 0x3f0,
-
- NvRegPowerState2 = 0x600,
-#define NVREG_POWERSTATE2_POWERUP_MASK 0x0F11
-#define NVREG_POWERSTATE2_POWERUP_REV_A3 0x0001
-};
-
-/* Big endian: should work, but is untested */
-struct ring_desc {
- __le32 buf;
- __le32 flaglen;
-};
-
-struct ring_desc_ex {
- __le32 bufhigh;
- __le32 buflow;
- __le32 txvlan;
- __le32 flaglen;
-};
-
-union ring_type {
- struct ring_desc* orig;
- struct ring_desc_ex* ex;
-};
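-/* The orig layout is used with DESC_VER_1 and DESC_VER_2 descriptors; the
- * larger ex layout (64-bit buffer address plus VLAN word) is used with
- * DESC_VER_3. */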
-
-#define FLAG_MASK_V1 0xffff0000
-#define FLAG_MASK_V2 0xffffc000
-#define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1)
-#define LEN_MASK_V2 (0xffffffff ^ FLAG_MASK_V2)
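-/* The LEN masks are the complement of the FLAG masks: the low 16 bits (v1)
- * resp. the low 14 bits (v2) of flaglen hold the buffer/packet length. */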
-
-#define NV_TX_LASTPACKET (1<<16)
-#define NV_TX_RETRYERROR (1<<19)
-#define NV_TX_FORCED_INTERRUPT (1<<24)
-#define NV_TX_DEFERRED (1<<26)
-#define NV_TX_CARRIERLOST (1<<27)
-#define NV_TX_LATECOLLISION (1<<28)
-#define NV_TX_UNDERFLOW (1<<29)
-#define NV_TX_ERROR (1<<30)
-#define NV_TX_VALID (1<<31)
-
-#define NV_TX2_LASTPACKET (1<<29)
-#define NV_TX2_RETRYERROR (1<<18)
-#define NV_TX2_FORCED_INTERRUPT (1<<30)
-#define NV_TX2_DEFERRED (1<<25)
-#define NV_TX2_CARRIERLOST (1<<26)
-#define NV_TX2_LATECOLLISION (1<<27)
-#define NV_TX2_UNDERFLOW (1<<28)
-/* error and valid are the same for both */
-#define NV_TX2_ERROR (1<<30)
-#define NV_TX2_VALID (1<<31)
-#define NV_TX2_TSO (1<<28)
-#define NV_TX2_TSO_SHIFT 14
-#define NV_TX2_TSO_MAX_SHIFT 14
-#define NV_TX2_TSO_MAX_SIZE (1<<NV_TX2_TSO_MAX_SHIFT)
-#define NV_TX2_CHECKSUM_L3 (1<<27)
-#define NV_TX2_CHECKSUM_L4 (1<<26)
-
-#define NV_TX3_VLAN_TAG_PRESENT (1<<18)
-
-#define NV_RX_DESCRIPTORVALID (1<<16)
-#define NV_RX_MISSEDFRAME (1<<17)
-#define NV_RX_SUBSTRACT1 (1<<18)
-#define NV_RX_ERROR1 (1<<23)
-#define NV_RX_ERROR2 (1<<24)
-#define NV_RX_ERROR3 (1<<25)
-#define NV_RX_ERROR4 (1<<26)
-#define NV_RX_CRCERR (1<<27)
-#define NV_RX_OVERFLOW (1<<28)
-#define NV_RX_FRAMINGERR (1<<29)
-#define NV_RX_ERROR (1<<30)
-#define NV_RX_AVAIL (1<<31)
-
-#define NV_RX2_CHECKSUMMASK (0x1C000000)
-#define NV_RX2_CHECKSUMOK1 (0x10000000)
-#define NV_RX2_CHECKSUMOK2 (0x14000000)
-#define NV_RX2_CHECKSUMOK3 (0x18000000)
-#define NV_RX2_DESCRIPTORVALID (1<<29)
-#define NV_RX2_SUBSTRACT1 (1<<25)
-#define NV_RX2_ERROR1 (1<<18)
-#define NV_RX2_ERROR2 (1<<19)
-#define NV_RX2_ERROR3 (1<<20)
-#define NV_RX2_ERROR4 (1<<21)
-#define NV_RX2_CRCERR (1<<22)
-#define NV_RX2_OVERFLOW (1<<23)
-#define NV_RX2_FRAMINGERR (1<<24)
-/* error and avail are the same for both */
-#define NV_RX2_ERROR (1<<30)
-#define NV_RX2_AVAIL (1<<31)
-
-#define NV_RX3_VLAN_TAG_PRESENT (1<<16)
-#define NV_RX3_VLAN_TAG_MASK (0x0000FFFF)
-
-/* Miscellaneous hardware-related defines: */
-#define NV_PCI_REGSZ_VER1 0x270
-#define NV_PCI_REGSZ_VER2 0x604
-
-/* various timeout delays: all in usec */
-#define NV_TXRX_RESET_DELAY 4
-#define NV_TXSTOP_DELAY1 10
-#define NV_TXSTOP_DELAY1MAX 500000
-#define NV_TXSTOP_DELAY2 100
-#define NV_RXSTOP_DELAY1 10
-#define NV_RXSTOP_DELAY1MAX 500000
-#define NV_RXSTOP_DELAY2 100
-#define NV_SETUP5_DELAY 5
-#define NV_SETUP5_DELAYMAX 50000
-#define NV_POWERUP_DELAY 5
-#define NV_POWERUP_DELAYMAX 5000
-#define NV_MIIBUSY_DELAY 50
-#define NV_MIIPHY_DELAY 10
-#define NV_MIIPHY_DELAYMAX 10000
-#define NV_MAC_RESET_DELAY 64
-
-#define NV_WAKEUPPATTERNS 5
-#define NV_WAKEUPMASKENTRIES 4
-
-/* General driver defaults */
-#define NV_WATCHDOG_TIMEO (5*HZ)
-
-#define RX_RING_DEFAULT 128
-#define TX_RING_DEFAULT 256
-#define RX_RING_MIN 128
-#define TX_RING_MIN 64
-#define RING_MAX_DESC_VER_1 1024
-#define RING_MAX_DESC_VER_2_3 16384
-/*
- * Difference between the get and put pointers for the tx ring.
- * This is used to throttle the amount of data outstanding in the
- * tx ring.
- */
-#define TX_LIMIT_DIFFERENCE 1
-
-/* rx/tx mac addr + type + vlan + align + slack*/
-#define NV_RX_HEADERS (64)
-/* even more slack. */
-#define NV_RX_ALLOC_PAD (64)
-
-/* maximum mtu size */
-#define NV_PKTLIMIT_1 ETH_DATA_LEN /* hard limit not known */
-#define NV_PKTLIMIT_2 9100 /* Actual limit according to NVidia: 9202 */
-
-#define OOM_REFILL (1+HZ/20)
-#define POLL_WAIT (1+HZ/100)
-#define LINK_TIMEOUT (3*HZ)
-#define STATS_INTERVAL (10*HZ)
-
-/*
- * desc_ver values:
- * The nic supports three different descriptor types:
- * - DESC_VER_1: Original
- * - DESC_VER_2: support for jumbo frames.
- * - DESC_VER_3: 64-bit format.
- */
-#define DESC_VER_1 1
-#define DESC_VER_2 2
-#define DESC_VER_3 3
-
-/* PHY defines */
-#define PHY_OUI_MARVELL 0x5043
-#define PHY_OUI_CICADA 0x03f1
-#define PHYID1_OUI_MASK 0x03ff
-#define PHYID1_OUI_SHFT 6
-#define PHYID2_OUI_MASK 0xfc00
-#define PHYID2_OUI_SHFT 10
-#define PHYID2_MODEL_MASK 0x03f0
-#define PHY_MODEL_MARVELL_E3016 0x220
-#define PHY_MARVELL_E3016_INITMASK 0x0300
-#define PHY_INIT1 0x0f000
-#define PHY_INIT2 0x0e00
-#define PHY_INIT3 0x01000
-#define PHY_INIT4 0x0200
-#define PHY_INIT5 0x0004
-#define PHY_INIT6 0x02000
-#define PHY_GIGABIT 0x0100
-
-#define PHY_TIMEOUT 0x1
-#define PHY_ERROR 0x2
-
-#define PHY_100 0x1
-#define PHY_1000 0x2
-#define PHY_HALF 0x100
-
-#define NV_PAUSEFRAME_RX_CAPABLE 0x0001
-#define NV_PAUSEFRAME_TX_CAPABLE 0x0002
-#define NV_PAUSEFRAME_RX_ENABLE 0x0004
-#define NV_PAUSEFRAME_TX_ENABLE 0x0008
-#define NV_PAUSEFRAME_RX_REQ 0x0010
-#define NV_PAUSEFRAME_TX_REQ 0x0020
-#define NV_PAUSEFRAME_AUTONEG 0x0040
-
-/* MSI/MSI-X defines */
-#define NV_MSI_X_MAX_VECTORS 8
-#define NV_MSI_X_VECTORS_MASK 0x000f
-#define NV_MSI_CAPABLE 0x0010
-#define NV_MSI_X_CAPABLE 0x0020
-#define NV_MSI_ENABLED 0x0040
-#define NV_MSI_X_ENABLED 0x0080
-
-#define NV_MSI_X_VECTOR_ALL 0x0
-#define NV_MSI_X_VECTOR_RX 0x0
-#define NV_MSI_X_VECTOR_TX 0x1
-#define NV_MSI_X_VECTOR_OTHER 0x2
-
-/* statistics */
-struct nv_ethtool_str {
- char name[ETH_GSTRING_LEN];
-};
-
-static const struct nv_ethtool_str nv_estats_str[] = {
- { "tx_bytes" },
- { "tx_zero_rexmt" },
- { "tx_one_rexmt" },
- { "tx_many_rexmt" },
- { "tx_late_collision" },
- { "tx_fifo_errors" },
- { "tx_carrier_errors" },
- { "tx_excess_deferral" },
- { "tx_retry_error" },
- { "tx_deferral" },
- { "tx_packets" },
- { "tx_pause" },
- { "rx_frame_error" },
- { "rx_extra_byte" },
- { "rx_late_collision" },
- { "rx_runt" },
- { "rx_frame_too_long" },
- { "rx_over_errors" },
- { "rx_crc_errors" },
- { "rx_frame_align_error" },
- { "rx_length_error" },
- { "rx_unicast" },
- { "rx_multicast" },
- { "rx_broadcast" },
- { "rx_bytes" },
- { "rx_pause" },
- { "rx_drop_frame" },
- { "rx_packets" },
- { "rx_errors_total" }
-};
-
-struct nv_ethtool_stats {
- u64 tx_bytes;
- u64 tx_zero_rexmt;
- u64 tx_one_rexmt;
- u64 tx_many_rexmt;
- u64 tx_late_collision;
- u64 tx_fifo_errors;
- u64 tx_carrier_errors;
- u64 tx_excess_deferral;
- u64 tx_retry_error;
- u64 tx_deferral;
- u64 tx_packets;
- u64 tx_pause;
- u64 rx_frame_error;
- u64 rx_extra_byte;
- u64 rx_late_collision;
- u64 rx_runt;
- u64 rx_frame_too_long;
- u64 rx_over_errors;
- u64 rx_crc_errors;
- u64 rx_frame_align_error;
- u64 rx_length_error;
- u64 rx_unicast;
- u64 rx_multicast;
- u64 rx_broadcast;
- u64 rx_bytes;
- u64 rx_pause;
- u64 rx_drop_frame;
- u64 rx_packets;
- u64 rx_errors_total;
-};
-
-/* diagnostics */
-#define NV_TEST_COUNT_BASE 3
-#define NV_TEST_COUNT_EXTENDED 4
-
-static const struct nv_ethtool_str nv_etests_str[] = {
- { "link (online/offline)" },
- { "register (offline) " },
- { "interrupt (offline) " },
- { "loopback (offline) " }
-};
-
-struct register_test {
- __le32 reg;
- __le32 mask;
-};
-
-static const struct register_test nv_registers_test[] = {
- { NvRegUnknownSetupReg6, 0x01 },
- { NvRegMisc1, 0x03c },
- { NvRegOffloadConfig, 0x03ff },
- { NvRegMulticastAddrA, 0xffffffff },
- { NvRegTxWatermark, 0x0ff },
- { NvRegWakeUpFlags, 0x07777 },
- { 0,0 }
-};
-
-/*
- * SMP locking:
- * All hardware access under dev->priv->lock, except the performance
- * critical parts:
- * - rx is (pseudo-) lockless: it relies on the single-threading provided
- * by the arch code for interrupts.
- * - tx setup is lockless: it relies on netif_tx_lock. Actual submission
- * needs dev->priv->lock :-(
- * - set_multicast_list: preparation lockless, relies on netif_tx_lock.
- */
-
-/* in dev: base, irq */
-struct fe_priv {
- spinlock_t lock;
-
- /* General data:
- * Locking: spin_lock(&np->lock); */
- struct net_device_stats stats;
- struct nv_ethtool_stats estats;
- int in_shutdown;
- u32 linkspeed;
- int duplex;
- int autoneg;
- int fixed_mode;
- int phyaddr;
- int wolenabled;
- unsigned int phy_oui;
- unsigned int phy_model;
- u16 gigabit;
- int intr_test;
-
- /* General data: RO fields */
- dma_addr_t ring_addr;
- struct pci_dev *pci_dev;
- u32 orig_mac[2];
- u32 irqmask;
- u32 desc_ver;
- u32 txrxctl_bits;
- u32 vlanctl_bits;
- u32 driver_data;
- u32 register_size;
- int rx_csum;
-
- void __iomem *base;
-
- /* rx specific fields.
-	 * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- union ring_type rx_ring;
- unsigned int cur_rx, refill_rx;
- struct sk_buff **rx_skbuff;
- dma_addr_t *rx_dma;
- unsigned int rx_buf_sz;
- unsigned int pkt_limit;
- struct timer_list oom_kick;
- struct timer_list nic_poll;
- struct timer_list stats_poll;
- u32 nic_poll_irq;
- int rx_ring_size;
-
- /* media detection workaround.
-	 * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- int need_linktimer;
- unsigned long link_timeout;
- /*
- * tx specific fields.
- */
- union ring_type tx_ring;
- unsigned int next_tx, nic_tx;
- struct sk_buff **tx_skbuff;
- dma_addr_t *tx_dma;
- unsigned int *tx_dma_len;
- u32 tx_flags;
- int tx_ring_size;
- int tx_limit_start;
- int tx_limit_stop;
-
- /* vlan fields */
- struct vlan_group *vlangrp;
-
- /* msi/msi-x fields */
- u32 msi_flags;
- struct msix_entry msi_x_entry[NV_MSI_X_MAX_VECTORS];
-
- /* flow control */
- u32 pause_flags;
-
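-	/* EtherCAT device handle: non-NULL when the NIC is operated as an
-	 * EtherCAT device; the netdev-specific paths are then bypassed. */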
- ec_device_t *ecdev;
-};
-
-/*
- * Maximum number of loops until we assume that a bit in the irq mask
- * is stuck. Overridable with module param.
- */
-static int max_interrupt_work = 5;
-
-/*
- * Optimization can be either throughput mode or CPU mode
- *
- * Throughput Mode: Every tx and rx packet will generate an interrupt.
- * CPU Mode: Interrupts are controlled by a timer.
- */
-enum {
- NV_OPTIMIZATION_MODE_THROUGHPUT,
- NV_OPTIMIZATION_MODE_CPU
-};
-static int optimization_mode = NV_OPTIMIZATION_MODE_THROUGHPUT;
-
-/*
- * Poll interval for timer irq
- *
- * This interval determines how frequent an interrupt is generated.
- * This value is determined by [(time_in_micro_secs * 100) / (2^10)]
- * Min = 0, and Max = 65535
- */
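-/* Example: one unit corresponds to 2^10/100 = 10.24 us, so a value of 970
- * yields roughly 9.9 ms between timer interrupts (about 100 per second). */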
-static int poll_interval = -1;
-
-/*
- * MSI interrupts
- */
-enum {
- NV_MSI_INT_DISABLED,
- NV_MSI_INT_ENABLED
-};
-static int msi = NV_MSI_INT_ENABLED;
-
-/*
- * MSIX interrupts
- */
-enum {
- NV_MSIX_INT_DISABLED,
- NV_MSIX_INT_ENABLED
-};
-static int msix = NV_MSIX_INT_ENABLED;
-
-/*
- * DMA 64bit
- */
-enum {
- NV_DMA_64BIT_DISABLED,
- NV_DMA_64BIT_ENABLED
-};
-static int dma_64bit = NV_DMA_64BIT_ENABLED;
-
-static int board_idx = -1;
-
-static inline struct fe_priv *get_nvpriv(struct net_device *dev)
-{
- return netdev_priv(dev);
-}
-
-static inline u8 __iomem *get_hwbase(struct net_device *dev)
-{
- return ((struct fe_priv *)netdev_priv(dev))->base;
-}
-
-static inline void pci_push(u8 __iomem *base)
-{
- /* force out pending posted writes */
- readl(base);
-}
-
-static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v)
-{
- return le32_to_cpu(prd->flaglen)
- & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2);
-}
-
-static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v)
-{
- return le32_to_cpu(prd->flaglen) & LEN_MASK_V2;
-}
-
-static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target,
- int delay, int delaymax, const char *msg)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- pci_push(base);
- do {
- udelay(delay);
- delaymax -= delay;
- if (delaymax < 0) {
- if (msg)
- printk(msg);
- return 1;
- }
- } while ((readl(base + offset) & mask) != target);
- return 0;
-}
-
-#define NV_SETUP_RX_RING 0x01
-#define NV_SETUP_TX_RING 0x02
-
-static void setup_hw_rings(struct net_device *dev, int rxtx_flags)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr);
- }
- } else {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr) >> 32), base + NvRegRxRingPhysAddrHigh);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)) >> 32), base + NvRegTxRingPhysAddrHigh);
- }
- }
-}
-
-static void free_rings(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (np->rx_ring.orig)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size),
- np->rx_ring.orig, np->ring_addr);
- } else {
- if (np->rx_ring.ex)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size),
- np->rx_ring.ex, np->ring_addr);
- }
- if (np->rx_skbuff)
- kfree(np->rx_skbuff);
- if (np->rx_dma)
- kfree(np->rx_dma);
- if (np->tx_skbuff)
- kfree(np->tx_skbuff);
- if (np->tx_dma)
- kfree(np->tx_dma);
- if (np->tx_dma_len)
- kfree(np->tx_dma_len);
-}
-
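-/* Returns non-zero only if MSI-X is enabled with more than one vector, i.e.
- * rx, tx and link events are delivered via separate interrupts. */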
-static int using_multi_irqs(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!(np->msi_flags & NV_MSI_X_ENABLED) ||
- ((np->msi_flags & NV_MSI_X_ENABLED) &&
- ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1)))
- return 0;
- else
- return 1;
-}
-
-static void nv_enable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-static void nv_disable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-/* In MSIX mode, a write to irqmask behaves as XOR */
-static void nv_enable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- writel(mask, base + NvRegIrqMask);
-}
-
-static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- writel(mask, base + NvRegIrqMask);
- } else {
- if (np->msi_flags & NV_MSI_ENABLED)
- writel(0, base + NvRegMSIIrqMask);
- writel(0, base + NvRegIrqMask);
- }
-}
-
-#define MII_READ (-1)
-/* mii_rw: read/write a register on the PHY.
- *
- * Caller must guarantee serialization
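- *
- * Pass MII_READ as value to perform a read: the return value is then the
- * register contents, or -1 on error/timeout. For a write, the return value
- * is 0 on success and -1 on timeout.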
- */
-static int mii_rw(struct net_device *dev, int addr, int miireg, int value)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 reg;
- int retval;
-
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
-
- reg = readl(base + NvRegMIIControl);
- if (reg & NVREG_MIICTL_INUSE) {
- writel(NVREG_MIICTL_INUSE, base + NvRegMIIControl);
- udelay(NV_MIIBUSY_DELAY);
- }
-
- reg = (addr << NVREG_MIICTL_ADDRSHIFT) | miireg;
- if (value != MII_READ) {
- writel(value, base + NvRegMIIData);
- reg |= NVREG_MIICTL_WRITE;
- }
- writel(reg, base + NvRegMIIControl);
-
- if (reg_delay(dev, NvRegMIIControl, NVREG_MIICTL_INUSE, 0,
- NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d timed out.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else if (value != MII_READ) {
- /* it was a write operation - fewer failures are detectable */
- dprintk(KERN_DEBUG "%s: mii_rw wrote 0x%x to reg %d at PHY %d\n",
- dev->name, value, miireg, addr);
- retval = 0;
- } else if (readl(base + NvRegMIIStatus) & NVREG_MIISTAT_ERROR) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d failed.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else {
- retval = readl(base + NvRegMIIData);
- dprintk(KERN_DEBUG "%s: mii_rw read from reg %d at PHY %d: 0x%x.\n",
- dev->name, miireg, addr, retval);
- }
-
- return retval;
-}
-
-static int phy_reset(struct net_device *dev, u32 bmcr_setup)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 miicontrol;
- unsigned int tries = 0;
-
- miicontrol = BMCR_RESET | bmcr_setup;
- if (mii_rw(dev, np->phyaddr, MII_BMCR, miicontrol)) {
- return -1;
- }
-
- /* wait for 500ms */
- msleep(500);
-
- /* must wait till reset is deasserted */
- while (miicontrol & BMCR_RESET) {
- msleep(10);
- miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- /* FIXME: 100 tries seem excessive */
- if (tries++ > 100)
- return -1;
- }
- return 0;
-}
-
-static int phy_init(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000,reg;
-
- /* phy errata for E3016 phy */
- if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
- reg = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
- reg &= ~PHY_MARVELL_E3016_INITMASK;
- if (mii_rw(dev, np->phyaddr, MII_NCONFIG, reg)) {
- printk(KERN_INFO "%s: phy write to errata reg failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
-
- /* set advertise register */
- reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL|ADVERTISE_PAUSE_ASYM|ADVERTISE_PAUSE_CAP);
- if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) {
- printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* get phy interface type */
- phyinterface = readl(base + NvRegPhyInterface);
-
- /* see if gigabit phy */
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- if (mii_status & PHY_GIGABIT) {
- np->gigabit = PHY_GIGABIT;
- mii_control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- mii_control_1000 &= ~ADVERTISE_1000HALF;
- if (phyinterface & PHY_RGMII)
- mii_control_1000 |= ADVERTISE_1000FULL;
- else
- mii_control_1000 &= ~ADVERTISE_1000FULL;
-
- if (mii_rw(dev, np->phyaddr, MII_CTRL1000, mii_control_1000)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- else
- np->gigabit = 0;
-
- mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- mii_control |= BMCR_ANENABLE;
-
- /* reset the phy
-	 * (certain phys need bmcr to be set up with reset)
- */
- if (phy_reset(dev, mii_control)) {
- printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* phy vendor specific configuration */
- if ((np->phy_oui == PHY_OUI_CICADA) && (phyinterface & PHY_RGMII) ) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_RESV1, MII_READ);
- phy_reserved &= ~(PHY_INIT1 | PHY_INIT2);
- phy_reserved |= (PHY_INIT3 | PHY_INIT4);
- if (mii_rw(dev, np->phyaddr, MII_RESV1, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- phy_reserved = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
- phy_reserved |= PHY_INIT5;
- if (mii_rw(dev, np->phyaddr, MII_NCONFIG, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- if (np->phy_oui == PHY_OUI_CICADA) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_SREVISION, MII_READ);
- phy_reserved |= PHY_INIT6;
- if (mii_rw(dev, np->phyaddr, MII_SREVISION, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
-	/* some phys clear out pause advertisement on reset, set it back */
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg);
-
- /* restart auto negotiation */
- mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE);
- if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) {
- return PHY_ERROR;
- }
-
- return 0;
-}
-
-static void nv_start_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_rx\n", dev->name);
- /* Already running? Stop it. */
- if (readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) {
- writel(0, base + NvRegReceiverControl);
- pci_push(base);
- }
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
- writel(NVREG_RCVCTL_START, base + NvRegReceiverControl);
- dprintk(KERN_DEBUG "%s: nv_start_rx to duplex %d, speed 0x%08x.\n",
- dev->name, np->duplex, np->linkspeed);
- pci_push(base);
-}
-
-static void nv_stop_rx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_rx\n", dev->name);
- writel(0, base + NvRegReceiverControl);
- reg_delay(dev, NvRegReceiverStatus, NVREG_RCVSTAT_BUSY, 0,
- NV_RXSTOP_DELAY1, NV_RXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_rx: ReceiverStatus remained busy");
-
- udelay(NV_RXSTOP_DELAY2);
- writel(0, base + NvRegLinkSpeed);
-}
-
-static void nv_start_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_tx\n", dev->name);
- writel(NVREG_XMITCTL_START, base + NvRegTransmitterControl);
- pci_push(base);
-}
-
-static void nv_stop_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_tx\n", dev->name);
- writel(0, base + NvRegTransmitterControl);
- reg_delay(dev, NvRegTransmitterStatus, NVREG_XMITSTAT_BUSY, 0,
- NV_TXSTOP_DELAY1, NV_TXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_tx: TransmitterStatus remained busy");
-
- udelay(NV_TXSTOP_DELAY2);
- writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
-}
-
-static void nv_txrx_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_txrx_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- udelay(NV_TXRX_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-static void nv_mac_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_mac_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- writel(NVREG_MAC_RESET_ASSERT, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(0, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-/*
- * nv_get_stats: dev->get_stats function
- * Get latest stats value from the nic.
- * Called with read_lock(&dev_base_lock) held for read -
- * only synchronized against unregister_netdevice.
- */
-static struct net_device_stats *nv_get_stats(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- /* It seems that the nic always generates interrupts and doesn't
- * accumulate errors internally. Thus the current values in np->stats
- * are already up to date.
- */
- return &np->stats;
-}
-
-/*
- * nv_alloc_rx: fill rx ring entries.
- * Return 1 if the allocations for the skbs failed and the
- * rx engine is without Available descriptors
- */
-static int nv_alloc_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int refill_rx = np->refill_rx;
- int nr;
-
- while (np->cur_rx != refill_rx) {
- struct sk_buff *skb;
-
- nr = refill_rx % np->rx_ring_size;
- if (np->rx_skbuff[nr] == NULL) {
-
- skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
- if (!skb)
- break;
-
- skb->dev = dev;
- np->rx_skbuff[nr] = skb;
- } else {
- skb = np->rx_skbuff[nr];
- }
- np->rx_dma[nr] = pci_map_single(np->pci_dev, skb->data,
- skb->end-skb->data, PCI_DMA_FROMDEVICE);
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig[nr].buf = cpu_to_le32(np->rx_dma[nr]);
- wmb();
- np->rx_ring.orig[nr].flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL);
- } else {
- np->rx_ring.ex[nr].bufhigh = cpu_to_le64(np->rx_dma[nr]) >> 32;
- np->rx_ring.ex[nr].buflow = cpu_to_le64(np->rx_dma[nr]) & 0x0FFFFFFFF;
- wmb();
- np->rx_ring.ex[nr].flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL);
- }
- dprintk(KERN_DEBUG "%s: nv_alloc_rx: Packet %d marked as Available\n",
- dev->name, refill_rx);
- refill_rx++;
- }
- np->refill_rx = refill_rx;
- if (np->cur_rx - refill_rx == np->rx_ring_size)
- return 1;
- return 0;
-}
-
-/* If rx bufs are exhausted, this is called after 50ms to attempt a refill */
-#ifdef CONFIG_FORCEDETH_NAPI
-static void nv_do_rx_refill(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
-
- /* Just reschedule NAPI rx processing */
- netif_rx_schedule(dev);
-}
-#else
-static void nv_do_rx_refill(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
-}
-#endif
-
-static void nv_init_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->cur_rx = np->rx_ring_size;
- np->refill_rx = 0;
- for (i = 0; i < np->rx_ring_size; i++)
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].flaglen = 0;
- else
- np->rx_ring.ex[i].flaglen = 0;
-}
-
-static void nv_init_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->next_tx = np->nic_tx = 0;
- for (i = 0; i < np->tx_ring_size; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].flaglen = 0;
- else
- np->tx_ring.ex[i].flaglen = 0;
- np->tx_skbuff[i] = NULL;
- np->tx_dma[i] = 0;
- }
-}
-
-static int nv_init_ring(struct net_device *dev)
-{
- nv_init_tx(dev);
- nv_init_rx(dev);
- return nv_alloc_rx(dev);
-}
-
-static int nv_release_txskb(struct net_device *dev, unsigned int skbnr)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- dprintk(KERN_INFO "%s: nv_release_txskb for skbnr %d\n",
- dev->name, skbnr);
-
- if (np->tx_dma[skbnr]) {
- pci_unmap_page(np->pci_dev, np->tx_dma[skbnr],
- np->tx_dma_len[skbnr],
- PCI_DMA_TODEVICE);
- np->tx_dma[skbnr] = 0;
- }
-
- if (np->tx_skbuff[skbnr]) {
- if (!np->ecdev) dev_kfree_skb_any(np->tx_skbuff[skbnr]);
- np->tx_skbuff[skbnr] = NULL;
- return 1;
- } else {
- return 0;
- }
-}
-
-static void nv_drain_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int i;
-
- for (i = 0; i < np->tx_ring_size; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].flaglen = 0;
- else
- np->tx_ring.ex[i].flaglen = 0;
- if (nv_release_txskb(dev, i))
- np->stats.tx_dropped++;
- }
-}
-
-static void nv_drain_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
- for (i = 0; i < np->rx_ring_size; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].flaglen = 0;
- else
- np->rx_ring.ex[i].flaglen = 0;
- wmb();
- if (np->rx_skbuff[i]) {
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
- if (!np->ecdev) dev_kfree_skb(np->rx_skbuff[i]);
- np->rx_skbuff[i] = NULL;
- }
- }
-}
-
-static void drain_ring(struct net_device *dev)
-{
- nv_drain_tx(dev);
- nv_drain_rx(dev);
-}
-
-/*
- * nv_start_xmit: dev->hard_start_xmit function
- * Called with netif_tx_lock held.
- */
-static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 tx_flags = 0;
- u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
- unsigned int fragments = skb_shinfo(skb)->nr_frags;
- unsigned int nr = (np->next_tx - 1) % np->tx_ring_size;
- unsigned int start_nr = np->next_tx % np->tx_ring_size;
- unsigned int i;
- u32 offset = 0;
- u32 bcnt;
- u32 size = skb->len-skb->data_len;
- u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- u32 tx_flags_vlan = 0;
-
- /* add fragments to entries count */
- for (i = 0; i < fragments; i++) {
- entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
- ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- }
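-	/* entries now holds the number of descriptors needed; each descriptor
-	 * covers at most NV_TX2_TSO_MAX_SIZE (16 KiB) of data. */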
-
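-	/* In EtherCAT mode (np->ecdev set) the netdev queue throttling and
-	 * locking below are bypassed. */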
- if (!np->ecdev) {
- spin_lock_irq(&np->lock);
-
- if ((np->next_tx - np->nic_tx + entries - 1) > np->tx_limit_stop) {
- spin_unlock_irq(&np->lock);
- netif_stop_queue(dev);
- return NETDEV_TX_BUSY;
- }
- }
-
- /* setup the header buffer */
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % np->tx_ring_size;
-
- np->tx_dma[nr] = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].buf = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].bufhigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].buflow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- tx_flags = np->tx_flags;
- offset += bcnt;
- size -= bcnt;
- } while (size);
-
- /* setup the fragments */
- for (i = 0; i < fragments; i++) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- u32 size = frag->size;
- offset = 0;
-
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % np->tx_ring_size;
-
- np->tx_dma[nr] = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].buf = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].bufhigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].buflow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- offset += bcnt;
- size -= bcnt;
- } while (size);
- }
-
- /* set last fragment flag */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].flaglen |= cpu_to_le32(tx_flags_extra);
- } else {
- np->tx_ring.ex[nr].flaglen |= cpu_to_le32(tx_flags_extra);
- }
-
- np->tx_skbuff[nr] = skb;
-
-#ifdef NETIF_F_TSO
- if (skb_is_gso(skb))
- tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT);
- else
-#endif
- tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ?
- NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0;
-
- /* vlan tag */
- if (np->vlangrp && vlan_tx_tag_present(skb)) {
- tx_flags_vlan = NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb);
- }
-
- /* set tx flags */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[start_nr].flaglen |= cpu_to_le32(tx_flags | tx_flags_extra);
- } else {
- np->tx_ring.ex[start_nr].txvlan = cpu_to_le32(tx_flags_vlan);
- np->tx_ring.ex[start_nr].flaglen |= cpu_to_le32(tx_flags | tx_flags_extra);
- }
-
- dprintk(KERN_DEBUG "%s: nv_start_xmit: packet %d (entries %d) queued for transmission. tx_flags_extra: %x\n",
- dev->name, np->next_tx, entries, tx_flags_extra);
- {
- int j;
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)skb->data)[j]);
- }
- dprintk("\n");
- }
-
- np->next_tx += entries;
-
- dev->trans_start = jiffies;
- if (!np->ecdev) spin_unlock_irq(&np->lock);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(get_hwbase(dev));
- return NETDEV_TX_OK;
-}
-
-/*
- * nv_tx_done: check for completed packets, release the skbs.
- *
- * Caller must own np->lock.
- */
-static void nv_tx_done(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 flags;
- unsigned int i;
- struct sk_buff *skb;
-
- while (np->nic_tx != np->next_tx) {
- i = np->nic_tx % np->tx_ring_size;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- flags = le32_to_cpu(np->tx_ring.orig[i].flaglen);
- else
- flags = le32_to_cpu(np->tx_ring.ex[i].flaglen);
-
- dprintk(KERN_DEBUG "%s: nv_tx_done: looking at packet %d, flags 0x%x.\n",
- dev->name, np->nic_tx, flags);
- if (flags & NV_TX_VALID)
- break;
- if (np->desc_ver == DESC_VER_1) {
- if (flags & NV_TX_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (flags & (NV_TX_RETRYERROR|NV_TX_CARRIERLOST|NV_TX_LATECOLLISION|
- NV_TX_UNDERFLOW|NV_TX_ERROR)) {
- if (flags & NV_TX_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (flags & NV_TX_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- } else {
- if (flags & NV_TX2_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (flags & (NV_TX2_RETRYERROR|NV_TX2_CARRIERLOST|NV_TX2_LATECOLLISION|
- NV_TX2_UNDERFLOW|NV_TX2_ERROR)) {
- if (flags & NV_TX2_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (flags & NV_TX2_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- }
- nv_release_txskb(dev, i);
- np->nic_tx++;
- }
- if (!np->ecdev && np->next_tx - np->nic_tx < np->tx_limit_start)
- netif_wake_queue(dev);
-}
-
-/*
- * nv_tx_timeout: dev->tx_timeout function
- * Called with netif_tx_lock held.
- */
-static void nv_tx_timeout(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 status;
-
- if (np->msi_flags & NV_MSI_X_ENABLED)
- status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- else
- status = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
-
- printk(KERN_INFO "%s: Got tx_timeout. irq: %08x\n", dev->name, status);
-
- {
- int i;
-
- printk(KERN_INFO "%s: Ring at %lx: next %d nic %d\n",
- dev->name, (unsigned long)np->ring_addr,
- np->next_tx, np->nic_tx);
- printk(KERN_INFO "%s: Dumping tx registers\n", dev->name);
- for (i=0;i<=np->register_size;i+= 32) {
- printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
- i,
- readl(base + i + 0), readl(base + i + 4),
- readl(base + i + 8), readl(base + i + 12),
- readl(base + i + 16), readl(base + i + 20),
- readl(base + i + 24), readl(base + i + 28));
- }
- printk(KERN_INFO "%s: Dumping tx ring\n", dev->name);
- for (i=0;i<np->tx_ring_size;i+= 4) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.orig[i].buf),
- le32_to_cpu(np->tx_ring.orig[i].flaglen),
- le32_to_cpu(np->tx_ring.orig[i+1].buf),
- le32_to_cpu(np->tx_ring.orig[i+1].flaglen),
- le32_to_cpu(np->tx_ring.orig[i+2].buf),
- le32_to_cpu(np->tx_ring.orig[i+2].flaglen),
- le32_to_cpu(np->tx_ring.orig[i+3].buf),
- le32_to_cpu(np->tx_ring.orig[i+3].flaglen));
- } else {
- printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.ex[i].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i].buflow),
- le32_to_cpu(np->tx_ring.ex[i].flaglen),
- le32_to_cpu(np->tx_ring.ex[i+1].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i+1].buflow),
- le32_to_cpu(np->tx_ring.ex[i+1].flaglen),
- le32_to_cpu(np->tx_ring.ex[i+2].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i+2].buflow),
- le32_to_cpu(np->tx_ring.ex[i+2].flaglen),
- le32_to_cpu(np->tx_ring.ex[i+3].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i+3].buflow),
- le32_to_cpu(np->tx_ring.ex[i+3].flaglen));
- }
- }
- }
-
- if (!np->ecdev) spin_lock_irq(&np->lock);
-
- /* 1) stop tx engine */
- nv_stop_tx(dev);
-
- /* 2) check that the packets were not sent already: */
- nv_tx_done(dev);
-
- /* 3) if there are dead entries: clear everything */
- if (np->next_tx != np->nic_tx) {
- printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name);
- nv_drain_tx(dev);
- np->next_tx = np->nic_tx = 0;
- setup_hw_rings(dev, NV_SETUP_TX_RING);
- if (!np->ecdev) netif_wake_queue(dev);
- }
-
- /* 4) restart tx engine */
- nv_start_tx(dev);
- if (!np->ecdev) spin_unlock_irq(&np->lock);
-}
-
-/*
- * Called when the nic notices a mismatch between the actual data len on the
- * wire and the len indicated in the 802 header
- */
-static int nv_getlen(struct net_device *dev, void *packet, int datalen)
-{
- int hdrlen; /* length of the 802 header */
- int protolen; /* length as stored in the proto field */
-
- /* 1) calculate len according to header */
- if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == htons(ETH_P_8021Q)) {
- protolen = ntohs( ((struct vlan_ethhdr *)packet)->h_vlan_encapsulated_proto );
- hdrlen = VLAN_HLEN;
- } else {
- protolen = ntohs( ((struct ethhdr *)packet)->h_proto);
- hdrlen = ETH_HLEN;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: datalen %d, protolen %d, hdrlen %d\n",
- dev->name, datalen, protolen, hdrlen);
- if (protolen > ETH_DATA_LEN)
- return datalen; /* Value in proto field not a len, no checks possible */
-
- protolen += hdrlen;
- /* consistency checks: */
- if (datalen > ETH_ZLEN) {
- if (datalen >= protolen) {
-			/* more data on wire than in 802 header, trim off
- * additional data.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, protolen);
- return protolen;
- } else {
- /* less data on wire than mentioned in header.
- * Discard the packet.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding long packet.\n",
- dev->name);
- return -1;
- }
- } else {
- /* short packet. Accept only if 802 values are also short */
- if (protolen > ETH_ZLEN) {
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding short packet.\n",
- dev->name);
- return -1;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, datalen);
- return datalen;
- }
-}
-
-static int nv_rx_process(struct net_device *dev, int limit)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 flags;
- u32 vlanflags = 0;
- int count;
-
- for (count = 0; count < limit; ++count) {
- struct sk_buff *skb;
- int len;
- int i;
- if (np->cur_rx - np->refill_rx >= np->rx_ring_size)
- break; /* we scanned the whole ring - do not continue */
-
- i = np->cur_rx % np->rx_ring_size;
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- flags = le32_to_cpu(np->rx_ring.orig[i].flaglen);
- len = nv_descr_getlength(&np->rx_ring.orig[i], np->desc_ver);
- } else {
- flags = le32_to_cpu(np->rx_ring.ex[i].flaglen);
- len = nv_descr_getlength_ex(&np->rx_ring.ex[i], np->desc_ver);
- vlanflags = le32_to_cpu(np->rx_ring.ex[i].buflow);
- }
-
- dprintk(KERN_DEBUG "%s: nv_rx_process: looking at packet %d, flags 0x%x.\n",
- dev->name, np->cur_rx, flags);
-
- if (flags & NV_RX_AVAIL)
- break; /* still owned by hardware, */
-
- /*
- * the packet is for us - immediately tear down the pci mapping.
- * TODO: check if a prefetch of the first cacheline improves
- * the performance.
- */
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
-
- {
- int j;
- dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",flags);
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)np->rx_skbuff[i]->data)[j]);
- }
- dprintk("\n");
- }
- /* look at what we actually got: */
- if (np->desc_ver == DESC_VER_1) {
- if (!(flags & NV_RX_DESCRIPTORVALID))
- goto next_pkt;
-
- if (flags & NV_RX_ERROR) {
- if (flags & NV_RX_MISSEDFRAME) {
- np->stats.rx_missed_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & (NV_RX_ERROR1|NV_RX_ERROR2|NV_RX_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors. */
- if (flags & NV_RX_FRAMINGERR) {
- if (flags & NV_RX_SUBSTRACT1) {
- len--;
- }
- }
- }
- } else {
- if (!(flags & NV_RX2_DESCRIPTORVALID))
- goto next_pkt;
-
- if (flags & NV_RX2_ERROR) {
- if (flags & (NV_RX2_ERROR1|NV_RX2_ERROR2|NV_RX2_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX2_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX2_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX2_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors */
- if (flags & NV_RX2_FRAMINGERR) {
- if (flags & NV_RX2_SUBSTRACT1) {
- len--;
- }
- }
- }
- if (np->rx_csum) {
- flags &= NV_RX2_CHECKSUMMASK;
- if (flags == NV_RX2_CHECKSUMOK1 ||
- flags == NV_RX2_CHECKSUMOK2 ||
- flags == NV_RX2_CHECKSUMOK3) {
-					dprintk(KERN_DEBUG "%s: hw checksum hit!\n", dev->name);
- np->rx_skbuff[i]->ip_summed = CHECKSUM_UNNECESSARY;
- } else {
-					dprintk(KERN_DEBUG "%s: hw checksum miss!\n", dev->name);
- }
- }
- }
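-		/* EtherCAT: if the device is claimed by an EtherCAT master,
-		 * hand the received frame data to the master directly
-		 * instead of building an skb for the network stack. */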
- if (np->ecdev) {
- ecdev_receive(np->ecdev, np->rx_skbuff[i]->data, len);
- }
- else {
- /* got a valid packet - forward it to the network core */
- skb = np->rx_skbuff[i];
- np->rx_skbuff[i] = NULL;
-
- skb_put(skb, len);
- skb->protocol = eth_type_trans(skb, dev);
- dprintk(KERN_DEBUG "%s: nv_rx_process: packet %d with %d bytes, proto %d accepted.\n",
- dev->name, np->cur_rx, len, skb->protocol);
-#ifdef CONFIG_FORCEDETH_NAPI
- if (np->vlangrp && (vlanflags & NV_RX3_VLAN_TAG_PRESENT))
- vlan_hwaccel_receive_skb(skb, np->vlangrp,
- vlanflags & NV_RX3_VLAN_TAG_MASK);
- else
- netif_receive_skb(skb);
-#else
- if (np->vlangrp && (vlanflags & NV_RX3_VLAN_TAG_PRESENT))
- vlan_hwaccel_rx(skb, np->vlangrp,
- vlanflags & NV_RX3_VLAN_TAG_MASK);
- else
- netif_rx(skb);
-#endif
- }
- dev->last_rx = jiffies;
- np->stats.rx_packets++;
- np->stats.rx_bytes += len;
-next_pkt:
- np->cur_rx++;
- }
-
- return count;
-}
-
-static void set_bufsize(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (dev->mtu <= ETH_DATA_LEN)
- np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS;
- else
- np->rx_buf_sz = dev->mtu + NV_RX_HEADERS;
-}
-
-/*
- * nv_change_mtu: dev->change_mtu function
- * Called with dev_base_lock held for read.
- */
-static int nv_change_mtu(struct net_device *dev, int new_mtu)
-{
- struct fe_priv *np = netdev_priv(dev);
- int old_mtu;
-
- if (new_mtu < 64 || new_mtu > np->pkt_limit)
- return -EINVAL;
-
- old_mtu = dev->mtu;
- dev->mtu = new_mtu;
-
- /* return early if the buffer sizes will not change */
- if (old_mtu <= ETH_DATA_LEN && new_mtu <= ETH_DATA_LEN)
- return 0;
- if (old_mtu == new_mtu)
- return 0;
-
- /* synchronized against open : rtnl_lock() held by caller */
- if (netif_running(dev)) {
- u8 __iomem *base = get_hwbase(dev);
- /*
- * It seems that the nic preloads valid ring entries into an
-		 * internal buffer. The procedure for flushing everything is
-		 * guessed; there is probably a simpler approach.
-		 * Changing the MTU is a rare event, so it shouldn't matter.
- */
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- /* reinit driver view of the rx queue */
- set_bufsize(dev);
- if (nv_init_ring(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
-
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- nv_enable_irq(dev);
- }
- return 0;
-}
-
-static void nv_copy_mac_to_hw(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 mac[2];
-
- mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
- (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
- mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
-
- writel(mac[0], base + NvRegMacAddrA);
- writel(mac[1], base + NvRegMacAddrB);
-}
-
-/*
- * nv_set_mac_address: dev->set_mac_address function
- * Called with rtnl_lock() held.
- */
-static int nv_set_mac_address(struct net_device *dev, void *addr)
-{
- struct fe_priv *np = netdev_priv(dev);
- struct sockaddr *macaddr = (struct sockaddr*)addr;
-
- if (!is_valid_ether_addr(macaddr->sa_data))
- return -EADDRNOTAVAIL;
-
- /* synchronized against open : rtnl_lock() held by caller */
- memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN);
-
- if (netif_running(dev)) {
- netif_tx_lock_bh(dev);
- spin_lock_irq(&np->lock);
-
- /* stop rx engine */
- nv_stop_rx(dev);
-
- /* set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* restart rx engine */
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
- netif_tx_unlock_bh(dev);
- } else {
- nv_copy_mac_to_hw(dev);
- }
- return 0;
-}
-
-/*
- * nv_set_multicast: dev->set_multicast function
- * Called with netif_tx_lock held.
- */
-static void nv_set_multicast(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 addr[2];
- u32 mask[2];
- u32 pff = readl(base + NvRegPacketFilterFlags) & NVREG_PFF_PAUSE_RX;
-
- memset(addr, 0, sizeof(addr));
- memset(mask, 0, sizeof(mask));
-
- if (dev->flags & IFF_PROMISC) {
- pff |= NVREG_PFF_PROMISC;
- } else {
- pff |= NVREG_PFF_MYADDR;
-
- if (dev->flags & IFF_ALLMULTI || dev->mc_list) {
- u32 alwaysOff[2];
- u32 alwaysOn[2];
-
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0xffffffff;
- if (dev->flags & IFF_ALLMULTI) {
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0;
- } else {
- struct dev_mc_list *walk;
-
- walk = dev->mc_list;
- while (walk != NULL) {
- u32 a, b;
- a = le32_to_cpu(*(u32 *) walk->dmi_addr);
- b = le16_to_cpu(*(u16 *) (&walk->dmi_addr[4]));
- alwaysOn[0] &= a;
- alwaysOff[0] &= ~a;
- alwaysOn[1] &= b;
- alwaysOff[1] &= ~b;
- walk = walk->next;
- }
- }
- addr[0] = alwaysOn[0];
- addr[1] = alwaysOn[1];
- mask[0] = alwaysOn[0] | alwaysOff[0];
- mask[1] = alwaysOn[1] | alwaysOff[1];
- }
- }
- addr[0] |= NVREG_MCASTADDRA_FORCE;
- pff |= NVREG_PFF_ALWAYS;
- spin_lock_irq(&np->lock);
- nv_stop_rx(dev);
- writel(addr[0], base + NvRegMulticastAddrA);
- writel(addr[1], base + NvRegMulticastAddrB);
- writel(mask[0], base + NvRegMulticastMaskA);
- writel(mask[1], base + NvRegMulticastMaskB);
- writel(pff, base + NvRegPacketFilterFlags);
- dprintk(KERN_INFO "%s: reconfiguration for multicast lists.\n",
- dev->name);
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
-}
-
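-/* Apply the requested pause frame settings to the hardware, limited by the
- * device's rx/tx pause capabilities. */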
-static void nv_update_pause(struct net_device *dev, u32 pause_flags)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- np->pause_flags &= ~(NV_PAUSEFRAME_TX_ENABLE | NV_PAUSEFRAME_RX_ENABLE);
-
- if (np->pause_flags & NV_PAUSEFRAME_RX_CAPABLE) {
- u32 pff = readl(base + NvRegPacketFilterFlags) & ~NVREG_PFF_PAUSE_RX;
- if (pause_flags & NV_PAUSEFRAME_RX_ENABLE) {
- writel(pff|NVREG_PFF_PAUSE_RX, base + NvRegPacketFilterFlags);
- np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- } else {
- writel(pff, base + NvRegPacketFilterFlags);
- }
- }
- if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE) {
- u32 regmisc = readl(base + NvRegMisc1) & ~NVREG_MISC1_PAUSE_TX;
- if (pause_flags & NV_PAUSEFRAME_TX_ENABLE) {
- writel(NVREG_TX_PAUSEFRAME_ENABLE, base + NvRegTxPauseFrame);
- writel(regmisc|NVREG_MISC1_PAUSE_TX, base + NvRegMisc1);
- np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- } else {
- writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame);
- writel(regmisc, base + NvRegMisc1);
- }
- }
-}
-
-/**
- * nv_update_linkspeed: Setup the MAC according to the link partner
- * @dev: Network device to be configured
- *
- * The function queries the PHY and checks if there is a link partner.
- * If yes, then it sets up the MAC accordingly. Otherwise, the MAC is
- * set to 10 MBit HD.
- *
- * The function returns 0 if there is no link partner and 1 if there is
- * a good link partner.
- */
-static int nv_update_linkspeed(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int adv = 0;
- int lpa = 0;
- int adv_lpa, adv_pause, lpa_pause;
- int newls = np->linkspeed;
- int newdup = np->duplex;
- int mii_status;
- int retval = 0;
- u32 control_1000, status_1000, phyreg, pause_flags, txreg;
-
- /* BMSR_LSTATUS is latched, read it twice:
- * we want the current value.
- */
- mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
-
- if (!(mii_status & BMSR_LSTATUS)) {
- dprintk(KERN_DEBUG "%s: no link detected by phy - falling back to 10HD.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- goto set_speed;
- }
-
- if (np->autoneg == 0) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: autoneg off, PHY set to 0x%04x.\n",
- dev->name, np->fixed_mode);
- if (np->fixed_mode & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (np->fixed_mode & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (np->fixed_mode & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
- retval = 1;
- goto set_speed;
- }
- /* check auto negotiation is complete */
- if (!(mii_status & BMSR_ANEGCOMPLETE)) {
- /* still in autonegotiation - configure nic for 10 MBit HD and wait. */
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- dprintk(KERN_DEBUG "%s: autoneg not completed - falling back to 10HD.\n", dev->name);
- goto set_speed;
- }
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ);
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: PHY advertises 0x%04x, lpa 0x%04x.\n",
- dev->name, adv, lpa);
-
- retval = 1;
- if (np->gigabit == PHY_GIGABIT) {
- control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- status_1000 = mii_rw(dev, np->phyaddr, MII_STAT1000, MII_READ);
-
- if ((control_1000 & ADVERTISE_1000FULL) &&
- (status_1000 & LPA_1000FULL)) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: GBit ethernet detected.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_1000;
- newdup = 1;
- goto set_speed;
- }
- }
-
- /* FIXME: handle parallel detection properly */
- adv_lpa = lpa & adv;
- if (adv_lpa & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (adv_lpa & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (adv_lpa & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else if (adv_lpa & LPA_10HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- } else {
- dprintk(KERN_DEBUG "%s: bad ability %04x - falling back to 10HD.\n", dev->name, adv_lpa);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
-
-set_speed:
- if (np->duplex == newdup && np->linkspeed == newls)
- return retval;
-
- dprintk(KERN_INFO "%s: changing link setting from %d/%d to %d/%d.\n",
- dev->name, np->linkspeed, np->duplex, newls, newdup);
-
- np->duplex = newdup;
- np->linkspeed = newls;
-
- if (np->gigabit == PHY_GIGABIT) {
- phyreg = readl(base + NvRegRandomSeed);
- phyreg &= ~(0x3FF00);
- if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10)
- phyreg |= NVREG_RNDSEED_FORCE3;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100)
- phyreg |= NVREG_RNDSEED_FORCE2;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_1000)
- phyreg |= NVREG_RNDSEED_FORCE;
- writel(phyreg, base + NvRegRandomSeed);
- }
-
- phyreg = readl(base + NvRegPhyInterface);
- phyreg &= ~(PHY_HALF|PHY_100|PHY_1000);
- if (np->duplex == 0)
- phyreg |= PHY_HALF;
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_100)
- phyreg |= PHY_100;
- else if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- phyreg |= PHY_1000;
- writel(phyreg, base + NvRegPhyInterface);
-
- if (phyreg & PHY_RGMII) {
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- txreg = NVREG_TX_DEFERRAL_RGMII_1000;
- else
- txreg = NVREG_TX_DEFERRAL_RGMII_10_100;
- } else {
- txreg = NVREG_TX_DEFERRAL_DEFAULT;
- }
- writel(txreg, base + NvRegTxDeferral);
-
- if (np->desc_ver == DESC_VER_1) {
- txreg = NVREG_TX_WM_DESC1_DEFAULT;
- } else {
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- txreg = NVREG_TX_WM_DESC2_3_1000;
- else
- txreg = NVREG_TX_WM_DESC2_3_DEFAULT;
- }
- writel(txreg, base + NvRegTxWatermark);
-
- writel(NVREG_MISC1_FORCE | ( np->duplex ? 0 : NVREG_MISC1_HD),
- base + NvRegMisc1);
- pci_push(base);
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
-
- pause_flags = 0;
- /* setup pause frame */
- if (np->duplex != 0) {
- if (np->autoneg && np->pause_flags & NV_PAUSEFRAME_AUTONEG) {
- adv_pause = adv & (ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM);
- lpa_pause = lpa & (LPA_PAUSE_CAP| LPA_PAUSE_ASYM);
-
- switch (adv_pause) {
- case ADVERTISE_PAUSE_CAP:
- if (lpa_pause & LPA_PAUSE_CAP) {
- pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- break;
- case ADVERTISE_PAUSE_ASYM:
- if (lpa_pause == (LPA_PAUSE_CAP| LPA_PAUSE_ASYM))
- {
- pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- break;
- case ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM:
- if (lpa_pause & LPA_PAUSE_CAP)
- {
- pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- if (lpa_pause == LPA_PAUSE_ASYM)
- {
- pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- }
- break;
- }
- } else {
- pause_flags = np->pause_flags;
- }
- }
- nv_update_pause(dev, pause_flags);
-
- return retval;
-}
-
-static void nv_linkchange(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
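-	/* EtherCAT: report the link state to the master via ecdev_set_link()
-	 * instead of toggling the kernel carrier state. */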
- if (np->ecdev) {
- int link = nv_update_linkspeed(dev);
- ecdev_set_link(np->ecdev, link);
- return;
- }
-
- if (nv_update_linkspeed(dev)) {
- if (!netif_carrier_ok(dev)) {
- netif_carrier_on(dev);
- printk(KERN_INFO "%s: link up.\n", dev->name);
- nv_start_rx(dev);
- }
- } else {
- if (netif_carrier_ok(dev)) {
- netif_carrier_off(dev);
- printk(KERN_INFO "%s: link down.\n", dev->name);
- nv_stop_rx(dev);
- }
- }
-}
-
-static void nv_link_irq(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 miistat;
-
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "%s: link change irq, status 0x%x.\n", dev->name, miistat);
-
- if (miistat & (NVREG_MIISTAT_LINKCHANGE))
- nv_linkchange(dev);
- dprintk(KERN_DEBUG "%s: link change notification done.\n", dev->name);
-}
-
-static irqreturn_t nv_nic_irq(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq\n", dev->name);
-
- for (i=0; ; i++) {
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- } else {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
- }
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
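-		/* The driver lock is only taken when the device is not claimed
-		 * by an EtherCAT master; in EtherCAT mode this handler is
-		 * called synchronously from ec_poll(). */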
- if (!np->ecdev) spin_lock(&np->lock);
- nv_tx_done(dev);
- if (!np->ecdev) spin_unlock(&np->lock);
-
- if (events & NVREG_IRQ_LINK) {
- if (!np->ecdev) spin_lock(&np->lock);
- nv_link_irq(dev);
- if (!np->ecdev) spin_unlock(&np->lock);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- if (!np->ecdev) spin_lock(&np->lock);
- nv_linkchange(dev);
- if (!np->ecdev) spin_unlock(&np->lock);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
-#ifdef CONFIG_FORCEDETH_NAPI
- if (events & NVREG_IRQ_RX_ALL) {
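-			/* EtherCAT: process received frames immediately instead
-			 * of scheduling NAPI polling. */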
- if (np->ecdev) {
- nv_rx_process(dev, dev->weight);
- }
- else {
- netif_rx_schedule(dev);
-
-				/* Disable further receive irqs */
- spin_lock(&np->lock);
- np->irqmask &= ~NVREG_IRQ_RX_ALL;
-
- if (np->msi_flags & NV_MSI_X_ENABLED)
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- spin_unlock(&np->lock);
- }
- }
-#else
- nv_rx_process(dev, dev->weight);
- if (nv_alloc_rx(dev) && !np->ecdev) {
- spin_lock(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock(&np->lock);
- }
-#endif
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock(&np->lock);
- /* disable interrupts on the nic */
- if (!(np->msi_flags & NV_MSI_X_ENABLED))
- writel(0, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq = np->irqmask;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock(&np->lock);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_tx(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
- unsigned long flags = 0;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL;
- writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: tx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- if (!np->ecdev) spin_lock_irqsave(&np->lock, flags);
- nv_tx_done(dev);
- if (!np->ecdev) spin_unlock_irqrestore(&np->lock, flags);
-
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock_irqsave(&np->lock, flags);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_TX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock_irqrestore(&np->lock, flags);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-#ifdef CONFIG_FORCEDETH_NAPI
-static int nv_napi_poll(struct net_device *dev, int *budget)
-{
- int pkts, limit = min(*budget, dev->quota);
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- pkts = nv_rx_process(dev, limit);
-
- if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
-
- if (pkts < limit) {
- /* all done, no more packets present */
- netif_rx_complete(dev);
-
- /* re-enable receive interrupts */
- spin_lock_irq(&np->lock);
- np->irqmask |= NVREG_IRQ_RX_ALL;
- if (np->msi_flags & NV_MSI_X_ENABLED)
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- spin_unlock_irq(&np->lock);
- return 0;
- } else {
- /* used up our quantum, so reschedule */
- dev->quota -= pkts;
- *budget -= pkts;
- return 1;
- }
-}
-#endif
-
-#ifdef CONFIG_FORCEDETH_NAPI
-static irqreturn_t nv_nic_irq_rx(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
-
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL;
- writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus);
-
- if (events && !np->ecdev) {
- netif_rx_schedule(dev);
- /* disable receive interrupts on the nic */
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- pci_push(base);
- }
- return IRQ_HANDLED;
-}
-#else
-static irqreturn_t nv_nic_irq_rx(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
- unsigned long flags;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL;
- writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: rx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- nv_rx_process(dev, dev->weight);
- if (nv_alloc_rx(dev) && !np->ecdev) {
- spin_lock_irqsave(&np->lock, flags);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irqrestore(&np->lock, flags);
- }
-
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock_irqsave(&np->lock, flags);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_RX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock_irqrestore(&np->lock, flags);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i);
- break;
- }
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-#endif
-
-static irqreturn_t nv_nic_irq_other(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
- unsigned long flags = 0;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER;
- writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- if (events & NVREG_IRQ_LINK) {
- if (!np->ecdev) spin_lock_irqsave(&np->lock, flags);
- nv_link_irq(dev);
- if (!np->ecdev) spin_unlock_irqrestore(&np->lock, flags);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- if (!np->ecdev) spin_lock_irqsave(&np->lock, flags);
- nv_linkchange(dev);
- if (!np->ecdev) spin_unlock_irqrestore(&np->lock, flags);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- if (!np->ecdev) {
- spin_lock_irqsave(&np->lock, flags);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_OTHER, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_OTHER;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- spin_unlock_irqrestore(&np->lock, flags);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_test(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_test\n", dev->name);
-
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQ_TIMER, base + NvRegIrqStatus);
- } else {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQ_TIMER, base + NvRegMSIXIrqStatus);
- }
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & NVREG_IRQ_TIMER))
- return IRQ_RETVAL(0);
-
- spin_lock(&np->lock);
- np->intr_test = 1;
- spin_unlock(&np->lock);
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_test completed\n", dev->name);
-
- return IRQ_RETVAL(1);
-}
-
-static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask)
-{
- u8 __iomem *base = get_hwbase(dev);
- int i;
- u32 msixmap = 0;
-
- /* Each interrupt bit can be mapped to a MSIX vector (4 bits).
- * MSIXMap0 represents the first 8 interrupts and MSIXMap1 represents
- * the remaining 8 interrupts.
- */
- for (i = 0; i < 8; i++) {
- if ((irqmask >> i) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap0) | msixmap, base + NvRegMSIXMap0);
-
- msixmap = 0;
- for (i = 0; i < 8; i++) {
- if ((irqmask >> (i + 8)) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap1) | msixmap, base + NvRegMSIXMap1);
-}
-
-static int nv_request_irq(struct net_device *dev, int intr_test)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int i;
-
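-	/* Interrupt setup order: try MSI-X first, then MSI, and finally fall
-	 * back to the legacy interrupt of the PCI device. */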
- if (np->msi_flags & NV_MSI_X_CAPABLE) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- np->msi_x_entry[i].entry = i;
- }
- if ((ret = pci_enable_msix(np->pci_dev, np->msi_x_entry, (np->msi_flags & NV_MSI_X_VECTORS_MASK))) == 0) {
- np->msi_flags |= NV_MSI_X_ENABLED;
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) {
- /* Request irq for rx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQF_SHARED, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
- /* Request irq for tx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQF_SHARED, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_rx;
- }
- /* Request irq for link and timer handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQF_SHARED, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_tx;
- }
- /* map interrupts to their respective vector */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER);
- } else {
- /* Request irq for all interrupts */
- if ((!intr_test &&
- request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
- (intr_test &&
- request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0)) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- }
- }
- }
- if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) {
- if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
- pci_intx(np->pci_dev, 0);
- np->msi_flags |= NV_MSI_ENABLED;
- if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
- (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0)) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msi(np->pci_dev);
- pci_intx(np->pci_dev, 1);
- np->msi_flags &= ~NV_MSI_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIMap0);
- writel(0, base + NvRegMSIMap1);
- /* enable msi vector 0 */
- writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask);
- }
- }
- if (ret != 0) {
- if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
- (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0))
- goto out_err;
-
- }
-
- return 0;
-out_free_tx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev);
-out_free_rx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev);
-out_err:
- return 1;
-}
-
-static void nv_free_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- int i;
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- free_irq(np->msi_x_entry[i].vector, dev);
- }
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- } else {
- free_irq(np->pci_dev->irq, dev);
- if (np->msi_flags & NV_MSI_ENABLED) {
- pci_disable_msi(np->pci_dev);
- pci_intx(np->pci_dev, 1);
- np->msi_flags &= ~NV_MSI_ENABLED;
- }
- }
-}
-
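-/* EtherCAT device poll function: called by the master in place of the
- * hardware interrupt. It dispatches to the same handlers that would
- * otherwise run in interrupt context. */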
-void ec_poll(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (!using_multi_irqs(dev)) {
- nv_nic_irq(0, dev);
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- nv_nic_irq_rx(0, dev);
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- nv_nic_irq_tx(0, dev);
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- nv_nic_irq_other(0, dev);
- }
- }
-}
-
-static void nv_do_nic_poll(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 mask = 0;
-
- /*
-	 * First disable the irq(s) and then re-enable the interrupts on the
-	 * nic. This has to be done before calling nv_nic_irq, because that
-	 * may decide to do otherwise.
- */
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq_lockdep(dev->irq);
- mask = np->irqmask;
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- mask |= NVREG_IRQ_RX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- mask |= NVREG_IRQ_TX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- mask |= NVREG_IRQ_OTHER;
- }
- }
- np->nic_poll_irq = 0;
-
- /* FIXME: Do we need synchronize_irq(dev->irq) here? */
-
- writel(mask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!using_multi_irqs(dev)) {
- nv_nic_irq(0, dev);
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq_lockdep(dev->irq);
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- nv_nic_irq_rx(0, dev);
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- nv_nic_irq_tx(0, dev);
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- nv_nic_irq_other(0, dev);
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
- }
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void nv_poll_controller(struct net_device *dev)
-{
- nv_do_nic_poll((unsigned long) dev);
-}
-#endif
-
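-/* Periodically accumulate the hardware statistics counters into np->estats. */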
-static void nv_do_stats_poll(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- np->estats.tx_bytes += readl(base + NvRegTxCnt);
- np->estats.tx_zero_rexmt += readl(base + NvRegTxZeroReXmt);
- np->estats.tx_one_rexmt += readl(base + NvRegTxOneReXmt);
- np->estats.tx_many_rexmt += readl(base + NvRegTxManyReXmt);
- np->estats.tx_late_collision += readl(base + NvRegTxLateCol);
- np->estats.tx_fifo_errors += readl(base + NvRegTxUnderflow);
- np->estats.tx_carrier_errors += readl(base + NvRegTxLossCarrier);
- np->estats.tx_excess_deferral += readl(base + NvRegTxExcessDef);
- np->estats.tx_retry_error += readl(base + NvRegTxRetryErr);
- np->estats.tx_deferral += readl(base + NvRegTxDef);
- np->estats.tx_packets += readl(base + NvRegTxFrame);
- np->estats.tx_pause += readl(base + NvRegTxPause);
- np->estats.rx_frame_error += readl(base + NvRegRxFrameErr);
- np->estats.rx_extra_byte += readl(base + NvRegRxExtraByte);
- np->estats.rx_late_collision += readl(base + NvRegRxLateCol);
- np->estats.rx_runt += readl(base + NvRegRxRunt);
- np->estats.rx_frame_too_long += readl(base + NvRegRxFrameTooLong);
- np->estats.rx_over_errors += readl(base + NvRegRxOverflow);
- np->estats.rx_crc_errors += readl(base + NvRegRxFCSErr);
- np->estats.rx_frame_align_error += readl(base + NvRegRxFrameAlignErr);
- np->estats.rx_length_error += readl(base + NvRegRxLenErr);
- np->estats.rx_unicast += readl(base + NvRegRxUnicast);
- np->estats.rx_multicast += readl(base + NvRegRxMulticast);
- np->estats.rx_broadcast += readl(base + NvRegRxBroadcast);
- np->estats.rx_bytes += readl(base + NvRegRxCnt);
- np->estats.rx_pause += readl(base + NvRegRxPause);
- np->estats.rx_drop_frame += readl(base + NvRegRxDropFrame);
- np->estats.rx_packets =
- np->estats.rx_unicast +
- np->estats.rx_multicast +
- np->estats.rx_broadcast;
- np->estats.rx_errors_total =
- np->estats.rx_crc_errors +
- np->estats.rx_over_errors +
- np->estats.rx_frame_error +
- (np->estats.rx_frame_align_error - np->estats.rx_extra_byte) +
- np->estats.rx_late_collision +
- np->estats.rx_runt +
- np->estats.rx_frame_too_long;
-
- if (!np->in_shutdown)
- mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL);
-}
-
-static void nv_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
-{
- struct fe_priv *np = netdev_priv(dev);
- strcpy(info->driver, "forcedeth");
- strcpy(info->version, FORCEDETH_VERSION);
- strcpy(info->bus_info, pci_name(np->pci_dev));
-}
-
-static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- wolinfo->supported = WAKE_MAGIC;
-
- spin_lock_irq(&np->lock);
- if (np->wolenabled)
- wolinfo->wolopts = WAKE_MAGIC;
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 flags = 0;
-
- if (wolinfo->wolopts == 0) {
- np->wolenabled = 0;
- } else if (wolinfo->wolopts & WAKE_MAGIC) {
- np->wolenabled = 1;
- flags = NVREG_WAKEUPFLAGS_ENABLE;
- }
- if (netif_running(dev)) {
- spin_lock_irq(&np->lock);
- writel(flags, base + NvRegWakeUpFlags);
- spin_unlock_irq(&np->lock);
- }
- return 0;
-}
-
-static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
- int adv;
-
- spin_lock_irq(&np->lock);
- ecmd->port = PORT_MII;
- if (!netif_running(dev)) {
- /* We do not track link speed / duplex setting if the
- * interface is disabled. Force a link check */
- if (nv_update_linkspeed(dev)) {
- if (!netif_carrier_ok(dev))
- netif_carrier_on(dev);
- } else {
- if (netif_carrier_ok(dev))
- netif_carrier_off(dev);
- }
- }
-
- if (netif_carrier_ok(dev)) {
- switch(np->linkspeed & (NVREG_LINKSPEED_MASK)) {
- case NVREG_LINKSPEED_10:
- ecmd->speed = SPEED_10;
- break;
- case NVREG_LINKSPEED_100:
- ecmd->speed = SPEED_100;
- break;
- case NVREG_LINKSPEED_1000:
- ecmd->speed = SPEED_1000;
- break;
- }
- ecmd->duplex = DUPLEX_HALF;
- if (np->duplex)
- ecmd->duplex = DUPLEX_FULL;
- } else {
- ecmd->speed = -1;
- ecmd->duplex = -1;
- }
-
- ecmd->autoneg = np->autoneg;
-
- ecmd->advertising = ADVERTISED_MII;
- if (np->autoneg) {
- ecmd->advertising |= ADVERTISED_Autoneg;
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- if (adv & ADVERTISE_10HALF)
- ecmd->advertising |= ADVERTISED_10baseT_Half;
- if (adv & ADVERTISE_10FULL)
- ecmd->advertising |= ADVERTISED_10baseT_Full;
- if (adv & ADVERTISE_100HALF)
- ecmd->advertising |= ADVERTISED_100baseT_Half;
- if (adv & ADVERTISE_100FULL)
- ecmd->advertising |= ADVERTISED_100baseT_Full;
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- if (adv & ADVERTISE_1000FULL)
- ecmd->advertising |= ADVERTISED_1000baseT_Full;
- }
- }
- ecmd->supported = (SUPPORTED_Autoneg |
- SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
- SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
- SUPPORTED_MII);
- if (np->gigabit == PHY_GIGABIT)
- ecmd->supported |= SUPPORTED_1000baseT_Full;
-
- ecmd->phy_address = np->phyaddr;
- ecmd->transceiver = XCVR_EXTERNAL;
-
- /* ignore maxtxpkt, maxrxpkt for now */
- spin_unlock_irq(&np->lock);
- return 0;
-}
-
-static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (ecmd->port != PORT_MII)
- return -EINVAL;
- if (ecmd->transceiver != XCVR_EXTERNAL)
- return -EINVAL;
- if (ecmd->phy_address != np->phyaddr) {
- /* TODO: support switching between multiple phys. Should be
- * trivial, but not enabled due to lack of test hardware. */
- return -EINVAL;
- }
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- u32 mask;
-
- mask = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
- ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full;
- if (np->gigabit == PHY_GIGABIT)
- mask |= ADVERTISED_1000baseT_Full;
-
- if ((ecmd->advertising & mask) == 0)
- return -EINVAL;
-
- } else if (ecmd->autoneg == AUTONEG_DISABLE) {
-		/* Note: with autonegotiation disabled, speed 1000 is
-		 * intentionally forbidden - no one should need that. */
-
- if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100)
- return -EINVAL;
- if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL)
- return -EINVAL;
- } else {
- return -EINVAL;
- }
-
- netif_carrier_off(dev);
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- }
-
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- int adv, bmcr;
-
- np->autoneg = 1;
-
- /* advertise only what has been requested */
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
- if (ecmd->advertising & ADVERTISED_10baseT_Half)
- adv |= ADVERTISE_10HALF;
- if (ecmd->advertising & ADVERTISED_10baseT_Full)
- adv |= ADVERTISE_10FULL;
- if (ecmd->advertising & ADVERTISED_100baseT_Half)
- adv |= ADVERTISE_100HALF;
- if (ecmd->advertising & ADVERTISED_100baseT_Full)
- adv |= ADVERTISE_100FULL;
-		if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */
- adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- adv |= ADVERTISE_PAUSE_ASYM;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- if (ecmd->advertising & ADVERTISED_1000baseT_Full)
- adv |= ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_CTRL1000, adv);
- }
-
- if (netif_running(dev))
- printk(KERN_INFO "%s: link down.\n", dev->name);
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
- bmcr |= BMCR_ANENABLE;
- /* reset the phy in order for settings to stick,
- * and cause autoneg to start */
- if (phy_reset(dev, bmcr)) {
- printk(KERN_INFO "%s: phy reset failed\n", dev->name);
- return -EINVAL;
- }
- } else {
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- }
- } else {
- int adv, bmcr;
-
- np->autoneg = 0;
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_10HALF;
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_10FULL;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_100HALF;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_100FULL;
- np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE);
-		if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) { /* for rx we set both advertisements but disable tx pause */
- adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
- np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- }
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) {
- adv |= ADVERTISE_PAUSE_ASYM;
- np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
- np->fixed_mode = adv;
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_CTRL1000, adv);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_SPEED1000|BMCR_FULLDPLX);
- if (np->fixed_mode & (ADVERTISE_10FULL|ADVERTISE_100FULL))
- bmcr |= BMCR_FULLDPLX;
- if (np->fixed_mode & (ADVERTISE_100HALF|ADVERTISE_100FULL))
- bmcr |= BMCR_SPEED100;
- if (np->phy_oui == PHY_OUI_MARVELL) {
- /* reset the phy in order for forced mode settings to stick */
- if (phy_reset(dev, bmcr)) {
- printk(KERN_INFO "%s: phy reset failed\n", dev->name);
- return -EINVAL;
- }
- } else {
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- if (netif_running(dev)) {
- /* Wait a bit and then reconfigure the nic. */
- udelay(10);
- nv_linkchange(dev);
- }
- }
- }
-
- if (netif_running(dev)) {
- nv_start_rx(dev);
- nv_start_tx(dev);
- nv_enable_irq(dev);
- }
-
- return 0;
-}
-
-#define FORCEDETH_REGS_VER 1
-
-static int nv_get_regs_len(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- return np->register_size;
-}
-
-static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 *rbuf = buf;
- int i;
-
- regs->version = FORCEDETH_REGS_VER;
- spin_lock_irq(&np->lock);
- for (i = 0;i <= np->register_size/sizeof(u32); i++)
- rbuf[i] = readl(base + i*sizeof(u32));
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_nway_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int ret;
-
- if (np->autoneg) {
- int bmcr;
-
- netif_carrier_off(dev);
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- printk(KERN_INFO "%s: link down.\n", dev->name);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
- bmcr |= BMCR_ANENABLE;
-			/* reset the phy in order for settings to stick */
- if (phy_reset(dev, bmcr)) {
- printk(KERN_INFO "%s: phy reset failed\n", dev->name);
- return -EINVAL;
- }
- } else {
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- }
-
- if (netif_running(dev)) {
- nv_start_rx(dev);
- nv_start_tx(dev);
- nv_enable_irq(dev);
- }
- ret = 0;
- } else {
- ret = -EINVAL;
- }
-
- return ret;
-}
-
-static int nv_set_tso(struct net_device *dev, u32 value)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if ((np->driver_data & DEV_HAS_CHECKSUM))
- return ethtool_op_set_tso(dev, value);
- else
- return -EOPNOTSUPP;
-}
-
-static void nv_get_ringparam(struct net_device *dev, struct ethtool_ringparam* ring)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- ring->rx_max_pending = (np->desc_ver == DESC_VER_1) ? RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3;
- ring->rx_mini_max_pending = 0;
- ring->rx_jumbo_max_pending = 0;
- ring->tx_max_pending = (np->desc_ver == DESC_VER_1) ? RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3;
-
- ring->rx_pending = np->rx_ring_size;
- ring->rx_mini_pending = 0;
- ring->rx_jumbo_pending = 0;
- ring->tx_pending = np->tx_ring_size;
-}
-
-static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ring)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u8 *rxtx_ring, *rx_skbuff, *tx_skbuff, *rx_dma, *tx_dma, *tx_dma_len;
- dma_addr_t ring_addr;
-
- if (ring->rx_pending < RX_RING_MIN ||
- ring->tx_pending < TX_RING_MIN ||
- ring->rx_mini_pending != 0 ||
- ring->rx_jumbo_pending != 0 ||
- (np->desc_ver == DESC_VER_1 &&
- (ring->rx_pending > RING_MAX_DESC_VER_1 ||
- ring->tx_pending > RING_MAX_DESC_VER_1)) ||
- (np->desc_ver != DESC_VER_1 &&
- (ring->rx_pending > RING_MAX_DESC_VER_2_3 ||
- ring->tx_pending > RING_MAX_DESC_VER_2_3))) {
- return -EINVAL;
- }
-
- /* allocate new rings */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- rxtx_ring = pci_alloc_consistent(np->pci_dev,
- sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending),
- &ring_addr);
- } else {
- rxtx_ring = pci_alloc_consistent(np->pci_dev,
- sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending),
- &ring_addr);
- }
- rx_skbuff = kmalloc(sizeof(struct sk_buff*) * ring->rx_pending, GFP_KERNEL);
- rx_dma = kmalloc(sizeof(dma_addr_t) * ring->rx_pending, GFP_KERNEL);
- tx_skbuff = kmalloc(sizeof(struct sk_buff*) * ring->tx_pending, GFP_KERNEL);
- tx_dma = kmalloc(sizeof(dma_addr_t) * ring->tx_pending, GFP_KERNEL);
- tx_dma_len = kmalloc(sizeof(unsigned int) * ring->tx_pending, GFP_KERNEL);
- if (!rxtx_ring || !rx_skbuff || !rx_dma || !tx_skbuff || !tx_dma || !tx_dma_len) {
- /* fall back to old rings */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (rxtx_ring)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending),
- rxtx_ring, ring_addr);
- } else {
- if (rxtx_ring)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending),
- rxtx_ring, ring_addr);
- }
- if (rx_skbuff)
- kfree(rx_skbuff);
- if (rx_dma)
- kfree(rx_dma);
- if (tx_skbuff)
- kfree(tx_skbuff);
- if (tx_dma)
- kfree(tx_dma);
- if (tx_dma_len)
- kfree(tx_dma_len);
- goto exit;
- }
-
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain queues */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- /* delete queues */
- free_rings(dev);
- }
-
- /* set new values */
- np->rx_ring_size = ring->rx_pending;
- np->tx_ring_size = ring->tx_pending;
- np->tx_limit_stop = ring->tx_pending - TX_LIMIT_DIFFERENCE;
- np->tx_limit_start = ring->tx_pending - TX_LIMIT_DIFFERENCE - 1;
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig = (struct ring_desc*)rxtx_ring;
- np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size];
- } else {
- np->rx_ring.ex = (struct ring_desc_ex*)rxtx_ring;
- np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size];
- }
- np->rx_skbuff = (struct sk_buff**)rx_skbuff;
- np->rx_dma = (dma_addr_t*)rx_dma;
- np->tx_skbuff = (struct sk_buff**)tx_skbuff;
- np->tx_dma = (dma_addr_t*)tx_dma;
- np->tx_dma_len = (unsigned int*)tx_dma_len;
- np->ring_addr = ring_addr;
-
- memset(np->rx_skbuff, 0, sizeof(struct sk_buff*) * np->rx_ring_size);
- memset(np->rx_dma, 0, sizeof(dma_addr_t) * np->rx_ring_size);
- memset(np->tx_skbuff, 0, sizeof(struct sk_buff*) * np->tx_ring_size);
- memset(np->tx_dma, 0, sizeof(dma_addr_t) * np->tx_ring_size);
- memset(np->tx_dma_len, 0, sizeof(unsigned int) * np->tx_ring_size);
-
- if (netif_running(dev)) {
- /* reinit driver view of the queues */
- set_bufsize(dev);
- if (nv_init_ring(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
-
- /* reinit nic view of the queues */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
-
- /* restart engines */
- nv_start_rx(dev);
- nv_start_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- nv_enable_irq(dev);
- }
- return 0;
-exit:
- return -ENOMEM;
-}
-
-static void nv_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- pause->autoneg = (np->pause_flags & NV_PAUSEFRAME_AUTONEG) != 0;
- pause->rx_pause = (np->pause_flags & NV_PAUSEFRAME_RX_ENABLE) != 0;
- pause->tx_pause = (np->pause_flags & NV_PAUSEFRAME_TX_ENABLE) != 0;
-}
-
-static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause)
-{
- struct fe_priv *np = netdev_priv(dev);
- int adv, bmcr;
-
- if ((!np->autoneg && np->duplex == 0) ||
- (np->autoneg && !pause->autoneg && np->duplex == 0)) {
-		printk(KERN_INFO "%s: cannot set pause settings when forced link is in half duplex.\n",
- dev->name);
- return -EINVAL;
- }
- if (pause->tx_pause && !(np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)) {
- printk(KERN_INFO "%s: hardware does not support tx pause frames.\n", dev->name);
- return -EINVAL;
- }
-
- netif_carrier_off(dev);
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- }
-
- np->pause_flags &= ~(NV_PAUSEFRAME_RX_REQ|NV_PAUSEFRAME_TX_REQ);
- if (pause->rx_pause)
- np->pause_flags |= NV_PAUSEFRAME_RX_REQ;
- if (pause->tx_pause)
- np->pause_flags |= NV_PAUSEFRAME_TX_REQ;
-
- if (np->autoneg && pause->autoneg) {
- np->pause_flags |= NV_PAUSEFRAME_AUTONEG;
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
-		if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */
- adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- adv |= ADVERTISE_PAUSE_ASYM;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
-
- if (netif_running(dev))
- printk(KERN_INFO "%s: link down.\n", dev->name);
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- } else {
- np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE);
- if (pause->rx_pause)
- np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- if (pause->tx_pause)
- np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
-
- if (!netif_running(dev))
- nv_update_linkspeed(dev);
- else
- nv_update_pause(dev, np->pause_flags);
- }
-
- if (netif_running(dev)) {
- nv_start_rx(dev);
- nv_start_tx(dev);
- nv_enable_irq(dev);
- }
- return 0;
-}
-
-static u32 nv_get_rx_csum(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- return (np->rx_csum) != 0;
-}
-
-static int nv_set_rx_csum(struct net_device *dev, u32 data)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int retcode = 0;
-
- if (np->driver_data & DEV_HAS_CHECKSUM) {
- if (data) {
- np->rx_csum = 1;
- np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
- } else {
- np->rx_csum = 0;
- /* vlan is dependent on rx checksum offload */
- if (!(np->vlanctl_bits & NVREG_VLANCONTROL_ENABLE))
- np->txrxctl_bits &= ~NVREG_TXRXCTL_RXCHECK;
- }
- if (netif_running(dev)) {
- spin_lock_irq(&np->lock);
- writel(np->txrxctl_bits, base + NvRegTxRxControl);
- spin_unlock_irq(&np->lock);
- }
- } else {
- return -EINVAL;
- }
-
- return retcode;
-}
-
-static int nv_set_tx_csum(struct net_device *dev, u32 data)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_CHECKSUM)
- return ethtool_op_set_tx_hw_csum(dev, data);
- else
- return -EOPNOTSUPP;
-}
-
-static int nv_set_sg(struct net_device *dev, u32 data)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_CHECKSUM)
- return ethtool_op_set_sg(dev, data);
- else
- return -EOPNOTSUPP;
-}
-
-static int nv_get_stats_count(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_STATISTICS)
- return sizeof(struct nv_ethtool_stats)/sizeof(u64);
- else
- return 0;
-}
-
-static void nv_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *estats, u64 *buffer)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- /* update stats */
- nv_do_stats_poll((unsigned long)dev);
-
- memcpy(buffer, &np->estats, nv_get_stats_count(dev)*sizeof(u64));
-}
-
-static int nv_self_test_count(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_TEST_EXTENDED)
- return NV_TEST_COUNT_EXTENDED;
- else
- return NV_TEST_COUNT_BASE;
-}
-
-static int nv_link_test(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int mii_status;
-
- mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
-
- /* check phy link status */
- if (!(mii_status & BMSR_LSTATUS))
- return 0;
- else
- return 1;
-}
-
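-/* Register self test: toggle the maskable bits of selected registers and
- * verify that the written values can be read back. */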
-static int nv_register_test(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- int i = 0;
- u32 orig_read, new_read;
-
- do {
- orig_read = readl(base + nv_registers_test[i].reg);
-
- /* xor with mask to toggle bits */
- orig_read ^= nv_registers_test[i].mask;
-
- writel(orig_read, base + nv_registers_test[i].reg);
-
- new_read = readl(base + nv_registers_test[i].reg);
-
- if ((new_read & nv_registers_test[i].mask) != (orig_read & nv_registers_test[i].mask))
- return 0;
-
- /* restore original value */
- orig_read ^= nv_registers_test[i].mask;
- writel(orig_read, base + nv_registers_test[i].reg);
-
- } while (nv_registers_test[++i].reg != 0);
-
- return 1;
-}
-
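-/* Interrupt self test: request a test irq, trigger the timer interrupt and
- * check that the handler has set np->intr_test. */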
-static int nv_interrupt_test(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int testcnt;
- u32 save_msi_flags, save_poll_interval = 0;
-
- if (netif_running(dev)) {
- /* free current irq */
- nv_free_irq(dev);
- save_poll_interval = readl(base+NvRegPollingInterval);
- }
-
- /* flag to test interrupt handler */
- np->intr_test = 0;
-
- /* setup test irq */
- save_msi_flags = np->msi_flags;
- np->msi_flags &= ~NV_MSI_X_VECTORS_MASK;
- np->msi_flags |= 0x001; /* setup 1 vector */
- if (nv_request_irq(dev, 1))
- return 0;
-
- /* setup timer interrupt */
- writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
-
- nv_enable_hw_interrupts(dev, NVREG_IRQ_TIMER);
-
- /* wait for at least one interrupt */
- msleep(100);
-
- spin_lock_irq(&np->lock);
-
- /* flag should be set within ISR */
- testcnt = np->intr_test;
- if (!testcnt)
- ret = 2;
-
- nv_disable_hw_interrupts(dev, NVREG_IRQ_TIMER);
- if (!(np->msi_flags & NV_MSI_X_ENABLED))
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- else
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
-
- spin_unlock_irq(&np->lock);
-
- nv_free_irq(dev);
-
- np->msi_flags = save_msi_flags;
-
- if (netif_running(dev)) {
- writel(save_poll_interval, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
- /* restore original irq */
- if (nv_request_irq(dev, 0))
- return 0;
- }
-
- return ret;
-}
-
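-/* Loopback self test: put the MAC into loopback mode, transmit a single
- * test frame and verify that it is received back unchanged. */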
-static int nv_loopback_test(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- struct sk_buff *tx_skb, *rx_skb;
- dma_addr_t test_dma_addr;
- u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
- u32 flags;
- int len, i, pkt_len;
- u8 *pkt_data;
- u32 filter_flags = 0;
- u32 misc1_flags = 0;
- int ret = 1;
-
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- filter_flags = readl(base + NvRegPacketFilterFlags);
- misc1_flags = readl(base + NvRegMisc1);
- } else {
- nv_txrx_reset(dev);
- }
-
- /* reinit driver view of the rx queue */
- set_bufsize(dev);
- nv_init_ring(dev);
-
- /* setup hardware for loopback */
- writel(NVREG_MISC1_FORCE, base + NvRegMisc1);
- writel(NVREG_PFF_ALWAYS | NVREG_PFF_LOOPBACK, base + NvRegPacketFilterFlags);
-
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
-
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
-
- /* setup packet for tx */
- pkt_len = ETH_DATA_LEN;
- tx_skb = dev_alloc_skb(pkt_len);
- if (!tx_skb) {
- printk(KERN_ERR "dev_alloc_skb() failed during loopback test"
- " of %s\n", dev->name);
- ret = 0;
- goto out;
- }
- pkt_data = skb_put(tx_skb, pkt_len);
- for (i = 0; i < pkt_len; i++)
- pkt_data[i] = (u8)(i & 0xff);
- test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data,
- tx_skb->end-tx_skb->data, PCI_DMA_FROMDEVICE);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[0].buf = cpu_to_le32(test_dma_addr);
- np->tx_ring.orig[0].flaglen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra);
- } else {
- np->tx_ring.ex[0].bufhigh = cpu_to_le64(test_dma_addr) >> 32;
- np->tx_ring.ex[0].buflow = cpu_to_le64(test_dma_addr) & 0x0FFFFFFFF;
- np->tx_ring.ex[0].flaglen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra);
- }
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(get_hwbase(dev));
-
- msleep(500);
-
- /* check for rx of the packet */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- flags = le32_to_cpu(np->rx_ring.orig[0].flaglen);
- len = nv_descr_getlength(&np->rx_ring.orig[0], np->desc_ver);
-
- } else {
- flags = le32_to_cpu(np->rx_ring.ex[0].flaglen);
- len = nv_descr_getlength_ex(&np->rx_ring.ex[0], np->desc_ver);
- }
-
- if (flags & NV_RX_AVAIL) {
- ret = 0;
- } else if (np->desc_ver == DESC_VER_1) {
- if (flags & NV_RX_ERROR)
- ret = 0;
- } else {
- if (flags & NV_RX2_ERROR) {
- ret = 0;
- }
- }
-
- if (ret) {
- if (len != pkt_len) {
- ret = 0;
- dprintk(KERN_DEBUG "%s: loopback len mismatch %d vs %d\n",
- dev->name, len, pkt_len);
- } else {
- rx_skb = np->rx_skbuff[0];
- for (i = 0; i < pkt_len; i++) {
- if (rx_skb->data[i] != (u8)(i & 0xff)) {
- ret = 0;
- dprintk(KERN_DEBUG "%s: loopback pattern check failed on byte %d\n",
- dev->name, i);
- break;
- }
- }
- }
- } else {
- dprintk(KERN_DEBUG "%s: loopback - did not receive test packet\n", dev->name);
- }
-
- pci_unmap_page(np->pci_dev, test_dma_addr,
- tx_skb->end-tx_skb->data,
- PCI_DMA_TODEVICE);
- dev_kfree_skb_any(tx_skb);
- out:
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
-
- if (netif_running(dev)) {
- writel(misc1_flags, base + NvRegMisc1);
- writel(filter_flags, base + NvRegPacketFilterFlags);
- nv_enable_irq(dev);
- }
-
- return ret;
-}
-
-static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 *buffer)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int result;
- memset(buffer, 0, nv_self_test_count(dev)*sizeof(u64));
-
- if (!nv_link_test(dev)) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[0] = 1;
- }
-
- if (test->flags & ETH_TEST_FL_OFFLINE) {
- if (netif_running(dev)) {
- netif_stop_queue(dev);
- netif_poll_disable(dev);
- netif_tx_lock_bh(dev);
- spin_lock_irq(&np->lock);
- nv_disable_hw_interrupts(dev, np->irqmask);
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- } else {
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
- }
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- spin_unlock_irq(&np->lock);
- netif_tx_unlock_bh(dev);
- }
-
- if (!nv_register_test(dev)) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[1] = 1;
- }
-
- result = nv_interrupt_test(dev);
- if (result != 1) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[2] = 1;
- }
- if (result == 0) {
- /* bail out */
- return;
- }
-
- if (!nv_loopback_test(dev)) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[3] = 1;
- }
-
- if (netif_running(dev)) {
- /* reinit driver view of the rx queue */
- set_bufsize(dev);
- if (nv_init_ring(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
- netif_start_queue(dev);
- netif_poll_enable(dev);
- nv_enable_hw_interrupts(dev, np->irqmask);
- }
- }
-}
-
-static void nv_get_strings(struct net_device *dev, u32 stringset, u8 *buffer)
-{
- switch (stringset) {
- case ETH_SS_STATS:
- memcpy(buffer, &nv_estats_str, nv_get_stats_count(dev)*sizeof(struct nv_ethtool_str));
- break;
- case ETH_SS_TEST:
- memcpy(buffer, &nv_etests_str, nv_self_test_count(dev)*sizeof(struct nv_ethtool_str));
- break;
- }
-}
-
-static const struct ethtool_ops ops = {
- .get_drvinfo = nv_get_drvinfo,
- .get_link = ethtool_op_get_link,
- .get_wol = nv_get_wol,
- .set_wol = nv_set_wol,
- .get_settings = nv_get_settings,
- .set_settings = nv_set_settings,
- .get_regs_len = nv_get_regs_len,
- .get_regs = nv_get_regs,
- .nway_reset = nv_nway_reset,
- .get_perm_addr = ethtool_op_get_perm_addr,
- .get_tso = ethtool_op_get_tso,
- .set_tso = nv_set_tso,
- .get_ringparam = nv_get_ringparam,
- .set_ringparam = nv_set_ringparam,
- .get_pauseparam = nv_get_pauseparam,
- .set_pauseparam = nv_set_pauseparam,
- .get_rx_csum = nv_get_rx_csum,
- .set_rx_csum = nv_set_rx_csum,
- .get_tx_csum = ethtool_op_get_tx_csum,
- .set_tx_csum = nv_set_tx_csum,
- .get_sg = ethtool_op_get_sg,
- .set_sg = nv_set_sg,
- .get_strings = nv_get_strings,
- .get_stats_count = nv_get_stats_count,
- .get_ethtool_stats = nv_get_ethtool_stats,
- .self_test_count = nv_self_test_count,
- .self_test = nv_self_test,
-};
-
-static void nv_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- spin_lock_irq(&np->lock);
-
- /* save vlan group */
- np->vlangrp = grp;
-
- if (grp) {
- /* enable vlan on MAC */
- np->txrxctl_bits |= NVREG_TXRXCTL_VLANSTRIP | NVREG_TXRXCTL_VLANINS;
- } else {
- /* disable vlan on MAC */
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANSTRIP;
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANINS;
- }
-
- writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
-
- spin_unlock_irq(&np->lock);
-};
-
-static void nv_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid)
-{
- /* nothing to do */
-};
-
-static int nv_open(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int oom, i;
-
- dprintk(KERN_DEBUG "nv_open: begin\n");
-
- /* erase previous misconfiguration */
- if (np->driver_data & DEV_HAS_POWER_CNTRL)
- nv_mac_reset(dev);
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(0, base + NvRegPacketFilterFlags);
-
- writel(0, base + NvRegTransmitterControl);
- writel(0, base + NvRegReceiverControl);
-
- writel(0, base + NvRegAdapterControl);
-
- if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)
- writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame);
-
- /* initialize descriptor rings */
- set_bufsize(dev);
- oom = nv_init_ring(dev);
-
- writel(0, base + NvRegLinkSpeed);
- writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
- nv_txrx_reset(dev);
- writel(0, base + NvRegUnknownSetupReg6);
-
- np->in_shutdown = 0;
-
- /* give hw rings */
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
-
- writel(np->linkspeed, base + NvRegLinkSpeed);
- if (np->desc_ver == DESC_VER_1)
- writel(NVREG_TX_WM_DESC1_DEFAULT, base + NvRegTxWatermark);
- else
- writel(NVREG_TX_WM_DESC2_3_DEFAULT, base + NvRegTxWatermark);
- writel(np->txrxctl_bits, base + NvRegTxRxControl);
- writel(np->vlanctl_bits, base + NvRegVlanControl);
- pci_push(base);
- writel(NVREG_TXRXCTL_BIT1|np->txrxctl_bits, base + NvRegTxRxControl);
- reg_delay(dev, NvRegUnknownSetupReg5, NVREG_UNKSETUP5_BIT31, NVREG_UNKSETUP5_BIT31,
- NV_SETUP5_DELAY, NV_SETUP5_DELAYMAX,
- KERN_INFO "open: SetupReg5, Bit 31 remained off\n");
-
- writel(0, base + NvRegUnknownSetupReg4);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
-
- writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1);
- writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus);
- writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags);
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
-
- writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus);
- get_random_bytes(&i, sizeof(i));
- writel(NVREG_RNDSEED_FORCE | (i&NVREG_RNDSEED_MASK), base + NvRegRandomSeed);
- writel(NVREG_TX_DEFERRAL_DEFAULT, base + NvRegTxDeferral);
- writel(NVREG_RX_DEFERRAL_DEFAULT, base + NvRegRxDeferral);
- if (poll_interval == -1) {
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT)
- writel(NVREG_POLL_DEFAULT_THROUGHPUT, base + NvRegPollingInterval);
- else
- writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
- }
- else
- writel(poll_interval & 0xFFFF, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
- writel((np->phyaddr << NVREG_ADAPTCTL_PHYSHIFT)|NVREG_ADAPTCTL_PHYVALID|NVREG_ADAPTCTL_RUNNING,
- base + NvRegAdapterControl);
- writel(NVREG_MIISPEED_BIT8|NVREG_MIIDELAY, base + NvRegMIISpeed);
- writel(NVREG_UNKSETUP4_VAL, base + NvRegUnknownSetupReg4);
- if (np->wolenabled)
- writel(NVREG_WAKEUPFLAGS_ENABLE , base + NvRegWakeUpFlags);
-
- i = readl(base + NvRegPowerState);
- if ( (i & NVREG_POWERSTATE_POWEREDUP) == 0)
- writel(NVREG_POWERSTATE_POWEREDUP|i, base + NvRegPowerState);
-
- pci_push(base);
- udelay(10);
- writel(readl(base + NvRegPowerState) | NVREG_POWERSTATE_VALID, base + NvRegPowerState);
-
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- pci_push(base);
-
- if (!np->ecdev) {
- if (nv_request_irq(dev, 0)) {
- goto out_drain;
- }
-
- /* ask for interrupts */
- nv_enable_hw_interrupts(dev, np->irqmask);
-
- spin_lock_irq(&np->lock);
- }
-
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags);
- /* One manual link speed update: Interrupts are enabled, future link
- * speed changes cause interrupts and are handled by nv_link_irq().
- */
- {
- u32 miistat;
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat);
- }
- /* set linkspeed to invalid value, thus force nv_update_linkspeed
- * to init hw */
- np->linkspeed = 0;
- ret = nv_update_linkspeed(dev);
- nv_start_rx(dev);
- nv_start_tx(dev);
-
- if (np->ecdev) {
- ecdev_set_link(np->ecdev, ret);
- }
- else {
- netif_start_queue(dev);
- netif_poll_enable(dev);
-
- if (ret) {
- netif_carrier_on(dev);
- } else {
- printk("%s: no link during initialization.\n", dev->name);
- netif_carrier_off(dev);
- }
- if (oom)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
-
- /* start statistics timer */
- if (np->driver_data & DEV_HAS_STATISTICS)
- mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL);
-
- spin_unlock_irq(&np->lock);
- }
-
- return 0;
-out_drain:
- drain_ring(dev);
- return ret;
-}
-
-static int nv_close(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base;
-
- if (!np->ecdev) {
- spin_lock_irq(&np->lock);
- np->in_shutdown = 1;
- spin_unlock_irq(&np->lock);
- netif_poll_disable(dev);
- synchronize_irq(dev->irq);
-
- del_timer_sync(&np->oom_kick);
- del_timer_sync(&np->nic_poll);
- del_timer_sync(&np->stats_poll);
-
- netif_stop_queue(dev);
- spin_lock_irq(&np->lock);
- }
-
- nv_stop_tx(dev);
- nv_stop_rx(dev);
- nv_txrx_reset(dev);
-
- /* disable interrupts on the nic or we will lock up */
- if (!np->ecdev) {
- base = get_hwbase(dev);
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- dprintk(KERN_INFO "%s: Irqmask is zero again\n", dev->name);
-
- spin_unlock_irq(&np->lock);
-
- nv_free_irq(dev);
- }
-
- drain_ring(dev);
-
- if (np->wolenabled)
- nv_start_rx(dev);
-
- /* FIXME: power down nic */
-
- return 0;
-}
-
-static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
-{
- struct net_device *dev;
- struct fe_priv *np;
- unsigned long addr;
- u8 __iomem *base;
- int err, i;
- u32 powerstate, txreg;
-
- board_idx++;
-
- dev = alloc_etherdev(sizeof(struct fe_priv));
- err = -ENOMEM;
- if (!dev)
- goto out;
-
- np = netdev_priv(dev);
- np->pci_dev = pci_dev;
- spin_lock_init(&np->lock);
- SET_MODULE_OWNER(dev);
- SET_NETDEV_DEV(dev, &pci_dev->dev);
-
- init_timer(&np->oom_kick);
- np->oom_kick.data = (unsigned long) dev;
- np->oom_kick.function = &nv_do_rx_refill; /* timer handler */
- init_timer(&np->nic_poll);
- np->nic_poll.data = (unsigned long) dev;
- np->nic_poll.function = &nv_do_nic_poll; /* timer handler */
- init_timer(&np->stats_poll);
- np->stats_poll.data = (unsigned long) dev;
- np->stats_poll.function = &nv_do_stats_poll; /* timer handler */
-
- err = pci_enable_device(pci_dev);
- if (err) {
- printk(KERN_INFO "forcedeth: pci_enable_dev failed (%d) for device %s\n",
- err, pci_name(pci_dev));
- goto out_free;
- }
-
- pci_set_master(pci_dev);
-
- err = pci_request_regions(pci_dev, DRV_NAME);
- if (err < 0)
- goto out_disable;
-
- if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS))
- np->register_size = NV_PCI_REGSZ_VER2;
- else
- np->register_size = NV_PCI_REGSZ_VER1;
-
- err = -EINVAL;
- addr = 0;
- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- dprintk(KERN_DEBUG "%s: resource %d start %p len %ld flags 0x%08lx.\n",
- pci_name(pci_dev), i, (void*)pci_resource_start(pci_dev, i),
- pci_resource_len(pci_dev, i),
- pci_resource_flags(pci_dev, i));
- if (pci_resource_flags(pci_dev, i) & IORESOURCE_MEM &&
- pci_resource_len(pci_dev, i) >= np->register_size) {
- addr = pci_resource_start(pci_dev, i);
- break;
- }
- }
- if (i == DEVICE_COUNT_RESOURCE) {
- printk(KERN_INFO "forcedeth: Couldn't find register window for device %s.\n",
- pci_name(pci_dev));
- goto out_relreg;
- }
-
- /* copy of driver data */
- np->driver_data = id->driver_data;
-
- /* handle different descriptor versions */
- if (id->driver_data & DEV_HAS_HIGH_DMA) {
- /* packet format 3: supports 40-bit addressing */
- np->desc_ver = DESC_VER_3;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_3;
- if (dma_64bit) {
- if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n",
- pci_name(pci_dev));
- } else {
- dev->features |= NETIF_F_HIGHDMA;
- printk(KERN_INFO "forcedeth: using HIGHDMA\n");
- }
- if (pci_set_consistent_dma_mask(pci_dev, DMA_39BIT_MASK)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA (consistent) failed, using 32-bit ring buffers for device %s.\n",
- pci_name(pci_dev));
- }
- }
- } else if (id->driver_data & DEV_HAS_LARGEDESC) {
- /* packet format 2: supports jumbo frames */
- np->desc_ver = DESC_VER_2;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_2;
- } else {
- /* original packet format */
- np->desc_ver = DESC_VER_1;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_1;
- }
-
- np->pkt_limit = NV_PKTLIMIT_1;
- if (id->driver_data & DEV_HAS_LARGEDESC)
- np->pkt_limit = NV_PKTLIMIT_2;
-
- if (id->driver_data & DEV_HAS_CHECKSUM) {
- np->rx_csum = 1;
- np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
- dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;
-#ifdef NETIF_F_TSO
- dev->features |= NETIF_F_TSO;
-#endif
- }
-
- np->vlanctl_bits = 0;
- if (id->driver_data & DEV_HAS_VLAN) {
- np->vlanctl_bits = NVREG_VLANCONTROL_ENABLE;
- dev->features |= NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX;
- dev->vlan_rx_register = nv_vlan_rx_register;
- dev->vlan_rx_kill_vid = nv_vlan_rx_kill_vid;
- }
-
- np->msi_flags = 0;
- if ((id->driver_data & DEV_HAS_MSI) && msi) {
- np->msi_flags |= NV_MSI_CAPABLE;
- }
- if ((id->driver_data & DEV_HAS_MSI_X) && msix) {
- np->msi_flags |= NV_MSI_X_CAPABLE;
- }
-
- np->pause_flags = NV_PAUSEFRAME_RX_CAPABLE | NV_PAUSEFRAME_RX_REQ | NV_PAUSEFRAME_AUTONEG;
- if (id->driver_data & DEV_HAS_PAUSEFRAME_TX) {
- np->pause_flags |= NV_PAUSEFRAME_TX_CAPABLE | NV_PAUSEFRAME_TX_REQ;
- }
-
-
- err = -ENOMEM;
- np->base = ioremap(addr, np->register_size);
- if (!np->base)
- goto out_relreg;
- dev->base_addr = (unsigned long)np->base;
-
- dev->irq = pci_dev->irq;
-
- np->rx_ring_size = RX_RING_DEFAULT;
- np->tx_ring_size = TX_RING_DEFAULT;
- np->tx_limit_stop = np->tx_ring_size - TX_LIMIT_DIFFERENCE;
- np->tx_limit_start = np->tx_ring_size - TX_LIMIT_DIFFERENCE - 1;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size),
- &np->ring_addr);
- if (!np->rx_ring.orig)
- goto out_unmap;
- np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size];
- } else {
- np->rx_ring.ex = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size),
- &np->ring_addr);
- if (!np->rx_ring.ex)
- goto out_unmap;
- np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size];
- }
- np->rx_skbuff = kmalloc(sizeof(struct sk_buff*) * np->rx_ring_size, GFP_KERNEL);
- np->rx_dma = kmalloc(sizeof(dma_addr_t) * np->rx_ring_size, GFP_KERNEL);
- np->tx_skbuff = kmalloc(sizeof(struct sk_buff*) * np->tx_ring_size, GFP_KERNEL);
- np->tx_dma = kmalloc(sizeof(dma_addr_t) * np->tx_ring_size, GFP_KERNEL);
- np->tx_dma_len = kmalloc(sizeof(unsigned int) * np->tx_ring_size, GFP_KERNEL);
- if (!np->rx_skbuff || !np->rx_dma || !np->tx_skbuff || !np->tx_dma || !np->tx_dma_len)
- goto out_freering;
- memset(np->rx_skbuff, 0, sizeof(struct sk_buff*) * np->rx_ring_size);
- memset(np->rx_dma, 0, sizeof(dma_addr_t) * np->rx_ring_size);
- memset(np->tx_skbuff, 0, sizeof(struct sk_buff*) * np->tx_ring_size);
- memset(np->tx_dma, 0, sizeof(dma_addr_t) * np->tx_ring_size);
- memset(np->tx_dma_len, 0, sizeof(unsigned int) * np->tx_ring_size);
-
- dev->open = nv_open;
- dev->stop = nv_close;
- dev->hard_start_xmit = nv_start_xmit;
- dev->get_stats = nv_get_stats;
- dev->change_mtu = nv_change_mtu;
- dev->set_mac_address = nv_set_mac_address;
- dev->set_multicast_list = nv_set_multicast;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = nv_poll_controller;
-#endif
- dev->weight = 64;
-#ifdef CONFIG_FORCEDETH_NAPI
- dev->poll = nv_napi_poll;
-#endif
- SET_ETHTOOL_OPS(dev, &ops);
- dev->tx_timeout = nv_tx_timeout;
- dev->watchdog_timeo = NV_WATCHDOG_TIMEO;
-
- pci_set_drvdata(pci_dev, dev);
-
- /* read the mac address */
- base = get_hwbase(dev);
- np->orig_mac[0] = readl(base + NvRegMacAddrA);
- np->orig_mac[1] = readl(base + NvRegMacAddrB);
-
- /* check the workaround bit for correct mac address order */
- txreg = readl(base + NvRegTransmitPoll);
- if (txreg & NVREG_TRANSMITPOLL_MAC_ADDR_REV) {
- /* mac address is already in correct order */
- dev->dev_addr[0] = (np->orig_mac[0] >> 0) & 0xff;
- dev->dev_addr[1] = (np->orig_mac[0] >> 8) & 0xff;
- dev->dev_addr[2] = (np->orig_mac[0] >> 16) & 0xff;
- dev->dev_addr[3] = (np->orig_mac[0] >> 24) & 0xff;
- dev->dev_addr[4] = (np->orig_mac[1] >> 0) & 0xff;
- dev->dev_addr[5] = (np->orig_mac[1] >> 8) & 0xff;
- } else {
- /* need to reverse mac address to correct order */
- dev->dev_addr[0] = (np->orig_mac[1] >> 8) & 0xff;
- dev->dev_addr[1] = (np->orig_mac[1] >> 0) & 0xff;
- dev->dev_addr[2] = (np->orig_mac[0] >> 24) & 0xff;
- dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff;
- dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff;
- dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff;
- /* set permanent address to be correct as well */
- np->orig_mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
- (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
- np->orig_mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
- writel(txreg|NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
- }
- memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
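/* Illustrative example (editor's sketch, values assumed): for the MAC
 * address 00:11:22:33:44:55, the "already correct order" case above
 * corresponds to NvRegMacAddrA = 0x33221100 and NvRegMacAddrB = 0x00005544,
 * whereas in the reversed case the registers read NvRegMacAddrA = 0x22334455
 * and NvRegMacAddrB = 0x00000011, which is why the bytes must be swapped
 * and written back via NVREG_TRANSMITPOLL_MAC_ADDR_REV. */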
-
- if (!is_valid_ether_addr(dev->perm_addr)) {
- /*
- * Bad mac address. At least one bios sets the mac address
- * to 01:23:45:67:89:ab
- */
- printk(KERN_ERR "%s: Invalid Mac address detected: %02x:%02x:%02x:%02x:%02x:%02x\n",
- pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
- printk(KERN_ERR "Please complain to your hardware vendor. Switching to a random MAC.\n");
- dev->dev_addr[0] = 0x00;
- dev->dev_addr[1] = 0x00;
- dev->dev_addr[2] = 0x6c;
- get_random_bytes(&dev->dev_addr[3], 3);
- }
-
- dprintk(KERN_DEBUG "%s: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
-
- /* set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* disable WOL */
- writel(0, base + NvRegWakeUpFlags);
- np->wolenabled = 0;
-
- if (id->driver_data & DEV_HAS_POWER_CNTRL) {
- u8 revision_id;
- pci_read_config_byte(pci_dev, PCI_REVISION_ID, &revision_id);
-
- /* take phy and nic out of low power mode */
- powerstate = readl(base + NvRegPowerState2);
- powerstate &= ~NVREG_POWERSTATE2_POWERUP_MASK;
- if ((id->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 ||
- id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) &&
- revision_id >= 0xA3)
- powerstate |= NVREG_POWERSTATE2_POWERUP_REV_A3;
- writel(powerstate, base + NvRegPowerState2);
- }
-
- if (np->desc_ver == DESC_VER_1) {
- np->tx_flags = NV_TX_VALID;
- } else {
- np->tx_flags = NV_TX2_VALID;
- }
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) {
- np->irqmask = NVREG_IRQMASK_THROUGHPUT;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0003;
- } else {
- np->irqmask = NVREG_IRQMASK_CPU;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0001;
- }
-
- if (id->driver_data & DEV_NEED_TIMERIRQ)
- np->irqmask |= NVREG_IRQ_TIMER;
- if (id->driver_data & DEV_NEED_LINKTIMER) {
- dprintk(KERN_INFO "%s: link timer on.\n", pci_name(pci_dev));
- np->need_linktimer = 1;
- np->link_timeout = jiffies + LINK_TIMEOUT;
- } else {
- dprintk(KERN_INFO "%s: link timer off.\n", pci_name(pci_dev));
- np->need_linktimer = 0;
- }
-
- /* find a suitable phy */
- for (i = 1; i <= 32; i++) {
- int id1, id2;
- int phyaddr = i & 0x1F;
-
- spin_lock_irq(&np->lock);
- id1 = mii_rw(dev, phyaddr, MII_PHYSID1, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id1 < 0 || id1 == 0xffff)
- continue;
- spin_lock_irq(&np->lock);
- id2 = mii_rw(dev, phyaddr, MII_PHYSID2, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id2 < 0 || id2 == 0xffff)
- continue;
-
- np->phy_model = id2 & PHYID2_MODEL_MASK;
- id1 = (id1 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT;
- id2 = (id2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT;
- dprintk(KERN_DEBUG "%s: open: Found PHY %04x:%04x at address %d.\n",
- pci_name(pci_dev), id1, id2, phyaddr);
- np->phyaddr = phyaddr;
- np->phy_oui = id1 | id2;
- break;
- }
- if (i == 33) {
- printk(KERN_INFO "%s: open: Could not find a valid PHY.\n",
- pci_name(pci_dev));
- goto out_error;
- }
-
- /* reset it */
- phy_init(dev);
-
- /* set default link speed settings */
- np->linkspeed = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- np->duplex = 0;
- np->autoneg = 1;
-
- // offer device to EtherCAT master module
- np->ecdev = ecdev_offer(dev, ec_poll, THIS_MODULE);
- if (np->ecdev) {
- if (ecdev_open(np->ecdev)) {
- ecdev_withdraw(np->ecdev);
- goto out_error;
- }
- } else {
- err = register_netdev(dev);
- if (err) {
- printk(KERN_INFO "forcedeth: unable to register netdev: %d\n", err);
- goto out_freering;
- }
- }
- printk(KERN_INFO "%s: forcedeth.c: subsystem: %05x:%04x bound to %s\n",
- dev->name, pci_dev->subsystem_vendor, pci_dev->subsystem_device,
- pci_name(pci_dev));
-
- return 0;
-
-out_error:
- pci_set_drvdata(pci_dev, NULL);
-out_freering:
- free_rings(dev);
-out_unmap:
- iounmap(get_hwbase(dev));
-out_relreg:
- pci_release_regions(pci_dev);
-out_disable:
- pci_disable_device(pci_dev);
-out_free:
- free_netdev(dev);
-out:
- return err;
-}
-
-static void __devexit nv_remove(struct pci_dev *pci_dev)
-{
- struct net_device *dev = pci_get_drvdata(pci_dev);
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->ecdev) {
- ecdev_close(np->ecdev);
- ecdev_withdraw(np->ecdev);
- }
- else {
- unregister_netdev(dev);
- }
-
- /* special op: write back the misordered MAC address - otherwise
- * the next nv_probe would see a wrong address.
- */
- writel(np->orig_mac[0], base + NvRegMacAddrA);
- writel(np->orig_mac[1], base + NvRegMacAddrB);
-
- /* free all structures */
- free_rings(dev);
- iounmap(get_hwbase(dev));
- pci_release_regions(pci_dev);
- pci_disable_device(pci_dev);
- free_netdev(dev);
- pci_set_drvdata(pci_dev, NULL);
-}
-
-static struct pci_device_id pci_tbl[] = {
- { /* nForce Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce2 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_2),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_3),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_4),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_5),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_6),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_7),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_8),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_9),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_10),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_11),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_12),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_13),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_14),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_16),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_17),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_18),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_19),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_20),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_21),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_22),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_23),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- {0,},
-};
-
-static struct pci_driver driver = {
- .name = "forcedeth",
- .id_table = pci_tbl,
- .probe = nv_probe,
- .remove = __devexit_p(nv_remove),
-};
-
-
-static int __init init_nic(void)
-{
- printk(KERN_INFO "forcedeth: EtherCAT-capable nForce ethernet driver."
- " Version %s, master %s.\n",
- FORCEDETH_VERSION, EC_MASTER_VERSION);
- return pci_register_driver(&driver);
-}
-
-static void __exit exit_nic(void)
-{
- pci_unregister_driver(&driver);
-}
-
-module_param(max_interrupt_work, int, 0);
-MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt");
-module_param(optimization_mode, int, 0);
-MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer.");
-module_param(poll_interval, int, 0);
-MODULE_PARM_DESC(poll_interval, "Interval determines how frequently the timer interrupt is generated, following [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535.");
-module_param(msi, int, 0);
-MODULE_PARM_DESC(msi, "MSI interrupts are enabled by setting to 1 and disabled by setting to 0.");
-module_param(msix, int, 0);
-MODULE_PARM_DESC(msix, "MSIX interrupts are enabled by setting to 1 and disabled by setting to 0.");
-module_param(dma_64bit, int, 0);
-MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0.");
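/* Usage sketch (editorial; the module name ec_forcedeth is an assumption
 * based on the EtherCAT master's naming scheme, adjust to your build):
 *
 *     modprobe ec_forcedeth optimization_mode=1 poll_interval=97
 *
 * would select CPU mode with a timer interrupt of roughly 1 ms, following
 * the poll_interval formula given above. */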
-
-MODULE_AUTHOR("Dipl.-Ing. (FH) Florian Pose <fp@igh-essen.com>");
-MODULE_DESCRIPTION("EtherCAT-capable nForce ethernet driver");
-MODULE_LICENSE("GPL");
-
-//MODULE_DEVICE_TABLE(pci, pci_tbl); // prevent auto-loading
-
-module_init(init_nic);
-module_exit(exit_nic);
--- a/devices/forcedeth-2.6.19-orig.c Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,4744 +0,0 @@
-/*
- * forcedeth: Ethernet driver for NVIDIA nForce media access controllers.
- *
- * Note: This driver is a cleanroom reimplementation based on reverse
- * engineered documentation written by Carl-Daniel Hailfinger
- * and Andrew de Quincey. It's neither supported nor endorsed
- * by NVIDIA Corp. Use at your own risk.
- *
- * NVIDIA, nForce and other NVIDIA marks are trademarks or registered
- * trademarks of NVIDIA Corporation in the United States and other
- * countries.
- *
- * Copyright (C) 2003,4,5 Manfred Spraul
- * Copyright (C) 2004 Andrew de Quincey (wol support)
- * Copyright (C) 2004 Carl-Daniel Hailfinger (invalid MAC handling, insane
- * IRQ rate fixes, bigendian fixes, cleanups, verification)
- * Copyright (c) 2004 NVIDIA Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
- * Changelog:
- * 0.01: 05 Oct 2003: First release that compiles without warnings.
- * 0.02: 05 Oct 2003: Fix bug for nv_drain_tx: do not try to free NULL skbs.
- * Check all PCI BARs for the register window.
- * udelay added to mii_rw.
- * 0.03: 06 Oct 2003: Initialize dev->irq.
- * 0.04: 07 Oct 2003: Initialize np->lock, reduce handled irqs, add printks.
- * 0.05: 09 Oct 2003: printk removed again, irq status print tx_timeout.
- * 0.06: 10 Oct 2003: MAC Address read updated, pff flag generation updated,
- * irq mask updated
- * 0.07: 14 Oct 2003: Further irq mask updates.
- * 0.08: 20 Oct 2003: rx_desc.Length initialization added, nv_alloc_rx refill
- * added into irq handler, NULL check for drain_ring.
- * 0.09: 20 Oct 2003: Basic link speed irq implementation. Only handle the
- * requested interrupt sources.
- * 0.10: 20 Oct 2003: First cleanup for release.
- * 0.11: 21 Oct 2003: hexdump for tx added, rx buffer sizes increased.
- * MAC Address init fix, set_multicast cleanup.
- * 0.12: 23 Oct 2003: Cleanups for release.
- * 0.13: 25 Oct 2003: Limit for concurrent tx packets increased to 10.
- * Set link speed correctly. start rx before starting
- * tx (nv_start_rx sets the link speed).
- * 0.14: 25 Oct 2003: Nic dependent irq mask.
- * 0.15: 08 Nov 2003: fix smp deadlock with set_multicast_list during
- * open.
- * 0.16: 15 Nov 2003: include file cleanup for ppc64, rx buffer size
- * increased to 1628 bytes.
- * 0.17: 16 Nov 2003: undo rx buffer size increase. Subtract 1 from
- * the tx length.
- * 0.18: 17 Nov 2003: fix oops due to late initialization of dev_stats
- * 0.19: 29 Nov 2003: Handle RxNoBuf, detect & handle invalid mac
- * addresses, really stop rx if already running
- * in nv_start_rx, clean up a bit.
- * 0.20: 07 Dec 2003: alloc fixes
- * 0.21: 12 Jan 2004: additional alloc fix, nic polling fix.
- * 0.22: 19 Jan 2004: reprogram timer to a sane rate, avoid lockup
- * on close.
- * 0.23: 26 Jan 2004: various small cleanups
- * 0.24: 27 Feb 2004: make driver even less anonymous in backtraces
- * 0.25: 09 Mar 2004: wol support
- * 0.26: 03 Jun 2004: netdriver specific annotation, sparse-related fixes
- * 0.27: 19 Jun 2004: Gigabit support, new descriptor rings,
- * added CK804/MCP04 device IDs, code fixes
- * for registers, link status and other minor fixes.
- * 0.28: 21 Jun 2004: Big cleanup, making driver mostly endian safe
- * 0.29: 31 Aug 2004: Add backup timer for link change notification.
- * 0.30: 25 Sep 2004: rx checksum support for nf 250 Gb. Add rx reset
- * into nv_close, otherwise reenabling for wol can
- * cause DMA to kfree'd memory.
- * 0.31: 14 Nov 2004: ethtool support for getting/setting link
- * capabilities.
- * 0.32: 16 Apr 2005: RX_ERROR4 handling added.
- * 0.33: 16 May 2005: Support for MCP51 added.
- * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics.
- * 0.35: 26 Jun 2005: Support for MCP55 added.
- * 0.36: 28 Jun 2005: Add jumbo frame support.
- * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list
- * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of
- * per-packet flags.
- * 0.39: 18 Jul 2005: Add 64bit descriptor support.
- * 0.40: 19 Jul 2005: Add support for mac address change.
- * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead
- * of nv_remove
- * 0.42: 06 Aug 2005: Fix lack of link speed initialization
- * in the second (and later) nv_open call
- * 0.43: 10 Aug 2005: Add support for tx checksum.
- * 0.44: 20 Aug 2005: Add support for scatter gather and segmentation.
- * 0.45: 18 Sep 2005: Remove nv_stop/start_rx from every link check
- * 0.46: 20 Oct 2005: Add irq optimization modes.
- * 0.47: 26 Oct 2005: Add phyaddr 0 in phy scan.
- * 0.48: 24 Dec 2005: Disable TSO, bugfix for pci_map_single
- * 0.49: 10 Dec 2005: Fix tso for large buffers.
- * 0.50: 20 Jan 2006: Add 8021pq tagging support.
- * 0.51: 20 Jan 2006: Add 64bit consistent memory allocation for rings.
- * 0.52: 20 Jan 2006: Add MSI/MSIX support.
- * 0.53: 19 Mar 2006: Fix init from low power mode and add hw reset.
- * 0.54: 21 Mar 2006: Fix spin locks for multi irqs and cleanup.
- * 0.55: 22 Mar 2006: Add flow control (pause frame).
- * 0.56: 22 Mar 2006: Additional ethtool config and moduleparam support.
- * 0.57: 14 May 2006: Mac address set in probe/remove and order corrections.
- *
- * Known bugs:
- * We suspect that on some hardware no TX done interrupts are generated.
- * This means recovery from netif_stop_queue only happens if the hw timer
- * interrupt fires (100 times/second, configurable with NVREG_POLL_DEFAULT)
- * and the timer is active in the IRQMask, or if a rx packet arrives by chance.
- * If your hardware reliably generates tx done interrupts, then you can remove
- * DEV_NEED_TIMERIRQ from the driver_data flags.
- * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few
- * superfluous timer interrupts from the nic.
- */
-#ifdef CONFIG_FORCEDETH_NAPI
-#define DRIVERNAPI "-NAPI"
-#else
-#define DRIVERNAPI
-#endif
-#define FORCEDETH_VERSION "0.57"
-#define DRV_NAME "forcedeth"
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/delay.h>
-#include <linux/spinlock.h>
-#include <linux/ethtool.h>
-#include <linux/timer.h>
-#include <linux/skbuff.h>
-#include <linux/mii.h>
-#include <linux/random.h>
-#include <linux/init.h>
-#include <linux/if_vlan.h>
-#include <linux/dma-mapping.h>
-
-#include <asm/irq.h>
-#include <asm/io.h>
-#include <asm/uaccess.h>
-#include <asm/system.h>
-
-#if 0
-#define dprintk printk
-#else
-#define dprintk(x...) do { } while (0)
-#endif
-
-
-/*
- * Hardware access:
- */
-
-#define DEV_NEED_TIMERIRQ 0x0001 /* set the timer irq flag in the irq mask */
-#define DEV_NEED_LINKTIMER 0x0002 /* poll link settings. Relies on the timer irq */
-#define DEV_HAS_LARGEDESC 0x0004 /* device supports jumbo frames and needs packet format 2 */
-#define DEV_HAS_HIGH_DMA 0x0008 /* device supports 64bit dma */
-#define DEV_HAS_CHECKSUM 0x0010 /* device supports tx and rx checksum offloads */
-#define DEV_HAS_VLAN 0x0020 /* device supports vlan tagging and striping */
-#define DEV_HAS_MSI 0x0040 /* device supports MSI */
-#define DEV_HAS_MSI_X 0x0080 /* device supports MSI-X */
-#define DEV_HAS_POWER_CNTRL 0x0100 /* device supports power savings */
-#define DEV_HAS_PAUSEFRAME_TX 0x0200 /* device supports tx pause frames */
-#define DEV_HAS_STATISTICS 0x0400 /* device supports hw statistics */
-#define DEV_HAS_TEST_EXTENDED 0x0800 /* device supports extended diagnostic test */
-
-enum {
- NvRegIrqStatus = 0x000,
-#define NVREG_IRQSTAT_MIIEVENT 0x040
-#define NVREG_IRQSTAT_MASK 0x1ff
- NvRegIrqMask = 0x004,
-#define NVREG_IRQ_RX_ERROR 0x0001
-#define NVREG_IRQ_RX 0x0002
-#define NVREG_IRQ_RX_NOBUF 0x0004
-#define NVREG_IRQ_TX_ERR 0x0008
-#define NVREG_IRQ_TX_OK 0x0010
-#define NVREG_IRQ_TIMER 0x0020
-#define NVREG_IRQ_LINK 0x0040
-#define NVREG_IRQ_RX_FORCED 0x0080
-#define NVREG_IRQ_TX_FORCED 0x0100
-#define NVREG_IRQMASK_THROUGHPUT 0x00df
-#define NVREG_IRQMASK_CPU 0x0040
-#define NVREG_IRQ_TX_ALL (NVREG_IRQ_TX_ERR|NVREG_IRQ_TX_OK|NVREG_IRQ_TX_FORCED)
-#define NVREG_IRQ_RX_ALL (NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_RX_FORCED)
-#define NVREG_IRQ_OTHER (NVREG_IRQ_TIMER|NVREG_IRQ_LINK)
-
-#define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \
- NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \
- NVREG_IRQ_TX_FORCED))
-
- NvRegUnknownSetupReg6 = 0x008,
-#define NVREG_UNKSETUP6_VAL 3
-
-/*
- * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic
- * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms
- */
- NvRegPollingInterval = 0x00c,
-#define NVREG_POLL_DEFAULT_THROUGHPUT 970
-#define NVREG_POLL_DEFAULT_CPU 13
- NvRegMSIMap0 = 0x020,
- NvRegMSIMap1 = 0x024,
- NvRegMSIIrqMask = 0x030,
-#define NVREG_MSI_VECTOR_0_ENABLED 0x01
- NvRegMisc1 = 0x080,
-#define NVREG_MISC1_PAUSE_TX 0x01
-#define NVREG_MISC1_HD 0x02
-#define NVREG_MISC1_FORCE 0x3b0f3c
-
- NvRegMacReset = 0x3c,
-#define NVREG_MAC_RESET_ASSERT 0x0F3
- NvRegTransmitterControl = 0x084,
-#define NVREG_XMITCTL_START 0x01
- NvRegTransmitterStatus = 0x088,
-#define NVREG_XMITSTAT_BUSY 0x01
-
- NvRegPacketFilterFlags = 0x8c,
-#define NVREG_PFF_PAUSE_RX 0x08
-#define NVREG_PFF_ALWAYS 0x7F0000
-#define NVREG_PFF_PROMISC 0x80
-#define NVREG_PFF_MYADDR 0x20
-#define NVREG_PFF_LOOPBACK 0x10
-
- NvRegOffloadConfig = 0x90,
-#define NVREG_OFFLOAD_HOMEPHY 0x601
-#define NVREG_OFFLOAD_NORMAL RX_NIC_BUFSIZE
- NvRegReceiverControl = 0x094,
-#define NVREG_RCVCTL_START 0x01
- NvRegReceiverStatus = 0x98,
-#define NVREG_RCVSTAT_BUSY 0x01
-
- NvRegRandomSeed = 0x9c,
-#define NVREG_RNDSEED_MASK 0x00ff
-#define NVREG_RNDSEED_FORCE 0x7f00
-#define NVREG_RNDSEED_FORCE2 0x2d00
-#define NVREG_RNDSEED_FORCE3 0x7400
-
- NvRegTxDeferral = 0xA0,
-#define NVREG_TX_DEFERRAL_DEFAULT 0x15050f
-#define NVREG_TX_DEFERRAL_RGMII_10_100 0x16070f
-#define NVREG_TX_DEFERRAL_RGMII_1000 0x14050f
- NvRegRxDeferral = 0xA4,
-#define NVREG_RX_DEFERRAL_DEFAULT 0x16
- NvRegMacAddrA = 0xA8,
- NvRegMacAddrB = 0xAC,
- NvRegMulticastAddrA = 0xB0,
-#define NVREG_MCASTADDRA_FORCE 0x01
- NvRegMulticastAddrB = 0xB4,
- NvRegMulticastMaskA = 0xB8,
- NvRegMulticastMaskB = 0xBC,
-
- NvRegPhyInterface = 0xC0,
-#define PHY_RGMII 0x10000000
-
- NvRegTxRingPhysAddr = 0x100,
- NvRegRxRingPhysAddr = 0x104,
- NvRegRingSizes = 0x108,
-#define NVREG_RINGSZ_TXSHIFT 0
-#define NVREG_RINGSZ_RXSHIFT 16
- NvRegTransmitPoll = 0x10c,
-#define NVREG_TRANSMITPOLL_MAC_ADDR_REV 0x00008000
- NvRegLinkSpeed = 0x110,
-#define NVREG_LINKSPEED_FORCE 0x10000
-#define NVREG_LINKSPEED_10 1000
-#define NVREG_LINKSPEED_100 100
-#define NVREG_LINKSPEED_1000 50
-#define NVREG_LINKSPEED_MASK (0xFFF)
- NvRegUnknownSetupReg5 = 0x130,
-#define NVREG_UNKSETUP5_BIT31 (1<<31)
- NvRegTxWatermark = 0x13c,
-#define NVREG_TX_WM_DESC1_DEFAULT 0x0200010
-#define NVREG_TX_WM_DESC2_3_DEFAULT 0x1e08000
-#define NVREG_TX_WM_DESC2_3_1000 0xfe08000
- NvRegTxRxControl = 0x144,
-#define NVREG_TXRXCTL_KICK 0x0001
-#define NVREG_TXRXCTL_BIT1 0x0002
-#define NVREG_TXRXCTL_BIT2 0x0004
-#define NVREG_TXRXCTL_IDLE 0x0008
-#define NVREG_TXRXCTL_RESET 0x0010
-#define NVREG_TXRXCTL_RXCHECK 0x0400
-#define NVREG_TXRXCTL_DESC_1 0
-#define NVREG_TXRXCTL_DESC_2 0x02100
-#define NVREG_TXRXCTL_DESC_3 0x02200
-#define NVREG_TXRXCTL_VLANSTRIP 0x00040
-#define NVREG_TXRXCTL_VLANINS 0x00080
- NvRegTxRingPhysAddrHigh = 0x148,
- NvRegRxRingPhysAddrHigh = 0x14C,
- NvRegTxPauseFrame = 0x170,
-#define NVREG_TX_PAUSEFRAME_DISABLE 0x1ff0080
-#define NVREG_TX_PAUSEFRAME_ENABLE 0x0c00030
- NvRegMIIStatus = 0x180,
-#define NVREG_MIISTAT_ERROR 0x0001
-#define NVREG_MIISTAT_LINKCHANGE 0x0008
-#define NVREG_MIISTAT_MASK 0x000f
-#define NVREG_MIISTAT_MASK2 0x000f
- NvRegUnknownSetupReg4 = 0x184,
-#define NVREG_UNKSETUP4_VAL 8
-
- NvRegAdapterControl = 0x188,
-#define NVREG_ADAPTCTL_START 0x02
-#define NVREG_ADAPTCTL_LINKUP 0x04
-#define NVREG_ADAPTCTL_PHYVALID 0x40000
-#define NVREG_ADAPTCTL_RUNNING 0x100000
-#define NVREG_ADAPTCTL_PHYSHIFT 24
- NvRegMIISpeed = 0x18c,
-#define NVREG_MIISPEED_BIT8 (1<<8)
-#define NVREG_MIIDELAY 5
- NvRegMIIControl = 0x190,
-#define NVREG_MIICTL_INUSE 0x08000
-#define NVREG_MIICTL_WRITE 0x00400
-#define NVREG_MIICTL_ADDRSHIFT 5
- NvRegMIIData = 0x194,
- NvRegWakeUpFlags = 0x200,
-#define NVREG_WAKEUPFLAGS_VAL 0x7770
-#define NVREG_WAKEUPFLAGS_BUSYSHIFT 24
-#define NVREG_WAKEUPFLAGS_ENABLESHIFT 16
-#define NVREG_WAKEUPFLAGS_D3SHIFT 12
-#define NVREG_WAKEUPFLAGS_D2SHIFT 8
-#define NVREG_WAKEUPFLAGS_D1SHIFT 4
-#define NVREG_WAKEUPFLAGS_D0SHIFT 0
-#define NVREG_WAKEUPFLAGS_ACCEPT_MAGPAT 0x01
-#define NVREG_WAKEUPFLAGS_ACCEPT_WAKEUPPAT 0x02
-#define NVREG_WAKEUPFLAGS_ACCEPT_LINKCHANGE 0x04
-#define NVREG_WAKEUPFLAGS_ENABLE 0x1111
-
- NvRegPatternCRC = 0x204,
- NvRegPatternMask = 0x208,
- NvRegPowerCap = 0x268,
-#define NVREG_POWERCAP_D3SUPP (1<<30)
-#define NVREG_POWERCAP_D2SUPP (1<<26)
-#define NVREG_POWERCAP_D1SUPP (1<<25)
- NvRegPowerState = 0x26c,
-#define NVREG_POWERSTATE_POWEREDUP 0x8000
-#define NVREG_POWERSTATE_VALID 0x0100
-#define NVREG_POWERSTATE_MASK 0x0003
-#define NVREG_POWERSTATE_D0 0x0000
-#define NVREG_POWERSTATE_D1 0x0001
-#define NVREG_POWERSTATE_D2 0x0002
-#define NVREG_POWERSTATE_D3 0x0003
- NvRegTxCnt = 0x280,
- NvRegTxZeroReXmt = 0x284,
- NvRegTxOneReXmt = 0x288,
- NvRegTxManyReXmt = 0x28c,
- NvRegTxLateCol = 0x290,
- NvRegTxUnderflow = 0x294,
- NvRegTxLossCarrier = 0x298,
- NvRegTxExcessDef = 0x29c,
- NvRegTxRetryErr = 0x2a0,
- NvRegRxFrameErr = 0x2a4,
- NvRegRxExtraByte = 0x2a8,
- NvRegRxLateCol = 0x2ac,
- NvRegRxRunt = 0x2b0,
- NvRegRxFrameTooLong = 0x2b4,
- NvRegRxOverflow = 0x2b8,
- NvRegRxFCSErr = 0x2bc,
- NvRegRxFrameAlignErr = 0x2c0,
- NvRegRxLenErr = 0x2c4,
- NvRegRxUnicast = 0x2c8,
- NvRegRxMulticast = 0x2cc,
- NvRegRxBroadcast = 0x2d0,
- NvRegTxDef = 0x2d4,
- NvRegTxFrame = 0x2d8,
- NvRegRxCnt = 0x2dc,
- NvRegTxPause = 0x2e0,
- NvRegRxPause = 0x2e4,
- NvRegRxDropFrame = 0x2e8,
- NvRegVlanControl = 0x300,
-#define NVREG_VLANCONTROL_ENABLE 0x2000
- NvRegMSIXMap0 = 0x3e0,
- NvRegMSIXMap1 = 0x3e4,
- NvRegMSIXIrqStatus = 0x3f0,
-
- NvRegPowerState2 = 0x600,
-#define NVREG_POWERSTATE2_POWERUP_MASK 0x0F11
-#define NVREG_POWERSTATE2_POWERUP_REV_A3 0x0001
-};
-
-/* Big endian: should work, but is untested */
-struct ring_desc {
- __le32 buf;
- __le32 flaglen;
-};
-
-struct ring_desc_ex {
- __le32 bufhigh;
- __le32 buflow;
- __le32 txvlan;
- __le32 flaglen;
-};
-
-union ring_type {
- struct ring_desc* orig;
- struct ring_desc_ex* ex;
-};
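/* Editorial sketch (not in the original source): with the 64-bit descriptor
 * layout (struct ring_desc_ex, DESC_VER_3) a DMA address is split across the
 * two 32-bit fields, roughly like this:
 *
 *     ring.ex[i].bufhigh = cpu_to_le32((u32)(addr >> 32));
 *     ring.ex[i].buflow  = cpu_to_le32((u32)(addr & 0xffffffff));
 *
 * See setup_hw_rings() and the tx/rx descriptor setup for the in-driver
 * variants of this split. */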
-
-#define FLAG_MASK_V1 0xffff0000
-#define FLAG_MASK_V2 0xffffc000
-#define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1)
-#define LEN_MASK_V2 (0xffffffff ^ FLAG_MASK_V2)
-
-#define NV_TX_LASTPACKET (1<<16)
-#define NV_TX_RETRYERROR (1<<19)
-#define NV_TX_FORCED_INTERRUPT (1<<24)
-#define NV_TX_DEFERRED (1<<26)
-#define NV_TX_CARRIERLOST (1<<27)
-#define NV_TX_LATECOLLISION (1<<28)
-#define NV_TX_UNDERFLOW (1<<29)
-#define NV_TX_ERROR (1<<30)
-#define NV_TX_VALID (1<<31)
-
-#define NV_TX2_LASTPACKET (1<<29)
-#define NV_TX2_RETRYERROR (1<<18)
-#define NV_TX2_FORCED_INTERRUPT (1<<30)
-#define NV_TX2_DEFERRED (1<<25)
-#define NV_TX2_CARRIERLOST (1<<26)
-#define NV_TX2_LATECOLLISION (1<<27)
-#define NV_TX2_UNDERFLOW (1<<28)
-/* error and valid are the same for both */
-#define NV_TX2_ERROR (1<<30)
-#define NV_TX2_VALID (1<<31)
-#define NV_TX2_TSO (1<<28)
-#define NV_TX2_TSO_SHIFT 14
-#define NV_TX2_TSO_MAX_SHIFT 14
-#define NV_TX2_TSO_MAX_SIZE (1<<NV_TX2_TSO_MAX_SHIFT)
-#define NV_TX2_CHECKSUM_L3 (1<<27)
-#define NV_TX2_CHECKSUM_L4 (1<<26)
-
-#define NV_TX3_VLAN_TAG_PRESENT (1<<18)
-
-#define NV_RX_DESCRIPTORVALID (1<<16)
-#define NV_RX_MISSEDFRAME (1<<17)
-#define NV_RX_SUBSTRACT1 (1<<18)
-#define NV_RX_ERROR1 (1<<23)
-#define NV_RX_ERROR2 (1<<24)
-#define NV_RX_ERROR3 (1<<25)
-#define NV_RX_ERROR4 (1<<26)
-#define NV_RX_CRCERR (1<<27)
-#define NV_RX_OVERFLOW (1<<28)
-#define NV_RX_FRAMINGERR (1<<29)
-#define NV_RX_ERROR (1<<30)
-#define NV_RX_AVAIL (1<<31)
-
-#define NV_RX2_CHECKSUMMASK (0x1C000000)
-#define NV_RX2_CHECKSUMOK1 (0x10000000)
-#define NV_RX2_CHECKSUMOK2 (0x14000000)
-#define NV_RX2_CHECKSUMOK3 (0x18000000)
-#define NV_RX2_DESCRIPTORVALID (1<<29)
-#define NV_RX2_SUBSTRACT1 (1<<25)
-#define NV_RX2_ERROR1 (1<<18)
-#define NV_RX2_ERROR2 (1<<19)
-#define NV_RX2_ERROR3 (1<<20)
-#define NV_RX2_ERROR4 (1<<21)
-#define NV_RX2_CRCERR (1<<22)
-#define NV_RX2_OVERFLOW (1<<23)
-#define NV_RX2_FRAMINGERR (1<<24)
-/* error and avail are the same for both */
-#define NV_RX2_ERROR (1<<30)
-#define NV_RX2_AVAIL (1<<31)
-
-#define NV_RX3_VLAN_TAG_PRESENT (1<<16)
-#define NV_RX3_VLAN_TAG_MASK (0x0000FFFF)
-
-/* Miscellaneous hardware related defines: */
-#define NV_PCI_REGSZ_VER1 0x270
-#define NV_PCI_REGSZ_VER2 0x604
-
-/* various timeout delays: all in usec */
-#define NV_TXRX_RESET_DELAY 4
-#define NV_TXSTOP_DELAY1 10
-#define NV_TXSTOP_DELAY1MAX 500000
-#define NV_TXSTOP_DELAY2 100
-#define NV_RXSTOP_DELAY1 10
-#define NV_RXSTOP_DELAY1MAX 500000
-#define NV_RXSTOP_DELAY2 100
-#define NV_SETUP5_DELAY 5
-#define NV_SETUP5_DELAYMAX 50000
-#define NV_POWERUP_DELAY 5
-#define NV_POWERUP_DELAYMAX 5000
-#define NV_MIIBUSY_DELAY 50
-#define NV_MIIPHY_DELAY 10
-#define NV_MIIPHY_DELAYMAX 10000
-#define NV_MAC_RESET_DELAY 64
-
-#define NV_WAKEUPPATTERNS 5
-#define NV_WAKEUPMASKENTRIES 4
-
-/* General driver defaults */
-#define NV_WATCHDOG_TIMEO (5*HZ)
-
-#define RX_RING_DEFAULT 128
-#define TX_RING_DEFAULT 256
-#define RX_RING_MIN 128
-#define TX_RING_MIN 64
-#define RING_MAX_DESC_VER_1 1024
-#define RING_MAX_DESC_VER_2_3 16384
-/*
- * Difference between the get and put pointers for the tx ring.
- * This is used to throttle the amount of data outstanding in the
- * tx ring.
- */
-#define TX_LIMIT_DIFFERENCE 1
-
-/* rx/tx mac addr + type + vlan + align + slack*/
-#define NV_RX_HEADERS (64)
-/* even more slack. */
-#define NV_RX_ALLOC_PAD (64)
-
-/* maximum mtu size */
-#define NV_PKTLIMIT_1 ETH_DATA_LEN /* hard limit not known */
-#define NV_PKTLIMIT_2 9100 /* Actual limit according to NVidia: 9202 */
-
-#define OOM_REFILL (1+HZ/20)
-#define POLL_WAIT (1+HZ/100)
-#define LINK_TIMEOUT (3*HZ)
-#define STATS_INTERVAL (10*HZ)
-
-/*
- * desc_ver values:
- * The nic supports three different descriptor types:
- * - DESC_VER_1: Original
- * - DESC_VER_2: support for jumbo frames.
- * - DESC_VER_3: 64-bit format.
- */
-#define DESC_VER_1 1
-#define DESC_VER_2 2
-#define DESC_VER_3 3
-
-/* PHY defines */
-#define PHY_OUI_MARVELL 0x5043
-#define PHY_OUI_CICADA 0x03f1
-#define PHYID1_OUI_MASK 0x03ff
-#define PHYID1_OUI_SHFT 6
-#define PHYID2_OUI_MASK 0xfc00
-#define PHYID2_OUI_SHFT 10
-#define PHYID2_MODEL_MASK 0x03f0
-#define PHY_MODEL_MARVELL_E3016 0x220
-#define PHY_MARVELL_E3016_INITMASK 0x0300
-#define PHY_INIT1 0x0f000
-#define PHY_INIT2 0x0e00
-#define PHY_INIT3 0x01000
-#define PHY_INIT4 0x0200
-#define PHY_INIT5 0x0004
-#define PHY_INIT6 0x02000
-#define PHY_GIGABIT 0x0100
-
-#define PHY_TIMEOUT 0x1
-#define PHY_ERROR 0x2
-
-#define PHY_100 0x1
-#define PHY_1000 0x2
-#define PHY_HALF 0x100
-
-#define NV_PAUSEFRAME_RX_CAPABLE 0x0001
-#define NV_PAUSEFRAME_TX_CAPABLE 0x0002
-#define NV_PAUSEFRAME_RX_ENABLE 0x0004
-#define NV_PAUSEFRAME_TX_ENABLE 0x0008
-#define NV_PAUSEFRAME_RX_REQ 0x0010
-#define NV_PAUSEFRAME_TX_REQ 0x0020
-#define NV_PAUSEFRAME_AUTONEG 0x0040
-
-/* MSI/MSI-X defines */
-#define NV_MSI_X_MAX_VECTORS 8
-#define NV_MSI_X_VECTORS_MASK 0x000f
-#define NV_MSI_CAPABLE 0x0010
-#define NV_MSI_X_CAPABLE 0x0020
-#define NV_MSI_ENABLED 0x0040
-#define NV_MSI_X_ENABLED 0x0080
-
-#define NV_MSI_X_VECTOR_ALL 0x0
-#define NV_MSI_X_VECTOR_RX 0x0
-#define NV_MSI_X_VECTOR_TX 0x1
-#define NV_MSI_X_VECTOR_OTHER 0x2
-
-/* statistics */
-struct nv_ethtool_str {
- char name[ETH_GSTRING_LEN];
-};
-
-static const struct nv_ethtool_str nv_estats_str[] = {
- { "tx_bytes" },
- { "tx_zero_rexmt" },
- { "tx_one_rexmt" },
- { "tx_many_rexmt" },
- { "tx_late_collision" },
- { "tx_fifo_errors" },
- { "tx_carrier_errors" },
- { "tx_excess_deferral" },
- { "tx_retry_error" },
- { "tx_deferral" },
- { "tx_packets" },
- { "tx_pause" },
- { "rx_frame_error" },
- { "rx_extra_byte" },
- { "rx_late_collision" },
- { "rx_runt" },
- { "rx_frame_too_long" },
- { "rx_over_errors" },
- { "rx_crc_errors" },
- { "rx_frame_align_error" },
- { "rx_length_error" },
- { "rx_unicast" },
- { "rx_multicast" },
- { "rx_broadcast" },
- { "rx_bytes" },
- { "rx_pause" },
- { "rx_drop_frame" },
- { "rx_packets" },
- { "rx_errors_total" }
-};
-
-struct nv_ethtool_stats {
- u64 tx_bytes;
- u64 tx_zero_rexmt;
- u64 tx_one_rexmt;
- u64 tx_many_rexmt;
- u64 tx_late_collision;
- u64 tx_fifo_errors;
- u64 tx_carrier_errors;
- u64 tx_excess_deferral;
- u64 tx_retry_error;
- u64 tx_deferral;
- u64 tx_packets;
- u64 tx_pause;
- u64 rx_frame_error;
- u64 rx_extra_byte;
- u64 rx_late_collision;
- u64 rx_runt;
- u64 rx_frame_too_long;
- u64 rx_over_errors;
- u64 rx_crc_errors;
- u64 rx_frame_align_error;
- u64 rx_length_error;
- u64 rx_unicast;
- u64 rx_multicast;
- u64 rx_broadcast;
- u64 rx_bytes;
- u64 rx_pause;
- u64 rx_drop_frame;
- u64 rx_packets;
- u64 rx_errors_total;
-};
-
-/* diagnostics */
-#define NV_TEST_COUNT_BASE 3
-#define NV_TEST_COUNT_EXTENDED 4
-
-static const struct nv_ethtool_str nv_etests_str[] = {
- { "link (online/offline)" },
- { "register (offline) " },
- { "interrupt (offline) " },
- { "loopback (offline) " }
-};
-
-struct register_test {
- __le32 reg;
- __le32 mask;
-};
-
-static const struct register_test nv_registers_test[] = {
- { NvRegUnknownSetupReg6, 0x01 },
- { NvRegMisc1, 0x03c },
- { NvRegOffloadConfig, 0x03ff },
- { NvRegMulticastAddrA, 0xffffffff },
- { NvRegTxWatermark, 0x0ff },
- { NvRegWakeUpFlags, 0x07777 },
- { 0,0 }
-};
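/* Editorial note: nv_register_test() walks this table until the {0,0}
 * terminator, XORs each register's current value with the mask to toggle the
 * testable bits, writes it back, verifies that the toggled bits read back as
 * written, and finally restores the original value. */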
-
-/*
- * SMP locking:
- * All hardware access under dev->priv->lock, except the performance
- * critical parts:
- * - rx is (pseudo-) lockless: it relies on the single-threading provided
- * by the arch code for interrupts.
- * - tx setup is lockless: it relies on netif_tx_lock. Actual submission
- * needs dev->priv->lock :-(
- * - set_multicast_list: preparation lockless, relies on netif_tx_lock.
- */
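/* Editorial sketch of the convention above (not part of the original file):
 * slow-path register accesses take the per-device lock with IRQs disabled,
 * e.g.
 *
 *     spin_lock_irq(&np->lock);
 *     writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
 *     spin_unlock_irq(&np->lock);
 *
 * as nv_vlan_rx_register() does when updating the VLAN control bits. */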
-
-/* in dev: base, irq */
-struct fe_priv {
- spinlock_t lock;
-
- /* General data:
- * Locking: spin_lock(&np->lock); */
- struct net_device_stats stats;
- struct nv_ethtool_stats estats;
- int in_shutdown;
- u32 linkspeed;
- int duplex;
- int autoneg;
- int fixed_mode;
- int phyaddr;
- int wolenabled;
- unsigned int phy_oui;
- unsigned int phy_model;
- u16 gigabit;
- int intr_test;
-
- /* General data: RO fields */
- dma_addr_t ring_addr;
- struct pci_dev *pci_dev;
- u32 orig_mac[2];
- u32 irqmask;
- u32 desc_ver;
- u32 txrxctl_bits;
- u32 vlanctl_bits;
- u32 driver_data;
- u32 register_size;
- int rx_csum;
-
- void __iomem *base;
-
- /* rx specific fields.
- * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- union ring_type rx_ring;
- unsigned int cur_rx, refill_rx;
- struct sk_buff **rx_skbuff;
- dma_addr_t *rx_dma;
- unsigned int rx_buf_sz;
- unsigned int pkt_limit;
- struct timer_list oom_kick;
- struct timer_list nic_poll;
- struct timer_list stats_poll;
- u32 nic_poll_irq;
- int rx_ring_size;
-
- /* media detection workaround.
- * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
- */
- int need_linktimer;
- unsigned long link_timeout;
- /*
- * tx specific fields.
- */
- union ring_type tx_ring;
- unsigned int next_tx, nic_tx;
- struct sk_buff **tx_skbuff;
- dma_addr_t *tx_dma;
- unsigned int *tx_dma_len;
- u32 tx_flags;
- int tx_ring_size;
- int tx_limit_start;
- int tx_limit_stop;
-
- /* vlan fields */
- struct vlan_group *vlangrp;
-
- /* msi/msi-x fields */
- u32 msi_flags;
- struct msix_entry msi_x_entry[NV_MSI_X_MAX_VECTORS];
-
- /* flow control */
- u32 pause_flags;
-};
-
-/*
- * Maximum number of loops until we assume that a bit in the irq mask
- * is stuck. Overridable with module param.
- */
-static int max_interrupt_work = 5;
-
-/*
- * Optimization can be either throughput mode or cpu mode
- *
- * Throughput Mode: Every tx and rx packet will generate an interrupt.
- * CPU Mode: Interrupts are controlled by a timer.
- */
-enum {
- NV_OPTIMIZATION_MODE_THROUGHPUT,
- NV_OPTIMIZATION_MODE_CPU
-};
-static int optimization_mode = NV_OPTIMIZATION_MODE_THROUGHPUT;
-
-/*
- * Poll interval for timer irq
- *
- * This interval determines how frequently an interrupt is generated.
- * The register value is determined by [(time_in_micro_secs * 100) / (2^10)]
- * Min = 0, and Max = 65535
- */
-static int poll_interval = -1;
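As a rough sketch of the scaling described in the comment above (the helper name below is hypothetical and not part of the driver), the register value is simply the desired period in microseconds scaled by 100/1024 and clamped to the documented 0..65535 range:

static inline u32 poll_interval_from_us(u32 us)
{
	/* register value = (time_in_micro_secs * 100) / 2^10, clamped to 0..65535 */
	u32 val = (us * 100) >> 10;

	return val > 65535 ? 65535 : val;
}

For example, a timer period of roughly 1 ms gives (1000 * 100) / 1024 = 97.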
-
-/*
- * MSI interrupts
- */
-enum {
- NV_MSI_INT_DISABLED,
- NV_MSI_INT_ENABLED
-};
-static int msi = NV_MSI_INT_ENABLED;
-
-/*
- * MSIX interrupts
- */
-enum {
- NV_MSIX_INT_DISABLED,
- NV_MSIX_INT_ENABLED
-};
-static int msix = NV_MSIX_INT_ENABLED;
-
-/*
- * DMA 64bit
- */
-enum {
- NV_DMA_64BIT_DISABLED,
- NV_DMA_64BIT_ENABLED
-};
-static int dma_64bit = NV_DMA_64BIT_ENABLED;
-
-static inline struct fe_priv *get_nvpriv(struct net_device *dev)
-{
- return netdev_priv(dev);
-}
-
-static inline u8 __iomem *get_hwbase(struct net_device *dev)
-{
- return ((struct fe_priv *)netdev_priv(dev))->base;
-}
-
-static inline void pci_push(u8 __iomem *base)
-{
- /* force out pending posted writes */
- readl(base);
-}
-
-static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v)
-{
- return le32_to_cpu(prd->flaglen)
- & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2);
-}
-
-static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v)
-{
- return le32_to_cpu(prd->flaglen) & LEN_MASK_V2;
-}
-
-static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target,
- int delay, int delaymax, const char *msg)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- pci_push(base);
- do {
- udelay(delay);
- delaymax -= delay;
- if (delaymax < 0) {
- if (msg)
- printk(msg);
- return 1;
- }
- } while ((readl(base + offset) & mask) != target);
- return 0;
-}
-
-#define NV_SETUP_RX_RING 0x01
-#define NV_SETUP_TX_RING 0x02
-
-static void setup_hw_rings(struct net_device *dev, int rxtx_flags)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr);
- }
- } else {
- if (rxtx_flags & NV_SETUP_RX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr) >> 32), base + NvRegRxRingPhysAddrHigh);
- }
- if (rxtx_flags & NV_SETUP_TX_RING) {
- writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr);
- writel((u32) (cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)) >> 32), base + NvRegTxRingPhysAddrHigh);
- }
- }
-}
-
-static void free_rings(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (np->rx_ring.orig)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size),
- np->rx_ring.orig, np->ring_addr);
- } else {
- if (np->rx_ring.ex)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size),
- np->rx_ring.ex, np->ring_addr);
- }
- if (np->rx_skbuff)
- kfree(np->rx_skbuff);
- if (np->rx_dma)
- kfree(np->rx_dma);
- if (np->tx_skbuff)
- kfree(np->tx_skbuff);
- if (np->tx_dma)
- kfree(np->tx_dma);
- if (np->tx_dma_len)
- kfree(np->tx_dma_len);
-}
-
-static int using_multi_irqs(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!(np->msi_flags & NV_MSI_X_ENABLED) ||
- ((np->msi_flags & NV_MSI_X_ENABLED) &&
- ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1)))
- return 0;
- else
- return 1;
-}
-
-static void nv_enable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-static void nv_disable_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
-}
-
-/* In MSIX mode, a write to irqmask behaves as XOR */
-static void nv_enable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- writel(mask, base + NvRegIrqMask);
-}
-
-static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- writel(mask, base + NvRegIrqMask);
- } else {
- if (np->msi_flags & NV_MSI_ENABLED)
- writel(0, base + NvRegMSIIrqMask);
- writel(0, base + NvRegIrqMask);
- }
-}
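To make the XOR note above concrete (the register values below are illustrative, not taken from hardware documentation): with MSI-X enabled every write to NvRegIrqMask toggles the written bits, so re-writing the currently active mask clears it, which is why nv_disable_hw_interrupts() writes mask rather than 0 in the MSI-X branch:

/* Assume the MSI-X irq mask currently holds 0x01ff (all sources enabled):
 *   writel(0x01ff, base + NvRegIrqMask);   0x01ff ^ 0x01ff == 0x0000, all masked
 *   writel(0x01ff, base + NvRegIrqMask);   toggles the same set back on
 *   writel(0x0000, base + NvRegIrqMask);   would be a no-op in this mode
 */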
-
-#define MII_READ (-1)
-/* mii_rw: read/write a register on the PHY.
- *
- * Caller must guarantee serialization
- */
-static int mii_rw(struct net_device *dev, int addr, int miireg, int value)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 reg;
- int retval;
-
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
-
- reg = readl(base + NvRegMIIControl);
- if (reg & NVREG_MIICTL_INUSE) {
- writel(NVREG_MIICTL_INUSE, base + NvRegMIIControl);
- udelay(NV_MIIBUSY_DELAY);
- }
-
- reg = (addr << NVREG_MIICTL_ADDRSHIFT) | miireg;
- if (value != MII_READ) {
- writel(value, base + NvRegMIIData);
- reg |= NVREG_MIICTL_WRITE;
- }
- writel(reg, base + NvRegMIIControl);
-
- if (reg_delay(dev, NvRegMIIControl, NVREG_MIICTL_INUSE, 0,
- NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d timed out.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else if (value != MII_READ) {
- /* it was a write operation - fewer failures are detectable */
- dprintk(KERN_DEBUG "%s: mii_rw wrote 0x%x to reg %d at PHY %d\n",
- dev->name, value, miireg, addr);
- retval = 0;
- } else if (readl(base + NvRegMIIStatus) & NVREG_MIISTAT_ERROR) {
- dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d failed.\n",
- dev->name, miireg, addr);
- retval = -1;
- } else {
- retval = readl(base + NvRegMIIData);
- dprintk(KERN_DEBUG "%s: mii_rw read from reg %d at PHY %d: 0x%x.\n",
- dev->name, miireg, addr, retval);
- }
-
- return retval;
-}
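A minimal usage sketch of mii_rw(), mirroring how phy_init() and nv_update_linkspeed() below use it; the caller must serialize accesses as noted in the comment above, and reads return -1 on timeout or bus error:

	int bmsr;

	bmsr = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);    /* read PHY status */
	if (bmsr != -1 && (bmsr & BMSR_ANEGCAPABLE))
		mii_rw(dev, np->phyaddr, MII_BMCR,
		       BMCR_ANENABLE | BMCR_ANRESTART);          /* restart autoneg */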
-
-static int phy_reset(struct net_device *dev, u32 bmcr_setup)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 miicontrol;
- unsigned int tries = 0;
-
- miicontrol = BMCR_RESET | bmcr_setup;
- if (mii_rw(dev, np->phyaddr, MII_BMCR, miicontrol)) {
- return -1;
- }
-
- /* wait for 500ms */
- msleep(500);
-
- /* must wait till reset is deasserted */
- while (miicontrol & BMCR_RESET) {
- msleep(10);
- miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- /* FIXME: 100 tries seem excessive */
- if (tries++ > 100)
- return -1;
- }
- return 0;
-}
-
-static int phy_init(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000,reg;
-
- /* phy errata for E3016 phy */
- if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
- reg = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
- reg &= ~PHY_MARVELL_E3016_INITMASK;
- if (mii_rw(dev, np->phyaddr, MII_NCONFIG, reg)) {
- printk(KERN_INFO "%s: phy write to errata reg failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
-
- /* set advertise register */
- reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL|ADVERTISE_PAUSE_ASYM|ADVERTISE_PAUSE_CAP);
- if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) {
- printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* get phy interface type */
- phyinterface = readl(base + NvRegPhyInterface);
-
- /* see if gigabit phy */
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- if (mii_status & PHY_GIGABIT) {
- np->gigabit = PHY_GIGABIT;
- mii_control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- mii_control_1000 &= ~ADVERTISE_1000HALF;
- if (phyinterface & PHY_RGMII)
- mii_control_1000 |= ADVERTISE_1000FULL;
- else
- mii_control_1000 &= ~ADVERTISE_1000FULL;
-
- if (mii_rw(dev, np->phyaddr, MII_CTRL1000, mii_control_1000)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- else
- np->gigabit = 0;
-
- mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- mii_control |= BMCR_ANENABLE;
-
- /* reset the phy
- * (certain phys need bmcr to be set up with reset)
- */
- if (phy_reset(dev, mii_control)) {
- printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
-
- /* phy vendor specific configuration */
- if ((np->phy_oui == PHY_OUI_CICADA) && (phyinterface & PHY_RGMII) ) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_RESV1, MII_READ);
- phy_reserved &= ~(PHY_INIT1 | PHY_INIT2);
- phy_reserved |= (PHY_INIT3 | PHY_INIT4);
- if (mii_rw(dev, np->phyaddr, MII_RESV1, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- phy_reserved = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
- phy_reserved |= PHY_INIT5;
- if (mii_rw(dev, np->phyaddr, MII_NCONFIG, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- if (np->phy_oui == PHY_OUI_CICADA) {
- phy_reserved = mii_rw(dev, np->phyaddr, MII_SREVISION, MII_READ);
- phy_reserved |= PHY_INIT6;
- if (mii_rw(dev, np->phyaddr, MII_SREVISION, phy_reserved)) {
- printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
- return PHY_ERROR;
- }
- }
- /* some phys clear out pause advertisement on reset, set it back */
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg);
-
- /* restart auto negotiation */
- mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE);
- if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) {
- return PHY_ERROR;
- }
-
- return 0;
-}
-
-static void nv_start_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_rx\n", dev->name);
- /* Already running? Stop it. */
- if (readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) {
- writel(0, base + NvRegReceiverControl);
- pci_push(base);
- }
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
- writel(NVREG_RCVCTL_START, base + NvRegReceiverControl);
- dprintk(KERN_DEBUG "%s: nv_start_rx to duplex %d, speed 0x%08x.\n",
- dev->name, np->duplex, np->linkspeed);
- pci_push(base);
-}
-
-static void nv_stop_rx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_rx\n", dev->name);
- writel(0, base + NvRegReceiverControl);
- reg_delay(dev, NvRegReceiverStatus, NVREG_RCVSTAT_BUSY, 0,
- NV_RXSTOP_DELAY1, NV_RXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_rx: ReceiverStatus remained busy");
-
- udelay(NV_RXSTOP_DELAY2);
- writel(0, base + NvRegLinkSpeed);
-}
-
-static void nv_start_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_start_tx\n", dev->name);
- writel(NVREG_XMITCTL_START, base + NvRegTransmitterControl);
- pci_push(base);
-}
-
-static void nv_stop_tx(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_stop_tx\n", dev->name);
- writel(0, base + NvRegTransmitterControl);
- reg_delay(dev, NvRegTransmitterStatus, NVREG_XMITSTAT_BUSY, 0,
- NV_TXSTOP_DELAY1, NV_TXSTOP_DELAY1MAX,
- KERN_INFO "nv_stop_tx: TransmitterStatus remained busy");
-
- udelay(NV_TXSTOP_DELAY2);
- writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
-}
-
-static void nv_txrx_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_txrx_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- udelay(NV_TXRX_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-static void nv_mac_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- dprintk(KERN_DEBUG "%s: nv_mac_reset\n", dev->name);
- writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
- writel(NVREG_MAC_RESET_ASSERT, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(0, base + NvRegMacReset);
- pci_push(base);
- udelay(NV_MAC_RESET_DELAY);
- writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
- pci_push(base);
-}
-
-/*
- * nv_get_stats: dev->get_stats function
- * Get latest stats value from the nic.
- * Called with read_lock(&dev_base_lock) held for read -
- * only synchronized against unregister_netdevice.
- */
-static struct net_device_stats *nv_get_stats(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- /* It seems that the nic always generates interrupts and doesn't
- * accumulate errors internally. Thus the current values in np->stats
- * are already up to date.
- */
- return &np->stats;
-}
-
-/*
- * nv_alloc_rx: fill rx ring entries.
- * Return 1 if the allocations for the skbs failed and the
- * rx engine is without Available descriptors
- */
-static int nv_alloc_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int refill_rx = np->refill_rx;
- int nr;
-
- while (np->cur_rx != refill_rx) {
- struct sk_buff *skb;
-
- nr = refill_rx % np->rx_ring_size;
- if (np->rx_skbuff[nr] == NULL) {
-
- skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
- if (!skb)
- break;
-
- skb->dev = dev;
- np->rx_skbuff[nr] = skb;
- } else {
- skb = np->rx_skbuff[nr];
- }
- np->rx_dma[nr] = pci_map_single(np->pci_dev, skb->data,
- skb->end-skb->data, PCI_DMA_FROMDEVICE);
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig[nr].buf = cpu_to_le32(np->rx_dma[nr]);
- wmb();
- np->rx_ring.orig[nr].flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL);
- } else {
- np->rx_ring.ex[nr].bufhigh = cpu_to_le64(np->rx_dma[nr]) >> 32;
- np->rx_ring.ex[nr].buflow = cpu_to_le64(np->rx_dma[nr]) & 0x0FFFFFFFF;
- wmb();
- np->rx_ring.ex[nr].flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL);
- }
- dprintk(KERN_DEBUG "%s: nv_alloc_rx: Packet %d marked as Available\n",
- dev->name, refill_rx);
- refill_rx++;
- }
- np->refill_rx = refill_rx;
- if (np->cur_rx - refill_rx == np->rx_ring_size)
- return 1;
- return 0;
-}
-
-/* If rx bufs are exhausted, this is called after 50ms to attempt a refresh */
-#ifdef CONFIG_FORCEDETH_NAPI
-static void nv_do_rx_refill(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
-
- /* Just reschedule NAPI rx processing */
- netif_rx_schedule(dev);
-}
-#else
-static void nv_do_rx_refill(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq(dev->irq);
- } else {
- disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq(dev->irq);
- } else {
- enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
-}
-#endif
-
-static void nv_init_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->cur_rx = np->rx_ring_size;
- np->refill_rx = 0;
- for (i = 0; i < np->rx_ring_size; i++)
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].flaglen = 0;
- else
- np->rx_ring.ex[i].flaglen = 0;
-}
-
-static void nv_init_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
-
- np->next_tx = np->nic_tx = 0;
- for (i = 0; i < np->tx_ring_size; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].flaglen = 0;
- else
- np->tx_ring.ex[i].flaglen = 0;
- np->tx_skbuff[i] = NULL;
- np->tx_dma[i] = 0;
- }
-}
-
-static int nv_init_ring(struct net_device *dev)
-{
- nv_init_tx(dev);
- nv_init_rx(dev);
- return nv_alloc_rx(dev);
-}
-
-static int nv_release_txskb(struct net_device *dev, unsigned int skbnr)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- dprintk(KERN_INFO "%s: nv_release_txskb for skbnr %d\n",
- dev->name, skbnr);
-
- if (np->tx_dma[skbnr]) {
- pci_unmap_page(np->pci_dev, np->tx_dma[skbnr],
- np->tx_dma_len[skbnr],
- PCI_DMA_TODEVICE);
- np->tx_dma[skbnr] = 0;
- }
-
- if (np->tx_skbuff[skbnr]) {
- dev_kfree_skb_any(np->tx_skbuff[skbnr]);
- np->tx_skbuff[skbnr] = NULL;
- return 1;
- } else {
- return 0;
- }
-}
-
-static void nv_drain_tx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- unsigned int i;
-
- for (i = 0; i < np->tx_ring_size; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->tx_ring.orig[i].flaglen = 0;
- else
- np->tx_ring.ex[i].flaglen = 0;
- if (nv_release_txskb(dev, i))
- np->stats.tx_dropped++;
- }
-}
-
-static void nv_drain_rx(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int i;
- for (i = 0; i < np->rx_ring_size; i++) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- np->rx_ring.orig[i].flaglen = 0;
- else
- np->rx_ring.ex[i].flaglen = 0;
- wmb();
- if (np->rx_skbuff[i]) {
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
- dev_kfree_skb(np->rx_skbuff[i]);
- np->rx_skbuff[i] = NULL;
- }
- }
-}
-
-static void drain_ring(struct net_device *dev)
-{
- nv_drain_tx(dev);
- nv_drain_rx(dev);
-}
-
-/*
- * nv_start_xmit: dev->hard_start_xmit function
- * Called with netif_tx_lock held.
- */
-static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 tx_flags = 0;
- u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
- unsigned int fragments = skb_shinfo(skb)->nr_frags;
- unsigned int nr = (np->next_tx - 1) % np->tx_ring_size;
- unsigned int start_nr = np->next_tx % np->tx_ring_size;
- unsigned int i;
- u32 offset = 0;
- u32 bcnt;
- u32 size = skb->len-skb->data_len;
- u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- u32 tx_flags_vlan = 0;
-
- /* add fragments to entries count */
- for (i = 0; i < fragments; i++) {
- entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
- ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
- }
-
- spin_lock_irq(&np->lock);
-
- if ((np->next_tx - np->nic_tx + entries - 1) > np->tx_limit_stop) {
- spin_unlock_irq(&np->lock);
- netif_stop_queue(dev);
- return NETDEV_TX_BUSY;
- }
-
- /* setup the header buffer */
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % np->tx_ring_size;
-
- np->tx_dma[nr] = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].buf = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].bufhigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].buflow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- tx_flags = np->tx_flags;
- offset += bcnt;
- size -= bcnt;
- } while (size);
-
- /* setup the fragments */
- for (i = 0; i < fragments; i++) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- u32 size = frag->size;
- offset = 0;
-
- do {
- bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- nr = (nr + 1) % np->tx_ring_size;
-
- np->tx_dma[nr] = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
- PCI_DMA_TODEVICE);
- np->tx_dma_len[nr] = bcnt;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].buf = cpu_to_le32(np->tx_dma[nr]);
- np->tx_ring.orig[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- } else {
- np->tx_ring.ex[nr].bufhigh = cpu_to_le64(np->tx_dma[nr]) >> 32;
- np->tx_ring.ex[nr].buflow = cpu_to_le64(np->tx_dma[nr]) & 0x0FFFFFFFF;
- np->tx_ring.ex[nr].flaglen = cpu_to_le32((bcnt-1) | tx_flags);
- }
- offset += bcnt;
- size -= bcnt;
- } while (size);
- }
-
- /* set last fragment flag */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[nr].flaglen |= cpu_to_le32(tx_flags_extra);
- } else {
- np->tx_ring.ex[nr].flaglen |= cpu_to_le32(tx_flags_extra);
- }
-
- np->tx_skbuff[nr] = skb;
-
-#ifdef NETIF_F_TSO
- if (skb_is_gso(skb))
- tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT);
- else
-#endif
- tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ?
- NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0;
-
- /* vlan tag */
- if (np->vlangrp && vlan_tx_tag_present(skb)) {
- tx_flags_vlan = NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb);
- }
-
- /* set tx flags */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[start_nr].flaglen |= cpu_to_le32(tx_flags | tx_flags_extra);
- } else {
- np->tx_ring.ex[start_nr].txvlan = cpu_to_le32(tx_flags_vlan);
- np->tx_ring.ex[start_nr].flaglen |= cpu_to_le32(tx_flags | tx_flags_extra);
- }
-
- dprintk(KERN_DEBUG "%s: nv_start_xmit: packet %d (entries %d) queued for transmission. tx_flags_extra: %x\n",
- dev->name, np->next_tx, entries, tx_flags_extra);
- {
- int j;
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)skb->data)[j]);
- }
- dprintk("\n");
- }
-
- np->next_tx += entries;
-
- dev->trans_start = jiffies;
- spin_unlock_irq(&np->lock);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(get_hwbase(dev));
- return NETDEV_TX_OK;
-}
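For reference, the entries calculation near the top of nv_start_xmit() is a ceiling division of the buffer size by the per-descriptor limit; a worked example, assuming NV_TX2_TSO_MAX_SIZE == 1 << NV_TX2_TSO_MAX_SHIFT == 16384 (an illustrative value; the expression only relies on the power-of-two relationship):

/*   size = 20000: 20000 >> 14 == 1, remainder 3616 != 0  ->  entries = 2
 *   size = 16384: 16384 >> 14 == 1, remainder 0          ->  entries = 1
 * i.e. entries == DIV_ROUND_UP(size, NV_TX2_TSO_MAX_SIZE).
 */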
-
-/*
- * nv_tx_done: check for completed packets, release the skbs.
- *
- * Caller must own np->lock.
- */
-static void nv_tx_done(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 flags;
- unsigned int i;
- struct sk_buff *skb;
-
- while (np->nic_tx != np->next_tx) {
- i = np->nic_tx % np->tx_ring_size;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
- flags = le32_to_cpu(np->tx_ring.orig[i].flaglen);
- else
- flags = le32_to_cpu(np->tx_ring.ex[i].flaglen);
-
- dprintk(KERN_DEBUG "%s: nv_tx_done: looking at packet %d, flags 0x%x.\n",
- dev->name, np->nic_tx, flags);
- if (flags & NV_TX_VALID)
- break;
- if (np->desc_ver == DESC_VER_1) {
- if (flags & NV_TX_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (flags & (NV_TX_RETRYERROR|NV_TX_CARRIERLOST|NV_TX_LATECOLLISION|
- NV_TX_UNDERFLOW|NV_TX_ERROR)) {
- if (flags & NV_TX_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (flags & NV_TX_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- } else {
- if (flags & NV_TX2_LASTPACKET) {
- skb = np->tx_skbuff[i];
- if (flags & (NV_TX2_RETRYERROR|NV_TX2_CARRIERLOST|NV_TX2_LATECOLLISION|
- NV_TX2_UNDERFLOW|NV_TX2_ERROR)) {
- if (flags & NV_TX2_UNDERFLOW)
- np->stats.tx_fifo_errors++;
- if (flags & NV_TX2_CARRIERLOST)
- np->stats.tx_carrier_errors++;
- np->stats.tx_errors++;
- } else {
- np->stats.tx_packets++;
- np->stats.tx_bytes += skb->len;
- }
- }
- }
- nv_release_txskb(dev, i);
- np->nic_tx++;
- }
- if (np->next_tx - np->nic_tx < np->tx_limit_start)
- netif_wake_queue(dev);
-}
-
-/*
- * nv_tx_timeout: dev->tx_timeout function
- * Called with netif_tx_lock held.
- */
-static void nv_tx_timeout(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 status;
-
- if (np->msi_flags & NV_MSI_X_ENABLED)
- status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- else
- status = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
-
- printk(KERN_INFO "%s: Got tx_timeout. irq: %08x\n", dev->name, status);
-
- {
- int i;
-
- printk(KERN_INFO "%s: Ring at %lx: next %d nic %d\n",
- dev->name, (unsigned long)np->ring_addr,
- np->next_tx, np->nic_tx);
- printk(KERN_INFO "%s: Dumping tx registers\n", dev->name);
- for (i=0;i<=np->register_size;i+= 32) {
- printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
- i,
- readl(base + i + 0), readl(base + i + 4),
- readl(base + i + 8), readl(base + i + 12),
- readl(base + i + 16), readl(base + i + 20),
- readl(base + i + 24), readl(base + i + 28));
- }
- printk(KERN_INFO "%s: Dumping tx ring\n", dev->name);
- for (i=0;i<np->tx_ring_size;i+= 4) {
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.orig[i].buf),
- le32_to_cpu(np->tx_ring.orig[i].flaglen),
- le32_to_cpu(np->tx_ring.orig[i+1].buf),
- le32_to_cpu(np->tx_ring.orig[i+1].flaglen),
- le32_to_cpu(np->tx_ring.orig[i+2].buf),
- le32_to_cpu(np->tx_ring.orig[i+2].flaglen),
- le32_to_cpu(np->tx_ring.orig[i+3].buf),
- le32_to_cpu(np->tx_ring.orig[i+3].flaglen));
- } else {
- printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n",
- i,
- le32_to_cpu(np->tx_ring.ex[i].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i].buflow),
- le32_to_cpu(np->tx_ring.ex[i].flaglen),
- le32_to_cpu(np->tx_ring.ex[i+1].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i+1].buflow),
- le32_to_cpu(np->tx_ring.ex[i+1].flaglen),
- le32_to_cpu(np->tx_ring.ex[i+2].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i+2].buflow),
- le32_to_cpu(np->tx_ring.ex[i+2].flaglen),
- le32_to_cpu(np->tx_ring.ex[i+3].bufhigh),
- le32_to_cpu(np->tx_ring.ex[i+3].buflow),
- le32_to_cpu(np->tx_ring.ex[i+3].flaglen));
- }
- }
- }
-
- spin_lock_irq(&np->lock);
-
- /* 1) stop tx engine */
- nv_stop_tx(dev);
-
- /* 2) check that the packets were not sent already: */
- nv_tx_done(dev);
-
- /* 3) if there are dead entries: clear everything */
- if (np->next_tx != np->nic_tx) {
- printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name);
- nv_drain_tx(dev);
- np->next_tx = np->nic_tx = 0;
- setup_hw_rings(dev, NV_SETUP_TX_RING);
- netif_wake_queue(dev);
- }
-
- /* 4) restart tx engine */
- nv_start_tx(dev);
- spin_unlock_irq(&np->lock);
-}
-
-/*
- * Called when the nic notices a mismatch between the actual data len on the
- * wire and the len indicated in the 802 header
- */
-static int nv_getlen(struct net_device *dev, void *packet, int datalen)
-{
- int hdrlen; /* length of the 802 header */
- int protolen; /* length as stored in the proto field */
-
- /* 1) calculate len according to header */
- if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == htons(ETH_P_8021Q)) {
- protolen = ntohs( ((struct vlan_ethhdr *)packet)->h_vlan_encapsulated_proto );
- hdrlen = VLAN_HLEN;
- } else {
- protolen = ntohs( ((struct ethhdr *)packet)->h_proto);
- hdrlen = ETH_HLEN;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: datalen %d, protolen %d, hdrlen %d\n",
- dev->name, datalen, protolen, hdrlen);
- if (protolen > ETH_DATA_LEN)
- return datalen; /* Value in proto field not a len, no checks possible */
-
- protolen += hdrlen;
- /* consistency checks: */
- if (datalen > ETH_ZLEN) {
- if (datalen >= protolen) {
- /* more data on wire than in 802 header, trim off
- * additional data.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, protolen);
- return protolen;
- } else {
- /* less data on wire than mentioned in header.
- * Discard the packet.
- */
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding long packet.\n",
- dev->name);
- return -1;
- }
- } else {
- /* short packet. Accept only if 802 values are also short */
- if (protolen > ETH_ZLEN) {
- dprintk(KERN_DEBUG "%s: nv_getlen: discarding short packet.\n",
- dev->name);
- return -1;
- }
- dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
- dev->name, datalen);
- return datalen;
- }
-}
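A short worked example of the consistency checks above (the frame sizes are illustrative):

/* datalen = 400 bytes on the wire, plain Ethernet header (hdrlen = ETH_HLEN = 14):
 *   proto field = 0x0800 (IPv4, > ETH_DATA_LEN): not a length  ->  accept 400
 *   proto field = 300: protolen = 300 + 14 = 314 <= 400        ->  trim to 314
 *   proto field = 500: protolen = 500 + 14 = 514 >  400        ->  discard (-1)
 */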
-
-static int nv_rx_process(struct net_device *dev, int limit)
-{
- struct fe_priv *np = netdev_priv(dev);
- u32 flags;
- u32 vlanflags = 0;
- int count;
-
- for (count = 0; count < limit; ++count) {
- struct sk_buff *skb;
- int len;
- int i;
- if (np->cur_rx - np->refill_rx >= np->rx_ring_size)
- break; /* we scanned the whole ring - do not continue */
-
- i = np->cur_rx % np->rx_ring_size;
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- flags = le32_to_cpu(np->rx_ring.orig[i].flaglen);
- len = nv_descr_getlength(&np->rx_ring.orig[i], np->desc_ver);
- } else {
- flags = le32_to_cpu(np->rx_ring.ex[i].flaglen);
- len = nv_descr_getlength_ex(&np->rx_ring.ex[i], np->desc_ver);
- vlanflags = le32_to_cpu(np->rx_ring.ex[i].buflow);
- }
-
- dprintk(KERN_DEBUG "%s: nv_rx_process: looking at packet %d, flags 0x%x.\n",
- dev->name, np->cur_rx, flags);
-
- if (flags & NV_RX_AVAIL)
- break; /* still owned by hardware, */
-
- /*
- * the packet is for us - immediately tear down the pci mapping.
- * TODO: check if a prefetch of the first cacheline improves
- * the performance.
- */
- pci_unmap_single(np->pci_dev, np->rx_dma[i],
- np->rx_skbuff[i]->end-np->rx_skbuff[i]->data,
- PCI_DMA_FROMDEVICE);
-
- {
- int j;
- dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",flags);
- for (j=0; j<64; j++) {
- if ((j%16) == 0)
- dprintk("\n%03x:", j);
- dprintk(" %02x", ((unsigned char*)np->rx_skbuff[i]->data)[j]);
- }
- dprintk("\n");
- }
- /* look at what we actually got: */
- if (np->desc_ver == DESC_VER_1) {
- if (!(flags & NV_RX_DESCRIPTORVALID))
- goto next_pkt;
-
- if (flags & NV_RX_ERROR) {
- if (flags & NV_RX_MISSEDFRAME) {
- np->stats.rx_missed_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & (NV_RX_ERROR1|NV_RX_ERROR2|NV_RX_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors. */
- if (flags & NV_RX_FRAMINGERR) {
- if (flags & NV_RX_SUBSTRACT1) {
- len--;
- }
- }
- }
- } else {
- if (!(flags & NV_RX2_DESCRIPTORVALID))
- goto next_pkt;
-
- if (flags & NV_RX2_ERROR) {
- if (flags & (NV_RX2_ERROR1|NV_RX2_ERROR2|NV_RX2_ERROR3)) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX2_CRCERR) {
- np->stats.rx_crc_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX2_OVERFLOW) {
- np->stats.rx_over_errors++;
- np->stats.rx_errors++;
- goto next_pkt;
- }
- if (flags & NV_RX2_ERROR4) {
- len = nv_getlen(dev, np->rx_skbuff[i]->data, len);
- if (len < 0) {
- np->stats.rx_errors++;
- goto next_pkt;
- }
- }
- /* framing errors are soft errors */
- if (flags & NV_RX2_FRAMINGERR) {
- if (flags & NV_RX2_SUBSTRACT1) {
- len--;
- }
- }
- }
- if (np->rx_csum) {
- flags &= NV_RX2_CHECKSUMMASK;
- if (flags == NV_RX2_CHECKSUMOK1 ||
- flags == NV_RX2_CHECKSUMOK2 ||
- flags == NV_RX2_CHECKSUMOK3) {
- dprintk(KERN_DEBUG "%s: hw checksum hit!.\n", dev->name);
- np->rx_skbuff[i]->ip_summed = CHECKSUM_UNNECESSARY;
- } else {
- dprintk(KERN_DEBUG "%s: hwchecksum miss!.\n", dev->name);
- }
- }
- }
- /* got a valid packet - forward it to the network core */
- skb = np->rx_skbuff[i];
- np->rx_skbuff[i] = NULL;
-
- skb_put(skb, len);
- skb->protocol = eth_type_trans(skb, dev);
- dprintk(KERN_DEBUG "%s: nv_rx_process: packet %d with %d bytes, proto %d accepted.\n",
- dev->name, np->cur_rx, len, skb->protocol);
-#ifdef CONFIG_FORCEDETH_NAPI
- if (np->vlangrp && (vlanflags & NV_RX3_VLAN_TAG_PRESENT))
- vlan_hwaccel_receive_skb(skb, np->vlangrp,
- vlanflags & NV_RX3_VLAN_TAG_MASK);
- else
- netif_receive_skb(skb);
-#else
- if (np->vlangrp && (vlanflags & NV_RX3_VLAN_TAG_PRESENT))
- vlan_hwaccel_rx(skb, np->vlangrp,
- vlanflags & NV_RX3_VLAN_TAG_MASK);
- else
- netif_rx(skb);
-#endif
- dev->last_rx = jiffies;
- np->stats.rx_packets++;
- np->stats.rx_bytes += len;
-next_pkt:
- np->cur_rx++;
- }
-
- return count;
-}
-
-static void set_bufsize(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (dev->mtu <= ETH_DATA_LEN)
- np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS;
- else
- np->rx_buf_sz = dev->mtu + NV_RX_HEADERS;
-}
-
-/*
- * nv_change_mtu: dev->change_mtu function
- * Called with dev_base_lock held for read.
- */
-static int nv_change_mtu(struct net_device *dev, int new_mtu)
-{
- struct fe_priv *np = netdev_priv(dev);
- int old_mtu;
-
- if (new_mtu < 64 || new_mtu > np->pkt_limit)
- return -EINVAL;
-
- old_mtu = dev->mtu;
- dev->mtu = new_mtu;
-
- /* return early if the buffer sizes will not change */
- if (old_mtu <= ETH_DATA_LEN && new_mtu <= ETH_DATA_LEN)
- return 0;
- if (old_mtu == new_mtu)
- return 0;
-
- /* synchronized against open : rtnl_lock() held by caller */
- if (netif_running(dev)) {
- u8 __iomem *base = get_hwbase(dev);
- /*
- * It seems that the nic preloads valid ring entries into an
- * internal buffer. The procedure for flushing everything is
- * guessed; there is probably a simpler approach.
- * Changing the MTU is a rare event, so it shouldn't matter.
- */
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- /* reinit driver view of the rx queue */
- set_bufsize(dev);
- if (nv_init_ring(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
-
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- nv_enable_irq(dev);
- }
- return 0;
-}
-
-static void nv_copy_mac_to_hw(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 mac[2];
-
- mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
- (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
- mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
-
- writel(mac[0], base + NvRegMacAddrA);
- writel(mac[1], base + NvRegMacAddrB);
-}
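For example (an illustrative address), the packing above places the bytes in little-endian order within each register:

/* dev_addr = 00:11:22:33:44:55
 *   mac[0] = 0x00 | 0x11<<8 | 0x22<<16 | 0x33<<24 = 0x33221100  ->  NvRegMacAddrA
 *   mac[1] = 0x44 | 0x55<<8                       = 0x00005544  ->  NvRegMacAddrB
 */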
-
-/*
- * nv_set_mac_address: dev->set_mac_address function
- * Called with rtnl_lock() held.
- */
-static int nv_set_mac_address(struct net_device *dev, void *addr)
-{
- struct fe_priv *np = netdev_priv(dev);
- struct sockaddr *macaddr = (struct sockaddr*)addr;
-
- if (!is_valid_ether_addr(macaddr->sa_data))
- return -EADDRNOTAVAIL;
-
- /* synchronized against open : rtnl_lock() held by caller */
- memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN);
-
- if (netif_running(dev)) {
- netif_tx_lock_bh(dev);
- spin_lock_irq(&np->lock);
-
- /* stop rx engine */
- nv_stop_rx(dev);
-
- /* set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* restart rx engine */
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
- netif_tx_unlock_bh(dev);
- } else {
- nv_copy_mac_to_hw(dev);
- }
- return 0;
-}
-
-/*
- * nv_set_multicast: dev->set_multicast function
- * Called with netif_tx_lock held.
- */
-static void nv_set_multicast(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 addr[2];
- u32 mask[2];
- u32 pff = readl(base + NvRegPacketFilterFlags) & NVREG_PFF_PAUSE_RX;
-
- memset(addr, 0, sizeof(addr));
- memset(mask, 0, sizeof(mask));
-
- if (dev->flags & IFF_PROMISC) {
- pff |= NVREG_PFF_PROMISC;
- } else {
- pff |= NVREG_PFF_MYADDR;
-
- if (dev->flags & IFF_ALLMULTI || dev->mc_list) {
- u32 alwaysOff[2];
- u32 alwaysOn[2];
-
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0xffffffff;
- if (dev->flags & IFF_ALLMULTI) {
- alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0;
- } else {
- struct dev_mc_list *walk;
-
- walk = dev->mc_list;
- while (walk != NULL) {
- u32 a, b;
- a = le32_to_cpu(*(u32 *) walk->dmi_addr);
- b = le16_to_cpu(*(u16 *) (&walk->dmi_addr[4]));
- alwaysOn[0] &= a;
- alwaysOff[0] &= ~a;
- alwaysOn[1] &= b;
- alwaysOff[1] &= ~b;
- walk = walk->next;
- }
- }
- addr[0] = alwaysOn[0];
- addr[1] = alwaysOn[1];
- mask[0] = alwaysOn[0] | alwaysOff[0];
- mask[1] = alwaysOn[1] | alwaysOff[1];
- }
- }
- addr[0] |= NVREG_MCASTADDRA_FORCE;
- pff |= NVREG_PFF_ALWAYS;
- spin_lock_irq(&np->lock);
- nv_stop_rx(dev);
- writel(addr[0], base + NvRegMulticastAddrA);
- writel(addr[1], base + NvRegMulticastAddrB);
- writel(mask[0], base + NvRegMulticastMaskA);
- writel(mask[1], base + NvRegMulticastMaskB);
- writel(pff, base + NvRegPacketFilterFlags);
- dprintk(KERN_INFO "%s: reconfiguration for multicast lists.\n",
- dev->name);
- nv_start_rx(dev);
- spin_unlock_irq(&np->lock);
-}
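The alwaysOn/alwaysOff accumulation above keeps, per bit, only the positions on which every listed multicast address agrees; a small worked example with two illustrative 32-bit address words:

/* a1 = 0x01005e00, a2 = 0x01005e7f
 *   alwaysOn  = a1 & a2              = 0x01005e00  (bits set in every address)
 *   alwaysOff = ~a1 & ~a2            = 0xfeffa180  (bits clear in every address)
 *   addr[0]   = alwaysOn             = 0x01005e00
 *   mask[0]   = alwaysOn | alwaysOff = 0xffffff80  (bits all addresses agree on)
 * The filter then presumably accepts any address matching addr on the mask bits.
 */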
-
-static void nv_update_pause(struct net_device *dev, u32 pause_flags)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- np->pause_flags &= ~(NV_PAUSEFRAME_TX_ENABLE | NV_PAUSEFRAME_RX_ENABLE);
-
- if (np->pause_flags & NV_PAUSEFRAME_RX_CAPABLE) {
- u32 pff = readl(base + NvRegPacketFilterFlags) & ~NVREG_PFF_PAUSE_RX;
- if (pause_flags & NV_PAUSEFRAME_RX_ENABLE) {
- writel(pff|NVREG_PFF_PAUSE_RX, base + NvRegPacketFilterFlags);
- np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- } else {
- writel(pff, base + NvRegPacketFilterFlags);
- }
- }
- if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE) {
- u32 regmisc = readl(base + NvRegMisc1) & ~NVREG_MISC1_PAUSE_TX;
- if (pause_flags & NV_PAUSEFRAME_TX_ENABLE) {
- writel(NVREG_TX_PAUSEFRAME_ENABLE, base + NvRegTxPauseFrame);
- writel(regmisc|NVREG_MISC1_PAUSE_TX, base + NvRegMisc1);
- np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- } else {
- writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame);
- writel(regmisc, base + NvRegMisc1);
- }
- }
-}
-
-/**
- * nv_update_linkspeed: Setup the MAC according to the link partner
- * @dev: Network device to be configured
- *
- * The function queries the PHY and checks if there is a link partner.
- * If yes, then it sets up the MAC accordingly. Otherwise, the MAC is
- * set to 10 MBit HD.
- *
- * The function returns 0 if there is no link partner and 1 if there is
- * a good link partner.
- */
-static int nv_update_linkspeed(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int adv = 0;
- int lpa = 0;
- int adv_lpa, adv_pause, lpa_pause;
- int newls = np->linkspeed;
- int newdup = np->duplex;
- int mii_status;
- int retval = 0;
- u32 control_1000, status_1000, phyreg, pause_flags, txreg;
-
- /* BMSR_LSTATUS is latched, read it twice:
- * we want the current value.
- */
- mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
-
- if (!(mii_status & BMSR_LSTATUS)) {
- dprintk(KERN_DEBUG "%s: no link detected by phy - falling back to 10HD.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- goto set_speed;
- }
-
- if (np->autoneg == 0) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: autoneg off, PHY set to 0x%04x.\n",
- dev->name, np->fixed_mode);
- if (np->fixed_mode & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (np->fixed_mode & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (np->fixed_mode & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
- retval = 1;
- goto set_speed;
- }
- /* check auto negotiation is complete */
- if (!(mii_status & BMSR_ANEGCOMPLETE)) {
- /* still in autonegotiation - configure nic for 10 MBit HD and wait. */
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- retval = 0;
- dprintk(KERN_DEBUG "%s: autoneg not completed - falling back to 10HD.\n", dev->name);
- goto set_speed;
- }
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ);
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: PHY advertises 0x%04x, lpa 0x%04x.\n",
- dev->name, adv, lpa);
-
- retval = 1;
- if (np->gigabit == PHY_GIGABIT) {
- control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- status_1000 = mii_rw(dev, np->phyaddr, MII_STAT1000, MII_READ);
-
- if ((control_1000 & ADVERTISE_1000FULL) &&
- (status_1000 & LPA_1000FULL)) {
- dprintk(KERN_DEBUG "%s: nv_update_linkspeed: GBit ethernet detected.\n",
- dev->name);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_1000;
- newdup = 1;
- goto set_speed;
- }
- }
-
- /* FIXME: handle parallel detection properly */
- adv_lpa = lpa & adv;
- if (adv_lpa & LPA_100FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 1;
- } else if (adv_lpa & LPA_100HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
- newdup = 0;
- } else if (adv_lpa & LPA_10FULL) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 1;
- } else if (adv_lpa & LPA_10HALF) {
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- } else {
- dprintk(KERN_DEBUG "%s: bad ability %04x - falling back to 10HD.\n", dev->name, adv_lpa);
- newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- newdup = 0;
- }
-
-set_speed:
- if (np->duplex == newdup && np->linkspeed == newls)
- return retval;
-
- dprintk(KERN_INFO "%s: changing link setting from %d/%d to %d/%d.\n",
- dev->name, np->linkspeed, np->duplex, newls, newdup);
-
- np->duplex = newdup;
- np->linkspeed = newls;
-
- if (np->gigabit == PHY_GIGABIT) {
- phyreg = readl(base + NvRegRandomSeed);
- phyreg &= ~(0x3FF00);
- if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10)
- phyreg |= NVREG_RNDSEED_FORCE3;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100)
- phyreg |= NVREG_RNDSEED_FORCE2;
- else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_1000)
- phyreg |= NVREG_RNDSEED_FORCE;
- writel(phyreg, base + NvRegRandomSeed);
- }
-
- phyreg = readl(base + NvRegPhyInterface);
- phyreg &= ~(PHY_HALF|PHY_100|PHY_1000);
- if (np->duplex == 0)
- phyreg |= PHY_HALF;
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_100)
- phyreg |= PHY_100;
- else if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- phyreg |= PHY_1000;
- writel(phyreg, base + NvRegPhyInterface);
-
- if (phyreg & PHY_RGMII) {
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- txreg = NVREG_TX_DEFERRAL_RGMII_1000;
- else
- txreg = NVREG_TX_DEFERRAL_RGMII_10_100;
- } else {
- txreg = NVREG_TX_DEFERRAL_DEFAULT;
- }
- writel(txreg, base + NvRegTxDeferral);
-
- if (np->desc_ver == DESC_VER_1) {
- txreg = NVREG_TX_WM_DESC1_DEFAULT;
- } else {
- if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
- txreg = NVREG_TX_WM_DESC2_3_1000;
- else
- txreg = NVREG_TX_WM_DESC2_3_DEFAULT;
- }
- writel(txreg, base + NvRegTxWatermark);
-
- writel(NVREG_MISC1_FORCE | ( np->duplex ? 0 : NVREG_MISC1_HD),
- base + NvRegMisc1);
- pci_push(base);
- writel(np->linkspeed, base + NvRegLinkSpeed);
- pci_push(base);
-
- pause_flags = 0;
- /* setup pause frame */
- if (np->duplex != 0) {
- if (np->autoneg && np->pause_flags & NV_PAUSEFRAME_AUTONEG) {
- adv_pause = adv & (ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM);
- lpa_pause = lpa & (LPA_PAUSE_CAP| LPA_PAUSE_ASYM);
-
- switch (adv_pause) {
- case ADVERTISE_PAUSE_CAP:
- if (lpa_pause & LPA_PAUSE_CAP) {
- pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- break;
- case ADVERTISE_PAUSE_ASYM:
- if (lpa_pause == (LPA_PAUSE_CAP| LPA_PAUSE_ASYM))
- {
- pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- break;
- case ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM:
- if (lpa_pause & LPA_PAUSE_CAP)
- {
- pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- if (lpa_pause == LPA_PAUSE_ASYM)
- {
- pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- }
- break;
- }
- } else {
- pause_flags = np->pause_flags;
- }
- }
- nv_update_pause(dev, pause_flags);
-
- return retval;
-}
-
-static void nv_linkchange(struct net_device *dev)
-{
- if (nv_update_linkspeed(dev)) {
- if (!netif_carrier_ok(dev)) {
- netif_carrier_on(dev);
- printk(KERN_INFO "%s: link up.\n", dev->name);
- nv_start_rx(dev);
- }
- } else {
- if (netif_carrier_ok(dev)) {
- netif_carrier_off(dev);
- printk(KERN_INFO "%s: link down.\n", dev->name);
- nv_stop_rx(dev);
- }
- }
-}
-
-static void nv_link_irq(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- u32 miistat;
-
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "%s: link change irq, status 0x%x.\n", dev->name, miistat);
-
- if (miistat & (NVREG_MIISTAT_LINKCHANGE))
- nv_linkchange(dev);
- dprintk(KERN_DEBUG "%s: link change notification done.\n", dev->name);
-}
-
-static irqreturn_t nv_nic_irq(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq\n", dev->name);
-
- for (i=0; ; i++) {
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- } else {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
- }
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- spin_lock(&np->lock);
- nv_tx_done(dev);
- spin_unlock(&np->lock);
-
- if (events & NVREG_IRQ_LINK) {
- spin_lock(&np->lock);
- nv_link_irq(dev);
- spin_unlock(&np->lock);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- spin_lock(&np->lock);
- nv_linkchange(dev);
- spin_unlock(&np->lock);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
-#ifdef CONFIG_FORCEDETH_NAPI
- if (events & NVREG_IRQ_RX_ALL) {
- netif_rx_schedule(dev);
-
- /* Disable further receive irqs */
- spin_lock(&np->lock);
- np->irqmask &= ~NVREG_IRQ_RX_ALL;
-
- if (np->msi_flags & NV_MSI_X_ENABLED)
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- spin_unlock(&np->lock);
- }
-#else
- nv_rx_process(dev, dev->weight);
- if (nv_alloc_rx(dev)) {
- spin_lock(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock(&np->lock);
- }
-#endif
- if (i > max_interrupt_work) {
- spin_lock(&np->lock);
- /* disable interrupts on the nic */
- if (!(np->msi_flags & NV_MSI_X_ENABLED))
- writel(0, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq = np->irqmask;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i);
- spin_unlock(&np->lock);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_tx(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
- unsigned long flags;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL;
- writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: tx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- spin_lock_irqsave(&np->lock, flags);
- nv_tx_done(dev);
- spin_unlock_irqrestore(&np->lock, flags);
-
- if (events & (NVREG_IRQ_TX_ERR)) {
- dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- spin_lock_irqsave(&np->lock, flags);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_TX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i);
- spin_unlock_irqrestore(&np->lock, flags);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_tx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-#ifdef CONFIG_FORCEDETH_NAPI
-static int nv_napi_poll(struct net_device *dev, int *budget)
-{
- int pkts, limit = min(*budget, dev->quota);
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- pkts = nv_rx_process(dev, limit);
-
- if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
- }
-
- if (pkts < limit) {
- /* all done, no more packets present */
- netif_rx_complete(dev);
-
- /* re-enable receive interrupts */
- spin_lock_irq(&np->lock);
- np->irqmask |= NVREG_IRQ_RX_ALL;
- if (np->msi_flags & NV_MSI_X_ENABLED)
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- else
- writel(np->irqmask, base + NvRegIrqMask);
- spin_unlock_irq(&np->lock);
- return 0;
- } else {
- /* used up our quantum, so reschedule */
- dev->quota -= pkts;
- *budget -= pkts;
- return 1;
- }
-}
-#endif
-
-#ifdef CONFIG_FORCEDETH_NAPI
-static irqreturn_t nv_nic_irq_rx(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
-
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL;
- writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus);
-
- if (events) {
- netif_rx_schedule(dev);
- /* disable receive interrupts on the nic */
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- pci_push(base);
- }
- return IRQ_HANDLED;
-}
-#else
-static irqreturn_t nv_nic_irq_rx(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
- unsigned long flags;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL;
- writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: rx irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- nv_rx_process(dev, dev->weight);
- if (nv_alloc_rx(dev)) {
- spin_lock_irqsave(&np->lock, flags);
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irqrestore(&np->lock, flags);
- }
-
- if (i > max_interrupt_work) {
- spin_lock_irqsave(&np->lock, flags);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_RX_ALL;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i);
- spin_unlock_irqrestore(&np->lock, flags);
- break;
- }
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-#endif
-
-static irqreturn_t nv_nic_irq_other(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
- int i;
- unsigned long flags;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other\n", dev->name);
-
- for (i=0; ; i++) {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER;
- writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus);
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & np->irqmask))
- break;
-
- if (events & NVREG_IRQ_LINK) {
- spin_lock_irqsave(&np->lock, flags);
- nv_link_irq(dev);
- spin_unlock_irqrestore(&np->lock, flags);
- }
- if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- spin_lock_irqsave(&np->lock, flags);
- nv_linkchange(dev);
- spin_unlock_irqrestore(&np->lock, flags);
- np->link_timeout = jiffies + LINK_TIMEOUT;
- }
- if (events & (NVREG_IRQ_UNKNOWN)) {
- printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
- dev->name, events);
- }
- if (i > max_interrupt_work) {
- spin_lock_irqsave(&np->lock, flags);
- /* disable interrupts on the nic */
- writel(NVREG_IRQ_OTHER, base + NvRegIrqMask);
- pci_push(base);
-
- if (!np->in_shutdown) {
- np->nic_poll_irq |= NVREG_IRQ_OTHER;
- mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
- }
- printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i);
- spin_unlock_irqrestore(&np->lock, flags);
- break;
- }
-
- }
- dprintk(KERN_DEBUG "%s: nv_nic_irq_other completed\n", dev->name);
-
- return IRQ_RETVAL(i);
-}
-
-static irqreturn_t nv_nic_irq_test(int foo, void *data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 events;
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_test\n", dev->name);
-
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQ_TIMER, base + NvRegIrqStatus);
- } else {
- events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
- writel(NVREG_IRQ_TIMER, base + NvRegMSIXIrqStatus);
- }
- pci_push(base);
- dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
- if (!(events & NVREG_IRQ_TIMER))
- return IRQ_RETVAL(0);
-
- spin_lock(&np->lock);
- np->intr_test = 1;
- spin_unlock(&np->lock);
-
- dprintk(KERN_DEBUG "%s: nv_nic_irq_test completed\n", dev->name);
-
- return IRQ_RETVAL(1);
-}
-
-static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask)
-{
- u8 __iomem *base = get_hwbase(dev);
- int i;
- u32 msixmap = 0;
-
- /* Each interrupt bit can be mapped to an MSIX vector (4 bits).
- * MSIXMap0 represents the first 8 interrupts and MSIXMap1 represents
- * the remaining 8 interrupts.
- */
- for (i = 0; i < 8; i++) {
- if ((irqmask >> i) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap0) | msixmap, base + NvRegMSIXMap0);
-
- msixmap = 0;
- for (i = 0; i < 8; i++) {
- if ((irqmask >> (i + 8)) & 0x1) {
- msixmap |= vector << (i << 2);
- }
- }
- writel(readl(base + NvRegMSIXMap1) | msixmap, base + NvRegMSIXMap1);
-}
-
-static int nv_request_irq(struct net_device *dev, int intr_test)
-{
- struct fe_priv *np = get_nvpriv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int i;
-
- if (np->msi_flags & NV_MSI_X_CAPABLE) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- np->msi_x_entry[i].entry = i;
- }
- if ((ret = pci_enable_msix(np->pci_dev, np->msi_x_entry, (np->msi_flags & NV_MSI_X_VECTORS_MASK))) == 0) {
- np->msi_flags |= NV_MSI_X_ENABLED;
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) {
- /* Request irq for rx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQF_SHARED, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
- /* Request irq for tx handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQF_SHARED, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_rx;
- }
- /* Request irq for link and timer handling */
- if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQF_SHARED, dev->name, dev) != 0) {
- printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_free_tx;
- }
- /* map interrupts to their respective vector */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL);
- set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER);
- } else {
- /* Request irq for all interrupts */
- if ((!intr_test &&
- request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
- (intr_test &&
- request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0)) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIXMap0);
- writel(0, base + NvRegMSIXMap1);
- }
- }
- }
- if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) {
- if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
- pci_intx(np->pci_dev, 0);
- np->msi_flags |= NV_MSI_ENABLED;
- if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
- (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0)) {
- printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
- pci_disable_msi(np->pci_dev);
- pci_intx(np->pci_dev, 1);
- np->msi_flags &= ~NV_MSI_ENABLED;
- goto out_err;
- }
-
- /* map interrupts to vector 0 */
- writel(0, base + NvRegMSIMap0);
- writel(0, base + NvRegMSIMap1);
- /* enable msi vector 0 */
- writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask);
- }
- }
- if (ret != 0) {
- if ((!intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq, IRQF_SHARED, dev->name, dev) != 0) ||
- (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQF_SHARED, dev->name, dev) != 0))
- goto out_err;
-
- }
-
- return 0;
-out_free_tx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev);
-out_free_rx:
- free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev);
-out_err:
- return 1;
-}
-
-static void nv_free_irq(struct net_device *dev)
-{
- struct fe_priv *np = get_nvpriv(dev);
- int i;
-
- if (np->msi_flags & NV_MSI_X_ENABLED) {
- for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
- free_irq(np->msi_x_entry[i].vector, dev);
- }
- pci_disable_msix(np->pci_dev);
- np->msi_flags &= ~NV_MSI_X_ENABLED;
- } else {
- free_irq(np->pci_dev->irq, dev);
- if (np->msi_flags & NV_MSI_ENABLED) {
- pci_disable_msi(np->pci_dev);
- pci_intx(np->pci_dev, 1);
- np->msi_flags &= ~NV_MSI_ENABLED;
- }
- }
-}
-
-static void nv_do_nic_poll(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 mask = 0;
-
- /*
-	 * First disable irq(s), then re-enable interrupts on the nic; we have
-	 * to do this before calling nv_nic_irq because that may decide to do
-	 * otherwise.
- */
-
- if (!using_multi_irqs(dev)) {
- if (np->msi_flags & NV_MSI_X_ENABLED)
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- disable_irq_lockdep(dev->irq);
- mask = np->irqmask;
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- mask |= NVREG_IRQ_RX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- mask |= NVREG_IRQ_TX_ALL;
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- mask |= NVREG_IRQ_OTHER;
- }
- }
- np->nic_poll_irq = 0;
-
- /* FIXME: Do we need synchronize_irq(dev->irq) here? */
-
- writel(mask, base + NvRegIrqMask);
- pci_push(base);
-
- if (!using_multi_irqs(dev)) {
- nv_nic_irq(0, dev);
- if (np->msi_flags & NV_MSI_X_ENABLED)
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
- else
- enable_irq_lockdep(dev->irq);
- } else {
- if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
- nv_nic_irq_rx(0, dev);
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
- nv_nic_irq_tx(0, dev);
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
- }
- if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
- nv_nic_irq_other(0, dev);
- enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
- }
- }
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void nv_poll_controller(struct net_device *dev)
-{
- nv_do_nic_poll((unsigned long) dev);
-}
-#endif
-
-static void nv_do_stats_poll(unsigned long data)
-{
- struct net_device *dev = (struct net_device *) data;
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- np->estats.tx_bytes += readl(base + NvRegTxCnt);
- np->estats.tx_zero_rexmt += readl(base + NvRegTxZeroReXmt);
- np->estats.tx_one_rexmt += readl(base + NvRegTxOneReXmt);
- np->estats.tx_many_rexmt += readl(base + NvRegTxManyReXmt);
- np->estats.tx_late_collision += readl(base + NvRegTxLateCol);
- np->estats.tx_fifo_errors += readl(base + NvRegTxUnderflow);
- np->estats.tx_carrier_errors += readl(base + NvRegTxLossCarrier);
- np->estats.tx_excess_deferral += readl(base + NvRegTxExcessDef);
- np->estats.tx_retry_error += readl(base + NvRegTxRetryErr);
- np->estats.tx_deferral += readl(base + NvRegTxDef);
- np->estats.tx_packets += readl(base + NvRegTxFrame);
- np->estats.tx_pause += readl(base + NvRegTxPause);
- np->estats.rx_frame_error += readl(base + NvRegRxFrameErr);
- np->estats.rx_extra_byte += readl(base + NvRegRxExtraByte);
- np->estats.rx_late_collision += readl(base + NvRegRxLateCol);
- np->estats.rx_runt += readl(base + NvRegRxRunt);
- np->estats.rx_frame_too_long += readl(base + NvRegRxFrameTooLong);
- np->estats.rx_over_errors += readl(base + NvRegRxOverflow);
- np->estats.rx_crc_errors += readl(base + NvRegRxFCSErr);
- np->estats.rx_frame_align_error += readl(base + NvRegRxFrameAlignErr);
- np->estats.rx_length_error += readl(base + NvRegRxLenErr);
- np->estats.rx_unicast += readl(base + NvRegRxUnicast);
- np->estats.rx_multicast += readl(base + NvRegRxMulticast);
- np->estats.rx_broadcast += readl(base + NvRegRxBroadcast);
- np->estats.rx_bytes += readl(base + NvRegRxCnt);
- np->estats.rx_pause += readl(base + NvRegRxPause);
- np->estats.rx_drop_frame += readl(base + NvRegRxDropFrame);
- np->estats.rx_packets =
- np->estats.rx_unicast +
- np->estats.rx_multicast +
- np->estats.rx_broadcast;
- np->estats.rx_errors_total =
- np->estats.rx_crc_errors +
- np->estats.rx_over_errors +
- np->estats.rx_frame_error +
- (np->estats.rx_frame_align_error - np->estats.rx_extra_byte) +
- np->estats.rx_late_collision +
- np->estats.rx_runt +
- np->estats.rx_frame_too_long;
-
- if (!np->in_shutdown)
- mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL);
-}
-
-static void nv_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
-{
- struct fe_priv *np = netdev_priv(dev);
- strcpy(info->driver, "forcedeth");
- strcpy(info->version, FORCEDETH_VERSION);
- strcpy(info->bus_info, pci_name(np->pci_dev));
-}
-
-static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- wolinfo->supported = WAKE_MAGIC;
-
- spin_lock_irq(&np->lock);
- if (np->wolenabled)
- wolinfo->wolopts = WAKE_MAGIC;
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 flags = 0;
-
- if (wolinfo->wolopts == 0) {
- np->wolenabled = 0;
- } else if (wolinfo->wolopts & WAKE_MAGIC) {
- np->wolenabled = 1;
- flags = NVREG_WAKEUPFLAGS_ENABLE;
- }
- if (netif_running(dev)) {
- spin_lock_irq(&np->lock);
- writel(flags, base + NvRegWakeUpFlags);
- spin_unlock_irq(&np->lock);
- }
- return 0;
-}
-
-static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
- int adv;
-
- spin_lock_irq(&np->lock);
- ecmd->port = PORT_MII;
- if (!netif_running(dev)) {
- /* We do not track link speed / duplex setting if the
- * interface is disabled. Force a link check */
- if (nv_update_linkspeed(dev)) {
- if (!netif_carrier_ok(dev))
- netif_carrier_on(dev);
- } else {
- if (netif_carrier_ok(dev))
- netif_carrier_off(dev);
- }
- }
-
- if (netif_carrier_ok(dev)) {
- switch(np->linkspeed & (NVREG_LINKSPEED_MASK)) {
- case NVREG_LINKSPEED_10:
- ecmd->speed = SPEED_10;
- break;
- case NVREG_LINKSPEED_100:
- ecmd->speed = SPEED_100;
- break;
- case NVREG_LINKSPEED_1000:
- ecmd->speed = SPEED_1000;
- break;
- }
- ecmd->duplex = DUPLEX_HALF;
- if (np->duplex)
- ecmd->duplex = DUPLEX_FULL;
- } else {
- ecmd->speed = -1;
- ecmd->duplex = -1;
- }
-
- ecmd->autoneg = np->autoneg;
-
- ecmd->advertising = ADVERTISED_MII;
- if (np->autoneg) {
- ecmd->advertising |= ADVERTISED_Autoneg;
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- if (adv & ADVERTISE_10HALF)
- ecmd->advertising |= ADVERTISED_10baseT_Half;
- if (adv & ADVERTISE_10FULL)
- ecmd->advertising |= ADVERTISED_10baseT_Full;
- if (adv & ADVERTISE_100HALF)
- ecmd->advertising |= ADVERTISED_100baseT_Half;
- if (adv & ADVERTISE_100FULL)
- ecmd->advertising |= ADVERTISED_100baseT_Full;
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- if (adv & ADVERTISE_1000FULL)
- ecmd->advertising |= ADVERTISED_1000baseT_Full;
- }
- }
- ecmd->supported = (SUPPORTED_Autoneg |
- SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
- SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
- SUPPORTED_MII);
- if (np->gigabit == PHY_GIGABIT)
- ecmd->supported |= SUPPORTED_1000baseT_Full;
-
- ecmd->phy_address = np->phyaddr;
- ecmd->transceiver = XCVR_EXTERNAL;
-
- /* ignore maxtxpkt, maxrxpkt for now */
- spin_unlock_irq(&np->lock);
- return 0;
-}
-
-static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (ecmd->port != PORT_MII)
- return -EINVAL;
- if (ecmd->transceiver != XCVR_EXTERNAL)
- return -EINVAL;
- if (ecmd->phy_address != np->phyaddr) {
- /* TODO: support switching between multiple phys. Should be
- * trivial, but not enabled due to lack of test hardware. */
- return -EINVAL;
- }
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- u32 mask;
-
- mask = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
- ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full;
- if (np->gigabit == PHY_GIGABIT)
- mask |= ADVERTISED_1000baseT_Full;
-
- if ((ecmd->advertising & mask) == 0)
- return -EINVAL;
-
- } else if (ecmd->autoneg == AUTONEG_DISABLE) {
-		/* Note: autonegotiation disabled, speed 1000 intentionally
-		 * forbidden - no one should need that. */
-
- if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100)
- return -EINVAL;
- if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL)
- return -EINVAL;
- } else {
- return -EINVAL;
- }
-
- netif_carrier_off(dev);
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- }
-
- if (ecmd->autoneg == AUTONEG_ENABLE) {
- int adv, bmcr;
-
- np->autoneg = 1;
-
- /* advertise only what has been requested */
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
- if (ecmd->advertising & ADVERTISED_10baseT_Half)
- adv |= ADVERTISE_10HALF;
- if (ecmd->advertising & ADVERTISED_10baseT_Full)
- adv |= ADVERTISE_10FULL;
- if (ecmd->advertising & ADVERTISED_100baseT_Half)
- adv |= ADVERTISE_100HALF;
- if (ecmd->advertising & ADVERTISED_100baseT_Full)
- adv |= ADVERTISE_100FULL;
-		if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */
- adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- adv |= ADVERTISE_PAUSE_ASYM;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- if (ecmd->advertising & ADVERTISED_1000baseT_Full)
- adv |= ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_CTRL1000, adv);
- }
-
- if (netif_running(dev))
- printk(KERN_INFO "%s: link down.\n", dev->name);
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
- bmcr |= BMCR_ANENABLE;
- /* reset the phy in order for settings to stick,
- * and cause autoneg to start */
- if (phy_reset(dev, bmcr)) {
- printk(KERN_INFO "%s: phy reset failed\n", dev->name);
- return -EINVAL;
- }
- } else {
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- }
- } else {
- int adv, bmcr;
-
- np->autoneg = 0;
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_10HALF;
- if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_10FULL;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF)
- adv |= ADVERTISE_100HALF;
- if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL)
- adv |= ADVERTISE_100FULL;
- np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE);
-		if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) { /* for rx we set both advertisements but disable tx pause */
- adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
- np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- }
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) {
- adv |= ADVERTISE_PAUSE_ASYM;
- np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
- }
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
- np->fixed_mode = adv;
-
- if (np->gigabit == PHY_GIGABIT) {
- adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
- adv &= ~ADVERTISE_1000FULL;
- mii_rw(dev, np->phyaddr, MII_CTRL1000, adv);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_SPEED1000|BMCR_FULLDPLX);
- if (np->fixed_mode & (ADVERTISE_10FULL|ADVERTISE_100FULL))
- bmcr |= BMCR_FULLDPLX;
- if (np->fixed_mode & (ADVERTISE_100HALF|ADVERTISE_100FULL))
- bmcr |= BMCR_SPEED100;
- if (np->phy_oui == PHY_OUI_MARVELL) {
- /* reset the phy in order for forced mode settings to stick */
- if (phy_reset(dev, bmcr)) {
- printk(KERN_INFO "%s: phy reset failed\n", dev->name);
- return -EINVAL;
- }
- } else {
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- if (netif_running(dev)) {
- /* Wait a bit and then reconfigure the nic. */
- udelay(10);
- nv_linkchange(dev);
- }
- }
- }
-
- if (netif_running(dev)) {
- nv_start_rx(dev);
- nv_start_tx(dev);
- nv_enable_irq(dev);
- }
-
- return 0;
-}
-
-#define FORCEDETH_REGS_VER 1
-
-static int nv_get_regs_len(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- return np->register_size;
-}
-
-static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u32 *rbuf = buf;
- int i;
-
- regs->version = FORCEDETH_REGS_VER;
- spin_lock_irq(&np->lock);
- for (i = 0;i <= np->register_size/sizeof(u32); i++)
- rbuf[i] = readl(base + i*sizeof(u32));
- spin_unlock_irq(&np->lock);
-}
-
-static int nv_nway_reset(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int ret;
-
- if (np->autoneg) {
- int bmcr;
-
- netif_carrier_off(dev);
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- printk(KERN_INFO "%s: link down.\n", dev->name);
- }
-
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
- bmcr |= BMCR_ANENABLE;
- /* reset the phy in order for settings to stick*/
- if (phy_reset(dev, bmcr)) {
- printk(KERN_INFO "%s: phy reset failed\n", dev->name);
- return -EINVAL;
- }
- } else {
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- }
-
- if (netif_running(dev)) {
- nv_start_rx(dev);
- nv_start_tx(dev);
- nv_enable_irq(dev);
- }
- ret = 0;
- } else {
- ret = -EINVAL;
- }
-
- return ret;
-}
-
-static int nv_set_tso(struct net_device *dev, u32 value)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if ((np->driver_data & DEV_HAS_CHECKSUM))
- return ethtool_op_set_tso(dev, value);
- else
- return -EOPNOTSUPP;
-}
-
-static void nv_get_ringparam(struct net_device *dev, struct ethtool_ringparam* ring)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- ring->rx_max_pending = (np->desc_ver == DESC_VER_1) ? RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3;
- ring->rx_mini_max_pending = 0;
- ring->rx_jumbo_max_pending = 0;
- ring->tx_max_pending = (np->desc_ver == DESC_VER_1) ? RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3;
-
- ring->rx_pending = np->rx_ring_size;
- ring->rx_mini_pending = 0;
- ring->rx_jumbo_pending = 0;
- ring->tx_pending = np->tx_ring_size;
-}
-
-static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ring)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- u8 *rxtx_ring, *rx_skbuff, *tx_skbuff, *rx_dma, *tx_dma, *tx_dma_len;
- dma_addr_t ring_addr;
-
- if (ring->rx_pending < RX_RING_MIN ||
- ring->tx_pending < TX_RING_MIN ||
- ring->rx_mini_pending != 0 ||
- ring->rx_jumbo_pending != 0 ||
- (np->desc_ver == DESC_VER_1 &&
- (ring->rx_pending > RING_MAX_DESC_VER_1 ||
- ring->tx_pending > RING_MAX_DESC_VER_1)) ||
- (np->desc_ver != DESC_VER_1 &&
- (ring->rx_pending > RING_MAX_DESC_VER_2_3 ||
- ring->tx_pending > RING_MAX_DESC_VER_2_3))) {
- return -EINVAL;
- }
-
- /* allocate new rings */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- rxtx_ring = pci_alloc_consistent(np->pci_dev,
- sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending),
- &ring_addr);
- } else {
- rxtx_ring = pci_alloc_consistent(np->pci_dev,
- sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending),
- &ring_addr);
- }
- rx_skbuff = kmalloc(sizeof(struct sk_buff*) * ring->rx_pending, GFP_KERNEL);
- rx_dma = kmalloc(sizeof(dma_addr_t) * ring->rx_pending, GFP_KERNEL);
- tx_skbuff = kmalloc(sizeof(struct sk_buff*) * ring->tx_pending, GFP_KERNEL);
- tx_dma = kmalloc(sizeof(dma_addr_t) * ring->tx_pending, GFP_KERNEL);
- tx_dma_len = kmalloc(sizeof(unsigned int) * ring->tx_pending, GFP_KERNEL);
- if (!rxtx_ring || !rx_skbuff || !rx_dma || !tx_skbuff || !tx_dma || !tx_dma_len) {
- /* fall back to old rings */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- if (rxtx_ring)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending),
- rxtx_ring, ring_addr);
- } else {
- if (rxtx_ring)
- pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending),
- rxtx_ring, ring_addr);
- }
- if (rx_skbuff)
- kfree(rx_skbuff);
- if (rx_dma)
- kfree(rx_dma);
- if (tx_skbuff)
- kfree(tx_skbuff);
- if (tx_dma)
- kfree(tx_dma);
- if (tx_dma_len)
- kfree(tx_dma_len);
- goto exit;
- }
-
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain queues */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- /* delete queues */
- free_rings(dev);
- }
-
- /* set new values */
- np->rx_ring_size = ring->rx_pending;
- np->tx_ring_size = ring->tx_pending;
- np->tx_limit_stop = ring->tx_pending - TX_LIMIT_DIFFERENCE;
- np->tx_limit_start = ring->tx_pending - TX_LIMIT_DIFFERENCE - 1;
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig = (struct ring_desc*)rxtx_ring;
- np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size];
- } else {
- np->rx_ring.ex = (struct ring_desc_ex*)rxtx_ring;
- np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size];
- }
- np->rx_skbuff = (struct sk_buff**)rx_skbuff;
- np->rx_dma = (dma_addr_t*)rx_dma;
- np->tx_skbuff = (struct sk_buff**)tx_skbuff;
- np->tx_dma = (dma_addr_t*)tx_dma;
- np->tx_dma_len = (unsigned int*)tx_dma_len;
- np->ring_addr = ring_addr;
-
- memset(np->rx_skbuff, 0, sizeof(struct sk_buff*) * np->rx_ring_size);
- memset(np->rx_dma, 0, sizeof(dma_addr_t) * np->rx_ring_size);
- memset(np->tx_skbuff, 0, sizeof(struct sk_buff*) * np->tx_ring_size);
- memset(np->tx_dma, 0, sizeof(dma_addr_t) * np->tx_ring_size);
- memset(np->tx_dma_len, 0, sizeof(unsigned int) * np->tx_ring_size);
-
- if (netif_running(dev)) {
- /* reinit driver view of the queues */
- set_bufsize(dev);
- if (nv_init_ring(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
-
- /* reinit nic view of the queues */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
-
- /* restart engines */
- nv_start_rx(dev);
- nv_start_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- nv_enable_irq(dev);
- }
- return 0;
-exit:
- return -ENOMEM;
-}
-
-static void nv_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- pause->autoneg = (np->pause_flags & NV_PAUSEFRAME_AUTONEG) != 0;
- pause->rx_pause = (np->pause_flags & NV_PAUSEFRAME_RX_ENABLE) != 0;
- pause->tx_pause = (np->pause_flags & NV_PAUSEFRAME_TX_ENABLE) != 0;
-}
-
-static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause)
-{
- struct fe_priv *np = netdev_priv(dev);
- int adv, bmcr;
-
- if ((!np->autoneg && np->duplex == 0) ||
- (np->autoneg && !pause->autoneg && np->duplex == 0)) {
-		printk(KERN_INFO "%s: cannot set pause settings when forced link is in half duplex.\n",
- dev->name);
- return -EINVAL;
- }
- if (pause->tx_pause && !(np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)) {
- printk(KERN_INFO "%s: hardware does not support tx pause frames.\n", dev->name);
- return -EINVAL;
- }
-
- netif_carrier_off(dev);
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- netif_tx_lock_bh(dev);
- spin_lock(&np->lock);
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- spin_unlock(&np->lock);
- netif_tx_unlock_bh(dev);
- }
-
- np->pause_flags &= ~(NV_PAUSEFRAME_RX_REQ|NV_PAUSEFRAME_TX_REQ);
- if (pause->rx_pause)
- np->pause_flags |= NV_PAUSEFRAME_RX_REQ;
- if (pause->tx_pause)
- np->pause_flags |= NV_PAUSEFRAME_TX_REQ;
-
- if (np->autoneg && pause->autoneg) {
- np->pause_flags |= NV_PAUSEFRAME_AUTONEG;
-
- adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
- adv &= ~(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
-		if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */
- adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
- if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
- adv |= ADVERTISE_PAUSE_ASYM;
- mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
-
- if (netif_running(dev))
- printk(KERN_INFO "%s: link down.\n", dev->name);
- bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
- mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
- } else {
- np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE);
- if (pause->rx_pause)
- np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
- if (pause->tx_pause)
- np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
-
- if (!netif_running(dev))
- nv_update_linkspeed(dev);
- else
- nv_update_pause(dev, np->pause_flags);
- }
-
- if (netif_running(dev)) {
- nv_start_rx(dev);
- nv_start_tx(dev);
- nv_enable_irq(dev);
- }
- return 0;
-}
-
-static u32 nv_get_rx_csum(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- return (np->rx_csum) != 0;
-}
-
-static int nv_set_rx_csum(struct net_device *dev, u32 data)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int retcode = 0;
-
- if (np->driver_data & DEV_HAS_CHECKSUM) {
- if (data) {
- np->rx_csum = 1;
- np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
- } else {
- np->rx_csum = 0;
- /* vlan is dependent on rx checksum offload */
- if (!(np->vlanctl_bits & NVREG_VLANCONTROL_ENABLE))
- np->txrxctl_bits &= ~NVREG_TXRXCTL_RXCHECK;
- }
- if (netif_running(dev)) {
- spin_lock_irq(&np->lock);
- writel(np->txrxctl_bits, base + NvRegTxRxControl);
- spin_unlock_irq(&np->lock);
- }
- } else {
- return -EINVAL;
- }
-
- return retcode;
-}
-
-static int nv_set_tx_csum(struct net_device *dev, u32 data)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_CHECKSUM)
- return ethtool_op_set_tx_hw_csum(dev, data);
- else
- return -EOPNOTSUPP;
-}
-
-static int nv_set_sg(struct net_device *dev, u32 data)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_CHECKSUM)
- return ethtool_op_set_sg(dev, data);
- else
- return -EOPNOTSUPP;
-}
-
-static int nv_get_stats_count(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_STATISTICS)
- return sizeof(struct nv_ethtool_stats)/sizeof(u64);
- else
- return 0;
-}
-
-static void nv_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *estats, u64 *buffer)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- /* update stats */
- nv_do_stats_poll((unsigned long)dev);
-
- memcpy(buffer, &np->estats, nv_get_stats_count(dev)*sizeof(u64));
-}
-
-static int nv_self_test_count(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
-
- if (np->driver_data & DEV_HAS_TEST_EXTENDED)
- return NV_TEST_COUNT_EXTENDED;
- else
- return NV_TEST_COUNT_BASE;
-}
-
-static int nv_link_test(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- int mii_status;
-
- mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
- mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
-
- /* check phy link status */
- if (!(mii_status & BMSR_LSTATUS))
- return 0;
- else
- return 1;
-}
-
-static int nv_register_test(struct net_device *dev)
-{
- u8 __iomem *base = get_hwbase(dev);
- int i = 0;
- u32 orig_read, new_read;
-
- do {
- orig_read = readl(base + nv_registers_test[i].reg);
-
- /* xor with mask to toggle bits */
- orig_read ^= nv_registers_test[i].mask;
-
- writel(orig_read, base + nv_registers_test[i].reg);
-
- new_read = readl(base + nv_registers_test[i].reg);
-
- if ((new_read & nv_registers_test[i].mask) != (orig_read & nv_registers_test[i].mask))
- return 0;
-
- /* restore original value */
- orig_read ^= nv_registers_test[i].mask;
- writel(orig_read, base + nv_registers_test[i].reg);
-
- } while (nv_registers_test[++i].reg != 0);
-
- return 1;
-}
-
-static int nv_interrupt_test(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int testcnt;
- u32 save_msi_flags, save_poll_interval = 0;
-
- if (netif_running(dev)) {
- /* free current irq */
- nv_free_irq(dev);
- save_poll_interval = readl(base+NvRegPollingInterval);
- }
-
- /* flag to test interrupt handler */
- np->intr_test = 0;
-
- /* setup test irq */
- save_msi_flags = np->msi_flags;
- np->msi_flags &= ~NV_MSI_X_VECTORS_MASK;
- np->msi_flags |= 0x001; /* setup 1 vector */
- if (nv_request_irq(dev, 1))
- return 0;
-
- /* setup timer interrupt */
- writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
-
- nv_enable_hw_interrupts(dev, NVREG_IRQ_TIMER);
-
- /* wait for at least one interrupt */
- msleep(100);
-
- spin_lock_irq(&np->lock);
-
- /* flag should be set within ISR */
- testcnt = np->intr_test;
- if (!testcnt)
- ret = 2;
-
- nv_disable_hw_interrupts(dev, NVREG_IRQ_TIMER);
- if (!(np->msi_flags & NV_MSI_X_ENABLED))
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- else
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
-
- spin_unlock_irq(&np->lock);
-
- nv_free_irq(dev);
-
- np->msi_flags = save_msi_flags;
-
- if (netif_running(dev)) {
- writel(save_poll_interval, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
- /* restore original irq */
- if (nv_request_irq(dev, 0))
- return 0;
- }
-
- return ret;
-}
-
-static int nv_loopback_test(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- struct sk_buff *tx_skb, *rx_skb;
- dma_addr_t test_dma_addr;
- u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
- u32 flags;
- int len, i, pkt_len;
- u8 *pkt_data;
- u32 filter_flags = 0;
- u32 misc1_flags = 0;
- int ret = 1;
-
- if (netif_running(dev)) {
- nv_disable_irq(dev);
- filter_flags = readl(base + NvRegPacketFilterFlags);
- misc1_flags = readl(base + NvRegMisc1);
- } else {
- nv_txrx_reset(dev);
- }
-
- /* reinit driver view of the rx queue */
- set_bufsize(dev);
- nv_init_ring(dev);
-
- /* setup hardware for loopback */
- writel(NVREG_MISC1_FORCE, base + NvRegMisc1);
- writel(NVREG_PFF_ALWAYS | NVREG_PFF_LOOPBACK, base + NvRegPacketFilterFlags);
-
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
-
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
-
- /* setup packet for tx */
- pkt_len = ETH_DATA_LEN;
- tx_skb = dev_alloc_skb(pkt_len);
- if (!tx_skb) {
- printk(KERN_ERR "dev_alloc_skb() failed during loopback test"
- " of %s\n", dev->name);
- ret = 0;
- goto out;
- }
- pkt_data = skb_put(tx_skb, pkt_len);
- for (i = 0; i < pkt_len; i++)
- pkt_data[i] = (u8)(i & 0xff);
- test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data,
- tx_skb->end-tx_skb->data, PCI_DMA_FROMDEVICE);
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->tx_ring.orig[0].buf = cpu_to_le32(test_dma_addr);
- np->tx_ring.orig[0].flaglen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra);
- } else {
- np->tx_ring.ex[0].bufhigh = cpu_to_le64(test_dma_addr) >> 32;
- np->tx_ring.ex[0].buflow = cpu_to_le64(test_dma_addr) & 0x0FFFFFFFF;
- np->tx_ring.ex[0].flaglen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra);
- }
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(get_hwbase(dev));
-
- msleep(500);
-
- /* check for rx of the packet */
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- flags = le32_to_cpu(np->rx_ring.orig[0].flaglen);
- len = nv_descr_getlength(&np->rx_ring.orig[0], np->desc_ver);
-
- } else {
- flags = le32_to_cpu(np->rx_ring.ex[0].flaglen);
- len = nv_descr_getlength_ex(&np->rx_ring.ex[0], np->desc_ver);
- }
-
- if (flags & NV_RX_AVAIL) {
- ret = 0;
- } else if (np->desc_ver == DESC_VER_1) {
- if (flags & NV_RX_ERROR)
- ret = 0;
- } else {
- if (flags & NV_RX2_ERROR) {
- ret = 0;
- }
- }
-
- if (ret) {
- if (len != pkt_len) {
- ret = 0;
- dprintk(KERN_DEBUG "%s: loopback len mismatch %d vs %d\n",
- dev->name, len, pkt_len);
- } else {
- rx_skb = np->rx_skbuff[0];
- for (i = 0; i < pkt_len; i++) {
- if (rx_skb->data[i] != (u8)(i & 0xff)) {
- ret = 0;
- dprintk(KERN_DEBUG "%s: loopback pattern check failed on byte %d\n",
- dev->name, i);
- break;
- }
- }
- }
- } else {
- dprintk(KERN_DEBUG "%s: loopback - did not receive test packet\n", dev->name);
- }
-
- pci_unmap_page(np->pci_dev, test_dma_addr,
- tx_skb->end-tx_skb->data,
- PCI_DMA_TODEVICE);
- dev_kfree_skb_any(tx_skb);
- out:
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
-
- if (netif_running(dev)) {
- writel(misc1_flags, base + NvRegMisc1);
- writel(filter_flags, base + NvRegPacketFilterFlags);
- nv_enable_irq(dev);
- }
-
- return ret;
-}
-
-static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 *buffer)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int result;
- memset(buffer, 0, nv_self_test_count(dev)*sizeof(u64));
-
- if (!nv_link_test(dev)) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[0] = 1;
- }
-
- if (test->flags & ETH_TEST_FL_OFFLINE) {
- if (netif_running(dev)) {
- netif_stop_queue(dev);
- netif_poll_disable(dev);
- netif_tx_lock_bh(dev);
- spin_lock_irq(&np->lock);
- nv_disable_hw_interrupts(dev, np->irqmask);
- if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- } else {
- writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
- }
- /* stop engines */
- nv_stop_rx(dev);
- nv_stop_tx(dev);
- nv_txrx_reset(dev);
- /* drain rx queue */
- nv_drain_rx(dev);
- nv_drain_tx(dev);
- spin_unlock_irq(&np->lock);
- netif_tx_unlock_bh(dev);
- }
-
- if (!nv_register_test(dev)) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[1] = 1;
- }
-
- result = nv_interrupt_test(dev);
- if (result != 1) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[2] = 1;
- }
- if (result == 0) {
- /* bail out */
- return;
- }
-
- if (!nv_loopback_test(dev)) {
- test->flags |= ETH_TEST_FL_FAILED;
- buffer[3] = 1;
- }
-
- if (netif_running(dev)) {
- /* reinit driver view of the rx queue */
- set_bufsize(dev);
- if (nv_init_ring(dev)) {
- if (!np->in_shutdown)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- }
- /* reinit nic view of the rx queue */
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
- pci_push(base);
- writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
- pci_push(base);
- /* restart rx engine */
- nv_start_rx(dev);
- nv_start_tx(dev);
- netif_start_queue(dev);
- netif_poll_enable(dev);
- nv_enable_hw_interrupts(dev, np->irqmask);
- }
- }
-}
-
-static void nv_get_strings(struct net_device *dev, u32 stringset, u8 *buffer)
-{
- switch (stringset) {
- case ETH_SS_STATS:
- memcpy(buffer, &nv_estats_str, nv_get_stats_count(dev)*sizeof(struct nv_ethtool_str));
- break;
- case ETH_SS_TEST:
- memcpy(buffer, &nv_etests_str, nv_self_test_count(dev)*sizeof(struct nv_ethtool_str));
- break;
- }
-}
-
-static const struct ethtool_ops ops = {
- .get_drvinfo = nv_get_drvinfo,
- .get_link = ethtool_op_get_link,
- .get_wol = nv_get_wol,
- .set_wol = nv_set_wol,
- .get_settings = nv_get_settings,
- .set_settings = nv_set_settings,
- .get_regs_len = nv_get_regs_len,
- .get_regs = nv_get_regs,
- .nway_reset = nv_nway_reset,
- .get_perm_addr = ethtool_op_get_perm_addr,
- .get_tso = ethtool_op_get_tso,
- .set_tso = nv_set_tso,
- .get_ringparam = nv_get_ringparam,
- .set_ringparam = nv_set_ringparam,
- .get_pauseparam = nv_get_pauseparam,
- .set_pauseparam = nv_set_pauseparam,
- .get_rx_csum = nv_get_rx_csum,
- .set_rx_csum = nv_set_rx_csum,
- .get_tx_csum = ethtool_op_get_tx_csum,
- .set_tx_csum = nv_set_tx_csum,
- .get_sg = ethtool_op_get_sg,
- .set_sg = nv_set_sg,
- .get_strings = nv_get_strings,
- .get_stats_count = nv_get_stats_count,
- .get_ethtool_stats = nv_get_ethtool_stats,
- .self_test_count = nv_self_test_count,
- .self_test = nv_self_test,
-};
-
-static void nv_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
-{
- struct fe_priv *np = get_nvpriv(dev);
-
- spin_lock_irq(&np->lock);
-
- /* save vlan group */
- np->vlangrp = grp;
-
- if (grp) {
- /* enable vlan on MAC */
- np->txrxctl_bits |= NVREG_TXRXCTL_VLANSTRIP | NVREG_TXRXCTL_VLANINS;
- } else {
- /* disable vlan on MAC */
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANSTRIP;
- np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANINS;
- }
-
- writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
-
- spin_unlock_irq(&np->lock);
-};
-
-static void nv_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid)
-{
- /* nothing to do */
-};
-
-static int nv_open(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
- int ret = 1;
- int oom, i;
-
- dprintk(KERN_DEBUG "nv_open: begin\n");
-
- /* erase previous misconfiguration */
- if (np->driver_data & DEV_HAS_POWER_CNTRL)
- nv_mac_reset(dev);
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(0, base + NvRegPacketFilterFlags);
-
- writel(0, base + NvRegTransmitterControl);
- writel(0, base + NvRegReceiverControl);
-
- writel(0, base + NvRegAdapterControl);
-
- if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)
- writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame);
-
- /* initialize descriptor rings */
- set_bufsize(dev);
- oom = nv_init_ring(dev);
-
- writel(0, base + NvRegLinkSpeed);
- writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
- nv_txrx_reset(dev);
- writel(0, base + NvRegUnknownSetupReg6);
-
- np->in_shutdown = 0;
-
- /* give hw rings */
- setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
- writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
- base + NvRegRingSizes);
-
- writel(np->linkspeed, base + NvRegLinkSpeed);
- if (np->desc_ver == DESC_VER_1)
- writel(NVREG_TX_WM_DESC1_DEFAULT, base + NvRegTxWatermark);
- else
- writel(NVREG_TX_WM_DESC2_3_DEFAULT, base + NvRegTxWatermark);
- writel(np->txrxctl_bits, base + NvRegTxRxControl);
- writel(np->vlanctl_bits, base + NvRegVlanControl);
- pci_push(base);
- writel(NVREG_TXRXCTL_BIT1|np->txrxctl_bits, base + NvRegTxRxControl);
- reg_delay(dev, NvRegUnknownSetupReg5, NVREG_UNKSETUP5_BIT31, NVREG_UNKSETUP5_BIT31,
- NV_SETUP5_DELAY, NV_SETUP5_DELAYMAX,
- KERN_INFO "open: SetupReg5, Bit 31 remained off\n");
-
- writel(0, base + NvRegUnknownSetupReg4);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
-
- writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1);
- writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus);
- writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags);
- writel(np->rx_buf_sz, base + NvRegOffloadConfig);
-
- writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus);
- get_random_bytes(&i, sizeof(i));
- writel(NVREG_RNDSEED_FORCE | (i&NVREG_RNDSEED_MASK), base + NvRegRandomSeed);
- writel(NVREG_TX_DEFERRAL_DEFAULT, base + NvRegTxDeferral);
- writel(NVREG_RX_DEFERRAL_DEFAULT, base + NvRegRxDeferral);
- if (poll_interval == -1) {
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT)
- writel(NVREG_POLL_DEFAULT_THROUGHPUT, base + NvRegPollingInterval);
- else
- writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
- }
- else
- writel(poll_interval & 0xFFFF, base + NvRegPollingInterval);
- writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
- writel((np->phyaddr << NVREG_ADAPTCTL_PHYSHIFT)|NVREG_ADAPTCTL_PHYVALID|NVREG_ADAPTCTL_RUNNING,
- base + NvRegAdapterControl);
- writel(NVREG_MIISPEED_BIT8|NVREG_MIIDELAY, base + NvRegMIISpeed);
- writel(NVREG_UNKSETUP4_VAL, base + NvRegUnknownSetupReg4);
- if (np->wolenabled)
- writel(NVREG_WAKEUPFLAGS_ENABLE , base + NvRegWakeUpFlags);
-
- i = readl(base + NvRegPowerState);
- if ( (i & NVREG_POWERSTATE_POWEREDUP) == 0)
- writel(NVREG_POWERSTATE_POWEREDUP|i, base + NvRegPowerState);
-
- pci_push(base);
- udelay(10);
- writel(readl(base + NvRegPowerState) | NVREG_POWERSTATE_VALID, base + NvRegPowerState);
-
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- writel(NVREG_MIISTAT_MASK2, base + NvRegMIIStatus);
- writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
- pci_push(base);
-
- if (nv_request_irq(dev, 0)) {
- goto out_drain;
- }
-
- /* ask for interrupts */
- nv_enable_hw_interrupts(dev, np->irqmask);
-
- spin_lock_irq(&np->lock);
- writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
- writel(0, base + NvRegMulticastAddrB);
- writel(0, base + NvRegMulticastMaskA);
- writel(0, base + NvRegMulticastMaskB);
- writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags);
- /* One manual link speed update: Interrupts are enabled, future link
- * speed changes cause interrupts and are handled by nv_link_irq().
- */
- {
- u32 miistat;
- miistat = readl(base + NvRegMIIStatus);
- writel(NVREG_MIISTAT_MASK, base + NvRegMIIStatus);
- dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat);
- }
-	/* set linkspeed to an invalid value to force nv_update_linkspeed
-	 * to init the hw */
- np->linkspeed = 0;
- ret = nv_update_linkspeed(dev);
- nv_start_rx(dev);
- nv_start_tx(dev);
- netif_start_queue(dev);
- netif_poll_enable(dev);
-
- if (ret) {
- netif_carrier_on(dev);
- } else {
- printk("%s: no link during initialization.\n", dev->name);
- netif_carrier_off(dev);
- }
- if (oom)
- mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
-
- /* start statistics timer */
- if (np->driver_data & DEV_HAS_STATISTICS)
- mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL);
-
- spin_unlock_irq(&np->lock);
-
- return 0;
-out_drain:
- drain_ring(dev);
- return ret;
-}
-
-static int nv_close(struct net_device *dev)
-{
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base;
-
- spin_lock_irq(&np->lock);
- np->in_shutdown = 1;
- spin_unlock_irq(&np->lock);
- netif_poll_disable(dev);
- synchronize_irq(dev->irq);
-
- del_timer_sync(&np->oom_kick);
- del_timer_sync(&np->nic_poll);
- del_timer_sync(&np->stats_poll);
-
- netif_stop_queue(dev);
- spin_lock_irq(&np->lock);
- nv_stop_tx(dev);
- nv_stop_rx(dev);
- nv_txrx_reset(dev);
-
- /* disable interrupts on the nic or we will lock up */
- base = get_hwbase(dev);
- nv_disable_hw_interrupts(dev, np->irqmask);
- pci_push(base);
- dprintk(KERN_INFO "%s: Irqmask is zero again\n", dev->name);
-
- spin_unlock_irq(&np->lock);
-
- nv_free_irq(dev);
-
- drain_ring(dev);
-
- if (np->wolenabled)
- nv_start_rx(dev);
-
- /* FIXME: power down nic */
-
- return 0;
-}
-
-static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
-{
- struct net_device *dev;
- struct fe_priv *np;
- unsigned long addr;
- u8 __iomem *base;
- int err, i;
- u32 powerstate, txreg;
-
- dev = alloc_etherdev(sizeof(struct fe_priv));
- err = -ENOMEM;
- if (!dev)
- goto out;
-
- np = netdev_priv(dev);
- np->pci_dev = pci_dev;
- spin_lock_init(&np->lock);
- SET_MODULE_OWNER(dev);
- SET_NETDEV_DEV(dev, &pci_dev->dev);
-
- init_timer(&np->oom_kick);
- np->oom_kick.data = (unsigned long) dev;
- np->oom_kick.function = &nv_do_rx_refill; /* timer handler */
- init_timer(&np->nic_poll);
- np->nic_poll.data = (unsigned long) dev;
- np->nic_poll.function = &nv_do_nic_poll; /* timer handler */
- init_timer(&np->stats_poll);
- np->stats_poll.data = (unsigned long) dev;
- np->stats_poll.function = &nv_do_stats_poll; /* timer handler */
-
- err = pci_enable_device(pci_dev);
- if (err) {
- printk(KERN_INFO "forcedeth: pci_enable_dev failed (%d) for device %s\n",
- err, pci_name(pci_dev));
- goto out_free;
- }
-
- pci_set_master(pci_dev);
-
- err = pci_request_regions(pci_dev, DRV_NAME);
- if (err < 0)
- goto out_disable;
-
- if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS))
- np->register_size = NV_PCI_REGSZ_VER2;
- else
- np->register_size = NV_PCI_REGSZ_VER1;
-
- err = -EINVAL;
- addr = 0;
- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- dprintk(KERN_DEBUG "%s: resource %d start %p len %ld flags 0x%08lx.\n",
- pci_name(pci_dev), i, (void*)pci_resource_start(pci_dev, i),
- pci_resource_len(pci_dev, i),
- pci_resource_flags(pci_dev, i));
- if (pci_resource_flags(pci_dev, i) & IORESOURCE_MEM &&
- pci_resource_len(pci_dev, i) >= np->register_size) {
- addr = pci_resource_start(pci_dev, i);
- break;
- }
- }
- if (i == DEVICE_COUNT_RESOURCE) {
- printk(KERN_INFO "forcedeth: Couldn't find register window for device %s.\n",
- pci_name(pci_dev));
- goto out_relreg;
- }
-
- /* copy of driver data */
- np->driver_data = id->driver_data;
-
- /* handle different descriptor versions */
- if (id->driver_data & DEV_HAS_HIGH_DMA) {
- /* packet format 3: supports 40-bit addressing */
- np->desc_ver = DESC_VER_3;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_3;
- if (dma_64bit) {
- if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n",
- pci_name(pci_dev));
- } else {
-			/* reset the phy in order for settings to stick */
- printk(KERN_INFO "forcedeth: using HIGHDMA\n");
- }
- if (pci_set_consistent_dma_mask(pci_dev, DMA_39BIT_MASK)) {
- printk(KERN_INFO "forcedeth: 64-bit DMA (consistent) failed, using 32-bit ring buffers for device %s.\n",
- pci_name(pci_dev));
- }
- }
- } else if (id->driver_data & DEV_HAS_LARGEDESC) {
- /* packet format 2: supports jumbo frames */
- np->desc_ver = DESC_VER_2;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_2;
- } else {
- /* original packet format */
- np->desc_ver = DESC_VER_1;
- np->txrxctl_bits = NVREG_TXRXCTL_DESC_1;
- }
-
- np->pkt_limit = NV_PKTLIMIT_1;
- if (id->driver_data & DEV_HAS_LARGEDESC)
- np->pkt_limit = NV_PKTLIMIT_2;
-
- if (id->driver_data & DEV_HAS_CHECKSUM) {
- np->rx_csum = 1;
- np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
- dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;
-#ifdef NETIF_F_TSO
- dev->features |= NETIF_F_TSO;
-#endif
- }
-
- np->vlanctl_bits = 0;
- if (id->driver_data & DEV_HAS_VLAN) {
- np->vlanctl_bits = NVREG_VLANCONTROL_ENABLE;
- dev->features |= NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX;
- dev->vlan_rx_register = nv_vlan_rx_register;
- dev->vlan_rx_kill_vid = nv_vlan_rx_kill_vid;
- }
-
- np->msi_flags = 0;
- if ((id->driver_data & DEV_HAS_MSI) && msi) {
- np->msi_flags |= NV_MSI_CAPABLE;
- }
- if ((id->driver_data & DEV_HAS_MSI_X) && msix) {
- np->msi_flags |= NV_MSI_X_CAPABLE;
- }
-
- np->pause_flags = NV_PAUSEFRAME_RX_CAPABLE | NV_PAUSEFRAME_RX_REQ | NV_PAUSEFRAME_AUTONEG;
- if (id->driver_data & DEV_HAS_PAUSEFRAME_TX) {
- np->pause_flags |= NV_PAUSEFRAME_TX_CAPABLE | NV_PAUSEFRAME_TX_REQ;
- }
-
-
- err = -ENOMEM;
- np->base = ioremap(addr, np->register_size);
- if (!np->base)
- goto out_relreg;
- dev->base_addr = (unsigned long)np->base;
-
- dev->irq = pci_dev->irq;
-
- np->rx_ring_size = RX_RING_DEFAULT;
- np->tx_ring_size = TX_RING_DEFAULT;
- np->tx_limit_stop = np->tx_ring_size - TX_LIMIT_DIFFERENCE;
- np->tx_limit_start = np->tx_ring_size - TX_LIMIT_DIFFERENCE - 1;
-
- if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
- np->rx_ring.orig = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size),
- &np->ring_addr);
- if (!np->rx_ring.orig)
- goto out_unmap;
- np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size];
- } else {
- np->rx_ring.ex = pci_alloc_consistent(pci_dev,
- sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size),
- &np->ring_addr);
- if (!np->rx_ring.ex)
- goto out_unmap;
- np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size];
- }
- np->rx_skbuff = kmalloc(sizeof(struct sk_buff*) * np->rx_ring_size, GFP_KERNEL);
- np->rx_dma = kmalloc(sizeof(dma_addr_t) * np->rx_ring_size, GFP_KERNEL);
- np->tx_skbuff = kmalloc(sizeof(struct sk_buff*) * np->tx_ring_size, GFP_KERNEL);
- np->tx_dma = kmalloc(sizeof(dma_addr_t) * np->tx_ring_size, GFP_KERNEL);
- np->tx_dma_len = kmalloc(sizeof(unsigned int) * np->tx_ring_size, GFP_KERNEL);
- if (!np->rx_skbuff || !np->rx_dma || !np->tx_skbuff || !np->tx_dma || !np->tx_dma_len)
- goto out_freering;
- memset(np->rx_skbuff, 0, sizeof(struct sk_buff*) * np->rx_ring_size);
- memset(np->rx_dma, 0, sizeof(dma_addr_t) * np->rx_ring_size);
- memset(np->tx_skbuff, 0, sizeof(struct sk_buff*) * np->tx_ring_size);
- memset(np->tx_dma, 0, sizeof(dma_addr_t) * np->tx_ring_size);
- memset(np->tx_dma_len, 0, sizeof(unsigned int) * np->tx_ring_size);
-
- dev->open = nv_open;
- dev->stop = nv_close;
- dev->hard_start_xmit = nv_start_xmit;
- dev->get_stats = nv_get_stats;
- dev->change_mtu = nv_change_mtu;
- dev->set_mac_address = nv_set_mac_address;
- dev->set_multicast_list = nv_set_multicast;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = nv_poll_controller;
-#endif
- dev->weight = 64;
-#ifdef CONFIG_FORCEDETH_NAPI
- dev->poll = nv_napi_poll;
-#endif
- SET_ETHTOOL_OPS(dev, &ops);
- dev->tx_timeout = nv_tx_timeout;
- dev->watchdog_timeo = NV_WATCHDOG_TIMEO;
-
- pci_set_drvdata(pci_dev, dev);
-
- /* read the mac address */
- base = get_hwbase(dev);
- np->orig_mac[0] = readl(base + NvRegMacAddrA);
- np->orig_mac[1] = readl(base + NvRegMacAddrB);
-
- /* check the workaround bit for correct mac address order */
- txreg = readl(base + NvRegTransmitPoll);
- if (txreg & NVREG_TRANSMITPOLL_MAC_ADDR_REV) {
- /* mac address is already in correct order */
- dev->dev_addr[0] = (np->orig_mac[0] >> 0) & 0xff;
- dev->dev_addr[1] = (np->orig_mac[0] >> 8) & 0xff;
- dev->dev_addr[2] = (np->orig_mac[0] >> 16) & 0xff;
- dev->dev_addr[3] = (np->orig_mac[0] >> 24) & 0xff;
- dev->dev_addr[4] = (np->orig_mac[1] >> 0) & 0xff;
- dev->dev_addr[5] = (np->orig_mac[1] >> 8) & 0xff;
- } else {
- /* need to reverse mac address to correct order */
- dev->dev_addr[0] = (np->orig_mac[1] >> 8) & 0xff;
- dev->dev_addr[1] = (np->orig_mac[1] >> 0) & 0xff;
- dev->dev_addr[2] = (np->orig_mac[0] >> 24) & 0xff;
- dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff;
- dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff;
- dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff;
-		/* set permanent address to be correct as well */
- np->orig_mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
- (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
- np->orig_mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
- writel(txreg|NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
- }
- memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
-
- if (!is_valid_ether_addr(dev->perm_addr)) {
- /*
-		 * Bad MAC address. At least one BIOS sets the MAC address
-		 * to 01:23:45:67:89:ab
- */
-		printk(KERN_ERR "%s: Invalid MAC address detected: %02x:%02x:%02x:%02x:%02x:%02x\n",
- pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
- printk(KERN_ERR "Please complain to your hardware vendor. Switching to a random MAC.\n");
- dev->dev_addr[0] = 0x00;
- dev->dev_addr[1] = 0x00;
- dev->dev_addr[2] = 0x6c;
- get_random_bytes(&dev->dev_addr[3], 3);
- }
-
- dprintk(KERN_DEBUG "%s: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(pci_dev),
- dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
- dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
-
- /* set mac address */
- nv_copy_mac_to_hw(dev);
-
- /* disable WOL */
- writel(0, base + NvRegWakeUpFlags);
- np->wolenabled = 0;
-
- if (id->driver_data & DEV_HAS_POWER_CNTRL) {
- u8 revision_id;
- pci_read_config_byte(pci_dev, PCI_REVISION_ID, &revision_id);
-
- /* take phy and nic out of low power mode */
- powerstate = readl(base + NvRegPowerState2);
- powerstate &= ~NVREG_POWERSTATE2_POWERUP_MASK;
- if ((id->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 ||
- id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) &&
- revision_id >= 0xA3)
- powerstate |= NVREG_POWERSTATE2_POWERUP_REV_A3;
- writel(powerstate, base + NvRegPowerState2);
- }
-
- if (np->desc_ver == DESC_VER_1) {
- np->tx_flags = NV_TX_VALID;
- } else {
- np->tx_flags = NV_TX2_VALID;
- }
- if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) {
- np->irqmask = NVREG_IRQMASK_THROUGHPUT;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0003;
- } else {
- np->irqmask = NVREG_IRQMASK_CPU;
- if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
- np->msi_flags |= 0x0001;
- }
-
- if (id->driver_data & DEV_NEED_TIMERIRQ)
- np->irqmask |= NVREG_IRQ_TIMER;
- if (id->driver_data & DEV_NEED_LINKTIMER) {
- dprintk(KERN_INFO "%s: link timer on.\n", pci_name(pci_dev));
- np->need_linktimer = 1;
- np->link_timeout = jiffies + LINK_TIMEOUT;
- } else {
- dprintk(KERN_INFO "%s: link timer off.\n", pci_name(pci_dev));
- np->need_linktimer = 0;
- }
-
- /* find a suitable phy */
- for (i = 1; i <= 32; i++) {
- int id1, id2;
- int phyaddr = i & 0x1F;
-
- spin_lock_irq(&np->lock);
- id1 = mii_rw(dev, phyaddr, MII_PHYSID1, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id1 < 0 || id1 == 0xffff)
- continue;
- spin_lock_irq(&np->lock);
- id2 = mii_rw(dev, phyaddr, MII_PHYSID2, MII_READ);
- spin_unlock_irq(&np->lock);
- if (id2 < 0 || id2 == 0xffff)
- continue;
-
- np->phy_model = id2 & PHYID2_MODEL_MASK;
- id1 = (id1 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT;
- id2 = (id2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT;
- dprintk(KERN_DEBUG "%s: open: Found PHY %04x:%04x at address %d.\n",
- pci_name(pci_dev), id1, id2, phyaddr);
- np->phyaddr = phyaddr;
- np->phy_oui = id1 | id2;
- break;
- }
- if (i == 33) {
- printk(KERN_INFO "%s: open: Could not find a valid PHY.\n",
- pci_name(pci_dev));
- goto out_error;
- }
-
- /* reset it */
- phy_init(dev);
-
- /* set default link speed settings */
- np->linkspeed = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
- np->duplex = 0;
- np->autoneg = 1;
-
- err = register_netdev(dev);
- if (err) {
- printk(KERN_INFO "forcedeth: unable to register netdev: %d\n", err);
- goto out_error;
- }
- printk(KERN_INFO "%s: forcedeth.c: subsystem: %05x:%04x bound to %s\n",
- dev->name, pci_dev->subsystem_vendor, pci_dev->subsystem_device,
- pci_name(pci_dev));
-
- return 0;
-
-out_error:
- pci_set_drvdata(pci_dev, NULL);
-out_freering:
- free_rings(dev);
-out_unmap:
- iounmap(get_hwbase(dev));
-out_relreg:
- pci_release_regions(pci_dev);
-out_disable:
- pci_disable_device(pci_dev);
-out_free:
- free_netdev(dev);
-out:
- return err;
-}
-
-static void __devexit nv_remove(struct pci_dev *pci_dev)
-{
- struct net_device *dev = pci_get_drvdata(pci_dev);
- struct fe_priv *np = netdev_priv(dev);
- u8 __iomem *base = get_hwbase(dev);
-
- unregister_netdev(dev);
-
- /* special op: write back the misordered MAC address - otherwise
- * the next nv_probe would see a wrong address.
- */
- writel(np->orig_mac[0], base + NvRegMacAddrA);
- writel(np->orig_mac[1], base + NvRegMacAddrB);
-
- /* free all structures */
- free_rings(dev);
- iounmap(get_hwbase(dev));
- pci_release_regions(pci_dev);
- pci_disable_device(pci_dev);
- free_netdev(dev);
- pci_set_drvdata(pci_dev, NULL);
-}
-
-static struct pci_device_id pci_tbl[] = {
- { /* nForce Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce2 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_2),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_3),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_4),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_5),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_6),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* nForce3 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_7),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_8),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* CK804 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_9),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_10),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP04 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_11),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_12),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP51 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_13),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_14),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP55 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_16),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_17),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_18),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP61 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_19),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_20),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_21),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_22),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- { /* MCP65 Ethernet Controller */
- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_23),
- .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX|DEV_HAS_STATISTICS|DEV_HAS_TEST_EXTENDED,
- },
- {0,},
-};
-
-static struct pci_driver driver = {
- .name = "forcedeth",
- .id_table = pci_tbl,
- .probe = nv_probe,
- .remove = __devexit_p(nv_remove),
-};
-
-
-static int __init init_nic(void)
-{
- printk(KERN_INFO "forcedeth.c: Reverse Engineered nForce ethernet driver. Version %s.\n", FORCEDETH_VERSION);
- return pci_register_driver(&driver);
-}
-
-static void __exit exit_nic(void)
-{
- pci_unregister_driver(&driver);
-}
-
-module_param(max_interrupt_work, int, 0);
-MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt");
-module_param(optimization_mode, int, 0);
-MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer.");
-module_param(poll_interval, int, 0);
-MODULE_PARM_DESC(poll_interval, "Interval determines how frequently the timer interrupt is generated, computed as [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535.");
-module_param(msi, int, 0);
-MODULE_PARM_DESC(msi, "MSI interrupts are enabled by setting to 1 and disabled by setting to 0.");
-module_param(msix, int, 0);
-MODULE_PARM_DESC(msix, "MSIX interrupts are enabled by setting to 1 and disabled by setting to 0.");
-module_param(dma_64bit, int, 0);
-MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0.");
-
-MODULE_AUTHOR("Manfred Spraul <manfred@colorfullife.com>");
-MODULE_DESCRIPTION("Reverse Engineered nForce ethernet driver");
-MODULE_LICENSE("GPL");
-
-MODULE_DEVICE_TABLE(pci, pci_tbl);
-
-module_init(init_nic);
-module_exit(exit_nic);
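
The poll_interval description above only states the conversion in prose. Below is a minimal worked sketch of that conversion, assuming nothing beyond the formula value = (time_in_micro_secs * 100) / 2^10 documented in the MODULE_PARM_DESC; the helper name is purely illustrative and not part of the driver.

/* Illustrative helper (not part of forcedeth): convert a desired timer
 * interrupt period in microseconds into the poll_interval module
 * parameter, value = (period_us * 100) / 1024, clamped to 0..65535. */
static unsigned int forcedeth_poll_interval(unsigned int period_us)
{
	unsigned int value = (period_us * 100) / 1024;

	return value > 65535 ? 65535 : value;
}

/* Example: a period of roughly 1 ms gives (1000 * 100) / 1024 ~= 97,
 * i.e. "modprobe forcedeth optimization_mode=1 poll_interval=97". */
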
--- a/devices/r8169-2.6.22-ethercat.c Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,3203 +0,0 @@
-/******************************************************************************
- *
- * $Id$
- *
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
- *
- * This file is part of the IgH EtherCAT Master.
- *
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- *
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
- *
- *****************************************************************************/
-
-/*
- * r8169.c: RealTek 8169/8168/8101 ethernet driver.
- *
- * Copyright (c) 2002 ShuChen <shuchen@realtek.com.tw>
- * Copyright (c) 2003 - 2007 Francois Romieu <romieu@fr.zoreil.com>
- * Copyright (c) a lot of people too. Please respect their work.
- *
- * See MAINTAINERS file for support contact information.
- */
-
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/pci.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/delay.h>
-#include <linux/ethtool.h>
-#include <linux/mii.h>
-#include <linux/if_vlan.h>
-#include <linux/crc32.h>
-#include <linux/in.h>
-#include <linux/ip.h>
-#include <linux/tcp.h>
-#include <linux/init.h>
-#include <linux/dma-mapping.h>
-
-#include <asm/system.h>
-#include <asm/io.h>
-#include <asm/irq.h>
-
-#ifdef CONFIG_R8169_NAPI
-#define NAPI_SUFFIX "-NAPI"
-#else
-#define NAPI_SUFFIX ""
-#endif
-
-#define RTL8169_VERSION "2.2LK" NAPI_SUFFIX
-#define MODULENAME "r8169"
-#define PFX MODULENAME ": "
-
-#include "../globals.h"
-#include "ecdev.h"
-
-#ifdef RTL8169_DEBUG
-#define assert(expr) \
- if (!(expr)) { \
- printk( "Assertion failed! %s,%s,%s,line=%d\n", \
- #expr,__FILE__,__FUNCTION__,__LINE__); \
- }
-#define dprintk(fmt, args...) do { printk(PFX fmt, ## args); } while (0)
-#else
-#define assert(expr) do {} while (0)
-#define dprintk(fmt, args...) do {} while (0)
-#endif /* RTL8169_DEBUG */
-
-#define R8169_MSG_DEFAULT \
- (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN)
-
-#define TX_BUFFS_AVAIL(tp) \
- (tp->dirty_tx + NUM_TX_DESC - tp->cur_tx - 1)
-
-#ifdef CONFIG_R8169_NAPI
-#define rtl8169_rx_skb netif_receive_skb
-#define rtl8169_rx_hwaccel_skb vlan_hwaccel_receive_skb
-#define rtl8169_rx_quota(count, quota) min(count, quota)
-#else
-#define rtl8169_rx_skb netif_rx
-#define rtl8169_rx_hwaccel_skb vlan_hwaccel_rx
-#define rtl8169_rx_quota(count, quota) count
-#endif
-
-/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
-static const int max_interrupt_work = 20;
-
-/* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
- The RTL chips use a 64 element hash table based on the Ethernet CRC. */
-static const int multicast_filter_limit = 32;
-
-/* MAC address length */
-#define MAC_ADDR_LEN 6
-
-#define RX_FIFO_THRESH 7 /* 7 means NO threshold, Rx buffer level before first PCI xfer. */
-#define RX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
-#define TX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
-#define EarlyTxThld 0x3F /* 0x3F means NO early transmit */
-#define RxPacketMaxSize 0x3FE8 /* 16K - 1 - ETH_HLEN - VLAN - CRC... */
-#define SafeMtu 0x1c20 /* ... actually life sucks beyond ~7k */
-#define InterFrameGap 0x03 /* 3 means InterFrameGap = the shortest one */
-
-#define R8169_REGS_SIZE 256
-#define R8169_NAPI_WEIGHT 64
-#define NUM_TX_DESC 64 /* Number of Tx descriptor registers */
-#define NUM_RX_DESC 256 /* Number of Rx descriptor registers */
-#define RX_BUF_SIZE 1536 /* Rx Buffer size */
-#define R8169_TX_RING_BYTES (NUM_TX_DESC * sizeof(struct TxDesc))
-#define R8169_RX_RING_BYTES (NUM_RX_DESC * sizeof(struct RxDesc))
-
-#define RTL8169_TX_TIMEOUT (6*HZ)
-#define RTL8169_PHY_TIMEOUT (10*HZ)
-
-/* write/read MMIO register */
-#define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg))
-#define RTL_W16(reg, val16) writew ((val16), ioaddr + (reg))
-#define RTL_W32(reg, val32) writel ((val32), ioaddr + (reg))
-#define RTL_R8(reg) readb (ioaddr + (reg))
-#define RTL_R16(reg) readw (ioaddr + (reg))
-#define RTL_R32(reg) ((unsigned long) readl (ioaddr + (reg)))
-
-enum mac_version {
- RTL_GIGA_MAC_VER_01 = 0x01, // 8169
- RTL_GIGA_MAC_VER_02 = 0x02, // 8169S
- RTL_GIGA_MAC_VER_03 = 0x03, // 8110S
- RTL_GIGA_MAC_VER_04 = 0x04, // 8169SB
- RTL_GIGA_MAC_VER_05 = 0x05, // 8110SCd
- RTL_GIGA_MAC_VER_06 = 0x06, // 8110SCe
- RTL_GIGA_MAC_VER_11 = 0x0b, // 8168Bb
- RTL_GIGA_MAC_VER_12 = 0x0c, // 8168Be 8168Bf
- RTL_GIGA_MAC_VER_13 = 0x0d, // 8101Eb 8101Ec
- RTL_GIGA_MAC_VER_14 = 0x0e, // 8101
- RTL_GIGA_MAC_VER_15 = 0x0f // 8101
-};
-
-enum phy_version {
- RTL_GIGA_PHY_VER_C = 0x03, /* PHY Reg 0x03 bit0-3 == 0x0000 */
- RTL_GIGA_PHY_VER_D = 0x04, /* PHY Reg 0x03 bit0-3 == 0x0000 */
- RTL_GIGA_PHY_VER_E = 0x05, /* PHY Reg 0x03 bit0-3 == 0x0000 */
- RTL_GIGA_PHY_VER_F = 0x06, /* PHY Reg 0x03 bit0-3 == 0x0001 */
- RTL_GIGA_PHY_VER_G = 0x07, /* PHY Reg 0x03 bit0-3 == 0x0002 */
- RTL_GIGA_PHY_VER_H = 0x08, /* PHY Reg 0x03 bit0-3 == 0x0003 */
-};
-
-#define _R(NAME,MAC,MASK) \
- { .name = NAME, .mac_version = MAC, .RxConfigMask = MASK }
-
-static const struct {
- const char *name;
- u8 mac_version;
- u32 RxConfigMask; /* Clears the bits supported by this chip */
-} rtl_chip_info[] = {
- _R("RTL8169", RTL_GIGA_MAC_VER_01, 0xff7e1880), // 8169
- _R("RTL8169s", RTL_GIGA_MAC_VER_02, 0xff7e1880), // 8169S
- _R("RTL8110s", RTL_GIGA_MAC_VER_03, 0xff7e1880), // 8110S
- _R("RTL8169sb/8110sb", RTL_GIGA_MAC_VER_04, 0xff7e1880), // 8169SB
- _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_05, 0xff7e1880), // 8110SCd
- _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_06, 0xff7e1880), // 8110SCe
- _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_11, 0xff7e1880), // PCI-E
- _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_12, 0xff7e1880), // PCI-E
- _R("RTL8101e", RTL_GIGA_MAC_VER_13, 0xff7e1880), // PCI-E 8139
- _R("RTL8100e", RTL_GIGA_MAC_VER_14, 0xff7e1880), // PCI-E 8139
- _R("RTL8100e", RTL_GIGA_MAC_VER_15, 0xff7e1880) // PCI-E 8139
-};
-#undef _R
-
-enum cfg_version {
- RTL_CFG_0 = 0x00,
- RTL_CFG_1,
- RTL_CFG_2
-};
-
-static void rtl_hw_start_8169(struct net_device *);
-static void rtl_hw_start_8168(struct net_device *);
-static void rtl_hw_start_8101(struct net_device *);
-
-static struct pci_device_id rtl8169_pci_tbl[] = {
- { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8129), 0, 0, RTL_CFG_0 },
- { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8136), 0, 0, RTL_CFG_2 },
- { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8167), 0, 0, RTL_CFG_0 },
- { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8168), 0, 0, RTL_CFG_1 },
- { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), 0, 0, RTL_CFG_0 },
- { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), 0, 0, RTL_CFG_0 },
- { PCI_DEVICE(0x1259, 0xc107), 0, 0, RTL_CFG_0 },
- { PCI_DEVICE(0x16ec, 0x0116), 0, 0, RTL_CFG_0 },
- { PCI_VENDOR_ID_LINKSYS, 0x1032,
- PCI_ANY_ID, 0x0024, 0, 0, RTL_CFG_0 },
- {0,},
-};
-
-MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
-
-static int rx_copybreak = 200;
-static int use_dac;
-static struct {
- u32 msg_enable;
-} debug = { -1 };
-
-enum rtl_registers {
- MAC0 = 0, /* Ethernet hardware address. */
- MAC4 = 4,
- MAR0 = 8, /* Multicast filter. */
- CounterAddrLow = 0x10,
- CounterAddrHigh = 0x14,
- TxDescStartAddrLow = 0x20,
- TxDescStartAddrHigh = 0x24,
- TxHDescStartAddrLow = 0x28,
- TxHDescStartAddrHigh = 0x2c,
- FLASH = 0x30,
- ERSR = 0x36,
- ChipCmd = 0x37,
- TxPoll = 0x38,
- IntrMask = 0x3c,
- IntrStatus = 0x3e,
- TxConfig = 0x40,
- RxConfig = 0x44,
- RxMissed = 0x4c,
- Cfg9346 = 0x50,
- Config0 = 0x51,
- Config1 = 0x52,
- Config2 = 0x53,
- Config3 = 0x54,
- Config4 = 0x55,
- Config5 = 0x56,
- MultiIntr = 0x5c,
- PHYAR = 0x60,
- TBICSR = 0x64,
- TBI_ANAR = 0x68,
- TBI_LPAR = 0x6a,
- PHYstatus = 0x6c,
- RxMaxSize = 0xda,
- CPlusCmd = 0xe0,
- IntrMitigate = 0xe2,
- RxDescAddrLow = 0xe4,
- RxDescAddrHigh = 0xe8,
- EarlyTxThres = 0xec,
- FuncEvent = 0xf0,
- FuncEventMask = 0xf4,
- FuncPresetState = 0xf8,
- FuncForceEvent = 0xfc,
-};
-
-enum rtl_register_content {
- /* InterruptStatusBits */
- SYSErr = 0x8000,
- PCSTimeout = 0x4000,
- SWInt = 0x0100,
- TxDescUnavail = 0x0080,
- RxFIFOOver = 0x0040,
- LinkChg = 0x0020,
- RxOverflow = 0x0010,
- TxErr = 0x0008,
- TxOK = 0x0004,
- RxErr = 0x0002,
- RxOK = 0x0001,
-
- /* RxStatusDesc */
- RxFOVF = (1 << 23),
- RxRWT = (1 << 22),
- RxRES = (1 << 21),
- RxRUNT = (1 << 20),
- RxCRC = (1 << 19),
-
- /* ChipCmdBits */
- CmdReset = 0x10,
- CmdRxEnb = 0x08,
- CmdTxEnb = 0x04,
- RxBufEmpty = 0x01,
-
- /* TXPoll register p.5 */
- HPQ = 0x80, /* Poll cmd on the high prio queue */
- NPQ = 0x40, /* Poll cmd on the low prio queue */
- FSWInt = 0x01, /* Forced software interrupt */
-
- /* Cfg9346Bits */
- Cfg9346_Lock = 0x00,
- Cfg9346_Unlock = 0xc0,
-
- /* rx_mode_bits */
- AcceptErr = 0x20,
- AcceptRunt = 0x10,
- AcceptBroadcast = 0x08,
- AcceptMulticast = 0x04,
- AcceptMyPhys = 0x02,
- AcceptAllPhys = 0x01,
-
- /* RxConfigBits */
- RxCfgFIFOShift = 13,
- RxCfgDMAShift = 8,
-
- /* TxConfigBits */
- TxInterFrameGapShift = 24,
-	TxDMAShift = 8, /* DMA burst value (0-7) is shifted by this many bits */
-
- /* Config1 register p.24 */
- PMEnable = (1 << 0), /* Power Management Enable */
-
- /* Config2 register p. 25 */
- PCI_Clock_66MHz = 0x01,
- PCI_Clock_33MHz = 0x00,
-
- /* Config3 register p.25 */
- MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */
- LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */
-
- /* Config5 register p.27 */
- BWF = (1 << 6), /* Accept Broadcast wakeup frame */
- MWF = (1 << 5), /* Accept Multicast wakeup frame */
- UWF = (1 << 4), /* Accept Unicast wakeup frame */
- LanWake = (1 << 1), /* LanWake enable/disable */
- PMEStatus = (1 << 0), /* PME status can be reset by PCI RST# */
-
- /* TBICSR p.28 */
- TBIReset = 0x80000000,
- TBILoopback = 0x40000000,
- TBINwEnable = 0x20000000,
- TBINwRestart = 0x10000000,
- TBILinkOk = 0x02000000,
- TBINwComplete = 0x01000000,
-
- /* CPlusCmd p.31 */
- PktCntrDisable = (1 << 7), // 8168
- RxVlan = (1 << 6),
- RxChkSum = (1 << 5),
- PCIDAC = (1 << 4),
- PCIMulRW = (1 << 3),
- INTT_0 = 0x0000, // 8168
- INTT_1 = 0x0001, // 8168
- INTT_2 = 0x0002, // 8168
- INTT_3 = 0x0003, // 8168
-
- /* rtl8169_PHYstatus */
- TBI_Enable = 0x80,
- TxFlowCtrl = 0x40,
- RxFlowCtrl = 0x20,
- _1000bpsF = 0x10,
- _100bps = 0x08,
- _10bps = 0x04,
- LinkStatus = 0x02,
- FullDup = 0x01,
-
- /* _TBICSRBit */
- TBILinkOK = 0x02000000,
-
- /* DumpCounterCommand */
- CounterDump = 0x8,
-};
-
-enum desc_status_bit {
- DescOwn = (1 << 31), /* Descriptor is owned by NIC */
- RingEnd = (1 << 30), /* End of descriptor ring */
- FirstFrag = (1 << 29), /* First segment of a packet */
- LastFrag = (1 << 28), /* Final segment of a packet */
-
- /* Tx private */
- LargeSend = (1 << 27), /* TCP Large Send Offload (TSO) */
- MSSShift = 16, /* MSS value position */
- MSSMask = 0xfff, /* MSS value + LargeSend bit: 12 bits */
- IPCS = (1 << 18), /* Calculate IP checksum */
- UDPCS = (1 << 17), /* Calculate UDP/IP checksum */
- TCPCS = (1 << 16), /* Calculate TCP/IP checksum */
- TxVlanTag = (1 << 17), /* Add VLAN tag */
-
- /* Rx private */
- PID1 = (1 << 18), /* Protocol ID bit 1/2 */
- PID0 = (1 << 17), /* Protocol ID bit 2/2 */
-
-#define RxProtoUDP (PID1)
-#define RxProtoTCP (PID0)
-#define RxProtoIP (PID1 | PID0)
-#define RxProtoMask RxProtoIP
-
- IPFail = (1 << 16), /* IP checksum failed */
- UDPFail = (1 << 15), /* UDP/IP checksum failed */
- TCPFail = (1 << 14), /* TCP/IP checksum failed */
- RxVlanTag = (1 << 16), /* VLAN tag available */
-};
-
-#define RsvdMask 0x3fffc000
-
-struct TxDesc {
- __le32 opts1;
- __le32 opts2;
- __le64 addr;
-};
-
-struct RxDesc {
- __le32 opts1;
- __le32 opts2;
- __le64 addr;
-};
-
-struct ring_info {
- struct sk_buff *skb;
- u32 len;
- u8 __pad[sizeof(void *) - sizeof(u32)];
-};
-
-struct rtl8169_private {
- void __iomem *mmio_addr; /* memory map physical address */
- struct pci_dev *pci_dev; /* Index of PCI device */
- struct net_device *dev;
- struct net_device_stats stats; /* statistics of net device */
- spinlock_t lock; /* spin lock flag */
- u32 msg_enable;
- int chipset;
- int mac_version;
- int phy_version;
- u32 cur_rx; /* Index into the Rx descriptor buffer of next Rx pkt. */
-	u32 cur_tx; /* Index into the Tx descriptor buffer of next Tx pkt. */
- u32 dirty_rx;
- u32 dirty_tx;
- struct TxDesc *TxDescArray; /* 256-aligned Tx descriptor ring */
- struct RxDesc *RxDescArray; /* 256-aligned Rx descriptor ring */
- dma_addr_t TxPhyAddr;
- dma_addr_t RxPhyAddr;
- struct sk_buff *Rx_skbuff[NUM_RX_DESC]; /* Rx data buffers */
- struct ring_info tx_skb[NUM_TX_DESC]; /* Tx data buffers */
- unsigned align;
- unsigned rx_buf_sz;
- struct timer_list timer;
- u16 cp_cmd;
- u16 intr_event;
- u16 napi_event;
- u16 intr_mask;
- int phy_auto_nego_reg;
- int phy_1000_ctrl_reg;
-#ifdef CONFIG_R8169_VLAN
- struct vlan_group *vlgrp;
-#endif
- int (*set_speed)(struct net_device *, u8 autoneg, u16 speed, u8 duplex);
- void (*get_settings)(struct net_device *, struct ethtool_cmd *);
- void (*phy_reset_enable)(void __iomem *);
- void (*hw_start)(struct net_device *);
- unsigned int (*phy_reset_pending)(void __iomem *);
- unsigned int (*link_ok)(void __iomem *);
- struct delayed_work task;
- unsigned wol_enabled : 1;
-
- ec_device_t *ecdev;
-};
-
-MODULE_AUTHOR("Realtek and the Linux r8169 crew <netdev@vger.kernel.org>");
-MODULE_DESCRIPTION("RealTek RTL-8169 Gigabit Ethernet/EtherCAT driver");
-module_param(rx_copybreak, int, 0);
-MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
-module_param(use_dac, int, 0);
-MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
-module_param_named(debug, debug.msg_enable, int, 0);
-MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
-MODULE_LICENSE("GPL");
-MODULE_VERSION(RTL8169_VERSION);
-
-void ec_poll(struct net_device *);
-
-static int rtl8169_open(struct net_device *dev);
-static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev);
-static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance);
-static int rtl8169_init_ring(struct net_device *dev);
-static void rtl_hw_start(struct net_device *dev);
-static int rtl8169_close(struct net_device *dev);
-static void rtl_set_rx_mode(struct net_device *dev);
-static void rtl8169_tx_timeout(struct net_device *dev);
-static struct net_device_stats *rtl8169_get_stats(struct net_device *dev);
-static int rtl8169_rx_interrupt(struct net_device *, struct rtl8169_private *,
- void __iomem *);
-static int rtl8169_change_mtu(struct net_device *dev, int new_mtu);
-static void rtl8169_down(struct net_device *dev);
-static void rtl8169_rx_clear(struct rtl8169_private *tp);
-
-#ifdef CONFIG_R8169_NAPI
-static int rtl8169_poll(struct net_device *dev, int *budget);
-#endif
-
-static const unsigned int rtl8169_rx_config =
- (RX_FIFO_THRESH << RxCfgFIFOShift) | (RX_DMA_BURST << RxCfgDMAShift);
-
-static void mdio_write(void __iomem *ioaddr, int reg_addr, int value)
-{
- int i;
-
- RTL_W32(PHYAR, 0x80000000 | (reg_addr & 0xFF) << 16 | value);
-
- for (i = 20; i > 0; i--) {
- /*
- * Check if the RTL8169 has completed writing to the specified
- * MII register.
- */
- if (!(RTL_R32(PHYAR) & 0x80000000))
- break;
- udelay(25);
- }
-}
-
-static int mdio_read(void __iomem *ioaddr, int reg_addr)
-{
- int i, value = -1;
-
- RTL_W32(PHYAR, 0x0 | (reg_addr & 0xFF) << 16);
-
- for (i = 20; i > 0; i--) {
- /*
- * Check if the RTL8169 has completed retrieving data from
- * the specified MII register.
- */
- if (RTL_R32(PHYAR) & 0x80000000) {
- value = (int) (RTL_R32(PHYAR) & 0xFFFF);
- break;
- }
- udelay(25);
- }
- return value;
-}
-
-static void rtl8169_irq_mask_and_ack(void __iomem *ioaddr)
-{
- RTL_W16(IntrMask, 0x0000);
-
- RTL_W16(IntrStatus, 0xffff);
-}
-
-static void rtl8169_asic_down(void __iomem *ioaddr)
-{
- RTL_W8(ChipCmd, 0x00);
- rtl8169_irq_mask_and_ack(ioaddr);
- RTL_R16(CPlusCmd);
-}
-
-static unsigned int rtl8169_tbi_reset_pending(void __iomem *ioaddr)
-{
- return RTL_R32(TBICSR) & TBIReset;
-}
-
-static unsigned int rtl8169_xmii_reset_pending(void __iomem *ioaddr)
-{
- return mdio_read(ioaddr, MII_BMCR) & BMCR_RESET;
-}
-
-static unsigned int rtl8169_tbi_link_ok(void __iomem *ioaddr)
-{
- return RTL_R32(TBICSR) & TBILinkOk;
-}
-
-static unsigned int rtl8169_xmii_link_ok(void __iomem *ioaddr)
-{
- return RTL_R8(PHYstatus) & LinkStatus;
-}
-
-static void rtl8169_tbi_reset_enable(void __iomem *ioaddr)
-{
- RTL_W32(TBICSR, RTL_R32(TBICSR) | TBIReset);
-}
-
-static void rtl8169_xmii_reset_enable(void __iomem *ioaddr)
-{
- unsigned int val;
-
- val = mdio_read(ioaddr, MII_BMCR) | BMCR_RESET;
- mdio_write(ioaddr, MII_BMCR, val & 0xffff);
-}
-
-static void rtl8169_check_link_status(struct net_device *dev,
- struct rtl8169_private *tp,
- void __iomem *ioaddr)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&tp->lock, flags);
-
- if (tp->link_ok(ioaddr)) {
- if(tp->ecdev) {
- ecdev_set_link(tp->ecdev, 1);
- } else {
- netif_carrier_on(dev);
- if (netif_msg_ifup(tp))
- printk(KERN_INFO PFX "%s: link up\n", dev->name);
- }
- } else {
- if(tp->ecdev) {
- ecdev_set_link(tp->ecdev, 0);
- } else {
- if (netif_msg_ifdown(tp))
- printk(KERN_INFO PFX "%s: link down\n", dev->name);
- netif_carrier_off(dev);
- }
- }
- spin_unlock_irqrestore(&tp->lock, flags);
-}
-
-static void rtl8169_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- u8 options;
-
- wol->wolopts = 0;
-
-#define WAKE_ANY (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_BCAST | WAKE_MCAST)
- wol->supported = WAKE_ANY;
-
- spin_lock_irq(&tp->lock);
-
- options = RTL_R8(Config1);
- if (!(options & PMEnable))
- goto out_unlock;
-
- options = RTL_R8(Config3);
- if (options & LinkUp)
- wol->wolopts |= WAKE_PHY;
- if (options & MagicPacket)
- wol->wolopts |= WAKE_MAGIC;
-
- options = RTL_R8(Config5);
- if (options & UWF)
- wol->wolopts |= WAKE_UCAST;
- if (options & BWF)
- wol->wolopts |= WAKE_BCAST;
- if (options & MWF)
- wol->wolopts |= WAKE_MCAST;
-
-out_unlock:
- spin_unlock_irq(&tp->lock);
-}
-
-static int rtl8169_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned int i;
- static struct {
- u32 opt;
- u16 reg;
- u8 mask;
- } cfg[] = {
- { WAKE_ANY, Config1, PMEnable },
- { WAKE_PHY, Config3, LinkUp },
- { WAKE_MAGIC, Config3, MagicPacket },
- { WAKE_UCAST, Config5, UWF },
- { WAKE_BCAST, Config5, BWF },
- { WAKE_MCAST, Config5, MWF },
- { WAKE_ANY, Config5, LanWake }
- };
-
- spin_lock_irq(&tp->lock);
-
- RTL_W8(Cfg9346, Cfg9346_Unlock);
-
- for (i = 0; i < ARRAY_SIZE(cfg); i++) {
- u8 options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
- if (wol->wolopts & cfg[i].opt)
- options |= cfg[i].mask;
- RTL_W8(cfg[i].reg, options);
- }
-
- RTL_W8(Cfg9346, Cfg9346_Lock);
-
- tp->wol_enabled = (wol->wolopts) ? 1 : 0;
-
- spin_unlock_irq(&tp->lock);
-
- return 0;
-}
-
-static void rtl8169_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
-
- strcpy(info->driver, MODULENAME);
- strcpy(info->version, RTL8169_VERSION);
- strcpy(info->bus_info, pci_name(tp->pci_dev));
-}
-
-static int rtl8169_get_regs_len(struct net_device *dev)
-{
- return R8169_REGS_SIZE;
-}
-
-static int rtl8169_set_speed_tbi(struct net_device *dev,
- u8 autoneg, u16 speed, u8 duplex)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- int ret = 0;
- u32 reg;
-
- reg = RTL_R32(TBICSR);
- if ((autoneg == AUTONEG_DISABLE) && (speed == SPEED_1000) &&
- (duplex == DUPLEX_FULL)) {
- RTL_W32(TBICSR, reg & ~(TBINwEnable | TBINwRestart));
- } else if (autoneg == AUTONEG_ENABLE)
- RTL_W32(TBICSR, reg | TBINwEnable | TBINwRestart);
- else {
- if (netif_msg_link(tp)) {
- printk(KERN_WARNING "%s: "
- "incorrect speed setting refused in TBI mode\n",
- dev->name);
- }
- ret = -EOPNOTSUPP;
- }
-
- return ret;
-}
-
-static int rtl8169_set_speed_xmii(struct net_device *dev,
- u8 autoneg, u16 speed, u8 duplex)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- int auto_nego, giga_ctrl;
-
- auto_nego = mdio_read(ioaddr, MII_ADVERTISE);
- auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_10FULL |
- ADVERTISE_100HALF | ADVERTISE_100FULL);
- giga_ctrl = mdio_read(ioaddr, MII_CTRL1000);
- giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
-
- if (autoneg == AUTONEG_ENABLE) {
- auto_nego |= (ADVERTISE_10HALF | ADVERTISE_10FULL |
- ADVERTISE_100HALF | ADVERTISE_100FULL);
- giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
- } else {
- if (speed == SPEED_10)
- auto_nego |= ADVERTISE_10HALF | ADVERTISE_10FULL;
- else if (speed == SPEED_100)
- auto_nego |= ADVERTISE_100HALF | ADVERTISE_100FULL;
- else if (speed == SPEED_1000)
- giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
-
- if (duplex == DUPLEX_HALF)
- auto_nego &= ~(ADVERTISE_10FULL | ADVERTISE_100FULL);
-
- if (duplex == DUPLEX_FULL)
- auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_100HALF);
-
- /* This tweak comes straight from Realtek's driver. */
- if ((speed == SPEED_100) && (duplex == DUPLEX_HALF) &&
- (tp->mac_version == RTL_GIGA_MAC_VER_13)) {
- auto_nego = ADVERTISE_100HALF | ADVERTISE_CSMA;
- }
- }
-
- /* The 8100e/8101e do Fast Ethernet only. */
- if ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_15)) {
- if ((giga_ctrl & (ADVERTISE_1000FULL | ADVERTISE_1000HALF)) &&
- netif_msg_link(tp)) {
- printk(KERN_INFO "%s: PHY does not support 1000Mbps.\n",
- dev->name);
- }
- giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
- }
-
- auto_nego |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
-
- if (tp->mac_version == RTL_GIGA_MAC_VER_12) {
- /* Vendor specific (0x1f) and reserved (0x0e) MII registers. */
- mdio_write(ioaddr, 0x1f, 0x0000);
- mdio_write(ioaddr, 0x0e, 0x0000);
- }
-
- tp->phy_auto_nego_reg = auto_nego;
- tp->phy_1000_ctrl_reg = giga_ctrl;
-
- mdio_write(ioaddr, MII_ADVERTISE, auto_nego);
- mdio_write(ioaddr, MII_CTRL1000, giga_ctrl);
- mdio_write(ioaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
- return 0;
-}
-
-static int rtl8169_set_speed(struct net_device *dev,
- u8 autoneg, u16 speed, u8 duplex)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- int ret;
-
- ret = tp->set_speed(dev, autoneg, speed, duplex);
-
- if (netif_running(dev) && (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
- mod_timer(&tp->timer, jiffies + RTL8169_PHY_TIMEOUT);
-
- return ret;
-}
-
-static int rtl8169_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- unsigned long flags;
- int ret;
-
- spin_lock_irqsave(&tp->lock, flags);
- ret = rtl8169_set_speed(dev, cmd->autoneg, cmd->speed, cmd->duplex);
- spin_unlock_irqrestore(&tp->lock, flags);
-
- return ret;
-}
-
-static u32 rtl8169_get_rx_csum(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
-
- return tp->cp_cmd & RxChkSum;
-}
-
-static int rtl8169_set_rx_csum(struct net_device *dev, u32 data)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned long flags;
-
- spin_lock_irqsave(&tp->lock, flags);
-
- if (data)
- tp->cp_cmd |= RxChkSum;
- else
- tp->cp_cmd &= ~RxChkSum;
-
- RTL_W16(CPlusCmd, tp->cp_cmd);
- RTL_R16(CPlusCmd);
-
- spin_unlock_irqrestore(&tp->lock, flags);
-
- return 0;
-}
-
-#ifdef CONFIG_R8169_VLAN
-
-static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
- struct sk_buff *skb)
-{
- return (tp->vlgrp && vlan_tx_tag_present(skb)) ?
- TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
-}
-
-static void rtl8169_vlan_rx_register(struct net_device *dev,
- struct vlan_group *grp)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned long flags;
-
- spin_lock_irqsave(&tp->lock, flags);
- tp->vlgrp = grp;
- if (tp->vlgrp)
- tp->cp_cmd |= RxVlan;
- else
- tp->cp_cmd &= ~RxVlan;
- RTL_W16(CPlusCmd, tp->cp_cmd);
- RTL_R16(CPlusCmd);
- spin_unlock_irqrestore(&tp->lock, flags);
-}
-
-static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
- struct sk_buff *skb)
-{
- u32 opts2 = le32_to_cpu(desc->opts2);
- int ret;
-
- if (tp->vlgrp && (opts2 & RxVlanTag)) {
- rtl8169_rx_hwaccel_skb(skb, tp->vlgrp, swab16(opts2 & 0xffff));
- ret = 0;
- } else
- ret = -1;
- desc->opts2 = 0;
- return ret;
-}
-
-#else /* !CONFIG_R8169_VLAN */
-
-static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
- struct sk_buff *skb)
-{
- return 0;
-}
-
-static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
- struct sk_buff *skb)
-{
- return -1;
-}
-
-#endif
-
-static void rtl8169_gset_tbi(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- u32 status;
-
- cmd->supported =
- SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_FIBRE;
- cmd->port = PORT_FIBRE;
- cmd->transceiver = XCVR_INTERNAL;
-
- status = RTL_R32(TBICSR);
- cmd->advertising = (status & TBINwEnable) ? ADVERTISED_Autoneg : 0;
- cmd->autoneg = !!(status & TBINwEnable);
-
- cmd->speed = SPEED_1000;
- cmd->duplex = DUPLEX_FULL; /* Always set */
-}
-
-static void rtl8169_gset_xmii(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- u8 status;
-
- cmd->supported = SUPPORTED_10baseT_Half |
- SUPPORTED_10baseT_Full |
- SUPPORTED_100baseT_Half |
- SUPPORTED_100baseT_Full |
- SUPPORTED_1000baseT_Full |
- SUPPORTED_Autoneg |
- SUPPORTED_TP;
-
- cmd->autoneg = 1;
- cmd->advertising = ADVERTISED_TP | ADVERTISED_Autoneg;
-
- if (tp->phy_auto_nego_reg & ADVERTISE_10HALF)
- cmd->advertising |= ADVERTISED_10baseT_Half;
- if (tp->phy_auto_nego_reg & ADVERTISE_10FULL)
- cmd->advertising |= ADVERTISED_10baseT_Full;
- if (tp->phy_auto_nego_reg & ADVERTISE_100HALF)
- cmd->advertising |= ADVERTISED_100baseT_Half;
- if (tp->phy_auto_nego_reg & ADVERTISE_100FULL)
- cmd->advertising |= ADVERTISED_100baseT_Full;
- if (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL)
- cmd->advertising |= ADVERTISED_1000baseT_Full;
-
- status = RTL_R8(PHYstatus);
-
- if (status & _1000bpsF)
- cmd->speed = SPEED_1000;
- else if (status & _100bps)
- cmd->speed = SPEED_100;
- else if (status & _10bps)
- cmd->speed = SPEED_10;
-
- if (status & TxFlowCtrl)
- cmd->advertising |= ADVERTISED_Asym_Pause;
- if (status & RxFlowCtrl)
- cmd->advertising |= ADVERTISED_Pause;
-
- cmd->duplex = ((status & _1000bpsF) || (status & FullDup)) ?
- DUPLEX_FULL : DUPLEX_HALF;
-}
-
-static int rtl8169_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- unsigned long flags;
-
- spin_lock_irqsave(&tp->lock, flags);
-
- tp->get_settings(dev, cmd);
-
- spin_unlock_irqrestore(&tp->lock, flags);
- return 0;
-}
-
-static void rtl8169_get_regs(struct net_device *dev, struct ethtool_regs *regs,
- void *p)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- unsigned long flags;
-
- if (regs->len > R8169_REGS_SIZE)
- regs->len = R8169_REGS_SIZE;
-
- spin_lock_irqsave(&tp->lock, flags);
- memcpy_fromio(p, tp->mmio_addr, regs->len);
- spin_unlock_irqrestore(&tp->lock, flags);
-}
-
-static u32 rtl8169_get_msglevel(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
-
- return tp->msg_enable;
-}
-
-static void rtl8169_set_msglevel(struct net_device *dev, u32 value)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
-
- tp->msg_enable = value;
-}
-
-static const char rtl8169_gstrings[][ETH_GSTRING_LEN] = {
- "tx_packets",
- "rx_packets",
- "tx_errors",
- "rx_errors",
- "rx_missed",
- "align_errors",
- "tx_single_collisions",
- "tx_multi_collisions",
- "unicast",
- "broadcast",
- "multicast",
- "tx_aborted",
- "tx_underrun",
-};
-
-struct rtl8169_counters {
- u64 tx_packets;
- u64 rx_packets;
- u64 tx_errors;
- u32 rx_errors;
- u16 rx_missed;
- u16 align_errors;
- u32 tx_one_collision;
- u32 tx_multi_collision;
- u64 rx_unicast;
- u64 rx_broadcast;
- u32 rx_multicast;
- u16 tx_aborted;
- u16 tx_underun;
-};
-
-static int rtl8169_get_stats_count(struct net_device *dev)
-{
- return ARRAY_SIZE(rtl8169_gstrings);
-}
-
-static void rtl8169_get_ethtool_stats(struct net_device *dev,
- struct ethtool_stats *stats, u64 *data)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- struct rtl8169_counters *counters;
- dma_addr_t paddr;
- u32 cmd;
-
- ASSERT_RTNL();
-
- counters = pci_alloc_consistent(tp->pci_dev, sizeof(*counters), &paddr);
- if (!counters)
- return;
-
- RTL_W32(CounterAddrHigh, (u64)paddr >> 32);
- cmd = (u64)paddr & DMA_32BIT_MASK;
- RTL_W32(CounterAddrLow, cmd);
- RTL_W32(CounterAddrLow, cmd | CounterDump);
-
- while (RTL_R32(CounterAddrLow) & CounterDump) {
- if (msleep_interruptible(1))
- break;
- }
-
- RTL_W32(CounterAddrLow, 0);
- RTL_W32(CounterAddrHigh, 0);
-
- data[0] = le64_to_cpu(counters->tx_packets);
- data[1] = le64_to_cpu(counters->rx_packets);
- data[2] = le64_to_cpu(counters->tx_errors);
- data[3] = le32_to_cpu(counters->rx_errors);
- data[4] = le16_to_cpu(counters->rx_missed);
- data[5] = le16_to_cpu(counters->align_errors);
- data[6] = le32_to_cpu(counters->tx_one_collision);
- data[7] = le32_to_cpu(counters->tx_multi_collision);
- data[8] = le64_to_cpu(counters->rx_unicast);
- data[9] = le64_to_cpu(counters->rx_broadcast);
- data[10] = le32_to_cpu(counters->rx_multicast);
- data[11] = le16_to_cpu(counters->tx_aborted);
- data[12] = le16_to_cpu(counters->tx_underun);
-
- pci_free_consistent(tp->pci_dev, sizeof(*counters), counters, paddr);
-}
-
-static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
-{
- switch(stringset) {
- case ETH_SS_STATS:
- memcpy(data, *rtl8169_gstrings, sizeof(rtl8169_gstrings));
- break;
- }
-}
-
-static const struct ethtool_ops rtl8169_ethtool_ops = {
- .get_drvinfo = rtl8169_get_drvinfo,
- .get_regs_len = rtl8169_get_regs_len,
- .get_link = ethtool_op_get_link,
- .get_settings = rtl8169_get_settings,
- .set_settings = rtl8169_set_settings,
- .get_msglevel = rtl8169_get_msglevel,
- .set_msglevel = rtl8169_set_msglevel,
- .get_rx_csum = rtl8169_get_rx_csum,
- .set_rx_csum = rtl8169_set_rx_csum,
- .get_tx_csum = ethtool_op_get_tx_csum,
- .set_tx_csum = ethtool_op_set_tx_csum,
- .get_sg = ethtool_op_get_sg,
- .set_sg = ethtool_op_set_sg,
- .get_tso = ethtool_op_get_tso,
- .set_tso = ethtool_op_set_tso,
- .get_regs = rtl8169_get_regs,
- .get_wol = rtl8169_get_wol,
- .set_wol = rtl8169_set_wol,
- .get_strings = rtl8169_get_strings,
- .get_stats_count = rtl8169_get_stats_count,
- .get_ethtool_stats = rtl8169_get_ethtool_stats,
-};
-
-static void rtl8169_write_gmii_reg_bit(void __iomem *ioaddr, int reg,
- int bitnum, int bitval)
-{
- int val;
-
- val = mdio_read(ioaddr, reg);
- val = (bitval == 1) ?
- val | (bitval << bitnum) : val & ~(0x0001 << bitnum);
- mdio_write(ioaddr, reg, val & 0xffff);
-}
-
-static void rtl8169_get_mac_version(struct rtl8169_private *tp,
- void __iomem *ioaddr)
-{
- /*
- * The driver currently handles the 8168Bf and the 8168Be identically
- * but they can be identified more specifically through the test below
- * if needed:
- *
- * (RTL_R32(TxConfig) & 0x700000) == 0x500000 ? 8168Bf : 8168Be
- *
- * Same thing for the 8101Eb and the 8101Ec:
- *
- * (RTL_R32(TxConfig) & 0x700000) == 0x200000 ? 8101Eb : 8101Ec
- */
- const struct {
- u32 mask;
- int mac_version;
- } mac_info[] = {
- { 0x38800000, RTL_GIGA_MAC_VER_15 },
- { 0x38000000, RTL_GIGA_MAC_VER_12 },
- { 0x34000000, RTL_GIGA_MAC_VER_13 },
- { 0x30800000, RTL_GIGA_MAC_VER_14 },
- { 0x30000000, RTL_GIGA_MAC_VER_11 },
- { 0x98000000, RTL_GIGA_MAC_VER_06 },
- { 0x18000000, RTL_GIGA_MAC_VER_05 },
- { 0x10000000, RTL_GIGA_MAC_VER_04 },
- { 0x04000000, RTL_GIGA_MAC_VER_03 },
- { 0x00800000, RTL_GIGA_MAC_VER_02 },
- { 0x00000000, RTL_GIGA_MAC_VER_01 } /* Catch-all */
- }, *p = mac_info;
- u32 reg;
-
- reg = RTL_R32(TxConfig) & 0xfc800000;
- while ((reg & p->mask) != p->mask)
- p++;
- tp->mac_version = p->mac_version;
-}
-
-static void rtl8169_print_mac_version(struct rtl8169_private *tp)
-{
- dprintk("mac_version = 0x%02x\n", tp->mac_version);
-}
-
-static void rtl8169_get_phy_version(struct rtl8169_private *tp,
- void __iomem *ioaddr)
-{
- const struct {
- u16 mask;
- u16 set;
- int phy_version;
- } phy_info[] = {
- { 0x000f, 0x0002, RTL_GIGA_PHY_VER_G },
- { 0x000f, 0x0001, RTL_GIGA_PHY_VER_F },
- { 0x000f, 0x0000, RTL_GIGA_PHY_VER_E },
- { 0x0000, 0x0000, RTL_GIGA_PHY_VER_D } /* Catch-all */
- }, *p = phy_info;
- u16 reg;
-
- reg = mdio_read(ioaddr, MII_PHYSID2) & 0xffff;
- while ((reg & p->mask) != p->set)
- p++;
- tp->phy_version = p->phy_version;
-}
-
-static void rtl8169_print_phy_version(struct rtl8169_private *tp)
-{
- struct {
- int version;
- char *msg;
- u32 reg;
- } phy_print[] = {
- { RTL_GIGA_PHY_VER_G, "RTL_GIGA_PHY_VER_G", 0x0002 },
- { RTL_GIGA_PHY_VER_F, "RTL_GIGA_PHY_VER_F", 0x0001 },
- { RTL_GIGA_PHY_VER_E, "RTL_GIGA_PHY_VER_E", 0x0000 },
- { RTL_GIGA_PHY_VER_D, "RTL_GIGA_PHY_VER_D", 0x0000 },
- { 0, NULL, 0x0000 }
- }, *p;
-
- for (p = phy_print; p->msg; p++) {
- if (tp->phy_version == p->version) {
- dprintk("phy_version == %s (%04x)\n", p->msg, p->reg);
- return;
- }
- }
- dprintk("phy_version == Unknown\n");
-}
-
-static void rtl8169_hw_phy_config(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- struct {
- u16 regs[5]; /* Beware of bit-sign propagation */
- } phy_magic[5] = { {
- { 0x0000, //w 4 15 12 0
- 0x00a1, //w 3 15 0 00a1
- 0x0008, //w 2 15 0 0008
- 0x1020, //w 1 15 0 1020
- 0x1000 } },{ //w 0 15 0 1000
- { 0x7000, //w 4 15 12 7
- 0xff41, //w 3 15 0 ff41
- 0xde60, //w 2 15 0 de60
- 0x0140, //w 1 15 0 0140
- 0x0077 } },{ //w 0 15 0 0077
- { 0xa000, //w 4 15 12 a
- 0xdf01, //w 3 15 0 df01
- 0xdf20, //w 2 15 0 df20
- 0xff95, //w 1 15 0 ff95
- 0xfa00 } },{ //w 0 15 0 fa00
- { 0xb000, //w 4 15 12 b
- 0xff41, //w 3 15 0 ff41
- 0xde20, //w 2 15 0 de20
- 0x0140, //w 1 15 0 0140
- 0x00bb } },{ //w 0 15 0 00bb
- { 0xf000, //w 4 15 12 f
- 0xdf01, //w 3 15 0 df01
- 0xdf20, //w 2 15 0 df20
- 0xff95, //w 1 15 0 ff95
- 0xbf00 } //w 0 15 0 bf00
- }
- }, *p = phy_magic;
- unsigned int i;
-
- rtl8169_print_mac_version(tp);
- rtl8169_print_phy_version(tp);
-
- if (tp->mac_version <= RTL_GIGA_MAC_VER_01)
- return;
- if (tp->phy_version >= RTL_GIGA_PHY_VER_H)
- return;
-
- dprintk("MAC version != 0 && PHY version == 0 or 1\n");
- dprintk("Do final_reg2.cfg\n");
-
- /* Shazam ! */
-
- if (tp->mac_version == RTL_GIGA_MAC_VER_04) {
- mdio_write(ioaddr, 31, 0x0002);
- mdio_write(ioaddr, 1, 0x90d0);
- mdio_write(ioaddr, 31, 0x0000);
- return;
- }
-
- if ((tp->mac_version != RTL_GIGA_MAC_VER_02) &&
- (tp->mac_version != RTL_GIGA_MAC_VER_03))
- return;
-
- mdio_write(ioaddr, 31, 0x0001); //w 31 2 0 1
- mdio_write(ioaddr, 21, 0x1000); //w 21 15 0 1000
- mdio_write(ioaddr, 24, 0x65c7); //w 24 15 0 65c7
- rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
-
- for (i = 0; i < ARRAY_SIZE(phy_magic); i++, p++) {
- int val, pos = 4;
-
- val = (mdio_read(ioaddr, pos) & 0x0fff) | (p->regs[0] & 0xffff);
- mdio_write(ioaddr, pos, val);
- while (--pos >= 0)
- mdio_write(ioaddr, pos, p->regs[4 - pos] & 0xffff);
- rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 1); //w 4 11 11 1
- rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
- }
- mdio_write(ioaddr, 31, 0x0000); //w 31 2 0 0
-}
-
-static void rtl8169_phy_timer(unsigned long __opaque)
-{
- struct net_device *dev = (struct net_device *)__opaque;
- struct rtl8169_private *tp = netdev_priv(dev);
- struct timer_list *timer = &tp->timer;
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned long timeout = RTL8169_PHY_TIMEOUT;
-
- assert(tp->mac_version > RTL_GIGA_MAC_VER_01);
- assert(tp->phy_version < RTL_GIGA_PHY_VER_H);
-
- if (!(tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
- return;
-
- spin_lock_irq(&tp->lock);
-
- if (tp->phy_reset_pending(ioaddr)) {
- /*
-		 * A busy loop could burn quite a few cycles on today's CPUs.
- * Let's delay the execution of the timer for a few ticks.
- */
- timeout = HZ/10;
- goto out_mod_timer;
- }
-
- if (tp->link_ok(ioaddr))
- goto out_unlock;
-
- if (netif_msg_link(tp))
- printk(KERN_WARNING "%s: PHY reset until link up\n", dev->name);
-
- tp->phy_reset_enable(ioaddr);
-
-out_mod_timer:
- mod_timer(timer, jiffies + timeout);
-out_unlock:
- spin_unlock_irq(&tp->lock);
-}
-
-static inline void rtl8169_delete_timer(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct timer_list *timer = &tp->timer;
-
- if ((tp->mac_version <= RTL_GIGA_MAC_VER_01) ||
- (tp->phy_version >= RTL_GIGA_PHY_VER_H))
- return;
-
- del_timer_sync(timer);
-}
-
-static inline void rtl8169_request_timer(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct timer_list *timer = &tp->timer;
-
- if ((tp->mac_version <= RTL_GIGA_MAC_VER_01) ||
- (tp->phy_version >= RTL_GIGA_PHY_VER_H))
- return;
-
- mod_timer(timer, jiffies + RTL8169_PHY_TIMEOUT);
-}
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
-/*
- * Polling 'interrupt' - used by things like netconsole to send skbs
- * without having to re-enable interrupts. It's not called while
- * the interrupt routine is executing.
- */
-static void rtl8169_netpoll(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct pci_dev *pdev = tp->pci_dev;
-
- disable_irq(pdev->irq);
- rtl8169_interrupt(pdev->irq, dev);
- enable_irq(pdev->irq);
-}
-#endif
-
-static void rtl8169_release_board(struct pci_dev *pdev, struct net_device *dev,
- void __iomem *ioaddr)
-{
- iounmap(ioaddr);
- pci_release_regions(pdev);
- pci_disable_device(pdev);
- free_netdev(dev);
-}
-
-static void rtl8169_phy_reset(struct net_device *dev,
- struct rtl8169_private *tp)
-{
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned int i;
-
- tp->phy_reset_enable(ioaddr);
- for (i = 0; i < 100; i++) {
- if (!tp->phy_reset_pending(ioaddr))
- return;
- msleep(1);
- }
- if (netif_msg_link(tp))
- printk(KERN_ERR "%s: PHY reset failed.\n", dev->name);
-}
-
-static void rtl8169_init_phy(struct net_device *dev, struct rtl8169_private *tp)
-{
- void __iomem *ioaddr = tp->mmio_addr;
-
- rtl8169_hw_phy_config(dev);
-
- dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
- RTL_W8(0x82, 0x01);
-
- pci_write_config_byte(tp->pci_dev, PCI_LATENCY_TIMER, 0x40);
-
- if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
- pci_write_config_byte(tp->pci_dev, PCI_CACHE_LINE_SIZE, 0x08);
-
- if (tp->mac_version == RTL_GIGA_MAC_VER_02) {
- dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
- RTL_W8(0x82, 0x01);
- dprintk("Set PHY Reg 0x0bh = 0x00h\n");
- mdio_write(ioaddr, 0x0b, 0x0000); //w 0x0b 15 0 0
- }
-
- rtl8169_phy_reset(dev, tp);
-
- /*
-	 * rtl8169_set_speed_xmii takes good care of the Fast Ethernet-only
-	 * 8101. Don't panic.
- */
- rtl8169_set_speed(dev, AUTONEG_ENABLE, SPEED_1000, DUPLEX_FULL);
-
- if ((RTL_R8(PHYstatus) & TBI_Enable) && netif_msg_link(tp))
- printk(KERN_INFO PFX "%s: TBI auto-negotiating\n", dev->name);
-}
-
-static void rtl_rar_set(struct rtl8169_private *tp, u8 *addr)
-{
- void __iomem *ioaddr = tp->mmio_addr;
- u32 high;
- u32 low;
-
- low = addr[0] | (addr[1] << 8) | (addr[2] << 16) | (addr[3] << 24);
- high = addr[4] | (addr[5] << 8);
-
- spin_lock_irq(&tp->lock);
-
- RTL_W8(Cfg9346, Cfg9346_Unlock);
- RTL_W32(MAC0, low);
- RTL_W32(MAC4, high);
- RTL_W8(Cfg9346, Cfg9346_Lock);
-
- spin_unlock_irq(&tp->lock);
-}
-
-static int rtl_set_mac_address(struct net_device *dev, void *p)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct sockaddr *addr = p;
-
- if (!is_valid_ether_addr(addr->sa_data))
- return -EADDRNOTAVAIL;
-
- memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
-
- rtl_rar_set(tp, dev->dev_addr);
-
- return 0;
-}
-
-static int rtl8169_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct mii_ioctl_data *data = if_mii(ifr);
-
- if (!netif_running(dev))
- return -ENODEV;
-
- switch (cmd) {
- case SIOCGMIIPHY:
- data->phy_id = 32; /* Internal PHY */
- return 0;
-
- case SIOCGMIIREG:
- data->val_out = mdio_read(tp->mmio_addr, data->reg_num & 0x1f);
- return 0;
-
- case SIOCSMIIREG:
- if (!capable(CAP_NET_ADMIN))
- return -EPERM;
- mdio_write(tp->mmio_addr, data->reg_num & 0x1f, data->val_in);
- return 0;
- }
- return -EOPNOTSUPP;
-}
-
-static const struct rtl_cfg_info {
- void (*hw_start)(struct net_device *);
- unsigned int region;
- unsigned int align;
- u16 intr_event;
- u16 napi_event;
-} rtl_cfg_infos [] = {
- [RTL_CFG_0] = {
- .hw_start = rtl_hw_start_8169,
- .region = 1,
- .align = 0,
- .intr_event = SYSErr | LinkChg | RxOverflow |
- RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
- .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow
- },
- [RTL_CFG_1] = {
- .hw_start = rtl_hw_start_8168,
- .region = 2,
- .align = 8,
- .intr_event = SYSErr | LinkChg | RxOverflow |
- TxErr | TxOK | RxOK | RxErr,
- .napi_event = TxErr | TxOK | RxOK | RxOverflow
- },
- [RTL_CFG_2] = {
- .hw_start = rtl_hw_start_8101,
- .region = 2,
- .align = 8,
- .intr_event = SYSErr | LinkChg | RxOverflow | PCSTimeout |
- RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
- .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow
- }
-};
-
-static int __devinit
-rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
-{
- const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
- const unsigned int region = cfg->region;
- struct rtl8169_private *tp;
- struct net_device *dev;
- void __iomem *ioaddr;
- unsigned int i;
- int rc;
-
- if (netif_msg_drv(&debug)) {
- printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n",
- MODULENAME, RTL8169_VERSION);
- }
-
- dev = alloc_etherdev(sizeof (*tp));
- if (!dev) {
- if (netif_msg_drv(&debug))
- dev_err(&pdev->dev, "unable to alloc new ethernet\n");
- rc = -ENOMEM;
- goto out;
- }
-
- SET_MODULE_OWNER(dev);
- SET_NETDEV_DEV(dev, &pdev->dev);
- tp = netdev_priv(dev);
- tp->dev = dev;
- tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT);
-
- /* enable device (incl. PCI PM wakeup and hotplug setup) */
- rc = pci_enable_device(pdev);
- if (rc < 0) {
- if (netif_msg_probe(tp))
- dev_err(&pdev->dev, "enable failure\n");
- goto err_out_free_dev_1;
- }
-
- rc = pci_set_mwi(pdev);
- if (rc < 0)
- goto err_out_disable_2;
-
- /* make sure PCI base addr 1 is MMIO */
- if (!(pci_resource_flags(pdev, region) & IORESOURCE_MEM)) {
- if (netif_msg_probe(tp)) {
- dev_err(&pdev->dev,
- "region #%d not an MMIO resource, aborting\n",
- region);
- }
- rc = -ENODEV;
- goto err_out_mwi_3;
- }
-
- /* check for weird/broken PCI region reporting */
- if (pci_resource_len(pdev, region) < R8169_REGS_SIZE) {
- if (netif_msg_probe(tp)) {
- dev_err(&pdev->dev,
- "Invalid PCI region size(s), aborting\n");
- }
- rc = -ENODEV;
- goto err_out_mwi_3;
- }
-
- rc = pci_request_regions(pdev, MODULENAME);
- if (rc < 0) {
- if (netif_msg_probe(tp))
- dev_err(&pdev->dev, "could not request regions.\n");
- goto err_out_mwi_3;
- }
-
- tp->cp_cmd = PCIMulRW | RxChkSum;
-
- if ((sizeof(dma_addr_t) > 4) &&
- !pci_set_dma_mask(pdev, DMA_64BIT_MASK) && use_dac) {
- tp->cp_cmd |= PCIDAC;
- dev->features |= NETIF_F_HIGHDMA;
- } else {
- rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
- if (rc < 0) {
- if (netif_msg_probe(tp)) {
- dev_err(&pdev->dev,
- "DMA configuration failed.\n");
- }
- goto err_out_free_res_4;
- }
- }
-
- pci_set_master(pdev);
-
- /* ioremap MMIO region */
- ioaddr = ioremap(pci_resource_start(pdev, region), R8169_REGS_SIZE);
- if (!ioaddr) {
- if (netif_msg_probe(tp))
- dev_err(&pdev->dev, "cannot remap MMIO, aborting\n");
- rc = -EIO;
- goto err_out_free_res_4;
- }
-
- /* Unneeded ? Don't mess with Mrs. Murphy. */
- rtl8169_irq_mask_and_ack(ioaddr);
-
- /* Soft reset the chip. */
- RTL_W8(ChipCmd, CmdReset);
-
- /* Check that the chip has finished the reset. */
- for (i = 0; i < 100; i++) {
- if ((RTL_R8(ChipCmd) & CmdReset) == 0)
- break;
- msleep_interruptible(1);
- }
-
- /* Identify chip attached to board */
- rtl8169_get_mac_version(tp, ioaddr);
- rtl8169_get_phy_version(tp, ioaddr);
-
- rtl8169_print_mac_version(tp);
- rtl8169_print_phy_version(tp);
-
- for (i = ARRAY_SIZE(rtl_chip_info) - 1; i >= 0; i--) {
- if (tp->mac_version == rtl_chip_info[i].mac_version)
- break;
- }
- if (i < 0) {
- /* Unknown chip: assume array element #0, original RTL-8169 */
- if (netif_msg_probe(tp)) {
- dev_printk(KERN_DEBUG, &pdev->dev,
- "unknown chip version, assuming %s\n",
- rtl_chip_info[0].name);
- }
- i++;
- }
- tp->chipset = i;
-
- RTL_W8(Cfg9346, Cfg9346_Unlock);
- RTL_W8(Config1, RTL_R8(Config1) | PMEnable);
- RTL_W8(Config5, RTL_R8(Config5) & PMEStatus);
- RTL_W8(Cfg9346, Cfg9346_Lock);
-
- if (RTL_R8(PHYstatus) & TBI_Enable) {
- tp->set_speed = rtl8169_set_speed_tbi;
- tp->get_settings = rtl8169_gset_tbi;
- tp->phy_reset_enable = rtl8169_tbi_reset_enable;
- tp->phy_reset_pending = rtl8169_tbi_reset_pending;
- tp->link_ok = rtl8169_tbi_link_ok;
-
- tp->phy_1000_ctrl_reg = ADVERTISE_1000FULL; /* Implied by TBI */
- } else {
- tp->set_speed = rtl8169_set_speed_xmii;
- tp->get_settings = rtl8169_gset_xmii;
- tp->phy_reset_enable = rtl8169_xmii_reset_enable;
- tp->phy_reset_pending = rtl8169_xmii_reset_pending;
- tp->link_ok = rtl8169_xmii_link_ok;
-
- dev->do_ioctl = rtl8169_ioctl;
- }
-
- /* Get MAC address. FIXME: read EEPROM */
- for (i = 0; i < MAC_ADDR_LEN; i++)
- dev->dev_addr[i] = RTL_R8(MAC0 + i);
- memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
-
- dev->open = rtl8169_open;
- dev->hard_start_xmit = rtl8169_start_xmit;
- dev->get_stats = rtl8169_get_stats;
- SET_ETHTOOL_OPS(dev, &rtl8169_ethtool_ops);
- dev->stop = rtl8169_close;
- dev->tx_timeout = rtl8169_tx_timeout;
- dev->set_multicast_list = rtl_set_rx_mode;
- dev->watchdog_timeo = RTL8169_TX_TIMEOUT;
- dev->irq = pdev->irq;
- dev->base_addr = (unsigned long) ioaddr;
- dev->change_mtu = rtl8169_change_mtu;
- dev->set_mac_address = rtl_set_mac_address;
-
-#ifdef CONFIG_R8169_NAPI
- dev->poll = rtl8169_poll;
- dev->weight = R8169_NAPI_WEIGHT;
-#endif
-
-#ifdef CONFIG_R8169_VLAN
- dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
- dev->vlan_rx_register = rtl8169_vlan_rx_register;
-#endif
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = rtl8169_netpoll;
-#endif
-
- tp->intr_mask = 0xffff;
- tp->pci_dev = pdev;
- tp->mmio_addr = ioaddr;
- tp->align = cfg->align;
- tp->hw_start = cfg->hw_start;
- tp->intr_event = cfg->intr_event;
- tp->napi_event = cfg->napi_event;
-
- init_timer(&tp->timer);
- tp->timer.data = (unsigned long) dev;
- tp->timer.function = rtl8169_phy_timer;
-
- spin_lock_init(&tp->lock);
-
- // offer device to EtherCAT master module
- tp->ecdev = ecdev_offer(dev, ec_poll, THIS_MODULE);
-
- if (!tp->ecdev) {
- printk(KERN_INFO "about to register device named %s (%p)...\n", dev->name, dev);
- i = register_netdev (dev);
- if (i) goto err_out_unmap_5;
- }
-
- pci_set_drvdata(pdev, dev);
-
- if (netif_msg_probe(tp)) {
- u32 xid = RTL_R32(TxConfig) & 0x7cf0f8ff;
-
- printk(KERN_INFO "%s: %s at 0x%lx, "
- "%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x, "
- "XID %08x IRQ %d\n",
- dev->name,
- rtl_chip_info[tp->chipset].name,
- dev->base_addr,
- dev->dev_addr[0], dev->dev_addr[1],
- dev->dev_addr[2], dev->dev_addr[3],
- dev->dev_addr[4], dev->dev_addr[5], xid, dev->irq);
- }
-
- rtl8169_init_phy(dev, tp);
-
- if (tp->ecdev && ecdev_open(tp->ecdev)) {
- ecdev_withdraw(tp->ecdev);
- goto err_out_unmap_5;
- }
-
-out:
- return rc;
-
-err_out_unmap_5:
- iounmap(ioaddr);
-err_out_free_res_4:
- pci_release_regions(pdev);
-err_out_mwi_3:
- pci_clear_mwi(pdev);
-err_out_disable_2:
- pci_disable_device(pdev);
-err_out_free_dev_1:
- free_netdev(dev);
- goto out;
-}
-
-static void __devexit rtl8169_remove_one(struct pci_dev *pdev)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
- struct rtl8169_private *tp = netdev_priv(dev);
-
- flush_scheduled_work();
-
-
- if (tp->ecdev) {
- ecdev_close(tp->ecdev);
- ecdev_withdraw(tp->ecdev);
- }
- else {
- unregister_netdev (dev);
- }
-
- rtl8169_release_board(pdev, dev, tp->mmio_addr);
- pci_set_drvdata(pdev, NULL);
-}
-
-static void rtl8169_set_rxbufsize(struct rtl8169_private *tp,
- struct net_device *dev)
-{
- unsigned int mtu = dev->mtu;
-
- tp->rx_buf_sz = (mtu > RX_BUF_SIZE) ? mtu + ETH_HLEN + 8 : RX_BUF_SIZE;
-}
-
-static int rtl8169_open(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct pci_dev *pdev = tp->pci_dev;
- int retval = -ENOMEM;
-
-
- rtl8169_set_rxbufsize(tp, dev);
-
- /*
-	 * Rx and Tx descriptors need 256-byte alignment.
- * pci_alloc_consistent provides more.
- */
- tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES,
- &tp->TxPhyAddr);
- if (!tp->TxDescArray)
- goto out;
-
- tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES,
- &tp->RxPhyAddr);
- if (!tp->RxDescArray)
- goto err_free_tx_0;
-
- retval = rtl8169_init_ring(dev);
- if (retval < 0)
- goto err_free_rx_1;
-
- INIT_DELAYED_WORK(&tp->task, NULL);
-
- smp_mb();
-
- if (!tp->ecdev) {
- retval = request_irq(dev->irq, rtl8169_interrupt, IRQF_SHARED,
- dev->name, dev);
- if (retval < 0)
- goto err_release_ring_2;
- }
-
- rtl_hw_start(dev);
-
- rtl8169_request_timer(dev);
-
- rtl8169_check_link_status(dev, tp, tp->mmio_addr);
-out:
- return retval;
-
-err_release_ring_2:
- rtl8169_rx_clear(tp);
-err_free_rx_1:
- pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
- tp->RxPhyAddr);
-err_free_tx_0:
- pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
- tp->TxPhyAddr);
- goto out;
-}
-
-static void rtl8169_hw_reset(void __iomem *ioaddr)
-{
- /* Disable interrupts */
- rtl8169_irq_mask_and_ack(ioaddr);
-
- /* Reset the chipset */
- RTL_W8(ChipCmd, CmdReset);
-
- /* PCI commit */
- RTL_R8(ChipCmd);
-}
-
-static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
-{
- void __iomem *ioaddr = tp->mmio_addr;
- u32 cfg = rtl8169_rx_config;
-
- cfg |= (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
- RTL_W32(RxConfig, cfg);
-
- /* Set DMA burst size and Interframe Gap Time */
- RTL_W32(TxConfig, (TX_DMA_BURST << TxDMAShift) |
- (InterFrameGap << TxInterFrameGapShift));
-}
-
-static void rtl_hw_start(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned int i;
-
- /* Soft reset the chip. */
- RTL_W8(ChipCmd, CmdReset);
-
- /* Check that the chip has finished the reset. */
- for (i = 0; i < 100; i++) {
- if ((RTL_R8(ChipCmd) & CmdReset) == 0)
- break;
- msleep_interruptible(1);
- }
-
- tp->hw_start(dev);
-
- if(!tp->ecdev) {
- netif_start_queue(dev);
- }
-}
-
-
-void ec_poll(struct net_device *dev)
-{
- rtl8169_interrupt(0, dev);
-}
-
-static void rtl_set_rx_tx_desc_registers(struct rtl8169_private *tp,
- void __iomem *ioaddr)
-{
- /*
- * Magic spell: some iop3xx ARM board needs the TxDescAddrHigh
- * register to be written before TxDescAddrLow to work.
- * Switching from MMIO to I/O access fixes the issue as well.
- */
- RTL_W32(TxDescStartAddrHigh, ((u64) tp->TxPhyAddr) >> 32);
- RTL_W32(TxDescStartAddrLow, ((u64) tp->TxPhyAddr) & DMA_32BIT_MASK);
- RTL_W32(RxDescAddrHigh, ((u64) tp->RxPhyAddr) >> 32);
- RTL_W32(RxDescAddrLow, ((u64) tp->RxPhyAddr) & DMA_32BIT_MASK);
-}
-
-static u16 rtl_rw_cpluscmd(void __iomem *ioaddr)
-{
- u16 cmd;
-
- cmd = RTL_R16(CPlusCmd);
- RTL_W16(CPlusCmd, cmd);
- return cmd;
-}
-
-static void rtl_set_rx_max_size(void __iomem *ioaddr)
-{
- /* Low hurts. Let's disable the filtering. */
- RTL_W16(RxMaxSize, 16383);
-}
-
-static void rtl8169_set_magic_reg(void __iomem *ioaddr, unsigned mac_version)
-{
- struct {
- u32 mac_version;
- u32 clk;
- u32 val;
- } cfg2_info [] = {
- { RTL_GIGA_MAC_VER_05, PCI_Clock_33MHz, 0x000fff00 }, // 8110SCd
- { RTL_GIGA_MAC_VER_05, PCI_Clock_66MHz, 0x000fffff },
- { RTL_GIGA_MAC_VER_06, PCI_Clock_33MHz, 0x00ffff00 }, // 8110SCe
- { RTL_GIGA_MAC_VER_06, PCI_Clock_66MHz, 0x00ffffff }
- }, *p = cfg2_info;
- unsigned int i;
- u32 clk;
-
- clk = RTL_R8(Config2) & PCI_Clock_66MHz;
- for (i = 0; i < ARRAY_SIZE(cfg2_info); i++) {
- if ((p->mac_version == mac_version) && (p->clk == clk)) {
- RTL_W32(0x7c, p->val);
- break;
- }
- }
-}
-
-static void rtl_hw_start_8169(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- struct pci_dev *pdev = tp->pci_dev;
-
- if (tp->mac_version == RTL_GIGA_MAC_VER_05) {
- RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) | PCIMulRW);
- pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x08);
- }
-
- RTL_W8(Cfg9346, Cfg9346_Unlock);
- if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_04))
- RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
-
- RTL_W8(EarlyTxThres, EarlyTxThld);
-
- rtl_set_rx_max_size(ioaddr);
-
- rtl_set_rx_tx_config_registers(tp);
-
- tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
-
- if ((tp->mac_version == RTL_GIGA_MAC_VER_02) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_03)) {
- dprintk(KERN_INFO PFX "Set MAC Reg C+CR Offset 0xE0. "
- "Bit-3 and bit-14 MUST be 1\n");
- tp->cp_cmd |= (1 << 14);
- }
-
- RTL_W16(CPlusCmd, tp->cp_cmd);
-
- rtl8169_set_magic_reg(ioaddr, tp->mac_version);
-
- /*
- * Undocumented corner. Supposedly:
- * (TxTimer << 12) | (TxPackets << 8) | (RxTimer << 4) | RxPackets
- */
- RTL_W16(IntrMitigate, 0x0000);
-
- rtl_set_rx_tx_desc_registers(tp, ioaddr);
-
- RTL_W8(Cfg9346, Cfg9346_Lock);
-
- /* Initially a 10 us delay. Turned it into a PCI commit. - FR */
- RTL_R8(IntrMask);
-
- RTL_W32(RxMissed, 0);
-
- rtl_set_rx_mode(dev);
-
- /* no early-rx interrupts */
- RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
-
- /* Enable all known interrupts by setting the interrupt mask. */
- if(!tp->ecdev) {
- RTL_W16(IntrMask, tp->intr_event);
- }
-
- RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
-}
-
-static void rtl_hw_start_8168(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- struct pci_dev *pdev = tp->pci_dev;
- u8 ctl;
-
- RTL_W8(Cfg9346, Cfg9346_Unlock);
-
- RTL_W8(EarlyTxThres, EarlyTxThld);
-
- rtl_set_rx_max_size(ioaddr);
-
- rtl_set_rx_tx_config_registers(tp);
-
- tp->cp_cmd |= RTL_R16(CPlusCmd) | PktCntrDisable | INTT_1;
-
- RTL_W16(CPlusCmd, tp->cp_cmd);
-
- /* Tx performance tweak. */
- pci_read_config_byte(pdev, 0x69, &ctl);
- ctl = (ctl & ~0x70) | 0x50;
- pci_write_config_byte(pdev, 0x69, ctl);
-
- RTL_W16(IntrMitigate, 0x5151);
-
- /* Work around for RxFIFO overflow. */
- if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
- tp->intr_event |= RxFIFOOver | PCSTimeout;
- tp->intr_event &= ~RxOverflow;
- }
-
- rtl_set_rx_tx_desc_registers(tp, ioaddr);
-
- RTL_W8(Cfg9346, Cfg9346_Lock);
-
- RTL_R8(IntrMask);
-
- RTL_W32(RxMissed, 0);
-
- rtl_set_rx_mode(dev);
-
- RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
-
- RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
-
- if(!tp->ecdev) {
- RTL_W16(IntrMask, tp->intr_event);
- }
-}
-
-static void rtl_hw_start_8101(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- struct pci_dev *pdev = tp->pci_dev;
-
- if (tp->mac_version == RTL_GIGA_MAC_VER_13) {
- pci_write_config_word(pdev, 0x68, 0x00);
- pci_write_config_word(pdev, 0x69, 0x08);
- }
-
- RTL_W8(Cfg9346, Cfg9346_Unlock);
-
- RTL_W8(EarlyTxThres, EarlyTxThld);
-
- rtl_set_rx_max_size(ioaddr);
-
- tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
-
- RTL_W16(CPlusCmd, tp->cp_cmd);
-
- RTL_W16(IntrMitigate, 0x0000);
-
- rtl_set_rx_tx_desc_registers(tp, ioaddr);
-
- RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
- rtl_set_rx_tx_config_registers(tp);
-
- RTL_W8(Cfg9346, Cfg9346_Lock);
-
- RTL_R8(IntrMask);
-
- RTL_W32(RxMissed, 0);
-
- rtl_set_rx_mode(dev);
-
- RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
-
- RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xf000);
-
- if(!tp->ecdev) {
- RTL_W16(IntrMask, tp->intr_event);
- }
-}
-
-static int rtl8169_change_mtu(struct net_device *dev, int new_mtu)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- int ret = 0;
-
- if (new_mtu < ETH_ZLEN || new_mtu > SafeMtu)
- return -EINVAL;
-
- dev->mtu = new_mtu;
-
- if (!netif_running(dev))
- goto out;
-
- rtl8169_down(dev);
-
- rtl8169_set_rxbufsize(tp, dev);
-
- ret = rtl8169_init_ring(dev);
- if (ret < 0)
- goto out;
-
- netif_poll_enable(dev);
-
- rtl_hw_start(dev);
-
- rtl8169_request_timer(dev);
-
-out:
- return ret;
-}
-
-static inline void rtl8169_make_unusable_by_asic(struct RxDesc *desc)
-{
- desc->addr = 0x0badbadbadbadbadull;
- desc->opts1 &= ~cpu_to_le32(DescOwn | RsvdMask);
-}
-
-static void rtl8169_free_rx_skb(struct rtl8169_private *tp,
- struct sk_buff **sk_buff, struct RxDesc *desc)
-{
- struct pci_dev *pdev = tp->pci_dev;
-
- pci_unmap_single(pdev, le64_to_cpu(desc->addr), tp->rx_buf_sz,
- PCI_DMA_FROMDEVICE);
- if(!tp->ecdev) {
- dev_kfree_skb(*sk_buff);
- *sk_buff = NULL;
- }
- rtl8169_make_unusable_by_asic(desc);
-}
-
-static inline void rtl8169_mark_to_asic(struct RxDesc *desc, u32 rx_buf_sz)
-{
- u32 eor = le32_to_cpu(desc->opts1) & RingEnd;
-
- desc->opts1 = cpu_to_le32(DescOwn | eor | rx_buf_sz);
-}
-
-static inline void rtl8169_map_to_asic(struct RxDesc *desc, dma_addr_t mapping,
- u32 rx_buf_sz)
-{
- desc->addr = cpu_to_le64(mapping);
- wmb();
- rtl8169_mark_to_asic(desc, rx_buf_sz);
-}
-
-static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev,
- struct net_device *dev,
- struct RxDesc *desc, int rx_buf_sz,
- unsigned int align)
-{
- struct sk_buff *skb;
- dma_addr_t mapping;
- unsigned int pad;
-
- pad = align ? align : NET_IP_ALIGN;
-
- skb = netdev_alloc_skb(dev, rx_buf_sz + pad);
- if (!skb)
- goto err_out;
-
- skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad);
-
- mapping = pci_map_single(pdev, skb->data, rx_buf_sz,
- PCI_DMA_FROMDEVICE);
-
- rtl8169_map_to_asic(desc, mapping, rx_buf_sz);
-out:
- return skb;
-
-err_out:
- rtl8169_make_unusable_by_asic(desc);
- goto out;
-}
-
-static void rtl8169_rx_clear(struct rtl8169_private *tp)
-{
- unsigned int i;
-
- for (i = 0; i < NUM_RX_DESC; i++) {
- if (tp->Rx_skbuff[i]) {
- rtl8169_free_rx_skb(tp, tp->Rx_skbuff + i,
- tp->RxDescArray + i);
- }
- }
-}
-
-static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev,
- u32 start, u32 end)
-{
- u32 cur;
-
- for (cur = start; end - cur != 0; cur++) {
- struct sk_buff *skb;
- unsigned int i = cur % NUM_RX_DESC;
-
- WARN_ON((s32)(end - cur) < 0);
-
- if (tp->Rx_skbuff[i])
- continue;
-
- skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev,
- tp->RxDescArray + i,
- tp->rx_buf_sz, tp->align);
- if (!skb)
- break;
-
- tp->Rx_skbuff[i] = skb;
- }
- return cur - start;
-}
-
-static inline void rtl8169_mark_as_last_descriptor(struct RxDesc *desc)
-{
- desc->opts1 |= cpu_to_le32(RingEnd);
-}
-
-static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
-{
- tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;
-}
-
-static int rtl8169_init_ring(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
-
- rtl8169_init_ring_indexes(tp);
-
- memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info));
- memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *));
-
- if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)
- goto err_out;
-
- rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1);
-
- return 0;
-
-err_out:
- rtl8169_rx_clear(tp);
- return -ENOMEM;
-}
-
-static void rtl8169_unmap_tx_skb(struct pci_dev *pdev, struct ring_info *tx_skb,
- struct TxDesc *desc)
-{
- unsigned int len = tx_skb->len;
-
- pci_unmap_single(pdev, le64_to_cpu(desc->addr), len, PCI_DMA_TODEVICE);
- desc->opts1 = 0x00;
- desc->opts2 = 0x00;
- desc->addr = 0x00;
- tx_skb->len = 0;
-}
-
-static void rtl8169_tx_clear(struct rtl8169_private *tp)
-{
- unsigned int i;
-
- for (i = tp->dirty_tx; i < tp->dirty_tx + NUM_TX_DESC; i++) {
- unsigned int entry = i % NUM_TX_DESC;
- struct ring_info *tx_skb = tp->tx_skb + entry;
- unsigned int len = tx_skb->len;
-
- if (len) {
- struct sk_buff *skb = tx_skb->skb;
-
- rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb,
- tp->TxDescArray + entry);
- if (skb) {
- if(!tp->ecdev) {
- dev_kfree_skb(skb);
- tx_skb->skb = NULL;
- }
- }
- tp->stats.tx_dropped++;
- }
- }
- tp->cur_tx = tp->dirty_tx = 0;
-}
-
-static void rtl8169_schedule_work(struct net_device *dev, work_func_t task)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
-
- PREPARE_DELAYED_WORK(&tp->task, task);
- schedule_delayed_work(&tp->task, 4);
-}
-
-static void rtl8169_wait_for_quiescence(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
-
- synchronize_irq(dev->irq);
-
- /* Wait for any pending NAPI task to complete */
- netif_poll_disable(dev);
-
- rtl8169_irq_mask_and_ack(ioaddr);
-
- netif_poll_enable(dev);
-}
-
-static void rtl8169_reinit_task(struct work_struct *work)
-{
- struct rtl8169_private *tp =
- container_of(work, struct rtl8169_private, task.work);
- struct net_device *dev = tp->dev;
- int ret;
-
- rtnl_lock();
-
- if (!netif_running(dev))
- goto out_unlock;
-
- rtl8169_wait_for_quiescence(dev);
- rtl8169_close(dev);
-
- ret = rtl8169_open(dev);
- if (unlikely(ret < 0)) {
- if (net_ratelimit() && netif_msg_drv(tp)) {
- printk(PFX KERN_ERR "%s: reinit failure (status = %d)."
- " Rescheduling.\n", dev->name, ret);
- }
- rtl8169_schedule_work(dev, rtl8169_reinit_task);
- }
-
-out_unlock:
- rtnl_unlock();
-}
-
-static void rtl8169_reset_task(struct work_struct *work)
-{
- struct rtl8169_private *tp =
- container_of(work, struct rtl8169_private, task.work);
- struct net_device *dev = tp->dev;
-
- rtnl_lock();
-
- if (!netif_running(dev))
- goto out_unlock;
-
- rtl8169_wait_for_quiescence(dev);
-
- rtl8169_rx_interrupt(dev, tp, tp->mmio_addr);
- rtl8169_tx_clear(tp);
-
- if (tp->dirty_rx == tp->cur_rx) {
- rtl8169_init_ring_indexes(tp);
- rtl_hw_start(dev);
- netif_wake_queue(dev);
- } else {
- if (net_ratelimit() && netif_msg_intr(tp)) {
- printk(PFX KERN_EMERG "%s: Rx buffers shortage\n",
- dev->name);
- }
- rtl8169_schedule_work(dev, rtl8169_reset_task);
- }
-
-out_unlock:
- rtnl_unlock();
-}
-
-static void rtl8169_tx_timeout(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
-
- rtl8169_hw_reset(tp->mmio_addr);
-
- /* Let's wait a bit while any (async) irq lands on */
- rtl8169_schedule_work(dev, rtl8169_reset_task);
-}
-
-static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
- u32 opts1)
-{
- struct skb_shared_info *info = skb_shinfo(skb);
- unsigned int cur_frag, entry;
- struct TxDesc * uninitialized_var(txd);
-
- entry = tp->cur_tx;
- for (cur_frag = 0; cur_frag < info->nr_frags; cur_frag++) {
- skb_frag_t *frag = info->frags + cur_frag;
- dma_addr_t mapping;
- u32 status, len;
- void *addr;
-
- entry = (entry + 1) % NUM_TX_DESC;
-
- txd = tp->TxDescArray + entry;
- len = frag->size;
- addr = ((void *) page_address(frag->page)) + frag->page_offset;
- mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE);
-
- /* anti gcc 2.95.3 bugware (sic) */
- status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
-
- txd->opts1 = cpu_to_le32(status);
- txd->addr = cpu_to_le64(mapping);
-
- tp->tx_skb[entry].len = len;
- }
-
- if (cur_frag) {
- tp->tx_skb[entry].skb = skb;
- txd->opts1 |= cpu_to_le32(LastFrag);
- }
-
- return cur_frag;
-}
-
-static inline u32 rtl8169_tso_csum(struct sk_buff *skb, struct net_device *dev)
-{
- if (dev->features & NETIF_F_TSO) {
- u32 mss = skb_shinfo(skb)->gso_size;
-
- if (mss)
- return LargeSend | ((mss & MSSMask) << MSSShift);
- }
- if (skb->ip_summed == CHECKSUM_PARTIAL) {
- const struct iphdr *ip = ip_hdr(skb);
-
- if (ip->protocol == IPPROTO_TCP)
- return IPCS | TCPCS;
- else if (ip->protocol == IPPROTO_UDP)
- return IPCS | UDPCS;
- WARN_ON(1); /* we need a WARN() */
- }
- return 0;
-}
-
-static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- unsigned int frags, entry = tp->cur_tx % NUM_TX_DESC;
- struct TxDesc *txd = tp->TxDescArray + entry;
- void __iomem *ioaddr = tp->mmio_addr;
- dma_addr_t mapping;
- u32 status, len;
- u32 opts1;
- int ret = NETDEV_TX_OK;
-
- if (unlikely(TX_BUFFS_AVAIL(tp) < skb_shinfo(skb)->nr_frags)) {
- if (netif_msg_drv(tp)) {
- printk(KERN_ERR
- "%s: BUG! Tx Ring full when queue awake!\n",
- dev->name);
- }
- goto err_stop;
- }
-
- if (unlikely(le32_to_cpu(txd->opts1) & DescOwn))
- goto err_stop;
-
- opts1 = DescOwn | rtl8169_tso_csum(skb, dev);
-
- frags = rtl8169_xmit_frags(tp, skb, opts1);
- if (frags) {
- len = skb_headlen(skb);
- opts1 |= FirstFrag;
- } else {
- len = skb->len;
-
- if (unlikely(len < ETH_ZLEN)) {
- if (skb_padto(skb, ETH_ZLEN))
- goto err_update_stats;
- len = ETH_ZLEN;
- }
-
- opts1 |= FirstFrag | LastFrag;
- tp->tx_skb[entry].skb = skb;
- }
-
- mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE);
-
- tp->tx_skb[entry].len = len;
- txd->addr = cpu_to_le64(mapping);
- txd->opts2 = cpu_to_le32(rtl8169_tx_vlan_tag(tp, skb));
-
- wmb();
-
- /* anti gcc 2.95.3 bugware (sic) */
- status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
- txd->opts1 = cpu_to_le32(status);
-
- dev->trans_start = jiffies;
-
- tp->cur_tx += frags + 1;
-
- smp_wmb();
-
- RTL_W8(TxPoll, NPQ); /* set polling bit */
-
- if(!tp->ecdev) {
- if (TX_BUFFS_AVAIL(tp) < MAX_SKB_FRAGS) {
- netif_stop_queue(dev);
- smp_rmb();
- if (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)
- netif_wake_queue(dev);
- }
- }
-
-out:
- return ret;
-
-err_stop:
- if(!tp->ecdev) {
- netif_stop_queue(dev);
- }
- ret = NETDEV_TX_BUSY;
-err_update_stats:
- tp->stats.tx_dropped++;
- goto out;
-}
-
-static void rtl8169_pcierr_interrupt(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct pci_dev *pdev = tp->pci_dev;
- void __iomem *ioaddr = tp->mmio_addr;
- u16 pci_status, pci_cmd;
-
- pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
- pci_read_config_word(pdev, PCI_STATUS, &pci_status);
-
- if (netif_msg_intr(tp)) {
- printk(KERN_ERR
- "%s: PCI error (cmd = 0x%04x, status = 0x%04x).\n",
- dev->name, pci_cmd, pci_status);
- }
-
- /*
- * The recovery sequence below admits a very elaborated explanation:
- * - it seems to work;
- * - I did not see what else could be done;
- * - it makes iop3xx happy.
- *
- * Feel free to adjust to your needs.
- */
- if (pdev->broken_parity_status)
- pci_cmd &= ~PCI_COMMAND_PARITY;
- else
- pci_cmd |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY;
-
- pci_write_config_word(pdev, PCI_COMMAND, pci_cmd);
-
- pci_write_config_word(pdev, PCI_STATUS,
- pci_status & (PCI_STATUS_DETECTED_PARITY |
- PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_REC_MASTER_ABORT |
- PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_SIG_TARGET_ABORT));
-
- /* The infamous DAC f*ckup only happens at boot time */
- if ((tp->cp_cmd & PCIDAC) && !tp->dirty_rx && !tp->cur_rx) {
- if (netif_msg_intr(tp))
- printk(KERN_INFO "%s: disabling PCI DAC.\n", dev->name);
- tp->cp_cmd &= ~PCIDAC;
- RTL_W16(CPlusCmd, tp->cp_cmd);
- dev->features &= ~NETIF_F_HIGHDMA;
- }
-
- rtl8169_hw_reset(ioaddr);
-
- rtl8169_schedule_work(dev, rtl8169_reinit_task);
-}
-
-static void rtl8169_tx_interrupt(struct net_device *dev,
- struct rtl8169_private *tp,
- void __iomem *ioaddr)
-{
- unsigned int dirty_tx, tx_left;
-
- dirty_tx = tp->dirty_tx;
- smp_rmb();
- tx_left = tp->cur_tx - dirty_tx;
-
- while (tx_left > 0) {
- unsigned int entry = dirty_tx % NUM_TX_DESC;
- struct ring_info *tx_skb = tp->tx_skb + entry;
- u32 len = tx_skb->len;
- u32 status;
-
- rmb();
- status = le32_to_cpu(tp->TxDescArray[entry].opts1);
- if (status & DescOwn)
- break;
-
- tp->stats.tx_bytes += len;
- tp->stats.tx_packets++;
-
- rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb, tp->TxDescArray + entry);
-
- if (status & LastFrag) {
- if(!tp->ecdev) {
- dev_kfree_skb_irq(tx_skb->skb);
- tx_skb->skb = NULL;
- }
- }
- dirty_tx++;
- tx_left--;
- }
-
- if (tp->dirty_tx != dirty_tx) {
- tp->dirty_tx = dirty_tx;
- smp_wmb();
-
- if (!tp->ecdev) {
- if (netif_queue_stopped(dev) &&
- (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) {
- netif_wake_queue(dev);
- }
- }
- /*
- * 8168 hack: TxPoll requests are lost when the Tx packets are
- * too close. Let's kick an extra TxPoll request when a burst
- * of start_xmit activity is detected (if it is not detected,
- * it is slow enough). -- FR
- */
- smp_rmb();
- if (tp->cur_tx != dirty_tx)
- RTL_W8(TxPoll, NPQ);
- }
-}
-
-static inline int rtl8169_fragmented_frame(u32 status)
-{
- return (status & (FirstFrag | LastFrag)) != (FirstFrag | LastFrag);
-}
-
-static inline void rtl8169_rx_csum(struct sk_buff *skb, struct RxDesc *desc)
-{
- u32 opts1 = le32_to_cpu(desc->opts1);
- u32 status = opts1 & RxProtoMask;
-
- if (((status == RxProtoTCP) && !(opts1 & TCPFail)) ||
- ((status == RxProtoUDP) && !(opts1 & UDPFail)) ||
- ((status == RxProtoIP) && !(opts1 & IPFail)))
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- else
- skb->ip_summed = CHECKSUM_NONE;
-}
-
-static inline bool rtl8169_try_rx_copy(struct sk_buff **sk_buff,
- struct rtl8169_private *tp, int pkt_size,
- dma_addr_t addr)
-{
- struct sk_buff *skb;
- bool done = false;
-
- if (pkt_size >= rx_copybreak)
- goto out;
-
- skb = netdev_alloc_skb(tp->dev, pkt_size + NET_IP_ALIGN);
- if (!skb)
- goto out;
-
- pci_dma_sync_single_for_cpu(tp->pci_dev, addr, pkt_size,
- PCI_DMA_FROMDEVICE);
- skb_reserve(skb, NET_IP_ALIGN);
- skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size);
- *sk_buff = skb;
- done = true;
-out:
- return done;
-}
-
-static int rtl8169_rx_interrupt(struct net_device *dev,
- struct rtl8169_private *tp,
- void __iomem *ioaddr)
-{
- unsigned int cur_rx, rx_left;
- unsigned int delta, count;
-
- cur_rx = tp->cur_rx;
- rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;
- rx_left = rtl8169_rx_quota(rx_left, (u32) dev->quota);
-
- for (; rx_left > 0; rx_left--, cur_rx++) {
- unsigned int entry = cur_rx % NUM_RX_DESC;
- struct RxDesc *desc = tp->RxDescArray + entry;
- u32 status;
-
- rmb();
- status = le32_to_cpu(desc->opts1);
-
- if (status & DescOwn)
- break;
- if (unlikely(status & RxRES)) {
- if(!tp->ecdev) {
- if (netif_msg_rx_err(tp)) {
- printk(KERN_INFO
- "%s: Rx ERROR. status = %08x\n",
- dev->name, status);
- }
- }
- tp->stats.rx_errors++;
- if (status & (RxRWT | RxRUNT))
- tp->stats.rx_length_errors++;
- if (status & RxCRC)
- tp->stats.rx_crc_errors++;
- if (status & RxFOVF) {
- rtl8169_schedule_work(dev, rtl8169_reset_task);
- tp->stats.rx_fifo_errors++;
- }
- rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
- } else {
- struct sk_buff *skb = tp->Rx_skbuff[entry];
- dma_addr_t addr = le64_to_cpu(desc->addr);
- int pkt_size = (status & 0x00001FFF) - 4;
- struct pci_dev *pdev = tp->pci_dev;
-
- /*
- * The driver does not support incoming fragmented
- * frames. They are seen as a symptom of over-mtu
- * sized frames.
- */
- if (unlikely(rtl8169_fragmented_frame(status))) {
- tp->stats.rx_dropped++;
- tp->stats.rx_length_errors++;
- rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
- continue;
- }
-
- rtl8169_rx_csum(skb, desc);
-
- if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) {
- pci_dma_sync_single_for_device(pdev, addr,
- pkt_size, PCI_DMA_FROMDEVICE);
- rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
- } else {
- pci_unmap_single(pdev, addr, pkt_size,
- PCI_DMA_FROMDEVICE);
- tp->Rx_skbuff[entry] = NULL;
- }
-
-
- if (tp->ecdev) {
- ecdev_receive(tp->ecdev, skb->data, pkt_size);
- dev->last_rx = jiffies;
- tp->stats.rx_bytes += pkt_size;
- tp->stats.rx_packets++;
- }
- else {
-
- skb_put(skb, pkt_size);
- skb->protocol = eth_type_trans(skb, dev);
-
- if (rtl8169_rx_vlan_skb(tp, desc, skb) < 0)
- rtl8169_rx_skb(skb);
-
- dev->last_rx = jiffies;
- tp->stats.rx_bytes += pkt_size;
- tp->stats.rx_packets++;
- }
- }
-
-		/* Workaround for AMD platform. */
- if ((desc->opts2 & 0xfffe000) &&
- (tp->mac_version == RTL_GIGA_MAC_VER_05)) {
- desc->opts2 = 0;
- cur_rx++;
- }
- }
-
- count = cur_rx - tp->cur_rx;
- tp->cur_rx = cur_rx;
-
- delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);
- if (!delta && count && netif_msg_intr(tp))
- printk(KERN_INFO "%s: no Rx buffer allocated\n", dev->name);
- tp->dirty_rx += delta;
-
- /*
- * FIXME: until there is periodic timer to try and refill the ring,
- * a temporary shortage may definitely kill the Rx process.
- * - disable the asic to try and avoid an overflow and kick it again
- * after refill ?
- * - how do others driver handle this condition (Uh oh...).
- */
- if ((tp->dirty_rx + NUM_RX_DESC == tp->cur_rx) && netif_msg_intr(tp))
- printk(KERN_EMERG "%s: Rx buffers exhausted\n", dev->name);
-
- return count;
-}
-
-static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
-{
- struct net_device *dev = dev_instance;
- struct rtl8169_private *tp = netdev_priv(dev);
- int boguscnt = max_interrupt_work;
- void __iomem *ioaddr = tp->mmio_addr;
- int status;
- int handled = 0;
-
- do {
- if (tp->ecdev) {
- status = RTL_R16(IntrStatus);
- } else {
- status = RTL_R16(IntrStatus);
-
- /* hotplug/major error/no more work/shared irq */
- if ((status == 0xFFFF) || !status)
- break;
-
- handled = 1;
-
- if (unlikely(!netif_running(dev))) {
- rtl8169_asic_down(ioaddr);
- goto out;
- }
- status &= tp->intr_mask;
- RTL_W16(IntrStatus,
- (status & RxFIFOOver) ? (status | RxOverflow) : status);
-
- if (!(status & tp->intr_event))
- break;
-
- /* Work around for rx fifo overflow */
- if (unlikely(status & RxFIFOOver) &&
- (tp->mac_version == RTL_GIGA_MAC_VER_11)) {
- netif_stop_queue(dev);
- rtl8169_tx_timeout(dev);
- break;
- }
-
- if (unlikely(status & SYSErr)) {
- rtl8169_pcierr_interrupt(dev);
- break;
- }
- }
-
-
- if (status & LinkChg)
- rtl8169_check_link_status(dev, tp, ioaddr);
-
-#ifdef CONFIG_R8169_NAPI
- if (status & tp->napi_event) {
- RTL_W16(IntrMask, tp->intr_event & ~tp->napi_event);
- tp->intr_mask = ~tp->napi_event;
-
- if (likely(netif_rx_schedule_prep(dev)))
- __netif_rx_schedule(dev);
- else if (netif_msg_intr(tp)) {
- printk(KERN_INFO "%s: interrupt %04x in poll\n",
- dev->name, status);
- }
- }
- break;
-#else
- /* Rx interrupt */
- if (status & (RxOK | RxOverflow | RxFIFOOver))
- rtl8169_rx_interrupt(dev, tp, ioaddr);
-
- /* Tx interrupt */
- if (status & (TxOK | TxErr))
- rtl8169_tx_interrupt(dev, tp, ioaddr);
-#endif
-
- boguscnt--;
- } while (boguscnt > 0);
-
- if (!tp->ecdev) {
- if (boguscnt <= 0) {
- if (netif_msg_intr(tp) && net_ratelimit() ) {
- printk(KERN_WARNING
- "%s: Too much work at interrupt!\n", dev->name);
- }
- /* Clear all interrupt sources. */
- RTL_W16(IntrStatus, 0xffff);
- }
- }
-out:
- return IRQ_RETVAL(handled);
-}
-
-#ifdef CONFIG_R8169_NAPI
-static int rtl8169_poll(struct net_device *dev, int *budget)
-{
- unsigned int work_done, work_to_do = min(*budget, dev->quota);
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
-
- work_done = rtl8169_rx_interrupt(dev, tp, ioaddr);
- rtl8169_tx_interrupt(dev, tp, ioaddr);
-
- *budget -= work_done;
- dev->quota -= work_done;
-
- if (work_done < work_to_do) {
- if (!tp->ecdev) {
- netif_rx_complete(dev);
- }
- tp->intr_mask = 0xffff;
- /*
- * 20040426: the barrier is not strictly required but the
- * behavior of the irq handler could be less predictable
- * without it. Btw, the lack of flush for the posted pci
- * write is safe - FR
- */
- smp_wmb();
- if(!tp->ecdev) {
- RTL_W16(IntrMask, tp->intr_event);
- }
- }
-
- return (work_done >= work_to_do);
-}
-#endif
-
-static void rtl8169_down(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned int poll_locked = 0;
- unsigned int intrmask;
-
- rtl8169_delete_timer(dev);
-
-
- if (!tp->ecdev) {
- netif_stop_queue(dev);
- }
-
-core_down:
- spin_lock_irq(&tp->lock);
-
- rtl8169_asic_down(ioaddr);
-
- /* Update the error counts. */
- tp->stats.rx_missed_errors += RTL_R32(RxMissed);
- RTL_W32(RxMissed, 0);
-
- spin_unlock_irq(&tp->lock);
-
- synchronize_irq(dev->irq);
-
- if (!poll_locked) {
- if (!tp->ecdev) {
- netif_poll_disable(dev);
- }
- poll_locked++;
- }
-
- /* Give a racing hard_start_xmit a few cycles to complete. */
- synchronize_sched(); /* FIXME: should this be synchronize_irq()? */
-
- /*
- * And now for the 50k$ question: are IRQ disabled or not ?
- *
- * Two paths lead here:
- * 1) dev->close
- * -> netif_running() is available to sync the current code and the
- * IRQ handler. See rtl8169_interrupt for details.
- * 2) dev->change_mtu
- * -> rtl8169_poll can not be issued again and re-enable the
- * interruptions. Let's simply issue the IRQ down sequence again.
- *
-	 * No loop if hotplugged or major error (0xffff).
- */
- intrmask = RTL_R16(IntrMask);
- if (intrmask && (intrmask != 0xffff))
- goto core_down;
-
- rtl8169_tx_clear(tp);
-
- rtl8169_rx_clear(tp);
-}
-
-static int rtl8169_close(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- struct pci_dev *pdev = tp->pci_dev;
-
- rtl8169_down(dev);
-
- if (!tp->ecdev) {
- free_irq(dev->irq, dev);
- netif_poll_enable(dev);
- }
-
- pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
- tp->RxPhyAddr);
- pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
- tp->TxPhyAddr);
- tp->TxDescArray = NULL;
- tp->RxDescArray = NULL;
-
- return 0;
-}
-
-static void rtl_set_rx_mode(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned long flags;
- u32 mc_filter[2]; /* Multicast hash filter */
- int rx_mode;
- u32 tmp = 0;
-
- if (dev->flags & IFF_PROMISC) {
- /* Unconditionally log net taps. */
- if (!tp->ecdev) {
- if (netif_msg_link(tp)) {
- printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n",
- dev->name);
- }
- }
- rx_mode =
- AcceptBroadcast | AcceptMulticast | AcceptMyPhys |
- AcceptAllPhys;
- mc_filter[1] = mc_filter[0] = 0xffffffff;
- } else if ((dev->mc_count > multicast_filter_limit)
- || (dev->flags & IFF_ALLMULTI)) {
- /* Too many to filter perfectly -- accept all multicasts. */
- rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
- mc_filter[1] = mc_filter[0] = 0xffffffff;
- } else {
- struct dev_mc_list *mclist;
- unsigned int i;
-
- rx_mode = AcceptBroadcast | AcceptMyPhys;
- mc_filter[1] = mc_filter[0] = 0;
- for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
- i++, mclist = mclist->next) {
- int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
- mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
- rx_mode |= AcceptMulticast;
- }
- }
-
- spin_lock_irqsave(&tp->lock, flags);
-
- tmp = rtl8169_rx_config | rx_mode |
- (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
-
- if ((tp->mac_version == RTL_GIGA_MAC_VER_11) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_12) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_13) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
- (tp->mac_version == RTL_GIGA_MAC_VER_15)) {
- mc_filter[0] = 0xffffffff;
- mc_filter[1] = 0xffffffff;
- }
-
- RTL_W32(MAR0 + 0, mc_filter[0]);
- RTL_W32(MAR0 + 4, mc_filter[1]);
-
- RTL_W32(RxConfig, tmp);
-
- spin_unlock_irqrestore(&tp->lock, flags);
-}
-
-/**
- * rtl8169_get_stats - Get rtl8169 read/write statistics
- * @dev: The Ethernet Device to get statistics for
- *
- * Get TX/RX statistics for rtl8169
- */
-static struct net_device_stats *rtl8169_get_stats(struct net_device *dev)
-{
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
- unsigned long flags;
-
- if (netif_running(dev)) {
- spin_lock_irqsave(&tp->lock, flags);
- tp->stats.rx_missed_errors += RTL_R32(RxMissed);
- RTL_W32(RxMissed, 0);
- spin_unlock_irqrestore(&tp->lock, flags);
- }
-
- return &tp->stats;
-}
-
-#ifdef CONFIG_PM
-
-static int rtl8169_suspend(struct pci_dev *pdev, pm_message_t state)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
- struct rtl8169_private *tp = netdev_priv(dev);
- void __iomem *ioaddr = tp->mmio_addr;
-
- if (!netif_running(dev))
- goto out_pci_suspend;
-
- netif_device_detach(dev);
- netif_stop_queue(dev);
-
- spin_lock_irq(&tp->lock);
-
- rtl8169_asic_down(ioaddr);
-
- tp->stats.rx_missed_errors += RTL_R32(RxMissed);
- RTL_W32(RxMissed, 0);
-
- spin_unlock_irq(&tp->lock);
-
-out_pci_suspend:
- pci_save_state(pdev);
- pci_enable_wake(pdev, pci_choose_state(pdev, state), tp->wol_enabled);
- pci_set_power_state(pdev, pci_choose_state(pdev, state));
-
- return 0;
-}
-
-static int rtl8169_resume(struct pci_dev *pdev)
-{
- struct net_device *dev = pci_get_drvdata(pdev);
-
- pci_set_power_state(pdev, PCI_D0);
- pci_restore_state(pdev);
- pci_enable_wake(pdev, PCI_D0, 0);
-
- if (!netif_running(dev))
- goto out;
-
- netif_device_attach(dev);
-
- rtl8169_schedule_work(dev, rtl8169_reset_task);
-out:
- return 0;
-}
-
-#endif /* CONFIG_PM */
-
-static struct pci_driver rtl8169_pci_driver = {
- .name = MODULENAME,
- .id_table = rtl8169_pci_tbl,
- .probe = rtl8169_init_one,
- .remove = __devexit_p(rtl8169_remove_one),
-#ifdef CONFIG_PM
- .suspend = rtl8169_suspend,
- .resume = rtl8169_resume,
-#endif
-};
-
-static int __init rtl8169_init_module(void)
-{
- return pci_register_driver(&rtl8169_pci_driver);
-}
-
-static void __exit rtl8169_cleanup_module(void)
-{
- pci_unregister_driver(&rtl8169_pci_driver);
-}
-
-module_init(rtl8169_init_module);
-module_exit(rtl8169_cleanup_module);
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/devices/r8169-2.6.24-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -0,0 +1,3314 @@
+/*
+ * r8169.c: RealTek 8169/8168/8101 ethernet driver.
+ *
+ * Copyright (c) 2002 ShuChen <shuchen@realtek.com.tw>
+ * Copyright (c) 2003 - 2007 Francois Romieu <romieu@fr.zoreil.com>
+ * Copyright (c) a lot of people too. Please respect their work.
+ *
+ * See MAINTAINERS file for support contact information.
+ *
+ * vim: noexpandtab
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/if_vlan.h>
+#include <linux/crc32.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/init.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include "../globals.h"
+#include "ecdev.h"
+
+#ifdef CONFIG_R8169_NAPI
+#define NAPI_SUFFIX "-NAPI"
+#else
+#define NAPI_SUFFIX ""
+#endif
+
+#define RTL8169_VERSION "2.2LK" NAPI_SUFFIX
+#define MODULENAME "ec_r8169"
+#define PFX MODULENAME ": "
+
+#ifdef RTL8169_DEBUG
+#define assert(expr) \
+ if (!(expr)) { \
+ printk( "Assertion failed! %s,%s,%s,line=%d\n", \
+ #expr,__FILE__,__FUNCTION__,__LINE__); \
+ }
+#define dprintk(fmt, args...) \
+ do { printk(KERN_DEBUG PFX fmt, ## args); } while (0)
+#else
+#define assert(expr) do {} while (0)
+#define dprintk(fmt, args...) do {} while (0)
+#endif /* RTL8169_DEBUG */
+
+#define R8169_MSG_DEFAULT \
+ (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN)
+
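+/* TX_BUFFS_AVAIL: number of free Tx descriptors. One slot is always kept
+ * unused, presumably so that a completely full ring (cur_tx far ahead of
+ * dirty_tx) can be distinguished from an empty one. */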
+#define TX_BUFFS_AVAIL(tp) \
+ (tp->dirty_tx + NUM_TX_DESC - tp->cur_tx - 1)
+
+#ifdef CONFIG_R8169_NAPI
+#define rtl8169_rx_skb netif_receive_skb
+#define rtl8169_rx_hwaccel_skb vlan_hwaccel_receive_skb
+#define rtl8169_rx_quota(count, quota) min(count, quota)
+#else
+#define rtl8169_rx_skb netif_rx
+#define rtl8169_rx_hwaccel_skb vlan_hwaccel_rx
+#define rtl8169_rx_quota(count, quota) count
+#endif
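+/* With NAPI, received frames go through netif_receive_skb() and the Rx work
+ * per poll is bounded by the quota; without NAPI, netif_rx() is used and
+ * rtl8169_rx_quota() imposes no limit. When the device is claimed by an
+ * EtherCAT master, neither path is taken; frames are handed directly to the
+ * master via ecdev_receive() instead. */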
+
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+static const int max_interrupt_work = 20;
+
+/* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
+ The RTL chips use a 64 element hash table based on the Ethernet CRC. */
+static const int multicast_filter_limit = 32;
+
+/* MAC address length */
+#define MAC_ADDR_LEN 6
+
+#define RX_FIFO_THRESH 7 /* 7 means NO threshold, Rx buffer level before first PCI xfer. */
+#define RX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define TX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define EarlyTxThld 0x3F /* 0x3F means NO early transmit */
+#define RxPacketMaxSize 0x3FE8 /* 16K - 1 - ETH_HLEN - VLAN - CRC... */
+#define SafeMtu 0x1c20 /* ... actually life sucks beyond ~7k */
+#define InterFrameGap 0x03 /* 3 means InterFrameGap = the shortest one */
+
+#define R8169_REGS_SIZE 256
+#define R8169_NAPI_WEIGHT 64
+#define NUM_TX_DESC 64 /* Number of Tx descriptor registers */
+#define NUM_RX_DESC 256 /* Number of Rx descriptor registers */
+#define RX_BUF_SIZE 1536 /* Rx Buffer size */
+#define R8169_TX_RING_BYTES (NUM_TX_DESC * sizeof(struct TxDesc))
+#define R8169_RX_RING_BYTES (NUM_RX_DESC * sizeof(struct RxDesc))
+
+#define RTL8169_TX_TIMEOUT (6*HZ)
+#define RTL8169_PHY_TIMEOUT (10*HZ)
+
+/* write/read MMIO register */
+#define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg))
+#define RTL_W16(reg, val16) writew ((val16), ioaddr + (reg))
+#define RTL_W32(reg, val32) writel ((val32), ioaddr + (reg))
+#define RTL_R8(reg) readb (ioaddr + (reg))
+#define RTL_R16(reg) readw (ioaddr + (reg))
+#define RTL_R32(reg) ((unsigned long) readl (ioaddr + (reg)))
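+/* These accessors expect a local variable named 'ioaddr' (the ioremapped
+ * MMIO base) to be in scope wherever they are used. */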
+
+enum mac_version {
+ RTL_GIGA_MAC_VER_01 = 0x01, // 8169
+ RTL_GIGA_MAC_VER_02 = 0x02, // 8169S
+ RTL_GIGA_MAC_VER_03 = 0x03, // 8110S
+ RTL_GIGA_MAC_VER_04 = 0x04, // 8169SB
+ RTL_GIGA_MAC_VER_05 = 0x05, // 8110SCd
+ RTL_GIGA_MAC_VER_06 = 0x06, // 8110SCe
+ RTL_GIGA_MAC_VER_11 = 0x0b, // 8168Bb
+ RTL_GIGA_MAC_VER_12 = 0x0c, // 8168Be
+ RTL_GIGA_MAC_VER_13 = 0x0d, // 8101Eb
+ RTL_GIGA_MAC_VER_14 = 0x0e, // 8101 ?
+ RTL_GIGA_MAC_VER_15 = 0x0f, // 8101 ?
+ RTL_GIGA_MAC_VER_16 = 0x11, // 8101Ec
+ RTL_GIGA_MAC_VER_17 = 0x10, // 8168Bf
+ RTL_GIGA_MAC_VER_18 = 0x12, // 8168CP
+ RTL_GIGA_MAC_VER_19 = 0x13, // 8168C
+ RTL_GIGA_MAC_VER_20 = 0x14 // 8168C
+};
+
+#define _R(NAME,MAC,MASK) \
+ { .name = NAME, .mac_version = MAC, .RxConfigMask = MASK }
+
+static const struct {
+ const char *name;
+ u8 mac_version;
+ u32 RxConfigMask; /* Clears the bits supported by this chip */
+} rtl_chip_info[] = {
+ _R("RTL8169", RTL_GIGA_MAC_VER_01, 0xff7e1880), // 8169
+ _R("RTL8169s", RTL_GIGA_MAC_VER_02, 0xff7e1880), // 8169S
+ _R("RTL8110s", RTL_GIGA_MAC_VER_03, 0xff7e1880), // 8110S
+ _R("RTL8169sb/8110sb", RTL_GIGA_MAC_VER_04, 0xff7e1880), // 8169SB
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_05, 0xff7e1880), // 8110SCd
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_06, 0xff7e1880), // 8110SCe
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_11, 0xff7e1880), // PCI-E
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_12, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_13, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_14, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_15, 0xff7e1880), // PCI-E 8139
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_17, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_16, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_18, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_19, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_20, 0xff7e1880) // PCI-E
+};
+#undef _R
+
+enum cfg_version {
+ RTL_CFG_0 = 0x00,
+ RTL_CFG_1,
+ RTL_CFG_2
+};
+
+static void rtl_hw_start_8169(struct net_device *);
+static void rtl_hw_start_8168(struct net_device *);
+static void rtl_hw_start_8101(struct net_device *);
+
+static struct pci_device_id rtl8169_pci_tbl[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8129), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8136), 0, 0, RTL_CFG_2 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8167), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8168), 0, 0, RTL_CFG_1 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_AT, 0xc107), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(0x16ec, 0x0116), 0, 0, RTL_CFG_0 },
+ { PCI_VENDOR_ID_LINKSYS, 0x1032,
+ PCI_ANY_ID, 0x0024, 0, 0, RTL_CFG_0 },
+ { 0x0001, 0x8168,
+ PCI_ANY_ID, 0x2410, 0, 0, RTL_CFG_2 },
+ {0,},
+};
+
+/* prevent driver from being loaded automatically */
+//MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
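+/* Without MODULE_DEVICE_TABLE() no PCI module aliases are exported, so
+ * udev/modprobe will not load this module automatically for matching
+ * hardware. It has to be loaded explicitly, leaving the choice between the
+ * standard r8169 driver and this EtherCAT-capable variant to the
+ * administrator. */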
+
+static int rx_copybreak = 200;
+static int use_dac;
+static struct {
+ u32 msg_enable;
+} debug = { -1 };
+
+enum rtl_registers {
+ MAC0 = 0, /* Ethernet hardware address. */
+ MAC4 = 4,
+ MAR0 = 8, /* Multicast filter. */
+ CounterAddrLow = 0x10,
+ CounterAddrHigh = 0x14,
+ TxDescStartAddrLow = 0x20,
+ TxDescStartAddrHigh = 0x24,
+ TxHDescStartAddrLow = 0x28,
+ TxHDescStartAddrHigh = 0x2c,
+ FLASH = 0x30,
+ ERSR = 0x36,
+ ChipCmd = 0x37,
+ TxPoll = 0x38,
+ IntrMask = 0x3c,
+ IntrStatus = 0x3e,
+ TxConfig = 0x40,
+ RxConfig = 0x44,
+ RxMissed = 0x4c,
+ Cfg9346 = 0x50,
+ Config0 = 0x51,
+ Config1 = 0x52,
+ Config2 = 0x53,
+ Config3 = 0x54,
+ Config4 = 0x55,
+ Config5 = 0x56,
+ MultiIntr = 0x5c,
+ PHYAR = 0x60,
+ TBICSR = 0x64,
+ TBI_ANAR = 0x68,
+ TBI_LPAR = 0x6a,
+ PHYstatus = 0x6c,
+ RxMaxSize = 0xda,
+ CPlusCmd = 0xe0,
+ IntrMitigate = 0xe2,
+ RxDescAddrLow = 0xe4,
+ RxDescAddrHigh = 0xe8,
+ EarlyTxThres = 0xec,
+ FuncEvent = 0xf0,
+ FuncEventMask = 0xf4,
+ FuncPresetState = 0xf8,
+ FuncForceEvent = 0xfc,
+};
+
+enum rtl_register_content {
+ /* InterruptStatusBits */
+ SYSErr = 0x8000,
+ PCSTimeout = 0x4000,
+ SWInt = 0x0100,
+ TxDescUnavail = 0x0080,
+ RxFIFOOver = 0x0040,
+ LinkChg = 0x0020,
+ RxOverflow = 0x0010,
+ TxErr = 0x0008,
+ TxOK = 0x0004,
+ RxErr = 0x0002,
+ RxOK = 0x0001,
+
+ /* RxStatusDesc */
+ RxFOVF = (1 << 23),
+ RxRWT = (1 << 22),
+ RxRES = (1 << 21),
+ RxRUNT = (1 << 20),
+ RxCRC = (1 << 19),
+
+ /* ChipCmdBits */
+ CmdReset = 0x10,
+ CmdRxEnb = 0x08,
+ CmdTxEnb = 0x04,
+ RxBufEmpty = 0x01,
+
+ /* TXPoll register p.5 */
+ HPQ = 0x80, /* Poll cmd on the high prio queue */
+ NPQ = 0x40, /* Poll cmd on the low prio queue */
+ FSWInt = 0x01, /* Forced software interrupt */
+
+ /* Cfg9346Bits */
+ Cfg9346_Lock = 0x00,
+ Cfg9346_Unlock = 0xc0,
+
+ /* rx_mode_bits */
+ AcceptErr = 0x20,
+ AcceptRunt = 0x10,
+ AcceptBroadcast = 0x08,
+ AcceptMulticast = 0x04,
+ AcceptMyPhys = 0x02,
+ AcceptAllPhys = 0x01,
+
+ /* RxConfigBits */
+ RxCfgFIFOShift = 13,
+ RxCfgDMAShift = 8,
+
+ /* TxConfigBits */
+ TxInterFrameGapShift = 24,
+ TxDMAShift = 8, /* DMA burst value (0-7) is shift this many bits */
+
+ /* Config1 register p.24 */
+ MSIEnable = (1 << 5), /* Enable Message Signaled Interrupt */
+ PMEnable = (1 << 0), /* Power Management Enable */
+
+ /* Config2 register p. 25 */
+ PCI_Clock_66MHz = 0x01,
+ PCI_Clock_33MHz = 0x00,
+
+ /* Config3 register p.25 */
+ MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */
+ LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */
+
+ /* Config5 register p.27 */
+ BWF = (1 << 6), /* Accept Broadcast wakeup frame */
+ MWF = (1 << 5), /* Accept Multicast wakeup frame */
+ UWF = (1 << 4), /* Accept Unicast wakeup frame */
+ LanWake = (1 << 1), /* LanWake enable/disable */
+ PMEStatus = (1 << 0), /* PME status can be reset by PCI RST# */
+
+ /* TBICSR p.28 */
+ TBIReset = 0x80000000,
+ TBILoopback = 0x40000000,
+ TBINwEnable = 0x20000000,
+ TBINwRestart = 0x10000000,
+ TBILinkOk = 0x02000000,
+ TBINwComplete = 0x01000000,
+
+ /* CPlusCmd p.31 */
+ PktCntrDisable = (1 << 7), // 8168
+ RxVlan = (1 << 6),
+ RxChkSum = (1 << 5),
+ PCIDAC = (1 << 4),
+ PCIMulRW = (1 << 3),
+ INTT_0 = 0x0000, // 8168
+ INTT_1 = 0x0001, // 8168
+ INTT_2 = 0x0002, // 8168
+ INTT_3 = 0x0003, // 8168
+
+ /* rtl8169_PHYstatus */
+ TBI_Enable = 0x80,
+ TxFlowCtrl = 0x40,
+ RxFlowCtrl = 0x20,
+ _1000bpsF = 0x10,
+ _100bps = 0x08,
+ _10bps = 0x04,
+ LinkStatus = 0x02,
+ FullDup = 0x01,
+
+ /* _TBICSRBit */
+ TBILinkOK = 0x02000000,
+
+ /* DumpCounterCommand */
+ CounterDump = 0x8,
+};
+
+enum desc_status_bit {
+ DescOwn = (1 << 31), /* Descriptor is owned by NIC */
+ RingEnd = (1 << 30), /* End of descriptor ring */
+ FirstFrag = (1 << 29), /* First segment of a packet */
+ LastFrag = (1 << 28), /* Final segment of a packet */
+
+ /* Tx private */
+ LargeSend = (1 << 27), /* TCP Large Send Offload (TSO) */
+ MSSShift = 16, /* MSS value position */
+ MSSMask = 0xfff, /* MSS value + LargeSend bit: 12 bits */
+ IPCS = (1 << 18), /* Calculate IP checksum */
+ UDPCS = (1 << 17), /* Calculate UDP/IP checksum */
+ TCPCS = (1 << 16), /* Calculate TCP/IP checksum */
+ TxVlanTag = (1 << 17), /* Add VLAN tag */
+
+ /* Rx private */
+ PID1 = (1 << 18), /* Protocol ID bit 1/2 */
+ PID0 = (1 << 17), /* Protocol ID bit 2/2 */
+
+#define RxProtoUDP (PID1)
+#define RxProtoTCP (PID0)
+#define RxProtoIP (PID1 | PID0)
+#define RxProtoMask RxProtoIP
+
+ IPFail = (1 << 16), /* IP checksum failed */
+ UDPFail = (1 << 15), /* UDP/IP checksum failed */
+ TCPFail = (1 << 14), /* TCP/IP checksum failed */
+ RxVlanTag = (1 << 16), /* VLAN tag available */
+};
+
+#define RsvdMask 0x3fffc000
+
+struct TxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct RxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct ring_info {
+ struct sk_buff *skb;
+ u32 len;
+ u8 __pad[sizeof(void *) - sizeof(u32)];
+};
+
+enum features {
+ RTL_FEATURE_WOL = (1 << 0),
+ RTL_FEATURE_MSI = (1 << 1),
+};
+
+struct rtl8169_private {
+ void __iomem *mmio_addr; /* memory map physical address */
+ struct pci_dev *pci_dev; /* Index of PCI device */
+ struct net_device *dev;
+#ifdef CONFIG_R8169_NAPI
+ struct napi_struct napi;
+#endif
+ spinlock_t lock; /* spin lock flag */
+ u32 msg_enable;
+ int chipset;
+ int mac_version;
+ u32 cur_rx; /* Index into the Rx descriptor buffer of next Rx pkt. */
+	u32 cur_tx; /* Index into the Tx descriptor buffer of next Tx pkt. */
+ u32 dirty_rx;
+ u32 dirty_tx;
+ struct TxDesc *TxDescArray; /* 256-aligned Tx descriptor ring */
+ struct RxDesc *RxDescArray; /* 256-aligned Rx descriptor ring */
+ dma_addr_t TxPhyAddr;
+ dma_addr_t RxPhyAddr;
+ struct sk_buff *Rx_skbuff[NUM_RX_DESC]; /* Rx data buffers */
+ struct ring_info tx_skb[NUM_TX_DESC]; /* Tx data buffers */
+ unsigned align;
+ unsigned rx_buf_sz;
+ struct timer_list timer;
+ u16 cp_cmd;
+ u16 intr_event;
+ u16 napi_event;
+ u16 intr_mask;
+ int phy_auto_nego_reg;
+ int phy_1000_ctrl_reg;
+#ifdef CONFIG_R8169_VLAN
+ struct vlan_group *vlgrp;
+#endif
+ int (*set_speed)(struct net_device *, u8 autoneg, u16 speed, u8 duplex);
+ void (*get_settings)(struct net_device *, struct ethtool_cmd *);
+ void (*phy_reset_enable)(void __iomem *);
+ void (*hw_start)(struct net_device *);
+ unsigned int (*phy_reset_pending)(void __iomem *);
+ unsigned int (*link_ok)(void __iomem *);
+ struct delayed_work task;
+ unsigned features;
+
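+	/* EtherCAT: ecdev is non-NULL while this NIC is claimed by an
+	 * EtherCAT master; most code paths below test it to bypass the
+	 * normal netdev/IRQ handling. ec_watchdog_jiffies presumably
+	 * timestamps the last periodic link check done from the master's
+	 * poll callback. */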
+ ec_device_t *ecdev;
+ unsigned long ec_watchdog_jiffies;
+};
+
+MODULE_AUTHOR("Florian Pose <fp@igh-essen.com>");
+MODULE_DESCRIPTION("EtherCAT-capable RealTek RTL-8169 Gigabit Ethernet driver");
+module_param(rx_copybreak, int, 0);
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
+module_param(use_dac, int, 0);
+MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
+module_param_named(debug, debug.msg_enable, int, 0);
+MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(EC_MASTER_VERSION);
+
+static int rtl8169_open(struct net_device *dev);
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev);
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance);
+static int rtl8169_init_ring(struct net_device *dev);
+static void rtl_hw_start(struct net_device *dev);
+static int rtl8169_close(struct net_device *dev);
+static void rtl_set_rx_mode(struct net_device *dev);
+static void rtl8169_tx_timeout(struct net_device *dev);
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev);
+static int rtl8169_rx_interrupt(struct net_device *, struct rtl8169_private *,
+ void __iomem *, u32 budget);
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu);
+static void rtl8169_down(struct net_device *dev);
+static void rtl8169_rx_clear(struct rtl8169_private *tp);
+
+#ifdef CONFIG_R8169_NAPI
+static int rtl8169_poll(struct napi_struct *napi, int budget);
+#endif
+
+static const unsigned int rtl8169_rx_config =
+ (RX_FIFO_THRESH << RxCfgFIFOShift) | (RX_DMA_BURST << RxCfgDMAShift);
+
+static void mdio_write(void __iomem *ioaddr, int reg_addr, int value)
+{
+ int i;
+
+ RTL_W32(PHYAR, 0x80000000 | (reg_addr & 0x1f) << 16 | (value & 0xffff));
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed writing to the specified
+ * MII register.
+ */
+ if (!(RTL_R32(PHYAR) & 0x80000000))
+ break;
+ udelay(25);
+ }
+}
+
+static int mdio_read(void __iomem *ioaddr, int reg_addr)
+{
+ int i, value = -1;
+
+ RTL_W32(PHYAR, 0x0 | (reg_addr & 0x1f) << 16);
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed retrieving data from
+ * the specified MII register.
+ */
+ if (RTL_R32(PHYAR) & 0x80000000) {
+ value = RTL_R32(PHYAR) & 0xffff;
+ break;
+ }
+ udelay(25);
+ }
+ return value;
+}
+
+static void rtl8169_irq_mask_and_ack(void __iomem *ioaddr)
+{
+ RTL_W16(IntrMask, 0x0000);
+
+ RTL_W16(IntrStatus, 0xffff);
+}
+
+static void rtl8169_asic_down(void __iomem *ioaddr)
+{
+ RTL_W8(ChipCmd, 0x00);
+ rtl8169_irq_mask_and_ack(ioaddr);
+ RTL_R16(CPlusCmd);
+}
+
+static unsigned int rtl8169_tbi_reset_pending(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBIReset;
+}
+
+static unsigned int rtl8169_xmii_reset_pending(void __iomem *ioaddr)
+{
+ return mdio_read(ioaddr, MII_BMCR) & BMCR_RESET;
+}
+
+static unsigned int rtl8169_tbi_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBILinkOk;
+}
+
+static unsigned int rtl8169_xmii_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R8(PHYstatus) & LinkStatus;
+}
+
+static void rtl8169_tbi_reset_enable(void __iomem *ioaddr)
+{
+ RTL_W32(TBICSR, RTL_R32(TBICSR) | TBIReset);
+}
+
+static void rtl8169_xmii_reset_enable(void __iomem *ioaddr)
+{
+ unsigned int val;
+
+ val = mdio_read(ioaddr, MII_BMCR) | BMCR_RESET;
+ mdio_write(ioaddr, MII_BMCR, val & 0xffff);
+}
+
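+/* Link state handling: when an EtherCAT master has claimed the device
+ * (tp->ecdev set), carrier changes are reported to the master via
+ * ecdev_set_link() rather than netif_carrier_on()/netif_carrier_off(),
+ * since the interface is not used by the network stack in that mode. */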
+static void rtl8169_check_link_status(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned long flags;
+
+ if (tp->ecdev) {
+ ecdev_set_link(tp->ecdev, tp->link_ok(ioaddr) ? 1 : 0);
+ } else {
+ spin_lock_irqsave(&tp->lock, flags);
+ if (tp->link_ok(ioaddr)) {
+ netif_carrier_on(dev);
+ if (netif_msg_ifup(tp))
+ printk(KERN_INFO PFX "%s: link up\n", dev->name);
+ } else {
+ if (netif_msg_ifdown(tp))
+ printk(KERN_INFO PFX "%s: link down\n", dev->name);
+ netif_carrier_off(dev);
+ }
+ spin_unlock_irqrestore(&tp->lock, flags);
+ }
+}
+
+static void rtl8169_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u8 options;
+
+ wol->wolopts = 0;
+
+#define WAKE_ANY (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_BCAST | WAKE_MCAST)
+ wol->supported = WAKE_ANY;
+
+ spin_lock_irq(&tp->lock);
+
+ options = RTL_R8(Config1);
+ if (!(options & PMEnable))
+ goto out_unlock;
+
+ options = RTL_R8(Config3);
+ if (options & LinkUp)
+ wol->wolopts |= WAKE_PHY;
+ if (options & MagicPacket)
+ wol->wolopts |= WAKE_MAGIC;
+
+ options = RTL_R8(Config5);
+ if (options & UWF)
+ wol->wolopts |= WAKE_UCAST;
+ if (options & BWF)
+ wol->wolopts |= WAKE_BCAST;
+ if (options & MWF)
+ wol->wolopts |= WAKE_MCAST;
+
+out_unlock:
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl8169_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+ static struct {
+ u32 opt;
+ u16 reg;
+ u8 mask;
+ } cfg[] = {
+ { WAKE_ANY, Config1, PMEnable },
+ { WAKE_PHY, Config3, LinkUp },
+ { WAKE_MAGIC, Config3, MagicPacket },
+ { WAKE_UCAST, Config5, UWF },
+ { WAKE_BCAST, Config5, BWF },
+ { WAKE_MCAST, Config5, MWF },
+ { WAKE_ANY, Config5, LanWake }
+ };
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ for (i = 0; i < ARRAY_SIZE(cfg); i++) {
+ u8 options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
+ if (wol->wolopts & cfg[i].opt)
+ options |= cfg[i].mask;
+ RTL_W8(cfg[i].reg, options);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if (wol->wolopts)
+ tp->features |= RTL_FEATURE_WOL;
+ else
+ tp->features &= ~RTL_FEATURE_WOL;
+
+ spin_unlock_irq(&tp->lock);
+
+ return 0;
+}
+
+static void rtl8169_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ strcpy(info->driver, MODULENAME);
+ strcpy(info->version, RTL8169_VERSION);
+ strcpy(info->bus_info, pci_name(tp->pci_dev));
+}
+
+static int rtl8169_get_regs_len(struct net_device *dev)
+{
+ return R8169_REGS_SIZE;
+}
+
+static int rtl8169_set_speed_tbi(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int ret = 0;
+ u32 reg;
+
+ reg = RTL_R32(TBICSR);
+ if ((autoneg == AUTONEG_DISABLE) && (speed == SPEED_1000) &&
+ (duplex == DUPLEX_FULL)) {
+ RTL_W32(TBICSR, reg & ~(TBINwEnable | TBINwRestart));
+ } else if (autoneg == AUTONEG_ENABLE)
+ RTL_W32(TBICSR, reg | TBINwEnable | TBINwRestart);
+ else {
+ if (netif_msg_link(tp)) {
+ printk(KERN_WARNING "%s: "
+ "incorrect speed setting refused in TBI mode\n",
+ dev->name);
+ }
+ ret = -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+static int rtl8169_set_speed_xmii(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int auto_nego, giga_ctrl;
+
+ auto_nego = mdio_read(ioaddr, MII_ADVERTISE);
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl = mdio_read(ioaddr, MII_CTRL1000);
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+
+ if (autoneg == AUTONEG_ENABLE) {
+ auto_nego |= (ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+ } else {
+ if (speed == SPEED_10)
+ auto_nego |= ADVERTISE_10HALF | ADVERTISE_10FULL;
+ else if (speed == SPEED_100)
+ auto_nego |= ADVERTISE_100HALF | ADVERTISE_100FULL;
+ else if (speed == SPEED_1000)
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+
+ if (duplex == DUPLEX_HALF)
+ auto_nego &= ~(ADVERTISE_10FULL | ADVERTISE_100FULL);
+
+ if (duplex == DUPLEX_FULL)
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_100HALF);
+
+ /* This tweak comes straight from Realtek's driver. */
+ if ((speed == SPEED_100) && (duplex == DUPLEX_HALF) &&
+ ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16))) {
+ auto_nego = ADVERTISE_100HALF | ADVERTISE_CSMA;
+ }
+ }
+
+ /* The 8100e/8101e do Fast Ethernet only. */
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_15) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ if ((giga_ctrl & (ADVERTISE_1000FULL | ADVERTISE_1000HALF)) &&
+ netif_msg_link(tp)) {
+ printk(KERN_INFO "%s: PHY does not support 1000Mbps.\n",
+ dev->name);
+ }
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+ }
+
+ auto_nego |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_12) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_17)) {
+ /* Vendor specific (0x1f) and reserved (0x0e) MII registers. */
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_write(ioaddr, 0x0e, 0x0000);
+ }
+
+ tp->phy_auto_nego_reg = auto_nego;
+ tp->phy_1000_ctrl_reg = giga_ctrl;
+
+ mdio_write(ioaddr, MII_ADVERTISE, auto_nego);
+ mdio_write(ioaddr, MII_CTRL1000, giga_ctrl);
+ mdio_write(ioaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
+ return 0;
+}
+
+static int rtl8169_set_speed(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret;
+
+ ret = tp->set_speed(dev, autoneg, speed, duplex);
+
+ if (netif_running(dev) && (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ mod_timer(&tp->timer, jiffies + RTL8169_PHY_TIMEOUT);
+
+ return ret;
+}
+
+static int rtl8169_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ ret = rtl8169_set_speed(dev, cmd->autoneg, cmd->speed, cmd->duplex);
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return ret;
+}
+
+static u32 rtl8169_get_rx_csum(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->cp_cmd & RxChkSum;
+}
+
+static int rtl8169_set_rx_csum(struct net_device *dev, u32 data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ if (data)
+ tp->cp_cmd |= RxChkSum;
+ else
+ tp->cp_cmd &= ~RxChkSum;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return 0;
+}
+
+#ifdef CONFIG_R8169_VLAN
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return (tp->vlgrp && vlan_tx_tag_present(skb)) ?
+ TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
+}
+
+static void rtl8169_vlan_rx_register(struct net_device *dev,
+ struct vlan_group *grp)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ tp->vlgrp = grp;
+ if (tp->vlgrp)
+ tp->cp_cmd |= RxVlan;
+ else
+ tp->cp_cmd &= ~RxVlan;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ u32 opts2 = le32_to_cpu(desc->opts2);
+ int ret;
+
+ if (tp->vlgrp && (opts2 & RxVlanTag)) {
+ rtl8169_rx_hwaccel_skb(skb, tp->vlgrp, swab16(opts2 & 0xffff));
+ ret = 0;
+ } else
+ ret = -1;
+ desc->opts2 = 0;
+ return ret;
+}
+
+#else /* !CONFIG_R8169_VLAN */
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return 0;
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ return -1;
+}
+
+#endif
+
+static void rtl8169_gset_tbi(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 status;
+
+ cmd->supported =
+ SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_FIBRE;
+ cmd->port = PORT_FIBRE;
+ cmd->transceiver = XCVR_INTERNAL;
+
+ status = RTL_R32(TBICSR);
+ cmd->advertising = (status & TBINwEnable) ? ADVERTISED_Autoneg : 0;
+ cmd->autoneg = !!(status & TBINwEnable);
+
+ cmd->speed = SPEED_1000;
+ cmd->duplex = DUPLEX_FULL; /* Always set */
+}
+
+static void rtl8169_gset_xmii(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u8 status;
+
+ cmd->supported = SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_100baseT_Half |
+ SUPPORTED_100baseT_Full |
+ SUPPORTED_1000baseT_Full |
+ SUPPORTED_Autoneg |
+ SUPPORTED_TP;
+
+ cmd->autoneg = 1;
+ cmd->advertising = ADVERTISED_TP | ADVERTISED_Autoneg;
+
+ if (tp->phy_auto_nego_reg & ADVERTISE_10HALF)
+ cmd->advertising |= ADVERTISED_10baseT_Half;
+ if (tp->phy_auto_nego_reg & ADVERTISE_10FULL)
+ cmd->advertising |= ADVERTISED_10baseT_Full;
+ if (tp->phy_auto_nego_reg & ADVERTISE_100HALF)
+ cmd->advertising |= ADVERTISED_100baseT_Half;
+ if (tp->phy_auto_nego_reg & ADVERTISE_100FULL)
+ cmd->advertising |= ADVERTISED_100baseT_Full;
+ if (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL)
+ cmd->advertising |= ADVERTISED_1000baseT_Full;
+
+ status = RTL_R8(PHYstatus);
+
+ if (status & _1000bpsF)
+ cmd->speed = SPEED_1000;
+ else if (status & _100bps)
+ cmd->speed = SPEED_100;
+ else if (status & _10bps)
+ cmd->speed = SPEED_10;
+
+ if (status & TxFlowCtrl)
+ cmd->advertising |= ADVERTISED_Asym_Pause;
+ if (status & RxFlowCtrl)
+ cmd->advertising |= ADVERTISED_Pause;
+
+ cmd->duplex = ((status & _1000bpsF) || (status & FullDup)) ?
+ DUPLEX_FULL : DUPLEX_HALF;
+}
+
+static int rtl8169_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ tp->get_settings(dev, cmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+ return 0;
+}
+
+static void rtl8169_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+
+ if (regs->len > R8169_REGS_SIZE)
+ regs->len = R8169_REGS_SIZE;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ memcpy_fromio(p, tp->mmio_addr, regs->len);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static u32 rtl8169_get_msglevel(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->msg_enable;
+}
+
+static void rtl8169_set_msglevel(struct net_device *dev, u32 value)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ tp->msg_enable = value;
+}
+
+static const char rtl8169_gstrings[][ETH_GSTRING_LEN] = {
+ "tx_packets",
+ "rx_packets",
+ "tx_errors",
+ "rx_errors",
+ "rx_missed",
+ "align_errors",
+ "tx_single_collisions",
+ "tx_multi_collisions",
+ "unicast",
+ "broadcast",
+ "multicast",
+ "tx_aborted",
+ "tx_underrun",
+};
+
+struct rtl8169_counters {
+ __le64 tx_packets;
+ __le64 rx_packets;
+ __le64 tx_errors;
+ __le32 rx_errors;
+ __le16 rx_missed;
+ __le16 align_errors;
+ __le32 tx_one_collision;
+ __le32 tx_multi_collision;
+ __le64 rx_unicast;
+ __le64 rx_broadcast;
+ __le32 rx_multicast;
+ __le16 tx_aborted;
+ __le16 tx_underun;
+};
+
+static int rtl8169_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(rtl8169_gstrings);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void rtl8169_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct rtl8169_counters *counters;
+ dma_addr_t paddr;
+ u32 cmd;
+
+ ASSERT_RTNL();
+
+ counters = pci_alloc_consistent(tp->pci_dev, sizeof(*counters), &paddr);
+ if (!counters)
+ return;
+
+ RTL_W32(CounterAddrHigh, (u64)paddr >> 32);
+ cmd = (u64)paddr & DMA_32BIT_MASK;
+ RTL_W32(CounterAddrLow, cmd);
+ RTL_W32(CounterAddrLow, cmd | CounterDump);
+
+ while (RTL_R32(CounterAddrLow) & CounterDump) {
+ if (msleep_interruptible(1))
+ break;
+ }
+
+ RTL_W32(CounterAddrLow, 0);
+ RTL_W32(CounterAddrHigh, 0);
+
+ data[0] = le64_to_cpu(counters->tx_packets);
+ data[1] = le64_to_cpu(counters->rx_packets);
+ data[2] = le64_to_cpu(counters->tx_errors);
+ data[3] = le32_to_cpu(counters->rx_errors);
+ data[4] = le16_to_cpu(counters->rx_missed);
+ data[5] = le16_to_cpu(counters->align_errors);
+ data[6] = le32_to_cpu(counters->tx_one_collision);
+ data[7] = le32_to_cpu(counters->tx_multi_collision);
+ data[8] = le64_to_cpu(counters->rx_unicast);
+ data[9] = le64_to_cpu(counters->rx_broadcast);
+ data[10] = le32_to_cpu(counters->rx_multicast);
+ data[11] = le16_to_cpu(counters->tx_aborted);
+ data[12] = le16_to_cpu(counters->tx_underun);
+
+ pci_free_consistent(tp->pci_dev, sizeof(*counters), counters, paddr);
+}
+
+static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+ switch(stringset) {
+ case ETH_SS_STATS:
+ memcpy(data, *rtl8169_gstrings, sizeof(rtl8169_gstrings));
+ break;
+ }
+}
+
+static const struct ethtool_ops rtl8169_ethtool_ops = {
+ .get_drvinfo = rtl8169_get_drvinfo,
+ .get_regs_len = rtl8169_get_regs_len,
+ .get_link = ethtool_op_get_link,
+ .get_settings = rtl8169_get_settings,
+ .set_settings = rtl8169_set_settings,
+ .get_msglevel = rtl8169_get_msglevel,
+ .set_msglevel = rtl8169_set_msglevel,
+ .get_rx_csum = rtl8169_get_rx_csum,
+ .set_rx_csum = rtl8169_set_rx_csum,
+ .set_tx_csum = ethtool_op_set_tx_csum,
+ .set_sg = ethtool_op_set_sg,
+ .set_tso = ethtool_op_set_tso,
+ .get_regs = rtl8169_get_regs,
+ .get_wol = rtl8169_get_wol,
+ .set_wol = rtl8169_set_wol,
+ .get_strings = rtl8169_get_strings,
+ .get_sset_count = rtl8169_get_sset_count,
+ .get_ethtool_stats = rtl8169_get_ethtool_stats,
+};
+
+static void rtl8169_write_gmii_reg_bit(void __iomem *ioaddr, int reg,
+ int bitnum, int bitval)
+{
+ int val;
+
+ val = mdio_read(ioaddr, reg);
+ val = (bitval == 1) ?
+ val | (bitval << bitnum) : val & ~(0x0001 << bitnum);
+ mdio_write(ioaddr, reg, val & 0xffff);
+}
+
+static void rtl8169_get_mac_version(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * The driver currently handles the 8168Bf and the 8168Be identically
+ * but they can be identified more specifically through the test below
+ * if needed:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x500000 ? 8168Bf : 8168Be
+ *
+ * Same thing for the 8101Eb and the 8101Ec:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x200000 ? 8101Eb : 8101Ec
+ */
+ const struct {
+ u32 mask;
+ u32 val;
+ int mac_version;
+ } mac_info[] = {
+		/* 8168C family. */
+ { 0x7c800000, 0x3c800000, RTL_GIGA_MAC_VER_18 },
+ { 0x7cf00000, 0x3c000000, RTL_GIGA_MAC_VER_19 },
+ { 0x7cf00000, 0x3c200000, RTL_GIGA_MAC_VER_20 },
+ { 0x7c800000, 0x3c000000, RTL_GIGA_MAC_VER_20 },
+
+ /* 8168B family. */
+ { 0x7cf00000, 0x38000000, RTL_GIGA_MAC_VER_12 },
+ { 0x7cf00000, 0x38500000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x38000000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x30000000, RTL_GIGA_MAC_VER_11 },
+
+ /* 8101 family. */
+ { 0x7cf00000, 0x34000000, RTL_GIGA_MAC_VER_13 },
+ { 0x7cf00000, 0x34200000, RTL_GIGA_MAC_VER_16 },
+ { 0x7c800000, 0x34000000, RTL_GIGA_MAC_VER_16 },
+ /* FIXME: where did these entries come from ? -- FR */
+ { 0xfc800000, 0x38800000, RTL_GIGA_MAC_VER_15 },
+ { 0xfc800000, 0x30800000, RTL_GIGA_MAC_VER_14 },
+
+ /* 8110 family. */
+ { 0xfc800000, 0x98000000, RTL_GIGA_MAC_VER_06 },
+ { 0xfc800000, 0x18000000, RTL_GIGA_MAC_VER_05 },
+ { 0xfc800000, 0x10000000, RTL_GIGA_MAC_VER_04 },
+ { 0xfc800000, 0x04000000, RTL_GIGA_MAC_VER_03 },
+ { 0xfc800000, 0x00800000, RTL_GIGA_MAC_VER_02 },
+ { 0xfc800000, 0x00000000, RTL_GIGA_MAC_VER_01 },
+
+ { 0x00000000, 0x00000000, RTL_GIGA_MAC_VER_01 } /* Catch-all */
+ }, *p = mac_info;
+ u32 reg;
+
+ reg = RTL_R32(TxConfig);
+ while ((reg & p->mask) != p->val)
+ p++;
+ tp->mac_version = p->mac_version;
+
+ if (p->mask == 0x00000000) {
+ struct pci_dev *pdev = tp->pci_dev;
+
+ dev_info(&pdev->dev, "unknown MAC (%08x)\n", reg);
+ }
+}
+
+static void rtl8169_print_mac_version(struct rtl8169_private *tp)
+{
+ dprintk("mac_version = 0x%02x\n", tp->mac_version);
+}
+
+struct phy_reg {
+ u16 reg;
+ u16 val;
+};
+
+static void rtl_phy_write(void __iomem *ioaddr, struct phy_reg *regs, int len)
+{
+ while (len-- > 0) {
+ mdio_write(ioaddr, regs->reg, regs->val);
+ regs++;
+ }
+}
+
+static void rtl8169s_hw_phy_config(void __iomem *ioaddr)
+{
+ struct {
+ u16 regs[5]; /* Beware of bit-sign propagation */
+ } phy_magic[5] = { {
+ { 0x0000, //w 4 15 12 0
+ 0x00a1, //w 3 15 0 00a1
+ 0x0008, //w 2 15 0 0008
+ 0x1020, //w 1 15 0 1020
+ 0x1000 } },{ //w 0 15 0 1000
+ { 0x7000, //w 4 15 12 7
+ 0xff41, //w 3 15 0 ff41
+ 0xde60, //w 2 15 0 de60
+ 0x0140, //w 1 15 0 0140
+ 0x0077 } },{ //w 0 15 0 0077
+ { 0xa000, //w 4 15 12 a
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xfa00 } },{ //w 0 15 0 fa00
+ { 0xb000, //w 4 15 12 b
+ 0xff41, //w 3 15 0 ff41
+ 0xde20, //w 2 15 0 de20
+ 0x0140, //w 1 15 0 0140
+ 0x00bb } },{ //w 0 15 0 00bb
+ { 0xf000, //w 4 15 12 f
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xbf00 } //w 0 15 0 bf00
+ }
+ }, *p = phy_magic;
+ unsigned int i;
+
+ mdio_write(ioaddr, 0x1f, 0x0001); //w 31 2 0 1
+ mdio_write(ioaddr, 0x15, 0x1000); //w 21 15 0 1000
+ mdio_write(ioaddr, 0x18, 0x65c7); //w 24 15 0 65c7
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+
+ for (i = 0; i < ARRAY_SIZE(phy_magic); i++, p++) {
+ int val, pos = 4;
+
+ val = (mdio_read(ioaddr, pos) & 0x0fff) | (p->regs[0] & 0xffff);
+ mdio_write(ioaddr, pos, val);
+ while (--pos >= 0)
+ mdio_write(ioaddr, pos, p->regs[4 - pos] & 0xffff);
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 1); //w 4 11 11 1
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+ }
+ mdio_write(ioaddr, 0x1f, 0x0000); //w 31 2 0 0
+}
+
+static void rtl8169sb_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0002 },
+ { 0x01, 0x90d0 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cp_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0000 },
+ { 0x1d, 0x0f00 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x1ec8 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168c_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x1f, 0x0002 },
+ { 0x00, 0x88d4 },
+ { 0x01, 0x82b1 },
+ { 0x03, 0x7002 },
+ { 0x08, 0x9e30 },
+ { 0x09, 0x01f0 },
+ { 0x0a, 0x5500 },
+ { 0x0c, 0x00c8 },
+ { 0x1f, 0x0003 },
+ { 0x12, 0xc096 },
+ { 0x16, 0x000a },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cx_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0000 },
+ { 0x12, 0x2300 },
+ { 0x1f, 0x0003 },
+ { 0x16, 0x0f0a },
+ { 0x1f, 0x0000 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x7eb8 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl_hw_phy_config(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl8169_print_mac_version(tp);
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_01:
+ break;
+ case RTL_GIGA_MAC_VER_02:
+ case RTL_GIGA_MAC_VER_03:
+ rtl8169s_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_04:
+ rtl8169sb_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_18:
+ rtl8168cp_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_19:
+ rtl8168c_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_20:
+ rtl8168cx_hw_phy_config(ioaddr);
+ break;
+ default:
+ break;
+ }
+}
+
+static void rtl8169_phy_timer(unsigned long __opaque)
+{
+ struct net_device *dev = (struct net_device *)__opaque;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long timeout = RTL8169_PHY_TIMEOUT;
+
+ assert(tp->mac_version > RTL_GIGA_MAC_VER_01);
+
+ if (!(tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ return;
+
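+	/* In EtherCAT mode this function is invoked synchronously from
+	 * ec_poll(), so the lock is not taken and the timer is not re-armed. */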
+ if (!tp->ecdev)
+ spin_lock_irq(&tp->lock);
+
+ if (tp->phy_reset_pending(ioaddr)) {
+ /*
+		 * A busy loop could burn quite a few cycles on today's CPUs.
+ * Let's delay the execution of the timer for a few ticks.
+ */
+ timeout = HZ/10;
+ goto out_mod_timer;
+ }
+
+ if (tp->link_ok(ioaddr))
+ goto out_unlock;
+
+ if (netif_msg_link(tp))
+ printk(KERN_WARNING "%s: PHY reset until link up\n", dev->name);
+
+ tp->phy_reset_enable(ioaddr);
+
+out_mod_timer:
+ if (!tp->ecdev)
+ mod_timer(timer, jiffies + timeout);
+out_unlock:
+ if (!tp->ecdev)
+ spin_unlock_irq(&tp->lock);
+}
+
+static inline void rtl8169_delete_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
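+	/* The PHY timer is not used for EtherCAT devices; ec_poll() triggers
+	 * the PHY check periodically instead. */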
+ if (tp->ecdev || tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ del_timer_sync(timer);
+}
+
+static inline void rtl8169_request_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
+ if (tp->ecdev || tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ mod_timer(timer, jiffies + RTL8169_PHY_TIMEOUT);
+}
+
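+/* Poll callback for the EtherCAT master. It replaces the hardware interrupt:
+ * the master calls it cyclically, the interrupt handler is executed directly,
+ * and roughly every two seconds the PHY watchdog (rtl8169_phy_timer) is run. */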
+static void ec_poll(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ rtl8169_interrupt(pdev->irq, dev);
+
+ if (jiffies - tp->ec_watchdog_jiffies >= 2 * HZ) {
+ rtl8169_phy_timer((unsigned long) dev);
+ tp->ec_watchdog_jiffies = jiffies;
+ }
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/*
+ * Polling 'interrupt' - used by things like netconsole to send skbs
+ * without having to re-enable interrupts. It's not called while
+ * the interrupt routine is executing.
+ */
+static void rtl8169_netpoll(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ disable_irq(pdev->irq);
+ rtl8169_interrupt(pdev->irq, dev);
+ enable_irq(pdev->irq);
+}
+#endif
+
+static void rtl8169_release_board(struct pci_dev *pdev, struct net_device *dev,
+ void __iomem *ioaddr)
+{
+ iounmap(ioaddr);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ free_netdev(dev);
+}
+
+static void rtl8169_phy_reset(struct net_device *dev,
+ struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ tp->phy_reset_enable(ioaddr);
+ for (i = 0; i < 100; i++) {
+ if (!tp->phy_reset_pending(ioaddr))
+ return;
+ msleep(1);
+ }
+ if (netif_msg_link(tp))
+ printk(KERN_ERR "%s: PHY reset failed.\n", dev->name);
+}
+
+static void rtl8169_init_phy(struct net_device *dev, struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl_hw_phy_config(dev);
+
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+
+ pci_write_config_byte(tp->pci_dev, PCI_LATENCY_TIMER, 0x40);
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
+ pci_write_config_byte(tp->pci_dev, PCI_CACHE_LINE_SIZE, 0x08);
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_02) {
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+ dprintk("Set PHY Reg 0x0bh = 0x00h\n");
+ mdio_write(ioaddr, 0x0b, 0x0000); //w 0x0b 15 0 0
+ }
+
+ rtl8169_phy_reset(dev, tp);
+
+ /*
+	 * rtl8169_set_speed_xmii takes good care of the Fast Ethernet-only
+	 * 8101. Don't panic.
+ */
+ rtl8169_set_speed(dev, AUTONEG_ENABLE, SPEED_1000, DUPLEX_FULL);
+
+ if ((RTL_R8(PHYstatus) & TBI_Enable) && netif_msg_link(tp))
+ printk(KERN_INFO PFX "%s: TBI auto-negotiating\n", dev->name);
+}
+
+static void rtl_rar_set(struct rtl8169_private *tp, u8 *addr)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 high;
+ u32 low;
+
+ low = addr[0] | (addr[1] << 8) | (addr[2] << 16) | (addr[3] << 24);
+ high = addr[4] | (addr[5] << 8);
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W32(MAC0, low);
+ RTL_W32(MAC4, high);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl_set_mac_address(struct net_device *dev, void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+
+ rtl_rar_set(tp, dev->dev_addr);
+
+ return 0;
+}
+
+static int rtl8169_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct mii_ioctl_data *data = if_mii(ifr);
+
+ if (!netif_running(dev))
+ return -ENODEV;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ data->phy_id = 32; /* Internal PHY */
+ return 0;
+
+ case SIOCGMIIREG:
+ data->val_out = mdio_read(tp->mmio_addr, data->reg_num & 0x1f);
+ return 0;
+
+ case SIOCSMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ mdio_write(tp->mmio_addr, data->reg_num & 0x1f, data->val_in);
+ return 0;
+ }
+ return -EOPNOTSUPP;
+}
+
+static const struct rtl_cfg_info {
+ void (*hw_start)(struct net_device *);
+ unsigned int region;
+ unsigned int align;
+ u16 intr_event;
+ u16 napi_event;
+ unsigned msi;
+} rtl_cfg_infos [] = {
+ [RTL_CFG_0] = {
+ .hw_start = rtl_hw_start_8169,
+ .region = 1,
+ .align = 0,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .msi = 0
+ },
+ [RTL_CFG_1] = {
+ .hw_start = rtl_hw_start_8168,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ TxErr | TxOK | RxOK | RxErr,
+ .napi_event = TxErr | TxOK | RxOK | RxOverflow,
+ .msi = RTL_FEATURE_MSI
+ },
+ [RTL_CFG_2] = {
+ .hw_start = rtl_hw_start_8101,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow | PCSTimeout |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .msi = RTL_FEATURE_MSI
+ }
+};
+
+/* Cfg9346_Unlock assumed. */
+static unsigned rtl_try_msi(struct pci_dev *pdev, void __iomem *ioaddr,
+ const struct rtl_cfg_info *cfg)
+{
+ unsigned msi = 0;
+ u8 cfg2;
+
+ cfg2 = RTL_R8(Config2) & ~MSIEnable;
+ if (cfg->msi) {
+ if (pci_enable_msi(pdev)) {
+ dev_info(&pdev->dev, "no MSI. Back to INTx.\n");
+ } else {
+ cfg2 |= MSIEnable;
+ msi = RTL_FEATURE_MSI;
+ }
+ }
+ RTL_W8(Config2, cfg2);
+ return msi;
+}
+
+static void rtl_disable_msi(struct pci_dev *pdev, struct rtl8169_private *tp)
+{
+ if (tp->features & RTL_FEATURE_MSI) {
+ pci_disable_msi(pdev);
+ tp->features &= ~RTL_FEATURE_MSI;
+ }
+}
+
+static int __devinit
+rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
+ const unsigned int region = cfg->region;
+ struct rtl8169_private *tp;
+ struct net_device *dev;
+ void __iomem *ioaddr;
+ unsigned int i;
+ int rc;
+
+ if (netif_msg_drv(&debug)) {
+ printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n",
+ MODULENAME, RTL8169_VERSION);
+ }
+
+ dev = alloc_etherdev(sizeof (*tp));
+ if (!dev) {
+ if (netif_msg_drv(&debug))
+ dev_err(&pdev->dev, "unable to alloc new ethernet\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ tp = netdev_priv(dev);
+ tp->dev = dev;
+ tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT);
+
+ /* enable device (incl. PCI PM wakeup and hotplug setup) */
+ rc = pci_enable_device(pdev);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "enable failure\n");
+ goto err_out_free_dev_1;
+ }
+
+ rc = pci_set_mwi(pdev);
+ if (rc < 0)
+ goto err_out_disable_2;
+
+ /* make sure PCI base addr 1 is MMIO */
+ if (!(pci_resource_flags(pdev, region) & IORESOURCE_MEM)) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "region #%d not an MMIO resource, aborting\n",
+ region);
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ /* check for weird/broken PCI region reporting */
+ if (pci_resource_len(pdev, region) < R8169_REGS_SIZE) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "Invalid PCI region size(s), aborting\n");
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ rc = pci_request_regions(pdev, MODULENAME);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "could not request regions.\n");
+ goto err_out_mwi_3;
+ }
+
+ tp->cp_cmd = PCIMulRW | RxChkSum;
+
+ if ((sizeof(dma_addr_t) > 4) &&
+ !pci_set_dma_mask(pdev, DMA_64BIT_MASK) && use_dac) {
+ tp->cp_cmd |= PCIDAC;
+ dev->features |= NETIF_F_HIGHDMA;
+ } else {
+ rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+ if (rc < 0) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "DMA configuration failed.\n");
+ }
+ goto err_out_free_res_4;
+ }
+ }
+
+ pci_set_master(pdev);
+
+ /* ioremap MMIO region */
+ ioaddr = ioremap(pci_resource_start(pdev, region), R8169_REGS_SIZE);
+ if (!ioaddr) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "cannot remap MMIO, aborting\n");
+ rc = -EIO;
+ goto err_out_free_res_4;
+ }
+
+	/* Unneeded? Don't mess with Mrs. Murphy. */
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ /* Identify chip attached to board */
+ rtl8169_get_mac_version(tp, ioaddr);
+
+ rtl8169_print_mac_version(tp);
+
+ for (i = ARRAY_SIZE(rtl_chip_info) - 1; i >= 0; i--) {
+ if (tp->mac_version == rtl_chip_info[i].mac_version)
+ break;
+ }
+ if (i < 0) {
+ /* Unknown chip: assume array element #0, original RTL-8169 */
+ if (netif_msg_probe(tp)) {
+ dev_printk(KERN_DEBUG, &pdev->dev,
+ "unknown chip version, assuming %s\n",
+ rtl_chip_info[0].name);
+ }
+ i++;
+ }
+ tp->chipset = i;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W8(Config1, RTL_R8(Config1) | PMEnable);
+ RTL_W8(Config5, RTL_R8(Config5) & PMEStatus);
+ tp->features |= rtl_try_msi(pdev, ioaddr, cfg);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if ((tp->mac_version <= RTL_GIGA_MAC_VER_06) &&
+ (RTL_R8(PHYstatus) & TBI_Enable)) {
+ tp->set_speed = rtl8169_set_speed_tbi;
+ tp->get_settings = rtl8169_gset_tbi;
+ tp->phy_reset_enable = rtl8169_tbi_reset_enable;
+ tp->phy_reset_pending = rtl8169_tbi_reset_pending;
+ tp->link_ok = rtl8169_tbi_link_ok;
+
+ tp->phy_1000_ctrl_reg = ADVERTISE_1000FULL; /* Implied by TBI */
+ } else {
+ tp->set_speed = rtl8169_set_speed_xmii;
+ tp->get_settings = rtl8169_gset_xmii;
+ tp->phy_reset_enable = rtl8169_xmii_reset_enable;
+ tp->phy_reset_pending = rtl8169_xmii_reset_pending;
+ tp->link_ok = rtl8169_xmii_link_ok;
+
+ dev->do_ioctl = rtl8169_ioctl;
+ }
+
+ /* Get MAC address. FIXME: read EEPROM */
+ for (i = 0; i < MAC_ADDR_LEN; i++)
+ dev->dev_addr[i] = RTL_R8(MAC0 + i);
+ memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
+
+ dev->open = rtl8169_open;
+ dev->hard_start_xmit = rtl8169_start_xmit;
+ dev->get_stats = rtl8169_get_stats;
+ SET_ETHTOOL_OPS(dev, &rtl8169_ethtool_ops);
+ dev->stop = rtl8169_close;
+ dev->tx_timeout = rtl8169_tx_timeout;
+ dev->set_multicast_list = rtl_set_rx_mode;
+ dev->watchdog_timeo = RTL8169_TX_TIMEOUT;
+ dev->irq = pdev->irq;
+ dev->base_addr = (unsigned long) ioaddr;
+ dev->change_mtu = rtl8169_change_mtu;
+ dev->set_mac_address = rtl_set_mac_address;
+
+#ifdef CONFIG_R8169_NAPI
+ netif_napi_add(dev, &tp->napi, rtl8169_poll, R8169_NAPI_WEIGHT);
+#endif
+
+#ifdef CONFIG_R8169_VLAN
+ dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+ dev->vlan_rx_register = rtl8169_vlan_rx_register;
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ dev->poll_controller = rtl8169_netpoll;
+#endif
+
+ tp->intr_mask = 0xffff;
+ tp->pci_dev = pdev;
+ tp->mmio_addr = ioaddr;
+ tp->align = cfg->align;
+ tp->hw_start = cfg->hw_start;
+ tp->intr_event = cfg->intr_event;
+ tp->napi_event = cfg->napi_event;
+
+ init_timer(&tp->timer);
+ tp->timer.data = (unsigned long) dev;
+ tp->timer.function = rtl8169_phy_timer;
+
+ spin_lock_init(&tp->lock);
+
+ // offer device to EtherCAT master module
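+	// If the master accepts it, the device is driven via ec_poll() and is
+	// not registered as a regular network interface.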
+ tp->ecdev = ecdev_offer(dev, ec_poll, THIS_MODULE);
+
+ if (!tp->ecdev) {
+ rc = register_netdev(dev);
+ if (rc < 0)
+ goto err_out_msi_5;
+ }
+
+ pci_set_drvdata(pdev, dev);
+
+ if (netif_msg_probe(tp)) {
+ u32 xid = RTL_R32(TxConfig) & 0x7cf0f8ff;
+
+ printk(KERN_INFO "%s: %s at 0x%lx, "
+ "%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x, "
+ "XID %08x IRQ %d\n",
+ dev->name,
+ rtl_chip_info[tp->chipset].name,
+ dev->base_addr,
+ dev->dev_addr[0], dev->dev_addr[1],
+ dev->dev_addr[2], dev->dev_addr[3],
+ dev->dev_addr[4], dev->dev_addr[5], xid, dev->irq);
+ }
+
+ rtl8169_init_phy(dev, tp);
+
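+	// Start EtherCAT operation; on failure, withdraw the device offer again.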
+ if (tp->ecdev && ecdev_open(tp->ecdev)) {
+ ecdev_withdraw(tp->ecdev);
+ goto err_out_msi_5;
+ }
+
+out:
+ return rc;
+
+err_out_msi_5:
+ rtl_disable_msi(pdev, tp);
+ iounmap(ioaddr);
+err_out_free_res_4:
+ pci_release_regions(pdev);
+err_out_mwi_3:
+ pci_clear_mwi(pdev);
+err_out_disable_2:
+ pci_disable_device(pdev);
+err_out_free_dev_1:
+ free_netdev(dev);
+ goto out;
+}
+
+static void __devexit rtl8169_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ flush_scheduled_work();
+
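+	// EtherCAT devices were never registered with the network stack,
+	// so close and withdraw them instead of unregistering.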
+ if (tp->ecdev) {
+ ecdev_close(tp->ecdev);
+ ecdev_withdraw(tp->ecdev);
+ } else {
+ unregister_netdev(dev);
+ }
+ rtl_disable_msi(pdev, tp);
+ rtl8169_release_board(pdev, dev, tp->mmio_addr);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static void rtl8169_set_rxbufsize(struct rtl8169_private *tp,
+ struct net_device *dev)
+{
+ unsigned int mtu = dev->mtu;
+
+ tp->rx_buf_sz = (mtu > RX_BUF_SIZE) ? mtu + ETH_HLEN + 8 : RX_BUF_SIZE;
+}
+
+static int rtl8169_open(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ int retval = -ENOMEM;
+
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ /*
+	 * Rx and Tx descriptors need 256-byte alignment.
+ * pci_alloc_consistent provides more.
+ */
+ tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES,
+ &tp->TxPhyAddr);
+ if (!tp->TxDescArray)
+ goto out;
+
+ tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES,
+ &tp->RxPhyAddr);
+ if (!tp->RxDescArray)
+ goto err_free_tx_0;
+
+ retval = rtl8169_init_ring(dev);
+ if (retval < 0)
+ goto err_free_rx_1;
+
+ INIT_DELAYED_WORK(&tp->task, NULL);
+
+ smp_mb();
+
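+	// EtherCAT devices are polled by the master, so neither an interrupt
+	// handler nor a NAPI context is set up for them.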
+ if (!tp->ecdev) {
+ retval = request_irq(dev->irq, rtl8169_interrupt,
+ (tp->features & RTL_FEATURE_MSI) ? 0 : IRQF_SHARED,
+ dev->name, dev);
+ if (retval < 0)
+ goto err_release_ring_2;
+
+#ifdef CONFIG_R8169_NAPI
+ napi_enable(&tp->napi);
+#endif
+ }
+
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+out:
+ return retval;
+
+err_release_ring_2:
+ rtl8169_rx_clear(tp);
+err_free_rx_1:
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+err_free_tx_0:
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ goto out;
+}
+
+static void rtl8169_hw_reset(void __iomem *ioaddr)
+{
+ /* Disable interrupts */
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ /* Reset the chipset */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* PCI commit */
+ RTL_R8(ChipCmd);
+}
+
+static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 cfg = rtl8169_rx_config;
+
+ cfg |= (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+ RTL_W32(RxConfig, cfg);
+
+ /* Set DMA burst size and Interframe Gap Time */
+ RTL_W32(TxConfig, (TX_DMA_BURST << TxDMAShift) |
+ (InterFrameGap << TxInterFrameGapShift));
+}
+
+static void rtl_hw_start(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ tp->hw_start(dev);
+
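+	// The kernel's transmit queue is not used for EtherCAT devices.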
+ if (!tp->ecdev)
+ netif_start_queue(dev);
+}
+
+
+static void rtl_set_rx_tx_desc_registers(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * Magic spell: some iop3xx ARM board needs the TxDescAddrHigh
+ * register to be written before TxDescAddrLow to work.
+ * Switching from MMIO to I/O access fixes the issue as well.
+ */
+ RTL_W32(TxDescStartAddrHigh, ((u64) tp->TxPhyAddr) >> 32);
+ RTL_W32(TxDescStartAddrLow, ((u64) tp->TxPhyAddr) & DMA_32BIT_MASK);
+ RTL_W32(RxDescAddrHigh, ((u64) tp->RxPhyAddr) >> 32);
+ RTL_W32(RxDescAddrLow, ((u64) tp->RxPhyAddr) & DMA_32BIT_MASK);
+}
+
+static u16 rtl_rw_cpluscmd(void __iomem *ioaddr)
+{
+ u16 cmd;
+
+ cmd = RTL_R16(CPlusCmd);
+ RTL_W16(CPlusCmd, cmd);
+ return cmd;
+}
+
+static void rtl_set_rx_max_size(void __iomem *ioaddr)
+{
+ /* Low hurts. Let's disable the filtering. */
+ RTL_W16(RxMaxSize, 16383);
+}
+
+static void rtl8169_set_magic_reg(void __iomem *ioaddr, unsigned mac_version)
+{
+ struct {
+ u32 mac_version;
+ u32 clk;
+ u32 val;
+ } cfg2_info [] = {
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_33MHz, 0x000fff00 }, // 8110SCd
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_66MHz, 0x000fffff },
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_33MHz, 0x00ffff00 }, // 8110SCe
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_66MHz, 0x00ffffff }
+ }, *p = cfg2_info;
+ unsigned int i;
+ u32 clk;
+
+ clk = RTL_R8(Config2) & PCI_Clock_66MHz;
+ for (i = 0; i < ARRAY_SIZE(cfg2_info); i++, p++) {
+ if ((p->mac_version == mac_version) && (p->clk == clk)) {
+ RTL_W32(0x7c, p->val);
+ break;
+ }
+ }
+}
+
+static void rtl_hw_start_8169(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ printk(KERN_INFO "%s\n", __func__);
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_05) {
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) | PCIMulRW);
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x08);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ rtl_set_rx_tx_config_registers(tp);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03)) {
+ dprintk("Set MAC Reg C+CR Offset 0xE0. "
+ "Bit-3 and bit-14 MUST be 1\n");
+ tp->cp_cmd |= (1 << 14);
+ }
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ rtl8169_set_magic_reg(ioaddr, tp->mac_version);
+
+ /*
+ * Undocumented corner. Supposedly:
+ * (TxTimer << 12) | (TxPackets << 8) | (RxTimer << 4) | RxPackets
+ */
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ if ((tp->mac_version != RTL_GIGA_MAC_VER_01) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_02) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_03) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_04)) {
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ /* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ /* no early-rx interrupts */
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
+	/* Enable all known interrupts by setting the interrupt mask
+	 * (not done for EtherCAT devices, which are polled). */
+ if (!tp->ecdev)
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static void rtl_hw_start_8168(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+ u8 ctl;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ rtl_set_rx_tx_config_registers(tp);
+
+ tp->cp_cmd |= RTL_R16(CPlusCmd) | PktCntrDisable | INTT_1;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ /* Tx performance tweak. */
+ pci_read_config_byte(pdev, 0x69, &ctl);
+ ctl = (ctl & ~0x70) | 0x50;
+ pci_write_config_byte(pdev, 0x69, ctl);
+
+ RTL_W16(IntrMitigate, 0x5151);
+
+	/* Workaround for RxFIFO overflow. */
+ if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+ tp->intr_event |= RxFIFOOver | PCSTimeout;
+ tp->intr_event &= ~RxOverflow;
+ }
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
+ if (!tp->ecdev)
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static void rtl_hw_start_8101(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ pci_write_config_word(pdev, 0x68, 0x00);
+ pci_write_config_word(pdev, 0x69, 0x08);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xf000);
+
+ if (!tp->ecdev)
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret = 0;
+
+ if (new_mtu < ETH_ZLEN || new_mtu > SafeMtu)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ if (!netif_running(dev))
+ goto out;
+
+ rtl8169_down(dev);
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ ret = rtl8169_init_ring(dev);
+ if (ret < 0)
+ goto out;
+
+#ifdef CONFIG_R8169_NAPI
+ napi_enable(&tp->napi);
+#endif
+
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+out:
+ return ret;
+}
+
+static inline void rtl8169_make_unusable_by_asic(struct RxDesc *desc)
+{
+ desc->addr = cpu_to_le64(0x0badbadbadbadbadull);
+ desc->opts1 &= ~cpu_to_le32(DescOwn | RsvdMask);
+}
+
+static void rtl8169_free_rx_skb(struct rtl8169_private *tp,
+ struct sk_buff **sk_buff, struct RxDesc *desc)
+{
+ struct pci_dev *pdev = tp->pci_dev;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), tp->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(*sk_buff);
+ *sk_buff = NULL;
+ rtl8169_make_unusable_by_asic(desc);
+}
+
+static inline void rtl8169_mark_to_asic(struct RxDesc *desc, u32 rx_buf_sz)
+{
+ u32 eor = le32_to_cpu(desc->opts1) & RingEnd;
+
+ desc->opts1 = cpu_to_le32(DescOwn | eor | rx_buf_sz);
+}
+
+static inline void rtl8169_map_to_asic(struct RxDesc *desc, dma_addr_t mapping,
+ u32 rx_buf_sz)
+{
+ desc->addr = cpu_to_le64(mapping);
+ wmb();
+ rtl8169_mark_to_asic(desc, rx_buf_sz);
+}
+
+static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev,
+ struct net_device *dev,
+ struct RxDesc *desc, int rx_buf_sz,
+ unsigned int align)
+{
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+ unsigned int pad;
+
+ pad = align ? align : NET_IP_ALIGN;
+
+ skb = netdev_alloc_skb(dev, rx_buf_sz + pad);
+ if (!skb)
+ goto err_out;
+
+ skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad);
+
+ mapping = pci_map_single(pdev, skb->data, rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+
+ rtl8169_map_to_asic(desc, mapping, rx_buf_sz);
+out:
+ return skb;
+
+err_out:
+ rtl8169_make_unusable_by_asic(desc);
+ goto out;
+}
+
+static void rtl8169_rx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = 0; i < NUM_RX_DESC; i++) {
+ if (tp->Rx_skbuff[i]) {
+ rtl8169_free_rx_skb(tp, tp->Rx_skbuff + i,
+ tp->RxDescArray + i);
+ }
+ }
+}
+
+static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev,
+ u32 start, u32 end)
+{
+ u32 cur;
+
+ for (cur = start; end - cur != 0; cur++) {
+ struct sk_buff *skb;
+ unsigned int i = cur % NUM_RX_DESC;
+
+ WARN_ON((s32)(end - cur) < 0);
+
+ if (tp->Rx_skbuff[i])
+ continue;
+
+ skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev,
+ tp->RxDescArray + i,
+ tp->rx_buf_sz, tp->align);
+ if (!skb)
+ break;
+
+ tp->Rx_skbuff[i] = skb;
+ }
+ return cur - start;
+}
+
+static inline void rtl8169_mark_as_last_descriptor(struct RxDesc *desc)
+{
+ desc->opts1 |= cpu_to_le32(RingEnd);
+}
+
+static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
+{
+ tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;
+}
+
+static int rtl8169_init_ring(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ rtl8169_init_ring_indexes(tp);
+
+ memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info));
+ memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *));
+
+ if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)
+ goto err_out;
+
+ rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1);
+
+ return 0;
+
+err_out:
+ rtl8169_rx_clear(tp);
+ return -ENOMEM;
+}
+
+static void rtl8169_unmap_tx_skb(struct pci_dev *pdev, struct ring_info *tx_skb,
+ struct TxDesc *desc)
+{
+ unsigned int len = tx_skb->len;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), len, PCI_DMA_TODEVICE);
+ desc->opts1 = 0x00;
+ desc->opts2 = 0x00;
+ desc->addr = 0x00;
+ tx_skb->len = 0;
+}
+
+static void rtl8169_tx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = tp->dirty_tx; i < tp->dirty_tx + NUM_TX_DESC; i++) {
+ unsigned int entry = i % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ unsigned int len = tx_skb->len;
+
+ if (len) {
+ struct sk_buff *skb = tx_skb->skb;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb,
+ tp->TxDescArray + entry);
+ if (skb) {
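+				/* Frames handed over by the EtherCAT master
+				 * remain owned by the master, so they must not
+				 * be freed here. */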
+ if (!tp->ecdev)
+ dev_kfree_skb(skb);
+ tx_skb->skb = NULL;
+ }
+ tp->dev->stats.tx_dropped++;
+ }
+ }
+ tp->cur_tx = tp->dirty_tx = 0;
+}
+
+static void rtl8169_schedule_work(struct net_device *dev, work_func_t task)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ PREPARE_DELAYED_WORK(&tp->task, task);
+ schedule_delayed_work(&tp->task, 4);
+}
+
+static void rtl8169_wait_for_quiescence(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ synchronize_irq(dev->irq);
+
+ /* Wait for any pending NAPI task to complete */
+#ifdef CONFIG_R8169_NAPI
+ napi_disable(&tp->napi);
+#endif
+
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+#ifdef CONFIG_R8169_NAPI
+ tp->intr_mask = 0xffff;
+ RTL_W16(IntrMask, tp->intr_event);
+ napi_enable(&tp->napi);
+#endif
+}
+
+static void rtl8169_reinit_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+ int ret;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+ rtl8169_close(dev);
+
+ ret = rtl8169_open(dev);
+ if (unlikely(ret < 0)) {
+ if (net_ratelimit() && netif_msg_drv(tp)) {
+ printk(KERN_ERR PFX "%s: reinit failure (status = %d)."
+ " Rescheduling.\n", dev->name, ret);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_reset_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+
+ rtl8169_rx_interrupt(dev, tp, tp->mmio_addr, ~(u32)0);
+ rtl8169_tx_clear(tp);
+
+ if (tp->dirty_rx == tp->cur_rx) {
+ rtl8169_init_ring_indexes(tp);
+ rtl_hw_start(dev);
+ netif_wake_queue(dev);
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+ } else {
+ if (net_ratelimit() && netif_msg_intr(tp)) {
+ printk(KERN_EMERG PFX "%s: Rx buffers shortage\n",
+ dev->name);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_tx_timeout(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ if (tp->ecdev)
+ return;
+
+ rtl8169_hw_reset(tp->mmio_addr);
+
+	/* Let's wait a bit while any (async) irq lands. */
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+}
+
+static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
+ u32 opts1)
+{
+ struct skb_shared_info *info = skb_shinfo(skb);
+ unsigned int cur_frag, entry;
+ struct TxDesc * uninitialized_var(txd);
+
+ entry = tp->cur_tx;
+ for (cur_frag = 0; cur_frag < info->nr_frags; cur_frag++) {
+ skb_frag_t *frag = info->frags + cur_frag;
+ dma_addr_t mapping;
+ u32 status, len;
+ void *addr;
+
+ entry = (entry + 1) % NUM_TX_DESC;
+
+ txd = tp->TxDescArray + entry;
+ len = frag->size;
+ addr = ((void *) page_address(frag->page)) + frag->page_offset;
+ mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE);
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+
+ txd->opts1 = cpu_to_le32(status);
+ txd->addr = cpu_to_le64(mapping);
+
+ tp->tx_skb[entry].len = len;
+ }
+
+ if (cur_frag) {
+ tp->tx_skb[entry].skb = skb;
+ txd->opts1 |= cpu_to_le32(LastFrag);
+ }
+
+ return cur_frag;
+}
+
+static inline u32 rtl8169_tso_csum(struct sk_buff *skb, struct net_device *dev)
+{
+ if (dev->features & NETIF_F_TSO) {
+ u32 mss = skb_shinfo(skb)->gso_size;
+
+ if (mss)
+ return LargeSend | ((mss & MSSMask) << MSSShift);
+ }
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ const struct iphdr *ip = ip_hdr(skb);
+
+ if (ip->protocol == IPPROTO_TCP)
+ return IPCS | TCPCS;
+ else if (ip->protocol == IPPROTO_UDP)
+ return IPCS | UDPCS;
+ WARN_ON(1); /* we need a WARN() */
+ }
+ return 0;
+}
+
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned int frags, entry = tp->cur_tx % NUM_TX_DESC;
+ struct TxDesc *txd = tp->TxDescArray + entry;
+ void __iomem *ioaddr = tp->mmio_addr;
+ dma_addr_t mapping;
+ u32 status, len;
+ u32 opts1;
+ int ret = NETDEV_TX_OK;
+
+ if (unlikely(TX_BUFFS_AVAIL(tp) < skb_shinfo(skb)->nr_frags)) {
+ if (netif_msg_drv(tp)) {
+ printk(KERN_ERR
+ "%s: BUG! Tx Ring full when queue awake!\n",
+ dev->name);
+ }
+ goto err_stop;
+ }
+
+ if (unlikely(le32_to_cpu(txd->opts1) & DescOwn))
+ goto err_stop;
+
+ opts1 = DescOwn | rtl8169_tso_csum(skb, dev);
+
+ frags = rtl8169_xmit_frags(tp, skb, opts1);
+ if (frags) {
+ len = skb_headlen(skb);
+ opts1 |= FirstFrag;
+ } else {
+ len = skb->len;
+
+ if (unlikely(len < ETH_ZLEN)) {
+ if (skb_padto(skb, ETH_ZLEN))
+ goto err_update_stats;
+ len = ETH_ZLEN;
+ }
+
+ opts1 |= FirstFrag | LastFrag;
+ tp->tx_skb[entry].skb = skb;
+ }
+
+ mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE);
+
+ tp->tx_skb[entry].len = len;
+ txd->addr = cpu_to_le64(mapping);
+ txd->opts2 = cpu_to_le32(rtl8169_tx_vlan_tag(tp, skb));
+
+ wmb();
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+ txd->opts1 = cpu_to_le32(status);
+
+ dev->trans_start = jiffies;
+
+ tp->cur_tx += frags + 1;
+
+ smp_wmb();
+
+ RTL_W8(TxPoll, NPQ); /* set polling bit */
+
+ if (!tp->ecdev) {
+ if (TX_BUFFS_AVAIL(tp) < MAX_SKB_FRAGS) {
+ netif_stop_queue(dev);
+ smp_rmb();
+ if (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)
+ netif_wake_queue(dev);
+ }
+ }
+
+out:
+ return ret;
+
+err_stop:
+ if (!tp->ecdev)
+ netif_stop_queue(dev);
+ ret = NETDEV_TX_BUSY;
+err_update_stats:
+ dev->stats.tx_dropped++;
+ goto out;
+}
+
+static void rtl8169_pcierr_interrupt(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ u16 pci_status, pci_cmd;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+ pci_read_config_word(pdev, PCI_STATUS, &pci_status);
+
+ if (netif_msg_intr(tp)) {
+ printk(KERN_ERR
+ "%s: PCI error (cmd = 0x%04x, status = 0x%04x).\n",
+ dev->name, pci_cmd, pci_status);
+ }
+
+ /*
+ * The recovery sequence below admits a very elaborated explanation:
+ * - it seems to work;
+ * - I did not see what else could be done;
+ * - it makes iop3xx happy.
+ *
+ * Feel free to adjust to your needs.
+ */
+ if (pdev->broken_parity_status)
+ pci_cmd &= ~PCI_COMMAND_PARITY;
+ else
+ pci_cmd |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY;
+
+ pci_write_config_word(pdev, PCI_COMMAND, pci_cmd);
+
+ pci_write_config_word(pdev, PCI_STATUS,
+ pci_status & (PCI_STATUS_DETECTED_PARITY |
+ PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_REC_MASTER_ABORT |
+ PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_SIG_TARGET_ABORT));
+
+ /* The infamous DAC f*ckup only happens at boot time */
+ if ((tp->cp_cmd & PCIDAC) && !tp->dirty_rx && !tp->cur_rx) {
+ if (netif_msg_intr(tp))
+ printk(KERN_INFO "%s: disabling PCI DAC.\n", dev->name);
+ tp->cp_cmd &= ~PCIDAC;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ dev->features &= ~NETIF_F_HIGHDMA;
+ }
+
+ rtl8169_hw_reset(ioaddr);
+
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+}
+
+static void rtl8169_tx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned int dirty_tx, tx_left;
+
+ dirty_tx = tp->dirty_tx;
+ smp_rmb();
+ tx_left = tp->cur_tx - dirty_tx;
+
+ while (tx_left > 0) {
+ unsigned int entry = dirty_tx % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ u32 len = tx_skb->len;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(tp->TxDescArray[entry].opts1);
+ if (status & DescOwn)
+ break;
+
+ dev->stats.tx_bytes += len;
+ dev->stats.tx_packets++;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb, tp->TxDescArray + entry);
+
+ if (status & LastFrag) {
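+			/* Do not free frames owned by the EtherCAT master. */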
+ if (!tp->ecdev)
+ dev_kfree_skb_irq(tx_skb->skb);
+ tx_skb->skb = NULL;
+ }
+ dirty_tx++;
+ tx_left--;
+ }
+
+ if (tp->dirty_tx != dirty_tx) {
+ tp->dirty_tx = dirty_tx;
+ smp_wmb();
+ if (!tp->ecdev && netif_queue_stopped(dev) &&
+ (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) {
+ netif_wake_queue(dev);
+ }
+ /*
+ * 8168 hack: TxPoll requests are lost when the Tx packets are
+ * too close. Let's kick an extra TxPoll request when a burst
+ * of start_xmit activity is detected (if it is not detected,
+ * it is slow enough). -- FR
+ */
+ smp_rmb();
+ if (tp->cur_tx != dirty_tx)
+ RTL_W8(TxPoll, NPQ);
+ }
+}
+
+static inline int rtl8169_fragmented_frame(u32 status)
+{
+ return (status & (FirstFrag | LastFrag)) != (FirstFrag | LastFrag);
+}
+
+static inline void rtl8169_rx_csum(struct sk_buff *skb, struct RxDesc *desc)
+{
+ u32 opts1 = le32_to_cpu(desc->opts1);
+ u32 status = opts1 & RxProtoMask;
+
+ if (((status == RxProtoTCP) && !(opts1 & TCPFail)) ||
+ ((status == RxProtoUDP) && !(opts1 & UDPFail)) ||
+ ((status == RxProtoIP) && !(opts1 & IPFail)))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
+}
+
+static inline bool rtl8169_try_rx_copy(struct sk_buff **sk_buff,
+ struct rtl8169_private *tp, int pkt_size,
+ dma_addr_t addr)
+{
+ struct sk_buff *skb;
+ bool done = false;
+
+ if (pkt_size >= rx_copybreak)
+ goto out;
+
+ skb = netdev_alloc_skb(tp->dev, pkt_size + NET_IP_ALIGN);
+ if (!skb)
+ goto out;
+
+ pci_dma_sync_single_for_cpu(tp->pci_dev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+ skb_reserve(skb, NET_IP_ALIGN);
+ skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size);
+ *sk_buff = skb;
+ done = true;
+out:
+ return done;
+}
+
+static int rtl8169_rx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr, u32 budget)
+{
+ unsigned int cur_rx, rx_left;
+ unsigned int delta, count;
+
+ cur_rx = tp->cur_rx;
+ rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;
+ rx_left = rtl8169_rx_quota(rx_left, budget);
+
+ for (; rx_left > 0; rx_left--, cur_rx++) {
+ unsigned int entry = cur_rx % NUM_RX_DESC;
+ struct RxDesc *desc = tp->RxDescArray + entry;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(desc->opts1);
+
+ if (status & DescOwn)
+ break;
+ if (unlikely(status & RxRES)) {
+ if (netif_msg_rx_err(tp)) {
+ printk(KERN_INFO
+ "%s: Rx ERROR. status = %08x\n",
+ dev->name, status);
+ }
+ dev->stats.rx_errors++;
+ if (status & (RxRWT | RxRUNT))
+ dev->stats.rx_length_errors++;
+ if (status & RxCRC)
+ dev->stats.rx_crc_errors++;
+ if (status & RxFOVF) {
+ if (!tp->ecdev)
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ dev->stats.rx_fifo_errors++;
+ }
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ struct sk_buff *skb = tp->Rx_skbuff[entry];
+ dma_addr_t addr = le64_to_cpu(desc->addr);
+ int pkt_size = (status & 0x00001FFF) - 4;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ /*
+ * The driver does not support incoming fragmented
+ * frames. They are seen as a symptom of over-mtu
+ * sized frames.
+ */
+ if (unlikely(rtl8169_fragmented_frame(status))) {
+ dev->stats.rx_dropped++;
+ dev->stats.rx_length_errors++;
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ continue;
+ }
+
+ rtl8169_rx_csum(skb, desc);
+
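+			// EtherCAT path: hand the data directly to the master
+			// and give the descriptor back to the hardware; the
+			// skb stays in the ring and is reused.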
+ if (tp->ecdev) {
+ pci_dma_sync_single_for_cpu(pdev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+
+ ecdev_receive(tp->ecdev, skb->data, pkt_size);
+
+ pci_dma_sync_single_for_device(pdev, addr,
+ pkt_size, PCI_DMA_FROMDEVICE);
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+
+ // No need to detect link status as
+ // long as frames are received: Reset watchdog.
+ tp->ec_watchdog_jiffies = jiffies;
+ } else {
+ if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) {
+ pci_dma_sync_single_for_device(pdev, addr,
+ pkt_size, PCI_DMA_FROMDEVICE);
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ pci_unmap_single(pdev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+ tp->Rx_skbuff[entry] = NULL;
+ }
+
+ skb_put(skb, pkt_size);
+ skb->protocol = eth_type_trans(skb, dev);
+
+ if (rtl8169_rx_vlan_skb(tp, desc, skb) < 0)
+ rtl8169_rx_skb(skb);
+ }
+
+ dev->last_rx = jiffies;
+ dev->stats.rx_bytes += pkt_size;
+ dev->stats.rx_packets++;
+ }
+
+		/* Workaround for AMD platforms. */
+ if ((desc->opts2 & cpu_to_le32(0xfffe000)) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_05)) {
+ desc->opts2 = 0;
+ cur_rx++;
+ }
+ }
+
+ count = cur_rx - tp->cur_rx;
+ tp->cur_rx = cur_rx;
+
+ if (tp->ecdev) {
+ /* descriptors are cleaned up immediately. */
+ tp->dirty_rx = tp->cur_rx;
+ } else {
+ delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);
+ if (!delta && count && netif_msg_intr(tp))
+ printk(KERN_INFO "%s: no Rx buffer allocated\n", dev->name);
+ tp->dirty_rx += delta;
+
+ /*
+		 * FIXME: until there is a periodic timer to try and refill the ring,
+		 * a temporary shortage may definitely kill the Rx process.
+		 * - disable the asic to try and avoid an overflow and kick it again
+		 *   after refill?
+		 * - how do other drivers handle this condition (Uh oh...).
+ */
+ if ((tp->dirty_rx + NUM_RX_DESC == tp->cur_rx) && netif_msg_intr(tp))
+ printk(KERN_EMERG "%s: Rx buffers exhausted\n", dev->name);
+ }
+
+ return count;
+}
+
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+{
+ struct net_device *dev = dev_instance;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int boguscnt = max_interrupt_work;
+ void __iomem *ioaddr = tp->mmio_addr;
+ int status;
+ int handled = 0;
+
+ do {
+ status = RTL_R16(IntrStatus);
+
+ /* hotplug/major error/no more work/shared irq */
+ if ((status == 0xFFFF) || !status)
+ break;
+
+ handled = 1;
+
+ if (unlikely(!tp->ecdev && !netif_running(dev))) {
+ rtl8169_asic_down(ioaddr);
+ goto out;
+ }
+
+ status &= tp->intr_mask;
+ RTL_W16(IntrStatus,
+ (status & RxFIFOOver) ? (status | RxOverflow) : status);
+
+ if (!(status & tp->intr_event))
+ break;
+
+		/* Workaround for Rx FIFO overflow */
+ if (!tp->ecdev && unlikely(status & RxFIFOOver) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_11)) {
+ netif_stop_queue(dev);
+ rtl8169_tx_timeout(dev);
+ break;
+ }
+
+ if (unlikely(!tp->ecdev && (status & SYSErr))) {
+ rtl8169_pcierr_interrupt(dev);
+ break;
+ }
+
+ if (status & LinkChg)
+ rtl8169_check_link_status(dev, tp, ioaddr);
+
+#ifdef CONFIG_R8169_NAPI
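+		// When polled via ec_poll(), Rx and Tx are handled synchronously
+		// here instead of being deferred to NAPI.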
+ if (tp->ecdev) {
+ /* Rx interrupt */
+ if (status & (RxOK | RxOverflow | RxFIFOOver))
+ rtl8169_rx_interrupt(dev, tp, ioaddr, ~(u32)0);
+
+ /* Tx interrupt */
+ if (status & (TxOK | TxErr))
+ rtl8169_tx_interrupt(dev, tp, ioaddr);
+
+ } else if (status & tp->napi_event) {
+ RTL_W16(IntrMask, tp->intr_event & ~tp->napi_event);
+ tp->intr_mask = ~tp->napi_event;
+
+ if (likely(netif_rx_schedule_prep(dev, &tp->napi)))
+ __netif_rx_schedule(dev, &tp->napi);
+ else if (netif_msg_intr(tp)) {
+ printk(KERN_INFO "%s: interrupt %04x in poll\n",
+ dev->name, status);
+ }
+ }
+ break;
+#else
+ /* Rx interrupt */
+ if (status & (RxOK | RxOverflow | RxFIFOOver))
+ rtl8169_rx_interrupt(dev, tp, ioaddr, ~(u32)0);
+
+ /* Tx interrupt */
+ if (status & (TxOK | TxErr))
+ rtl8169_tx_interrupt(dev, tp, ioaddr);
+#endif
+
+ boguscnt--;
+ } while (boguscnt > 0);
+
+ if (!tp->ecdev) {
+ if (boguscnt <= 0) {
+ if (netif_msg_intr(tp) && net_ratelimit() ) {
+ printk(KERN_WARNING
+ "%s: Too much work at interrupt!\n", dev->name);
+ }
+ /* Clear all interrupt sources. */
+ RTL_W16(IntrStatus, 0xffff);
+ }
+ }
+out:
+ return IRQ_RETVAL(handled);
+}
+
+#ifdef CONFIG_R8169_NAPI
+static int rtl8169_poll(struct napi_struct *napi, int budget)
+{
+ struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
+ struct net_device *dev = tp->dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ int work_done;
+
+ work_done = rtl8169_rx_interrupt(dev, tp, ioaddr, (u32) budget);
+ rtl8169_tx_interrupt(dev, tp, ioaddr);
+
+ if (work_done < budget) {
+ netif_rx_complete(dev, napi);
+ tp->intr_mask = 0xffff;
+ /*
+ * 20040426: the barrier is not strictly required but the
+ * behavior of the irq handler could be less predictable
+ * without it. Btw, the lack of flush for the posted pci
+ * write is safe - FR
+ */
+ smp_wmb();
+ RTL_W16(IntrMask, tp->intr_event);
+ }
+
+ return work_done;
+}
+#endif
+
+static void rtl8169_down(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int intrmask;
+
+ rtl8169_delete_timer(dev);
+
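+	// Queue handling, NAPI and IRQ synchronization are skipped for EtherCAT
+	// devices, since none of them were set up in rtl8169_open().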
+ if (!tp->ecdev) {
+ netif_stop_queue(dev);
+
+#ifdef CONFIG_R8169_NAPI
+ napi_disable(&tp->napi);
+#endif
+ }
+
+core_down:
+ if (!tp->ecdev)
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ /* Update the error counts. */
+ dev->stats.rx_missed_errors += RTL_R32(RxMissed);
+ RTL_W32(RxMissed, 0);
+
+ if (!tp->ecdev)
+ spin_unlock_irq(&tp->lock);
+
+ if (!tp->ecdev)
+ synchronize_irq(dev->irq);
+
+ /* Give a racing hard_start_xmit a few cycles to complete. */
+ synchronize_sched(); /* FIXME: should this be synchronize_irq()? */
+
+ /*
+ * And now for the 50k$ question: are IRQ disabled or not ?
+ *
+ * Two paths lead here:
+ * 1) dev->close
+ * -> netif_running() is available to sync the current code and the
+ * IRQ handler. See rtl8169_interrupt for details.
+ * 2) dev->change_mtu
+ * -> rtl8169_poll can not be issued again and re-enable the
+ * interruptions. Let's simply issue the IRQ down sequence again.
+ *
+	 * No loop if hotplugged or major error (0xffff).
+ */
+ intrmask = RTL_R16(IntrMask);
+ if (intrmask && (intrmask != 0xffff))
+ goto core_down;
+
+ rtl8169_tx_clear(tp);
+
+ rtl8169_rx_clear(tp);
+}
+
+static int rtl8169_close(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ rtl8169_down(dev);
+
+ if (!tp->ecdev)
+ free_irq(dev->irq, dev);
+
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ tp->TxDescArray = NULL;
+ tp->RxDescArray = NULL;
+
+ return 0;
+}
+
+static void rtl_set_rx_mode(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+ u32 mc_filter[2]; /* Multicast hash filter */
+ int rx_mode;
+ u32 tmp = 0;
+
+ if (dev->flags & IFF_PROMISC) {
+ /* Unconditionally log net taps. */
+ if (netif_msg_link(tp)) {
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n",
+ dev->name);
+ }
+ rx_mode =
+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys |
+ AcceptAllPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else if ((dev->mc_count > multicast_filter_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to filter perfectly -- accept all multicasts. */
+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else {
+ struct dev_mc_list *mclist;
+ unsigned int i;
+
+ rx_mode = AcceptBroadcast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0;
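+		/* Hash each address with the Ethernet CRC; the top six bits
+		 * select one of the 64 multicast filter bits. */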
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
+ i++, mclist = mclist->next) {
+ int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
+ mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
+ rx_mode |= AcceptMulticast;
+ }
+ }
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ tmp = rtl8169_rx_config | rx_mode |
+ (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+
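+	/* The PCI-E chips below do not use the hash filter computed above;
+	 * accept all multicast frames instead. */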
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_11) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_12) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_15) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_17)) {
+ mc_filter[0] = 0xffffffff;
+ mc_filter[1] = 0xffffffff;
+ }
+
+ RTL_W32(MAR0 + 0, mc_filter[0]);
+ RTL_W32(MAR0 + 4, mc_filter[1]);
+
+ RTL_W32(RxConfig, tmp);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+/**
+ * rtl8169_get_stats - Get rtl8169 read/write statistics
+ * @dev: The Ethernet Device to get statistics for
+ *
+ * Get TX/RX statistics for rtl8169
+ */
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ if (netif_running(dev)) {
+ spin_lock_irqsave(&tp->lock, flags);
+ dev->stats.rx_missed_errors += RTL_R32(RxMissed);
+ RTL_W32(RxMissed, 0);
+ spin_unlock_irqrestore(&tp->lock, flags);
+ }
+
+ return &dev->stats;
+}
+
+#ifdef CONFIG_PM
+
+static int rtl8169_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
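+	/* Skip power management entirely while the device is claimed as an
+	 * EtherCAT device. */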
+	if (tp->ecdev)
+		return 0;
+
+ if (!netif_running(dev))
+ goto out_pci_suspend;
+
+ netif_device_detach(dev);
+ netif_stop_queue(dev);
+
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ dev->stats.rx_missed_errors += RTL_R32(RxMissed);
+ RTL_W32(RxMissed, 0);
+
+ spin_unlock_irq(&tp->lock);
+
+out_pci_suspend:
+ pci_save_state(pdev);
+ pci_enable_wake(pdev, pci_choose_state(pdev, state),
+ (tp->features & RTL_FEATURE_WOL) ? 1 : 0);
+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
+
+ return 0;
+}
+
+static int rtl8169_resume(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+	if (tp->ecdev)
+		return 0;
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ pci_enable_wake(pdev, PCI_D0, 0);
+
+ if (!netif_running(dev))
+ goto out;
+
+ netif_device_attach(dev);
+
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+out:
+ return 0;
+}
+
+#endif /* CONFIG_PM */
+
+static struct pci_driver rtl8169_pci_driver = {
+ .name = MODULENAME,
+ .id_table = rtl8169_pci_tbl,
+ .probe = rtl8169_init_one,
+ .remove = __devexit_p(rtl8169_remove_one),
+#ifdef CONFIG_PM
+ .suspend = rtl8169_suspend,
+ .resume = rtl8169_resume,
+#endif
+};
+
+static int __init rtl8169_init_module(void)
+{
+ return pci_register_driver(&rtl8169_pci_driver);
+}
+
+static void __exit rtl8169_cleanup_module(void)
+{
+ pci_unregister_driver(&rtl8169_pci_driver);
+}
+
+module_init(rtl8169_init_module);
+module_exit(rtl8169_cleanup_module);
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/devices/r8169-2.6.24-orig.c Wed Jan 13 00:04:47 2010 +0100
@@ -0,0 +1,3209 @@
+/*
+ * r8169.c: RealTek 8169/8168/8101 ethernet driver.
+ *
+ * Copyright (c) 2002 ShuChen <shuchen@realtek.com.tw>
+ * Copyright (c) 2003 - 2007 Francois Romieu <romieu@fr.zoreil.com>
+ * Copyright (c) a lot of people too. Please respect their work.
+ *
+ * See MAINTAINERS file for support contact information.
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/if_vlan.h>
+#include <linux/crc32.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/init.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#ifdef CONFIG_R8169_NAPI
+#define NAPI_SUFFIX "-NAPI"
+#else
+#define NAPI_SUFFIX ""
+#endif
+
+#define RTL8169_VERSION "2.2LK" NAPI_SUFFIX
+#define MODULENAME "r8169"
+#define PFX MODULENAME ": "
+
+#ifdef RTL8169_DEBUG
+#define assert(expr) \
+ if (!(expr)) { \
+ printk( "Assertion failed! %s,%s,%s,line=%d\n", \
+ #expr,__FILE__,__FUNCTION__,__LINE__); \
+ }
+#define dprintk(fmt, args...) \
+ do { printk(KERN_DEBUG PFX fmt, ## args); } while (0)
+#else
+#define assert(expr) do {} while (0)
+#define dprintk(fmt, args...) do {} while (0)
+#endif /* RTL8169_DEBUG */
+
+#define R8169_MSG_DEFAULT \
+ (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN)
+
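+/* Number of free Tx descriptors; one slot stays unused so that a full
+ * ring can be distinguished from an empty one. */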
+#define TX_BUFFS_AVAIL(tp) \
+ (tp->dirty_tx + NUM_TX_DESC - tp->cur_tx - 1)
+
+#ifdef CONFIG_R8169_NAPI
+#define rtl8169_rx_skb netif_receive_skb
+#define rtl8169_rx_hwaccel_skb vlan_hwaccel_receive_skb
+#define rtl8169_rx_quota(count, quota) min(count, quota)
+#else
+#define rtl8169_rx_skb netif_rx
+#define rtl8169_rx_hwaccel_skb vlan_hwaccel_rx
+#define rtl8169_rx_quota(count, quota) count
+#endif
+
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+static const int max_interrupt_work = 20;
+
+/* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
+ The RTL chips use a 64 element hash table based on the Ethernet CRC. */
+static const int multicast_filter_limit = 32;
+
+/* MAC address length */
+#define MAC_ADDR_LEN 6
+
+#define RX_FIFO_THRESH 7 /* 7 means NO threshold, Rx buffer level before first PCI xfer. */
+#define RX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define TX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define EarlyTxThld 0x3F /* 0x3F means NO early transmit */
+#define RxPacketMaxSize 0x3FE8 /* 16K - 1 - ETH_HLEN - VLAN - CRC... */
+#define SafeMtu 0x1c20 /* ... actually life sucks beyond ~7k */
+#define InterFrameGap 0x03 /* 3 means InterFrameGap = the shortest one */
+
+#define R8169_REGS_SIZE 256
+#define R8169_NAPI_WEIGHT 64
+#define NUM_TX_DESC 64 /* Number of Tx descriptor registers */
+#define NUM_RX_DESC 256 /* Number of Rx descriptor registers */
+#define RX_BUF_SIZE 1536 /* Rx Buffer size */
+#define R8169_TX_RING_BYTES (NUM_TX_DESC * sizeof(struct TxDesc))
+#define R8169_RX_RING_BYTES (NUM_RX_DESC * sizeof(struct RxDesc))
+
+#define RTL8169_TX_TIMEOUT (6*HZ)
+#define RTL8169_PHY_TIMEOUT (10*HZ)
+
+/* write/read MMIO register */
+#define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg))
+#define RTL_W16(reg, val16) writew ((val16), ioaddr + (reg))
+#define RTL_W32(reg, val32) writel ((val32), ioaddr + (reg))
+#define RTL_R8(reg) readb (ioaddr + (reg))
+#define RTL_R16(reg) readw (ioaddr + (reg))
+#define RTL_R32(reg) ((unsigned long) readl (ioaddr + (reg)))
+
+enum mac_version {
+ RTL_GIGA_MAC_VER_01 = 0x01, // 8169
+ RTL_GIGA_MAC_VER_02 = 0x02, // 8169S
+ RTL_GIGA_MAC_VER_03 = 0x03, // 8110S
+ RTL_GIGA_MAC_VER_04 = 0x04, // 8169SB
+ RTL_GIGA_MAC_VER_05 = 0x05, // 8110SCd
+ RTL_GIGA_MAC_VER_06 = 0x06, // 8110SCe
+ RTL_GIGA_MAC_VER_11 = 0x0b, // 8168Bb
+ RTL_GIGA_MAC_VER_12 = 0x0c, // 8168Be
+ RTL_GIGA_MAC_VER_13 = 0x0d, // 8101Eb
+ RTL_GIGA_MAC_VER_14 = 0x0e, // 8101 ?
+ RTL_GIGA_MAC_VER_15 = 0x0f, // 8101 ?
+ RTL_GIGA_MAC_VER_16 = 0x11, // 8101Ec
+ RTL_GIGA_MAC_VER_17 = 0x10, // 8168Bf
+ RTL_GIGA_MAC_VER_18 = 0x12, // 8168CP
+ RTL_GIGA_MAC_VER_19 = 0x13, // 8168C
+ RTL_GIGA_MAC_VER_20 = 0x14 // 8168C
+};
+
+#define _R(NAME,MAC,MASK) \
+ { .name = NAME, .mac_version = MAC, .RxConfigMask = MASK }
+
+static const struct {
+ const char *name;
+ u8 mac_version;
+ u32 RxConfigMask; /* Clears the bits supported by this chip */
+} rtl_chip_info[] = {
+ _R("RTL8169", RTL_GIGA_MAC_VER_01, 0xff7e1880), // 8169
+ _R("RTL8169s", RTL_GIGA_MAC_VER_02, 0xff7e1880), // 8169S
+ _R("RTL8110s", RTL_GIGA_MAC_VER_03, 0xff7e1880), // 8110S
+ _R("RTL8169sb/8110sb", RTL_GIGA_MAC_VER_04, 0xff7e1880), // 8169SB
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_05, 0xff7e1880), // 8110SCd
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_06, 0xff7e1880), // 8110SCe
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_11, 0xff7e1880), // PCI-E
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_12, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_13, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_14, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_15, 0xff7e1880), // PCI-E 8139
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_17, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_16, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_18, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_19, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_20, 0xff7e1880) // PCI-E
+};
+#undef _R
+
+enum cfg_version {
+ RTL_CFG_0 = 0x00,
+ RTL_CFG_1,
+ RTL_CFG_2
+};
+
+static void rtl_hw_start_8169(struct net_device *);
+static void rtl_hw_start_8168(struct net_device *);
+static void rtl_hw_start_8101(struct net_device *);
+
+static struct pci_device_id rtl8169_pci_tbl[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8129), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8136), 0, 0, RTL_CFG_2 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8167), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8168), 0, 0, RTL_CFG_1 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_AT, 0xc107), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(0x16ec, 0x0116), 0, 0, RTL_CFG_0 },
+ { PCI_VENDOR_ID_LINKSYS, 0x1032,
+ PCI_ANY_ID, 0x0024, 0, 0, RTL_CFG_0 },
+ { 0x0001, 0x8168,
+ PCI_ANY_ID, 0x2410, 0, 0, RTL_CFG_2 },
+ {0,},
+};
+
+MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
+
+static int rx_copybreak = 200;
+static int use_dac;
+static struct {
+ u32 msg_enable;
+} debug = { -1 };
+
+enum rtl_registers {
+ MAC0 = 0, /* Ethernet hardware address. */
+ MAC4 = 4,
+ MAR0 = 8, /* Multicast filter. */
+ CounterAddrLow = 0x10,
+ CounterAddrHigh = 0x14,
+ TxDescStartAddrLow = 0x20,
+ TxDescStartAddrHigh = 0x24,
+ TxHDescStartAddrLow = 0x28,
+ TxHDescStartAddrHigh = 0x2c,
+ FLASH = 0x30,
+ ERSR = 0x36,
+ ChipCmd = 0x37,
+ TxPoll = 0x38,
+ IntrMask = 0x3c,
+ IntrStatus = 0x3e,
+ TxConfig = 0x40,
+ RxConfig = 0x44,
+ RxMissed = 0x4c,
+ Cfg9346 = 0x50,
+ Config0 = 0x51,
+ Config1 = 0x52,
+ Config2 = 0x53,
+ Config3 = 0x54,
+ Config4 = 0x55,
+ Config5 = 0x56,
+ MultiIntr = 0x5c,
+ PHYAR = 0x60,
+ TBICSR = 0x64,
+ TBI_ANAR = 0x68,
+ TBI_LPAR = 0x6a,
+ PHYstatus = 0x6c,
+ RxMaxSize = 0xda,
+ CPlusCmd = 0xe0,
+ IntrMitigate = 0xe2,
+ RxDescAddrLow = 0xe4,
+ RxDescAddrHigh = 0xe8,
+ EarlyTxThres = 0xec,
+ FuncEvent = 0xf0,
+ FuncEventMask = 0xf4,
+ FuncPresetState = 0xf8,
+ FuncForceEvent = 0xfc,
+};
+
+enum rtl_register_content {
+ /* InterruptStatusBits */
+ SYSErr = 0x8000,
+ PCSTimeout = 0x4000,
+ SWInt = 0x0100,
+ TxDescUnavail = 0x0080,
+ RxFIFOOver = 0x0040,
+ LinkChg = 0x0020,
+ RxOverflow = 0x0010,
+ TxErr = 0x0008,
+ TxOK = 0x0004,
+ RxErr = 0x0002,
+ RxOK = 0x0001,
+
+ /* RxStatusDesc */
+ RxFOVF = (1 << 23),
+ RxRWT = (1 << 22),
+ RxRES = (1 << 21),
+ RxRUNT = (1 << 20),
+ RxCRC = (1 << 19),
+
+ /* ChipCmdBits */
+ CmdReset = 0x10,
+ CmdRxEnb = 0x08,
+ CmdTxEnb = 0x04,
+ RxBufEmpty = 0x01,
+
+ /* TXPoll register p.5 */
+ HPQ = 0x80, /* Poll cmd on the high prio queue */
+ NPQ = 0x40, /* Poll cmd on the low prio queue */
+ FSWInt = 0x01, /* Forced software interrupt */
+
+ /* Cfg9346Bits */
+ Cfg9346_Lock = 0x00,
+ Cfg9346_Unlock = 0xc0,
+
+ /* rx_mode_bits */
+ AcceptErr = 0x20,
+ AcceptRunt = 0x10,
+ AcceptBroadcast = 0x08,
+ AcceptMulticast = 0x04,
+ AcceptMyPhys = 0x02,
+ AcceptAllPhys = 0x01,
+
+ /* RxConfigBits */
+ RxCfgFIFOShift = 13,
+ RxCfgDMAShift = 8,
+
+ /* TxConfigBits */
+ TxInterFrameGapShift = 24,
+ TxDMAShift = 8, /* DMA burst value (0-7) is shift this many bits */
+
+ /* Config1 register p.24 */
+ MSIEnable = (1 << 5), /* Enable Message Signaled Interrupt */
+ PMEnable = (1 << 0), /* Power Management Enable */
+
+ /* Config2 register p. 25 */
+ PCI_Clock_66MHz = 0x01,
+ PCI_Clock_33MHz = 0x00,
+
+ /* Config3 register p.25 */
+ MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */
+ LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */
+
+ /* Config5 register p.27 */
+ BWF = (1 << 6), /* Accept Broadcast wakeup frame */
+ MWF = (1 << 5), /* Accept Multicast wakeup frame */
+ UWF = (1 << 4), /* Accept Unicast wakeup frame */
+ LanWake = (1 << 1), /* LanWake enable/disable */
+ PMEStatus = (1 << 0), /* PME status can be reset by PCI RST# */
+
+ /* TBICSR p.28 */
+ TBIReset = 0x80000000,
+ TBILoopback = 0x40000000,
+ TBINwEnable = 0x20000000,
+ TBINwRestart = 0x10000000,
+ TBILinkOk = 0x02000000,
+ TBINwComplete = 0x01000000,
+
+ /* CPlusCmd p.31 */
+ PktCntrDisable = (1 << 7), // 8168
+ RxVlan = (1 << 6),
+ RxChkSum = (1 << 5),
+ PCIDAC = (1 << 4),
+ PCIMulRW = (1 << 3),
+ INTT_0 = 0x0000, // 8168
+ INTT_1 = 0x0001, // 8168
+ INTT_2 = 0x0002, // 8168
+ INTT_3 = 0x0003, // 8168
+
+ /* rtl8169_PHYstatus */
+ TBI_Enable = 0x80,
+ TxFlowCtrl = 0x40,
+ RxFlowCtrl = 0x20,
+ _1000bpsF = 0x10,
+ _100bps = 0x08,
+ _10bps = 0x04,
+ LinkStatus = 0x02,
+ FullDup = 0x01,
+
+ /* _TBICSRBit */
+ TBILinkOK = 0x02000000,
+
+ /* DumpCounterCommand */
+ CounterDump = 0x8,
+};
+
+enum desc_status_bit {
+ DescOwn = (1 << 31), /* Descriptor is owned by NIC */
+ RingEnd = (1 << 30), /* End of descriptor ring */
+ FirstFrag = (1 << 29), /* First segment of a packet */
+ LastFrag = (1 << 28), /* Final segment of a packet */
+
+ /* Tx private */
+ LargeSend = (1 << 27), /* TCP Large Send Offload (TSO) */
+ MSSShift = 16, /* MSS value position */
+ MSSMask = 0xfff, /* MSS value + LargeSend bit: 12 bits */
+ IPCS = (1 << 18), /* Calculate IP checksum */
+ UDPCS = (1 << 17), /* Calculate UDP/IP checksum */
+ TCPCS = (1 << 16), /* Calculate TCP/IP checksum */
+ TxVlanTag = (1 << 17), /* Add VLAN tag */
+
+ /* Rx private */
+ PID1 = (1 << 18), /* Protocol ID bit 1/2 */
+ PID0 = (1 << 17), /* Protocol ID bit 2/2 */
+
+#define RxProtoUDP (PID1)
+#define RxProtoTCP (PID0)
+#define RxProtoIP (PID1 | PID0)
+#define RxProtoMask RxProtoIP
+
+ IPFail = (1 << 16), /* IP checksum failed */
+ UDPFail = (1 << 15), /* UDP/IP checksum failed */
+ TCPFail = (1 << 14), /* TCP/IP checksum failed */
+ RxVlanTag = (1 << 16), /* VLAN tag available */
+};
+
+#define RsvdMask 0x3fffc000
+
+struct TxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct RxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct ring_info {
+ struct sk_buff *skb;
+ u32 len;
+ u8 __pad[sizeof(void *) - sizeof(u32)];
+};
+
+enum features {
+ RTL_FEATURE_WOL = (1 << 0),
+ RTL_FEATURE_MSI = (1 << 1),
+};
+
+struct rtl8169_private {
+ void __iomem *mmio_addr; /* memory map physical address */
+ struct pci_dev *pci_dev; /* Index of PCI device */
+ struct net_device *dev;
+#ifdef CONFIG_R8169_NAPI
+ struct napi_struct napi;
+#endif
+ spinlock_t lock; /* spin lock flag */
+ u32 msg_enable;
+ int chipset;
+ int mac_version;
+ u32 cur_rx; /* Index into the Rx descriptor buffer of next Rx pkt. */
+	u32 cur_tx; /* Index into the Tx descriptor buffer of next Tx pkt. */
+ u32 dirty_rx;
+ u32 dirty_tx;
+ struct TxDesc *TxDescArray; /* 256-aligned Tx descriptor ring */
+ struct RxDesc *RxDescArray; /* 256-aligned Rx descriptor ring */
+ dma_addr_t TxPhyAddr;
+ dma_addr_t RxPhyAddr;
+ struct sk_buff *Rx_skbuff[NUM_RX_DESC]; /* Rx data buffers */
+ struct ring_info tx_skb[NUM_TX_DESC]; /* Tx data buffers */
+ unsigned align;
+ unsigned rx_buf_sz;
+ struct timer_list timer;
+ u16 cp_cmd;
+ u16 intr_event;
+ u16 napi_event;
+ u16 intr_mask;
+ int phy_auto_nego_reg;
+ int phy_1000_ctrl_reg;
+#ifdef CONFIG_R8169_VLAN
+ struct vlan_group *vlgrp;
+#endif
+ int (*set_speed)(struct net_device *, u8 autoneg, u16 speed, u8 duplex);
+ void (*get_settings)(struct net_device *, struct ethtool_cmd *);
+ void (*phy_reset_enable)(void __iomem *);
+ void (*hw_start)(struct net_device *);
+ unsigned int (*phy_reset_pending)(void __iomem *);
+ unsigned int (*link_ok)(void __iomem *);
+ struct delayed_work task;
+ unsigned features;
+};
+
+MODULE_AUTHOR("Realtek and the Linux r8169 crew <netdev@vger.kernel.org>");
+MODULE_DESCRIPTION("RealTek RTL-8169 Gigabit Ethernet driver");
+module_param(rx_copybreak, int, 0);
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
+module_param(use_dac, int, 0);
+MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
+module_param_named(debug, debug.msg_enable, int, 0);
+MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(RTL8169_VERSION);
+
+static int rtl8169_open(struct net_device *dev);
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev);
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance);
+static int rtl8169_init_ring(struct net_device *dev);
+static void rtl_hw_start(struct net_device *dev);
+static int rtl8169_close(struct net_device *dev);
+static void rtl_set_rx_mode(struct net_device *dev);
+static void rtl8169_tx_timeout(struct net_device *dev);
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev);
+static int rtl8169_rx_interrupt(struct net_device *, struct rtl8169_private *,
+ void __iomem *, u32 budget);
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu);
+static void rtl8169_down(struct net_device *dev);
+static void rtl8169_rx_clear(struct rtl8169_private *tp);
+
+#ifdef CONFIG_R8169_NAPI
+static int rtl8169_poll(struct napi_struct *napi, int budget);
+#endif
+
+static const unsigned int rtl8169_rx_config =
+ (RX_FIFO_THRESH << RxCfgFIFOShift) | (RX_DMA_BURST << RxCfgDMAShift);
+
+static void mdio_write(void __iomem *ioaddr, int reg_addr, int value)
+{
+ int i;
+
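+	/* PHYAR layout: bit 31 = write command, bits 20..16 = register
+	 * address, bits 15..0 = data. The chip clears bit 31 when done. */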
+ RTL_W32(PHYAR, 0x80000000 | (reg_addr & 0x1f) << 16 | (value & 0xffff));
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed writing to the specified
+ * MII register.
+ */
+ if (!(RTL_R32(PHYAR) & 0x80000000))
+ break;
+ udelay(25);
+ }
+}
+
+static int mdio_read(void __iomem *ioaddr, int reg_addr)
+{
+ int i, value = -1;
+
+ RTL_W32(PHYAR, 0x0 | (reg_addr & 0x1f) << 16);
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed retrieving data from
+ * the specified MII register.
+ */
+ if (RTL_R32(PHYAR) & 0x80000000) {
+ value = RTL_R32(PHYAR) & 0xffff;
+ break;
+ }
+ udelay(25);
+ }
+ return value;
+}
+
+static void rtl8169_irq_mask_and_ack(void __iomem *ioaddr)
+{
+ RTL_W16(IntrMask, 0x0000);
+
+ RTL_W16(IntrStatus, 0xffff);
+}
+
+static void rtl8169_asic_down(void __iomem *ioaddr)
+{
+ RTL_W8(ChipCmd, 0x00);
+ rtl8169_irq_mask_and_ack(ioaddr);
+ RTL_R16(CPlusCmd);
+}
+
+static unsigned int rtl8169_tbi_reset_pending(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBIReset;
+}
+
+static unsigned int rtl8169_xmii_reset_pending(void __iomem *ioaddr)
+{
+ return mdio_read(ioaddr, MII_BMCR) & BMCR_RESET;
+}
+
+static unsigned int rtl8169_tbi_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBILinkOk;
+}
+
+static unsigned int rtl8169_xmii_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R8(PHYstatus) & LinkStatus;
+}
+
+static void rtl8169_tbi_reset_enable(void __iomem *ioaddr)
+{
+ RTL_W32(TBICSR, RTL_R32(TBICSR) | TBIReset);
+}
+
+static void rtl8169_xmii_reset_enable(void __iomem *ioaddr)
+{
+ unsigned int val;
+
+ val = mdio_read(ioaddr, MII_BMCR) | BMCR_RESET;
+ mdio_write(ioaddr, MII_BMCR, val & 0xffff);
+}
+
+static void rtl8169_check_link_status(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ if (tp->link_ok(ioaddr)) {
+ netif_carrier_on(dev);
+ if (netif_msg_ifup(tp))
+ printk(KERN_INFO PFX "%s: link up\n", dev->name);
+ } else {
+ if (netif_msg_ifdown(tp))
+ printk(KERN_INFO PFX "%s: link down\n", dev->name);
+ netif_carrier_off(dev);
+ }
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static void rtl8169_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u8 options;
+
+ wol->wolopts = 0;
+
+#define WAKE_ANY (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_BCAST | WAKE_MCAST)
+ wol->supported = WAKE_ANY;
+
+ spin_lock_irq(&tp->lock);
+
+ options = RTL_R8(Config1);
+ if (!(options & PMEnable))
+ goto out_unlock;
+
+ options = RTL_R8(Config3);
+ if (options & LinkUp)
+ wol->wolopts |= WAKE_PHY;
+ if (options & MagicPacket)
+ wol->wolopts |= WAKE_MAGIC;
+
+ options = RTL_R8(Config5);
+ if (options & UWF)
+ wol->wolopts |= WAKE_UCAST;
+ if (options & BWF)
+ wol->wolopts |= WAKE_BCAST;
+ if (options & MWF)
+ wol->wolopts |= WAKE_MCAST;
+
+out_unlock:
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl8169_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+ static struct {
+ u32 opt;
+ u16 reg;
+ u8 mask;
+ } cfg[] = {
+ { WAKE_ANY, Config1, PMEnable },
+ { WAKE_PHY, Config3, LinkUp },
+ { WAKE_MAGIC, Config3, MagicPacket },
+ { WAKE_UCAST, Config5, UWF },
+ { WAKE_BCAST, Config5, BWF },
+ { WAKE_MCAST, Config5, MWF },
+ { WAKE_ANY, Config5, LanWake }
+ };
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
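+	/* Set or clear each wake-up configuration bit according to the
+	 * requested ethtool WoL options. */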
+ for (i = 0; i < ARRAY_SIZE(cfg); i++) {
+ u8 options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
+ if (wol->wolopts & cfg[i].opt)
+ options |= cfg[i].mask;
+ RTL_W8(cfg[i].reg, options);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if (wol->wolopts)
+ tp->features |= RTL_FEATURE_WOL;
+ else
+ tp->features &= ~RTL_FEATURE_WOL;
+
+ spin_unlock_irq(&tp->lock);
+
+ return 0;
+}
+
+static void rtl8169_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ strcpy(info->driver, MODULENAME);
+ strcpy(info->version, RTL8169_VERSION);
+ strcpy(info->bus_info, pci_name(tp->pci_dev));
+}
+
+static int rtl8169_get_regs_len(struct net_device *dev)
+{
+ return R8169_REGS_SIZE;
+}
+
+static int rtl8169_set_speed_tbi(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int ret = 0;
+ u32 reg;
+
+ reg = RTL_R32(TBICSR);
+ if ((autoneg == AUTONEG_DISABLE) && (speed == SPEED_1000) &&
+ (duplex == DUPLEX_FULL)) {
+ RTL_W32(TBICSR, reg & ~(TBINwEnable | TBINwRestart));
+ } else if (autoneg == AUTONEG_ENABLE)
+ RTL_W32(TBICSR, reg | TBINwEnable | TBINwRestart);
+ else {
+ if (netif_msg_link(tp)) {
+ printk(KERN_WARNING "%s: "
+ "incorrect speed setting refused in TBI mode\n",
+ dev->name);
+ }
+ ret = -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+static int rtl8169_set_speed_xmii(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int auto_nego, giga_ctrl;
+
+ auto_nego = mdio_read(ioaddr, MII_ADVERTISE);
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl = mdio_read(ioaddr, MII_CTRL1000);
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+
+ if (autoneg == AUTONEG_ENABLE) {
+ auto_nego |= (ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+ } else {
+ if (speed == SPEED_10)
+ auto_nego |= ADVERTISE_10HALF | ADVERTISE_10FULL;
+ else if (speed == SPEED_100)
+ auto_nego |= ADVERTISE_100HALF | ADVERTISE_100FULL;
+ else if (speed == SPEED_1000)
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+
+ if (duplex == DUPLEX_HALF)
+ auto_nego &= ~(ADVERTISE_10FULL | ADVERTISE_100FULL);
+
+ if (duplex == DUPLEX_FULL)
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_100HALF);
+
+ /* This tweak comes straight from Realtek's driver. */
+ if ((speed == SPEED_100) && (duplex == DUPLEX_HALF) &&
+ ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16))) {
+ auto_nego = ADVERTISE_100HALF | ADVERTISE_CSMA;
+ }
+ }
+
+ /* The 8100e/8101e do Fast Ethernet only. */
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_15) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ if ((giga_ctrl & (ADVERTISE_1000FULL | ADVERTISE_1000HALF)) &&
+ netif_msg_link(tp)) {
+ printk(KERN_INFO "%s: PHY does not support 1000Mbps.\n",
+ dev->name);
+ }
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+ }
+
+ auto_nego |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_12) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_17)) {
+ /* Vendor specific (0x1f) and reserved (0x0e) MII registers. */
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_write(ioaddr, 0x0e, 0x0000);
+ }
+
+ tp->phy_auto_nego_reg = auto_nego;
+ tp->phy_1000_ctrl_reg = giga_ctrl;
+
+ mdio_write(ioaddr, MII_ADVERTISE, auto_nego);
+ mdio_write(ioaddr, MII_CTRL1000, giga_ctrl);
+ mdio_write(ioaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
+ return 0;
+}
+
+static int rtl8169_set_speed(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret;
+
+ ret = tp->set_speed(dev, autoneg, speed, duplex);
+
+ if (netif_running(dev) && (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ mod_timer(&tp->timer, jiffies + RTL8169_PHY_TIMEOUT);
+
+ return ret;
+}
+
+static int rtl8169_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ ret = rtl8169_set_speed(dev, cmd->autoneg, cmd->speed, cmd->duplex);
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return ret;
+}
+
+static u32 rtl8169_get_rx_csum(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->cp_cmd & RxChkSum;
+}
+
+static int rtl8169_set_rx_csum(struct net_device *dev, u32 data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ if (data)
+ tp->cp_cmd |= RxChkSum;
+ else
+ tp->cp_cmd &= ~RxChkSum;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return 0;
+}
+
+#ifdef CONFIG_R8169_VLAN
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return (tp->vlgrp && vlan_tx_tag_present(skb)) ?
+ TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
+}
+
+static void rtl8169_vlan_rx_register(struct net_device *dev,
+ struct vlan_group *grp)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ tp->vlgrp = grp;
+ if (tp->vlgrp)
+ tp->cp_cmd |= RxVlan;
+ else
+ tp->cp_cmd &= ~RxVlan;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ u32 opts2 = le32_to_cpu(desc->opts2);
+ int ret;
+
+ if (tp->vlgrp && (opts2 & RxVlanTag)) {
+ rtl8169_rx_hwaccel_skb(skb, tp->vlgrp, swab16(opts2 & 0xffff));
+ ret = 0;
+ } else
+ ret = -1;
+ desc->opts2 = 0;
+ return ret;
+}
+
+#else /* !CONFIG_R8169_VLAN */
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return 0;
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ return -1;
+}
+
+#endif
+
+static void rtl8169_gset_tbi(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 status;
+
+ cmd->supported =
+ SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_FIBRE;
+ cmd->port = PORT_FIBRE;
+ cmd->transceiver = XCVR_INTERNAL;
+
+ status = RTL_R32(TBICSR);
+ cmd->advertising = (status & TBINwEnable) ? ADVERTISED_Autoneg : 0;
+ cmd->autoneg = !!(status & TBINwEnable);
+
+ cmd->speed = SPEED_1000;
+ cmd->duplex = DUPLEX_FULL; /* Always set */
+}
+
+static void rtl8169_gset_xmii(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u8 status;
+
+ cmd->supported = SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_100baseT_Half |
+ SUPPORTED_100baseT_Full |
+ SUPPORTED_1000baseT_Full |
+ SUPPORTED_Autoneg |
+ SUPPORTED_TP;
+
+ cmd->autoneg = 1;
+ cmd->advertising = ADVERTISED_TP | ADVERTISED_Autoneg;
+
+ if (tp->phy_auto_nego_reg & ADVERTISE_10HALF)
+ cmd->advertising |= ADVERTISED_10baseT_Half;
+ if (tp->phy_auto_nego_reg & ADVERTISE_10FULL)
+ cmd->advertising |= ADVERTISED_10baseT_Full;
+ if (tp->phy_auto_nego_reg & ADVERTISE_100HALF)
+ cmd->advertising |= ADVERTISED_100baseT_Half;
+ if (tp->phy_auto_nego_reg & ADVERTISE_100FULL)
+ cmd->advertising |= ADVERTISED_100baseT_Full;
+ if (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL)
+ cmd->advertising |= ADVERTISED_1000baseT_Full;
+
+ status = RTL_R8(PHYstatus);
+
+ if (status & _1000bpsF)
+ cmd->speed = SPEED_1000;
+ else if (status & _100bps)
+ cmd->speed = SPEED_100;
+ else if (status & _10bps)
+ cmd->speed = SPEED_10;
+
+ if (status & TxFlowCtrl)
+ cmd->advertising |= ADVERTISED_Asym_Pause;
+ if (status & RxFlowCtrl)
+ cmd->advertising |= ADVERTISED_Pause;
+
+ cmd->duplex = ((status & _1000bpsF) || (status & FullDup)) ?
+ DUPLEX_FULL : DUPLEX_HALF;
+}
+
+static int rtl8169_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ tp->get_settings(dev, cmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+ return 0;
+}
+
+static void rtl8169_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+
+ if (regs->len > R8169_REGS_SIZE)
+ regs->len = R8169_REGS_SIZE;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ memcpy_fromio(p, tp->mmio_addr, regs->len);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static u32 rtl8169_get_msglevel(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->msg_enable;
+}
+
+static void rtl8169_set_msglevel(struct net_device *dev, u32 value)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ tp->msg_enable = value;
+}
+
+static const char rtl8169_gstrings[][ETH_GSTRING_LEN] = {
+ "tx_packets",
+ "rx_packets",
+ "tx_errors",
+ "rx_errors",
+ "rx_missed",
+ "align_errors",
+ "tx_single_collisions",
+ "tx_multi_collisions",
+ "unicast",
+ "broadcast",
+ "multicast",
+ "tx_aborted",
+ "tx_underrun",
+};
+
+struct rtl8169_counters {
+ __le64 tx_packets;
+ __le64 rx_packets;
+ __le64 tx_errors;
+ __le32 rx_errors;
+ __le16 rx_missed;
+ __le16 align_errors;
+ __le32 tx_one_collision;
+ __le32 tx_multi_collision;
+ __le64 rx_unicast;
+ __le64 rx_broadcast;
+ __le32 rx_multicast;
+ __le16 tx_aborted;
+ __le16 tx_underun;
+};
+
+static int rtl8169_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(rtl8169_gstrings);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void rtl8169_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct rtl8169_counters *counters;
+ dma_addr_t paddr;
+ u32 cmd;
+
+ ASSERT_RTNL();
+
+ counters = pci_alloc_consistent(tp->pci_dev, sizeof(*counters), &paddr);
+ if (!counters)
+ return;
+
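+	/* Hand the chip the DMA address of the buffer, trigger a hardware
+	 * counter dump and poll until the CounterDump bit clears. */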
+ RTL_W32(CounterAddrHigh, (u64)paddr >> 32);
+ cmd = (u64)paddr & DMA_32BIT_MASK;
+ RTL_W32(CounterAddrLow, cmd);
+ RTL_W32(CounterAddrLow, cmd | CounterDump);
+
+ while (RTL_R32(CounterAddrLow) & CounterDump) {
+ if (msleep_interruptible(1))
+ break;
+ }
+
+ RTL_W32(CounterAddrLow, 0);
+ RTL_W32(CounterAddrHigh, 0);
+
+ data[0] = le64_to_cpu(counters->tx_packets);
+ data[1] = le64_to_cpu(counters->rx_packets);
+ data[2] = le64_to_cpu(counters->tx_errors);
+ data[3] = le32_to_cpu(counters->rx_errors);
+ data[4] = le16_to_cpu(counters->rx_missed);
+ data[5] = le16_to_cpu(counters->align_errors);
+ data[6] = le32_to_cpu(counters->tx_one_collision);
+ data[7] = le32_to_cpu(counters->tx_multi_collision);
+ data[8] = le64_to_cpu(counters->rx_unicast);
+ data[9] = le64_to_cpu(counters->rx_broadcast);
+ data[10] = le32_to_cpu(counters->rx_multicast);
+ data[11] = le16_to_cpu(counters->tx_aborted);
+ data[12] = le16_to_cpu(counters->tx_underun);
+
+ pci_free_consistent(tp->pci_dev, sizeof(*counters), counters, paddr);
+}
+
+static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+ switch(stringset) {
+ case ETH_SS_STATS:
+ memcpy(data, *rtl8169_gstrings, sizeof(rtl8169_gstrings));
+ break;
+ }
+}
+
+static const struct ethtool_ops rtl8169_ethtool_ops = {
+ .get_drvinfo = rtl8169_get_drvinfo,
+ .get_regs_len = rtl8169_get_regs_len,
+ .get_link = ethtool_op_get_link,
+ .get_settings = rtl8169_get_settings,
+ .set_settings = rtl8169_set_settings,
+ .get_msglevel = rtl8169_get_msglevel,
+ .set_msglevel = rtl8169_set_msglevel,
+ .get_rx_csum = rtl8169_get_rx_csum,
+ .set_rx_csum = rtl8169_set_rx_csum,
+ .set_tx_csum = ethtool_op_set_tx_csum,
+ .set_sg = ethtool_op_set_sg,
+ .set_tso = ethtool_op_set_tso,
+ .get_regs = rtl8169_get_regs,
+ .get_wol = rtl8169_get_wol,
+ .set_wol = rtl8169_set_wol,
+ .get_strings = rtl8169_get_strings,
+ .get_sset_count = rtl8169_get_sset_count,
+ .get_ethtool_stats = rtl8169_get_ethtool_stats,
+};
+
+static void rtl8169_write_gmii_reg_bit(void __iomem *ioaddr, int reg,
+ int bitnum, int bitval)
+{
+ int val;
+
+ val = mdio_read(ioaddr, reg);
+ val = (bitval == 1) ?
+ val | (bitval << bitnum) : val & ~(0x0001 << bitnum);
+ mdio_write(ioaddr, reg, val & 0xffff);
+}
+
+static void rtl8169_get_mac_version(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * The driver currently handles the 8168Bf and the 8168Be identically
+ * but they can be identified more specifically through the test below
+ * if needed:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x500000 ? 8168Bf : 8168Be
+ *
+ * Same thing for the 8101Eb and the 8101Ec:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x200000 ? 8101Eb : 8101Ec
+ */
+ const struct {
+ u32 mask;
+ u32 val;
+ int mac_version;
+ } mac_info[] = {
+ /* 8168B family. */
+ { 0x7c800000, 0x3c800000, RTL_GIGA_MAC_VER_18 },
+ { 0x7cf00000, 0x3c000000, RTL_GIGA_MAC_VER_19 },
+ { 0x7cf00000, 0x3c200000, RTL_GIGA_MAC_VER_20 },
+ { 0x7c800000, 0x3c000000, RTL_GIGA_MAC_VER_20 },
+
+ /* 8168B family. */
+ { 0x7cf00000, 0x38000000, RTL_GIGA_MAC_VER_12 },
+ { 0x7cf00000, 0x38500000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x38000000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x30000000, RTL_GIGA_MAC_VER_11 },
+
+ /* 8101 family. */
+ { 0x7cf00000, 0x34000000, RTL_GIGA_MAC_VER_13 },
+ { 0x7cf00000, 0x34200000, RTL_GIGA_MAC_VER_16 },
+ { 0x7c800000, 0x34000000, RTL_GIGA_MAC_VER_16 },
+ /* FIXME: where did these entries come from ? -- FR */
+ { 0xfc800000, 0x38800000, RTL_GIGA_MAC_VER_15 },
+ { 0xfc800000, 0x30800000, RTL_GIGA_MAC_VER_14 },
+
+ /* 8110 family. */
+ { 0xfc800000, 0x98000000, RTL_GIGA_MAC_VER_06 },
+ { 0xfc800000, 0x18000000, RTL_GIGA_MAC_VER_05 },
+ { 0xfc800000, 0x10000000, RTL_GIGA_MAC_VER_04 },
+ { 0xfc800000, 0x04000000, RTL_GIGA_MAC_VER_03 },
+ { 0xfc800000, 0x00800000, RTL_GIGA_MAC_VER_02 },
+ { 0xfc800000, 0x00000000, RTL_GIGA_MAC_VER_01 },
+
+ { 0x00000000, 0x00000000, RTL_GIGA_MAC_VER_01 } /* Catch-all */
+ }, *p = mac_info;
+ u32 reg;
+
+ reg = RTL_R32(TxConfig);
+ while ((reg & p->mask) != p->val)
+ p++;
+ tp->mac_version = p->mac_version;
+
+ if (p->mask == 0x00000000) {
+ struct pci_dev *pdev = tp->pci_dev;
+
+ dev_info(&pdev->dev, "unknown MAC (%08x)\n", reg);
+ }
+}
+
+static void rtl8169_print_mac_version(struct rtl8169_private *tp)
+{
+ dprintk("mac_version = 0x%02x\n", tp->mac_version);
+}
+
+struct phy_reg {
+ u16 reg;
+ u16 val;
+};
+
+static void rtl_phy_write(void __iomem *ioaddr, struct phy_reg *regs, int len)
+{
+ while (len-- > 0) {
+ mdio_write(ioaddr, regs->reg, regs->val);
+ regs++;
+ }
+}
+
+static void rtl8169s_hw_phy_config(void __iomem *ioaddr)
+{
+ struct {
+ u16 regs[5]; /* Beware of bit-sign propagation */
+ } phy_magic[5] = { {
+ { 0x0000, //w 4 15 12 0
+ 0x00a1, //w 3 15 0 00a1
+ 0x0008, //w 2 15 0 0008
+ 0x1020, //w 1 15 0 1020
+ 0x1000 } },{ //w 0 15 0 1000
+ { 0x7000, //w 4 15 12 7
+ 0xff41, //w 3 15 0 ff41
+ 0xde60, //w 2 15 0 de60
+ 0x0140, //w 1 15 0 0140
+ 0x0077 } },{ //w 0 15 0 0077
+ { 0xa000, //w 4 15 12 a
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xfa00 } },{ //w 0 15 0 fa00
+ { 0xb000, //w 4 15 12 b
+ 0xff41, //w 3 15 0 ff41
+ 0xde20, //w 2 15 0 de20
+ 0x0140, //w 1 15 0 0140
+ 0x00bb } },{ //w 0 15 0 00bb
+ { 0xf000, //w 4 15 12 f
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xbf00 } //w 0 15 0 bf00
+ }
+ }, *p = phy_magic;
+ unsigned int i;
+
+ mdio_write(ioaddr, 0x1f, 0x0001); //w 31 2 0 1
+ mdio_write(ioaddr, 0x15, 0x1000); //w 21 15 0 1000
+ mdio_write(ioaddr, 0x18, 0x65c7); //w 24 15 0 65c7
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+
+ for (i = 0; i < ARRAY_SIZE(phy_magic); i++, p++) {
+ int val, pos = 4;
+
+ val = (mdio_read(ioaddr, pos) & 0x0fff) | (p->regs[0] & 0xffff);
+ mdio_write(ioaddr, pos, val);
+ while (--pos >= 0)
+ mdio_write(ioaddr, pos, p->regs[4 - pos] & 0xffff);
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 1); //w 4 11 11 1
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+ }
+ mdio_write(ioaddr, 0x1f, 0x0000); //w 31 2 0 0
+}
+
+static void rtl8169sb_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0002 },
+ { 0x01, 0x90d0 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cp_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0000 },
+ { 0x1d, 0x0f00 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x1ec8 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168c_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x1f, 0x0002 },
+ { 0x00, 0x88d4 },
+ { 0x01, 0x82b1 },
+ { 0x03, 0x7002 },
+ { 0x08, 0x9e30 },
+ { 0x09, 0x01f0 },
+ { 0x0a, 0x5500 },
+ { 0x0c, 0x00c8 },
+ { 0x1f, 0x0003 },
+ { 0x12, 0xc096 },
+ { 0x16, 0x000a },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cx_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0000 },
+ { 0x12, 0x2300 },
+ { 0x1f, 0x0003 },
+ { 0x16, 0x0f0a },
+ { 0x1f, 0x0000 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x7eb8 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl_hw_phy_config(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl8169_print_mac_version(tp);
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_01:
+ break;
+ case RTL_GIGA_MAC_VER_02:
+ case RTL_GIGA_MAC_VER_03:
+ rtl8169s_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_04:
+ rtl8169sb_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_18:
+ rtl8168cp_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_19:
+ rtl8168c_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_20:
+ rtl8168cx_hw_phy_config(ioaddr);
+ break;
+ default:
+ break;
+ }
+}
+
+static void rtl8169_phy_timer(unsigned long __opaque)
+{
+ struct net_device *dev = (struct net_device *)__opaque;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long timeout = RTL8169_PHY_TIMEOUT;
+
+ assert(tp->mac_version > RTL_GIGA_MAC_VER_01);
+
+ if (!(tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ return;
+
+ spin_lock_irq(&tp->lock);
+
+ if (tp->phy_reset_pending(ioaddr)) {
+ /*
+		 * A busy loop could burn quite a few cycles on a modern CPU.
+ * Let's delay the execution of the timer for a few ticks.
+ */
+ timeout = HZ/10;
+ goto out_mod_timer;
+ }
+
+ if (tp->link_ok(ioaddr))
+ goto out_unlock;
+
+ if (netif_msg_link(tp))
+ printk(KERN_WARNING "%s: PHY reset until link up\n", dev->name);
+
+ tp->phy_reset_enable(ioaddr);
+
+out_mod_timer:
+ mod_timer(timer, jiffies + timeout);
+out_unlock:
+ spin_unlock_irq(&tp->lock);
+}
+
+static inline void rtl8169_delete_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ del_timer_sync(timer);
+}
+
+static inline void rtl8169_request_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ mod_timer(timer, jiffies + RTL8169_PHY_TIMEOUT);
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/*
+ * Polling 'interrupt' - used by things like netconsole to send skbs
+ * without having to re-enable interrupts. It's not called while
+ * the interrupt routine is executing.
+ */
+static void rtl8169_netpoll(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ disable_irq(pdev->irq);
+ rtl8169_interrupt(pdev->irq, dev);
+ enable_irq(pdev->irq);
+}
+#endif
+
+static void rtl8169_release_board(struct pci_dev *pdev, struct net_device *dev,
+ void __iomem *ioaddr)
+{
+ iounmap(ioaddr);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ free_netdev(dev);
+}
+
+static void rtl8169_phy_reset(struct net_device *dev,
+ struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ tp->phy_reset_enable(ioaddr);
+ for (i = 0; i < 100; i++) {
+ if (!tp->phy_reset_pending(ioaddr))
+ return;
+ msleep(1);
+ }
+ if (netif_msg_link(tp))
+ printk(KERN_ERR "%s: PHY reset failed.\n", dev->name);
+}
+
+static void rtl8169_init_phy(struct net_device *dev, struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl_hw_phy_config(dev);
+
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+
+ pci_write_config_byte(tp->pci_dev, PCI_LATENCY_TIMER, 0x40);
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
+ pci_write_config_byte(tp->pci_dev, PCI_CACHE_LINE_SIZE, 0x08);
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_02) {
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+ dprintk("Set PHY Reg 0x0bh = 0x00h\n");
+ mdio_write(ioaddr, 0x0b, 0x0000); //w 0x0b 15 0 0
+ }
+
+ rtl8169_phy_reset(dev, tp);
+
+ /*
+	 * rtl8169_set_speed_xmii handles the Fast Ethernet-only 8101
+	 * correctly. Don't panic.
+ */
+ rtl8169_set_speed(dev, AUTONEG_ENABLE, SPEED_1000, DUPLEX_FULL);
+
+ if ((RTL_R8(PHYstatus) & TBI_Enable) && netif_msg_link(tp))
+ printk(KERN_INFO PFX "%s: TBI auto-negotiating\n", dev->name);
+}
+
+static void rtl_rar_set(struct rtl8169_private *tp, u8 *addr)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 high;
+ u32 low;
+
+ low = addr[0] | (addr[1] << 8) | (addr[2] << 16) | (addr[3] << 24);
+ high = addr[4] | (addr[5] << 8);
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W32(MAC0, low);
+ RTL_W32(MAC4, high);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl_set_mac_address(struct net_device *dev, void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+
+ rtl_rar_set(tp, dev->dev_addr);
+
+ return 0;
+}
+
+static int rtl8169_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct mii_ioctl_data *data = if_mii(ifr);
+
+ if (!netif_running(dev))
+ return -ENODEV;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ data->phy_id = 32; /* Internal PHY */
+ return 0;
+
+ case SIOCGMIIREG:
+ data->val_out = mdio_read(tp->mmio_addr, data->reg_num & 0x1f);
+ return 0;
+
+ case SIOCSMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ mdio_write(tp->mmio_addr, data->reg_num & 0x1f, data->val_in);
+ return 0;
+ }
+ return -EOPNOTSUPP;
+}
+
+static const struct rtl_cfg_info {
+ void (*hw_start)(struct net_device *);
+ unsigned int region;
+ unsigned int align;
+ u16 intr_event;
+ u16 napi_event;
+ unsigned msi;
+} rtl_cfg_infos [] = {
+ [RTL_CFG_0] = {
+ .hw_start = rtl_hw_start_8169,
+ .region = 1,
+ .align = 0,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .msi = 0
+ },
+ [RTL_CFG_1] = {
+ .hw_start = rtl_hw_start_8168,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ TxErr | TxOK | RxOK | RxErr,
+ .napi_event = TxErr | TxOK | RxOK | RxOverflow,
+ .msi = RTL_FEATURE_MSI
+ },
+ [RTL_CFG_2] = {
+ .hw_start = rtl_hw_start_8101,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow | PCSTimeout |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .msi = RTL_FEATURE_MSI
+ }
+};
+
+/* Cfg9346_Unlock assumed. */
+static unsigned rtl_try_msi(struct pci_dev *pdev, void __iomem *ioaddr,
+ const struct rtl_cfg_info *cfg)
+{
+ unsigned msi = 0;
+ u8 cfg2;
+
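+	/* Advertise MSI in Config2 only if the PCI core actually granted a
+	 * message-signaled interrupt; otherwise fall back to INTx. */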
+ cfg2 = RTL_R8(Config2) & ~MSIEnable;
+ if (cfg->msi) {
+ if (pci_enable_msi(pdev)) {
+ dev_info(&pdev->dev, "no MSI. Back to INTx.\n");
+ } else {
+ cfg2 |= MSIEnable;
+ msi = RTL_FEATURE_MSI;
+ }
+ }
+ RTL_W8(Config2, cfg2);
+ return msi;
+}
+
+static void rtl_disable_msi(struct pci_dev *pdev, struct rtl8169_private *tp)
+{
+ if (tp->features & RTL_FEATURE_MSI) {
+ pci_disable_msi(pdev);
+ tp->features &= ~RTL_FEATURE_MSI;
+ }
+}
+
+static int __devinit
+rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
+ const unsigned int region = cfg->region;
+ struct rtl8169_private *tp;
+ struct net_device *dev;
+ void __iomem *ioaddr;
+ unsigned int i;
+ int rc;
+
+ if (netif_msg_drv(&debug)) {
+ printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n",
+ MODULENAME, RTL8169_VERSION);
+ }
+
+ dev = alloc_etherdev(sizeof (*tp));
+ if (!dev) {
+ if (netif_msg_drv(&debug))
+ dev_err(&pdev->dev, "unable to alloc new ethernet\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ tp = netdev_priv(dev);
+ tp->dev = dev;
+ tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT);
+
+ /* enable device (incl. PCI PM wakeup and hotplug setup) */
+ rc = pci_enable_device(pdev);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "enable failure\n");
+ goto err_out_free_dev_1;
+ }
+
+ rc = pci_set_mwi(pdev);
+ if (rc < 0)
+ goto err_out_disable_2;
+
+ /* make sure PCI base addr 1 is MMIO */
+ if (!(pci_resource_flags(pdev, region) & IORESOURCE_MEM)) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "region #%d not an MMIO resource, aborting\n",
+ region);
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ /* check for weird/broken PCI region reporting */
+ if (pci_resource_len(pdev, region) < R8169_REGS_SIZE) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "Invalid PCI region size(s), aborting\n");
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ rc = pci_request_regions(pdev, MODULENAME);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "could not request regions.\n");
+ goto err_out_mwi_3;
+ }
+
+ tp->cp_cmd = PCIMulRW | RxChkSum;
+
+ if ((sizeof(dma_addr_t) > 4) &&
+ !pci_set_dma_mask(pdev, DMA_64BIT_MASK) && use_dac) {
+ tp->cp_cmd |= PCIDAC;
+ dev->features |= NETIF_F_HIGHDMA;
+ } else {
+ rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+ if (rc < 0) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "DMA configuration failed.\n");
+ }
+ goto err_out_free_res_4;
+ }
+ }
+
+ pci_set_master(pdev);
+
+ /* ioremap MMIO region */
+ ioaddr = ioremap(pci_resource_start(pdev, region), R8169_REGS_SIZE);
+ if (!ioaddr) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "cannot remap MMIO, aborting\n");
+ rc = -EIO;
+ goto err_out_free_res_4;
+ }
+
+ /* Unneeded ? Don't mess with Mrs. Murphy. */
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ /* Identify chip attached to board */
+ rtl8169_get_mac_version(tp, ioaddr);
+
+ rtl8169_print_mac_version(tp);
+
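+	/* Map the detected MAC version to an rtl_chip_info entry; the
+	 * catch-all in rtl8169_get_mac_version guarantees a match at
+	 * index 0 at the latest. */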
+ for (i = ARRAY_SIZE(rtl_chip_info) - 1; i >= 0; i--) {
+ if (tp->mac_version == rtl_chip_info[i].mac_version)
+ break;
+ }
+ if (i < 0) {
+ /* Unknown chip: assume array element #0, original RTL-8169 */
+ if (netif_msg_probe(tp)) {
+ dev_printk(KERN_DEBUG, &pdev->dev,
+ "unknown chip version, assuming %s\n",
+ rtl_chip_info[0].name);
+ }
+ i++;
+ }
+ tp->chipset = i;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W8(Config1, RTL_R8(Config1) | PMEnable);
+ RTL_W8(Config5, RTL_R8(Config5) & PMEStatus);
+ tp->features |= rtl_try_msi(pdev, ioaddr, cfg);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if ((tp->mac_version <= RTL_GIGA_MAC_VER_06) &&
+ (RTL_R8(PHYstatus) & TBI_Enable)) {
+ tp->set_speed = rtl8169_set_speed_tbi;
+ tp->get_settings = rtl8169_gset_tbi;
+ tp->phy_reset_enable = rtl8169_tbi_reset_enable;
+ tp->phy_reset_pending = rtl8169_tbi_reset_pending;
+ tp->link_ok = rtl8169_tbi_link_ok;
+
+ tp->phy_1000_ctrl_reg = ADVERTISE_1000FULL; /* Implied by TBI */
+ } else {
+ tp->set_speed = rtl8169_set_speed_xmii;
+ tp->get_settings = rtl8169_gset_xmii;
+ tp->phy_reset_enable = rtl8169_xmii_reset_enable;
+ tp->phy_reset_pending = rtl8169_xmii_reset_pending;
+ tp->link_ok = rtl8169_xmii_link_ok;
+
+ dev->do_ioctl = rtl8169_ioctl;
+ }
+
+ /* Get MAC address. FIXME: read EEPROM */
+ for (i = 0; i < MAC_ADDR_LEN; i++)
+ dev->dev_addr[i] = RTL_R8(MAC0 + i);
+ memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
+
+ dev->open = rtl8169_open;
+ dev->hard_start_xmit = rtl8169_start_xmit;
+ dev->get_stats = rtl8169_get_stats;
+ SET_ETHTOOL_OPS(dev, &rtl8169_ethtool_ops);
+ dev->stop = rtl8169_close;
+ dev->tx_timeout = rtl8169_tx_timeout;
+ dev->set_multicast_list = rtl_set_rx_mode;
+ dev->watchdog_timeo = RTL8169_TX_TIMEOUT;
+ dev->irq = pdev->irq;
+ dev->base_addr = (unsigned long) ioaddr;
+ dev->change_mtu = rtl8169_change_mtu;
+ dev->set_mac_address = rtl_set_mac_address;
+
+#ifdef CONFIG_R8169_NAPI
+ netif_napi_add(dev, &tp->napi, rtl8169_poll, R8169_NAPI_WEIGHT);
+#endif
+
+#ifdef CONFIG_R8169_VLAN
+ dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+ dev->vlan_rx_register = rtl8169_vlan_rx_register;
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ dev->poll_controller = rtl8169_netpoll;
+#endif
+
+ tp->intr_mask = 0xffff;
+ tp->pci_dev = pdev;
+ tp->mmio_addr = ioaddr;
+ tp->align = cfg->align;
+ tp->hw_start = cfg->hw_start;
+ tp->intr_event = cfg->intr_event;
+ tp->napi_event = cfg->napi_event;
+
+ init_timer(&tp->timer);
+ tp->timer.data = (unsigned long) dev;
+ tp->timer.function = rtl8169_phy_timer;
+
+ spin_lock_init(&tp->lock);
+
+ rc = register_netdev(dev);
+ if (rc < 0)
+ goto err_out_msi_5;
+
+ pci_set_drvdata(pdev, dev);
+
+ if (netif_msg_probe(tp)) {
+ u32 xid = RTL_R32(TxConfig) & 0x7cf0f8ff;
+
+ printk(KERN_INFO "%s: %s at 0x%lx, "
+ "%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x, "
+ "XID %08x IRQ %d\n",
+ dev->name,
+ rtl_chip_info[tp->chipset].name,
+ dev->base_addr,
+ dev->dev_addr[0], dev->dev_addr[1],
+ dev->dev_addr[2], dev->dev_addr[3],
+ dev->dev_addr[4], dev->dev_addr[5], xid, dev->irq);
+ }
+
+ rtl8169_init_phy(dev, tp);
+
+out:
+ return rc;
+
+err_out_msi_5:
+ rtl_disable_msi(pdev, tp);
+ iounmap(ioaddr);
+err_out_free_res_4:
+ pci_release_regions(pdev);
+err_out_mwi_3:
+ pci_clear_mwi(pdev);
+err_out_disable_2:
+ pci_disable_device(pdev);
+err_out_free_dev_1:
+ free_netdev(dev);
+ goto out;
+}
+
+static void __devexit rtl8169_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ flush_scheduled_work();
+
+ unregister_netdev(dev);
+ rtl_disable_msi(pdev, tp);
+ rtl8169_release_board(pdev, dev, tp->mmio_addr);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static void rtl8169_set_rxbufsize(struct rtl8169_private *tp,
+ struct net_device *dev)
+{
+ unsigned int mtu = dev->mtu;
+
+ tp->rx_buf_sz = (mtu > RX_BUF_SIZE) ? mtu + ETH_HLEN + 8 : RX_BUF_SIZE;
+}
+
+static int rtl8169_open(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ int retval = -ENOMEM;
+
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ /*
+	 * Rx and Tx descriptors need 256-byte alignment.
+ * pci_alloc_consistent provides more.
+ */
+ tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES,
+ &tp->TxPhyAddr);
+ if (!tp->TxDescArray)
+ goto out;
+
+ tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES,
+ &tp->RxPhyAddr);
+ if (!tp->RxDescArray)
+ goto err_free_tx_0;
+
+ retval = rtl8169_init_ring(dev);
+ if (retval < 0)
+ goto err_free_rx_1;
+
+ INIT_DELAYED_WORK(&tp->task, NULL);
+
+ smp_mb();
+
+ retval = request_irq(dev->irq, rtl8169_interrupt,
+ (tp->features & RTL_FEATURE_MSI) ? 0 : IRQF_SHARED,
+ dev->name, dev);
+ if (retval < 0)
+ goto err_release_ring_2;
+
+#ifdef CONFIG_R8169_NAPI
+ napi_enable(&tp->napi);
+#endif
+
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+out:
+ return retval;
+
+err_release_ring_2:
+ rtl8169_rx_clear(tp);
+err_free_rx_1:
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+err_free_tx_0:
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ goto out;
+}
+
+static void rtl8169_hw_reset(void __iomem *ioaddr)
+{
+ /* Disable interrupts */
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ /* Reset the chipset */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* PCI commit */
+ RTL_R8(ChipCmd);
+}
+
+static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 cfg = rtl8169_rx_config;
+
+ cfg |= (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+ RTL_W32(RxConfig, cfg);
+
+ /* Set DMA burst size and Interframe Gap Time */
+ RTL_W32(TxConfig, (TX_DMA_BURST << TxDMAShift) |
+ (InterFrameGap << TxInterFrameGapShift));
+}
+
+static void rtl_hw_start(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ tp->hw_start(dev);
+
+ netif_start_queue(dev);
+}
+
+
+static void rtl_set_rx_tx_desc_registers(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * Magic spell: some iop3xx ARM board needs the TxDescAddrHigh
+ * register to be written before TxDescAddrLow to work.
+ * Switching from MMIO to I/O access fixes the issue as well.
+ */
+ RTL_W32(TxDescStartAddrHigh, ((u64) tp->TxPhyAddr) >> 32);
+ RTL_W32(TxDescStartAddrLow, ((u64) tp->TxPhyAddr) & DMA_32BIT_MASK);
+ RTL_W32(RxDescAddrHigh, ((u64) tp->RxPhyAddr) >> 32);
+ RTL_W32(RxDescAddrLow, ((u64) tp->RxPhyAddr) & DMA_32BIT_MASK);
+}
+
+static u16 rtl_rw_cpluscmd(void __iomem *ioaddr)
+{
+ u16 cmd;
+
+ cmd = RTL_R16(CPlusCmd);
+ RTL_W16(CPlusCmd, cmd);
+ return cmd;
+}
+
+static void rtl_set_rx_max_size(void __iomem *ioaddr)
+{
+ /* Low hurts. Let's disable the filtering. */
+ RTL_W16(RxMaxSize, 16383);
+}
+
+static void rtl8169_set_magic_reg(void __iomem *ioaddr, unsigned mac_version)
+{
+ struct {
+ u32 mac_version;
+ u32 clk;
+ u32 val;
+ } cfg2_info [] = {
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_33MHz, 0x000fff00 }, // 8110SCd
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_66MHz, 0x000fffff },
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_33MHz, 0x00ffff00 }, // 8110SCe
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_66MHz, 0x00ffffff }
+ }, *p = cfg2_info;
+ unsigned int i;
+ u32 clk;
+
+ clk = RTL_R8(Config2) & PCI_Clock_66MHz;
+ for (i = 0; i < ARRAY_SIZE(cfg2_info); i++, p++) {
+ if ((p->mac_version == mac_version) && (p->clk == clk)) {
+ RTL_W32(0x7c, p->val);
+ break;
+ }
+ }
+}
+
+static void rtl_hw_start_8169(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_05) {
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) | PCIMulRW);
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x08);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ rtl_set_rx_tx_config_registers(tp);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03)) {
+ dprintk("Set MAC Reg C+CR Offset 0xE0. "
+ "Bit-3 and bit-14 MUST be 1\n");
+ tp->cp_cmd |= (1 << 14);
+ }
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ rtl8169_set_magic_reg(ioaddr, tp->mac_version);
+
+ /*
+ * Undocumented corner. Supposedly:
+ * (TxTimer << 12) | (TxPackets << 8) | (RxTimer << 4) | RxPackets
+ */
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ if ((tp->mac_version != RTL_GIGA_MAC_VER_01) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_02) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_03) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_04)) {
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ /* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ /* no early-rx interrupts */
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
+ /* Enable all known interrupts by setting the interrupt mask. */
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static void rtl_hw_start_8168(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+ u8 ctl;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ rtl_set_rx_tx_config_registers(tp);
+
+ tp->cp_cmd |= RTL_R16(CPlusCmd) | PktCntrDisable | INTT_1;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ /* Tx performance tweak. */
+ pci_read_config_byte(pdev, 0x69, &ctl);
+ ctl = (ctl & ~0x70) | 0x50;
+ pci_write_config_byte(pdev, 0x69, ctl);
+
+ RTL_W16(IntrMitigate, 0x5151);
+
+	/* Workaround for RxFIFO overflow. */
+ if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+ tp->intr_event |= RxFIFOOver | PCSTimeout;
+ tp->intr_event &= ~RxOverflow;
+ }
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static void rtl_hw_start_8101(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ pci_write_config_word(pdev, 0x68, 0x00);
+ pci_write_config_word(pdev, 0x69, 0x08);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xf000);
+
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret = 0;
+
+ if (new_mtu < ETH_ZLEN || new_mtu > SafeMtu)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ if (!netif_running(dev))
+ goto out;
+
+ rtl8169_down(dev);
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ ret = rtl8169_init_ring(dev);
+ if (ret < 0)
+ goto out;
+
+#ifdef CONFIG_R8169_NAPI
+ napi_enable(&tp->napi);
+#endif
+
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+out:
+ return ret;
+}
+
+static inline void rtl8169_make_unusable_by_asic(struct RxDesc *desc)
+{
+ desc->addr = cpu_to_le64(0x0badbadbadbadbadull);
+ desc->opts1 &= ~cpu_to_le32(DescOwn | RsvdMask);
+}
+
+static void rtl8169_free_rx_skb(struct rtl8169_private *tp,
+ struct sk_buff **sk_buff, struct RxDesc *desc)
+{
+ struct pci_dev *pdev = tp->pci_dev;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), tp->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(*sk_buff);
+ *sk_buff = NULL;
+ rtl8169_make_unusable_by_asic(desc);
+}
+
+static inline void rtl8169_mark_to_asic(struct RxDesc *desc, u32 rx_buf_sz)
+{
+ u32 eor = le32_to_cpu(desc->opts1) & RingEnd;
+
+ desc->opts1 = cpu_to_le32(DescOwn | eor | rx_buf_sz);
+}
+
+static inline void rtl8169_map_to_asic(struct RxDesc *desc, dma_addr_t mapping,
+ u32 rx_buf_sz)
+{
+ desc->addr = cpu_to_le64(mapping);
+ wmb();
+ rtl8169_mark_to_asic(desc, rx_buf_sz);
+}
+
+static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev,
+ struct net_device *dev,
+ struct RxDesc *desc, int rx_buf_sz,
+ unsigned int align)
+{
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+ unsigned int pad;
+
+ pad = align ? align : NET_IP_ALIGN;
+
+ skb = netdev_alloc_skb(dev, rx_buf_sz + pad);
+ if (!skb)
+ goto err_out;
+
+ skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad);
+
+ mapping = pci_map_single(pdev, skb->data, rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+
+ rtl8169_map_to_asic(desc, mapping, rx_buf_sz);
+out:
+ return skb;
+
+err_out:
+ rtl8169_make_unusable_by_asic(desc);
+ goto out;
+}
+
+static void rtl8169_rx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = 0; i < NUM_RX_DESC; i++) {
+ if (tp->Rx_skbuff[i]) {
+ rtl8169_free_rx_skb(tp, tp->Rx_skbuff + i,
+ tp->RxDescArray + i);
+ }
+ }
+}
+
+static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev,
+ u32 start, u32 end)
+{
+ u32 cur;
+
+ for (cur = start; end - cur != 0; cur++) {
+ struct sk_buff *skb;
+ unsigned int i = cur % NUM_RX_DESC;
+
+ WARN_ON((s32)(end - cur) < 0);
+
+ if (tp->Rx_skbuff[i])
+ continue;
+
+ skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev,
+ tp->RxDescArray + i,
+ tp->rx_buf_sz, tp->align);
+ if (!skb)
+ break;
+
+ tp->Rx_skbuff[i] = skb;
+ }
+ return cur - start;
+}
+
+static inline void rtl8169_mark_as_last_descriptor(struct RxDesc *desc)
+{
+ desc->opts1 |= cpu_to_le32(RingEnd);
+}
+
+static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
+{
+ tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;
+}
+
+static int rtl8169_init_ring(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ rtl8169_init_ring_indexes(tp);
+
+ memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info));
+ memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *));
+
+ if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)
+ goto err_out;
+
+ rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1);
+
+ return 0;
+
+err_out:
+ rtl8169_rx_clear(tp);
+ return -ENOMEM;
+}
+
+static void rtl8169_unmap_tx_skb(struct pci_dev *pdev, struct ring_info *tx_skb,
+ struct TxDesc *desc)
+{
+ unsigned int len = tx_skb->len;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), len, PCI_DMA_TODEVICE);
+ desc->opts1 = 0x00;
+ desc->opts2 = 0x00;
+ desc->addr = 0x00;
+ tx_skb->len = 0;
+}
+
+static void rtl8169_tx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = tp->dirty_tx; i < tp->dirty_tx + NUM_TX_DESC; i++) {
+ unsigned int entry = i % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ unsigned int len = tx_skb->len;
+
+ if (len) {
+ struct sk_buff *skb = tx_skb->skb;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb,
+ tp->TxDescArray + entry);
+ if (skb) {
+ dev_kfree_skb(skb);
+ tx_skb->skb = NULL;
+ }
+ tp->dev->stats.tx_dropped++;
+ }
+ }
+ tp->cur_tx = tp->dirty_tx = 0;
+}
+
+static void rtl8169_schedule_work(struct net_device *dev, work_func_t task)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ PREPARE_DELAYED_WORK(&tp->task, task);
+ schedule_delayed_work(&tp->task, 4);
+}
+
+static void rtl8169_wait_for_quiescence(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ synchronize_irq(dev->irq);
+
+ /* Wait for any pending NAPI task to complete */
+#ifdef CONFIG_R8169_NAPI
+ napi_disable(&tp->napi);
+#endif
+
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+#ifdef CONFIG_R8169_NAPI
+ tp->intr_mask = 0xffff;
+ RTL_W16(IntrMask, tp->intr_event);
+ napi_enable(&tp->napi);
+#endif
+}
+
+static void rtl8169_reinit_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+ int ret;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+ rtl8169_close(dev);
+
+ ret = rtl8169_open(dev);
+ if (unlikely(ret < 0)) {
+ if (net_ratelimit() && netif_msg_drv(tp)) {
+ printk(KERN_ERR PFX "%s: reinit failure (status = %d)."
+ " Rescheduling.\n", dev->name, ret);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_reset_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+
+ rtl8169_rx_interrupt(dev, tp, tp->mmio_addr, ~(u32)0);
+ rtl8169_tx_clear(tp);
+
+ if (tp->dirty_rx == tp->cur_rx) {
+ rtl8169_init_ring_indexes(tp);
+ rtl_hw_start(dev);
+ netif_wake_queue(dev);
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+ } else {
+ if (net_ratelimit() && netif_msg_intr(tp)) {
+ printk(KERN_EMERG PFX "%s: Rx buffers shortage\n",
+ dev->name);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_tx_timeout(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ rtl8169_hw_reset(tp->mmio_addr);
+
+	/* Wait a bit so any pending (async) irq can land before the reset runs. */
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+}
+
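+/*
+ * Map each paged fragment of the skb into its own Tx descriptor. The linear
+ * head of the skb (and the FirstFrag bit) is handled by the caller; the last
+ * fragment gets the LastFrag bit and keeps the skb pointer for later release.
+ */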
+static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
+ u32 opts1)
+{
+ struct skb_shared_info *info = skb_shinfo(skb);
+ unsigned int cur_frag, entry;
+ struct TxDesc * uninitialized_var(txd);
+
+ entry = tp->cur_tx;
+ for (cur_frag = 0; cur_frag < info->nr_frags; cur_frag++) {
+ skb_frag_t *frag = info->frags + cur_frag;
+ dma_addr_t mapping;
+ u32 status, len;
+ void *addr;
+
+ entry = (entry + 1) % NUM_TX_DESC;
+
+ txd = tp->TxDescArray + entry;
+ len = frag->size;
+ addr = ((void *) page_address(frag->page)) + frag->page_offset;
+ mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE);
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+
+ txd->opts1 = cpu_to_le32(status);
+ txd->addr = cpu_to_le64(mapping);
+
+ tp->tx_skb[entry].len = len;
+ }
+
+ if (cur_frag) {
+ tp->tx_skb[entry].skb = skb;
+ txd->opts1 |= cpu_to_le32(LastFrag);
+ }
+
+ return cur_frag;
+}
+
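+/*
+ * Build the offload bits for opts1: request hardware TCP segmentation
+ * (LargeSend plus the MSS) when GSO is active, otherwise ask the chip to
+ * insert the IP/TCP/UDP checksum for CHECKSUM_PARTIAL packets.
+ */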
+static inline u32 rtl8169_tso_csum(struct sk_buff *skb, struct net_device *dev)
+{
+ if (dev->features & NETIF_F_TSO) {
+ u32 mss = skb_shinfo(skb)->gso_size;
+
+ if (mss)
+ return LargeSend | ((mss & MSSMask) << MSSShift);
+ }
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ const struct iphdr *ip = ip_hdr(skb);
+
+ if (ip->protocol == IPPROTO_TCP)
+ return IPCS | TCPCS;
+ else if (ip->protocol == IPPROTO_UDP)
+ return IPCS | UDPCS;
+ WARN_ON(1); /* we need a WARN() */
+ }
+ return 0;
+}
+
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned int frags, entry = tp->cur_tx % NUM_TX_DESC;
+ struct TxDesc *txd = tp->TxDescArray + entry;
+ void __iomem *ioaddr = tp->mmio_addr;
+ dma_addr_t mapping;
+ u32 status, len;
+ u32 opts1;
+ int ret = NETDEV_TX_OK;
+
+ if (unlikely(TX_BUFFS_AVAIL(tp) < skb_shinfo(skb)->nr_frags)) {
+ if (netif_msg_drv(tp)) {
+ printk(KERN_ERR
+ "%s: BUG! Tx Ring full when queue awake!\n",
+ dev->name);
+ }
+ goto err_stop;
+ }
+
+ if (unlikely(le32_to_cpu(txd->opts1) & DescOwn))
+ goto err_stop;
+
+ opts1 = DescOwn | rtl8169_tso_csum(skb, dev);
+
+ frags = rtl8169_xmit_frags(tp, skb, opts1);
+ if (frags) {
+ len = skb_headlen(skb);
+ opts1 |= FirstFrag;
+ } else {
+ len = skb->len;
+
+ if (unlikely(len < ETH_ZLEN)) {
+ if (skb_padto(skb, ETH_ZLEN))
+ goto err_update_stats;
+ len = ETH_ZLEN;
+ }
+
+ opts1 |= FirstFrag | LastFrag;
+ tp->tx_skb[entry].skb = skb;
+ }
+
+ mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE);
+
+ tp->tx_skb[entry].len = len;
+ txd->addr = cpu_to_le64(mapping);
+ txd->opts2 = cpu_to_le32(rtl8169_tx_vlan_tag(tp, skb));
+
+ wmb();
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+ txd->opts1 = cpu_to_le32(status);
+
+ dev->trans_start = jiffies;
+
+ tp->cur_tx += frags + 1;
+
+ smp_wmb();
+
+ RTL_W8(TxPoll, NPQ); /* set polling bit */
+
+ if (TX_BUFFS_AVAIL(tp) < MAX_SKB_FRAGS) {
+ netif_stop_queue(dev);
+ smp_rmb();
+ if (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)
+ netif_wake_queue(dev);
+ }
+
+out:
+ return ret;
+
+err_stop:
+ netif_stop_queue(dev);
+ ret = NETDEV_TX_BUSY;
+err_update_stats:
+ dev->stats.tx_dropped++;
+ goto out;
+}
+
+static void rtl8169_pcierr_interrupt(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ u16 pci_status, pci_cmd;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+ pci_read_config_word(pdev, PCI_STATUS, &pci_status);
+
+ if (netif_msg_intr(tp)) {
+ printk(KERN_ERR
+ "%s: PCI error (cmd = 0x%04x, status = 0x%04x).\n",
+ dev->name, pci_cmd, pci_status);
+ }
+
+ /*
+	 * The recovery sequence below admits a very elaborate explanation:
+ * - it seems to work;
+ * - I did not see what else could be done;
+ * - it makes iop3xx happy.
+ *
+ * Feel free to adjust to your needs.
+ */
+ if (pdev->broken_parity_status)
+ pci_cmd &= ~PCI_COMMAND_PARITY;
+ else
+ pci_cmd |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY;
+
+ pci_write_config_word(pdev, PCI_COMMAND, pci_cmd);
+
+ pci_write_config_word(pdev, PCI_STATUS,
+ pci_status & (PCI_STATUS_DETECTED_PARITY |
+ PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_REC_MASTER_ABORT |
+ PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_SIG_TARGET_ABORT));
+
+	/* The infamous DAC problem only happens at boot time. */
+ if ((tp->cp_cmd & PCIDAC) && !tp->dirty_rx && !tp->cur_rx) {
+ if (netif_msg_intr(tp))
+ printk(KERN_INFO "%s: disabling PCI DAC.\n", dev->name);
+ tp->cp_cmd &= ~PCIDAC;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ dev->features &= ~NETIF_F_HIGHDMA;
+ }
+
+ rtl8169_hw_reset(ioaddr);
+
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+}
+
+static void rtl8169_tx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned int dirty_tx, tx_left;
+
+ dirty_tx = tp->dirty_tx;
+ smp_rmb();
+ tx_left = tp->cur_tx - dirty_tx;
+
+ while (tx_left > 0) {
+ unsigned int entry = dirty_tx % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ u32 len = tx_skb->len;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(tp->TxDescArray[entry].opts1);
+ if (status & DescOwn)
+ break;
+
+ dev->stats.tx_bytes += len;
+ dev->stats.tx_packets++;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb, tp->TxDescArray + entry);
+
+ if (status & LastFrag) {
+ dev_kfree_skb_irq(tx_skb->skb);
+ tx_skb->skb = NULL;
+ }
+ dirty_tx++;
+ tx_left--;
+ }
+
+ if (tp->dirty_tx != dirty_tx) {
+ tp->dirty_tx = dirty_tx;
+ smp_wmb();
+ if (netif_queue_stopped(dev) &&
+ (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) {
+ netif_wake_queue(dev);
+ }
+ /*
+ * 8168 hack: TxPoll requests are lost when the Tx packets are
+ * too close. Let's kick an extra TxPoll request when a burst
+ * of start_xmit activity is detected (if it is not detected,
+ * it is slow enough). -- FR
+ */
+ smp_rmb();
+ if (tp->cur_tx != dirty_tx)
+ RTL_W8(TxPoll, NPQ);
+ }
+}
+
+static inline int rtl8169_fragmented_frame(u32 status)
+{
+ return (status & (FirstFrag | LastFrag)) != (FirstFrag | LastFrag);
+}
+
+static inline void rtl8169_rx_csum(struct sk_buff *skb, struct RxDesc *desc)
+{
+ u32 opts1 = le32_to_cpu(desc->opts1);
+ u32 status = opts1 & RxProtoMask;
+
+ if (((status == RxProtoTCP) && !(opts1 & TCPFail)) ||
+ ((status == RxProtoUDP) && !(opts1 & UDPFail)) ||
+ ((status == RxProtoIP) && !(opts1 & IPFail)))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
+}
+
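+/*
+ * Packets shorter than rx_copybreak are copied into a freshly allocated skb
+ * so the original DMA buffer can be handed back to the NIC; larger packets
+ * are passed up directly and their buffer is unmapped by the caller.
+ */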
+static inline bool rtl8169_try_rx_copy(struct sk_buff **sk_buff,
+ struct rtl8169_private *tp, int pkt_size,
+ dma_addr_t addr)
+{
+ struct sk_buff *skb;
+ bool done = false;
+
+ if (pkt_size >= rx_copybreak)
+ goto out;
+
+ skb = netdev_alloc_skb(tp->dev, pkt_size + NET_IP_ALIGN);
+ if (!skb)
+ goto out;
+
+ pci_dma_sync_single_for_cpu(tp->pci_dev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+ skb_reserve(skb, NET_IP_ALIGN);
+ skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size);
+ *sk_buff = skb;
+ done = true;
+out:
+ return done;
+}
+
+static int rtl8169_rx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr, u32 budget)
+{
+ unsigned int cur_rx, rx_left;
+ unsigned int delta, count;
+
+ cur_rx = tp->cur_rx;
+ rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;
+ rx_left = rtl8169_rx_quota(rx_left, budget);
+
+ for (; rx_left > 0; rx_left--, cur_rx++) {
+ unsigned int entry = cur_rx % NUM_RX_DESC;
+ struct RxDesc *desc = tp->RxDescArray + entry;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(desc->opts1);
+
+ if (status & DescOwn)
+ break;
+ if (unlikely(status & RxRES)) {
+ if (netif_msg_rx_err(tp)) {
+ printk(KERN_INFO
+ "%s: Rx ERROR. status = %08x\n",
+ dev->name, status);
+ }
+ dev->stats.rx_errors++;
+ if (status & (RxRWT | RxRUNT))
+ dev->stats.rx_length_errors++;
+ if (status & RxCRC)
+ dev->stats.rx_crc_errors++;
+ if (status & RxFOVF) {
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ dev->stats.rx_fifo_errors++;
+ }
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ struct sk_buff *skb = tp->Rx_skbuff[entry];
+ dma_addr_t addr = le64_to_cpu(desc->addr);
+ int pkt_size = (status & 0x00001FFF) - 4;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ /*
+ * The driver does not support incoming fragmented
+ * frames. They are seen as a symptom of over-mtu
+ * sized frames.
+ */
+ if (unlikely(rtl8169_fragmented_frame(status))) {
+ dev->stats.rx_dropped++;
+ dev->stats.rx_length_errors++;
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ continue;
+ }
+
+ rtl8169_rx_csum(skb, desc);
+
+ if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) {
+ pci_dma_sync_single_for_device(pdev, addr,
+ pkt_size, PCI_DMA_FROMDEVICE);
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ pci_unmap_single(pdev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+ tp->Rx_skbuff[entry] = NULL;
+ }
+
+ skb_put(skb, pkt_size);
+ skb->protocol = eth_type_trans(skb, dev);
+
+ if (rtl8169_rx_vlan_skb(tp, desc, skb) < 0)
+ rtl8169_rx_skb(skb);
+
+ dev->last_rx = jiffies;
+ dev->stats.rx_bytes += pkt_size;
+ dev->stats.rx_packets++;
+ }
+
+		/* Workaround for AMD platforms. */
+ if ((desc->opts2 & cpu_to_le32(0xfffe000)) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_05)) {
+ desc->opts2 = 0;
+ cur_rx++;
+ }
+ }
+
+ count = cur_rx - tp->cur_rx;
+ tp->cur_rx = cur_rx;
+
+ delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);
+ if (!delta && count && netif_msg_intr(tp))
+ printk(KERN_INFO "%s: no Rx buffer allocated\n", dev->name);
+ tp->dirty_rx += delta;
+
+ /*
+	 * FIXME: until there is a periodic timer to try and refill the ring,
+	 * a temporary shortage can definitely kill the Rx process.
+	 * - disable the asic to try and avoid an overflow and kick it again
+	 *   after refill ?
+	 * - how do other drivers handle this condition (Uh oh...).
+ */
+ if ((tp->dirty_rx + NUM_RX_DESC == tp->cur_rx) && netif_msg_intr(tp))
+ printk(KERN_EMERG "%s: Rx buffers exhausted\n", dev->name);
+
+ return count;
+}
+
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+{
+ struct net_device *dev = dev_instance;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int boguscnt = max_interrupt_work;
+ void __iomem *ioaddr = tp->mmio_addr;
+ int status;
+ int handled = 0;
+
+ do {
+ status = RTL_R16(IntrStatus);
+
+ /* hotplug/major error/no more work/shared irq */
+ if ((status == 0xFFFF) || !status)
+ break;
+
+ handled = 1;
+
+ if (unlikely(!netif_running(dev))) {
+ rtl8169_asic_down(ioaddr);
+ goto out;
+ }
+
+ status &= tp->intr_mask;
+ RTL_W16(IntrStatus,
+ (status & RxFIFOOver) ? (status | RxOverflow) : status);
+
+ if (!(status & tp->intr_event))
+ break;
+
+		/* Workaround for Rx FIFO overflow */
+ if (unlikely(status & RxFIFOOver) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_11)) {
+ netif_stop_queue(dev);
+ rtl8169_tx_timeout(dev);
+ break;
+ }
+
+ if (unlikely(status & SYSErr)) {
+ rtl8169_pcierr_interrupt(dev);
+ break;
+ }
+
+ if (status & LinkChg)
+ rtl8169_check_link_status(dev, tp, ioaddr);
+
+#ifdef CONFIG_R8169_NAPI
+ if (status & tp->napi_event) {
+ RTL_W16(IntrMask, tp->intr_event & ~tp->napi_event);
+ tp->intr_mask = ~tp->napi_event;
+
+ if (likely(netif_rx_schedule_prep(dev, &tp->napi)))
+ __netif_rx_schedule(dev, &tp->napi);
+ else if (netif_msg_intr(tp)) {
+ printk(KERN_INFO "%s: interrupt %04x in poll\n",
+ dev->name, status);
+ }
+ }
+ break;
+#else
+ /* Rx interrupt */
+ if (status & (RxOK | RxOverflow | RxFIFOOver))
+ rtl8169_rx_interrupt(dev, tp, ioaddr, ~(u32)0);
+
+ /* Tx interrupt */
+ if (status & (TxOK | TxErr))
+ rtl8169_tx_interrupt(dev, tp, ioaddr);
+#endif
+
+ boguscnt--;
+ } while (boguscnt > 0);
+
+ if (boguscnt <= 0) {
+		if (netif_msg_intr(tp) && net_ratelimit()) {
+ printk(KERN_WARNING
+ "%s: Too much work at interrupt!\n", dev->name);
+ }
+ /* Clear all interrupt sources. */
+ RTL_W16(IntrStatus, 0xffff);
+ }
+out:
+ return IRQ_RETVAL(handled);
+}
+
+#ifdef CONFIG_R8169_NAPI
+static int rtl8169_poll(struct napi_struct *napi, int budget)
+{
+ struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
+ struct net_device *dev = tp->dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ int work_done;
+
+ work_done = rtl8169_rx_interrupt(dev, tp, ioaddr, (u32) budget);
+ rtl8169_tx_interrupt(dev, tp, ioaddr);
+
+ if (work_done < budget) {
+ netif_rx_complete(dev, napi);
+ tp->intr_mask = 0xffff;
+ /*
+ * 20040426: the barrier is not strictly required but the
+ * behavior of the irq handler could be less predictable
+ * without it. Btw, the lack of flush for the posted pci
+ * write is safe - FR
+ */
+ smp_wmb();
+ RTL_W16(IntrMask, tp->intr_event);
+ }
+
+ return work_done;
+}
+#endif
+
+static void rtl8169_down(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int intrmask;
+
+ rtl8169_delete_timer(dev);
+
+ netif_stop_queue(dev);
+
+#ifdef CONFIG_R8169_NAPI
+ napi_disable(&tp->napi);
+#endif
+
+core_down:
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ /* Update the error counts. */
+ dev->stats.rx_missed_errors += RTL_R32(RxMissed);
+ RTL_W32(RxMissed, 0);
+
+ spin_unlock_irq(&tp->lock);
+
+ synchronize_irq(dev->irq);
+
+ /* Give a racing hard_start_xmit a few cycles to complete. */
+ synchronize_sched(); /* FIXME: should this be synchronize_irq()? */
+
+ /*
+	 * And now for the 50k$ question: are IRQs disabled or not?
+ *
+ * Two paths lead here:
+ * 1) dev->close
+ * -> netif_running() is available to sync the current code and the
+ * IRQ handler. See rtl8169_interrupt for details.
+ * 2) dev->change_mtu
+	 *    -> rtl8169_poll cannot be scheduled again and re-enable the
+	 *       interrupts. Let's simply issue the IRQ down sequence again.
+ *
+	 * No loop if hotplugged or major error (0xffff).
+ */
+ intrmask = RTL_R16(IntrMask);
+ if (intrmask && (intrmask != 0xffff))
+ goto core_down;
+
+ rtl8169_tx_clear(tp);
+
+ rtl8169_rx_clear(tp);
+}
+
+static int rtl8169_close(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ rtl8169_down(dev);
+
+ free_irq(dev->irq, dev);
+
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ tp->TxDescArray = NULL;
+ tp->RxDescArray = NULL;
+
+ return 0;
+}
+
+static void rtl_set_rx_mode(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+ u32 mc_filter[2]; /* Multicast hash filter */
+ int rx_mode;
+ u32 tmp = 0;
+
+ if (dev->flags & IFF_PROMISC) {
+ /* Unconditionally log net taps. */
+ if (netif_msg_link(tp)) {
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n",
+ dev->name);
+ }
+ rx_mode =
+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys |
+ AcceptAllPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else if ((dev->mc_count > multicast_filter_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to filter perfectly -- accept all multicasts. */
+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else {
+ struct dev_mc_list *mclist;
+ unsigned int i;
+
+ rx_mode = AcceptBroadcast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0;
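+		/*
+		 * Hash each address with the Ethernet CRC; the top six bits
+		 * select one of 64 filter bits spread across the two 32-bit
+		 * MAR registers.
+		 */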
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
+ i++, mclist = mclist->next) {
+ int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
+ mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
+ rx_mode |= AcceptMulticast;
+ }
+ }
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ tmp = rtl8169_rx_config | rx_mode |
+ (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_11) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_12) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_15) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_17)) {
+ mc_filter[0] = 0xffffffff;
+ mc_filter[1] = 0xffffffff;
+ }
+
+ RTL_W32(MAR0 + 0, mc_filter[0]);
+ RTL_W32(MAR0 + 4, mc_filter[1]);
+
+ RTL_W32(RxConfig, tmp);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+/**
+ * rtl8169_get_stats - Get rtl8169 read/write statistics
+ * @dev: The Ethernet Device to get statistics for
+ *
+ * Get TX/RX statistics for rtl8169
+ */
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ if (netif_running(dev)) {
+ spin_lock_irqsave(&tp->lock, flags);
+ dev->stats.rx_missed_errors += RTL_R32(RxMissed);
+ RTL_W32(RxMissed, 0);
+ spin_unlock_irqrestore(&tp->lock, flags);
+ }
+
+ return &dev->stats;
+}
+
+#ifdef CONFIG_PM
+
+static int rtl8169_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ if (!netif_running(dev))
+ goto out_pci_suspend;
+
+ netif_device_detach(dev);
+ netif_stop_queue(dev);
+
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ dev->stats.rx_missed_errors += RTL_R32(RxMissed);
+ RTL_W32(RxMissed, 0);
+
+ spin_unlock_irq(&tp->lock);
+
+out_pci_suspend:
+ pci_save_state(pdev);
+ pci_enable_wake(pdev, pci_choose_state(pdev, state),
+ (tp->features & RTL_FEATURE_WOL) ? 1 : 0);
+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
+
+ return 0;
+}
+
+static int rtl8169_resume(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ pci_enable_wake(pdev, PCI_D0, 0);
+
+ if (!netif_running(dev))
+ goto out;
+
+ netif_device_attach(dev);
+
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+out:
+ return 0;
+}
+
+#endif /* CONFIG_PM */
+
+static struct pci_driver rtl8169_pci_driver = {
+ .name = MODULENAME,
+ .id_table = rtl8169_pci_tbl,
+ .probe = rtl8169_init_one,
+ .remove = __devexit_p(rtl8169_remove_one),
+#ifdef CONFIG_PM
+ .suspend = rtl8169_suspend,
+ .resume = rtl8169_resume,
+#endif
+};
+
+static int __init rtl8169_init_module(void)
+{
+ return pci_register_driver(&rtl8169_pci_driver);
+}
+
+static void __exit rtl8169_cleanup_module(void)
+{
+ pci_unregister_driver(&rtl8169_pci_driver);
+}
+
+module_init(rtl8169_init_module);
+module_exit(rtl8169_cleanup_module);
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/devices/r8169-2.6.28-ethercat.c Wed Jan 13 00:04:47 2010 +0100
@@ -0,0 +1,3940 @@
+/*
+ * r8169.c: RealTek 8169/8168/8101 ethernet driver.
+ *
+ * Copyright (c) 2002 ShuChen <shuchen@realtek.com.tw>
+ * Copyright (c) 2003 - 2007 Francois Romieu <romieu@fr.zoreil.com>
+ * Copyright (c) a lot of people too. Please respect their work.
+ *
+ * See MAINTAINERS file for support contact information.
+ *
+ * vim: noexpandtab
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/if_vlan.h>
+#include <linux/crc32.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/init.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include "../globals.h"
+#include "ecdev.h"
+
+#define RTL8169_VERSION "2.3LK-NAPI"
+#define MODULENAME "ec_r8169"
+#define PFX MODULENAME ": "
+
+#ifdef RTL8169_DEBUG
+#define assert(expr) \
+ if (!(expr)) { \
+ printk( "Assertion failed! %s,%s,%s,line=%d\n", \
+ #expr,__FILE__,__func__,__LINE__); \
+ }
+#define dprintk(fmt, args...) \
+ do { printk(KERN_DEBUG PFX fmt, ## args); } while (0)
+#else
+#define assert(expr) do {} while (0)
+#define dprintk(fmt, args...) do {} while (0)
+#endif /* RTL8169_DEBUG */
+
+#define R8169_MSG_DEFAULT \
+ (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN)
+
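+/*
+ * Free Tx descriptors: the window between dirty_tx (first descriptor not yet
+ * reclaimed) and cur_tx (next descriptor to use) is in flight; one slot is
+ * kept unused so a full ring can be told apart from an empty one.
+ */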
+#define TX_BUFFS_AVAIL(tp) \
+ (tp->dirty_tx + NUM_TX_DESC - tp->cur_tx - 1)
+
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+static const int max_interrupt_work = 20;
+
+/* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
+   The RTL chips use a 64-element hash table based on the Ethernet CRC. */
+static const int multicast_filter_limit = 32;
+
+/* MAC address length */
+#define MAC_ADDR_LEN 6
+
+#define MAX_READ_REQUEST_SHIFT 12
+#define RX_FIFO_THRESH 7 /* 7 means NO threshold, Rx buffer level before first PCI xfer. */
+#define RX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define TX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define EarlyTxThld 0x3F /* 0x3F means NO early transmit */
+#define RxPacketMaxSize 0x3FE8 /* 16K - 1 - ETH_HLEN - VLAN - CRC... */
+#define SafeMtu 0x1c20 /* ... actually life sucks beyond ~7k */
+#define InterFrameGap 0x03 /* 3 means InterFrameGap = the shortest one */
+
+#define R8169_REGS_SIZE 256
+#define R8169_NAPI_WEIGHT 64
+#define NUM_TX_DESC 64 /* Number of Tx descriptor registers */
+#define NUM_RX_DESC 256 /* Number of Rx descriptor registers */
+#define RX_BUF_SIZE 1536 /* Rx Buffer size */
+#define R8169_TX_RING_BYTES (NUM_TX_DESC * sizeof(struct TxDesc))
+#define R8169_RX_RING_BYTES (NUM_RX_DESC * sizeof(struct RxDesc))
+
+#define RTL8169_TX_TIMEOUT (6*HZ)
+#define RTL8169_PHY_TIMEOUT (10*HZ)
+
+#define RTL_EEPROM_SIG cpu_to_le32(0x8129)
+#define RTL_EEPROM_SIG_MASK cpu_to_le32(0xffff)
+#define RTL_EEPROM_SIG_ADDR 0x0000
+
+/* write/read MMIO register */
+#define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg))
+#define RTL_W16(reg, val16) writew ((val16), ioaddr + (reg))
+#define RTL_W32(reg, val32) writel ((val32), ioaddr + (reg))
+#define RTL_R8(reg) readb (ioaddr + (reg))
+#define RTL_R16(reg) readw (ioaddr + (reg))
+#define RTL_R32(reg) ((unsigned long) readl (ioaddr + (reg)))
+
+enum mac_version {
+ RTL_GIGA_MAC_VER_01 = 0x01, // 8169
+ RTL_GIGA_MAC_VER_02 = 0x02, // 8169S
+ RTL_GIGA_MAC_VER_03 = 0x03, // 8110S
+ RTL_GIGA_MAC_VER_04 = 0x04, // 8169SB
+ RTL_GIGA_MAC_VER_05 = 0x05, // 8110SCd
+ RTL_GIGA_MAC_VER_06 = 0x06, // 8110SCe
+ RTL_GIGA_MAC_VER_07 = 0x07, // 8102e
+ RTL_GIGA_MAC_VER_08 = 0x08, // 8102e
+ RTL_GIGA_MAC_VER_09 = 0x09, // 8102e
+ RTL_GIGA_MAC_VER_10 = 0x0a, // 8101e
+ RTL_GIGA_MAC_VER_11 = 0x0b, // 8168Bb
+ RTL_GIGA_MAC_VER_12 = 0x0c, // 8168Be
+ RTL_GIGA_MAC_VER_13 = 0x0d, // 8101Eb
+ RTL_GIGA_MAC_VER_14 = 0x0e, // 8101 ?
+ RTL_GIGA_MAC_VER_15 = 0x0f, // 8101 ?
+ RTL_GIGA_MAC_VER_16 = 0x11, // 8101Ec
+ RTL_GIGA_MAC_VER_17 = 0x10, // 8168Bf
+ RTL_GIGA_MAC_VER_18 = 0x12, // 8168CP
+ RTL_GIGA_MAC_VER_19 = 0x13, // 8168C
+ RTL_GIGA_MAC_VER_20 = 0x14, // 8168C
+ RTL_GIGA_MAC_VER_21 = 0x15, // 8168C
+ RTL_GIGA_MAC_VER_22 = 0x16, // 8168C
+ RTL_GIGA_MAC_VER_23 = 0x17, // 8168CP
+ RTL_GIGA_MAC_VER_24 = 0x18, // 8168CP
+ RTL_GIGA_MAC_VER_25 = 0x19 // 8168D
+};
+
+#define _R(NAME,MAC,MASK) \
+ { .name = NAME, .mac_version = MAC, .RxConfigMask = MASK }
+
+static const struct {
+ const char *name;
+ u8 mac_version;
+ u32 RxConfigMask; /* Clears the bits supported by this chip */
+} rtl_chip_info[] = {
+ _R("RTL8169", RTL_GIGA_MAC_VER_01, 0xff7e1880), // 8169
+ _R("RTL8169s", RTL_GIGA_MAC_VER_02, 0xff7e1880), // 8169S
+ _R("RTL8110s", RTL_GIGA_MAC_VER_03, 0xff7e1880), // 8110S
+ _R("RTL8169sb/8110sb", RTL_GIGA_MAC_VER_04, 0xff7e1880), // 8169SB
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_05, 0xff7e1880), // 8110SCd
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_06, 0xff7e1880), // 8110SCe
+ _R("RTL8102e", RTL_GIGA_MAC_VER_07, 0xff7e1880), // PCI-E
+ _R("RTL8102e", RTL_GIGA_MAC_VER_08, 0xff7e1880), // PCI-E
+ _R("RTL8102e", RTL_GIGA_MAC_VER_09, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_10, 0xff7e1880), // PCI-E
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_11, 0xff7e1880), // PCI-E
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_12, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_13, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_14, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_15, 0xff7e1880), // PCI-E 8139
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_17, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_16, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_18, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_19, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_20, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_21, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_22, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_23, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_24, 0xff7e1880), // PCI-E
+ _R("RTL8168d/8111d", RTL_GIGA_MAC_VER_25, 0xff7e1880) // PCI-E
+};
+#undef _R
+
+enum cfg_version {
+ RTL_CFG_0 = 0x00,
+ RTL_CFG_1,
+ RTL_CFG_2
+};
+
+static void rtl_hw_start_8169(struct net_device *);
+static void rtl_hw_start_8168(struct net_device *);
+static void rtl_hw_start_8101(struct net_device *);
+
+static struct pci_device_id rtl8169_pci_tbl[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8129), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8136), 0, 0, RTL_CFG_2 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8167), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8168), 0, 0, RTL_CFG_1 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_AT, 0xc107), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(0x16ec, 0x0116), 0, 0, RTL_CFG_0 },
+ { PCI_VENDOR_ID_LINKSYS, 0x1032,
+ PCI_ANY_ID, 0x0024, 0, 0, RTL_CFG_0 },
+ { 0x0001, 0x8168,
+ PCI_ANY_ID, 0x2410, 0, 0, RTL_CFG_2 },
+ {0,},
+};
+
+/* prevent driver from being loaded automatically */
+//MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
+
+static int rx_copybreak = 200;
+static int use_dac;
+static struct {
+ u32 msg_enable;
+} debug = { -1 };
+
+enum rtl_registers {
+ MAC0 = 0, /* Ethernet hardware address. */
+ MAC4 = 4,
+ MAR0 = 8, /* Multicast filter. */
+ CounterAddrLow = 0x10,
+ CounterAddrHigh = 0x14,
+ TxDescStartAddrLow = 0x20,
+ TxDescStartAddrHigh = 0x24,
+ TxHDescStartAddrLow = 0x28,
+ TxHDescStartAddrHigh = 0x2c,
+ FLASH = 0x30,
+ ERSR = 0x36,
+ ChipCmd = 0x37,
+ TxPoll = 0x38,
+ IntrMask = 0x3c,
+ IntrStatus = 0x3e,
+ TxConfig = 0x40,
+ RxConfig = 0x44,
+ RxMissed = 0x4c,
+ Cfg9346 = 0x50,
+ Config0 = 0x51,
+ Config1 = 0x52,
+ Config2 = 0x53,
+ Config3 = 0x54,
+ Config4 = 0x55,
+ Config5 = 0x56,
+ MultiIntr = 0x5c,
+ PHYAR = 0x60,
+ PHYstatus = 0x6c,
+ RxMaxSize = 0xda,
+ CPlusCmd = 0xe0,
+ IntrMitigate = 0xe2,
+ RxDescAddrLow = 0xe4,
+ RxDescAddrHigh = 0xe8,
+ EarlyTxThres = 0xec,
+ FuncEvent = 0xf0,
+ FuncEventMask = 0xf4,
+ FuncPresetState = 0xf8,
+ FuncForceEvent = 0xfc,
+};
+
+enum rtl8110_registers {
+ TBICSR = 0x64,
+ TBI_ANAR = 0x68,
+ TBI_LPAR = 0x6a,
+};
+
+enum rtl8168_8101_registers {
+ CSIDR = 0x64,
+ CSIAR = 0x68,
+#define CSIAR_FLAG 0x80000000
+#define CSIAR_WRITE_CMD 0x80000000
+#define CSIAR_BYTE_ENABLE 0x0f
+#define CSIAR_BYTE_ENABLE_SHIFT 12
+#define CSIAR_ADDR_MASK 0x0fff
+
+ EPHYAR = 0x80,
+#define EPHYAR_FLAG 0x80000000
+#define EPHYAR_WRITE_CMD 0x80000000
+#define EPHYAR_REG_MASK 0x1f
+#define EPHYAR_REG_SHIFT 16
+#define EPHYAR_DATA_MASK 0xffff
+ DBG_REG = 0xd1,
+#define FIX_NAK_1 (1 << 4)
+#define FIX_NAK_2 (1 << 3)
+};
+
+enum rtl_register_content {
+ /* InterruptStatusBits */
+ SYSErr = 0x8000,
+ PCSTimeout = 0x4000,
+ SWInt = 0x0100,
+ TxDescUnavail = 0x0080,
+ RxFIFOOver = 0x0040,
+ LinkChg = 0x0020,
+ RxOverflow = 0x0010,
+ TxErr = 0x0008,
+ TxOK = 0x0004,
+ RxErr = 0x0002,
+ RxOK = 0x0001,
+
+ /* RxStatusDesc */
+ RxFOVF = (1 << 23),
+ RxRWT = (1 << 22),
+ RxRES = (1 << 21),
+ RxRUNT = (1 << 20),
+ RxCRC = (1 << 19),
+
+ /* ChipCmdBits */
+ CmdReset = 0x10,
+ CmdRxEnb = 0x08,
+ CmdTxEnb = 0x04,
+ RxBufEmpty = 0x01,
+
+ /* TXPoll register p.5 */
+ HPQ = 0x80, /* Poll cmd on the high prio queue */
+ NPQ = 0x40, /* Poll cmd on the low prio queue */
+ FSWInt = 0x01, /* Forced software interrupt */
+
+ /* Cfg9346Bits */
+ Cfg9346_Lock = 0x00,
+ Cfg9346_Unlock = 0xc0,
+
+ /* rx_mode_bits */
+ AcceptErr = 0x20,
+ AcceptRunt = 0x10,
+ AcceptBroadcast = 0x08,
+ AcceptMulticast = 0x04,
+ AcceptMyPhys = 0x02,
+ AcceptAllPhys = 0x01,
+
+ /* RxConfigBits */
+ RxCfgFIFOShift = 13,
+ RxCfgDMAShift = 8,
+
+ /* TxConfigBits */
+ TxInterFrameGapShift = 24,
+	TxDMAShift = 8,	/* DMA burst value (0-7) is shifted by this many bits */
+
+ /* Config1 register p.24 */
+ LEDS1 = (1 << 7),
+ LEDS0 = (1 << 6),
+ MSIEnable = (1 << 5), /* Enable Message Signaled Interrupt */
+ Speed_down = (1 << 4),
+ MEMMAP = (1 << 3),
+ IOMAP = (1 << 2),
+ VPD = (1 << 1),
+ PMEnable = (1 << 0), /* Power Management Enable */
+
+ /* Config2 register p. 25 */
+ PCI_Clock_66MHz = 0x01,
+ PCI_Clock_33MHz = 0x00,
+
+ /* Config3 register p.25 */
+ MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */
+ LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */
+ Beacon_en = (1 << 0), /* 8168 only. Reserved in the 8168b */
+
+ /* Config5 register p.27 */
+ BWF = (1 << 6), /* Accept Broadcast wakeup frame */
+ MWF = (1 << 5), /* Accept Multicast wakeup frame */
+ UWF = (1 << 4), /* Accept Unicast wakeup frame */
+ LanWake = (1 << 1), /* LanWake enable/disable */
+ PMEStatus = (1 << 0), /* PME status can be reset by PCI RST# */
+
+ /* TBICSR p.28 */
+ TBIReset = 0x80000000,
+ TBILoopback = 0x40000000,
+ TBINwEnable = 0x20000000,
+ TBINwRestart = 0x10000000,
+ TBILinkOk = 0x02000000,
+ TBINwComplete = 0x01000000,
+
+ /* CPlusCmd p.31 */
+ EnableBist = (1 << 15), // 8168 8101
+ Mac_dbgo_oe = (1 << 14), // 8168 8101
+ Normal_mode = (1 << 13), // unused
+ Force_half_dup = (1 << 12), // 8168 8101
+ Force_rxflow_en = (1 << 11), // 8168 8101
+ Force_txflow_en = (1 << 10), // 8168 8101
+ Cxpl_dbg_sel = (1 << 9), // 8168 8101
+ ASF = (1 << 8), // 8168 8101
+ PktCntrDisable = (1 << 7), // 8168 8101
+ Mac_dbgo_sel = 0x001c, // 8168
+ RxVlan = (1 << 6),
+ RxChkSum = (1 << 5),
+ PCIDAC = (1 << 4),
+ PCIMulRW = (1 << 3),
+ INTT_0 = 0x0000, // 8168
+ INTT_1 = 0x0001, // 8168
+ INTT_2 = 0x0002, // 8168
+ INTT_3 = 0x0003, // 8168
+
+ /* rtl8169_PHYstatus */
+ TBI_Enable = 0x80,
+ TxFlowCtrl = 0x40,
+ RxFlowCtrl = 0x20,
+ _1000bpsF = 0x10,
+ _100bps = 0x08,
+ _10bps = 0x04,
+ LinkStatus = 0x02,
+ FullDup = 0x01,
+
+ /* _TBICSRBit */
+ TBILinkOK = 0x02000000,
+
+ /* DumpCounterCommand */
+ CounterDump = 0x8,
+};
+
+enum desc_status_bit {
+ DescOwn = (1 << 31), /* Descriptor is owned by NIC */
+ RingEnd = (1 << 30), /* End of descriptor ring */
+ FirstFrag = (1 << 29), /* First segment of a packet */
+ LastFrag = (1 << 28), /* Final segment of a packet */
+
+ /* Tx private */
+ LargeSend = (1 << 27), /* TCP Large Send Offload (TSO) */
+ MSSShift = 16, /* MSS value position */
+ MSSMask = 0xfff, /* MSS value + LargeSend bit: 12 bits */
+ IPCS = (1 << 18), /* Calculate IP checksum */
+ UDPCS = (1 << 17), /* Calculate UDP/IP checksum */
+ TCPCS = (1 << 16), /* Calculate TCP/IP checksum */
+ TxVlanTag = (1 << 17), /* Add VLAN tag */
+
+ /* Rx private */
+ PID1 = (1 << 18), /* Protocol ID bit 1/2 */
+ PID0 = (1 << 17), /* Protocol ID bit 2/2 */
+
+#define RxProtoUDP (PID1)
+#define RxProtoTCP (PID0)
+#define RxProtoIP (PID1 | PID0)
+#define RxProtoMask RxProtoIP
+
+ IPFail = (1 << 16), /* IP checksum failed */
+ UDPFail = (1 << 15), /* UDP/IP checksum failed */
+ TCPFail = (1 << 14), /* TCP/IP checksum failed */
+ RxVlanTag = (1 << 16), /* VLAN tag available */
+};
+
+#define RsvdMask 0x3fffc000
+
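+/*
+ * Hardware descriptor layout, shared by the Tx and Rx rings: opts1 carries
+ * the DescOwn/RingEnd/fragment and status bits defined above, opts2 the VLAN
+ * tag, and addr the 64-bit bus address of the buffer, all little-endian.
+ */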
+struct TxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct RxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct ring_info {
+ struct sk_buff *skb;
+ u32 len;
+ u8 __pad[sizeof(void *) - sizeof(u32)];
+};
+
+enum features {
+ RTL_FEATURE_WOL = (1 << 0),
+ RTL_FEATURE_MSI = (1 << 1),
+ RTL_FEATURE_GMII = (1 << 2),
+};
+
+struct rtl8169_private {
+ void __iomem *mmio_addr; /* memory map physical address */
+ struct pci_dev *pci_dev; /* Index of PCI device */
+ struct net_device *dev;
+ struct napi_struct napi;
+ spinlock_t lock; /* spin lock flag */
+ u32 msg_enable;
+ int chipset;
+ int mac_version;
+ u32 cur_rx; /* Index into the Rx descriptor buffer of next Rx pkt. */
+	u32 cur_tx; /* Index into the Tx descriptor buffer of next Tx pkt. */
+ u32 dirty_rx;
+ u32 dirty_tx;
+ struct TxDesc *TxDescArray; /* 256-aligned Tx descriptor ring */
+ struct RxDesc *RxDescArray; /* 256-aligned Rx descriptor ring */
+ dma_addr_t TxPhyAddr;
+ dma_addr_t RxPhyAddr;
+ struct sk_buff *Rx_skbuff[NUM_RX_DESC]; /* Rx data buffers */
+ struct ring_info tx_skb[NUM_TX_DESC]; /* Tx data buffers */
+ unsigned align;
+ unsigned rx_buf_sz;
+ struct timer_list timer;
+ u16 cp_cmd;
+ u16 intr_event;
+ u16 napi_event;
+ u16 intr_mask;
+ int phy_auto_nego_reg;
+ int phy_1000_ctrl_reg;
+#ifdef CONFIG_R8169_VLAN
+ struct vlan_group *vlgrp;
+#endif
+ int (*set_speed)(struct net_device *, u8 autoneg, u16 speed, u8 duplex);
+ int (*get_settings)(struct net_device *, struct ethtool_cmd *);
+ void (*phy_reset_enable)(void __iomem *);
+ void (*hw_start)(struct net_device *);
+ unsigned int (*phy_reset_pending)(void __iomem *);
+ unsigned int (*link_ok)(void __iomem *);
+ int pcie_cap;
+ struct delayed_work task;
+ unsigned features;
+
+ struct mii_if_info mii;
+
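+	/* EtherCAT integration: handle of the EtherCAT device (NULL while the
+	 * port runs as a normal network interface) and a jiffies stamp used
+	 * for the EtherCAT watchdog. */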
+ ec_device_t *ecdev;
+ unsigned long ec_watchdog_jiffies;
+};
+
+MODULE_AUTHOR("Florian Pose <fp@igh-essen.com>");
+MODULE_DESCRIPTION("EtherCAT-capable RealTek RTL-8169 Gigabit Ethernet driver");
+module_param(rx_copybreak, int, 0);
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
+module_param(use_dac, int, 0);
+MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
+module_param_named(debug, debug.msg_enable, int, 0);
+MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(EC_MASTER_VERSION);
+
+static int rtl8169_open(struct net_device *dev);
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev);
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance);
+static int rtl8169_init_ring(struct net_device *dev);
+static void rtl_hw_start(struct net_device *dev);
+static int rtl8169_close(struct net_device *dev);
+static void rtl_set_rx_mode(struct net_device *dev);
+static void rtl8169_tx_timeout(struct net_device *dev);
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev);
+static int rtl8169_rx_interrupt(struct net_device *, struct rtl8169_private *,
+ void __iomem *, u32 budget);
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu);
+static void rtl8169_down(struct net_device *dev);
+static void rtl8169_rx_clear(struct rtl8169_private *tp);
+static void ec_poll(struct net_device *dev);
+static int rtl8169_poll(struct napi_struct *napi, int budget);
+
+static const unsigned int rtl8169_rx_config =
+ (RX_FIFO_THRESH << RxCfgFIFOShift) | (RX_DMA_BURST << RxCfgDMAShift);
+
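+/*
+ * MII access goes through the PHYAR register: bits 20-16 select the PHY
+ * register and bits 15-0 carry the data. Setting bit 31 starts a write and
+ * the bit clears on completion; for a read the bit is left clear and the
+ * chip sets it once the data in bits 15-0 is valid.
+ */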
+static void mdio_write(void __iomem *ioaddr, int reg_addr, int value)
+{
+ int i;
+
+ RTL_W32(PHYAR, 0x80000000 | (reg_addr & 0x1f) << 16 | (value & 0xffff));
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed writing to the specified
+ * MII register.
+ */
+ if (!(RTL_R32(PHYAR) & 0x80000000))
+ break;
+ udelay(25);
+ }
+}
+
+static int mdio_read(void __iomem *ioaddr, int reg_addr)
+{
+ int i, value = -1;
+
+ RTL_W32(PHYAR, 0x0 | (reg_addr & 0x1f) << 16);
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed retrieving data from
+ * the specified MII register.
+ */
+ if (RTL_R32(PHYAR) & 0x80000000) {
+ value = RTL_R32(PHYAR) & 0xffff;
+ break;
+ }
+ udelay(25);
+ }
+ return value;
+}
+
+static void mdio_patch(void __iomem *ioaddr, int reg_addr, int value)
+{
+ mdio_write(ioaddr, reg_addr, mdio_read(ioaddr, reg_addr) | value);
+}
+
+static void rtl_mdio_write(struct net_device *dev, int phy_id, int location,
+ int val)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ mdio_write(ioaddr, location, val);
+}
+
+static int rtl_mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ return mdio_read(ioaddr, location);
+}
+
+static void rtl_ephy_write(void __iomem *ioaddr, int reg_addr, int value)
+{
+ unsigned int i;
+
+ RTL_W32(EPHYAR, EPHYAR_WRITE_CMD | (value & EPHYAR_DATA_MASK) |
+ (reg_addr & EPHYAR_REG_MASK) << EPHYAR_REG_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (!(RTL_R32(EPHYAR) & EPHYAR_FLAG))
+ break;
+ udelay(10);
+ }
+}
+
+static u16 rtl_ephy_read(void __iomem *ioaddr, int reg_addr)
+{
+ u16 value = 0xffff;
+ unsigned int i;
+
+ RTL_W32(EPHYAR, (reg_addr & EPHYAR_REG_MASK) << EPHYAR_REG_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (RTL_R32(EPHYAR) & EPHYAR_FLAG) {
+ value = RTL_R32(EPHYAR) & EPHYAR_DATA_MASK;
+ break;
+ }
+ udelay(10);
+ }
+
+ return value;
+}
+
+static void rtl_csi_write(void __iomem *ioaddr, int addr, int value)
+{
+ unsigned int i;
+
+ RTL_W32(CSIDR, value);
+ RTL_W32(CSIAR, CSIAR_WRITE_CMD | (addr & CSIAR_ADDR_MASK) |
+ CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (!(RTL_R32(CSIAR) & CSIAR_FLAG))
+ break;
+ udelay(10);
+ }
+}
+
+static u32 rtl_csi_read(void __iomem *ioaddr, int addr)
+{
+ u32 value = ~0x00;
+ unsigned int i;
+
+ RTL_W32(CSIAR, (addr & CSIAR_ADDR_MASK) |
+ CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (RTL_R32(CSIAR) & CSIAR_FLAG) {
+ value = RTL_R32(CSIDR);
+ break;
+ }
+ udelay(10);
+ }
+
+ return value;
+}
+
+static void rtl8169_irq_mask_and_ack(void __iomem *ioaddr)
+{
+ RTL_W16(IntrMask, 0x0000);
+
+ RTL_W16(IntrStatus, 0xffff);
+}
+
+static void rtl8169_asic_down(void __iomem *ioaddr)
+{
+ RTL_W8(ChipCmd, 0x00);
+ rtl8169_irq_mask_and_ack(ioaddr);
+ RTL_R16(CPlusCmd);
+}
+
+static unsigned int rtl8169_tbi_reset_pending(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBIReset;
+}
+
+static unsigned int rtl8169_xmii_reset_pending(void __iomem *ioaddr)
+{
+ return mdio_read(ioaddr, MII_BMCR) & BMCR_RESET;
+}
+
+static unsigned int rtl8169_tbi_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBILinkOk;
+}
+
+static unsigned int rtl8169_xmii_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R8(PHYstatus) & LinkStatus;
+}
+
+static void rtl8169_tbi_reset_enable(void __iomem *ioaddr)
+{
+ RTL_W32(TBICSR, RTL_R32(TBICSR) | TBIReset);
+}
+
+static void rtl8169_xmii_reset_enable(void __iomem *ioaddr)
+{
+ unsigned int val;
+
+ val = mdio_read(ioaddr, MII_BMCR) | BMCR_RESET;
+ mdio_write(ioaddr, MII_BMCR, val & 0xffff);
+}
+
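+/*
+ * Link supervision: when the port is claimed as an EtherCAT device
+ * (tp->ecdev is set), the link state is reported to the master via
+ * ecdev_set_link(); otherwise the normal netif carrier handling applies.
+ */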
+static void rtl8169_check_link_status(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned long flags;
+
+ if (tp->ecdev) {
+ ecdev_set_link(tp->ecdev, tp->link_ok(ioaddr) ? 1 : 0);
+ } else {
+ spin_lock_irqsave(&tp->lock, flags);
+ if (tp->link_ok(ioaddr)) {
+ netif_carrier_on(dev);
+ if (netif_msg_ifup(tp))
+ printk(KERN_INFO PFX "%s: link up\n", dev->name);
+ } else {
+ if (netif_msg_ifdown(tp))
+ printk(KERN_INFO PFX "%s: link down\n", dev->name);
+ netif_carrier_off(dev);
+ }
+ spin_unlock_irqrestore(&tp->lock, flags);
+ }
+}
+
+static void rtl8169_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u8 options;
+
+ wol->wolopts = 0;
+
+#define WAKE_ANY (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_BCAST | WAKE_MCAST)
+ wol->supported = WAKE_ANY;
+
+ spin_lock_irq(&tp->lock);
+
+ options = RTL_R8(Config1);
+ if (!(options & PMEnable))
+ goto out_unlock;
+
+ options = RTL_R8(Config3);
+ if (options & LinkUp)
+ wol->wolopts |= WAKE_PHY;
+ if (options & MagicPacket)
+ wol->wolopts |= WAKE_MAGIC;
+
+ options = RTL_R8(Config5);
+ if (options & UWF)
+ wol->wolopts |= WAKE_UCAST;
+ if (options & BWF)
+ wol->wolopts |= WAKE_BCAST;
+ if (options & MWF)
+ wol->wolopts |= WAKE_MCAST;
+
+out_unlock:
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl8169_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+ static struct {
+ u32 opt;
+ u16 reg;
+ u8 mask;
+ } cfg[] = {
+ { WAKE_ANY, Config1, PMEnable },
+ { WAKE_PHY, Config3, LinkUp },
+ { WAKE_MAGIC, Config3, MagicPacket },
+ { WAKE_UCAST, Config5, UWF },
+ { WAKE_BCAST, Config5, BWF },
+ { WAKE_MCAST, Config5, MWF },
+ { WAKE_ANY, Config5, LanWake }
+ };
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ for (i = 0; i < ARRAY_SIZE(cfg); i++) {
+ u8 options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
+ if (wol->wolopts & cfg[i].opt)
+ options |= cfg[i].mask;
+ RTL_W8(cfg[i].reg, options);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if (wol->wolopts)
+ tp->features |= RTL_FEATURE_WOL;
+ else
+ tp->features &= ~RTL_FEATURE_WOL;
+ device_set_wakeup_enable(&tp->pci_dev->dev, wol->wolopts);
+
+ spin_unlock_irq(&tp->lock);
+
+ return 0;
+}
+
+static void rtl8169_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ strcpy(info->driver, MODULENAME);
+ strcpy(info->version, RTL8169_VERSION);
+ strcpy(info->bus_info, pci_name(tp->pci_dev));
+}
+
+static int rtl8169_get_regs_len(struct net_device *dev)
+{
+ return R8169_REGS_SIZE;
+}
+
+static int rtl8169_set_speed_tbi(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int ret = 0;
+ u32 reg;
+
+ reg = RTL_R32(TBICSR);
+ if ((autoneg == AUTONEG_DISABLE) && (speed == SPEED_1000) &&
+ (duplex == DUPLEX_FULL)) {
+ RTL_W32(TBICSR, reg & ~(TBINwEnable | TBINwRestart));
+ } else if (autoneg == AUTONEG_ENABLE)
+ RTL_W32(TBICSR, reg | TBINwEnable | TBINwRestart);
+ else {
+ if (netif_msg_link(tp)) {
+ printk(KERN_WARNING "%s: "
+ "incorrect speed setting refused in TBI mode\n",
+ dev->name);
+ }
+ ret = -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+static int rtl8169_set_speed_xmii(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int auto_nego, giga_ctrl;
+
+ auto_nego = mdio_read(ioaddr, MII_ADVERTISE);
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl = mdio_read(ioaddr, MII_CTRL1000);
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+
+ if (autoneg == AUTONEG_ENABLE) {
+ auto_nego |= (ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+ } else {
+ if (speed == SPEED_10)
+ auto_nego |= ADVERTISE_10HALF | ADVERTISE_10FULL;
+ else if (speed == SPEED_100)
+ auto_nego |= ADVERTISE_100HALF | ADVERTISE_100FULL;
+ else if (speed == SPEED_1000)
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+
+ if (duplex == DUPLEX_HALF)
+ auto_nego &= ~(ADVERTISE_10FULL | ADVERTISE_100FULL);
+
+ if (duplex == DUPLEX_FULL)
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_100HALF);
+
+ /* This tweak comes straight from Realtek's driver. */
+ if ((speed == SPEED_100) && (duplex == DUPLEX_HALF) &&
+ ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16))) {
+ auto_nego = ADVERTISE_100HALF | ADVERTISE_CSMA;
+ }
+ }
+
+ /* The 8100e/8101e/8102e do Fast Ethernet only. */
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_07) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_08) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_09) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_10) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_15) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ if ((giga_ctrl & (ADVERTISE_1000FULL | ADVERTISE_1000HALF)) &&
+ netif_msg_link(tp)) {
+ printk(KERN_INFO "%s: PHY does not support 1000Mbps.\n",
+ dev->name);
+ }
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+ }
+
+ auto_nego |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_11) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_12) ||
+ (tp->mac_version >= RTL_GIGA_MAC_VER_17)) {
+ /*
+ * Wake up the PHY.
+ * Vendor specific (0x1f) and reserved (0x0e) MII registers.
+ */
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_write(ioaddr, 0x0e, 0x0000);
+ }
+
+ tp->phy_auto_nego_reg = auto_nego;
+ tp->phy_1000_ctrl_reg = giga_ctrl;
+
+ mdio_write(ioaddr, MII_ADVERTISE, auto_nego);
+ mdio_write(ioaddr, MII_CTRL1000, giga_ctrl);
+ mdio_write(ioaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
+ return 0;
+}
+
+static int rtl8169_set_speed(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret;
+
+ ret = tp->set_speed(dev, autoneg, speed, duplex);
+
+ if (netif_running(dev) && (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ mod_timer(&tp->timer, jiffies + RTL8169_PHY_TIMEOUT);
+
+ return ret;
+}
+
+static int rtl8169_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ ret = rtl8169_set_speed(dev, cmd->autoneg, cmd->speed, cmd->duplex);
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return ret;
+}
+
+static u32 rtl8169_get_rx_csum(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->cp_cmd & RxChkSum;
+}
+
+static int rtl8169_set_rx_csum(struct net_device *dev, u32 data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ if (data)
+ tp->cp_cmd |= RxChkSum;
+ else
+ tp->cp_cmd &= ~RxChkSum;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return 0;
+}
+
+#ifdef CONFIG_R8169_VLAN
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return (tp->vlgrp && vlan_tx_tag_present(skb)) ?
+ TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
+}
+
+static void rtl8169_vlan_rx_register(struct net_device *dev,
+ struct vlan_group *grp)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ tp->vlgrp = grp;
+ if (tp->vlgrp)
+ tp->cp_cmd |= RxVlan;
+ else
+ tp->cp_cmd &= ~RxVlan;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ u32 opts2 = le32_to_cpu(desc->opts2);
+ struct vlan_group *vlgrp = tp->vlgrp;
+ int ret;
+
+ if (vlgrp && (opts2 & RxVlanTag)) {
+ vlan_hwaccel_receive_skb(skb, vlgrp, swab16(opts2 & 0xffff));
+ ret = 0;
+ } else
+ ret = -1;
+ desc->opts2 = 0;
+ return ret;
+}
+
+#else /* !CONFIG_R8169_VLAN */
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return 0;
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ return -1;
+}
+
+#endif
+
+static int rtl8169_gset_tbi(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 status;
+
+ cmd->supported =
+ SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_FIBRE;
+ cmd->port = PORT_FIBRE;
+ cmd->transceiver = XCVR_INTERNAL;
+
+ status = RTL_R32(TBICSR);
+ cmd->advertising = (status & TBINwEnable) ? ADVERTISED_Autoneg : 0;
+ cmd->autoneg = !!(status & TBINwEnable);
+
+ cmd->speed = SPEED_1000;
+ cmd->duplex = DUPLEX_FULL; /* Always set */
+
+ return 0;
+}
+
+static int rtl8169_gset_xmii(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return mii_ethtool_gset(&tp->mii, cmd);
+}
+
+static int rtl8169_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ rc = tp->get_settings(dev, cmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+ return rc;
+}
+
+static void rtl8169_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+
+ if (regs->len > R8169_REGS_SIZE)
+ regs->len = R8169_REGS_SIZE;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ memcpy_fromio(p, tp->mmio_addr, regs->len);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static u32 rtl8169_get_msglevel(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->msg_enable;
+}
+
+static void rtl8169_set_msglevel(struct net_device *dev, u32 value)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ tp->msg_enable = value;
+}
+
+static const char rtl8169_gstrings[][ETH_GSTRING_LEN] = {
+ "tx_packets",
+ "rx_packets",
+ "tx_errors",
+ "rx_errors",
+ "rx_missed",
+ "align_errors",
+ "tx_single_collisions",
+ "tx_multi_collisions",
+ "unicast",
+ "broadcast",
+ "multicast",
+ "tx_aborted",
+ "tx_underrun",
+};
+
+struct rtl8169_counters {
+ __le64 tx_packets;
+ __le64 rx_packets;
+ __le64 tx_errors;
+ __le32 rx_errors;
+ __le16 rx_missed;
+ __le16 align_errors;
+ __le32 tx_one_collision;
+ __le32 tx_multi_collision;
+ __le64 rx_unicast;
+ __le64 rx_broadcast;
+ __le32 rx_multicast;
+ __le16 tx_aborted;
+ __le16 tx_underun;
+};
+
+static int rtl8169_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(rtl8169_gstrings);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void rtl8169_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct rtl8169_counters *counters;
+ dma_addr_t paddr;
+ u32 cmd;
+
+ ASSERT_RTNL();
+
+ counters = pci_alloc_consistent(tp->pci_dev, sizeof(*counters), &paddr);
+ if (!counters)
+ return;
+
+ RTL_W32(CounterAddrHigh, (u64)paddr >> 32);
+ cmd = (u64)paddr & DMA_32BIT_MASK;
+ RTL_W32(CounterAddrLow, cmd);
+ RTL_W32(CounterAddrLow, cmd | CounterDump);
+
+ while (RTL_R32(CounterAddrLow) & CounterDump) {
+ if (msleep_interruptible(1))
+ break;
+ }
+
+ RTL_W32(CounterAddrLow, 0);
+ RTL_W32(CounterAddrHigh, 0);
+
+ data[0] = le64_to_cpu(counters->tx_packets);
+ data[1] = le64_to_cpu(counters->rx_packets);
+ data[2] = le64_to_cpu(counters->tx_errors);
+ data[3] = le32_to_cpu(counters->rx_errors);
+ data[4] = le16_to_cpu(counters->rx_missed);
+ data[5] = le16_to_cpu(counters->align_errors);
+ data[6] = le32_to_cpu(counters->tx_one_collision);
+ data[7] = le32_to_cpu(counters->tx_multi_collision);
+ data[8] = le64_to_cpu(counters->rx_unicast);
+ data[9] = le64_to_cpu(counters->rx_broadcast);
+ data[10] = le32_to_cpu(counters->rx_multicast);
+ data[11] = le16_to_cpu(counters->tx_aborted);
+ data[12] = le16_to_cpu(counters->tx_underun);
+
+ pci_free_consistent(tp->pci_dev, sizeof(*counters), counters, paddr);
+}
+
+static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+ switch(stringset) {
+ case ETH_SS_STATS:
+ memcpy(data, *rtl8169_gstrings, sizeof(rtl8169_gstrings));
+ break;
+ }
+}
+
+static const struct ethtool_ops rtl8169_ethtool_ops = {
+ .get_drvinfo = rtl8169_get_drvinfo,
+ .get_regs_len = rtl8169_get_regs_len,
+ .get_link = ethtool_op_get_link,
+ .get_settings = rtl8169_get_settings,
+ .set_settings = rtl8169_set_settings,
+ .get_msglevel = rtl8169_get_msglevel,
+ .set_msglevel = rtl8169_set_msglevel,
+ .get_rx_csum = rtl8169_get_rx_csum,
+ .set_rx_csum = rtl8169_set_rx_csum,
+ .set_tx_csum = ethtool_op_set_tx_csum,
+ .set_sg = ethtool_op_set_sg,
+ .set_tso = ethtool_op_set_tso,
+ .get_regs = rtl8169_get_regs,
+ .get_wol = rtl8169_get_wol,
+ .set_wol = rtl8169_set_wol,
+ .get_strings = rtl8169_get_strings,
+ .get_sset_count = rtl8169_get_sset_count,
+ .get_ethtool_stats = rtl8169_get_ethtool_stats,
+};
+
+static void rtl8169_write_gmii_reg_bit(void __iomem *ioaddr, int reg,
+ int bitnum, int bitval)
+{
+ int val;
+
+ val = mdio_read(ioaddr, reg);
+ val = (bitval == 1) ?
+ val | (bitval << bitnum) : val & ~(0x0001 << bitnum);
+ mdio_write(ioaddr, reg, val & 0xffff);
+}
+
+static void rtl8169_get_mac_version(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * The driver currently handles the 8168Bf and the 8168Be identically
+ * but they can be identified more specifically through the test below
+ * if needed:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x500000 ? 8168Bf : 8168Be
+ *
+ * Same thing for the 8101Eb and the 8101Ec:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x200000 ? 8101Eb : 8101Ec
+ */
+ const struct {
+ u32 mask;
+ u32 val;
+ int mac_version;
+ } mac_info[] = {
+ /* 8168D family. */
+ { 0x7c800000, 0x28000000, RTL_GIGA_MAC_VER_25 },
+
+ /* 8168C family. */
+ { 0x7cf00000, 0x3ca00000, RTL_GIGA_MAC_VER_24 },
+ { 0x7cf00000, 0x3c900000, RTL_GIGA_MAC_VER_23 },
+ { 0x7cf00000, 0x3c800000, RTL_GIGA_MAC_VER_18 },
+ { 0x7c800000, 0x3c800000, RTL_GIGA_MAC_VER_24 },
+ { 0x7cf00000, 0x3c000000, RTL_GIGA_MAC_VER_19 },
+ { 0x7cf00000, 0x3c200000, RTL_GIGA_MAC_VER_20 },
+ { 0x7cf00000, 0x3c300000, RTL_GIGA_MAC_VER_21 },
+ { 0x7cf00000, 0x3c400000, RTL_GIGA_MAC_VER_22 },
+ { 0x7c800000, 0x3c000000, RTL_GIGA_MAC_VER_22 },
+
+ /* 8168B family. */
+ { 0x7cf00000, 0x38000000, RTL_GIGA_MAC_VER_12 },
+ { 0x7cf00000, 0x38500000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x38000000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x30000000, RTL_GIGA_MAC_VER_11 },
+
+ /* 8101 family. */
+ { 0x7cf00000, 0x34a00000, RTL_GIGA_MAC_VER_09 },
+ { 0x7cf00000, 0x24a00000, RTL_GIGA_MAC_VER_09 },
+ { 0x7cf00000, 0x34900000, RTL_GIGA_MAC_VER_08 },
+ { 0x7cf00000, 0x24900000, RTL_GIGA_MAC_VER_08 },
+ { 0x7cf00000, 0x34800000, RTL_GIGA_MAC_VER_07 },
+ { 0x7cf00000, 0x24800000, RTL_GIGA_MAC_VER_07 },
+ { 0x7cf00000, 0x34000000, RTL_GIGA_MAC_VER_13 },
+ { 0x7cf00000, 0x34300000, RTL_GIGA_MAC_VER_10 },
+ { 0x7cf00000, 0x34200000, RTL_GIGA_MAC_VER_16 },
+ { 0x7c800000, 0x34800000, RTL_GIGA_MAC_VER_09 },
+ { 0x7c800000, 0x24800000, RTL_GIGA_MAC_VER_09 },
+ { 0x7c800000, 0x34000000, RTL_GIGA_MAC_VER_16 },
+ /* FIXME: where did these entries come from ? -- FR */
+ { 0xfc800000, 0x38800000, RTL_GIGA_MAC_VER_15 },
+ { 0xfc800000, 0x30800000, RTL_GIGA_MAC_VER_14 },
+
+ /* 8110 family. */
+ { 0xfc800000, 0x98000000, RTL_GIGA_MAC_VER_06 },
+ { 0xfc800000, 0x18000000, RTL_GIGA_MAC_VER_05 },
+ { 0xfc800000, 0x10000000, RTL_GIGA_MAC_VER_04 },
+ { 0xfc800000, 0x04000000, RTL_GIGA_MAC_VER_03 },
+ { 0xfc800000, 0x00800000, RTL_GIGA_MAC_VER_02 },
+ { 0xfc800000, 0x00000000, RTL_GIGA_MAC_VER_01 },
+
+ { 0x00000000, 0x00000000, RTL_GIGA_MAC_VER_01 } /* Catch-all */
+ }, *p = mac_info;
+ u32 reg;
+
+ reg = RTL_R32(TxConfig);
+ while ((reg & p->mask) != p->val)
+ p++;
+ tp->mac_version = p->mac_version;
+
+ if (p->mask == 0x00000000) {
+ struct pci_dev *pdev = tp->pci_dev;
+
+ dev_info(&pdev->dev, "unknown MAC (%08x)\n", reg);
+ }
+}
+
+static void rtl8169_print_mac_version(struct rtl8169_private *tp)
+{
+ dprintk("mac_version = 0x%02x\n", tp->mac_version);
+}
+
+struct phy_reg {
+ u16 reg;
+ u16 val;
+};
+
+static void rtl_phy_write(void __iomem *ioaddr, struct phy_reg *regs, int len)
+{
+ while (len-- > 0) {
+ mdio_write(ioaddr, regs->reg, regs->val);
+ regs++;
+ }
+}
+
+static void rtl8169s_hw_phy_config(void __iomem *ioaddr)
+{
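+ /* The "//w <reg> <msb> <lsb> <val>" annotations below appear to mirror
+ * Realtek's scripted PHY writes: <val> goes into bits <msb>..<lsb> of
+ * PHY register <reg>. */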
+ struct {
+ u16 regs[5]; /* Beware of bit-sign propagation */
+ } phy_magic[5] = { {
+ { 0x0000, //w 4 15 12 0
+ 0x00a1, //w 3 15 0 00a1
+ 0x0008, //w 2 15 0 0008
+ 0x1020, //w 1 15 0 1020
+ 0x1000 } },{ //w 0 15 0 1000
+ { 0x7000, //w 4 15 12 7
+ 0xff41, //w 3 15 0 ff41
+ 0xde60, //w 2 15 0 de60
+ 0x0140, //w 1 15 0 0140
+ 0x0077 } },{ //w 0 15 0 0077
+ { 0xa000, //w 4 15 12 a
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xfa00 } },{ //w 0 15 0 fa00
+ { 0xb000, //w 4 15 12 b
+ 0xff41, //w 3 15 0 ff41
+ 0xde20, //w 2 15 0 de20
+ 0x0140, //w 1 15 0 0140
+ 0x00bb } },{ //w 0 15 0 00bb
+ { 0xf000, //w 4 15 12 f
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xbf00 } //w 0 15 0 bf00
+ }
+ }, *p = phy_magic;
+ unsigned int i;
+
+ mdio_write(ioaddr, 0x1f, 0x0001); //w 31 2 0 1
+ mdio_write(ioaddr, 0x15, 0x1000); //w 21 15 0 1000
+ mdio_write(ioaddr, 0x18, 0x65c7); //w 24 15 0 65c7
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+
+ for (i = 0; i < ARRAY_SIZE(phy_magic); i++, p++) {
+ int val, pos = 4;
+
+ val = (mdio_read(ioaddr, pos) & 0x0fff) | (p->regs[0] & 0xffff);
+ mdio_write(ioaddr, pos, val);
+ while (--pos >= 0)
+ mdio_write(ioaddr, pos, p->regs[4 - pos] & 0xffff);
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 1); //w 4 11 11 1
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+ }
+ mdio_write(ioaddr, 0x1f, 0x0000); //w 31 2 0 0
+}
+
+static void rtl8169sb_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0002 },
+ { 0x01, 0x90d0 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168bb_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x10, 0xf41b },
+ { 0x1f, 0x0000 }
+ };
+
+ mdio_write(ioaddr, 0x1f, 0x0001);
+ mdio_patch(ioaddr, 0x16, 1 << 0);
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168bef_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x10, 0xf41b },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cp_1_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0000 },
+ { 0x1d, 0x0f00 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x1ec8 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cp_2_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x1d, 0x3d98 },
+ { 0x1f, 0x0000 }
+ };
+
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168c_1_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x1f, 0x0002 },
+ { 0x00, 0x88d4 },
+ { 0x01, 0x82b1 },
+ { 0x03, 0x7002 },
+ { 0x08, 0x9e30 },
+ { 0x09, 0x01f0 },
+ { 0x0a, 0x5500 },
+ { 0x0c, 0x00c8 },
+ { 0x1f, 0x0003 },
+ { 0x12, 0xc096 },
+ { 0x16, 0x000a },
+ { 0x1f, 0x0000 },
+ { 0x1f, 0x0000 },
+ { 0x09, 0x2000 },
+ { 0x09, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8168c_2_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x03, 0x802f },
+ { 0x02, 0x4f02 },
+ { 0x01, 0x0409 },
+ { 0x00, 0xf099 },
+ { 0x04, 0x9800 },
+ { 0x04, 0x9000 },
+ { 0x1d, 0x3d98 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x7eb8 },
+ { 0x06, 0x0761 },
+ { 0x1f, 0x0003 },
+ { 0x16, 0x0f0a },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+
+ mdio_patch(ioaddr, 0x16, 1 << 0);
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8168c_3_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x1d, 0x3d98 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x7eb8 },
+ { 0x06, 0x5461 },
+ { 0x1f, 0x0003 },
+ { 0x16, 0x0f0a },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+
+ mdio_patch(ioaddr, 0x16, 1 << 0);
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8168c_4_hw_phy_config(void __iomem *ioaddr)
+{
+ rtl8168c_3_hw_phy_config(ioaddr);
+}
+
+static void rtl8168d_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init_0[] = {
+ { 0x1f, 0x0001 },
+ { 0x09, 0x2770 },
+ { 0x08, 0x04d0 },
+ { 0x0b, 0xad15 },
+ { 0x0c, 0x5bf0 },
+ { 0x1c, 0xf101 },
+ { 0x1f, 0x0003 },
+ { 0x14, 0x94d7 },
+ { 0x12, 0xf4d6 },
+ { 0x09, 0xca0f },
+ { 0x1f, 0x0002 },
+ { 0x0b, 0x0b10 },
+ { 0x0c, 0xd1f7 },
+ { 0x1f, 0x0002 },
+ { 0x06, 0x5461 },
+ { 0x1f, 0x0002 },
+ { 0x05, 0x6662 },
+ { 0x1f, 0x0000 },
+ { 0x14, 0x0060 },
+ { 0x1f, 0x0000 },
+ { 0x0d, 0xf8a0 },
+ { 0x1f, 0x0005 },
+ { 0x05, 0xffc2 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init_0, ARRAY_SIZE(phy_reg_init_0));
+
+ if (mdio_read(ioaddr, 0x06) == 0xc400) {
+ struct phy_reg phy_reg_init_1[] = {
+ { 0x1f, 0x0005 },
+ { 0x01, 0x0300 },
+ { 0x1f, 0x0000 },
+ { 0x11, 0x401c },
+ { 0x16, 0x4100 },
+ { 0x1f, 0x0005 },
+ { 0x07, 0x0010 },
+ { 0x05, 0x83dc },
+ { 0x06, 0x087d },
+ { 0x05, 0x8300 },
+ { 0x06, 0x0101 },
+ { 0x06, 0x05f8 },
+ { 0x06, 0xf9fa },
+ { 0x06, 0xfbef },
+ { 0x06, 0x79e2 },
+ { 0x06, 0x835f },
+ { 0x06, 0xe0f8 },
+ { 0x06, 0x9ae1 },
+ { 0x06, 0xf89b },
+ { 0x06, 0xef31 },
+ { 0x06, 0x3b65 },
+ { 0x06, 0xaa07 },
+ { 0x06, 0x81e4 },
+ { 0x06, 0xf89a },
+ { 0x06, 0xe5f8 },
+ { 0x06, 0x9baf },
+ { 0x06, 0x06ae },
+ { 0x05, 0x83dc },
+ { 0x06, 0x8300 },
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init_1,
+ ARRAY_SIZE(phy_reg_init_1));
+ }
+
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8102e_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0003 },
+ { 0x08, 0x441d },
+ { 0x01, 0x9100 },
+ { 0x1f, 0x0000 }
+ };
+
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_patch(ioaddr, 0x11, 1 << 12);
+ mdio_patch(ioaddr, 0x19, 1 << 13);
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl_hw_phy_config(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl8169_print_mac_version(tp);
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_01:
+ break;
+ case RTL_GIGA_MAC_VER_02:
+ case RTL_GIGA_MAC_VER_03:
+ rtl8169s_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_04:
+ rtl8169sb_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_07:
+ case RTL_GIGA_MAC_VER_08:
+ case RTL_GIGA_MAC_VER_09:
+ rtl8102e_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_11:
+ rtl8168bb_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_12:
+ rtl8168bef_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_17:
+ rtl8168bef_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_18:
+ rtl8168cp_1_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_19:
+ rtl8168c_1_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_20:
+ rtl8168c_2_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_21:
+ rtl8168c_3_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_22:
+ rtl8168c_4_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_23:
+ case RTL_GIGA_MAC_VER_24:
+ rtl8168cp_2_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_25:
+ rtl8168d_hw_phy_config(ioaddr);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static void rtl8169_phy_timer(unsigned long __opaque)
+{
+ struct net_device *dev = (struct net_device *)__opaque;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long timeout = RTL8169_PHY_TIMEOUT;
+
+ assert(tp->mac_version > RTL_GIGA_MAC_VER_01);
+
+ if (!(tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ return;
+
+ if (!tp->ecdev)
+ spin_lock_irq(&tp->lock);
+
+ if (tp->phy_reset_pending(ioaddr)) {
+ /*
+ * A busy loop could burn quite a few cycles on a modern CPU.
+ * Let's delay the execution of the timer for a few ticks.
+ */
+ timeout = HZ/10;
+ goto out_mod_timer;
+ }
+
+ if (tp->link_ok(ioaddr))
+ goto out_unlock;
+
+ if (netif_msg_link(tp))
+ printk(KERN_WARNING "%s: PHY reset until link up\n", dev->name);
+
+ tp->phy_reset_enable(ioaddr);
+
+out_mod_timer:
+ if (!tp->ecdev)
+ mod_timer(timer, jiffies + timeout);
+out_unlock:
+ if (!tp->ecdev)
+ spin_unlock_irq(&tp->lock);
+}
+
+static inline void rtl8169_delete_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
+ if (tp->ecdev || tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ del_timer_sync(timer);
+}
+
+static inline void rtl8169_request_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
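+ /* The PHY watchdog timer is only armed when the standard network stack
+ * drives the device; it is not used for the original 8169 or for a
+ * device claimed by the EtherCAT master. */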
+ if (tp->ecdev || tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ mod_timer(timer, jiffies + RTL8169_PHY_TIMEOUT);
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/*
+ * Polling 'interrupt' - used by things like netconsole to send skbs
+ * without having to re-enable interrupts. It's not called while
+ * the interrupt routine is executing.
+ */
+static void rtl8169_netpoll(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ disable_irq(pdev->irq);
+ rtl8169_interrupt(pdev->irq, dev);
+ enable_irq(pdev->irq);
+}
+#endif
+
+static void rtl8169_release_board(struct pci_dev *pdev, struct net_device *dev,
+ void __iomem *ioaddr)
+{
+ iounmap(ioaddr);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ free_netdev(dev);
+}
+
+static void rtl8169_phy_reset(struct net_device *dev,
+ struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ tp->phy_reset_enable(ioaddr);
+ for (i = 0; i < 100; i++) {
+ if (!tp->phy_reset_pending(ioaddr))
+ return;
+ msleep(1);
+ }
+ if (netif_msg_link(tp))
+ printk(KERN_ERR "%s: PHY reset failed.\n", dev->name);
+}
+
+static void rtl8169_init_phy(struct net_device *dev, struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl_hw_phy_config(dev);
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+ }
+
+ pci_write_config_byte(tp->pci_dev, PCI_LATENCY_TIMER, 0x40);
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
+ pci_write_config_byte(tp->pci_dev, PCI_CACHE_LINE_SIZE, 0x08);
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_02) {
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+ dprintk("Set PHY Reg 0x0bh = 0x00h\n");
+ mdio_write(ioaddr, 0x0b, 0x0000); //w 0x0b 15 0 0
+ }
+
+ rtl8169_phy_reset(dev, tp);
+
+ /*
+ * rtl8169_set_speed_xmii takes good care of the Fast-Ethernet-only
+ * 8101. Don't panic.
+ */
+ rtl8169_set_speed(dev, AUTONEG_ENABLE, SPEED_1000, DUPLEX_FULL);
+
+ if ((RTL_R8(PHYstatus) & TBI_Enable) && netif_msg_link(tp))
+ printk(KERN_INFO PFX "%s: TBI auto-negotiating\n", dev->name);
+}
+
+static void rtl_rar_set(struct rtl8169_private *tp, u8 *addr)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 high;
+ u32 low;
+
+ low = addr[0] | (addr[1] << 8) | (addr[2] << 16) | (addr[3] << 24);
+ high = addr[4] | (addr[5] << 8);
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W32(MAC0, low);
+ RTL_W32(MAC4, high);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl_set_mac_address(struct net_device *dev, void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+
+ rtl_rar_set(tp, dev->dev_addr);
+
+ return 0;
+}
+
+static int rtl8169_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct mii_ioctl_data *data = if_mii(ifr);
+
+ if (!netif_running(dev))
+ return -ENODEV;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ data->phy_id = 32; /* Internal PHY */
+ return 0;
+
+ case SIOCGMIIREG:
+ data->val_out = mdio_read(tp->mmio_addr, data->reg_num & 0x1f);
+ return 0;
+
+ case SIOCSMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ mdio_write(tp->mmio_addr, data->reg_num & 0x1f, data->val_in);
+ return 0;
+ }
+ return -EOPNOTSUPP;
+}
+
+static const struct rtl_cfg_info {
+ void (*hw_start)(struct net_device *);
+ unsigned int region;
+ unsigned int align;
+ u16 intr_event;
+ u16 napi_event;
+ unsigned features;
+} rtl_cfg_infos [] = {
+ [RTL_CFG_0] = {
+ .hw_start = rtl_hw_start_8169,
+ .region = 1,
+ .align = 0,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .features = RTL_FEATURE_GMII
+ },
+ [RTL_CFG_1] = {
+ .hw_start = rtl_hw_start_8168,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ TxErr | TxOK | RxOK | RxErr,
+ .napi_event = TxErr | TxOK | RxOK | RxOverflow,
+ .features = RTL_FEATURE_GMII | RTL_FEATURE_MSI
+ },
+ [RTL_CFG_2] = {
+ .hw_start = rtl_hw_start_8101,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow | PCSTimeout |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .features = RTL_FEATURE_MSI
+ }
+};
+
+/* Cfg9346_Unlock assumed. */
+static unsigned rtl_try_msi(struct pci_dev *pdev, void __iomem *ioaddr,
+ const struct rtl_cfg_info *cfg)
+{
+ unsigned msi = 0;
+ u8 cfg2;
+
+ cfg2 = RTL_R8(Config2) & ~MSIEnable;
+ if (cfg->features & RTL_FEATURE_MSI) {
+ if (pci_enable_msi(pdev)) {
+ dev_info(&pdev->dev, "no MSI. Back to INTx.\n");
+ } else {
+ cfg2 |= MSIEnable;
+ msi = RTL_FEATURE_MSI;
+ }
+ }
+ RTL_W8(Config2, cfg2);
+ return msi;
+}
+
+static void rtl_disable_msi(struct pci_dev *pdev, struct rtl8169_private *tp)
+{
+ if (tp->features & RTL_FEATURE_MSI) {
+ pci_disable_msi(pdev);
+ tp->features &= ~RTL_FEATURE_MSI;
+ }
+}
+
+static int __devinit
+rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
+ const unsigned int region = cfg->region;
+ struct rtl8169_private *tp;
+ struct mii_if_info *mii;
+ struct net_device *dev;
+ void __iomem *ioaddr;
+ unsigned int i;
+ int rc;
+
+ if (netif_msg_drv(&debug)) {
+ printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n",
+ MODULENAME, RTL8169_VERSION);
+ }
+
+ dev = alloc_etherdev(sizeof (*tp));
+ if (!dev) {
+ if (netif_msg_drv(&debug))
+ dev_err(&pdev->dev, "unable to alloc new ethernet\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ tp = netdev_priv(dev);
+ tp->dev = dev;
+ tp->pci_dev = pdev;
+ tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT);
+
+ mii = &tp->mii;
+ mii->dev = dev;
+ mii->mdio_read = rtl_mdio_read;
+ mii->mdio_write = rtl_mdio_write;
+ mii->phy_id_mask = 0x1f;
+ mii->reg_num_mask = 0x1f;
+ mii->supports_gmii = !!(cfg->features & RTL_FEATURE_GMII);
+
+ /* enable device (incl. PCI PM wakeup and hotplug setup) */
+ rc = pci_enable_device(pdev);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "enable failure\n");
+ goto err_out_free_dev_1;
+ }
+
+ rc = pci_set_mwi(pdev);
+ if (rc < 0)
+ goto err_out_disable_2;
+
+ /* make sure PCI base addr 1 is MMIO */
+ if (!(pci_resource_flags(pdev, region) & IORESOURCE_MEM)) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "region #%d not an MMIO resource, aborting\n",
+ region);
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ /* check for weird/broken PCI region reporting */
+ if (pci_resource_len(pdev, region) < R8169_REGS_SIZE) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "Invalid PCI region size(s), aborting\n");
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ rc = pci_request_regions(pdev, MODULENAME);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "could not request regions.\n");
+ goto err_out_mwi_3;
+ }
+
+ tp->cp_cmd = PCIMulRW | RxChkSum;
+
+ if ((sizeof(dma_addr_t) > 4) &&
+ !pci_set_dma_mask(pdev, DMA_64BIT_MASK) && use_dac) {
+ tp->cp_cmd |= PCIDAC;
+ dev->features |= NETIF_F_HIGHDMA;
+ } else {
+ rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+ if (rc < 0) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "DMA configuration failed.\n");
+ }
+ goto err_out_free_res_4;
+ }
+ }
+
+ pci_set_master(pdev);
+
+ /* ioremap MMIO region */
+ ioaddr = ioremap(pci_resource_start(pdev, region), R8169_REGS_SIZE);
+ if (!ioaddr) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "cannot remap MMIO, aborting\n");
+ rc = -EIO;
+ goto err_out_free_res_4;
+ }
+
+ tp->pcie_cap = pci_find_capability(pdev, PCI_CAP_ID_EXP);
+ if (!tp->pcie_cap && netif_msg_probe(tp))
+ dev_info(&pdev->dev, "no PCI Express capability\n");
+
+ RTL_W16(IntrMask, 0x0000);
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ RTL_W16(IntrStatus, 0xffff);
+
+ /* Identify chip attached to board */
+ rtl8169_get_mac_version(tp, ioaddr);
+
+ rtl8169_print_mac_version(tp);
+
+ for (i = 0; i < ARRAY_SIZE(rtl_chip_info); i++) {
+ if (tp->mac_version == rtl_chip_info[i].mac_version)
+ break;
+ }
+ if (i == ARRAY_SIZE(rtl_chip_info)) {
+ /* Unknown chip: assume array element #0, original RTL-8169 */
+ if (netif_msg_probe(tp)) {
+ dev_printk(KERN_DEBUG, &pdev->dev,
+ "unknown chip version, assuming %s\n",
+ rtl_chip_info[0].name);
+ }
+ i = 0;
+ }
+ tp->chipset = i;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W8(Config1, RTL_R8(Config1) | PMEnable);
+ RTL_W8(Config5, RTL_R8(Config5) & PMEStatus);
+ if ((RTL_R8(Config3) & (LinkUp | MagicPacket)) != 0)
+ tp->features |= RTL_FEATURE_WOL;
+ if ((RTL_R8(Config5) & (UWF | BWF | MWF)) != 0)
+ tp->features |= RTL_FEATURE_WOL;
+ tp->features |= rtl_try_msi(pdev, ioaddr, cfg);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if ((tp->mac_version <= RTL_GIGA_MAC_VER_06) &&
+ (RTL_R8(PHYstatus) & TBI_Enable)) {
+ tp->set_speed = rtl8169_set_speed_tbi;
+ tp->get_settings = rtl8169_gset_tbi;
+ tp->phy_reset_enable = rtl8169_tbi_reset_enable;
+ tp->phy_reset_pending = rtl8169_tbi_reset_pending;
+ tp->link_ok = rtl8169_tbi_link_ok;
+
+ tp->phy_1000_ctrl_reg = ADVERTISE_1000FULL; /* Implied by TBI */
+ } else {
+ tp->set_speed = rtl8169_set_speed_xmii;
+ tp->get_settings = rtl8169_gset_xmii;
+ tp->phy_reset_enable = rtl8169_xmii_reset_enable;
+ tp->phy_reset_pending = rtl8169_xmii_reset_pending;
+ tp->link_ok = rtl8169_xmii_link_ok;
+
+ dev->do_ioctl = rtl8169_ioctl;
+ }
+
+ spin_lock_init(&tp->lock);
+
+ tp->mmio_addr = ioaddr;
+
+ /* Get MAC address */
+ for (i = 0; i < MAC_ADDR_LEN; i++)
+ dev->dev_addr[i] = RTL_R8(MAC0 + i);
+ memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
+
+ dev->open = rtl8169_open;
+ dev->hard_start_xmit = rtl8169_start_xmit;
+ dev->get_stats = rtl8169_get_stats;
+ SET_ETHTOOL_OPS(dev, &rtl8169_ethtool_ops);
+ dev->stop = rtl8169_close;
+ dev->tx_timeout = rtl8169_tx_timeout;
+ dev->set_multicast_list = rtl_set_rx_mode;
+ dev->watchdog_timeo = RTL8169_TX_TIMEOUT;
+ dev->irq = pdev->irq;
+ dev->base_addr = (unsigned long) ioaddr;
+ dev->change_mtu = rtl8169_change_mtu;
+ dev->set_mac_address = rtl_set_mac_address;
+
+ netif_napi_add(dev, &tp->napi, rtl8169_poll, R8169_NAPI_WEIGHT);
+
+#ifdef CONFIG_R8169_VLAN
+ dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+ dev->vlan_rx_register = rtl8169_vlan_rx_register;
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ dev->poll_controller = rtl8169_netpoll;
+#endif
+
+ tp->intr_mask = 0xffff;
+ tp->align = cfg->align;
+ tp->hw_start = cfg->hw_start;
+ tp->intr_event = cfg->intr_event;
+ tp->napi_event = cfg->napi_event;
+
+ init_timer(&tp->timer);
+ tp->timer.data = (unsigned long) dev;
+ tp->timer.function = rtl8169_phy_timer;
+
+ // offer device to EtherCAT master module
+ tp->ecdev = ecdev_offer(dev, ec_poll, THIS_MODULE);
+
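+ /* Register with the network stack only if the EtherCAT master did not
+ * claim the device; a claimed device is driven through ec_poll instead. */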
+ if (!tp->ecdev) {
+ rc = register_netdev(dev);
+ if (rc < 0)
+ goto err_out_msi_5;
+ }
+
+ pci_set_drvdata(pdev, dev);
+
+ if (netif_msg_probe(tp)) {
+ u32 xid = RTL_R32(TxConfig) & 0x7cf0f8ff;
+
+ printk(KERN_INFO "%s: %s at 0x%lx, "
+ "%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x, "
+ "XID %08x IRQ %d\n",
+ dev->name,
+ rtl_chip_info[tp->chipset].name,
+ dev->base_addr,
+ dev->dev_addr[0], dev->dev_addr[1],
+ dev->dev_addr[2], dev->dev_addr[3],
+ dev->dev_addr[4], dev->dev_addr[5], xid, dev->irq);
+ }
+
+ rtl8169_init_phy(dev, tp);
+ device_set_wakeup_enable(&pdev->dev, tp->features & RTL_FEATURE_WOL);
+ if (tp->ecdev && ecdev_open(tp->ecdev)) {
+ ecdev_withdraw(tp->ecdev);
+ rc = -EIO; /* do not report probe success after withdrawing the device */
+ goto err_out_msi_5;
+ }
+
+out:
+ return rc;
+
+err_out_msi_5:
+ rtl_disable_msi(pdev, tp);
+ iounmap(ioaddr);
+err_out_free_res_4:
+ pci_release_regions(pdev);
+err_out_mwi_3:
+ pci_clear_mwi(pdev);
+err_out_disable_2:
+ pci_disable_device(pdev);
+err_out_free_dev_1:
+ free_netdev(dev);
+ goto out;
+}
+
+static void __devexit rtl8169_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ flush_scheduled_work();
+
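+ /* A device claimed by the EtherCAT master was never registered as a
+ * netdev: close it and hand it back to the master instead. */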
+ if (tp->ecdev) {
+ ecdev_close(tp->ecdev);
+ ecdev_withdraw(tp->ecdev);
+ } else {
+ unregister_netdev(dev);
+ }
+ rtl_disable_msi(pdev, tp);
+ rtl8169_release_board(pdev, dev, tp->mmio_addr);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static void rtl8169_set_rxbufsize(struct rtl8169_private *tp,
+ struct net_device *dev)
+{
+ unsigned int mtu = dev->mtu;
+
+ tp->rx_buf_sz = (mtu > RX_BUF_SIZE) ? mtu + ETH_HLEN + 8 : RX_BUF_SIZE;
+}
+
+static int rtl8169_open(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ int retval = -ENOMEM;
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ /*
+ * Rx and Tx descriptors need 256-byte alignment.
+ * pci_alloc_consistent provides more than that.
+ */
+ tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES,
+ &tp->TxPhyAddr);
+ if (!tp->TxDescArray)
+ goto out;
+
+ tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES,
+ &tp->RxPhyAddr);
+ if (!tp->RxDescArray)
+ goto err_free_tx_0;
+
+ retval = rtl8169_init_ring(dev);
+ if (retval < 0)
+ goto err_free_rx_1;
+
+ INIT_DELAYED_WORK(&tp->task, NULL);
+
+ smp_mb();
+
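+ /* A device claimed by the EtherCAT master is polled via ec_poll, so
+ * neither an interrupt handler nor NAPI is set up for it. */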
+ if (!tp->ecdev) {
+ retval = request_irq(dev->irq, rtl8169_interrupt,
+ (tp->features & RTL_FEATURE_MSI) ? 0 : IRQF_SHARED,
+ dev->name, dev);
+ if (retval < 0)
+ goto err_release_ring_2;
+
+ napi_enable(&tp->napi);
+
+ }
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+out:
+ return retval;
+
+err_release_ring_2:
+ rtl8169_rx_clear(tp);
+err_free_rx_1:
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+err_free_tx_0:
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ goto out;
+}
+
+static void rtl8169_hw_reset(void __iomem *ioaddr)
+{
+ /* Disable interrupts */
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ /* Reset the chipset */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* PCI commit */
+ RTL_R8(ChipCmd);
+}
+
+static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 cfg = rtl8169_rx_config;
+
+ cfg |= (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+ RTL_W32(RxConfig, cfg);
+
+ /* Set DMA burst size and Interframe Gap Time */
+ RTL_W32(TxConfig, (TX_DMA_BURST << TxDMAShift) |
+ (InterFrameGap << TxInterFrameGapShift));
+}
+
+static void rtl_hw_start(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ tp->hw_start(dev);
+
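+ /* The TX queue is only used when the network stack drives the device. */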
+ if (!tp->ecdev)
+ netif_start_queue(dev);
+}
+
+static void rtl_set_rx_tx_desc_registers(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * Magic spell: some iop3xx ARM board needs the TxDescAddrHigh
+ * register to be written before TxDescAddrLow to work.
+ * Switching from MMIO to I/O access fixes the issue as well.
+ */
+ RTL_W32(TxDescStartAddrHigh, ((u64) tp->TxPhyAddr) >> 32);
+ RTL_W32(TxDescStartAddrLow, ((u64) tp->TxPhyAddr) & DMA_32BIT_MASK);
+ RTL_W32(RxDescAddrHigh, ((u64) tp->RxPhyAddr) >> 32);
+ RTL_W32(RxDescAddrLow, ((u64) tp->RxPhyAddr) & DMA_32BIT_MASK);
+}
+
+static u16 rtl_rw_cpluscmd(void __iomem *ioaddr)
+{
+ u16 cmd;
+
+ cmd = RTL_R16(CPlusCmd);
+ RTL_W16(CPlusCmd, cmd);
+ return cmd;
+}
+
+static void rtl_set_rx_max_size(void __iomem *ioaddr)
+{
+ /* Low hurts. Let's disable the filtering. */
+ RTL_W16(RxMaxSize, 16383);
+}
+
+static void rtl8169_set_magic_reg(void __iomem *ioaddr, unsigned mac_version)
+{
+ struct {
+ u32 mac_version;
+ u32 clk;
+ u32 val;
+ } cfg2_info [] = {
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_33MHz, 0x000fff00 }, // 8110SCd
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_66MHz, 0x000fffff },
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_33MHz, 0x00ffff00 }, // 8110SCe
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_66MHz, 0x00ffffff }
+ }, *p = cfg2_info;
+ unsigned int i;
+ u32 clk;
+
+ clk = RTL_R8(Config2) & PCI_Clock_66MHz;
+ for (i = 0; i < ARRAY_SIZE(cfg2_info); i++, p++) {
+ if ((p->mac_version == mac_version) && (p->clk == clk)) {
+ RTL_W32(0x7c, p->val);
+ break;
+ }
+ }
+}
+
+static void rtl_hw_start_8169(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ printk(KERN_INFO "%s\n", __func__);
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_05) {
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) | PCIMulRW);
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x08);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ rtl_set_rx_tx_config_registers(tp);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03)) {
+ dprintk("Set MAC Reg C+CR Offset 0xE0. "
+ "Bit-3 and bit-14 MUST be 1\n");
+ tp->cp_cmd |= (1 << 14);
+ }
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ rtl8169_set_magic_reg(ioaddr, tp->mac_version);
+
+ /*
+ * Undocumented corner. Supposedly:
+ * (TxTimer << 12) | (TxPackets << 8) | (RxTimer << 4) | RxPackets
+ */
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ if ((tp->mac_version != RTL_GIGA_MAC_VER_01) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_02) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_03) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_04)) {
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ /* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ /* no early-rx interrupts */
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
+ /* Enable all known interrupts by setting the interrupt mask,
+ * unless the device is polled by the EtherCAT master. */
+ if (!tp->ecdev)
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static void rtl_tx_performance_tweak(struct pci_dev *pdev, u16 force)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int cap = tp->pcie_cap;
+
+ if (cap) {
+ u16 ctl;
+
+ pci_read_config_word(pdev, cap + PCI_EXP_DEVCTL, &ctl);
+ ctl = (ctl & ~PCI_EXP_DEVCTL_READRQ) | force;
+ pci_write_config_word(pdev, cap + PCI_EXP_DEVCTL, ctl);
+ }
+}
+
+static void rtl_csi_access_enable(void __iomem *ioaddr)
+{
+ u32 csi;
+
+ csi = rtl_csi_read(ioaddr, 0x070c) & 0x00ffffff;
+ rtl_csi_write(ioaddr, 0x070c, csi | 0x27000000);
+}
+
+struct ephy_info {
+ unsigned int offset;
+ u16 mask;
+ u16 bits;
+};
+
+static void rtl_ephy_init(void __iomem *ioaddr, struct ephy_info *e, int len)
+{
+ u16 w;
+
+ while (len-- > 0) {
+ w = (rtl_ephy_read(ioaddr, e->offset) & ~e->mask) | e->bits;
+ rtl_ephy_write(ioaddr, e->offset, w);
+ e++;
+ }
+}
+
+static void rtl_disable_clock_request(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int cap = tp->pcie_cap;
+
+ if (cap) {
+ u16 ctl;
+
+ pci_read_config_word(pdev, cap + PCI_EXP_LNKCTL, &ctl);
+ ctl &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
+ pci_write_config_word(pdev, cap + PCI_EXP_LNKCTL, ctl);
+ }
+}
+
+#define R8168_CPCMD_QUIRK_MASK (\
+ EnableBist | \
+ Mac_dbgo_oe | \
+ Force_half_dup | \
+ Force_rxflow_en | \
+ Force_txflow_en | \
+ Cxpl_dbg_sel | \
+ ASF | \
+ PktCntrDisable | \
+ Mac_dbgo_sel)
+
+static void rtl_hw_start_8168bb(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+
+ rtl_tx_performance_tweak(pdev,
+ (0x5 << MAX_READ_REQUEST_SHIFT) | PCI_EXP_DEVCTL_NOSNOOP_EN);
+}
+
+static void rtl_hw_start_8168bef(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_hw_start_8168bb(ioaddr, pdev);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ RTL_W8(Config4, RTL_R8(Config4) & ~(1 << 0));
+}
+
+static void __rtl_hw_start_8168cp(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ RTL_W8(Config1, RTL_R8(Config1) | Speed_down);
+
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ rtl_disable_clock_request(pdev);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168cp_1(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8168cp[] = {
+ { 0x01, 0, 0x0001 },
+ { 0x02, 0x0800, 0x1000 },
+ { 0x03, 0, 0x0042 },
+ { 0x06, 0x0080, 0x0000 },
+ { 0x07, 0, 0x2000 }
+ };
+
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_ephy_init(ioaddr, e_info_8168cp, ARRAY_SIZE(e_info_8168cp));
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168cp_2(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168cp_3(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ /* Magic. */
+ RTL_W8(DBG_REG, 0x20);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168c_1(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8168c_1[] = {
+ { 0x02, 0x0800, 0x1000 },
+ { 0x03, 0, 0x0002 },
+ { 0x06, 0x0080, 0x0000 }
+ };
+
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(DBG_REG, 0x06 | FIX_NAK_1 | FIX_NAK_2);
+
+ rtl_ephy_init(ioaddr, e_info_8168c_1, ARRAY_SIZE(e_info_8168c_1));
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168c_2(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8168c_2[] = {
+ { 0x01, 0, 0x0001 },
+ { 0x03, 0x0400, 0x0220 }
+ };
+
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_ephy_init(ioaddr, e_info_8168c_2, ARRAY_SIZE(e_info_8168c_2));
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168c_3(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_hw_start_8168c_2(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168c_4(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168d(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_disable_clock_request(pdev);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ tp->cp_cmd |= RTL_R16(CPlusCmd) | PktCntrDisable | INTT_1;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ RTL_W16(IntrMitigate, 0x5151);
+
+ /* Workaround for RxFIFO overflow. */
+ if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+ tp->intr_event |= RxFIFOOver | PCSTimeout;
+ tp->intr_event &= ~RxOverflow;
+ }
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W32(TxConfig, (TX_DMA_BURST << TxDMAShift) |
+ (InterFrameGap << TxInterFrameGapShift));
+
+ RTL_R8(IntrMask);
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_11:
+ rtl_hw_start_8168bb(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_12:
+ case RTL_GIGA_MAC_VER_17:
+ rtl_hw_start_8168bef(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_18:
+ rtl_hw_start_8168cp_1(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_19:
+ rtl_hw_start_8168c_1(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_20:
+ rtl_hw_start_8168c_2(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_21:
+ rtl_hw_start_8168c_3(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_22:
+ rtl_hw_start_8168c_4(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_23:
+ rtl_hw_start_8168cp_2(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_24:
+ rtl_hw_start_8168cp_3(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_25:
+ rtl_hw_start_8168d(ioaddr, pdev);
+ break;
+
+ default:
+ printk(KERN_ERR PFX "%s: unknown chipset (mac_version = %d).\n",
+ dev->name, tp->mac_version);
+ break;
+ }
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
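+ /* Keep interrupts masked when the EtherCAT master polls the device. */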
+ if (!tp->ecdev)
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+#define R810X_CPCMD_QUIRK_MASK (\
+ EnableBist | \
+ Mac_dbgo_oe | \
+ Force_half_dup | \
+ Force_rxflow_en | \
+ Force_txflow_en | \
+ Cxpl_dbg_sel | \
+ ASF | \
+ PktCntrDisable | \
+ PCIDAC | \
+ PCIMulRW)
+
+static void rtl_hw_start_8102e_1(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8102e_1[] = {
+ { 0x01, 0, 0x6e65 },
+ { 0x02, 0, 0x091f },
+ { 0x03, 0, 0xc2f9 },
+ { 0x06, 0, 0xafb5 },
+ { 0x07, 0, 0x0e00 },
+ { 0x19, 0, 0xec80 },
+ { 0x01, 0, 0x2e65 },
+ { 0x01, 0, 0x6e65 }
+ };
+ u8 cfg1;
+
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(DBG_REG, FIX_NAK_1);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W8(Config1,
+ LEDS1 | LEDS0 | Speed_down | MEMMAP | IOMAP | VPD | PMEnable);
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ cfg1 = RTL_R8(Config1);
+ if ((cfg1 & LEDS0) && (cfg1 & LEDS1))
+ RTL_W8(Config1, cfg1 & ~LEDS0);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R810X_CPCMD_QUIRK_MASK);
+
+ rtl_ephy_init(ioaddr, e_info_8102e_1, ARRAY_SIZE(e_info_8102e_1));
+}
+
+static void rtl_hw_start_8102e_2(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W8(Config1, MEMMAP | IOMAP | VPD | PMEnable);
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R810X_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8102e_3(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_hw_start_8102e_2(ioaddr, pdev);
+
+ rtl_ephy_write(ioaddr, 0x03, 0xc2f9);
+}
+
+static void rtl_hw_start_8101(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ int cap = tp->pcie_cap;
+
+ if (cap) {
+ pci_write_config_word(pdev, cap + PCI_EXP_DEVCTL,
+ PCI_EXP_DEVCTL_NOSNOOP_EN);
+ }
+ }
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_07:
+ rtl_hw_start_8102e_1(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_08:
+ rtl_hw_start_8102e_3(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_09:
+ rtl_hw_start_8102e_2(ioaddr, pdev);
+ break;
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_R8(IntrMask);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xf000);
+
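+ /* As above, leave interrupts masked in EtherCAT (polling) mode. */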
+ if (!tp->ecdev)
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret = 0;
+
+ if (new_mtu < ETH_ZLEN || new_mtu > SafeMtu)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ if (!netif_running(dev))
+ goto out;
+
+ rtl8169_down(dev);
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ ret = rtl8169_init_ring(dev);
+ if (ret < 0)
+ goto out;
+
+ napi_enable(&tp->napi);
+
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+out:
+ return ret;
+}
+
+static inline void rtl8169_make_unusable_by_asic(struct RxDesc *desc)
+{
+ desc->addr = cpu_to_le64(0x0badbadbadbadbadull);
+ desc->opts1 &= ~cpu_to_le32(DescOwn | RsvdMask);
+}
+
+static void rtl8169_free_rx_skb(struct rtl8169_private *tp,
+ struct sk_buff **sk_buff, struct RxDesc *desc)
+{
+ struct pci_dev *pdev = tp->pci_dev;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), tp->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(*sk_buff);
+ *sk_buff = NULL;
+ rtl8169_make_unusable_by_asic(desc);
+}
+
+static inline void rtl8169_mark_to_asic(struct RxDesc *desc, u32 rx_buf_sz)
+{
+ u32 eor = le32_to_cpu(desc->opts1) & RingEnd;
+
+ desc->opts1 = cpu_to_le32(DescOwn | eor | rx_buf_sz);
+}
+
+static inline void rtl8169_map_to_asic(struct RxDesc *desc, dma_addr_t mapping,
+ u32 rx_buf_sz)
+{
+ desc->addr = cpu_to_le64(mapping);
+ wmb();
+ rtl8169_mark_to_asic(desc, rx_buf_sz);
+}
+
+static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev,
+ struct net_device *dev,
+ struct RxDesc *desc, int rx_buf_sz,
+ unsigned int align)
+{
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+ unsigned int pad;
+
+ pad = align ? align : NET_IP_ALIGN;
+
+ skb = netdev_alloc_skb(dev, rx_buf_sz + pad);
+ if (!skb)
+ goto err_out;
+
+ skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad);
+
+ mapping = pci_map_single(pdev, skb->data, rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+
+ rtl8169_map_to_asic(desc, mapping, rx_buf_sz);
+out:
+ return skb;
+
+err_out:
+ rtl8169_make_unusable_by_asic(desc);
+ goto out;
+}
+
+static void rtl8169_rx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = 0; i < NUM_RX_DESC; i++) {
+ if (tp->Rx_skbuff[i]) {
+ rtl8169_free_rx_skb(tp, tp->Rx_skbuff + i,
+ tp->RxDescArray + i);
+ }
+ }
+}
+
+static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev,
+ u32 start, u32 end)
+{
+ u32 cur;
+
+ for (cur = start; end - cur != 0; cur++) {
+ struct sk_buff *skb;
+ unsigned int i = cur % NUM_RX_DESC;
+
+ WARN_ON((s32)(end - cur) < 0);
+
+ if (tp->Rx_skbuff[i])
+ continue;
+
+ skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev,
+ tp->RxDescArray + i,
+ tp->rx_buf_sz, tp->align);
+ if (!skb)
+ break;
+
+ tp->Rx_skbuff[i] = skb;
+ }
+ return cur - start;
+}
+
+static inline void rtl8169_mark_as_last_descriptor(struct RxDesc *desc)
+{
+ desc->opts1 |= cpu_to_le32(RingEnd);
+}
+
+static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
+{
+ tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;
+}
+
+static int rtl8169_init_ring(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ rtl8169_init_ring_indexes(tp);
+
+ memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info));
+ memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *));
+
+ if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)
+ goto err_out;
+
+ rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1);
+
+ return 0;
+
+err_out:
+ rtl8169_rx_clear(tp);
+ return -ENOMEM;
+}
+
+static void rtl8169_unmap_tx_skb(struct pci_dev *pdev, struct ring_info *tx_skb,
+ struct TxDesc *desc)
+{
+ unsigned int len = tx_skb->len;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), len, PCI_DMA_TODEVICE);
+ desc->opts1 = 0x00;
+ desc->opts2 = 0x00;
+ desc->addr = 0x00;
+ tx_skb->len = 0;
+}
+
+static void rtl8169_tx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = tp->dirty_tx; i < tp->dirty_tx + NUM_TX_DESC; i++) {
+ unsigned int entry = i % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ unsigned int len = tx_skb->len;
+
+ if (len) {
+ struct sk_buff *skb = tx_skb->skb;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb,
+ tp->TxDescArray + entry);
+ if (skb) {
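+ /* Frames queued by the EtherCAT master remain owned by the
+ * master and must not be freed here. */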
+ if (!tp->ecdev)
+ dev_kfree_skb(skb);
+ tx_skb->skb = NULL;
+ }
+ tp->dev->stats.tx_dropped++;
+ }
+ }
+ tp->cur_tx = tp->dirty_tx = 0;
+}
+
+static void rtl8169_schedule_work(struct net_device *dev, work_func_t task)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ PREPARE_DELAYED_WORK(&tp->task, task);
+ schedule_delayed_work(&tp->task, 4);
+}
+
+static void rtl8169_wait_for_quiescence(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ synchronize_irq(dev->irq);
+
+ /* Wait for any pending NAPI task to complete */
+ napi_disable(&tp->napi);
+
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ tp->intr_mask = 0xffff;
+ RTL_W16(IntrMask, tp->intr_event);
+ napi_enable(&tp->napi);
+}
+
+static void rtl8169_reinit_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+ int ret;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+ rtl8169_close(dev);
+
+ ret = rtl8169_open(dev);
+ if (unlikely(ret < 0)) {
+ if (net_ratelimit() && netif_msg_drv(tp)) {
+ printk(KERN_ERR PFX "%s: reinit failure (status = %d)."
+ " Rescheduling.\n", dev->name, ret);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_reset_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+
+ rtl8169_rx_interrupt(dev, tp, tp->mmio_addr, ~(u32)0);
+ rtl8169_tx_clear(tp);
+
+ if (tp->dirty_rx == tp->cur_rx) {
+ rtl8169_init_ring_indexes(tp);
+ rtl_hw_start(dev);
+ netif_wake_queue(dev);
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+ } else {
+ if (net_ratelimit() && netif_msg_intr(tp)) {
+ printk(KERN_EMERG PFX "%s: Rx buffers shortage\n",
+ dev->name);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_tx_timeout(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
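+ /* No watchdog recovery for a device claimed by the EtherCAT master;
+ * error handling is left to the master. */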
+ if (tp->ecdev)
+ return;
+
+ rtl8169_hw_reset(tp->mmio_addr);
+
+ /* Let's wait a bit while any (async) irq lands on */
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+}
+
+static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
+ u32 opts1)
+{
+ struct skb_shared_info *info = skb_shinfo(skb);
+ unsigned int cur_frag, entry;
+ struct TxDesc * uninitialized_var(txd);
+
+ entry = tp->cur_tx;
+ for (cur_frag = 0; cur_frag < info->nr_frags; cur_frag++) {
+ skb_frag_t *frag = info->frags + cur_frag;
+ dma_addr_t mapping;
+ u32 status, len;
+ void *addr;
+
+ entry = (entry + 1) % NUM_TX_DESC;
+
+ txd = tp->TxDescArray + entry;
+ len = frag->size;
+ addr = ((void *) page_address(frag->page)) + frag->page_offset;
+ mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE);
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+
+ txd->opts1 = cpu_to_le32(status);
+ txd->addr = cpu_to_le64(mapping);
+
+ tp->tx_skb[entry].len = len;
+ }
+
+ if (cur_frag) {
+ tp->tx_skb[entry].skb = skb;
+ txd->opts1 |= cpu_to_le32(LastFrag);
+ }
+
+ return cur_frag;
+}
+
+static inline u32 rtl8169_tso_csum(struct sk_buff *skb, struct net_device *dev)
+{
+ if (dev->features & NETIF_F_TSO) {
+ u32 mss = skb_shinfo(skb)->gso_size;
+
+ if (mss)
+ return LargeSend | ((mss & MSSMask) << MSSShift);
+ }
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ const struct iphdr *ip = ip_hdr(skb);
+
+ if (ip->protocol == IPPROTO_TCP)
+ return IPCS | TCPCS;
+ else if (ip->protocol == IPPROTO_UDP)
+ return IPCS | UDPCS;
+ WARN_ON(1); /* we need a WARN() */
+ }
+ return 0;
+}
+
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned int frags, entry = tp->cur_tx % NUM_TX_DESC;
+ struct TxDesc *txd = tp->TxDescArray + entry;
+ void __iomem *ioaddr = tp->mmio_addr;
+ dma_addr_t mapping;
+ u32 status, len;
+ u32 opts1;
+ int ret = NETDEV_TX_OK;
+
+ if (unlikely(TX_BUFFS_AVAIL(tp) < skb_shinfo(skb)->nr_frags)) {
+ if (netif_msg_drv(tp)) {
+ printk(KERN_ERR
+ "%s: BUG! Tx Ring full when queue awake!\n",
+ dev->name);
+ }
+ goto err_stop;
+ }
+
+ if (unlikely(le32_to_cpu(txd->opts1) & DescOwn))
+ goto err_stop;
+
+ opts1 = DescOwn | rtl8169_tso_csum(skb, dev);
+
+ frags = rtl8169_xmit_frags(tp, skb, opts1);
+ if (frags) {
+ len = skb_headlen(skb);
+ opts1 |= FirstFrag;
+ } else {
+ len = skb->len;
+
+ if (unlikely(len < ETH_ZLEN)) {
+ if (skb_padto(skb, ETH_ZLEN))
+ goto err_update_stats;
+ len = ETH_ZLEN;
+ }
+
+ opts1 |= FirstFrag | LastFrag;
+ tp->tx_skb[entry].skb = skb;
+ }
+
+ mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE);
+
+ tp->tx_skb[entry].len = len;
+ txd->addr = cpu_to_le64(mapping);
+ txd->opts2 = cpu_to_le32(rtl8169_tx_vlan_tag(tp, skb));
+
+ wmb();
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+ txd->opts1 = cpu_to_le32(status);
+
+ dev->trans_start = jiffies;
+
+ tp->cur_tx += frags + 1;
+
+ smp_wmb();
+
+ RTL_W8(TxPoll, NPQ); /* set polling bit */
+
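+	/* Transmit queue flow control only applies when the network stack
+	 * drives the device; in EtherCAT mode the queue is never stopped
+	 * or woken. */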
+ if (!tp->ecdev) {
+ if (TX_BUFFS_AVAIL(tp) < MAX_SKB_FRAGS) {
+ netif_stop_queue(dev);
+ smp_rmb();
+ if (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)
+ netif_wake_queue(dev);
+ }
+ }
+
+out:
+ return ret;
+
+err_stop:
+ if (!tp->ecdev)
+ netif_stop_queue(dev);
+ ret = NETDEV_TX_BUSY;
+err_update_stats:
+ dev->stats.tx_dropped++;
+ goto out;
+}
+
+static void rtl8169_pcierr_interrupt(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ u16 pci_status, pci_cmd;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+ pci_read_config_word(pdev, PCI_STATUS, &pci_status);
+
+ if (netif_msg_intr(tp)) {
+ printk(KERN_ERR
+ "%s: PCI error (cmd = 0x%04x, status = 0x%04x).\n",
+ dev->name, pci_cmd, pci_status);
+ }
+
+ /*
+ * The recovery sequence below admits a very elaborated explanation:
+ * - it seems to work;
+ * - I did not see what else could be done;
+ * - it makes iop3xx happy.
+ *
+ * Feel free to adjust to your needs.
+ */
+ if (pdev->broken_parity_status)
+ pci_cmd &= ~PCI_COMMAND_PARITY;
+ else
+ pci_cmd |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY;
+
+ pci_write_config_word(pdev, PCI_COMMAND, pci_cmd);
+
+ pci_write_config_word(pdev, PCI_STATUS,
+ pci_status & (PCI_STATUS_DETECTED_PARITY |
+ PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_REC_MASTER_ABORT |
+ PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_SIG_TARGET_ABORT));
+
+ /* The infamous DAC f*ckup only happens at boot time */
+ if ((tp->cp_cmd & PCIDAC) && !tp->dirty_rx && !tp->cur_rx) {
+ if (netif_msg_intr(tp))
+ printk(KERN_INFO "%s: disabling PCI DAC.\n", dev->name);
+ tp->cp_cmd &= ~PCIDAC;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ dev->features &= ~NETIF_F_HIGHDMA;
+ }
+
+ rtl8169_hw_reset(ioaddr);
+
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+}
+
+static void rtl8169_tx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned int dirty_tx, tx_left;
+
+ dirty_tx = tp->dirty_tx;
+ smp_rmb();
+ tx_left = tp->cur_tx - dirty_tx;
+
+ while (tx_left > 0) {
+ unsigned int entry = dirty_tx % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ u32 len = tx_skb->len;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(tp->TxDescArray[entry].opts1);
+ if (status & DescOwn)
+ break;
+
+ dev->stats.tx_bytes += len;
+ dev->stats.tx_packets++;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb, tp->TxDescArray + entry);
+
+ if (status & LastFrag) {
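+			/* In EtherCAT mode the transmit socket buffer is
+			 * managed by the master, so it is only freed when the
+			 * stack submitted it. */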
+ if (!tp->ecdev)
+ dev_kfree_skb_irq(tx_skb->skb);
+ tx_skb->skb = NULL;
+ }
+ dirty_tx++;
+ tx_left--;
+ }
+
+ if (tp->dirty_tx != dirty_tx) {
+ tp->dirty_tx = dirty_tx;
+ smp_wmb();
+ if (!tp->ecdev && netif_queue_stopped(dev) &&
+ (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) {
+ netif_wake_queue(dev);
+ }
+ /*
+ * 8168 hack: TxPoll requests are lost when the Tx packets are
+ * too close. Let's kick an extra TxPoll request when a burst
+ * of start_xmit activity is detected (if it is not detected,
+ * it is slow enough). -- FR
+ */
+ smp_rmb();
+ if (tp->cur_tx != dirty_tx)
+ RTL_W8(TxPoll, NPQ);
+ }
+}
+
+static inline int rtl8169_fragmented_frame(u32 status)
+{
+ return (status & (FirstFrag | LastFrag)) != (FirstFrag | LastFrag);
+}
+
+static inline void rtl8169_rx_csum(struct sk_buff *skb, struct RxDesc *desc)
+{
+ u32 opts1 = le32_to_cpu(desc->opts1);
+ u32 status = opts1 & RxProtoMask;
+
+ if (((status == RxProtoTCP) && !(opts1 & TCPFail)) ||
+ ((status == RxProtoUDP) && !(opts1 & UDPFail)) ||
+ ((status == RxProtoIP) && !(opts1 & IPFail)))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
+}
+
+static inline bool rtl8169_try_rx_copy(struct sk_buff **sk_buff,
+ struct rtl8169_private *tp, int pkt_size,
+ dma_addr_t addr)
+{
+ struct sk_buff *skb;
+ bool done = false;
+
+ if (pkt_size >= rx_copybreak)
+ goto out;
+
+ skb = netdev_alloc_skb(tp->dev, pkt_size + NET_IP_ALIGN);
+ if (!skb)
+ goto out;
+
+ pci_dma_sync_single_for_cpu(tp->pci_dev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+ skb_reserve(skb, NET_IP_ALIGN);
+ skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size);
+ *sk_buff = skb;
+ done = true;
+out:
+ return done;
+}
+
+static int rtl8169_rx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr, u32 budget)
+{
+ unsigned int cur_rx, rx_left;
+ unsigned int delta, count;
+
+ cur_rx = tp->cur_rx;
+ rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;
+ rx_left = min(rx_left, budget);
+
+ for (; rx_left > 0; rx_left--, cur_rx++) {
+ unsigned int entry = cur_rx % NUM_RX_DESC;
+ struct RxDesc *desc = tp->RxDescArray + entry;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(desc->opts1);
+
+ if (status & DescOwn)
+ break;
+ if (unlikely(status & RxRES)) {
+ if (netif_msg_rx_err(tp)) {
+ printk(KERN_INFO
+ "%s: Rx ERROR. status = %08x\n",
+ dev->name, status);
+ }
+ dev->stats.rx_errors++;
+ if (status & (RxRWT | RxRUNT))
+ dev->stats.rx_length_errors++;
+ if (status & RxCRC)
+ dev->stats.rx_crc_errors++;
+ if (status & RxFOVF) {
+ if (!tp->ecdev)
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ dev->stats.rx_fifo_errors++;
+ }
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ struct sk_buff *skb = tp->Rx_skbuff[entry];
+ dma_addr_t addr = le64_to_cpu(desc->addr);
+ int pkt_size = (status & 0x00001FFF) - 4;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ /*
+ * The driver does not support incoming fragmented
+ * frames. They are seen as a symptom of over-mtu
+ * sized frames.
+ */
+ if (unlikely(rtl8169_fragmented_frame(status))) {
+ dev->stats.rx_dropped++;
+ dev->stats.rx_length_errors++;
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ continue;
+ }
+
+ rtl8169_rx_csum(skb, desc);
+
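+			/* EtherCAT path: hand the frame data directly to the
+			 * master and recycle the descriptor; the skb stays in
+			 * the ring and never enters the network stack. */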
+ if (tp->ecdev) {
+ pci_dma_sync_single_for_cpu(pdev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+
+ ecdev_receive(tp->ecdev, skb->data, pkt_size);
+
+ pci_dma_sync_single_for_device(pdev, addr,
+ pkt_size, PCI_DMA_FROMDEVICE);
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+
+ // No need to detect link status as
+ // long as frames are received: Reset watchdog.
+ tp->ec_watchdog_jiffies = jiffies;
+ } else {
+ if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) {
+ pci_dma_sync_single_for_device(pdev, addr,
+ pkt_size, PCI_DMA_FROMDEVICE);
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ pci_unmap_single(pdev, addr, tp->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ tp->Rx_skbuff[entry] = NULL;
+ }
+
+ skb_put(skb, pkt_size);
+ skb->protocol = eth_type_trans(skb, dev);
+
+ if (rtl8169_rx_vlan_skb(tp, desc, skb) < 0)
+ netif_receive_skb(skb);
+ }
+
+ dev->last_rx = jiffies;
+ dev->stats.rx_bytes += pkt_size;
+ dev->stats.rx_packets++;
+ }
+
+		/* Workaround for AMD platform. */
+ if ((desc->opts2 & cpu_to_le32(0xfffe000)) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_05)) {
+ desc->opts2 = 0;
+ cur_rx++;
+ }
+ }
+
+ count = cur_rx - tp->cur_rx;
+ tp->cur_rx = cur_rx;
+
+ if (tp->ecdev) {
+ /* descriptors are cleaned up immediately. */
+ tp->dirty_rx = tp->cur_rx;
+ } else {
+ delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);
+ if (!delta && count && netif_msg_intr(tp))
+ printk(KERN_INFO "%s: no Rx buffer allocated\n", dev->name);
+ tp->dirty_rx += delta;
+
+ /*
+		 * FIXME: until there is a periodic timer to try and refill the ring,
+		 * a temporary shortage may definitely kill the Rx process.
+		 * - disable the asic to try and avoid an overflow and kick it again
+		 *   after refill ?
+		 * - how do other drivers handle this condition (Uh oh...).
+ */
+ if ((tp->dirty_rx + NUM_RX_DESC == tp->cur_rx) && netif_msg_intr(tp))
+ printk(KERN_EMERG "%s: Rx buffers exhausted\n", dev->name);
+ }
+
+ return count;
+}
+
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+{
+ struct net_device *dev = dev_instance;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int handled = 0;
+ int status;
+
+ status = RTL_R16(IntrStatus);
+
+ /* hotplug/major error/no more work/shared irq */
+ if ((status == 0xffff) || !status)
+ goto out;
+
+ handled = 1;
+
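+	/* Shut the ASIC down on a stray interrupt only if the device is
+	 * neither operated by the EtherCAT master nor up in the stack. */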
+ if (unlikely(!tp->ecdev && !netif_running(dev))) {
+ rtl8169_asic_down(ioaddr);
+ goto out;
+ }
+
+ status &= tp->intr_mask;
+ RTL_W16(IntrStatus,
+ (status & RxFIFOOver) ? (status | RxOverflow) : status);
+
+ if (!(status & tp->intr_event))
+ goto out;
+
+	/* Workaround for Rx FIFO overflow */
+ if (unlikely(status & RxFIFOOver) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_11)) {
+ netif_stop_queue(dev);
+ rtl8169_tx_timeout(dev);
+ goto out;
+ }
+
+ if (unlikely(status & SYSErr)) {
+ rtl8169_pcierr_interrupt(dev);
+ goto out;
+ }
+
+ if (status & LinkChg)
+ rtl8169_check_link_status(dev, tp, ioaddr);
+
+ if (status & tp->napi_event) {
+ RTL_W16(IntrMask, tp->intr_event & ~tp->napi_event);
+ tp->intr_mask = ~tp->napi_event;
+
+ if (likely(netif_rx_schedule_prep(dev, &tp->napi)))
+ __netif_rx_schedule(dev, &tp->napi);
+ else if (netif_msg_intr(tp)) {
+ printk(KERN_INFO "%s: interrupt %04x in poll\n",
+ dev->name, status);
+ }
+ }
+out:
+ return IRQ_RETVAL(handled);
+}
+
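+/* Cyclic EtherCAT poll function: called by the master instead of relying on
+ * hardware interrupts. It runs the normal ISR and the Rx/Tx handlers directly
+ * and periodically kicks the PHY timer as a link watchdog. */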
+static void ec_poll(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ rtl8169_interrupt(pdev->irq, dev);
+ rtl8169_rx_interrupt(dev, tp, tp->mmio_addr, 100); // FIXME
+ rtl8169_tx_interrupt(dev, tp, tp->mmio_addr);
+
+ if (jiffies - tp->ec_watchdog_jiffies >= 2 * HZ) {
+ rtl8169_phy_timer((unsigned long) dev);
+ tp->ec_watchdog_jiffies = jiffies;
+ }
+}
+
+static int rtl8169_poll(struct napi_struct *napi, int budget)
+{
+ struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
+ struct net_device *dev = tp->dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ int work_done;
+
+ work_done = rtl8169_rx_interrupt(dev, tp, ioaddr, (u32) budget);
+ rtl8169_tx_interrupt(dev, tp, ioaddr);
+
+ if (work_done < budget) {
+ netif_rx_complete(dev, napi);
+ tp->intr_mask = 0xffff;
+ /*
+ * 20040426: the barrier is not strictly required but the
+ * behavior of the irq handler could be less predictable
+ * without it. Btw, the lack of flush for the posted pci
+ * write is safe - FR
+ */
+ smp_wmb();
+ RTL_W16(IntrMask, tp->intr_event);
+ }
+
+ return work_done;
+}
+
+static void rtl8169_rx_missed(struct net_device *dev, void __iomem *ioaddr)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ if (tp->mac_version > RTL_GIGA_MAC_VER_06)
+ return;
+
+ dev->stats.rx_missed_errors += (RTL_R32(RxMissed) & 0xffffff);
+ RTL_W32(RxMissed, 0);
+}
+
+static void rtl8169_down(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int intrmask;
+
+ rtl8169_delete_timer(dev);
+
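+	/* In EtherCAT mode the stack's transmit queue, the NAPI context and
+	 * the interrupt line are unused, so the corresponding teardown steps
+	 * below are skipped. */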
+ if (!tp->ecdev) {
+ netif_stop_queue(dev);
+
+ napi_disable(&tp->napi);
+
+ }
+core_down:
+ if (!tp->ecdev)
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ rtl8169_rx_missed(dev, ioaddr);
+
+ if (!tp->ecdev)
+ spin_unlock_irq(&tp->lock);
+
+ if (!tp->ecdev)
+ synchronize_irq(dev->irq);
+
+ /* Give a racing hard_start_xmit a few cycles to complete. */
+ synchronize_sched(); /* FIXME: should this be synchronize_irq()? */
+
+ /*
+ * And now for the 50k$ question: are IRQ disabled or not ?
+ *
+ * Two paths lead here:
+ * 1) dev->close
+ * -> netif_running() is available to sync the current code and the
+ * IRQ handler. See rtl8169_interrupt for details.
+ * 2) dev->change_mtu
+ * -> rtl8169_poll can not be issued again and re-enable the
+ * interruptions. Let's simply issue the IRQ down sequence again.
+ *
+	 * No loop if hotplugged or major error (0xffff).
+ */
+ intrmask = RTL_R16(IntrMask);
+ if (intrmask && (intrmask != 0xffff))
+ goto core_down;
+
+ rtl8169_tx_clear(tp);
+
+ rtl8169_rx_clear(tp);
+}
+
+static int rtl8169_close(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ rtl8169_down(dev);
+
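+	/* The IRQ line is presumably only requested for stack operation, so it
+	 * is only released in that case. */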
+ if (!tp->ecdev)
+ free_irq(dev->irq, dev);
+
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ tp->TxDescArray = NULL;
+ tp->RxDescArray = NULL;
+
+ return 0;
+}
+
+static void rtl_set_rx_mode(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+ u32 mc_filter[2]; /* Multicast hash filter */
+ int rx_mode;
+ u32 tmp = 0;
+
+ if (dev->flags & IFF_PROMISC) {
+ /* Unconditionally log net taps. */
+ if (netif_msg_link(tp)) {
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n",
+ dev->name);
+ }
+ rx_mode =
+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys |
+ AcceptAllPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else if ((dev->mc_count > multicast_filter_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to filter perfectly -- accept all multicasts. */
+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else {
+ struct dev_mc_list *mclist;
+ unsigned int i;
+
+ rx_mode = AcceptBroadcast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0;
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
+ i++, mclist = mclist->next) {
+ int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
+ mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
+ rx_mode |= AcceptMulticast;
+ }
+ }
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ tmp = rtl8169_rx_config | rx_mode |
+ (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+
+ if (tp->mac_version > RTL_GIGA_MAC_VER_06) {
+ u32 data = mc_filter[0];
+
+ mc_filter[0] = swab32(mc_filter[1]);
+ mc_filter[1] = swab32(data);
+ }
+
+ RTL_W32(MAR0 + 0, mc_filter[0]);
+ RTL_W32(MAR0 + 4, mc_filter[1]);
+
+ RTL_W32(RxConfig, tmp);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+/**
+ * rtl8169_get_stats - Get rtl8169 read/write statistics
+ * @dev: The Ethernet Device to get statistics for
+ *
+ * Get TX/RX statistics for rtl8169
+ */
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ if (netif_running(dev)) {
+ spin_lock_irqsave(&tp->lock, flags);
+ rtl8169_rx_missed(dev, ioaddr);
+ spin_unlock_irqrestore(&tp->lock, flags);
+ }
+
+ return &dev->stats;
+}
+
+#ifdef CONFIG_PM
+
+static int rtl8169_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+	if (tp->ecdev)
+		return -EBUSY; /* device is in use by the EtherCAT master */
+
+ if (!netif_running(dev))
+ goto out_pci_suspend;
+
+ netif_device_detach(dev);
+ netif_stop_queue(dev);
+
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ rtl8169_rx_missed(dev, ioaddr);
+
+ spin_unlock_irq(&tp->lock);
+
+out_pci_suspend:
+ pci_save_state(pdev);
+ pci_enable_wake(pdev, pci_choose_state(pdev, state),
+ (tp->features & RTL_FEATURE_WOL) ? 1 : 0);
+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
+
+ return 0;
+}
+
+static int rtl8169_resume(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+	if (tp->ecdev)
+		return 0; /* nothing to do; the EtherCAT master keeps the device running */
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ pci_enable_wake(pdev, PCI_D0, 0);
+
+ if (!netif_running(dev))
+ goto out;
+
+ netif_device_attach(dev);
+
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+out:
+ return 0;
+}
+
+static void rtl_shutdown(struct pci_dev *pdev)
+{
+ rtl8169_suspend(pdev, PMSG_SUSPEND);
+}
+
+#endif /* CONFIG_PM */
+
+static struct pci_driver rtl8169_pci_driver = {
+ .name = MODULENAME,
+ .id_table = rtl8169_pci_tbl,
+ .probe = rtl8169_init_one,
+ .remove = __devexit_p(rtl8169_remove_one),
+#ifdef CONFIG_PM
+ .suspend = rtl8169_suspend,
+ .resume = rtl8169_resume,
+ .shutdown = rtl_shutdown,
+#endif
+};
+
+static int __init rtl8169_init_module(void)
+{
+ return pci_register_driver(&rtl8169_pci_driver);
+}
+
+static void __exit rtl8169_cleanup_module(void)
+{
+ pci_unregister_driver(&rtl8169_pci_driver);
+}
+
+module_init(rtl8169_init_module);
+module_exit(rtl8169_cleanup_module);
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/devices/r8169-2.6.28-orig.c Wed Jan 13 00:04:47 2010 +0100
@@ -0,0 +1,3843 @@
+/*
+ * r8169.c: RealTek 8169/8168/8101 ethernet driver.
+ *
+ * Copyright (c) 2002 ShuChen <shuchen@realtek.com.tw>
+ * Copyright (c) 2003 - 2007 Francois Romieu <romieu@fr.zoreil.com>
+ * Copyright (c) a lot of people too. Please respect their work.
+ *
+ * See MAINTAINERS file for support contact information.
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/if_vlan.h>
+#include <linux/crc32.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/init.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#define RTL8169_VERSION "2.3LK-NAPI"
+#define MODULENAME "r8169"
+#define PFX MODULENAME ": "
+
+#ifdef RTL8169_DEBUG
+#define assert(expr) \
+ if (!(expr)) { \
+ printk( "Assertion failed! %s,%s,%s,line=%d\n", \
+ #expr,__FILE__,__func__,__LINE__); \
+ }
+#define dprintk(fmt, args...) \
+ do { printk(KERN_DEBUG PFX fmt, ## args); } while (0)
+#else
+#define assert(expr) do {} while (0)
+#define dprintk(fmt, args...) do {} while (0)
+#endif /* RTL8169_DEBUG */
+
+#define R8169_MSG_DEFAULT \
+ (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN)
+
+#define TX_BUFFS_AVAIL(tp) \
+ (tp->dirty_tx + NUM_TX_DESC - tp->cur_tx - 1)
+
+/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
+static const int max_interrupt_work = 20;
+
+/* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
+ The RTL chips use a 64 element hash table based on the Ethernet CRC. */
+static const int multicast_filter_limit = 32;
+
+/* MAC address length */
+#define MAC_ADDR_LEN 6
+
+#define MAX_READ_REQUEST_SHIFT 12
+#define RX_FIFO_THRESH 7 /* 7 means NO threshold, Rx buffer level before first PCI xfer. */
+#define RX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define TX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define EarlyTxThld 0x3F /* 0x3F means NO early transmit */
+#define RxPacketMaxSize 0x3FE8 /* 16K - 1 - ETH_HLEN - VLAN - CRC... */
+#define SafeMtu 0x1c20 /* ... actually life sucks beyond ~7k */
+#define InterFrameGap 0x03 /* 3 means InterFrameGap = the shortest one */
+
+#define R8169_REGS_SIZE 256
+#define R8169_NAPI_WEIGHT 64
+#define NUM_TX_DESC 64 /* Number of Tx descriptor registers */
+#define NUM_RX_DESC 256 /* Number of Rx descriptor registers */
+#define RX_BUF_SIZE 1536 /* Rx Buffer size */
+#define R8169_TX_RING_BYTES (NUM_TX_DESC * sizeof(struct TxDesc))
+#define R8169_RX_RING_BYTES (NUM_RX_DESC * sizeof(struct RxDesc))
+
+#define RTL8169_TX_TIMEOUT (6*HZ)
+#define RTL8169_PHY_TIMEOUT (10*HZ)
+
+#define RTL_EEPROM_SIG cpu_to_le32(0x8129)
+#define RTL_EEPROM_SIG_MASK cpu_to_le32(0xffff)
+#define RTL_EEPROM_SIG_ADDR 0x0000
+
+/* write/read MMIO register */
+#define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg))
+#define RTL_W16(reg, val16) writew ((val16), ioaddr + (reg))
+#define RTL_W32(reg, val32) writel ((val32), ioaddr + (reg))
+#define RTL_R8(reg) readb (ioaddr + (reg))
+#define RTL_R16(reg) readw (ioaddr + (reg))
+#define RTL_R32(reg) ((unsigned long) readl (ioaddr + (reg)))
+
+enum mac_version {
+ RTL_GIGA_MAC_VER_01 = 0x01, // 8169
+ RTL_GIGA_MAC_VER_02 = 0x02, // 8169S
+ RTL_GIGA_MAC_VER_03 = 0x03, // 8110S
+ RTL_GIGA_MAC_VER_04 = 0x04, // 8169SB
+ RTL_GIGA_MAC_VER_05 = 0x05, // 8110SCd
+ RTL_GIGA_MAC_VER_06 = 0x06, // 8110SCe
+ RTL_GIGA_MAC_VER_07 = 0x07, // 8102e
+ RTL_GIGA_MAC_VER_08 = 0x08, // 8102e
+ RTL_GIGA_MAC_VER_09 = 0x09, // 8102e
+ RTL_GIGA_MAC_VER_10 = 0x0a, // 8101e
+ RTL_GIGA_MAC_VER_11 = 0x0b, // 8168Bb
+ RTL_GIGA_MAC_VER_12 = 0x0c, // 8168Be
+ RTL_GIGA_MAC_VER_13 = 0x0d, // 8101Eb
+ RTL_GIGA_MAC_VER_14 = 0x0e, // 8101 ?
+ RTL_GIGA_MAC_VER_15 = 0x0f, // 8101 ?
+ RTL_GIGA_MAC_VER_16 = 0x11, // 8101Ec
+ RTL_GIGA_MAC_VER_17 = 0x10, // 8168Bf
+ RTL_GIGA_MAC_VER_18 = 0x12, // 8168CP
+ RTL_GIGA_MAC_VER_19 = 0x13, // 8168C
+ RTL_GIGA_MAC_VER_20 = 0x14, // 8168C
+ RTL_GIGA_MAC_VER_21 = 0x15, // 8168C
+ RTL_GIGA_MAC_VER_22 = 0x16, // 8168C
+ RTL_GIGA_MAC_VER_23 = 0x17, // 8168CP
+ RTL_GIGA_MAC_VER_24 = 0x18, // 8168CP
+ RTL_GIGA_MAC_VER_25 = 0x19 // 8168D
+};
+
+#define _R(NAME,MAC,MASK) \
+ { .name = NAME, .mac_version = MAC, .RxConfigMask = MASK }
+
+static const struct {
+ const char *name;
+ u8 mac_version;
+ u32 RxConfigMask; /* Clears the bits supported by this chip */
+} rtl_chip_info[] = {
+ _R("RTL8169", RTL_GIGA_MAC_VER_01, 0xff7e1880), // 8169
+ _R("RTL8169s", RTL_GIGA_MAC_VER_02, 0xff7e1880), // 8169S
+ _R("RTL8110s", RTL_GIGA_MAC_VER_03, 0xff7e1880), // 8110S
+ _R("RTL8169sb/8110sb", RTL_GIGA_MAC_VER_04, 0xff7e1880), // 8169SB
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_05, 0xff7e1880), // 8110SCd
+ _R("RTL8169sc/8110sc", RTL_GIGA_MAC_VER_06, 0xff7e1880), // 8110SCe
+ _R("RTL8102e", RTL_GIGA_MAC_VER_07, 0xff7e1880), // PCI-E
+ _R("RTL8102e", RTL_GIGA_MAC_VER_08, 0xff7e1880), // PCI-E
+ _R("RTL8102e", RTL_GIGA_MAC_VER_09, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_10, 0xff7e1880), // PCI-E
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_11, 0xff7e1880), // PCI-E
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_12, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_13, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_14, 0xff7e1880), // PCI-E 8139
+ _R("RTL8100e", RTL_GIGA_MAC_VER_15, 0xff7e1880), // PCI-E 8139
+ _R("RTL8168b/8111b", RTL_GIGA_MAC_VER_17, 0xff7e1880), // PCI-E
+ _R("RTL8101e", RTL_GIGA_MAC_VER_16, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_18, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_19, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_20, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_21, 0xff7e1880), // PCI-E
+ _R("RTL8168c/8111c", RTL_GIGA_MAC_VER_22, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_23, 0xff7e1880), // PCI-E
+ _R("RTL8168cp/8111cp", RTL_GIGA_MAC_VER_24, 0xff7e1880), // PCI-E
+ _R("RTL8168d/8111d", RTL_GIGA_MAC_VER_25, 0xff7e1880) // PCI-E
+};
+#undef _R
+
+enum cfg_version {
+ RTL_CFG_0 = 0x00,
+ RTL_CFG_1,
+ RTL_CFG_2
+};
+
+static void rtl_hw_start_8169(struct net_device *);
+static void rtl_hw_start_8168(struct net_device *);
+static void rtl_hw_start_8101(struct net_device *);
+
+static struct pci_device_id rtl8169_pci_tbl[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8129), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8136), 0, 0, RTL_CFG_2 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8167), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8168), 0, 0, RTL_CFG_1 },
+ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4300), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(PCI_VENDOR_ID_AT, 0xc107), 0, 0, RTL_CFG_0 },
+ { PCI_DEVICE(0x16ec, 0x0116), 0, 0, RTL_CFG_0 },
+ { PCI_VENDOR_ID_LINKSYS, 0x1032,
+ PCI_ANY_ID, 0x0024, 0, 0, RTL_CFG_0 },
+ { 0x0001, 0x8168,
+ PCI_ANY_ID, 0x2410, 0, 0, RTL_CFG_2 },
+ {0,},
+};
+
+MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
+
+static int rx_copybreak = 200;
+static int use_dac;
+static struct {
+ u32 msg_enable;
+} debug = { -1 };
+
+enum rtl_registers {
+ MAC0 = 0, /* Ethernet hardware address. */
+ MAC4 = 4,
+ MAR0 = 8, /* Multicast filter. */
+ CounterAddrLow = 0x10,
+ CounterAddrHigh = 0x14,
+ TxDescStartAddrLow = 0x20,
+ TxDescStartAddrHigh = 0x24,
+ TxHDescStartAddrLow = 0x28,
+ TxHDescStartAddrHigh = 0x2c,
+ FLASH = 0x30,
+ ERSR = 0x36,
+ ChipCmd = 0x37,
+ TxPoll = 0x38,
+ IntrMask = 0x3c,
+ IntrStatus = 0x3e,
+ TxConfig = 0x40,
+ RxConfig = 0x44,
+ RxMissed = 0x4c,
+ Cfg9346 = 0x50,
+ Config0 = 0x51,
+ Config1 = 0x52,
+ Config2 = 0x53,
+ Config3 = 0x54,
+ Config4 = 0x55,
+ Config5 = 0x56,
+ MultiIntr = 0x5c,
+ PHYAR = 0x60,
+ PHYstatus = 0x6c,
+ RxMaxSize = 0xda,
+ CPlusCmd = 0xe0,
+ IntrMitigate = 0xe2,
+ RxDescAddrLow = 0xe4,
+ RxDescAddrHigh = 0xe8,
+ EarlyTxThres = 0xec,
+ FuncEvent = 0xf0,
+ FuncEventMask = 0xf4,
+ FuncPresetState = 0xf8,
+ FuncForceEvent = 0xfc,
+};
+
+enum rtl8110_registers {
+ TBICSR = 0x64,
+ TBI_ANAR = 0x68,
+ TBI_LPAR = 0x6a,
+};
+
+enum rtl8168_8101_registers {
+ CSIDR = 0x64,
+ CSIAR = 0x68,
+#define CSIAR_FLAG 0x80000000
+#define CSIAR_WRITE_CMD 0x80000000
+#define CSIAR_BYTE_ENABLE 0x0f
+#define CSIAR_BYTE_ENABLE_SHIFT 12
+#define CSIAR_ADDR_MASK 0x0fff
+
+ EPHYAR = 0x80,
+#define EPHYAR_FLAG 0x80000000
+#define EPHYAR_WRITE_CMD 0x80000000
+#define EPHYAR_REG_MASK 0x1f
+#define EPHYAR_REG_SHIFT 16
+#define EPHYAR_DATA_MASK 0xffff
+ DBG_REG = 0xd1,
+#define FIX_NAK_1 (1 << 4)
+#define FIX_NAK_2 (1 << 3)
+};
+
+enum rtl_register_content {
+ /* InterruptStatusBits */
+ SYSErr = 0x8000,
+ PCSTimeout = 0x4000,
+ SWInt = 0x0100,
+ TxDescUnavail = 0x0080,
+ RxFIFOOver = 0x0040,
+ LinkChg = 0x0020,
+ RxOverflow = 0x0010,
+ TxErr = 0x0008,
+ TxOK = 0x0004,
+ RxErr = 0x0002,
+ RxOK = 0x0001,
+
+ /* RxStatusDesc */
+ RxFOVF = (1 << 23),
+ RxRWT = (1 << 22),
+ RxRES = (1 << 21),
+ RxRUNT = (1 << 20),
+ RxCRC = (1 << 19),
+
+ /* ChipCmdBits */
+ CmdReset = 0x10,
+ CmdRxEnb = 0x08,
+ CmdTxEnb = 0x04,
+ RxBufEmpty = 0x01,
+
+ /* TXPoll register p.5 */
+ HPQ = 0x80, /* Poll cmd on the high prio queue */
+ NPQ = 0x40, /* Poll cmd on the low prio queue */
+ FSWInt = 0x01, /* Forced software interrupt */
+
+ /* Cfg9346Bits */
+ Cfg9346_Lock = 0x00,
+ Cfg9346_Unlock = 0xc0,
+
+ /* rx_mode_bits */
+ AcceptErr = 0x20,
+ AcceptRunt = 0x10,
+ AcceptBroadcast = 0x08,
+ AcceptMulticast = 0x04,
+ AcceptMyPhys = 0x02,
+ AcceptAllPhys = 0x01,
+
+ /* RxConfigBits */
+ RxCfgFIFOShift = 13,
+ RxCfgDMAShift = 8,
+
+ /* TxConfigBits */
+ TxInterFrameGapShift = 24,
+ TxDMAShift = 8, /* DMA burst value (0-7) is shift this many bits */
+
+ /* Config1 register p.24 */
+ LEDS1 = (1 << 7),
+ LEDS0 = (1 << 6),
+ MSIEnable = (1 << 5), /* Enable Message Signaled Interrupt */
+ Speed_down = (1 << 4),
+ MEMMAP = (1 << 3),
+ IOMAP = (1 << 2),
+ VPD = (1 << 1),
+ PMEnable = (1 << 0), /* Power Management Enable */
+
+ /* Config2 register p. 25 */
+ PCI_Clock_66MHz = 0x01,
+ PCI_Clock_33MHz = 0x00,
+
+ /* Config3 register p.25 */
+ MagicPacket = (1 << 5), /* Wake up when receives a Magic Packet */
+ LinkUp = (1 << 4), /* Wake up when the cable connection is re-established */
+ Beacon_en = (1 << 0), /* 8168 only. Reserved in the 8168b */
+
+ /* Config5 register p.27 */
+ BWF = (1 << 6), /* Accept Broadcast wakeup frame */
+ MWF = (1 << 5), /* Accept Multicast wakeup frame */
+ UWF = (1 << 4), /* Accept Unicast wakeup frame */
+ LanWake = (1 << 1), /* LanWake enable/disable */
+ PMEStatus = (1 << 0), /* PME status can be reset by PCI RST# */
+
+ /* TBICSR p.28 */
+ TBIReset = 0x80000000,
+ TBILoopback = 0x40000000,
+ TBINwEnable = 0x20000000,
+ TBINwRestart = 0x10000000,
+ TBILinkOk = 0x02000000,
+ TBINwComplete = 0x01000000,
+
+ /* CPlusCmd p.31 */
+ EnableBist = (1 << 15), // 8168 8101
+ Mac_dbgo_oe = (1 << 14), // 8168 8101
+ Normal_mode = (1 << 13), // unused
+ Force_half_dup = (1 << 12), // 8168 8101
+ Force_rxflow_en = (1 << 11), // 8168 8101
+ Force_txflow_en = (1 << 10), // 8168 8101
+ Cxpl_dbg_sel = (1 << 9), // 8168 8101
+ ASF = (1 << 8), // 8168 8101
+ PktCntrDisable = (1 << 7), // 8168 8101
+ Mac_dbgo_sel = 0x001c, // 8168
+ RxVlan = (1 << 6),
+ RxChkSum = (1 << 5),
+ PCIDAC = (1 << 4),
+ PCIMulRW = (1 << 3),
+ INTT_0 = 0x0000, // 8168
+ INTT_1 = 0x0001, // 8168
+ INTT_2 = 0x0002, // 8168
+ INTT_3 = 0x0003, // 8168
+
+ /* rtl8169_PHYstatus */
+ TBI_Enable = 0x80,
+ TxFlowCtrl = 0x40,
+ RxFlowCtrl = 0x20,
+ _1000bpsF = 0x10,
+ _100bps = 0x08,
+ _10bps = 0x04,
+ LinkStatus = 0x02,
+ FullDup = 0x01,
+
+ /* _TBICSRBit */
+ TBILinkOK = 0x02000000,
+
+ /* DumpCounterCommand */
+ CounterDump = 0x8,
+};
+
+enum desc_status_bit {
+ DescOwn = (1 << 31), /* Descriptor is owned by NIC */
+ RingEnd = (1 << 30), /* End of descriptor ring */
+ FirstFrag = (1 << 29), /* First segment of a packet */
+ LastFrag = (1 << 28), /* Final segment of a packet */
+
+ /* Tx private */
+ LargeSend = (1 << 27), /* TCP Large Send Offload (TSO) */
+ MSSShift = 16, /* MSS value position */
+ MSSMask = 0xfff, /* MSS value + LargeSend bit: 12 bits */
+ IPCS = (1 << 18), /* Calculate IP checksum */
+ UDPCS = (1 << 17), /* Calculate UDP/IP checksum */
+ TCPCS = (1 << 16), /* Calculate TCP/IP checksum */
+ TxVlanTag = (1 << 17), /* Add VLAN tag */
+
+ /* Rx private */
+ PID1 = (1 << 18), /* Protocol ID bit 1/2 */
+ PID0 = (1 << 17), /* Protocol ID bit 2/2 */
+
+#define RxProtoUDP (PID1)
+#define RxProtoTCP (PID0)
+#define RxProtoIP (PID1 | PID0)
+#define RxProtoMask RxProtoIP
+
+ IPFail = (1 << 16), /* IP checksum failed */
+ UDPFail = (1 << 15), /* UDP/IP checksum failed */
+ TCPFail = (1 << 14), /* TCP/IP checksum failed */
+ RxVlanTag = (1 << 16), /* VLAN tag available */
+};
+
+#define RsvdMask 0x3fffc000
+
+struct TxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct RxDesc {
+ __le32 opts1;
+ __le32 opts2;
+ __le64 addr;
+};
+
+struct ring_info {
+ struct sk_buff *skb;
+ u32 len;
+ u8 __pad[sizeof(void *) - sizeof(u32)];
+};
+
+enum features {
+ RTL_FEATURE_WOL = (1 << 0),
+ RTL_FEATURE_MSI = (1 << 1),
+ RTL_FEATURE_GMII = (1 << 2),
+};
+
+struct rtl8169_private {
+ void __iomem *mmio_addr; /* memory map physical address */
+ struct pci_dev *pci_dev; /* Index of PCI device */
+ struct net_device *dev;
+ struct napi_struct napi;
+ spinlock_t lock; /* spin lock flag */
+ u32 msg_enable;
+ int chipset;
+ int mac_version;
+ u32 cur_rx; /* Index into the Rx descriptor buffer of next Rx pkt. */
+ u32 cur_tx; /* Index into the Tx descriptor buffer of next Rx pkt. */
+ u32 dirty_rx;
+ u32 dirty_tx;
+ struct TxDesc *TxDescArray; /* 256-aligned Tx descriptor ring */
+ struct RxDesc *RxDescArray; /* 256-aligned Rx descriptor ring */
+ dma_addr_t TxPhyAddr;
+ dma_addr_t RxPhyAddr;
+ struct sk_buff *Rx_skbuff[NUM_RX_DESC]; /* Rx data buffers */
+ struct ring_info tx_skb[NUM_TX_DESC]; /* Tx data buffers */
+ unsigned align;
+ unsigned rx_buf_sz;
+ struct timer_list timer;
+ u16 cp_cmd;
+ u16 intr_event;
+ u16 napi_event;
+ u16 intr_mask;
+ int phy_auto_nego_reg;
+ int phy_1000_ctrl_reg;
+#ifdef CONFIG_R8169_VLAN
+ struct vlan_group *vlgrp;
+#endif
+ int (*set_speed)(struct net_device *, u8 autoneg, u16 speed, u8 duplex);
+ int (*get_settings)(struct net_device *, struct ethtool_cmd *);
+ void (*phy_reset_enable)(void __iomem *);
+ void (*hw_start)(struct net_device *);
+ unsigned int (*phy_reset_pending)(void __iomem *);
+ unsigned int (*link_ok)(void __iomem *);
+ int pcie_cap;
+ struct delayed_work task;
+ unsigned features;
+
+ struct mii_if_info mii;
+};
+
+MODULE_AUTHOR("Realtek and the Linux r8169 crew <netdev@vger.kernel.org>");
+MODULE_DESCRIPTION("RealTek RTL-8169 Gigabit Ethernet driver");
+module_param(rx_copybreak, int, 0);
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
+module_param(use_dac, int, 0);
+MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
+module_param_named(debug, debug.msg_enable, int, 0);
+MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(RTL8169_VERSION);
+
+static int rtl8169_open(struct net_device *dev);
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev);
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance);
+static int rtl8169_init_ring(struct net_device *dev);
+static void rtl_hw_start(struct net_device *dev);
+static int rtl8169_close(struct net_device *dev);
+static void rtl_set_rx_mode(struct net_device *dev);
+static void rtl8169_tx_timeout(struct net_device *dev);
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev);
+static int rtl8169_rx_interrupt(struct net_device *, struct rtl8169_private *,
+ void __iomem *, u32 budget);
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu);
+static void rtl8169_down(struct net_device *dev);
+static void rtl8169_rx_clear(struct rtl8169_private *tp);
+static int rtl8169_poll(struct napi_struct *napi, int budget);
+
+static const unsigned int rtl8169_rx_config =
+ (RX_FIFO_THRESH << RxCfgFIFOShift) | (RX_DMA_BURST << RxCfgDMAShift);
+
+static void mdio_write(void __iomem *ioaddr, int reg_addr, int value)
+{
+ int i;
+
+ RTL_W32(PHYAR, 0x80000000 | (reg_addr & 0x1f) << 16 | (value & 0xffff));
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed writing to the specified
+ * MII register.
+ */
+ if (!(RTL_R32(PHYAR) & 0x80000000))
+ break;
+ udelay(25);
+ }
+}
+
+static int mdio_read(void __iomem *ioaddr, int reg_addr)
+{
+ int i, value = -1;
+
+ RTL_W32(PHYAR, 0x0 | (reg_addr & 0x1f) << 16);
+
+ for (i = 20; i > 0; i--) {
+ /*
+ * Check if the RTL8169 has completed retrieving data from
+ * the specified MII register.
+ */
+ if (RTL_R32(PHYAR) & 0x80000000) {
+ value = RTL_R32(PHYAR) & 0xffff;
+ break;
+ }
+ udelay(25);
+ }
+ return value;
+}
+
+static void mdio_patch(void __iomem *ioaddr, int reg_addr, int value)
+{
+ mdio_write(ioaddr, reg_addr, mdio_read(ioaddr, reg_addr) | value);
+}
+
+static void rtl_mdio_write(struct net_device *dev, int phy_id, int location,
+ int val)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ mdio_write(ioaddr, location, val);
+}
+
+static int rtl_mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ return mdio_read(ioaddr, location);
+}
+
+static void rtl_ephy_write(void __iomem *ioaddr, int reg_addr, int value)
+{
+ unsigned int i;
+
+ RTL_W32(EPHYAR, EPHYAR_WRITE_CMD | (value & EPHYAR_DATA_MASK) |
+ (reg_addr & EPHYAR_REG_MASK) << EPHYAR_REG_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (!(RTL_R32(EPHYAR) & EPHYAR_FLAG))
+ break;
+ udelay(10);
+ }
+}
+
+static u16 rtl_ephy_read(void __iomem *ioaddr, int reg_addr)
+{
+ u16 value = 0xffff;
+ unsigned int i;
+
+ RTL_W32(EPHYAR, (reg_addr & EPHYAR_REG_MASK) << EPHYAR_REG_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (RTL_R32(EPHYAR) & EPHYAR_FLAG) {
+ value = RTL_R32(EPHYAR) & EPHYAR_DATA_MASK;
+ break;
+ }
+ udelay(10);
+ }
+
+ return value;
+}
+
+static void rtl_csi_write(void __iomem *ioaddr, int addr, int value)
+{
+ unsigned int i;
+
+ RTL_W32(CSIDR, value);
+ RTL_W32(CSIAR, CSIAR_WRITE_CMD | (addr & CSIAR_ADDR_MASK) |
+ CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (!(RTL_R32(CSIAR) & CSIAR_FLAG))
+ break;
+ udelay(10);
+ }
+}
+
+static u32 rtl_csi_read(void __iomem *ioaddr, int addr)
+{
+ u32 value = ~0x00;
+ unsigned int i;
+
+ RTL_W32(CSIAR, (addr & CSIAR_ADDR_MASK) |
+ CSIAR_BYTE_ENABLE << CSIAR_BYTE_ENABLE_SHIFT);
+
+ for (i = 0; i < 100; i++) {
+ if (RTL_R32(CSIAR) & CSIAR_FLAG) {
+ value = RTL_R32(CSIDR);
+ break;
+ }
+ udelay(10);
+ }
+
+ return value;
+}
+
+static void rtl8169_irq_mask_and_ack(void __iomem *ioaddr)
+{
+ RTL_W16(IntrMask, 0x0000);
+
+ RTL_W16(IntrStatus, 0xffff);
+}
+
+static void rtl8169_asic_down(void __iomem *ioaddr)
+{
+ RTL_W8(ChipCmd, 0x00);
+ rtl8169_irq_mask_and_ack(ioaddr);
+ RTL_R16(CPlusCmd);
+}
+
+static unsigned int rtl8169_tbi_reset_pending(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBIReset;
+}
+
+static unsigned int rtl8169_xmii_reset_pending(void __iomem *ioaddr)
+{
+ return mdio_read(ioaddr, MII_BMCR) & BMCR_RESET;
+}
+
+static unsigned int rtl8169_tbi_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R32(TBICSR) & TBILinkOk;
+}
+
+static unsigned int rtl8169_xmii_link_ok(void __iomem *ioaddr)
+{
+ return RTL_R8(PHYstatus) & LinkStatus;
+}
+
+static void rtl8169_tbi_reset_enable(void __iomem *ioaddr)
+{
+ RTL_W32(TBICSR, RTL_R32(TBICSR) | TBIReset);
+}
+
+static void rtl8169_xmii_reset_enable(void __iomem *ioaddr)
+{
+ unsigned int val;
+
+ val = mdio_read(ioaddr, MII_BMCR) | BMCR_RESET;
+ mdio_write(ioaddr, MII_BMCR, val & 0xffff);
+}
+
+static void rtl8169_check_link_status(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ if (tp->link_ok(ioaddr)) {
+ netif_carrier_on(dev);
+ if (netif_msg_ifup(tp))
+ printk(KERN_INFO PFX "%s: link up\n", dev->name);
+ } else {
+ if (netif_msg_ifdown(tp))
+ printk(KERN_INFO PFX "%s: link down\n", dev->name);
+ netif_carrier_off(dev);
+ }
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static void rtl8169_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u8 options;
+
+ wol->wolopts = 0;
+
+#define WAKE_ANY (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_BCAST | WAKE_MCAST)
+ wol->supported = WAKE_ANY;
+
+ spin_lock_irq(&tp->lock);
+
+ options = RTL_R8(Config1);
+ if (!(options & PMEnable))
+ goto out_unlock;
+
+ options = RTL_R8(Config3);
+ if (options & LinkUp)
+ wol->wolopts |= WAKE_PHY;
+ if (options & MagicPacket)
+ wol->wolopts |= WAKE_MAGIC;
+
+ options = RTL_R8(Config5);
+ if (options & UWF)
+ wol->wolopts |= WAKE_UCAST;
+ if (options & BWF)
+ wol->wolopts |= WAKE_BCAST;
+ if (options & MWF)
+ wol->wolopts |= WAKE_MCAST;
+
+out_unlock:
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl8169_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+ static struct {
+ u32 opt;
+ u16 reg;
+ u8 mask;
+ } cfg[] = {
+ { WAKE_ANY, Config1, PMEnable },
+ { WAKE_PHY, Config3, LinkUp },
+ { WAKE_MAGIC, Config3, MagicPacket },
+ { WAKE_UCAST, Config5, UWF },
+ { WAKE_BCAST, Config5, BWF },
+ { WAKE_MCAST, Config5, MWF },
+ { WAKE_ANY, Config5, LanWake }
+ };
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ for (i = 0; i < ARRAY_SIZE(cfg); i++) {
+ u8 options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
+ if (wol->wolopts & cfg[i].opt)
+ options |= cfg[i].mask;
+ RTL_W8(cfg[i].reg, options);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if (wol->wolopts)
+ tp->features |= RTL_FEATURE_WOL;
+ else
+ tp->features &= ~RTL_FEATURE_WOL;
+ device_set_wakeup_enable(&tp->pci_dev->dev, wol->wolopts);
+
+ spin_unlock_irq(&tp->lock);
+
+ return 0;
+}
+
+static void rtl8169_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ strcpy(info->driver, MODULENAME);
+ strcpy(info->version, RTL8169_VERSION);
+ strcpy(info->bus_info, pci_name(tp->pci_dev));
+}
+
+static int rtl8169_get_regs_len(struct net_device *dev)
+{
+ return R8169_REGS_SIZE;
+}
+
+static int rtl8169_set_speed_tbi(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int ret = 0;
+ u32 reg;
+
+ reg = RTL_R32(TBICSR);
+ if ((autoneg == AUTONEG_DISABLE) && (speed == SPEED_1000) &&
+ (duplex == DUPLEX_FULL)) {
+ RTL_W32(TBICSR, reg & ~(TBINwEnable | TBINwRestart));
+ } else if (autoneg == AUTONEG_ENABLE)
+ RTL_W32(TBICSR, reg | TBINwEnable | TBINwRestart);
+ else {
+ if (netif_msg_link(tp)) {
+ printk(KERN_WARNING "%s: "
+ "incorrect speed setting refused in TBI mode\n",
+ dev->name);
+ }
+ ret = -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+static int rtl8169_set_speed_xmii(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int auto_nego, giga_ctrl;
+
+ auto_nego = mdio_read(ioaddr, MII_ADVERTISE);
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl = mdio_read(ioaddr, MII_CTRL1000);
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+
+ if (autoneg == AUTONEG_ENABLE) {
+ auto_nego |= (ADVERTISE_10HALF | ADVERTISE_10FULL |
+ ADVERTISE_100HALF | ADVERTISE_100FULL);
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+ } else {
+ if (speed == SPEED_10)
+ auto_nego |= ADVERTISE_10HALF | ADVERTISE_10FULL;
+ else if (speed == SPEED_100)
+ auto_nego |= ADVERTISE_100HALF | ADVERTISE_100FULL;
+ else if (speed == SPEED_1000)
+ giga_ctrl |= ADVERTISE_1000FULL | ADVERTISE_1000HALF;
+
+ if (duplex == DUPLEX_HALF)
+ auto_nego &= ~(ADVERTISE_10FULL | ADVERTISE_100FULL);
+
+ if (duplex == DUPLEX_FULL)
+ auto_nego &= ~(ADVERTISE_10HALF | ADVERTISE_100HALF);
+
+ /* This tweak comes straight from Realtek's driver. */
+ if ((speed == SPEED_100) && (duplex == DUPLEX_HALF) &&
+ ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16))) {
+ auto_nego = ADVERTISE_100HALF | ADVERTISE_CSMA;
+ }
+ }
+
+ /* The 8100e/8101e/8102e do Fast Ethernet only. */
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_07) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_08) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_09) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_10) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_14) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_15) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ if ((giga_ctrl & (ADVERTISE_1000FULL | ADVERTISE_1000HALF)) &&
+ netif_msg_link(tp)) {
+ printk(KERN_INFO "%s: PHY does not support 1000Mbps.\n",
+ dev->name);
+ }
+ giga_ctrl &= ~(ADVERTISE_1000FULL | ADVERTISE_1000HALF);
+ }
+
+ auto_nego |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_11) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_12) ||
+ (tp->mac_version >= RTL_GIGA_MAC_VER_17)) {
+ /*
+ * Wake up the PHY.
+ * Vendor specific (0x1f) and reserved (0x0e) MII registers.
+ */
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_write(ioaddr, 0x0e, 0x0000);
+ }
+
+ tp->phy_auto_nego_reg = auto_nego;
+ tp->phy_1000_ctrl_reg = giga_ctrl;
+
+ mdio_write(ioaddr, MII_ADVERTISE, auto_nego);
+ mdio_write(ioaddr, MII_CTRL1000, giga_ctrl);
+ mdio_write(ioaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
+ return 0;
+}
+
+static int rtl8169_set_speed(struct net_device *dev,
+ u8 autoneg, u16 speed, u8 duplex)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret;
+
+ ret = tp->set_speed(dev, autoneg, speed, duplex);
+
+ if (netif_running(dev) && (tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ mod_timer(&tp->timer, jiffies + RTL8169_PHY_TIMEOUT);
+
+ return ret;
+}
+
+static int rtl8169_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ ret = rtl8169_set_speed(dev, cmd->autoneg, cmd->speed, cmd->duplex);
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return ret;
+}
+
+static u32 rtl8169_get_rx_csum(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->cp_cmd & RxChkSum;
+}
+
+static int rtl8169_set_rx_csum(struct net_device *dev, u32 data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ if (data)
+ tp->cp_cmd |= RxChkSum;
+ else
+ tp->cp_cmd &= ~RxChkSum;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+
+ return 0;
+}
+
+#ifdef CONFIG_R8169_VLAN
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return (tp->vlgrp && vlan_tx_tag_present(skb)) ?
+ TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
+}
+
+static void rtl8169_vlan_rx_register(struct net_device *dev,
+ struct vlan_group *grp)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ tp->vlgrp = grp;
+ if (tp->vlgrp)
+ tp->cp_cmd |= RxVlan;
+ else
+ tp->cp_cmd &= ~RxVlan;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ RTL_R16(CPlusCmd);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ u32 opts2 = le32_to_cpu(desc->opts2);
+ struct vlan_group *vlgrp = tp->vlgrp;
+ int ret;
+
+ if (vlgrp && (opts2 & RxVlanTag)) {
+ vlan_hwaccel_receive_skb(skb, vlgrp, swab16(opts2 & 0xffff));
+ ret = 0;
+ } else
+ ret = -1;
+ desc->opts2 = 0;
+ return ret;
+}
+
+#else /* !CONFIG_R8169_VLAN */
+
+static inline u32 rtl8169_tx_vlan_tag(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+{
+ return 0;
+}
+
+static int rtl8169_rx_vlan_skb(struct rtl8169_private *tp, struct RxDesc *desc,
+ struct sk_buff *skb)
+{
+ return -1;
+}
+
+#endif
+
+static int rtl8169_gset_tbi(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 status;
+
+ cmd->supported =
+ SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_FIBRE;
+ cmd->port = PORT_FIBRE;
+ cmd->transceiver = XCVR_INTERNAL;
+
+ status = RTL_R32(TBICSR);
+ cmd->advertising = (status & TBINwEnable) ? ADVERTISED_Autoneg : 0;
+ cmd->autoneg = !!(status & TBINwEnable);
+
+ cmd->speed = SPEED_1000;
+ cmd->duplex = DUPLEX_FULL; /* Always set */
+
+ return 0;
+}
+
+static int rtl8169_gset_xmii(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return mii_ethtool_gset(&tp->mii, cmd);
+}
+
+static int rtl8169_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ rc = tp->get_settings(dev, cmd);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+ return rc;
+}
+
+static void rtl8169_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned long flags;
+
+ if (regs->len > R8169_REGS_SIZE)
+ regs->len = R8169_REGS_SIZE;
+
+ spin_lock_irqsave(&tp->lock, flags);
+ memcpy_fromio(p, tp->mmio_addr, regs->len);
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+static u32 rtl8169_get_msglevel(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ return tp->msg_enable;
+}
+
+static void rtl8169_set_msglevel(struct net_device *dev, u32 value)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ tp->msg_enable = value;
+}
+
+static const char rtl8169_gstrings[][ETH_GSTRING_LEN] = {
+ "tx_packets",
+ "rx_packets",
+ "tx_errors",
+ "rx_errors",
+ "rx_missed",
+ "align_errors",
+ "tx_single_collisions",
+ "tx_multi_collisions",
+ "unicast",
+ "broadcast",
+ "multicast",
+ "tx_aborted",
+ "tx_underrun",
+};
+
+struct rtl8169_counters {
+ __le64 tx_packets;
+ __le64 rx_packets;
+ __le64 tx_errors;
+ __le32 rx_errors;
+ __le16 rx_missed;
+ __le16 align_errors;
+ __le32 tx_one_collision;
+ __le32 tx_multi_collision;
+ __le64 rx_unicast;
+ __le64 rx_broadcast;
+ __le32 rx_multicast;
+ __le16 tx_aborted;
+ __le16 tx_underun;
+};
+
+static int rtl8169_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(rtl8169_gstrings);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void rtl8169_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct rtl8169_counters *counters;
+ dma_addr_t paddr;
+ u32 cmd;
+
+ ASSERT_RTNL();
+
+ counters = pci_alloc_consistent(tp->pci_dev, sizeof(*counters), &paddr);
+ if (!counters)
+ return;
+
+ RTL_W32(CounterAddrHigh, (u64)paddr >> 32);
+ cmd = (u64)paddr & DMA_32BIT_MASK;
+ RTL_W32(CounterAddrLow, cmd);
+ RTL_W32(CounterAddrLow, cmd | CounterDump);
+
+ while (RTL_R32(CounterAddrLow) & CounterDump) {
+ if (msleep_interruptible(1))
+ break;
+ }
+
+ RTL_W32(CounterAddrLow, 0);
+ RTL_W32(CounterAddrHigh, 0);
+
+ data[0] = le64_to_cpu(counters->tx_packets);
+ data[1] = le64_to_cpu(counters->rx_packets);
+ data[2] = le64_to_cpu(counters->tx_errors);
+ data[3] = le32_to_cpu(counters->rx_errors);
+ data[4] = le16_to_cpu(counters->rx_missed);
+ data[5] = le16_to_cpu(counters->align_errors);
+ data[6] = le32_to_cpu(counters->tx_one_collision);
+ data[7] = le32_to_cpu(counters->tx_multi_collision);
+ data[8] = le64_to_cpu(counters->rx_unicast);
+ data[9] = le64_to_cpu(counters->rx_broadcast);
+ data[10] = le32_to_cpu(counters->rx_multicast);
+ data[11] = le16_to_cpu(counters->tx_aborted);
+ data[12] = le16_to_cpu(counters->tx_underun);
+
+ pci_free_consistent(tp->pci_dev, sizeof(*counters), counters, paddr);
+}
+
+static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+ switch(stringset) {
+ case ETH_SS_STATS:
+ memcpy(data, *rtl8169_gstrings, sizeof(rtl8169_gstrings));
+ break;
+ }
+}
+
+static const struct ethtool_ops rtl8169_ethtool_ops = {
+ .get_drvinfo = rtl8169_get_drvinfo,
+ .get_regs_len = rtl8169_get_regs_len,
+ .get_link = ethtool_op_get_link,
+ .get_settings = rtl8169_get_settings,
+ .set_settings = rtl8169_set_settings,
+ .get_msglevel = rtl8169_get_msglevel,
+ .set_msglevel = rtl8169_set_msglevel,
+ .get_rx_csum = rtl8169_get_rx_csum,
+ .set_rx_csum = rtl8169_set_rx_csum,
+ .set_tx_csum = ethtool_op_set_tx_csum,
+ .set_sg = ethtool_op_set_sg,
+ .set_tso = ethtool_op_set_tso,
+ .get_regs = rtl8169_get_regs,
+ .get_wol = rtl8169_get_wol,
+ .set_wol = rtl8169_set_wol,
+ .get_strings = rtl8169_get_strings,
+ .get_sset_count = rtl8169_get_sset_count,
+ .get_ethtool_stats = rtl8169_get_ethtool_stats,
+};
+
+static void rtl8169_write_gmii_reg_bit(void __iomem *ioaddr, int reg,
+ int bitnum, int bitval)
+{
+ int val;
+
+ val = mdio_read(ioaddr, reg);
+ val = (bitval == 1) ?
+ val | (bitval << bitnum) : val & ~(0x0001 << bitnum);
+ mdio_write(ioaddr, reg, val & 0xffff);
+}
+
+static void rtl8169_get_mac_version(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * The driver currently handles the 8168Bf and the 8168Be identically
+ * but they can be identified more specifically through the test below
+ * if needed:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x500000 ? 8168Bf : 8168Be
+ *
+ * Same thing for the 8101Eb and the 8101Ec:
+ *
+ * (RTL_R32(TxConfig) & 0x700000) == 0x200000 ? 8101Eb : 8101Ec
+ */
+ const struct {
+ u32 mask;
+ u32 val;
+ int mac_version;
+ } mac_info[] = {
+ /* 8168D family. */
+ { 0x7c800000, 0x28000000, RTL_GIGA_MAC_VER_25 },
+
+ /* 8168C family. */
+ { 0x7cf00000, 0x3ca00000, RTL_GIGA_MAC_VER_24 },
+ { 0x7cf00000, 0x3c900000, RTL_GIGA_MAC_VER_23 },
+ { 0x7cf00000, 0x3c800000, RTL_GIGA_MAC_VER_18 },
+ { 0x7c800000, 0x3c800000, RTL_GIGA_MAC_VER_24 },
+ { 0x7cf00000, 0x3c000000, RTL_GIGA_MAC_VER_19 },
+ { 0x7cf00000, 0x3c200000, RTL_GIGA_MAC_VER_20 },
+ { 0x7cf00000, 0x3c300000, RTL_GIGA_MAC_VER_21 },
+ { 0x7cf00000, 0x3c400000, RTL_GIGA_MAC_VER_22 },
+ { 0x7c800000, 0x3c000000, RTL_GIGA_MAC_VER_22 },
+
+ /* 8168B family. */
+ { 0x7cf00000, 0x38000000, RTL_GIGA_MAC_VER_12 },
+ { 0x7cf00000, 0x38500000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x38000000, RTL_GIGA_MAC_VER_17 },
+ { 0x7c800000, 0x30000000, RTL_GIGA_MAC_VER_11 },
+
+ /* 8101 family. */
+ { 0x7cf00000, 0x34a00000, RTL_GIGA_MAC_VER_09 },
+ { 0x7cf00000, 0x24a00000, RTL_GIGA_MAC_VER_09 },
+ { 0x7cf00000, 0x34900000, RTL_GIGA_MAC_VER_08 },
+ { 0x7cf00000, 0x24900000, RTL_GIGA_MAC_VER_08 },
+ { 0x7cf00000, 0x34800000, RTL_GIGA_MAC_VER_07 },
+ { 0x7cf00000, 0x24800000, RTL_GIGA_MAC_VER_07 },
+ { 0x7cf00000, 0x34000000, RTL_GIGA_MAC_VER_13 },
+ { 0x7cf00000, 0x34300000, RTL_GIGA_MAC_VER_10 },
+ { 0x7cf00000, 0x34200000, RTL_GIGA_MAC_VER_16 },
+ { 0x7c800000, 0x34800000, RTL_GIGA_MAC_VER_09 },
+ { 0x7c800000, 0x24800000, RTL_GIGA_MAC_VER_09 },
+ { 0x7c800000, 0x34000000, RTL_GIGA_MAC_VER_16 },
+ /* FIXME: where did these entries come from ? -- FR */
+ { 0xfc800000, 0x38800000, RTL_GIGA_MAC_VER_15 },
+ { 0xfc800000, 0x30800000, RTL_GIGA_MAC_VER_14 },
+
+ /* 8110 family. */
+ { 0xfc800000, 0x98000000, RTL_GIGA_MAC_VER_06 },
+ { 0xfc800000, 0x18000000, RTL_GIGA_MAC_VER_05 },
+ { 0xfc800000, 0x10000000, RTL_GIGA_MAC_VER_04 },
+ { 0xfc800000, 0x04000000, RTL_GIGA_MAC_VER_03 },
+ { 0xfc800000, 0x00800000, RTL_GIGA_MAC_VER_02 },
+ { 0xfc800000, 0x00000000, RTL_GIGA_MAC_VER_01 },
+
+ { 0x00000000, 0x00000000, RTL_GIGA_MAC_VER_01 } /* Catch-all */
+ }, *p = mac_info;
+ u32 reg;
+
+ reg = RTL_R32(TxConfig);
+ while ((reg & p->mask) != p->val)
+ p++;
+ tp->mac_version = p->mac_version;
+
+ if (p->mask == 0x00000000) {
+ struct pci_dev *pdev = tp->pci_dev;
+
+ dev_info(&pdev->dev, "unknown MAC (%08x)\n", reg);
+ }
+}
+
+static void rtl8169_print_mac_version(struct rtl8169_private *tp)
+{
+ dprintk("mac_version = 0x%02x\n", tp->mac_version);
+}
+
+struct phy_reg {
+ u16 reg;
+ u16 val;
+};
+
+static void rtl_phy_write(void __iomem *ioaddr, struct phy_reg *regs, int len)
+{
+ while (len-- > 0) {
+ mdio_write(ioaddr, regs->reg, regs->val);
+ regs++;
+ }
+}
+
+static void rtl8169s_hw_phy_config(void __iomem *ioaddr)
+{
+ struct {
+ u16 regs[5]; /* Beware of bit-sign propagation */
+ } phy_magic[5] = { {
+ { 0x0000, //w 4 15 12 0
+ 0x00a1, //w 3 15 0 00a1
+ 0x0008, //w 2 15 0 0008
+ 0x1020, //w 1 15 0 1020
+ 0x1000 } },{ //w 0 15 0 1000
+ { 0x7000, //w 4 15 12 7
+ 0xff41, //w 3 15 0 ff41
+ 0xde60, //w 2 15 0 de60
+ 0x0140, //w 1 15 0 0140
+ 0x0077 } },{ //w 0 15 0 0077
+ { 0xa000, //w 4 15 12 a
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xfa00 } },{ //w 0 15 0 fa00
+ { 0xb000, //w 4 15 12 b
+ 0xff41, //w 3 15 0 ff41
+ 0xde20, //w 2 15 0 de20
+ 0x0140, //w 1 15 0 0140
+ 0x00bb } },{ //w 0 15 0 00bb
+ { 0xf000, //w 4 15 12 f
+ 0xdf01, //w 3 15 0 df01
+ 0xdf20, //w 2 15 0 df20
+ 0xff95, //w 1 15 0 ff95
+ 0xbf00 } //w 0 15 0 bf00
+ }
+ }, *p = phy_magic;
+ unsigned int i;
+
+ mdio_write(ioaddr, 0x1f, 0x0001); //w 31 2 0 1
+ mdio_write(ioaddr, 0x15, 0x1000); //w 21 15 0 1000
+ mdio_write(ioaddr, 0x18, 0x65c7); //w 24 15 0 65c7
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+
+ for (i = 0; i < ARRAY_SIZE(phy_magic); i++, p++) {
+ int val, pos = 4;
+
+ val = (mdio_read(ioaddr, pos) & 0x0fff) | (p->regs[0] & 0xffff);
+ mdio_write(ioaddr, pos, val);
+ while (--pos >= 0)
+ mdio_write(ioaddr, pos, p->regs[4 - pos] & 0xffff);
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 1); //w 4 11 11 1
+ rtl8169_write_gmii_reg_bit(ioaddr, 4, 11, 0); //w 4 11 11 0
+ }
+ mdio_write(ioaddr, 0x1f, 0x0000); //w 31 2 0 0
+}
+
+static void rtl8169sb_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0002 },
+ { 0x01, 0x90d0 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168bb_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x10, 0xf41b },
+ { 0x1f, 0x0000 }
+ };
+
+ mdio_write(ioaddr, 0x1f, 0x0001);
+ mdio_patch(ioaddr, 0x16, 1 << 0);
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168bef_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x10, 0xf41b },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cp_1_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0000 },
+ { 0x1d, 0x0f00 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x1ec8 },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168cp_2_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x1d, 0x3d98 },
+ { 0x1f, 0x0000 }
+ };
+
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl8168c_1_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x1f, 0x0002 },
+ { 0x00, 0x88d4 },
+ { 0x01, 0x82b1 },
+ { 0x03, 0x7002 },
+ { 0x08, 0x9e30 },
+ { 0x09, 0x01f0 },
+ { 0x0a, 0x5500 },
+ { 0x0c, 0x00c8 },
+ { 0x1f, 0x0003 },
+ { 0x12, 0xc096 },
+ { 0x16, 0x000a },
+ { 0x1f, 0x0000 },
+ { 0x1f, 0x0000 },
+ { 0x09, 0x2000 },
+ { 0x09, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8168c_2_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x03, 0x802f },
+ { 0x02, 0x4f02 },
+ { 0x01, 0x0409 },
+ { 0x00, 0xf099 },
+ { 0x04, 0x9800 },
+ { 0x04, 0x9000 },
+ { 0x1d, 0x3d98 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x7eb8 },
+ { 0x06, 0x0761 },
+ { 0x1f, 0x0003 },
+ { 0x16, 0x0f0a },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+
+ mdio_patch(ioaddr, 0x16, 1 << 0);
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8168c_3_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0001 },
+ { 0x12, 0x2300 },
+ { 0x1d, 0x3d98 },
+ { 0x1f, 0x0002 },
+ { 0x0c, 0x7eb8 },
+ { 0x06, 0x5461 },
+ { 0x1f, 0x0003 },
+ { 0x16, 0x0f0a },
+ { 0x1f, 0x0000 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+
+ mdio_patch(ioaddr, 0x16, 1 << 0);
+ mdio_patch(ioaddr, 0x14, 1 << 5);
+ mdio_patch(ioaddr, 0x0d, 1 << 5);
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8168c_4_hw_phy_config(void __iomem *ioaddr)
+{
+ rtl8168c_3_hw_phy_config(ioaddr);
+}
+
+static void rtl8168d_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init_0[] = {
+ { 0x1f, 0x0001 },
+ { 0x09, 0x2770 },
+ { 0x08, 0x04d0 },
+ { 0x0b, 0xad15 },
+ { 0x0c, 0x5bf0 },
+ { 0x1c, 0xf101 },
+ { 0x1f, 0x0003 },
+ { 0x14, 0x94d7 },
+ { 0x12, 0xf4d6 },
+ { 0x09, 0xca0f },
+ { 0x1f, 0x0002 },
+ { 0x0b, 0x0b10 },
+ { 0x0c, 0xd1f7 },
+ { 0x1f, 0x0002 },
+ { 0x06, 0x5461 },
+ { 0x1f, 0x0002 },
+ { 0x05, 0x6662 },
+ { 0x1f, 0x0000 },
+ { 0x14, 0x0060 },
+ { 0x1f, 0x0000 },
+ { 0x0d, 0xf8a0 },
+ { 0x1f, 0x0005 },
+ { 0x05, 0xffc2 }
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init_0, ARRAY_SIZE(phy_reg_init_0));
+
+ if (mdio_read(ioaddr, 0x06) == 0xc400) {
+ struct phy_reg phy_reg_init_1[] = {
+ { 0x1f, 0x0005 },
+ { 0x01, 0x0300 },
+ { 0x1f, 0x0000 },
+ { 0x11, 0x401c },
+ { 0x16, 0x4100 },
+ { 0x1f, 0x0005 },
+ { 0x07, 0x0010 },
+ { 0x05, 0x83dc },
+ { 0x06, 0x087d },
+ { 0x05, 0x8300 },
+ { 0x06, 0x0101 },
+ { 0x06, 0x05f8 },
+ { 0x06, 0xf9fa },
+ { 0x06, 0xfbef },
+ { 0x06, 0x79e2 },
+ { 0x06, 0x835f },
+ { 0x06, 0xe0f8 },
+ { 0x06, 0x9ae1 },
+ { 0x06, 0xf89b },
+ { 0x06, 0xef31 },
+ { 0x06, 0x3b65 },
+ { 0x06, 0xaa07 },
+ { 0x06, 0x81e4 },
+ { 0x06, 0xf89a },
+ { 0x06, 0xe5f8 },
+ { 0x06, 0x9baf },
+ { 0x06, 0x06ae },
+ { 0x05, 0x83dc },
+ { 0x06, 0x8300 },
+ };
+
+ rtl_phy_write(ioaddr, phy_reg_init_1,
+ ARRAY_SIZE(phy_reg_init_1));
+ }
+
+ mdio_write(ioaddr, 0x1f, 0x0000);
+}
+
+static void rtl8102e_hw_phy_config(void __iomem *ioaddr)
+{
+ struct phy_reg phy_reg_init[] = {
+ { 0x1f, 0x0003 },
+ { 0x08, 0x441d },
+ { 0x01, 0x9100 },
+ { 0x1f, 0x0000 }
+ };
+
+ mdio_write(ioaddr, 0x1f, 0x0000);
+ mdio_patch(ioaddr, 0x11, 1 << 12);
+ mdio_patch(ioaddr, 0x19, 1 << 13);
+
+ rtl_phy_write(ioaddr, phy_reg_init, ARRAY_SIZE(phy_reg_init));
+}
+
+static void rtl_hw_phy_config(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl8169_print_mac_version(tp);
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_01:
+ break;
+ case RTL_GIGA_MAC_VER_02:
+ case RTL_GIGA_MAC_VER_03:
+ rtl8169s_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_04:
+ rtl8169sb_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_07:
+ case RTL_GIGA_MAC_VER_08:
+ case RTL_GIGA_MAC_VER_09:
+ rtl8102e_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_11:
+ rtl8168bb_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_12:
+ rtl8168bef_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_17:
+ rtl8168bef_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_18:
+ rtl8168cp_1_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_19:
+ rtl8168c_1_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_20:
+ rtl8168c_2_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_21:
+ rtl8168c_3_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_22:
+ rtl8168c_4_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_23:
+ case RTL_GIGA_MAC_VER_24:
+ rtl8168cp_2_hw_phy_config(ioaddr);
+ break;
+ case RTL_GIGA_MAC_VER_25:
+ rtl8168d_hw_phy_config(ioaddr);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static void rtl8169_phy_timer(unsigned long __opaque)
+{
+ struct net_device *dev = (struct net_device *)__opaque;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long timeout = RTL8169_PHY_TIMEOUT;
+
+ assert(tp->mac_version > RTL_GIGA_MAC_VER_01);
+
+ if (!(tp->phy_1000_ctrl_reg & ADVERTISE_1000FULL))
+ return;
+
+ spin_lock_irq(&tp->lock);
+
+ if (tp->phy_reset_pending(ioaddr)) {
+ /*
+	 * A busy loop could burn quite a few cycles on a modern CPU.
+ * Let's delay the execution of the timer for a few ticks.
+ */
+ timeout = HZ/10;
+ goto out_mod_timer;
+ }
+
+ if (tp->link_ok(ioaddr))
+ goto out_unlock;
+
+ if (netif_msg_link(tp))
+ printk(KERN_WARNING "%s: PHY reset until link up\n", dev->name);
+
+ tp->phy_reset_enable(ioaddr);
+
+out_mod_timer:
+ mod_timer(timer, jiffies + timeout);
+out_unlock:
+ spin_unlock_irq(&tp->lock);
+}
+
+static inline void rtl8169_delete_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ del_timer_sync(timer);
+}
+
+static inline void rtl8169_request_timer(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct timer_list *timer = &tp->timer;
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_01)
+ return;
+
+ mod_timer(timer, jiffies + RTL8169_PHY_TIMEOUT);
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/*
+ * Polling 'interrupt' - used by things like netconsole to send skbs
+ * without having to re-enable interrupts. It's not called while
+ * the interrupt routine is executing.
+ */
+static void rtl8169_netpoll(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ disable_irq(pdev->irq);
+ rtl8169_interrupt(pdev->irq, dev);
+ enable_irq(pdev->irq);
+}
+#endif
+
+static void rtl8169_release_board(struct pci_dev *pdev, struct net_device *dev,
+ void __iomem *ioaddr)
+{
+ iounmap(ioaddr);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ free_netdev(dev);
+}
+
+static void rtl8169_phy_reset(struct net_device *dev,
+ struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ tp->phy_reset_enable(ioaddr);
+ for (i = 0; i < 100; i++) {
+ if (!tp->phy_reset_pending(ioaddr))
+ return;
+ msleep(1);
+ }
+ if (netif_msg_link(tp))
+ printk(KERN_ERR "%s: PHY reset failed.\n", dev->name);
+}
+
+static void rtl8169_init_phy(struct net_device *dev, struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ rtl_hw_phy_config(dev);
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+ }
+
+ pci_write_config_byte(tp->pci_dev, PCI_LATENCY_TIMER, 0x40);
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
+ pci_write_config_byte(tp->pci_dev, PCI_CACHE_LINE_SIZE, 0x08);
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_02) {
+ dprintk("Set MAC Reg C+CR Offset 0x82h = 0x01h\n");
+ RTL_W8(0x82, 0x01);
+ dprintk("Set PHY Reg 0x0bh = 0x00h\n");
+ mdio_write(ioaddr, 0x0b, 0x0000); //w 0x0b 15 0 0
+ }
+
+ rtl8169_phy_reset(dev, tp);
+
+ /*
+	 * rtl8169_set_speed_xmii takes good care of the Fast Ethernet-only
+	 * 8101. Don't panic.
+ */
+ rtl8169_set_speed(dev, AUTONEG_ENABLE, SPEED_1000, DUPLEX_FULL);
+
+ if ((RTL_R8(PHYstatus) & TBI_Enable) && netif_msg_link(tp))
+ printk(KERN_INFO PFX "%s: TBI auto-negotiating\n", dev->name);
+}
+
+static void rtl_rar_set(struct rtl8169_private *tp, u8 *addr)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 high;
+ u32 low;
+
+ low = addr[0] | (addr[1] << 8) | (addr[2] << 16) | (addr[3] << 24);
+ high = addr[4] | (addr[5] << 8);
+
+ spin_lock_irq(&tp->lock);
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W32(MAC0, low);
+ RTL_W32(MAC4, high);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ spin_unlock_irq(&tp->lock);
+}
+
+static int rtl_set_mac_address(struct net_device *dev, void *p)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+
+ rtl_rar_set(tp, dev->dev_addr);
+
+ return 0;
+}
+
+static int rtl8169_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct mii_ioctl_data *data = if_mii(ifr);
+
+ if (!netif_running(dev))
+ return -ENODEV;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ data->phy_id = 32; /* Internal PHY */
+ return 0;
+
+ case SIOCGMIIREG:
+ data->val_out = mdio_read(tp->mmio_addr, data->reg_num & 0x1f);
+ return 0;
+
+ case SIOCSMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ mdio_write(tp->mmio_addr, data->reg_num & 0x1f, data->val_in);
+ return 0;
+ }
+ return -EOPNOTSUPP;
+}
+
+static const struct rtl_cfg_info {
+ void (*hw_start)(struct net_device *);
+ unsigned int region;
+ unsigned int align;
+ u16 intr_event;
+ u16 napi_event;
+ unsigned features;
+} rtl_cfg_infos [] = {
+ [RTL_CFG_0] = {
+ .hw_start = rtl_hw_start_8169,
+ .region = 1,
+ .align = 0,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .features = RTL_FEATURE_GMII
+ },
+ [RTL_CFG_1] = {
+ .hw_start = rtl_hw_start_8168,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow |
+ TxErr | TxOK | RxOK | RxErr,
+ .napi_event = TxErr | TxOK | RxOK | RxOverflow,
+ .features = RTL_FEATURE_GMII | RTL_FEATURE_MSI
+ },
+ [RTL_CFG_2] = {
+ .hw_start = rtl_hw_start_8101,
+ .region = 2,
+ .align = 8,
+ .intr_event = SYSErr | LinkChg | RxOverflow | PCSTimeout |
+ RxFIFOOver | TxErr | TxOK | RxOK | RxErr,
+ .napi_event = RxFIFOOver | TxErr | TxOK | RxOK | RxOverflow,
+ .features = RTL_FEATURE_MSI
+ }
+};
+
+/* Cfg9346_Unlock assumed. */
+static unsigned rtl_try_msi(struct pci_dev *pdev, void __iomem *ioaddr,
+ const struct rtl_cfg_info *cfg)
+{
+ unsigned msi = 0;
+ u8 cfg2;
+
+ cfg2 = RTL_R8(Config2) & ~MSIEnable;
+ if (cfg->features & RTL_FEATURE_MSI) {
+ if (pci_enable_msi(pdev)) {
+ dev_info(&pdev->dev, "no MSI. Back to INTx.\n");
+ } else {
+ cfg2 |= MSIEnable;
+ msi = RTL_FEATURE_MSI;
+ }
+ }
+ RTL_W8(Config2, cfg2);
+ return msi;
+}
+
+static void rtl_disable_msi(struct pci_dev *pdev, struct rtl8169_private *tp)
+{
+ if (tp->features & RTL_FEATURE_MSI) {
+ pci_disable_msi(pdev);
+ tp->features &= ~RTL_FEATURE_MSI;
+ }
+}
+
+static int __devinit
+rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
+ const unsigned int region = cfg->region;
+ struct rtl8169_private *tp;
+ struct mii_if_info *mii;
+ struct net_device *dev;
+ void __iomem *ioaddr;
+ unsigned int i;
+ int rc;
+
+ if (netif_msg_drv(&debug)) {
+ printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n",
+ MODULENAME, RTL8169_VERSION);
+ }
+
+ dev = alloc_etherdev(sizeof (*tp));
+ if (!dev) {
+ if (netif_msg_drv(&debug))
+ dev_err(&pdev->dev, "unable to alloc new ethernet\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ tp = netdev_priv(dev);
+ tp->dev = dev;
+ tp->pci_dev = pdev;
+ tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT);
+
+ mii = &tp->mii;
+ mii->dev = dev;
+ mii->mdio_read = rtl_mdio_read;
+ mii->mdio_write = rtl_mdio_write;
+ mii->phy_id_mask = 0x1f;
+ mii->reg_num_mask = 0x1f;
+ mii->supports_gmii = !!(cfg->features & RTL_FEATURE_GMII);
+
+ /* enable device (incl. PCI PM wakeup and hotplug setup) */
+ rc = pci_enable_device(pdev);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "enable failure\n");
+ goto err_out_free_dev_1;
+ }
+
+ rc = pci_set_mwi(pdev);
+ if (rc < 0)
+ goto err_out_disable_2;
+
+ /* make sure PCI base addr 1 is MMIO */
+ if (!(pci_resource_flags(pdev, region) & IORESOURCE_MEM)) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "region #%d not an MMIO resource, aborting\n",
+ region);
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ /* check for weird/broken PCI region reporting */
+ if (pci_resource_len(pdev, region) < R8169_REGS_SIZE) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "Invalid PCI region size(s), aborting\n");
+ }
+ rc = -ENODEV;
+ goto err_out_mwi_3;
+ }
+
+ rc = pci_request_regions(pdev, MODULENAME);
+ if (rc < 0) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "could not request regions.\n");
+ goto err_out_mwi_3;
+ }
+
+ tp->cp_cmd = PCIMulRW | RxChkSum;
+
+ if ((sizeof(dma_addr_t) > 4) &&
+ !pci_set_dma_mask(pdev, DMA_64BIT_MASK) && use_dac) {
+ tp->cp_cmd |= PCIDAC;
+ dev->features |= NETIF_F_HIGHDMA;
+ } else {
+ rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+ if (rc < 0) {
+ if (netif_msg_probe(tp)) {
+ dev_err(&pdev->dev,
+ "DMA configuration failed.\n");
+ }
+ goto err_out_free_res_4;
+ }
+ }
+
+ pci_set_master(pdev);
+
+ /* ioremap MMIO region */
+ ioaddr = ioremap(pci_resource_start(pdev, region), R8169_REGS_SIZE);
+ if (!ioaddr) {
+ if (netif_msg_probe(tp))
+ dev_err(&pdev->dev, "cannot remap MMIO, aborting\n");
+ rc = -EIO;
+ goto err_out_free_res_4;
+ }
+
+ tp->pcie_cap = pci_find_capability(pdev, PCI_CAP_ID_EXP);
+ if (!tp->pcie_cap && netif_msg_probe(tp))
+ dev_info(&pdev->dev, "no PCI Express capability\n");
+
+ RTL_W16(IntrMask, 0x0000);
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ RTL_W16(IntrStatus, 0xffff);
+
+ /* Identify chip attached to board */
+ rtl8169_get_mac_version(tp, ioaddr);
+
+ rtl8169_print_mac_version(tp);
+
+ for (i = 0; i < ARRAY_SIZE(rtl_chip_info); i++) {
+ if (tp->mac_version == rtl_chip_info[i].mac_version)
+ break;
+ }
+ if (i == ARRAY_SIZE(rtl_chip_info)) {
+ /* Unknown chip: assume array element #0, original RTL-8169 */
+ if (netif_msg_probe(tp)) {
+ dev_printk(KERN_DEBUG, &pdev->dev,
+ "unknown chip version, assuming %s\n",
+ rtl_chip_info[0].name);
+ }
+ i = 0;
+ }
+ tp->chipset = i;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ RTL_W8(Config1, RTL_R8(Config1) | PMEnable);
+ RTL_W8(Config5, RTL_R8(Config5) & PMEStatus);
+ if ((RTL_R8(Config3) & (LinkUp | MagicPacket)) != 0)
+ tp->features |= RTL_FEATURE_WOL;
+ if ((RTL_R8(Config5) & (UWF | BWF | MWF)) != 0)
+ tp->features |= RTL_FEATURE_WOL;
+ tp->features |= rtl_try_msi(pdev, ioaddr, cfg);
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ if ((tp->mac_version <= RTL_GIGA_MAC_VER_06) &&
+ (RTL_R8(PHYstatus) & TBI_Enable)) {
+ tp->set_speed = rtl8169_set_speed_tbi;
+ tp->get_settings = rtl8169_gset_tbi;
+ tp->phy_reset_enable = rtl8169_tbi_reset_enable;
+ tp->phy_reset_pending = rtl8169_tbi_reset_pending;
+ tp->link_ok = rtl8169_tbi_link_ok;
+
+ tp->phy_1000_ctrl_reg = ADVERTISE_1000FULL; /* Implied by TBI */
+ } else {
+ tp->set_speed = rtl8169_set_speed_xmii;
+ tp->get_settings = rtl8169_gset_xmii;
+ tp->phy_reset_enable = rtl8169_xmii_reset_enable;
+ tp->phy_reset_pending = rtl8169_xmii_reset_pending;
+ tp->link_ok = rtl8169_xmii_link_ok;
+
+ dev->do_ioctl = rtl8169_ioctl;
+ }
+
+ spin_lock_init(&tp->lock);
+
+ tp->mmio_addr = ioaddr;
+
+ /* Get MAC address */
+ for (i = 0; i < MAC_ADDR_LEN; i++)
+ dev->dev_addr[i] = RTL_R8(MAC0 + i);
+ memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
+
+ dev->open = rtl8169_open;
+ dev->hard_start_xmit = rtl8169_start_xmit;
+ dev->get_stats = rtl8169_get_stats;
+ SET_ETHTOOL_OPS(dev, &rtl8169_ethtool_ops);
+ dev->stop = rtl8169_close;
+ dev->tx_timeout = rtl8169_tx_timeout;
+ dev->set_multicast_list = rtl_set_rx_mode;
+ dev->watchdog_timeo = RTL8169_TX_TIMEOUT;
+ dev->irq = pdev->irq;
+ dev->base_addr = (unsigned long) ioaddr;
+ dev->change_mtu = rtl8169_change_mtu;
+ dev->set_mac_address = rtl_set_mac_address;
+
+ netif_napi_add(dev, &tp->napi, rtl8169_poll, R8169_NAPI_WEIGHT);
+
+#ifdef CONFIG_R8169_VLAN
+ dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+ dev->vlan_rx_register = rtl8169_vlan_rx_register;
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ dev->poll_controller = rtl8169_netpoll;
+#endif
+
+ tp->intr_mask = 0xffff;
+ tp->align = cfg->align;
+ tp->hw_start = cfg->hw_start;
+ tp->intr_event = cfg->intr_event;
+ tp->napi_event = cfg->napi_event;
+
+ init_timer(&tp->timer);
+ tp->timer.data = (unsigned long) dev;
+ tp->timer.function = rtl8169_phy_timer;
+
+ rc = register_netdev(dev);
+ if (rc < 0)
+ goto err_out_msi_5;
+
+ pci_set_drvdata(pdev, dev);
+
+ if (netif_msg_probe(tp)) {
+ u32 xid = RTL_R32(TxConfig) & 0x7cf0f8ff;
+
+ printk(KERN_INFO "%s: %s at 0x%lx, "
+ "%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x, "
+ "XID %08x IRQ %d\n",
+ dev->name,
+ rtl_chip_info[tp->chipset].name,
+ dev->base_addr,
+ dev->dev_addr[0], dev->dev_addr[1],
+ dev->dev_addr[2], dev->dev_addr[3],
+ dev->dev_addr[4], dev->dev_addr[5], xid, dev->irq);
+ }
+
+ rtl8169_init_phy(dev, tp);
+ device_set_wakeup_enable(&pdev->dev, tp->features & RTL_FEATURE_WOL);
+
+out:
+ return rc;
+
+err_out_msi_5:
+ rtl_disable_msi(pdev, tp);
+ iounmap(ioaddr);
+err_out_free_res_4:
+ pci_release_regions(pdev);
+err_out_mwi_3:
+ pci_clear_mwi(pdev);
+err_out_disable_2:
+ pci_disable_device(pdev);
+err_out_free_dev_1:
+ free_netdev(dev);
+ goto out;
+}
+
+static void __devexit rtl8169_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ flush_scheduled_work();
+
+ unregister_netdev(dev);
+ rtl_disable_msi(pdev, tp);
+ rtl8169_release_board(pdev, dev, tp->mmio_addr);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static void rtl8169_set_rxbufsize(struct rtl8169_private *tp,
+ struct net_device *dev)
+{
+ unsigned int mtu = dev->mtu;
+
+ tp->rx_buf_sz = (mtu > RX_BUF_SIZE) ? mtu + ETH_HLEN + 8 : RX_BUF_SIZE;
+}
+
+static int rtl8169_open(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ int retval = -ENOMEM;
+
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ /*
+	 * Rx and Tx descriptors need 256-byte alignment.
+ * pci_alloc_consistent provides more.
+ */
+ tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES,
+ &tp->TxPhyAddr);
+ if (!tp->TxDescArray)
+ goto out;
+
+ tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES,
+ &tp->RxPhyAddr);
+ if (!tp->RxDescArray)
+ goto err_free_tx_0;
+
+ retval = rtl8169_init_ring(dev);
+ if (retval < 0)
+ goto err_free_rx_1;
+
+ INIT_DELAYED_WORK(&tp->task, NULL);
+
+ smp_mb();
+
+ retval = request_irq(dev->irq, rtl8169_interrupt,
+ (tp->features & RTL_FEATURE_MSI) ? 0 : IRQF_SHARED,
+ dev->name, dev);
+ if (retval < 0)
+ goto err_release_ring_2;
+
+ napi_enable(&tp->napi);
+
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+out:
+ return retval;
+
+err_release_ring_2:
+ rtl8169_rx_clear(tp);
+err_free_rx_1:
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+err_free_tx_0:
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ goto out;
+}
+
+static void rtl8169_hw_reset(void __iomem *ioaddr)
+{
+ /* Disable interrupts */
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ /* Reset the chipset */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* PCI commit */
+ RTL_R8(ChipCmd);
+}
+
+static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 cfg = rtl8169_rx_config;
+
+ cfg |= (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+ RTL_W32(RxConfig, cfg);
+
+ /* Set DMA burst size and Interframe Gap Time */
+ RTL_W32(TxConfig, (TX_DMA_BURST << TxDMAShift) |
+ (InterFrameGap << TxInterFrameGapShift));
+}
+
+static void rtl_hw_start(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int i;
+
+ /* Soft reset the chip. */
+ RTL_W8(ChipCmd, CmdReset);
+
+ /* Check that the chip has finished the reset. */
+ for (i = 0; i < 100; i++) {
+ if ((RTL_R8(ChipCmd) & CmdReset) == 0)
+ break;
+ msleep_interruptible(1);
+ }
+
+ tp->hw_start(dev);
+
+ netif_start_queue(dev);
+}
+
+
+static void rtl_set_rx_tx_desc_registers(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ /*
+ * Magic spell: some iop3xx ARM board needs the TxDescAddrHigh
+ * register to be written before TxDescAddrLow to work.
+ * Switching from MMIO to I/O access fixes the issue as well.
+ */
+ RTL_W32(TxDescStartAddrHigh, ((u64) tp->TxPhyAddr) >> 32);
+ RTL_W32(TxDescStartAddrLow, ((u64) tp->TxPhyAddr) & DMA_32BIT_MASK);
+ RTL_W32(RxDescAddrHigh, ((u64) tp->RxPhyAddr) >> 32);
+ RTL_W32(RxDescAddrLow, ((u64) tp->RxPhyAddr) & DMA_32BIT_MASK);
+}
+
+static u16 rtl_rw_cpluscmd(void __iomem *ioaddr)
+{
+ u16 cmd;
+
+ cmd = RTL_R16(CPlusCmd);
+ RTL_W16(CPlusCmd, cmd);
+ return cmd;
+}
+
+static void rtl_set_rx_max_size(void __iomem *ioaddr)
+{
+ /* Low hurts. Let's disable the filtering. */
+ RTL_W16(RxMaxSize, 16383);
+}
+
+static void rtl8169_set_magic_reg(void __iomem *ioaddr, unsigned mac_version)
+{
+ struct {
+ u32 mac_version;
+ u32 clk;
+ u32 val;
+ } cfg2_info [] = {
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_33MHz, 0x000fff00 }, // 8110SCd
+ { RTL_GIGA_MAC_VER_05, PCI_Clock_66MHz, 0x000fffff },
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_33MHz, 0x00ffff00 }, // 8110SCe
+ { RTL_GIGA_MAC_VER_06, PCI_Clock_66MHz, 0x00ffffff }
+ }, *p = cfg2_info;
+ unsigned int i;
+ u32 clk;
+
+ clk = RTL_R8(Config2) & PCI_Clock_66MHz;
+ for (i = 0; i < ARRAY_SIZE(cfg2_info); i++, p++) {
+ if ((p->mac_version == mac_version) && (p->clk == clk)) {
+ RTL_W32(0x7c, p->val);
+ break;
+ }
+ }
+}
+
+static void rtl_hw_start_8169(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_05) {
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) | PCIMulRW);
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x08);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_01) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_04))
+ rtl_set_rx_tx_config_registers(tp);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_02) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_03)) {
+ dprintk("Set MAC Reg C+CR Offset 0xE0. "
+ "Bit-3 and bit-14 MUST be 1\n");
+ tp->cp_cmd |= (1 << 14);
+ }
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ rtl8169_set_magic_reg(ioaddr, tp->mac_version);
+
+ /*
+ * Undocumented corner. Supposedly:
+ * (TxTimer << 12) | (TxPackets << 8) | (RxTimer << 4) | RxPackets
+ */
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ if ((tp->mac_version != RTL_GIGA_MAC_VER_01) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_02) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_03) &&
+ (tp->mac_version != RTL_GIGA_MAC_VER_04)) {
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ /* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ RTL_R8(IntrMask);
+
+ RTL_W32(RxMissed, 0);
+
+ rtl_set_rx_mode(dev);
+
+ /* no early-rx interrupts */
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
+ /* Enable all known interrupts by setting the interrupt mask. */
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static void rtl_tx_performance_tweak(struct pci_dev *pdev, u16 force)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int cap = tp->pcie_cap;
+
+ if (cap) {
+ u16 ctl;
+
+ pci_read_config_word(pdev, cap + PCI_EXP_DEVCTL, &ctl);
+ ctl = (ctl & ~PCI_EXP_DEVCTL_READRQ) | force;
+ pci_write_config_word(pdev, cap + PCI_EXP_DEVCTL, ctl);
+ }
+}
+
+static void rtl_csi_access_enable(void __iomem *ioaddr)
+{
+ u32 csi;
+
+ csi = rtl_csi_read(ioaddr, 0x070c) & 0x00ffffff;
+ rtl_csi_write(ioaddr, 0x070c, csi | 0x27000000);
+}
+
+struct ephy_info {
+ unsigned int offset;
+ u16 mask;
+ u16 bits;
+};
+
+static void rtl_ephy_init(void __iomem *ioaddr, struct ephy_info *e, int len)
+{
+ u16 w;
+
+ while (len-- > 0) {
+ w = (rtl_ephy_read(ioaddr, e->offset) & ~e->mask) | e->bits;
+ rtl_ephy_write(ioaddr, e->offset, w);
+ e++;
+ }
+}
+
+static void rtl_disable_clock_request(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int cap = tp->pcie_cap;
+
+ if (cap) {
+ u16 ctl;
+
+ pci_read_config_word(pdev, cap + PCI_EXP_LNKCTL, &ctl);
+ ctl &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
+ pci_write_config_word(pdev, cap + PCI_EXP_LNKCTL, ctl);
+ }
+}
+
+#define R8168_CPCMD_QUIRK_MASK (\
+ EnableBist | \
+ Mac_dbgo_oe | \
+ Force_half_dup | \
+ Force_rxflow_en | \
+ Force_txflow_en | \
+ Cxpl_dbg_sel | \
+ ASF | \
+ PktCntrDisable | \
+ Mac_dbgo_sel)
+
+static void rtl_hw_start_8168bb(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+
+ rtl_tx_performance_tweak(pdev,
+ (0x5 << MAX_READ_REQUEST_SHIFT) | PCI_EXP_DEVCTL_NOSNOOP_EN);
+}
+
+static void rtl_hw_start_8168bef(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_hw_start_8168bb(ioaddr, pdev);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ RTL_W8(Config4, RTL_R8(Config4) & ~(1 << 0));
+}
+
+static void __rtl_hw_start_8168cp(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ RTL_W8(Config1, RTL_R8(Config1) | Speed_down);
+
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ rtl_disable_clock_request(pdev);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168cp_1(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8168cp[] = {
+ { 0x01, 0, 0x0001 },
+ { 0x02, 0x0800, 0x1000 },
+ { 0x03, 0, 0x0042 },
+ { 0x06, 0x0080, 0x0000 },
+ { 0x07, 0, 0x2000 }
+ };
+
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_ephy_init(ioaddr, e_info_8168cp, ARRAY_SIZE(e_info_8168cp));
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168cp_2(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168cp_3(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ /* Magic. */
+ RTL_W8(DBG_REG, 0x20);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168c_1(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8168c_1[] = {
+ { 0x02, 0x0800, 0x1000 },
+ { 0x03, 0, 0x0002 },
+ { 0x06, 0x0080, 0x0000 }
+ };
+
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(DBG_REG, 0x06 | FIX_NAK_1 | FIX_NAK_2);
+
+ rtl_ephy_init(ioaddr, e_info_8168c_1, ARRAY_SIZE(e_info_8168c_1));
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168c_2(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8168c_2[] = {
+ { 0x01, 0, 0x0001 },
+ { 0x03, 0x0400, 0x0220 }
+ };
+
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_ephy_init(ioaddr, e_info_8168c_2, ARRAY_SIZE(e_info_8168c_2));
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168c_3(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_hw_start_8168c_2(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168c_4(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ __rtl_hw_start_8168cp(ioaddr, pdev);
+}
+
+static void rtl_hw_start_8168d(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_disable_clock_request(pdev);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R8168_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8168(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ tp->cp_cmd |= RTL_R16(CPlusCmd) | PktCntrDisable | INTT_1;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ RTL_W16(IntrMitigate, 0x5151);
+
+	/* Workaround for RxFIFO overflow. */
+ if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+ tp->intr_event |= RxFIFOOver | PCSTimeout;
+ tp->intr_event &= ~RxOverflow;
+ }
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W32(TxConfig, (TX_DMA_BURST << TxDMAShift) |
+ (InterFrameGap << TxInterFrameGapShift));
+
+ RTL_R8(IntrMask);
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_11:
+ rtl_hw_start_8168bb(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_12:
+ case RTL_GIGA_MAC_VER_17:
+ rtl_hw_start_8168bef(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_18:
+ rtl_hw_start_8168cp_1(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_19:
+ rtl_hw_start_8168c_1(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_20:
+ rtl_hw_start_8168c_2(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_21:
+ rtl_hw_start_8168c_3(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_22:
+ rtl_hw_start_8168c_4(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_23:
+ rtl_hw_start_8168cp_2(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_24:
+ rtl_hw_start_8168cp_3(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_25:
+ rtl_hw_start_8168d(ioaddr, pdev);
+ break;
+
+ default:
+ printk(KERN_ERR PFX "%s: unknown chipset (mac_version = %d).\n",
+ dev->name, tp->mac_version);
+ break;
+ }
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+#define R810X_CPCMD_QUIRK_MASK (\
+ EnableBist | \
+ Mac_dbgo_oe | \
+ Force_half_dup | \
+	Force_rxflow_en | \
+ Force_txflow_en | \
+ Cxpl_dbg_sel | \
+ ASF | \
+ PktCntrDisable | \
+ PCIDAC | \
+ PCIMulRW)
+
+static void rtl_hw_start_8102e_1(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ static struct ephy_info e_info_8102e_1[] = {
+ { 0x01, 0, 0x6e65 },
+ { 0x02, 0, 0x091f },
+ { 0x03, 0, 0xc2f9 },
+ { 0x06, 0, 0xafb5 },
+ { 0x07, 0, 0x0e00 },
+ { 0x19, 0, 0xec80 },
+ { 0x01, 0, 0x2e65 },
+ { 0x01, 0, 0x6e65 }
+ };
+ u8 cfg1;
+
+ rtl_csi_access_enable(ioaddr);
+
+ RTL_W8(DBG_REG, FIX_NAK_1);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W8(Config1,
+ LEDS1 | LEDS0 | Speed_down | MEMMAP | IOMAP | VPD | PMEnable);
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ cfg1 = RTL_R8(Config1);
+ if ((cfg1 & LEDS0) && (cfg1 & LEDS1))
+ RTL_W8(Config1, cfg1 & ~LEDS0);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R810X_CPCMD_QUIRK_MASK);
+
+ rtl_ephy_init(ioaddr, e_info_8102e_1, ARRAY_SIZE(e_info_8102e_1));
+}
+
+static void rtl_hw_start_8102e_2(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_csi_access_enable(ioaddr);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ RTL_W8(Config1, MEMMAP | IOMAP | VPD | PMEnable);
+ RTL_W8(Config3, RTL_R8(Config3) & ~Beacon_en);
+
+ RTL_W16(CPlusCmd, RTL_R16(CPlusCmd) & ~R810X_CPCMD_QUIRK_MASK);
+}
+
+static void rtl_hw_start_8102e_3(void __iomem *ioaddr, struct pci_dev *pdev)
+{
+ rtl_hw_start_8102e_2(ioaddr, pdev);
+
+ rtl_ephy_write(ioaddr, 0x03, 0xc2f9);
+}
+
+static void rtl_hw_start_8101(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_13) ||
+ (tp->mac_version == RTL_GIGA_MAC_VER_16)) {
+ int cap = tp->pcie_cap;
+
+ if (cap) {
+ pci_write_config_word(pdev, cap + PCI_EXP_DEVCTL,
+ PCI_EXP_DEVCTL_NOSNOOP_EN);
+ }
+ }
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_07:
+ rtl_hw_start_8102e_1(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_08:
+ rtl_hw_start_8102e_3(ioaddr, pdev);
+ break;
+
+ case RTL_GIGA_MAC_VER_09:
+ rtl_hw_start_8102e_2(ioaddr, pdev);
+ break;
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Unlock);
+
+ RTL_W8(EarlyTxThres, EarlyTxThld);
+
+ rtl_set_rx_max_size(ioaddr);
+
+ tp->cp_cmd |= rtl_rw_cpluscmd(ioaddr) | PCIMulRW;
+
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+
+ RTL_W16(IntrMitigate, 0x0000);
+
+ rtl_set_rx_tx_desc_registers(tp, ioaddr);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_set_rx_tx_config_registers(tp);
+
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ RTL_R8(IntrMask);
+
+ rtl_set_rx_mode(dev);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xf000);
+
+ RTL_W16(IntrMask, tp->intr_event);
+}
+
+static int rtl8169_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ int ret = 0;
+
+ if (new_mtu < ETH_ZLEN || new_mtu > SafeMtu)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ if (!netif_running(dev))
+ goto out;
+
+ rtl8169_down(dev);
+
+ rtl8169_set_rxbufsize(tp, dev);
+
+ ret = rtl8169_init_ring(dev);
+ if (ret < 0)
+ goto out;
+
+ napi_enable(&tp->napi);
+
+ rtl_hw_start(dev);
+
+ rtl8169_request_timer(dev);
+
+out:
+ return ret;
+}
+
+static inline void rtl8169_make_unusable_by_asic(struct RxDesc *desc)
+{
+ desc->addr = cpu_to_le64(0x0badbadbadbadbadull);
+ desc->opts1 &= ~cpu_to_le32(DescOwn | RsvdMask);
+}
+
+static void rtl8169_free_rx_skb(struct rtl8169_private *tp,
+ struct sk_buff **sk_buff, struct RxDesc *desc)
+{
+ struct pci_dev *pdev = tp->pci_dev;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), tp->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(*sk_buff);
+ *sk_buff = NULL;
+ rtl8169_make_unusable_by_asic(desc);
+}
+
+static inline void rtl8169_mark_to_asic(struct RxDesc *desc, u32 rx_buf_sz)
+{
+ u32 eor = le32_to_cpu(desc->opts1) & RingEnd;
+
+ desc->opts1 = cpu_to_le32(DescOwn | eor | rx_buf_sz);
+}
+
+static inline void rtl8169_map_to_asic(struct RxDesc *desc, dma_addr_t mapping,
+ u32 rx_buf_sz)
+{
+ desc->addr = cpu_to_le64(mapping);
+ wmb();
+ rtl8169_mark_to_asic(desc, rx_buf_sz);
+}
+
+static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev,
+ struct net_device *dev,
+ struct RxDesc *desc, int rx_buf_sz,
+ unsigned int align)
+{
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+ unsigned int pad;
+
+ pad = align ? align : NET_IP_ALIGN;
+
+ skb = netdev_alloc_skb(dev, rx_buf_sz + pad);
+ if (!skb)
+ goto err_out;
+
+ skb_reserve(skb, align ? ((pad - 1) & (unsigned long)skb->data) : pad);
+
+ mapping = pci_map_single(pdev, skb->data, rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+
+ rtl8169_map_to_asic(desc, mapping, rx_buf_sz);
+out:
+ return skb;
+
+err_out:
+ rtl8169_make_unusable_by_asic(desc);
+ goto out;
+}
+
+static void rtl8169_rx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = 0; i < NUM_RX_DESC; i++) {
+ if (tp->Rx_skbuff[i]) {
+ rtl8169_free_rx_skb(tp, tp->Rx_skbuff + i,
+ tp->RxDescArray + i);
+ }
+ }
+}
+
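+/*
+ * Allocate receive skbs for the empty slots between 'start' and 'end' and
+ * hand their descriptors back to the NIC. Stops early if an allocation
+ * fails and returns the number of slots it got through.
+ */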
+static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev,
+ u32 start, u32 end)
+{
+ u32 cur;
+
+ for (cur = start; end - cur != 0; cur++) {
+ struct sk_buff *skb;
+ unsigned int i = cur % NUM_RX_DESC;
+
+ WARN_ON((s32)(end - cur) < 0);
+
+ if (tp->Rx_skbuff[i])
+ continue;
+
+ skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev,
+ tp->RxDescArray + i,
+ tp->rx_buf_sz, tp->align);
+ if (!skb)
+ break;
+
+ tp->Rx_skbuff[i] = skb;
+ }
+ return cur - start;
+}
+
+static inline void rtl8169_mark_as_last_descriptor(struct RxDesc *desc)
+{
+ desc->opts1 |= cpu_to_le32(RingEnd);
+}
+
+static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
+{
+ tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;
+}
+
+static int rtl8169_init_ring(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ rtl8169_init_ring_indexes(tp);
+
+ memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info));
+ memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *));
+
+ if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)
+ goto err_out;
+
+ rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1);
+
+ return 0;
+
+err_out:
+ rtl8169_rx_clear(tp);
+ return -ENOMEM;
+}
+
+static void rtl8169_unmap_tx_skb(struct pci_dev *pdev, struct ring_info *tx_skb,
+ struct TxDesc *desc)
+{
+ unsigned int len = tx_skb->len;
+
+ pci_unmap_single(pdev, le64_to_cpu(desc->addr), len, PCI_DMA_TODEVICE);
+ desc->opts1 = 0x00;
+ desc->opts2 = 0x00;
+ desc->addr = 0x00;
+ tx_skb->len = 0;
+}
+
+static void rtl8169_tx_clear(struct rtl8169_private *tp)
+{
+ unsigned int i;
+
+ for (i = tp->dirty_tx; i < tp->dirty_tx + NUM_TX_DESC; i++) {
+ unsigned int entry = i % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ unsigned int len = tx_skb->len;
+
+ if (len) {
+ struct sk_buff *skb = tx_skb->skb;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb,
+ tp->TxDescArray + entry);
+ if (skb) {
+ dev_kfree_skb(skb);
+ tx_skb->skb = NULL;
+ }
+ tp->dev->stats.tx_dropped++;
+ }
+ }
+ tp->cur_tx = tp->dirty_tx = 0;
+}
+
+static void rtl8169_schedule_work(struct net_device *dev, work_func_t task)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ PREPARE_DELAYED_WORK(&tp->task, task);
+ schedule_delayed_work(&tp->task, 4);
+}
+
+static void rtl8169_wait_for_quiescence(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ synchronize_irq(dev->irq);
+
+ /* Wait for any pending NAPI task to complete */
+ napi_disable(&tp->napi);
+
+ rtl8169_irq_mask_and_ack(ioaddr);
+
+ tp->intr_mask = 0xffff;
+ RTL_W16(IntrMask, tp->intr_event);
+ napi_enable(&tp->napi);
+}
+
+static void rtl8169_reinit_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+ int ret;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+ rtl8169_close(dev);
+
+ ret = rtl8169_open(dev);
+ if (unlikely(ret < 0)) {
+ if (net_ratelimit() && netif_msg_drv(tp)) {
+ printk(KERN_ERR PFX "%s: reinit failure (status = %d)."
+ " Rescheduling.\n", dev->name, ret);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_reset_task(struct work_struct *work)
+{
+ struct rtl8169_private *tp =
+ container_of(work, struct rtl8169_private, task.work);
+ struct net_device *dev = tp->dev;
+
+ rtnl_lock();
+
+ if (!netif_running(dev))
+ goto out_unlock;
+
+ rtl8169_wait_for_quiescence(dev);
+
+ rtl8169_rx_interrupt(dev, tp, tp->mmio_addr, ~(u32)0);
+ rtl8169_tx_clear(tp);
+
+ if (tp->dirty_rx == tp->cur_rx) {
+ rtl8169_init_ring_indexes(tp);
+ rtl_hw_start(dev);
+ netif_wake_queue(dev);
+ rtl8169_check_link_status(dev, tp, tp->mmio_addr);
+ } else {
+ if (net_ratelimit() && netif_msg_intr(tp)) {
+ printk(KERN_EMERG PFX "%s: Rx buffers shortage\n",
+ dev->name);
+ }
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ }
+
+out_unlock:
+ rtnl_unlock();
+}
+
+static void rtl8169_tx_timeout(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ rtl8169_hw_reset(tp->mmio_addr);
+
+ /* Let's wait a bit while any (async) irq lands on */
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+}
+
+static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
+ u32 opts1)
+{
+ struct skb_shared_info *info = skb_shinfo(skb);
+ unsigned int cur_frag, entry;
+ struct TxDesc * uninitialized_var(txd);
+
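+	/*
+	 * Map each paged fragment to its own Tx descriptor. The skb's linear
+	 * data is handled by the caller, which uses the descriptor at
+	 * tp->cur_tx itself; the last fragment gets LastFrag and keeps the
+	 * skb pointer for the completion path.
+	 */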
+ entry = tp->cur_tx;
+ for (cur_frag = 0; cur_frag < info->nr_frags; cur_frag++) {
+ skb_frag_t *frag = info->frags + cur_frag;
+ dma_addr_t mapping;
+ u32 status, len;
+ void *addr;
+
+ entry = (entry + 1) % NUM_TX_DESC;
+
+ txd = tp->TxDescArray + entry;
+ len = frag->size;
+ addr = ((void *) page_address(frag->page)) + frag->page_offset;
+ mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE);
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+
+ txd->opts1 = cpu_to_le32(status);
+ txd->addr = cpu_to_le64(mapping);
+
+ tp->tx_skb[entry].len = len;
+ }
+
+ if (cur_frag) {
+ tp->tx_skb[entry].skb = skb;
+ txd->opts1 |= cpu_to_le32(LastFrag);
+ }
+
+ return cur_frag;
+}
+
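+/*
+ * Build the offload bits of opts1: request hardware TSO when the skb
+ * carries a GSO size, otherwise request IP plus TCP/UDP checksum insertion
+ * for CHECKSUM_PARTIAL packets.
+ */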
+static inline u32 rtl8169_tso_csum(struct sk_buff *skb, struct net_device *dev)
+{
+ if (dev->features & NETIF_F_TSO) {
+ u32 mss = skb_shinfo(skb)->gso_size;
+
+ if (mss)
+ return LargeSend | ((mss & MSSMask) << MSSShift);
+ }
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ const struct iphdr *ip = ip_hdr(skb);
+
+ if (ip->protocol == IPPROTO_TCP)
+ return IPCS | TCPCS;
+ else if (ip->protocol == IPPROTO_UDP)
+ return IPCS | UDPCS;
+ WARN_ON(1); /* we need a WARN() */
+ }
+ return 0;
+}
+
+static int rtl8169_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ unsigned int frags, entry = tp->cur_tx % NUM_TX_DESC;
+ struct TxDesc *txd = tp->TxDescArray + entry;
+ void __iomem *ioaddr = tp->mmio_addr;
+ dma_addr_t mapping;
+ u32 status, len;
+ u32 opts1;
+ int ret = NETDEV_TX_OK;
+
+ if (unlikely(TX_BUFFS_AVAIL(tp) < skb_shinfo(skb)->nr_frags)) {
+ if (netif_msg_drv(tp)) {
+ printk(KERN_ERR
+ "%s: BUG! Tx Ring full when queue awake!\n",
+ dev->name);
+ }
+ goto err_stop;
+ }
+
+ if (unlikely(le32_to_cpu(txd->opts1) & DescOwn))
+ goto err_stop;
+
+ opts1 = DescOwn | rtl8169_tso_csum(skb, dev);
+
+ frags = rtl8169_xmit_frags(tp, skb, opts1);
+ if (frags) {
+ len = skb_headlen(skb);
+ opts1 |= FirstFrag;
+ } else {
+ len = skb->len;
+
+ if (unlikely(len < ETH_ZLEN)) {
+ if (skb_padto(skb, ETH_ZLEN))
+ goto err_update_stats;
+ len = ETH_ZLEN;
+ }
+
+ opts1 |= FirstFrag | LastFrag;
+ tp->tx_skb[entry].skb = skb;
+ }
+
+ mapping = pci_map_single(tp->pci_dev, skb->data, len, PCI_DMA_TODEVICE);
+
+ tp->tx_skb[entry].len = len;
+ txd->addr = cpu_to_le64(mapping);
+ txd->opts2 = cpu_to_le32(rtl8169_tx_vlan_tag(tp, skb));
+
+ wmb();
+
+ /* anti gcc 2.95.3 bugware (sic) */
+ status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+ txd->opts1 = cpu_to_le32(status);
+
+ dev->trans_start = jiffies;
+
+ tp->cur_tx += frags + 1;
+
+ smp_wmb();
+
+ RTL_W8(TxPoll, NPQ); /* set polling bit */
+
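+	/*
+	 * Stop the queue when the ring is nearly full, then re-check after
+	 * the barrier in case the completion path freed descriptors in the
+	 * meantime.
+	 */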
+ if (TX_BUFFS_AVAIL(tp) < MAX_SKB_FRAGS) {
+ netif_stop_queue(dev);
+ smp_rmb();
+ if (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)
+ netif_wake_queue(dev);
+ }
+
+out:
+ return ret;
+
+err_stop:
+ netif_stop_queue(dev);
+ ret = NETDEV_TX_BUSY;
+err_update_stats:
+ dev->stats.tx_dropped++;
+ goto out;
+}
+
+static void rtl8169_pcierr_interrupt(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ u16 pci_status, pci_cmd;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+ pci_read_config_word(pdev, PCI_STATUS, &pci_status);
+
+ if (netif_msg_intr(tp)) {
+ printk(KERN_ERR
+ "%s: PCI error (cmd = 0x%04x, status = 0x%04x).\n",
+ dev->name, pci_cmd, pci_status);
+ }
+
+ /*
+	 * The recovery sequence below admits a very elaborate explanation:
+ * - it seems to work;
+ * - I did not see what else could be done;
+ * - it makes iop3xx happy.
+ *
+ * Feel free to adjust to your needs.
+ */
+ if (pdev->broken_parity_status)
+ pci_cmd &= ~PCI_COMMAND_PARITY;
+ else
+ pci_cmd |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY;
+
+ pci_write_config_word(pdev, PCI_COMMAND, pci_cmd);
+
+ pci_write_config_word(pdev, PCI_STATUS,
+ pci_status & (PCI_STATUS_DETECTED_PARITY |
+ PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_REC_MASTER_ABORT |
+ PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_SIG_TARGET_ABORT));
+
+ /* The infamous DAC f*ckup only happens at boot time */
+ if ((tp->cp_cmd & PCIDAC) && !tp->dirty_rx && !tp->cur_rx) {
+ if (netif_msg_intr(tp))
+ printk(KERN_INFO "%s: disabling PCI DAC.\n", dev->name);
+ tp->cp_cmd &= ~PCIDAC;
+ RTL_W16(CPlusCmd, tp->cp_cmd);
+ dev->features &= ~NETIF_F_HIGHDMA;
+ }
+
+ rtl8169_hw_reset(ioaddr);
+
+ rtl8169_schedule_work(dev, rtl8169_reinit_task);
+}
+
+static void rtl8169_tx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ unsigned int dirty_tx, tx_left;
+
+ dirty_tx = tp->dirty_tx;
+ smp_rmb();
+ tx_left = tp->cur_tx - dirty_tx;
+
+ while (tx_left > 0) {
+ unsigned int entry = dirty_tx % NUM_TX_DESC;
+ struct ring_info *tx_skb = tp->tx_skb + entry;
+ u32 len = tx_skb->len;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(tp->TxDescArray[entry].opts1);
+ if (status & DescOwn)
+ break;
+
+ dev->stats.tx_bytes += len;
+ dev->stats.tx_packets++;
+
+ rtl8169_unmap_tx_skb(tp->pci_dev, tx_skb, tp->TxDescArray + entry);
+
+ if (status & LastFrag) {
+ dev_kfree_skb_irq(tx_skb->skb);
+ tx_skb->skb = NULL;
+ }
+ dirty_tx++;
+ tx_left--;
+ }
+
+ if (tp->dirty_tx != dirty_tx) {
+ tp->dirty_tx = dirty_tx;
+ smp_wmb();
+ if (netif_queue_stopped(dev) &&
+ (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) {
+ netif_wake_queue(dev);
+ }
+ /*
+ * 8168 hack: TxPoll requests are lost when the Tx packets are
+ * too close. Let's kick an extra TxPoll request when a burst
+ * of start_xmit activity is detected (if it is not detected,
+ * it is slow enough). -- FR
+ */
+ smp_rmb();
+ if (tp->cur_tx != dirty_tx)
+ RTL_W8(TxPoll, NPQ);
+ }
+}
+
+static inline int rtl8169_fragmented_frame(u32 status)
+{
+ return (status & (FirstFrag | LastFrag)) != (FirstFrag | LastFrag);
+}
+
+static inline void rtl8169_rx_csum(struct sk_buff *skb, struct RxDesc *desc)
+{
+ u32 opts1 = le32_to_cpu(desc->opts1);
+ u32 status = opts1 & RxProtoMask;
+
+ if (((status == RxProtoTCP) && !(opts1 & TCPFail)) ||
+ ((status == RxProtoUDP) && !(opts1 & UDPFail)) ||
+ ((status == RxProtoIP) && !(opts1 & IPFail)))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
+}
+
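+/*
+ * Packets smaller than rx_copybreak are copied into a freshly allocated
+ * skb so that the original receive buffer can stay mapped and be reused;
+ * larger packets are handed upwards in the original buffer.
+ */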
+static inline bool rtl8169_try_rx_copy(struct sk_buff **sk_buff,
+ struct rtl8169_private *tp, int pkt_size,
+ dma_addr_t addr)
+{
+ struct sk_buff *skb;
+ bool done = false;
+
+ if (pkt_size >= rx_copybreak)
+ goto out;
+
+ skb = netdev_alloc_skb(tp->dev, pkt_size + NET_IP_ALIGN);
+ if (!skb)
+ goto out;
+
+ pci_dma_sync_single_for_cpu(tp->pci_dev, addr, pkt_size,
+ PCI_DMA_FROMDEVICE);
+ skb_reserve(skb, NET_IP_ALIGN);
+ skb_copy_from_linear_data(*sk_buff, skb->data, pkt_size);
+ *sk_buff = skb;
+ done = true;
+out:
+ return done;
+}
+
+static int rtl8169_rx_interrupt(struct net_device *dev,
+ struct rtl8169_private *tp,
+ void __iomem *ioaddr, u32 budget)
+{
+ unsigned int cur_rx, rx_left;
+ unsigned int delta, count;
+
+ cur_rx = tp->cur_rx;
+ rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;
+ rx_left = min(rx_left, budget);
+
+ for (; rx_left > 0; rx_left--, cur_rx++) {
+ unsigned int entry = cur_rx % NUM_RX_DESC;
+ struct RxDesc *desc = tp->RxDescArray + entry;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(desc->opts1);
+
+ if (status & DescOwn)
+ break;
+ if (unlikely(status & RxRES)) {
+ if (netif_msg_rx_err(tp)) {
+ printk(KERN_INFO
+ "%s: Rx ERROR. status = %08x\n",
+ dev->name, status);
+ }
+ dev->stats.rx_errors++;
+ if (status & (RxRWT | RxRUNT))
+ dev->stats.rx_length_errors++;
+ if (status & RxCRC)
+ dev->stats.rx_crc_errors++;
+ if (status & RxFOVF) {
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+ dev->stats.rx_fifo_errors++;
+ }
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ struct sk_buff *skb = tp->Rx_skbuff[entry];
+ dma_addr_t addr = le64_to_cpu(desc->addr);
+ int pkt_size = (status & 0x00001FFF) - 4;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ /*
+			 * The driver does not support incoming fragmented frames.
+			 * They are seen as a symptom of over-MTU-sized frames.
+ */
+ if (unlikely(rtl8169_fragmented_frame(status))) {
+ dev->stats.rx_dropped++;
+ dev->stats.rx_length_errors++;
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ continue;
+ }
+
+ rtl8169_rx_csum(skb, desc);
+
+ if (rtl8169_try_rx_copy(&skb, tp, pkt_size, addr)) {
+ pci_dma_sync_single_for_device(pdev, addr,
+ pkt_size, PCI_DMA_FROMDEVICE);
+ rtl8169_mark_to_asic(desc, tp->rx_buf_sz);
+ } else {
+ pci_unmap_single(pdev, addr, tp->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ tp->Rx_skbuff[entry] = NULL;
+ }
+
+ skb_put(skb, pkt_size);
+ skb->protocol = eth_type_trans(skb, dev);
+
+ if (rtl8169_rx_vlan_skb(tp, desc, skb) < 0)
+ netif_receive_skb(skb);
+
+ dev->last_rx = jiffies;
+ dev->stats.rx_bytes += pkt_size;
+ dev->stats.rx_packets++;
+ }
+
+		/* Workaround for AMD platforms. */
+ if ((desc->opts2 & cpu_to_le32(0xfffe000)) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_05)) {
+ desc->opts2 = 0;
+ cur_rx++;
+ }
+ }
+
+ count = cur_rx - tp->cur_rx;
+ tp->cur_rx = cur_rx;
+
+ delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);
+ if (!delta && count && netif_msg_intr(tp))
+ printk(KERN_INFO "%s: no Rx buffer allocated\n", dev->name);
+ tp->dirty_rx += delta;
+
+ /*
+ * FIXME: until there is periodic timer to try and refill the ring,
+ * a temporary shortage may definitely kill the Rx process.
+ * - disable the asic to try and avoid an overflow and kick it again
+ * after refill ?
+	 * - how do other drivers handle this condition (Uh oh...).
+ */
+ if ((tp->dirty_rx + NUM_RX_DESC == tp->cur_rx) && netif_msg_intr(tp))
+ printk(KERN_EMERG "%s: Rx buffers exhausted\n", dev->name);
+
+ return count;
+}
+
+static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+{
+ struct net_device *dev = dev_instance;
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ int handled = 0;
+ int status;
+
+ status = RTL_R16(IntrStatus);
+
+ /* hotplug/major error/no more work/shared irq */
+ if ((status == 0xffff) || !status)
+ goto out;
+
+ handled = 1;
+
+ if (unlikely(!netif_running(dev))) {
+ rtl8169_asic_down(ioaddr);
+ goto out;
+ }
+
+ status &= tp->intr_mask;
+ RTL_W16(IntrStatus,
+ (status & RxFIFOOver) ? (status | RxOverflow) : status);
+
+ if (!(status & tp->intr_event))
+ goto out;
+
+	/* Workaround for Rx FIFO overflow. */
+ if (unlikely(status & RxFIFOOver) &&
+ (tp->mac_version == RTL_GIGA_MAC_VER_11)) {
+ netif_stop_queue(dev);
+ rtl8169_tx_timeout(dev);
+ goto out;
+ }
+
+ if (unlikely(status & SYSErr)) {
+ rtl8169_pcierr_interrupt(dev);
+ goto out;
+ }
+
+ if (status & LinkChg)
+ rtl8169_check_link_status(dev, tp, ioaddr);
+
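+	/*
+	 * Mask the events handled by NAPI and schedule the poll routine;
+	 * rtl8169_poll() restores the full mask once it finishes under
+	 * budget.
+	 */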
+ if (status & tp->napi_event) {
+ RTL_W16(IntrMask, tp->intr_event & ~tp->napi_event);
+ tp->intr_mask = ~tp->napi_event;
+
+ if (likely(netif_rx_schedule_prep(dev, &tp->napi)))
+ __netif_rx_schedule(dev, &tp->napi);
+ else if (netif_msg_intr(tp)) {
+ printk(KERN_INFO "%s: interrupt %04x in poll\n",
+ dev->name, status);
+ }
+ }
+out:
+ return IRQ_RETVAL(handled);
+}
+
+static int rtl8169_poll(struct napi_struct *napi, int budget)
+{
+ struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
+ struct net_device *dev = tp->dev;
+ void __iomem *ioaddr = tp->mmio_addr;
+ int work_done;
+
+ work_done = rtl8169_rx_interrupt(dev, tp, ioaddr, (u32) budget);
+ rtl8169_tx_interrupt(dev, tp, ioaddr);
+
+ if (work_done < budget) {
+ netif_rx_complete(dev, napi);
+ tp->intr_mask = 0xffff;
+ /*
+ * 20040426: the barrier is not strictly required but the
+ * behavior of the irq handler could be less predictable
+ * without it. Btw, the lack of flush for the posted pci
+ * write is safe - FR
+ */
+ smp_wmb();
+ RTL_W16(IntrMask, tp->intr_event);
+ }
+
+ return work_done;
+}
+
+static void rtl8169_rx_missed(struct net_device *dev, void __iomem *ioaddr)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+
+ if (tp->mac_version > RTL_GIGA_MAC_VER_06)
+ return;
+
+ dev->stats.rx_missed_errors += (RTL_R32(RxMissed) & 0xffffff);
+ RTL_W32(RxMissed, 0);
+}
+
+static void rtl8169_down(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned int intrmask;
+
+ rtl8169_delete_timer(dev);
+
+ netif_stop_queue(dev);
+
+ napi_disable(&tp->napi);
+
+core_down:
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ rtl8169_rx_missed(dev, ioaddr);
+
+ spin_unlock_irq(&tp->lock);
+
+ synchronize_irq(dev->irq);
+
+ /* Give a racing hard_start_xmit a few cycles to complete. */
+ synchronize_sched(); /* FIXME: should this be synchronize_irq()? */
+
+ /*
+	 * And now for the $50k question: are IRQs disabled or not?
+	 *
+	 * Two paths lead here:
+	 * 1) dev->close
+	 *    -> netif_running() is available to sync the current code and the
+	 *       IRQ handler. See rtl8169_interrupt for details.
+	 * 2) dev->change_mtu
+	 *    -> rtl8169_poll can not be scheduled again to re-enable the
+	 *       interrupts. Let's simply issue the IRQ down sequence again.
+	 *
+	 * No loop if hotplugged or major error (0xffff).
+ */
+ intrmask = RTL_R16(IntrMask);
+ if (intrmask && (intrmask != 0xffff))
+ goto core_down;
+
+ rtl8169_tx_clear(tp);
+
+ rtl8169_rx_clear(tp);
+}
+
+static int rtl8169_close(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ struct pci_dev *pdev = tp->pci_dev;
+
+ rtl8169_down(dev);
+
+ free_irq(dev->irq, dev);
+
+ pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+ pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
+ tp->TxPhyAddr);
+ tp->TxDescArray = NULL;
+ tp->RxDescArray = NULL;
+
+ return 0;
+}
+
+static void rtl_set_rx_mode(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+ u32 mc_filter[2]; /* Multicast hash filter */
+ int rx_mode;
+ u32 tmp = 0;
+
+ if (dev->flags & IFF_PROMISC) {
+ /* Unconditionally log net taps. */
+ if (netif_msg_link(tp)) {
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n",
+ dev->name);
+ }
+ rx_mode =
+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys |
+ AcceptAllPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else if ((dev->mc_count > multicast_filter_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to filter perfectly -- accept all multicasts. */
+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else {
+ struct dev_mc_list *mclist;
+ unsigned int i;
+
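+		/*
+		 * Hash each multicast address with the Ethernet CRC and set
+		 * the matching bit in the 64-bit multicast hash filter.
+		 */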
+ rx_mode = AcceptBroadcast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0;
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
+ i++, mclist = mclist->next) {
+ int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
+ mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
+ rx_mode |= AcceptMulticast;
+ }
+ }
+
+ spin_lock_irqsave(&tp->lock, flags);
+
+ tmp = rtl8169_rx_config | rx_mode |
+ (RTL_R32(RxConfig) & rtl_chip_info[tp->chipset].RxConfigMask);
+
+ if (tp->mac_version > RTL_GIGA_MAC_VER_06) {
+ u32 data = mc_filter[0];
+
+ mc_filter[0] = swab32(mc_filter[1]);
+ mc_filter[1] = swab32(data);
+ }
+
+ RTL_W32(MAR0 + 0, mc_filter[0]);
+ RTL_W32(MAR0 + 4, mc_filter[1]);
+
+ RTL_W32(RxConfig, tmp);
+
+ spin_unlock_irqrestore(&tp->lock, flags);
+}
+
+/**
+ * rtl8169_get_stats - Get rtl8169 read/write statistics
+ * @dev: The Ethernet Device to get statistics for
+ *
+ * Get TX/RX statistics for rtl8169
+ */
+static struct net_device_stats *rtl8169_get_stats(struct net_device *dev)
+{
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+ unsigned long flags;
+
+ if (netif_running(dev)) {
+ spin_lock_irqsave(&tp->lock, flags);
+ rtl8169_rx_missed(dev, ioaddr);
+ spin_unlock_irqrestore(&tp->lock, flags);
+ }
+
+ return &dev->stats;
+}
+
+#ifdef CONFIG_PM
+
+static int rtl8169_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct rtl8169_private *tp = netdev_priv(dev);
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ if (!netif_running(dev))
+ goto out_pci_suspend;
+
+ netif_device_detach(dev);
+ netif_stop_queue(dev);
+
+ spin_lock_irq(&tp->lock);
+
+ rtl8169_asic_down(ioaddr);
+
+ rtl8169_rx_missed(dev, ioaddr);
+
+ spin_unlock_irq(&tp->lock);
+
+out_pci_suspend:
+ pci_save_state(pdev);
+ pci_enable_wake(pdev, pci_choose_state(pdev, state),
+ (tp->features & RTL_FEATURE_WOL) ? 1 : 0);
+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
+
+ return 0;
+}
+
+static int rtl8169_resume(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ pci_enable_wake(pdev, PCI_D0, 0);
+
+ if (!netif_running(dev))
+ goto out;
+
+ netif_device_attach(dev);
+
+ rtl8169_schedule_work(dev, rtl8169_reset_task);
+out:
+ return 0;
+}
+
+static void rtl_shutdown(struct pci_dev *pdev)
+{
+ rtl8169_suspend(pdev, PMSG_SUSPEND);
+}
+
+#endif /* CONFIG_PM */
+
+static struct pci_driver rtl8169_pci_driver = {
+ .name = MODULENAME,
+ .id_table = rtl8169_pci_tbl,
+ .probe = rtl8169_init_one,
+ .remove = __devexit_p(rtl8169_remove_one),
+#ifdef CONFIG_PM
+ .suspend = rtl8169_suspend,
+ .resume = rtl8169_resume,
+ .shutdown = rtl_shutdown,
+#endif
+};
+
+static int __init rtl8169_init_module(void)
+{
+ return pci_register_driver(&rtl8169_pci_driver);
+}
+
+static void __exit rtl8169_cleanup_module(void)
+{
+ pci_unregister_driver(&rtl8169_pci_driver);
+}
+
+module_init(rtl8169_init_module);
+module_exit(rtl8169_cleanup_module);
--- a/documentation/Makefile Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/Makefile Wed Jan 13 00:04:47 2010 +0100
@@ -42,18 +42,31 @@
$(shell $(subst $(EXT_PREFIX),$(ETHERCAT_HELP) ,$@) > $@)
pdf: $(EXT_FILES)
+ $(MAKE) -C images
+ $(MAKE) -C graphs
pdflatex $(LATEX_OPTIONS) $(FILE)
index:
makeindex $(FILE)
- makeindex $(FILE).glo -s nomencl.ist -o $(FILE).gls
+ makeindex $(FILE).nlo -s nomencl.ist -o $(FILE).nls
clean:
- @rm -f $(FILE).aux $(FILE).dvi $(FILE).idx \
- $(FILE).ilg $(FILE).ind $(FILE).log \
- $(FILE).out $(FILE).pdf $(FILE).ps \
- $(FILE).toc $(FILE).lot $(FILE).lof \
- $(FILE).lol $(FILE).glo $(FILE).gls \
- images/*.bak *~
+ @rm -f \
+ $(FILE).aux \
+ $(FILE).dvi \
+ $(FILE).idx \
+ $(FILE).ilg \
+ $(FILE).ind \
+ $(FILE).lof \
+ $(FILE).log \
+ $(FILE).lol \
+ $(FILE).lot \
+ $(FILE).nlo \
+ $(FILE).nls \
+ $(FILE).out \
+ $(FILE).pdf \
+ $(FILE).toc \
+ *~ \
+ images/*.bak
#------------------------------------------------------------------------------
--- a/documentation/ethercat_doc.tex Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/ethercat_doc.tex Wed Jan 13 00:04:47 2010 +0100
@@ -4,61 +4,10 @@
%
% $Id$
%
-% vi: spell spelllang=en
+% vi: spell spelllang=en tw=78
%
%------------------------------------------------------------------------------
-%
-% Conventions
-% The IgH EtherCAT Master
-% Feature Summary
-% License
-% Architecture
-% Phases
-% Behavior (Scanning) TODO
-% Application Interface
-% Interface version
-% Master Requesting and Releasing
-% Master Locking
-% Slave configuration
-% Configuring Pdo assignment and mapping
-% Domains (memory)
-% Pdo entry registration
-% Sdo configuration
-% Sdo access
-% Cyclic operation
-% Ethernet Devices
-% Device Interface
-% Device Modules
-% Network Driver Basics
-% EtherCAT Network Drivers
-% Device Selection
-% The Device Interface
-% Patching Network Drivers
-% The Master's State Machines
-% Master
-% Slave scanning
-% SII
-% Pdo assign/mapping
-% Slave configuration
-% State change
-% Pdo assign/mapping
-% CoE upload/download/information
-% Mailbox Protocol Implementations
-% Ethernet-over-EtherCAT (EoE)
-% CANopen-over-EtherCAT (CoE)
-% User Space
-% The ethercat command
-% System Integration
-% The EtherCAT Init Script
-% The EtherCAT Sysconfig File
-% Monitoring and Debugging
-% Installation
-% Example applications
-% Bibliography
-% Glossary
-%
-
\documentclass[a4paper,12pt,BCOR6mm,bibtotoc,idxtotoc]{scrbook}
\usepackage[latin1]{inputenc}
@@ -68,10 +17,11 @@
\usepackage[refpage]{nomencl}
\usepackage{listings}
\usepackage{svn}
-\usepackage{textcomp}
-\usepackage{url}
\usepackage{SIunits}
-\usepackage[pdfpagelabels,plainpages=false]{hyperref}
+\usepackage{hyperref}
+
+\hypersetup{pdfpagelabels,plainpages=false}
+\hypersetup{linkcolor=blue,colorlinks=true,urlcolor=blue}
\setlength{\parskip}{0.8ex plus 0.8ex minus 0.5ex}
\setlength{\parindent}{0mm}
@@ -133,14 +83,13 @@
{\Huge\bf IgH \includegraphics[height=2.4ex]{images/ethercat}
Master \masterversion\\[1ex]
- Documentation}
+ Preliminary Documentation}
\vspace{1ex}
\rule{\textwidth}{1.5mm}
- \vspace{\fill}
- {\Large Florian Pose, \url{fp@igh-essen.com}\\[1ex]
- Ingenieurgemeinschaft \IgH}
+ \vspace{\fill} {\Large Dipl.-Ing. (FH) Florian Pose,
+ \url{fp@igh-essen.com}\\[1ex] Ingenieurgemeinschaft \IgH}
\vspace{\fill}
{\Large Essen, \SVNDate\\[1ex]
@@ -153,7 +102,7 @@
\tableofcontents
\listoftables
\listoffigures
-\lstlistoflistings
+%\lstlistoflistings
%------------------------------------------------------------------------------
@@ -218,119 +167,146 @@
\begin{itemize}
-\item Designed as a kernel module for Linux 2.6.
-
-\item Implemented according to IEC 61158-12 \cite{dlspec} \cite{alspec}.
-
-\item Comes with EtherCAT-capable drivers for several common Ethernet devices.
+\item EtherCAT master implementation conforming to IEC/PAS 62407 \cite{dlspec}
+\cite{alspec}.
\begin{itemize}
+ \item Runs as kernel module for Linux 2.6.
+
+ \item Multiple masters possible on one machine.
+
+ \end{itemize}
+
+\item EtherCAT-capable versions of standard Linux drivers for wide-spread
+Ethernet devices.
+
+ \begin{itemize}
+
\item The Ethernet hardware is operated without interrupts.
\item Drivers for additional Ethernet hardware can easily be implemented
- using the common device interface (see section~\ref{sec:ecdev}) provided by
- the master module.
+ using the common device interface (see sec.~\ref{sec:ecdev}) provided by the
+ master module.
+
+ \item Operation possible with any device supported by the standard drivers,
+ including PCMCIA devices.
\end{itemize}
-\item The master module supports multiple EtherCAT masters running in
-parallel.
-
\item The master code supports any Linux realtime extension through its
independent architecture.
\begin{itemize}
- \item RTAI\nomenclature{RTAI}{Realtime Application Interface},
- ADEOS\nomenclature{ADEOS}{Adaptive Domain Environment for Operating
- Systems}, etc.
-
- \item It runs well even without realtime extensions.
+ \item RTAI\nomenclature{RTAI}{Realtime Application Interface}, Xenomai, etc.
+
+ \item Operation possible without any realtime extension at all.
\end{itemize}
-\item Common ``realtime interface'' for applications, that want to use
-EtherCAT functionality (see section~\ref{sec:ecrt}).
+\item Common ``Application Interface'' for kernel-space realtime applications
+(see chap.~\ref{chap:api}).
+
+ \begin{itemize}
+
+ \item Requesting and releasing masters.
+
+ \item Dynamic slave configuration, even for slaves that are offline.
+
+ \item Detailed configuration of the slaves' PDOs and SDOs.
+
+ \item Creation of process data domains (see below). Registration of PDO
+ entries for exchange within a domain.
+
+ \item Monitoring the states of masters, slave configurations and domains.
+
+ \item SDO handlers for application-triggered CoE transfers (see below).
+
+ \item Avoidance of unnecessary copy operations for process data.
+
+ \end{itemize}
\item \textit{Domains} are introduced, to allow grouping of process
data transfers with different slave groups and task periods.
\begin{itemize}
- \item Handling of multiple domains with different task periods.
+ \item Management of PDO groups with different sample rates.
\item Automatic calculation of process data mapping, FMMU and sync manager
- configuration within each domain.
+ configuration within the domains.
+
+ \item Process data exchange can be monitored via a per-domain mechanism.
\end{itemize}
-\item Communication through several finite state machines.
+\item Master finite state machine (FSM).
\begin{itemize}
- \item Automatic bus scanning after topology changes.
-
- \item Bus monitoring during operation.
-
- \item Automatic reconfiguration of slaves (for example after power failure)
- during operation.
+ \item The same state machine runs both in idle mode and in realtime
+ operation.
+
+ \item Bus monitoring: Slave states are read cyclically. Automatic scanning
+ of the bus after a topology change.
+
+ \item Automatic configuration of slaves, if an application-layer state
+ change is requested.
\end{itemize}
-\item CANopen-over-EtherCAT (CoE)
+\item Implementation of the CANopen over EtherCAT (CoE) mailbox protocol.
\begin{itemize}
- \item Sdo upload, download and information service.
-
- \item Slave configuration via Sdos.
-
- \item Sdo access from user-space and from the application.
+ \item Configuration of CoE-capable slaves.
+
+ \item SDO information service (dictionary listing).
+
+ \item SDO transfers both via the application interface and the command-line tool.
\end{itemize}
-\item Ethernet-over-EtherCAT (EoE)
+\item Implementation of the Ethernet over EtherCAT (EoE) mailbox protocol.
\begin{itemize}
- \item Transparent use of EoE slaves via virtual network interfaces.
-
- \item Natively supports either a switched or a routed EoE network
- architecture.
+ \item Virtual network interface for any EoE-capable slave.
+
+ \item Both a switched and a routed EoE network architecture is natively
+ supported and configurable with standard tools.
\end{itemize}
-\item User space command-line-tool ``ethercat`` (see
-section~\ref{sec:ethercat})
+\item Userspace command-line-tool ``ethercat'' (see sec.~\ref{sec:tool})
\begin{itemize}
- \item Showing the current bus with slaves, Pdos and Sdos.
- \item Showing the bus configuration.
- \item Showing domains and process data.
- \item Setting the master's debug level.
- \item Writing alias addresses.
- \item Sdo uploading/downloading.
- \item Reading/writing a slave's SII.
- \item Setting slave states.
- \item Generate slave description XML.
+ \item Detailed information about master, slaves, domains and bus
+ configuration.
+ \item Reading/Writing alias addresses.
+ \item Listing slave configurations.
+ \item Viewing process data.
+ \item SDO download/upload; listing SDO dictionaries.
+ \item Slave SII (EEPROM) access.
+ \item Controlling application-layer states.
+ \item Generation of slave description XML from existing slaves.
\end{itemize}
-\item Seamless system integration though LSB\nomenclature{LSB}{Linux
- Standard Base} compliance.
+\item Seamless integration in any GNU/Linux distribution.
\begin{itemize}
- \item Master and network device configuration via sysconfig files.
-
- \item Init script for master control.
+ \item ``Linux Standard Base''-compatible init script for master control.
+ \item Master and Ethernet device configuration via sysconfig file.
\end{itemize}
-\item Virtual read-only network interface for monitoring and debugging
- purposes.
+\item Virtual read-only network interface for debugging and traffic monitoring
+purposes (using Wireshark \cite{wireshark}, etc.). No additional hardware
+necessary.
\end{itemize}
@@ -339,10 +315,10 @@
\section{License}
\label{sec:license}
-The master code is released under the terms and conditions of the GNU
-General Public License\index{GPL} \cite{gpl} (version 2). Other
-developers, that want to use EtherCAT with Linux systems, are invited
-to use the master code or even participate on development.
+The master code is released under the terms and conditions of the GNU General
+Public License (GPL \cite{gpl})\index{GPL}, version 2. Other developers who
+want to use EtherCAT with Linux systems are invited to use the master code or
+even to participate in its development.
%------------------------------------------------------------------------------
@@ -355,17 +331,17 @@
\begin{itemize}
-\item Kernel code has significantly better realtime characteristics, i.~e.
-less latency than user space code. It was foreseeable, that a fieldbus master
+\item Kernel code has significantly better realtime characteristics, i.\,e.\
+less latency than userspace code. It was foreseeable that a fieldbus master
has a lot of cyclic work to do. Cyclic work is usually triggered by timer
interrupts inside the kernel. The execution delay of a function that processes
-timer interrupts is less, when it resides in kernel space, because there is no
-need of time-consuming context switches to a user space process.
+timer interrupts is lower when it resides in kernelspace, because there is no
+need for time-consuming context switches to a userspace process.
\item It was also foreseeable, that the master code has to directly
communicate with the Ethernet hardware. This has to be done in the kernel
anyway (through network device drivers), which is one more reason for the
-master code being in kernel space.
+master code being in kernelspace.
\end{itemize}
@@ -374,47 +350,115 @@
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\textwidth]{images/architecture}
- \caption{Master architecture}
+ \caption{Master Architecture}
\label{fig:arch}
\end{figure}
-\paragraph{Master Module}
+The components of the master environment are described below:
+
+\begin{description}
+
+\item[Master Module]\index{Master Module} Kernel module containing one or more
+EtherCAT master instances (see sec.~\ref{sec:mastermod}), the ``Device
+Interface'' (see sec.~\ref{sec:ecdev}) and the ``Application Interface'' (see
+chap.~\ref{chap:api}).
+
+\item[Device Modules]\index{Device modules} EtherCAT-capable Ethernet device
+driver modules that offer their devices to the EtherCAT
+master via the device interface (see sec.~\ref{sec:ecdev}). These modified
+network drivers can handle network devices used for EtherCAT operation and
+``normal'' Ethernet devices in parallel. A master can accept a certain device
+and then is able to send and receive EtherCAT frames. Ethernet devices
+declined by the master module are connected to the kernel's network stack as
+usual.
+
+\item[Application Modules]\index{Application} Kernel modules that use the
+EtherCAT master (usually for cyclic exchange of process data with EtherCAT
+slaves). These modules are not part of the EtherCAT master
+code\footnote{Although there are some examples provided in the
+\textit{examples/} directory.}, but have to be generated or written by the
+user. An application module can ``request'' a master through the application
+interface (see chap.~\ref{chap:api}). If this succeeds, it has control over
+the master: It can provide a bus configuration and exchange process data.
+
+\end{description}
+
+%------------------------------------------------------------------------------
+
+\section{Master Module}
+\label{sec:mastermod}
\index{Master module}
-Kernel module containing one or more EtherCAT master instances (see
-section~\ref{sec:mastermod}), the ``Device Interface'' (see
-section~\ref{sec:ecdev}) and the ``Realtime Interface'' (see
-section~\ref{sec:ecrt}).
-
-\paragraph{Device Modules}
-\index{Device modules}
-
-EtherCAT-capable Ethernet device driver modules\index{Device modules}, that
-offer their devices to the EtherCAT master via the device interface (see
-section~\ref{sec:ecdev}). These modified network drivers can handle network
-devices used for EtherCAT operation and ``normal'' Ethernet devices in
-parallel. A master can accept a certain device and then is able to send and
-receive EtherCAT frames. Ethernet devices declined by the master module are
-connected to the kernel's network stack as usual.
-
-\paragraph{Application Modules}
-\index{Application module}
-
-Kernel modules, that use the EtherCAT master (usually for cyclic exchange of
-process data with EtherCAT slaves). These modules are not part of the EtherCAT
-master code\footnote{Although there are some examples provided in the
-\textit{examples} directory, see chapter~\ref{chapter:examples}}, but have to
-be generated or written by the user. An application module can ``request'' a
-master through the realtime interface (see section~\ref{sec:ecrt}). If this
-succeeds, the module has the control over the master: It can provide a bus
-configuration and exchange process data.
-
-%------------------------------------------------------------------------------
-
-\section{Phases}
+The EtherCAT master kernel module \textit{ec\_master} can contain multiple
+master instances. Each master waits for a certain Ethernet device identified
+by its MAC address\index{MAC address}. These addresses have to be specified on
+module loading via the \textit{main\_devices} module parameter. The number of
+master instances to initialize is taken from the number of MAC addresses
+given.
+
+The command below loads the master module with a single master instance that
+waits for the Ethernet device with the MAC address
+\lstinline+00:0E:0C:DA:A2:20+. The master will be accessible via index $0$.
+
+\begin{lstlisting}
+# `\textbf{modprobe ec\_master main\_devices=00:0E:0C:DA:A2:20}`
+\end{lstlisting}
+
+MAC addresses for multiple masters have to be separated by commas:
+
+\begin{lstlisting}
+# `\textbf{modprobe ec\_master main\_devices=00:0E:0C:DA:A2:20,00:e0:81:71:d5:1c}`
+\end{lstlisting}
+
+The two masters can be addressed by their indices 0 and 1 respectively (see
+figure~\ref{fig:masters}). The master index is needed for the
+\lstinline+ecrt_master_request()+ function of the application interface (see
+chap.~\ref{chap:api}) and the \lstinline+--master+ option of the
+\textit{ethercat} command-line tool (see sec.~\ref{sec:tool}), which defaults
+to $0$.
+
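+For example, the second master of the above configuration could be queried
+with the command-line tool as follows (see sec.~\ref{sec:tool}; the
+\lstinline+slaves+ command is only one example):
+
+\begin{lstlisting}
+# `\textbf{ethercat slaves --master 1}`
+\end{lstlisting}
+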
+\begin{figure}[htbp]
+ \centering
+ \includegraphics[width=.5\textwidth]{images/masters}
+ \caption{Multiple masters in one module}
+ \label{fig:masters}
+\end{figure}
+
+\paragraph{Init Script}
+\index{Init script}
+
+In most cases it is not necessary to load the master module and the Ethernet
+driver modules manually. There is an init script available, so the master can
+be started as a service (see sec.~\ref{sec:system}).
+
+\paragraph{Syslog}
+
+The master module outputs information about its state and events to the kernel
+ring buffer. These also end up in the system logs. The above module loading
+command should result in the messages below:
+
+\begin{lstlisting}
+# `\textbf{dmesg | tail -2}`
+EtherCAT: Master driver `\masterversion`
+EtherCAT: 2 masters waiting for devices.
+
+# `\textbf{tail -2 /var/log/messages}`
+Jul 4 10:22:45 ethercat kernel: EtherCAT: Master driver `\masterversion`
+Jul 4 10:22:45 ethercat kernel: EtherCAT: 2 masters waiting
+ for devices.
+\end{lstlisting}
+
+All EtherCAT master output is prefixed with \lstinline+EtherCAT+ which makes
+searching the logs easier.
+
+%------------------------------------------------------------------------------
+
+\section{Master Phases}
\index{Master phases}
-The EtherCAT master runs through several phases (see fig.~\ref{fig:phases}):
+Every EtherCAT master provided by the master module (see
+sec.~\ref{sec:mastermod}) runs through several phases (see
+fig.~\ref{fig:phases}):
\begin{figure}[htbp]
\centering
@@ -422,6 +466,7 @@
\caption{Master phases and transitions}
\label{fig:phases}
\end{figure}
+
\begin{description}
\item[Orphaned phase]\index{Orphaned phase} This mode takes effect, when the
@@ -430,11 +475,11 @@
\item[Idle phase]\index{Idle phase} takes effect when the master has accepted
an Ethernet device, but is not requested by any application yet. The master
-runs its state machine (see section~\ref{sec:fsm-master}), that automatically
-scans the bus for slaves and executes pending operations from the user space
-interface (for example Sdo access). The command-line tool can be used to access
-the bus, but there is no process data exchange because of the missing bus
-configuration.
+runs its state machine (see sec.~\ref{sec:fsm-master}), which automatically
+scans the bus for slaves and executes pending operations from the userspace
+interface (for example SDO access). The command-line tool can be used to
+access the bus, but there is no process data exchange because of the missing
+bus configuration.
\item[Operation phase]\index{Operation phase} The master is requested by an
application that can provide a bus configuration and exchange process data.
@@ -443,177 +488,109 @@
%------------------------------------------------------------------------------
-\section{General behavior} % FIXME
-\index{Master behavior}
-
-\ldots
-
-%------------------------------------------------------------------------------
-
-\section{Master Module}
-\label{sec:mastermodule}
-\index{Master module}
-
-The EtherCAT master kernel module \textit{ec\_master} can contain multiple
-master instances. Each master waits for a certain Ethernet device identified
-by its MAC address\index{MAC address}. These addresses have to be specified on
-module loading via the \textit{main\_devices} module parameter. The number of
-master instances to initialize is taken from the number of MAC addresses
-given.
-
-The below command loads the master module with a single master instance that
-waits for the Ethernet device with the MAC address
-\lstinline+00:0E:0C:DA:A2:20+. The master will be accessible via index $0$.
-
-\begin{lstlisting}
-# `\textbf{modprobe ec\_master main\_devices=00:0E:0C:DA:A2:20}`
-\end{lstlisting}
-
-MAC addresses for multiple masters have to be separated by commas:
-
-\begin{lstlisting}
-# `\textbf{modprobe ec\_master main\_devices=00:0E:0C:DA:A2:20,00:e0:81:71:d5:1c}`
-\end{lstlisting}
-
-The two masters can be addressed by their indices 0 and 1 respectively (see
-figure~\ref{fig:masters}). The master index is needed for the
-\lstinline+ecrt_master_request()+ function of the realtime interface (see
-section~\ref{sec:ecrt}) and the \lstinline+--master+ option of the
-\textit{ethercat} command-line tool (see section~\ref{sec:ethercat}), which
-defaults to $0$.
-
-\begin{figure}[htbp]
- \centering
- \includegraphics[width=.5\textwidth]{images/masters}
- \caption{Multiple masters in one module}
- \label{fig:masters}
-\end{figure}
-
-\paragraph{Init script}
-\index{Init script}
-
-Most probably you won't want to load the master module and the Ethernet driver
-modules manually, but start the master as a service. See
-section~\ref{sec:system} on how to do this.
-
-\paragraph{Syslog}
-
-The master module outputs information about it's state and events to the
-kernel ring buffer. These also end up in the system logs. The above module
-loading command should result in the messages below:
-
-\begin{lstlisting}
-# `\textbf{dmesg | tail -2}`
-EtherCAT: Master driver `\masterversion`
-EtherCAT: 2 masters waiting for devices.
-
-# `\textbf{tail -2 /var/log/messages}`
-Jul 4 10:22:45 ethercat kernel: EtherCAT: Master driver `\masterversion`
-Jul 4 10:22:45 ethercat kernel: EtherCAT: 2 masters waiting
- for devices.
-\end{lstlisting}
-
-All EtherCAT master output is prefixed with \lstinline+EtherCAT+ which makes
-searching the logs easier.
-
-%------------------------------------------------------------------------------
-
-\section{Handling of Process Data} % FIXME
+\section{Process Data}
\label{sec:processdata}
-\ldots
+This section introduces a few terms and ideas about how the master handles
+process data.
\paragraph{Process Data Image}
\index{Process data}
-The slaves offer their inputs and outputs by presenting the master so-called
-``Process Data Objects'' (Pdos\index{Pdo}). The available Pdos can be
-determined by reading out the slave's TXPDO and RXPDO E$^2$PROM categories. The
-application can register the Pdos for data exchange during cyclic operation.
-The sum of all registered Pdos defines the ``process data image'', which is
-exchanged via the ``Logical ReadWrite'' datagrams introduced
-in~\cite[section~5.4.2.4]{dlspec}.
+Slaves offer their inputs and outputs by presenting so-called ``Process Data
+Objects'' (PDOs\index{PDO}) to the master. The available PDOs can be either
+determined by reading out the slave's TXPDO and RXPDO SII categories from the
+E$^2$PROM (in case of fixed PDOs) or by reading out the appropriate CoE
+objects (see sec.~\ref{sec:coe}), if available. The application can register
+the PDOs' entries for exchange during cyclic operation. The sum of all
+registered PDO entries defines the ``process data image'', which is exchanged
+via datagrams with ``logical'' memory access (like LWR, LRD or LRW) introduced
+in~\cite[sec.~5.4]{dlspec}.
\paragraph{Process Data Domains}
\index{Domain}
The process data image can be easily managed by creating so-called
-``domains'', which group Pdos and allocate the datagrams needed to
-exchange them. Domains are mandatory for process data exchange, so
-there has to be at least one. They were introduced for the following
-reasons:
+``domains'', which allow grouped PDO exchange. They also take care of managing
+the datagram structures needed to exchange the PDOs. Domains are mandatory for
+process data exchange, so there has to be at least one. They were introduced
+for the following reasons:
\begin{itemize}
-\item The maximum size of a ``Logical ReadWrite'' datagram is limited
- due to the limited size of an Ethernet frame: The maximum data size
- is the Ethernet data field size minus the EtherCAT frame header,
- EtherCAT datagram header and EtherCAT datagram footer: $1500 - 2 -
- 12 - 2 = 1484$ octets. If the size of the process data image exceeds
- this limit, multiple frames have to be sent, and the image has to be
- partitioned for the use of multiple datagrams. A domain manages this
- automatically.
-\item Not every Pdo has to be exchanged with the same frequency: The
- values of Pdos can vary slowly over time (for example temperature
- values), so exchanging them with a high frequency would just waste
- bus bandwidth. For this reason, multiple domains can be created, to
- group different Pdos and so allow separate exchange.
+
+\item The maximum size of a datagram is limited due to the limited size of an
+Ethernet frame: The maximum data size is the Ethernet data field size minus
+the EtherCAT frame header, EtherCAT datagram header and EtherCAT datagram
+footer: $1500 - 2 - 12 - 2 = 1484$ octets. If the size of the process data
+image exceeds this limit, multiple frames have to be sent, and the image has
+to be partitioned for the use of multiple datagrams. A domain manages this
+automatically.
+
+\item Not every PDO has to be exchanged with the same frequency: The values of
+PDOs can vary slowly over time (for example temperature values), so exchanging
+them with a high frequency would just waste bus bandwidth. For this reason,
+multiple domains can be created, to group different PDOs and so allow separate
+exchange.
+
\end{itemize}
-There is no upper limit for the number of domains, but each domain
-occupies one FMMU in each slave involved, so the maximum number of
-domains is also limited by the slaves' capabilities.
+There is no upper limit for the number of domains, but each domain occupies
+one FMMU in each slave involved, so the maximum number of domains is de facto
+limited by the slaves.
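+
+As an illustration, the following minimal sketch (assuming a master that has
+already been requested via the application interface, see
+chap.~\ref{chap:api}; the variable names are arbitrary) creates two domains,
+one for fast and one for slow process data:
+
+\begin{lstlisting}[gobble=2,language=C]
+  ec_domain_t *domain_fast, *domain_slow;
+
+  domain_fast = ecrt_master_create_domain(master); /* e.g. 1 ms data */
+  domain_slow = ecrt_master_create_domain(master); /* e.g. 100 ms data */
+  if (!domain_fast || !domain_slow)
+      goto out_error; /* domain creation failed */
+\end{lstlisting}
+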
\paragraph{FMMU Configuration}
\index{FMMU!Configuration}
-An application can register Pdos for process data exchange. Every
-Pdo is part of a memory area in the slave's physical memory, that is
-protected by a sync manager \cite[section~6.7]{dlspec} for
-synchronized access. In order to make a sync manager react on a
-datagram accessing its memory, it is necessary to access the last byte
-covered by the sync manager. Otherwise the sync manager will not react
-on the datagram and no data will be exchanged. That is why the whole
-synchronized memory area has to be included into the process data
-image: For example, if a certain Pdo of a slave is registered for
-exchange with a certain domain, one FMMU will be configured to map the
-complete sync-manager-protected memory, the Pdo resides in. If a
-second Pdo of the same slave is registered for process data exchange
-within the same domain, and this Pdo resides in the same
-sync-manager-protected memory as the first Pdo, the FMMU configuration
-is not touched, because the appropriate memory is already part of the
-domain's process data image. If the second Pdo belongs to another
-sync-manager-protected area, this complete area is also included into
-the domains process data image. See figure~\ref{fig:fmmus} for an
-overview, how FMMU's are configured to map physical memory to logical
-process data images.
+An application can register PDO entries for exchange. Every PDO entry and its
+parent PDO is part of a memory area in the slave's physical memory that is
+protected by a sync manager \cite[sec.~6.7]{dlspec} for synchronized access.
+In order to make a sync manager react to a datagram accessing its memory, it
+is necessary to access the last byte covered by the sync manager. Otherwise
+the sync manager will not react to the datagram and no data will be exchanged.
+That is why the whole synchronized memory area has to be included into the
+process data image: For example, if a certain PDO entry of a slave is
+registered for exchange with a certain domain, one FMMU will be configured to
+map the complete sync-manager-protected memory the PDO entry resides in. If a
+second PDO entry of the same slave is registered for process data exchange
+within the same domain, and it resides in the same sync-manager-protected
+memory as the first one, the FMMU configuration is not altered, because the
+desired memory is already part of the domain's process data image. If the
+second PDO entry belonged to another sync-manager-protected area, that
+complete area would also be included in the domain's process data image.
+
+Figure~\ref{fig:fmmus} gives an overview of how FMMUs are configured to map
+physical memory to logical process data images.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{images/fmmus}
- \caption{FMMU configuration for several domains}
+ \caption{FMMU Configuration}
\label{fig:fmmus}
\end{figure}
-\paragraph{Process Data Pointers} % FIXME
-
-The figure also demonstrates the way, the application can access the exchanged
-process data: At Pdo registration, the application has to provide the address
-of a process data pointer. Upon calculation of the domain image and allocation
-of process data memory, this pointer is redirected to the appropriate location
-inside the domain's process data memory and can later be easily dereferenced by
-the module code.
-
%------------------------------------------------------------------------------
\chapter{Application Interface}
-\label{sec:ecrt}
+\label{chap:api}
\index{Application interface}
+% TODO
+%
+% Interface version
+% Master Requesting and Releasing
+% Master Locking
+% Configuring PDO assignment and mapping
+% Domains (memory)
+% PDO entry registration
+% SDO configuration
+% SDO access
+
The application interface provides functions and data structures for
-applications to access and use an EtherCAT master. The complete documentation
-of the interface is included as Doxygen~\cite{doxygen} comments in the header
-file \textit{include/ecrt.h}. You can either directly view the file comments
-or generate an HTML documentation as described in section~\ref{sec:gendoc}.
+applications to access an EtherCAT master. The complete documentation of the
+interface is included as Doxygen~\cite{doxygen} comments in the header file
+\textit{include/ecrt.h}. It can either be read directly from the file
+comments, or more comfortably as generated HTML documentation. The HTML
+generation is described in sec.~\ref{sec:gendoc}.
The following sections cover a general description of the application
interface.
@@ -623,91 +600,175 @@
\begin{description}
\item[Configuration] The master is requested and the configuration is applied.
-Domains are created Slaves are configured and Pdo entries are registered (see
-section~\ref{sec:masterconfig}).
-
-\item[Operation] Cyclic code is run, process data is exchanged (see
-section~\ref{sec:cyclic}).
+For example, domains are created, slaves are configured and PDO entries are
+registered (see sec.~\ref{sec:masterconfig}).
+
+\item[Operation] Cyclic code is run and process data are exchanged (see
+sec.~\ref{sec:cyclic}).
\end{description}
+\paragraph{Example Applications}\index{Example Applications} There are a few
+example applications in the \textit{examples/} subdirectory of the master
+code. They are documented in the source code.
+
%------------------------------------------------------------------------------
\section{Master Configuration}
\label{sec:masterconfig}
-\ldots
+The bus configuration is supplied via the application interface.
+Figure~\ref{fig:app-config} gives an overview of the objects, that can be
+configured by the application.
\begin{figure}[htbp]
\centering
\includegraphics[width=.8\textwidth]{images/app-config}
- \caption{Master configuration structures}
+ \caption{Master Configuration}
\label{fig:app-config}
\end{figure}
+\subsection{Slave Configuration}
+
+The application has to tell the master about the expected bus topology. This
+can be done by creating ``slave configurations''. A slave configuration can be
+seen as an expected slave. When a slave configuration is created, the
+application provides the bus position (see below), vendor id and product code.
+
+When the bus configuration is applied, the master checks if there is a slave
+with the given vendor id and product code at the given position. If this is
+the case, the slave configuration is ``attached'' to the real slave on the bus
+and the slave is configured according to the settings provided by the
+application. The state of a slave configuration can either be queried via the
+application interface or via the command-line tool (see
+sec.~\ref{sec:ethercat-config}).
+
+\paragraph{Slave Position} The slave position has to be specified as a tuple
+of ``alias'' and ``position''. This allows addressing slaves either via an
+absolute bus position, or a stored identifier called ``alias'', or a mixture
+of both. The alias is a 16-bit value stored in the slave's E$^2$PROM. It can
+be modified via the command-line tool (see sec.~\ref{sec:ethercat-alias}).
+Table~\ref{tab:slaveposition} shows how the values are interpreted.
+
+\begin{table}[htbp]
+ \centering
+ \caption{Specifying a Slave Position}
+ \label{tab:slaveposition}
+ \vspace{2mm}
+ \begin{tabular}{c|c|p{70mm}}
+ Alias & Position & Interpretation\\
+ \hline
+
+ \lstinline+0+ & \lstinline+0+ -- \lstinline+65535+ &
+
+ Position addressing. The position parameter is interpreted as the absolute
+ ring position in the bus.\\ \hline
+
+ \lstinline+1+ -- \lstinline+65535+ & \lstinline+0+ -- \lstinline+65535+ &
+
+ Alias addressing. The position parameter is interpreted as relative
+ position after the first slave with the given alias address. \\ \hline
+
+ \end{tabular}
+\end{table}
+
+Figure~\ref{fig:attach} shows an example of how slave configurations are
+attached. Some of the configurations were attached, while others remain
+detached. The list below gives the reasons, beginning with the topmost slave
+configuration.
+
+\begin{figure}[htbp]
+ \centering
+ \includegraphics[width=.7\textwidth]{images/attach}
+ \caption{Slave Configuration Attachment}
+ \label{fig:attach}
+\end{figure}
+
+\begin{enumerate}
+
+\item A zero alias means to use simple position addressing. Slave 1 exists and
+vendor id and product code match the expected values.
+
+\item Although the slave with position 0 is found, the product code does not
+match, so the configuration is not attached.
+
+\item The alias is non-zero, so alias addressing is used. Slave 2 is the first
+slave with alias \lstinline+0x2000+. Because the position value is zero, the
+same slave is used.
+
+\item There is no slave with the given alias, so the configuration can not be
+attached.
+
+\item Slave 2 is again the first slave with the alias \lstinline+0x2000+, but
+position is now 1, so slave 3 is attached.
+
+\end{enumerate}
+
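+As an illustration, the first and the third configuration of the figure could
+be created with a minimal sketch like the one below (function names according
+to the application interface, chap.~\ref{chap:api}; \lstinline+VENDOR_ID+ and
+\lstinline+PRODUCT_CODE+ are placeholders for the expected identity values):
+
+\begin{lstlisting}[gobble=2,language=C]
+  ec_slave_config_t *sc_pos, *sc_alias;
+
+  /* position addressing: alias 0, ring position 1 */
+  sc_pos = ecrt_master_slave_config(master, 0, 1, VENDOR_ID, PRODUCT_CODE);
+
+  /* alias addressing: first slave with alias 0x2000 */
+  sc_alias = ecrt_master_slave_config(master, 0x2000, 0, VENDOR_ID,
+          PRODUCT_CODE);
+
+  if (!sc_pos || !sc_alias)
+      goto out_error; /* configuration could not be created */
+\end{lstlisting}
+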
%------------------------------------------------------------------------------
\section{Cyclic Operation}
\label{sec:cyclic}
-\ldots
-% FIXME PDOS endianess
-
-
-%------------------------------------------------------------------------------
-
-\section{Concurrent Master Access} % FIXME
+
+To enter cyclic operation mode, the master has to be ``activated'' to
+calculate the process data image and apply the bus configuration for the first
+time. After activation, the application is in charge to send and receive
+frames.
+
+% TODO
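+A minimal sketch of a cyclic function is shown below. It assumes a requested
+master, an activated bus configuration and a pointer \lstinline+domain_pd+ to
+the domain's process data memory (obtained via \lstinline+ecrt_domain_data()+
+after activation); \lstinline+off_dig_out+ stands for a process data offset
+determined at PDO entry registration time:
+
+\begin{lstlisting}[gobble=2,language=C]
+  void cyclic_task(void)
+  {
+      /* fetch received frames and dispatch datagrams to the domain */
+      ecrt_master_receive(master);
+      ecrt_domain_process(domain);
+
+      /* evaluate inputs and write outputs in the process data image */
+      EC_WRITE_U8(domain_pd + off_dig_out, 0x01);
+
+      /* queue the domain's datagrams and send them */
+      ecrt_domain_queue(domain);
+      ecrt_master_send(master);
+  }
+\end{lstlisting}
+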
+%
+% PDO endianess
+% Datagram injection
+
+%------------------------------------------------------------------------------
+
+\section{Concurrent Master Access}
\label{sec:concurr}
\index{Concurrency}
In some cases, one master is used by several instances, for example when an
-application does cyclic process data exchange, and there are EoE-capable slaves
-that require to exchange Ethernet data with the kernel (see
-section~\ref{sec:eoeimp}). For this reason, the master is a shared resource,
-and access to it has to be sequentialized. This is usually done by locking with
+application does cyclic process data exchange, and there are EoE-capable
+slaves that require to exchange Ethernet data with the kernel (see
+sec.~\ref{sec:eoe}). For this reason, the master is a shared resource, and
+access to it has to be sequentialized. This is usually done by locking with
semaphores, or other methods to protect critical sections.
The master itself can not provide locking mechanisms, because it has no chance
-to know the appropriate kind of lock. Imagine, the application uses RTAI
-functionality, then ordinary kernel semaphores would not be sufficient. For
-that, an important design decision was made: The application that reserved a
-master must have the total control, therefore it has to take responsibility for
-providing the appropriate locking mechanisms. If another instance wants to
-access the master, it has to request the master lock by callbacks, that have to
-be set by the application. Moreover the application can deny access to the
-master if it considers it to be awkward at the moment.
+to know the appropriate kind of lock. For example, if the application module
+uses RTAI functionality, ordinary kernel semaphores would not be sufficient.
+Therefore, an important design decision was made: The application that
+reserved a master must have total control, so it has to take responsibility
+for providing the appropriate locking mechanisms. If another instance wants
+to access the master, it has to request the master lock via callbacks that
+have to be set by the application. Moreover, the application can deny access
+to the master if it considers access inappropriate at the moment.
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{images/master-locks}
- \caption{Concurrent master access}
+ \caption{Concurrent Master Access}
\label{fig:locks}
\end{figure}
-Figure~\ref{fig:locks} exemplary shows, how two processes share one master: The
-application's cyclic task uses the master for process data exchange, while the
-master-internal EoE process uses it to communicate with EoE-capable slaves.
-Both have to acquire the master lock before access: The application task can
-access the lock natively, while the EoE process has to use the callbacks.
-Section~\ref{sec:concurrency} gives an example, of how to implement this.
-
-%------------------------------------------------------------------------------
-
-\chapter{Ethernet devices}
+Figure~\ref{fig:locks} shows an example of how two processes share one master:
+The application's cyclic task uses the master for process data exchange, while
+the master-internal EoE process uses it to communicate with EoE-capable
+slaves. Both have to acquire the master lock before access: The application
+task can access the lock natively, while the EoE process has to use the
+callbacks. See the application interface documentation (chap.~\ref{chap:api})
+for how to use the locking callbacks.
+
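+As an illustration only, such callbacks could wrap a lock owned by the
+application; the function names below are hypothetical, and the way the
+callbacks are registered is described in the application interface
+documentation (chap.~\ref{chap:api}):
+
+\begin{lstlisting}[gobble=2,language=C]
+  struct semaphore master_sem; /* owned by the application module */
+
+  int request_lock(void *data) /* called before foreign master access */
+  {
+      /* access could also be denied here by returning non-zero */
+      down(&master_sem);
+      return 0; /* access granted */
+  }
+
+  void release_lock(void *data) /* called after foreign master access */
+  {
+      up(&master_sem);
+  }
+\end{lstlisting}
+
+An application using a realtime extension would use the extension's own
+locking primitives (for example RTAI semaphores) instead.
+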
+%------------------------------------------------------------------------------
+
+\chapter{Ethernet Devices}
\label{sec:devices}
-The EtherCAT protocol is based on the Ethernet standard. That's why the master
-relies on standard Ethernet hardware to communicate with the bus.
+The EtherCAT protocol is based on the Ethernet standard, so a master relies on
+standard Ethernet hardware to communicate with the bus.
The term \textit{device} is used as a synonym for Ethernet network interface
hardware. There are device driver modules that handle Ethernet hardware, which
-the master can use to connect to an EtherCAT bus.
-
-Section~\ref{sec:networkdrivers} offers an overview of general Linux
-network driver modules, while section~\ref{sec:requirements} will show
-the requirements to an EtherCAT-enabled network driver. Finally,
-sections~\ref{sec:seldev} to~\ref{sec:patching} show how to fulfill
-these requirements and implement such a driver module.
+a master can use to connect to an EtherCAT bus.
%------------------------------------------------------------------------------
@@ -755,35 +816,43 @@
for received frames is set, frame data has to be copied from hardware
to kernel memory and passed to the network stack.
-\paragraph{The net\_device structure}
+\paragraph{The \lstinline+net_device+ Structure}
\index{net\_device}
-The driver registers a \textit{net\_device} structure for each device
-to communicate with the network stack and to create a ``network
-interface''. In case of an Ethernet driver, this interface appears as
-\textit{ethX}, where X is a number assigned by the kernel on
-registration. The \textit{net\_device} structure receives events
-(either from user space or from the network stack) via several
-callbacks, which have to be set before registration. Not every
-callback is mandatory, but for reasonable operation the ones below are
-needed in any case:
+The driver registers a \lstinline+net_device+ structure for each device to
+communicate with the network stack and to create a ``network interface''. In
+case of an Ethernet driver, this interface appears as \textit{ethX}, where X
+is a number assigned by the kernel on registration. The \lstinline+net_device+
+structure receives events (either from userspace or from the network stack)
+via several callbacks, which have to be set before registration. Not every
+callback is mandatory, but for reasonable operation the ones below are needed
+in any case:
+
+\newsavebox\boxopen
+\sbox\boxopen{\lstinline+open()+}
+\newsavebox\boxstop
+\sbox\boxstop{\lstinline+stop()+}
+\newsavebox\boxxmit
+\sbox\boxxmit{\lstinline+hard_start_xmit()+}
+\newsavebox\boxstats
+\sbox\boxstats{\lstinline+get_stats()+}
\begin{description}
-\item[open()] This function is called when network communication has to be
-started, for example after a command \textit{ifconfig ethX up} from user
-space. Frame reception has to be enabled by the driver.
-
-\item[stop()] The purpose of this callback is to ``close'' the device, i.~e.
-make the hardware stop receiving frames.
-
-\item[hard\_start\_xmit()] This function is cal\-led for each frame that has
-to be transmitted. The network stack passes the frame as a pointer to an
-\textit{sk\_buff} structure (``socket buffer''\index{Socket buffer}, see
+\item[\usebox\boxopen] This function is called when network communication has
+to be started, for example after a command \lstinline+ip link set ethX up+
+from userspace. Frame reception has to be enabled by the driver.
+
+\item[\usebox\boxstop] The purpose of this callback is to ``close'' the
+device, i.\,e.\ make the hardware stop receiving frames.
+
+\item[\usebox\boxxmit] This function is called for each frame that has to be
+transmitted. The network stack passes the frame as a pointer to an
+\lstinline+sk_buff+ structure (``socket buffer''\index{Socket buffer}, see
below), which has to be freed after sending.
-\item[get\_stats()] This call has to return a pointer to the device's
-\textit{net\_device\_stats} structure, which permanently has to be filled with
+\item[\usebox\boxstats] This call has to return a pointer to the device's
+\lstinline+net_device_stats+ structure, which permanently has to be filled with
frame statistics. This means, that every time a frame is received, sent, or an
error happened, the appropriate counter in this structure has to be increased.
@@ -792,18 +861,18 @@
The actual registration is done with the \lstinline+register_netdev()+ call,
unregistering is done with \lstinline+unregister_netdev()+.
-\paragraph{The netif Interface}
+\paragraph{The \lstinline+netif+ Interface}
\index{netif}
All other communication in the direction interface $\to$ network stack is done
-via the \lstinline+netif_*()+ calls. For example, on successful device
-opening, the network stack has to be notified, that it can now pass frames to
-the interface. This is done by calling \lstinline+netif_start_queue()+. After
-this call, the \lstinline+hard_start_xmit()+ callback can be called by the
-network stack. Furthermore a network driver usually manages a frame
-transmission queue. If this gets filled up, the network stack has to be told
-to stop passing further frames for a while. This happens with a call to
-\lstinline+netif_stop_queue()+. If some frames have been sent, and there is
+via the \lstinline+netif_*()+ calls. For example, on successful device opening,
+the network stack has to be notified, that it can now pass frames to the
+interface. This is done by calling \lstinline+netif_start_queue()+. After this
+call, the \lstinline+hard_start_xmit()+ callback can be called by the network
+stack. Furthermore a network driver usually manages a frame transmission queue.
+If this gets filled up, the network stack has to be told to stop passing
+further frames for a while. This happens with a call to
+\lstinline+netif_stop_queue()+. If some frames have been sent, and there is
enough space again to queue new frames, this can be notified with
\lstinline+netif_wake_queue()+. Another important call is
\lstinline+netif_receive_skb()+\footnote{This function is part of the NAPI
@@ -812,30 +881,31 @@
network performance on Linux. Read more in
\url{http://www.cyberus.ca/~hadi/usenix-paper.tgz}.}: It passes a frame to the
network stack, that was just received by the device. Frame data has to be
-packed into a so-called ``socket buffer'' for that (see below).
+included in a so-called ``socket buffer'' for that (see below).
\paragraph{Socket Buffers}
\index{Socket buffer}
-Socket buffers are the basic data type for the whole network stack. They
-serve as containers for network data and are able to quickly add data headers
-and footers, or strip them off again. Therefore a socket buffer consists of an
+Socket buffers are the basic data type for the whole network stack. They serve
+as containers for network data and are able to quickly add data headers and
+footers, or strip them off again. Therefore a socket buffer consists of an
allocated buffer and several pointers that mark beginning of the buffer
-(\textit{head}), beginning of data (\textit{data}), end of data
-(\textit{tail}) and end of buffer (\textit{end}). In addition, a socket buffer
-holds network header information and (in case of received data) a pointer to
-the \textit{net\_device}, it was received on. There exist functions that
-create a socket buffer (\lstinline+dev_alloc_skb()+), add data either from
-front (\lstinline+skb_push()+) or back (\lstinline+skb_put()+), remove data
-from front (\lstinline+skb_pull()+) or back (\lstinline+skb_trim()+), or
-delete the buffer (\lstinline+kfree_skb()+). A socket buffer is passed from
-layer to layer, and is freed by the layer that uses it the last time. In case
-of sending, freeing has to be done by the network driver.
+(\lstinline+head+), beginning of data (\lstinline+data+), end of data
+(\lstinline+tail+) and end of buffer (\lstinline+end+). In addition, a socket
+buffer holds network header information and (in case of received data) a
+pointer to the \lstinline+net_device+ it was received on. There exist
+functions that create a socket buffer (\lstinline+dev_alloc_skb()+), add data
+either from front (\lstinline+skb_push()+) or back (\lstinline+skb_put()+),
+remove data from front (\lstinline+skb_pull()+) or back
+(\lstinline+skb_trim()+), or delete the buffer (\lstinline+kfree_skb()+). A
+socket buffer is passed from layer to layer and is freed by the layer that
+uses it last. In case of sending, freeing has to be done by the
+network driver.
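+
+For illustration, a simplified receive path of an ordinary Ethernet driver
+could look like the sketch below, where \lstinline+length+ bytes of frame
+data have already been fetched from the hardware into the buffer
+\lstinline+hw_data+ (both names are hypothetical):
+
+\begin{lstlisting}[gobble=2,language=C]
+  struct sk_buff *skb;
+
+  skb = dev_alloc_skb(length + 2);
+  if (!skb) {
+      dev->stats.rx_dropped++; /* no memory: drop the frame */
+      return;
+  }
+  skb_reserve(skb, 2); /* align the IP header */
+  memcpy(skb_put(skb, length), hw_data, length); /* copy frame data */
+  skb->protocol = eth_type_trans(skb, dev); /* also sets skb->dev */
+  netif_receive_skb(skb); /* pass the buffer to the network stack */
+  dev->stats.rx_packets++;
+  dev->stats.rx_bytes += length;
+\end{lstlisting}
+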
%------------------------------------------------------------------------------
\section{EtherCAT Device Drivers}
-\label{sec:requirements}
+\label{sec:drivers}
There are a few requirements for Ethernet network devices to function as
EtherCAT devices, when connected to an EtherCAT bus.
@@ -843,25 +913,25 @@
\paragraph{Dedicated Interfaces}
For performance and realtime purposes, the EtherCAT master needs direct and
-exclusive access to the Ethernet hardware. This implies that the network
-device must not be connected to the kernel's network stack as usual, because
-the kernel would try to use it as an ordinary Ethernet device.
+exclusive access to the Ethernet hardware. This implies that the network device
+must not be connected to the kernel's network stack as usual, because the
+kernel would try to use it as an ordinary Ethernet device.
\paragraph{Interrupt-less Operation}
\index{Interrupt}
-EtherCAT frames travel through the logical EtherCAT ring and are then sent
-back to the master. Communication is highly deterministic: A frame is sent and
-will be received again after a constant time. Therefore, there is no need to
-notify the driver about frame reception: The master can instead query the
-hardware for received frames.
+EtherCAT frames travel through the logical EtherCAT ring and are then sent back
+to the master. Communication is highly deterministic: A frame is sent and will
+be received again after a constant time, so there is no need to notify the
+driver about frame reception: The master can instead query the hardware for
+received frames, if it expects them to have already been received.
Figure~\ref{fig:interrupt} shows two workflows for cyclic frame transmission
and reception with and without interrupts.
\begin{figure}[htbp]
\centering
- \includegraphics[width=.8\textwidth]{images/interrupt}
+ \includegraphics[width=.9\textwidth]{images/interrupt}
\caption{Interrupt Operation versus Interrupt-less Operation}
\label{fig:interrupt}
\end{figure}
@@ -881,11 +951,11 @@
workflow: The received data is processed and a new frame is assembled and
sent. There is nothing to do for the rest of the cycle.
-The interrupt-less operation is desirable, because there is simply no need for
-an interrupt. Moreover hardware interrupts are not conducive in improving the
-driver's realtime behaviour: Their indeterministic incidences contribute to
-increasing the jitter. Besides, if a realtime extension (like RTAI) is used,
-some additional effort would have to be made to prioritize interrupts.
+The interrupt-less operation is desirable, because hardware interrupts are not
+conducive to improving the driver's realtime behaviour: Their non-deterministic
+occurrence contributes to increased jitter. Besides, if a realtime
+extension (like RTAI) is used, some additional effort would have to be made to
+prioritize interrupts.
\paragraph{Ethernet and EtherCAT Devices}
@@ -927,200 +997,36 @@
\section{Device Selection}
\label{sec:deviceselection}
-After loading the master module, at least one EtherCAT-capable network
-driver module has to be loaded, that connects one of its devices to
-the master. To specify an EtherCAT device and the master to connect
-to, all EtherCAT-capable network driver modules should provide two
-module parameters:
-
-\begin{description}
-\item[ec\_device\_index] PCI device index of the device that is
- connected to the EtherCAT bus. If this parameter is left away, all
- devices found are treated as ordinary Ethernet devices. Default:
- $-1$
-\item[ec\_master\_index] Index of the master to connect to. Default:
- $0$
-\end{description}
-
-The following command loads the EtherCAT-capable RTL8139 device
-driver, telling it to handle the second device as an EtherCAT device
-and connecting it to the first master:
-
-\begin{lstlisting}[gobble=2]
- # `\textbf{modprobe ec\_8139too ec\_device\_index=1}`
-\end{lstlisting}
-
-Usually, this command does not have to be entered manually, but is
-called by the EtherCAT init script. See section~\ref{sec:init} for
-more information.
-
-%------------------------------------------------------------------------------
-
-\section{The Device Interface}
+After loading the master module, at least one EtherCAT-capable network driver
+module has to be loaded that offers its devices to the master (see
+sec.~\ref{sec:ecdev}). The master module knows which devices to choose from
+its module parameters (see sec.~\ref{sec:mastermod}). If the init script is
+used to start the master, the drivers and devices to use can be specified in
+the sysconfig file (see sec.~\ref{sec:sysconfig}).
+
+%------------------------------------------------------------------------------
+
+\section{EtherCAT Device Interface}
\label{sec:ecdev}
\index{Device interface}
An anticipation to the section about the master module
-(section~\ref{sec:mastermod}) has to be made in order to understand
-the way, a network device driver module can connect a device to a
-specific EtherCAT master.
-
-The master module provides a ``device interface'' for network device
-drivers. To use this interface, a network device driver module must
-include the header
-\textit{devices/ecdev.h}\nomenclature{ecdev}{EtherCAT Device}, coming
-with the EtherCAT master code. This header offers a function interface
-for EtherCAT devices which is explained below. All functions of the
-device interface are named with the prefix \textit{ecdev}.
-
-\paragraph{Device Registration}
-
-A network device driver can connect a physical device to an EtherCAT
-master with the \textit{ecdev\_register()} function.
-
-\begin{lstlisting}[gobble=2,language=C]
- ec_device_t *ecdev_register(unsigned int master_index,
- struct net_device *net_dev,
- ec_isr_t isr,
- struct module *module);
-\end{lstlisting}
-
-The first parameter \textit{master\_index} must be the index of the
-EtherCAT master to connect to (see section~\ref{sec:mastermod}),
-followed by \textit{net\_dev}, the pointer to the corresponding
-net\_device structure, which represents the network device to connect.
-The third parameter \textit{isr} must be a pointer to the interrupt
-service routine (ISR\index{ISR}) handling the device. The master will
-later execute the ISR in order to receive frames and to update the
-device status. The last parameter \textit{module} must be the pointer
-to the device driver module, which is usually accessible via the macro
-\textit{THIS\_MODULE} (see next paragraph). On success, the function
-returns a pointer to an \textit{ec\_device\_t} object, which has to be
-specified when calling further functions of the device interface.
-Therefore the device module has to store this pointer for future use.
-In error case, the \textit{ecdev\_register()} returns \textit{NULL},
-which means that the device could not be registered. The reason for
-this is printed to \textit{Syslog}\index{Syslog}. In this case, the
-device module is supposed to abort the module initialisation and let
-the \textit{insmod} command fail.
-
-\paragraph{Implicit Dependencies}
-
-The reason for the module pointer has to be specified at device registration is
-a non-trivial one: The master has to know about the module, because there will
-be an implicit dependency between the device module and a later connected
-application module: When an application module connects to the master, the use
-count of the master module will be increased, so that the master module can not
-be unloaded for the time of the connection. This is reasonable, and so
-automatically done by the kernel. The kernel knows about this dependency,
-because the application module uses kernel symbols provided by the master
-module. Moreover it is mandatory, that the device module can be unloaded
-neither, because it is implicitly used by the application module, too.
-Unloading it would lead to a fatal situation, because the master would have no
-device to send and receive frames for the application. This dependency can not
-be detected automatically, because the application module does not use any
-symbols of the device module. Therefore the master explicitly increments the
-use counter of the connected device module upon connection of an application
-and decrements it, if it disconnects again. In this manner, it is impossible to
-unload a device module while the master is in use. This is done with the kernel
-function pair \textit{try\_module\_get()}
-\index{try\_module\_get@\textit{try\_module\_get()}} and \textit{module\_put()}
-\index{module\_put@\textit{module\_put()}}. The first one increases the use
-count of a module and only fails, if the module is currently being unloaded.
-The last one decreases the use count again and never fails. Both functions take
-a pointer to the module as their argument, which the device module therefore
-has to specify upon device registration.
-
-\paragraph{Device Unregistering}
-
-The deregistration of a device is usually done in the device module's cleanup
-function, by calling the \textit{ecdev\_unregister()} function and specifying
-the master index and a pointer to the device object again.
-
-\begin{lstlisting}[gobble=2,language=C]
- void ecdev_unregister(unsigned int master_index,
- ec_device_t *device);
-\end{lstlisting}
-
-This function can fail too (if the master index is invalid, or the
-given device was not registered), but due to the fact, that this
-failure can not be dealt with appropriately, because the device module
-is unloading anyway, the failure code would not be of any interest. So
-the function has a void return value.
-
-\paragraph{Starting the Master}
-
-When a device has been initialized completely and is ready to send and
-receive frames, the master has to be notified about this by calling
-the \textit{ecdev\_start()} function.
-
-\begin{lstlisting}[gobble=2,language=C]
- int ecdev_start(unsigned int master_index);
-\end{lstlisting}
-
-The master will then enter ``Idle Mode'' and start scanning the bus
-(and possibly handling EoE slaves). Moreover it will make the bus
-accessible via Sysfs interface and react to user interactions. The
-function takes one parameter \textit{master\_index}, which has to be
-the same as at the call to \textit{ecdev\_register()}. The return
-value will be non-zero if the starting process failed. In this case
-the device module is supposed to abort the init sequence and make the
-init function return an error code.
-
-\paragraph{Stopping the Master}
-
-Before a device can be unregistered, the master has to be stopped by
-calling the \textit{ecdev\_stop()} function. It will stop processing
-messages of EoE slaves and leave ``Idle Mode''. The only parameter is
-\textit{master\_index}. This function can not fail.
-
-\begin{lstlisting}[gobble=2,language=C]
- void ecdev_stop(unsigned int master_index);
-\end{lstlisting}
-
-A subsequent call to \textit{ecdev\_unregister()} will now unregister
-the device savely.
-
-\paragraph{Receiving Frames}
-
-The interrupt service routine handling device events usually has a
-section where new frames are fetched from the hardware and forwarded
-to the kernel network stack via \textit{netif\_receive\_skb()}. For an
-EtherCAT-capable device, this has to be replaced by calling the
-\textit{ecdev\_receive()} function to forward the received data to the
-connected EtherCAT master instead.
-
-\begin{lstlisting}[gobble=2,language=C]
- void ecdev_receive(ec_device_t *device,
- const void *data,
- size_t size);
-\end{lstlisting}
-
-This function takes 3 arguments, a pointer to the device object
-(\textit{device}), a pointer to the received data, and the size of the
-received data. The data range has to include the Ethernet headers
-starting with the destination address and reach up to the last octet
-of EtherCAT data, excluding the FCS. Most network devices handle the
-FCS in hardware, so it is not seen by the driver code and therefore
-doesn't have to be cut off manually.
-
-\paragraph{Handling the Link Status}
-
-Information about the link status (i.~e. if there is a carrier signal detected
-on the physical port) is also important to the master. This information is
-usually gathered by the ISR and should be forwarded to the master by calling
-the \textit{ecdev\_link\_state()} function. The master then can react on this
-and warn the application of a lost link.
-
-\begin{lstlisting}[gobble=2,language=C]
- void ecdev_link_state(ec_device_t *device,
- uint8_t new_state);
-\end{lstlisting}
-
-The parameter \textit{device} has to be a pointer to the device object
-returned by \textit{ecdev\_\-register()}. With the second parameter
-\textit{new\_state}, the new link state is passed: 1, if the link went
-up, and 0, if it went down.
+(sec.~\ref{sec:mastermod}) has to be made in order to understand how a
+network device driver module can connect a device to a specific EtherCAT
+master.
+
+The master module provides a ``device interface'' for network device drivers.
+To use this interface, a network device driver module must include the header
+\textit{devices/ecdev.h}\nomenclature{ecdev}{EtherCAT Device}, coming with the
+EtherCAT master code. This header offers a function interface for EtherCAT
+devices. All functions of the device interface are named with the prefix
+\lstinline+ecdev+.
+
+The documentation of the device interface can be found in the header file or
+in the appropriate module of the interface documentation (see
+sec.~\ref{sec:gendoc} for generation instructions).
+
+% TODO general description of the device interface
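+
+As an illustration of how the interface is used, the sketch below shows a
+driver offering one of its devices to a master. The prototype of
+\lstinline+ecdev_offer()+ and all other names used here are assumptions made
+for this example; the authoritative prototypes are those in
+\textit{devices/ecdev.h}.
+
+\begin{lstlisting}[gobble=2,language=C]
+  #include "ecdev.h" /* the header devices/ecdev.h */
+
+  /* Assumed prototype (check devices/ecdev.h):
+   * ec_device_t *ecdev_offer(struct net_device *net_dev,
+   *                          ec_pollfunc_t poll, struct module *module); */
+
+  static ec_device_t *ecdev; /* NULL, if the master declined the device */
+
+  static void my_poll(struct net_device *dev)
+  {
+      /* fetch received frames from the hardware (driver-specific) */
+  }
+
+  static int my_probe(struct net_device *dev)
+  {
+      ecdev = ecdev_offer(dev, my_poll, THIS_MODULE);
+      if (ecdev) {
+          /* The device is used by an EtherCAT master and must not be
+           * registered with the network stack. */
+          return 0;
+      }
+      return register_netdev(dev); /* normal Ethernet operation */
+  }
+\end{lstlisting}
+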
%------------------------------------------------------------------------------
@@ -1128,347 +1034,42 @@
\label{sec:patching}
\index{Network drivers}
-This section will demonstrate, how to make a standard Ethernet driver
-EtherCAT-capable. The below code examples are taken out of the
-modified RealTek RTL8139 driver coming with the EtherCAT master
-(\textit{devices/8139too.c}). The driver was originally developed by
-Donald Becker, and is currently maintained by Jeff Garzik.
-
-Unfortunately, there is no standard procedure to enable an Ethernet
-driver for use with the EtherCAT master, but there are a few common
-techniques, that are described in this section.
+This section describes how to make a standard Ethernet driver
+EtherCAT-capable. Unfortunately, there is no standard procedure to enable an
+Ethernet driver for use with the EtherCAT master, but there are a few common
+techniques.
\begin{enumerate}
-\item A first simple rule is, that \textit{netif\_*()}-calls must be
- strictly avoided for all EtherCAT devices. As mentioned before,
- EtherCAT devices have no connection to the network stack, and
- therefore must not call its interface functions.
-\item Another important thing is, that EtherCAT devices should be
- operated without interrupts. So any calls of registering interrupt
- handlers and enabling interrupts at hardware level must be avoided,
- too.
-\item The master does not use a new socket buffer for each send
- operation: Instead there is a fix one allocated on master
- initialization. This socket buffer is filled with an EtherCAT frame
- with every send operation and passed to the
- \textit{hard\_start\_xmit()} callback. For that it is necessary,
- that the socket buffer is not be freed by the network driver as
- usual.
+
+\item A first simple rule is that \lstinline+netif_*()+ calls must be avoided
+for all EtherCAT devices. As mentioned before, EtherCAT devices have no
+connection to the network stack, and therefore must not call its interface
+functions.
+
+\item Another important thing is that EtherCAT devices should be operated
+without interrupts. So any calls that register interrupt handlers or enable
+interrupts at the hardware level must be avoided, too.
+
+\item The master does not use a new socket buffer for each send operation:
+Instead, a fixed one is allocated on master initialization. This socket
+buffer is filled with an EtherCAT frame on every send operation and passed to
+the \lstinline+hard_start_xmit()+ callback. Therefore the socket buffer must
+not be freed by the network driver as usual (see the sketch after this list).
+
\end{enumerate}
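+
+The following sketch illustrates rules 1 and 3 for a hypothetical
+\lstinline+hard_start_xmit()+ implementation. It assumes that the driver
+keeps a marker (here a flag in its private data structure) identifying the
+EtherCAT device; a pointer-based variant is discussed below.
+
+\begin{lstlisting}[gobble=2,language=C]
+  static int my_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
+  {
+      struct my_priv *tp = netdev_priv(dev); /* my_priv is hypothetical */
+
+      /* ... copy skb->data into the hardware transmit ring ... */
+
+      if (!tp->is_ec_device) {
+          /* Normal Ethernet path: the skb belongs to the network stack
+           * and is freed; netif_*() queue management is allowed here. */
+          dev_kfree_skb(skb);
+      }
+      /* EtherCAT path (rules 1 and 3): the skb is the master's fixed
+       * transmit buffer and must not be freed; netif_*() calls are
+       * forbidden as well. */
+
+      return 0;
+  }
+\end{lstlisting}
+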
-As mentioned before, the driver will handle both EtherCAT and ordinary
-Ethernet devices. This implies, that for each device-dependent
-operation, it has to be checked if an EtherCAT device is involved, or
-just an Ethernet device. For means of simplicity, this example driver
-will only handle one EtherCAT device. This makes the case
-differentiations easier.
-
-\paragraph{Global Variables}
-
-First of all, there have to be additional global variables declared,
-as shown in the listing:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- static int ec_device_index = -1;
- static int ec_device_master_index = 0;
- static ec_device_t *rtl_ec_dev;
- struct net_device *rtl_ec_net_dev = NULL;
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{1} -- \linenum{2}] To
- comply to the requirements for parameters of EtherCAT device modules
- described in section~\ref{sec:seldev}, there have to be additional
- parameter variables: \textit{ec\_\-device\_\-index} holds the index
- of the EtherCAT device and defaults to $-1$ (no EtherCAT device),
- while \textit{ec\_device\_master\_index} stores index of the master,
- the single device will be connected to. Default: $0$
-\item[\linenum{3}] \textit{rtl\_ec\_dev} will be
- the pointer to the later registered RealTek EtherCAT device, which
- can be used as a parameter for device methods.
-\item[\linenum{4}] \textit{rtl\_ec\_net\_dev} is
- a pointer to the \textit{net\_device} structure of the dedicated
- device and is set while scanning the PCI bus and finding the device
- with the specified index. This is done inside the
- \textit{pci\_module\_init()} function executed as the first thing on
- module loading.
-\end{description}
-
-\paragraph{Module Initialization}
-
-Below is the (shortened) coding of the device driver's module init
-function:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- static int __init rtl8139_init_module(void)
- {
- if (pci_module_init(&rtl8139_pci_driver) < 0) {
- printk(KERN_ERR "Failed to init PCI mod.\n");
- goto out_return;
- }
-
- if (rtl_ec_net_dev) {
- printk(KERN_INFO "Registering"
- " EtherCAT device...\n");
- if (!(rtl_ec_dev =
- ecdev_register(ec_device_master_index,
- rtl_ec_net_dev,
- rtl8139_interrupt,
- THIS_MODULE))) {
- printk(KERN_ERR "Failed to reg."
- " EtherCAT device!\n");
- goto out_unreg_pci;
- }
-
- printk(KERN_INFO "Starting EtherCAT"
- " device...\n");
- if (ecdev_start(ec_device_master_index)) {
- printk(KERN_ERR "Failed to start"
- " EtherCAT device!\n");
- goto out_unreg_ec;
- }
- } else {
- printk(KERN_WARNING "No EtherCAT device"
- " registered!\n");
- }
-
- return 0;
-
- out_unreg_ec:
- ecdev_unregister(ec_device_master_index, rtl_ec_dev);
- out_unreg_pci:
- pci_unregister_driver(&rtl8139_pci_driver);
- out_return:
- return -1;
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{3}] This call initializes all
- RTL8139-compatible devices found on the pci bus. If a device with
- index \textit{ec\_device\_index} is found, a pointer to its
- \textit{net\_device} structure is stored in
- \textit{rtl\_ec\_net\_dev} for later use (see next listings).
-\item[\linenum{8}] If the specified device was
- found, \textit{rtl\_ec\_net\_dev} is non-zero.
-\item[\linenum{11}] The device is connected to
- the specified master with a call to \textit{ecdev\_register()}. If
- this fails, module loading is aborted.
-\item[\linenum{23}] The device registration was
- successful and the master is started. This can fail, which aborts
- module loading.
-\item[\linenum{29}] If no EtherCAT device was
- found, a warning is output.
-\end{description}
-
-\paragraph{Device Searching}
-
-During the PCI initialization phase, a variable \textit{board\_idx} is
-increased for each RTL8139-compatible device found. The code below is
-executed for each device:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- if (board_idx == ec_device_index) {
- rtl_ec_net_dev = dev;
- strcpy(dev->name, "ec0");
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{1}] The device with the specified
- index will be the EtherCAT device.
-\end{description}
-
-\paragraph{Avoiding Device Registration}
-
-Later in the PCI initialization phase, the net\_devices get
-registered. This has to be avoided for EtherCAT devices and so this is
-a typical example for an EtherCAT case differentiation:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- if (dev != rtl_ec_net_dev) {
- i = register_netdev(dev);
- if (i) goto err_out;
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{1}] If the current net\_device is
- not the EtherCAT device, it is registered at the network stack.
-\end{description}
-
-\paragraph{Avoiding Interrupt Registration}
-
-In the next two listings, there is an interrupt requested and the
-device's interrupts are enabled. This also has to be encapsulated by
-if-clauses, because interrupt operation is not wanted for EtherCAT
-devices.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- if (dev != rtl_ec_net_dev) {
- retval = request_irq(dev->irq, rtl8139_interrupt,
- SA_SHIRQ, dev->name, dev);
- if (retval) return retval;
- }
-\end{lstlisting}
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- if (dev != rtl_ec_net_dev) {
- /* Enable all known interrupts by setting
- the interrupt mask. */
- RTL_W16(IntrMask, rtl8139_intr_mask);
- }
-\end{lstlisting}
-
-\paragraph{Frame Sending}
-
-The listing below shows an excerpt of the function representing the
-\textit{hard\_start\_xmit()} callback of the net\_device.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- /* Note: the chip doesn't have auto-pad! */
- if (likely(len < TX_BUF_SIZE)) {
- if (len < ETH_ZLEN)
- memset(tp->tx_buf[entry], 0, ETH_ZLEN);
- skb_copy_and_csum_dev(skb, tp->tx_buf[entry]);
- if (dev != rtl_ec_net_dev) {
- dev_kfree_skb(skb);
- }
- } else {
- if (dev != rtl_ec_net_dev) {
- dev_kfree_skb(skb);
- }
- tp->stats.tx_dropped++;
- return 0;
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{6} + \linenum{10}] The
- master uses a fixed socket buffer for transmission, which is reused
- and may not be freed.
-\end{description}
-
-\paragraph{Frame Receiving}
-
-During ordinary frame reception, a socket buffer is created and filled
-with the received data. This is not necessary for an EtherCAT device:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- if (dev != rtl_ec_net_dev) {
- /* Malloc up new buffer, compatible with net-2e. */
- /* Omit the four octet CRC from the length. */
-
- skb = dev_alloc_skb (pkt_size + 2);
- if (likely(skb)) {
- skb->dev = dev;
- skb_reserve(skb, 2); /* 16 byte align
- the IP fields. */
- eth_copy_and_sum(skb, &rx_ring[ring_off + 4],
- pkt_size, 0);
- skb_put(skb, pkt_size);
- skb->protocol = eth_type_trans(skb, dev);
-
- dev->last_rx = jiffies;
- tp->stats.rx_bytes += pkt_size;
- tp->stats.rx_packets++;
-
- netif_receive_skb (skb);
- } else {
- if (net_ratelimit())
- printk(KERN_WARNING
- "%s: Memory squeeze, dropping"
- " packet.\n", dev->name);
- tp->stats.rx_dropped++;
- }
- } else {
- ecdev_receive(rtl_ec_dev,
- &rx_ring[ring_offset + 4], pkt_size);
- dev->last_rx = jiffies;
- tp->stats.rx_bytes += pkt_size;
- tp->stats.rx_packets++;
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{28}] If the device is an EtherCAT
- device, no socket buffer is allocated. Instead a pointer to the data
- (which is still in the device's receive ring) is passed to the
- EtherCAT master. Unnecessary copy operations are avoided.
-\item[\linenum{30} -- \linenum{32}] The
- device's statistics are updated as usual.
-\end{description}
-
-\paragraph{Link State}
-
-The link state (i.~e. if there is a carrier signal detected on the
-receive port) is determined during execution of the ISR. The listing
-below shows the different processing for Ethernet and EtherCAT
-devices:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- if (dev != rtl_ec_net_dev) {
- if (tp->phys[0] >= 0) {
- mii_check_media(&tp->mii, netif_msg_link(tp),
- init_media);
- }
- } else {
- void __iomem *ioaddr = tp->mmio_addr;
- uint16_t link = RTL_R16(BasicModeStatus)
- & BMSR_LSTATUS;
- ecdev_link_state(rtl_ec_dev, link ? 1 : 0);
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{3}] The ``media check'' is done
- via the media independent interface (MII\nomenclature{MII}{Media
- Independent Interface}), a standard interface for Fast Ethernet
- devices.
-\item[\linenum{7} -- \linenum{10}] For
- EtherCAT devices, the link state is fetched manually from the
- appropriate device register, and passed to the EtherCAT master by
- calling \textit{ecdev\_\-link\_\-state()}.
-\end{description}
-
-\paragraph{Module Cleanup}
-
-Below is the module's cleanup function:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left]
- static void __exit rtl8139_cleanup_module (void)
- {
- printk(KERN_INFO "Cleaning up RTL8139-EtherCAT"
- " module...\n");
-
- if (rtl_ec_net_dev) {
- printk(KERN_INFO "Stopping device...\n");
- ecdev_stop(ec_device_master_index);
- printk(KERN_INFO "Unregistering device...\n");
- ecdev_unregister(ec_device_master_index,
- rtl_ec_dev);
- rtl_ec_dev = NULL;
- }
-
- pci_unregister_driver(&rtl8139_pci_driver);
-
- printk(KERN_INFO "RTL8139-EtherCAT module"
- " cleaned up.\n");
- }
-\end{lstlisting}
-
-\begin{description}
-
-\item[\linenum{6}] Stopping and deregistration is only done, if a device was
-registered before.
-
-\item[\linenum{8}] The master is first stopped, so it does not access the
-device any more.
-
-\item[\linenum{10}] After this, the device is unregistered. The master is now
-``orphaned''.
-
-\end{description}
+An Ethernet driver usually handles several Ethernet devices, each described by
+a \lstinline+net_device+ structure with a \lstinline+priv_data+ field to
+attach driver-dependent data to the structure. To distinguish between normal
+Ethernet devices and the ones used by EtherCAT masters, the private data
+structure used by the driver could be extended by a pointer to the
+\lstinline+ec_device_t+ object returned by \lstinline+ecdev_offer()+ (see
+sec.~\ref{sec:ecdev}) if the device is used by a master, and \lstinline+NULL+
+otherwise.
+
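+A sketch of this approach could look like the following; the structure and
+variable names are made up for illustration, only \lstinline+ecdev_offer()+,
+\lstinline+ecdev_receive()+ and \lstinline+ec_device_t+ come from the device
+interface:
+
+\begin{lstlisting}[gobble=2,language=C]
+  struct my_priv {
+      /* ... original driver-private fields ... */
+      ec_device_t *ecdev; /* NULL for a normal Ethernet device */
+  };
+
+  /* probing: offer the device to the master */
+  tp->ecdev = ecdev_offer(dev, my_poll, THIS_MODULE);
+
+  /* receive path (ISR or poll function): */
+  if (tp->ecdev) {
+      /* EtherCAT device: hand the data to the master directly;
+       * no socket buffer is allocated. */
+      ecdev_receive(tp->ecdev, rx_data, pkt_size);
+  } else {
+      /* normal Ethernet device: build an skb for the network stack */
+      netif_receive_skb(skb);
+  }
+\end{lstlisting}
+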
+The RealTek RTL-8139 Fast Ethernet driver is a ``simple'' Ethernet driver and
+can serve as an example for patching new drivers. The interesting sections can
+be found by searching for the string ``ecdev'' in the file
+\textit{devices/8139too-2.6.24-ethercat.c}.
%------------------------------------------------------------------------------
@@ -1499,15 +1100,14 @@
datagram, invokes the \textit{ec\_master\_send\_datagrams()} function to send
a frame with the queued datagram and then waits actively for its reception.
-This sequential approach is very simple, reflecting in only three
-lines of code. The disadvantage is, that the master is blocked for the
-time it waits for datagram reception. There is no difficulty when only
-one instance is using the master, but if more instances want to
-(synchronously\footnote{At this time, synchronous master access will
- be adequate to show the advantages of an FSM. The asynchronous
- approach will be discussed in section~\ref{sec:eoeimp}}) use the
-master, it is inevitable to think about an alternative to the
-sequential model.
+This sequential approach is very simple, resulting in only three lines of
+code. The disadvantage is that the master is blocked for the time it waits
+for datagram reception. There is no difficulty when only one instance is using
+the master, but if several instances want to use the master
+(synchronously\footnote{At this point, synchronous master access is
+sufficient to show the advantages of an FSM. The asynchronous approach is
+discussed in sec.~\ref{sec:eoe}.}), an alternative to the sequential model
+has to be found.
Master access has to be sequentialized for more than one instance
wanting to send and receive datagrams synchronously. With the present
@@ -1552,8 +1152,8 @@
// state processing finished.
\end{lstlisting}
-See section~\ref{sec:statemodel} for an introduction to the
-state machine programming concept used in the master code.
+See sec.~\ref{sec:statemodel} for an introduction to the state machine
+programming concept used in the master code.
%------------------------------------------------------------------------------
@@ -1722,18 +1322,17 @@
In the master code, state pointers of all state machines\footnote{All except
for the EoE state machine, because multiple EoE slaves have to be handled in
parallel. For this reason each EoE handler object has its own state pointer.}
-are gathered in a single object of the \textit{ec\_fsm\_t} class. This is
-advantageous, because there is always one instance of every state machine
+are gathered in a single object of the \lstinline+ec_fsm_master_t+ class. This
+is advantageous, because one instance of every state machine is always
available and can be started on demand.
\paragraph{Mealy and Moore}
-If a closer look is taken to the above listing, it can be seen that
-the actions executed (the ``outputs'' of the state machine) only
-depend on the current state. This accords to the ``Moore'' model
-introduced in section~\ref{sec:fsmtheory}. As mentioned, the ``Mealy''
-model offers a higher flexibility, which can be seen in the listing
-below:
+A closer look at the above listing shows that the actions executed (the
+``outputs'' of the state machine) only depend on the current state. This
+corresponds to the ``Moore'' model introduced in sec.~\ref{sec:fsmtheory}. As
+mentioned, the ``Mealy'' model offers a higher flexibility, which can be seen
+in the listing below:
\begin{lstlisting}[gobble=2,language=C,numbers=left]
void state7(void *priv_data) {
@@ -1749,9 +1348,10 @@
\end{lstlisting}
\begin{description}
-\item[\linenum{3} + \linenum{7}] The
- state function executes the actions depending on the state
- transition, that is about to be done.
+
+\item[\linenum{3} + \linenum{7}] The state function executes the actions
+depending on the state transition that is about to be taken.
+
\end{description}
The most flexible alternative is to execute certain actions depending
@@ -1772,18 +1372,18 @@
}
\end{lstlisting}
-This model is oftenly used in the master. It combines the best aspects
-of both approaches.
+This model is often used in the master. It combines the best aspects of both
+approaches.
\paragraph{Using Sub State Machines}
-To avoid having too much states, certain functions of the EtherCAT master state
-machine have been sourced out into sub state machines. This helps to
+To avoid having too many states, certain functions of the EtherCAT master
+state machine have been moved out into sub state machines. This helps to
encapsulate the related workflows and moreover avoids the ``state explosion''
-phenomenon described in section~\ref{sec:fsmtheory}. If the master would
-instead use one big state machine, the number of states would be a multiple of
-the actual number. This would increase the level of complexity to a
-non-manageable grade.
+phenomenon described in sec.~\ref{sec:fsmtheory}. If the master instead used
+one big state machine, the number of states would be a multiple of the actual
+number. This would increase the level of complexity to an unmanageable
+degree.
\paragraph{Executing Sub State Machines}
@@ -1810,208 +1410,57 @@
\end{lstlisting}
\begin{description}
-\item[\linenum{3}] \textit{change\_state} is the
- state pointer of the state change state machine. The state function,
- the pointer points on, is executed\ldots
-\item[\linenum{6}] \ldots either until the state
- machine terminates with the error state \ldots
-\item[\linenum{11}] \ldots or until the state
- machine terminates in the end state. Until then, the ``higher''
- state machine remains in the current state and executes the sub
- state machine again in the next cycle.
+
+\item[\linenum{3}] \lstinline+change_state+ is the state pointer of the state
+change state machine. The state function the pointer points to is
+executed\ldots
+
+\item[\linenum{6}] \ldots either until the state machine terminates with the
+error state \ldots
+
+\item[\linenum{11}] \ldots or until the state machine terminates in the end
+state. Until then, the ``higher'' state machine remains in the current state
+and executes the sub state machine again in the next cycle.
+
\end{description}
\paragraph{State Machine Descriptions}
-The below sections describe every state machine used in the EtherCAT
-master. The textual descriptions of the state machines contain
-references to the transitions in the corresponding state transition
-diagrams, that are marked with an arrow followed by the name of the
-successive state. Transitions caused by trivial error cases (i.~e. no
-response from slave) are not described explicitly. These transitions
-are drawn as dashed arrows in the diagrams.
-
-%------------------------------------------------------------------------------
-
-\section{The Operation State Machine}
-\label{sec:fsm-op}
-\index{FSM!Operation}
-
-The Operation state machine is executed by calling the
-\textit{ecrt\_master\_run()} method in cyclic realtime code. Its
-purpose is to monitor the bus and to reconfigure slaves after a bus
-failure or power failure. Figure~\ref{fig:fsm-op} shows its transition
-diagram.
+The sections below describe every state machine used in the EtherCAT master.
+The textual descriptions of the state machines contain references to the
+transitions in the corresponding state transition diagrams, which are marked
+with an arrow followed by the name of the successor state. Transitions caused
+by trivial error cases (i.\,e.\ no response from a slave) are not described
+explicitly. These transitions are drawn as dashed arrows in the diagrams.
+
+%------------------------------------------------------------------------------
+
+\section{The Master State Machine}
+\label{sec:fsm-master}
+\index{FSM!Master}
+
+The master state machine is executed in the context of the master thread.
+Figure~\ref{fig:fsm-master} shows its transition diagram. Its purposes are:
\begin{figure}[htbp]
\centering
- \includegraphics[width=.8\textwidth]{images/fsm-op}
- \caption{Transition diagram of the operation state machine}
- \label{fig:fsm-op}
+ \includegraphics[width=\textwidth]{graphs/fsm_master}
+ \caption{Transition diagram of the master state machine}
+ \label{fig:fsm-master}
\end{figure}
\begin{description}
-\item[START] This is the beginning state of the operation state
- machine. There is a datagram issued, that queries the ``AL Control
- Response'' attribute \cite[section~5.3.2]{alspec} of all slaves via
- broadcast. In this way, all slave states and the number of slaves
- responding can be determined. $\rightarrow$~BROADCAST
-
-\item[BROADCAST] The broadcast datagram is evaluated. A change in the number of
-responding slaves is treated as a topology change. If the number of slaves is
-not as expected, the bus is marked as ``tainted''. In this state, no slave
-reconfiguration is possible, because the assignment of known slaves and those
-present on the bus is ambiguous. If the number of slaves is considered as
-right, the bus is marked for validation, because it turned from tainted to
-normal state and it has to be checked, if all slaves are valid. Now, the state
-of every single slave has to be determined. For that, a (unicast) datagram is
-issued, that queries the first slave's ``AL Control Response'' attribute.
-$\rightarrow$~READ STATES
-
-\item[READ STATES] If the current slave did not respond to its configured
-station address, it is marked as offline, and the next slave is queried.
-$\rightarrow$~READ STATES
-
- If the slave responded, it is marked as online and its current state
- is stored. The next slave is queried. $\rightarrow$~READ STATES
-
- If all slaves have been queried, and the bus is marked for
- validation, the validation is started by checking the first slaves
- vendor ID. $\rightarrow$~VALIDATE VENDOR
-
- If no validation has to be done, it is checked, if all slaves are in
- the state they are supposed to be. If not, the first of slave with
- the wrong state is reconfigured and brought in the required state.
- $\rightarrow$~CONFIGURE SLAVES
-
- If all slaves are in the correct state, the state machine is
- restarted. $\rightarrow$~START
-
-\item[CONFIGURE SLAVES] The slave configuration state machine is
- executed until termination. $\rightarrow$~CONFIGURE SLAVES
-
- If there are still slaves in the wrong state after another check,
- the first of these slaves is configured and brought into the correct
- state again. $\rightarrow$~CONFIGURE SLAVES
-
- If all slaves are in the correct state, the state machine is
- restarted. $\rightarrow$~START
-
-\item[VALIDATE VENDOR] The SII state machine is executed until
- termination. If the slave has the wrong vendor ID, the state machine
- is restarted. $\rightarrow$~START
-
- If the slave has the correct vendor ID, its product ID is queried.
- $\rightarrow$~VALIDATE PRODUCT
-
-\item[VALIDATE PRODUCT] The SII state machine is executed until
- termination. If the slave has the wrong product ID, the state
- machine is restarted. $\rightarrow$~START
-
- If the slave has the correct product ID, the next slave's vendor ID
- is queried. $\rightarrow$~VALIDATE VENDOR
-
- If all slaves have the correct vendor IDs and product codes, the
- configured station addresses can be safely rewritten. This is done
- for the first slave marked as offline.
- $\rightarrow$~REWRITE ADDRESSES
-
-\item[REWRITE ADDRESSES] If the station address was successfully written, it is
-searched for the next slave marked as offline. If there is one, its address is
-reconfigured, too. $\rightarrow$~REWRITE ADDRESSES
-
- If there are no more slaves marked as offline, the state machine is
- restarted. $\rightarrow$~START
-\end{description}
-
-%------------------------------------------------------------------------------
-
-\section{The Idle State Machine}
-\label{sec:fsm-idle}
-\index{FSM!Idle}
-
-The Idle state machine is executed by a kernel thread, if no application is
-connected. Its purpose is to make slave information available to user space,
-operate EoE-capable slaves, read and write SII contents and test slave
-functionality. Figure~\ref{fig:fsm-idle} shows its transition diagram.
-
-\begin{figure}[htbp]
- \centering
- \includegraphics[width=.8\textwidth]{images/fsm-idle}
- \caption{Transition diagram of the idle state machine}
- \label{fig:fsm-idle}
-\end{figure}
-
-\begin{description}
-\item[START] The beginning state of the idle state machine. Similar to
- the operation state machine, a broadcast datagram is issued, to
- query all slave states and the number of slaves.
- $\rightarrow$~BROADCAST
-
-\item[BROADCAST] The number of responding slaves is evaluated. If it
- has changed since the last time, this is treated as a topology
- change and the internal list of slaves is cleared and rebuild
- completely. The slave scan state machine is started for the first
- slave. $\rightarrow$~SCAN FOR SLAVES
-
- If no topology change happened, every single slave state is fetched.
- $\rightarrow$~READ STATES
-
-\item[SCAN FOR SLAVES] The slave scan state machine is executed until
- termination. $\rightarrow$~SCAN FOR SLAVES
-
- If there is another slave to scan, the slave scan state machine is
- started again. $\rightarrow$~SCAN FOR SLAVES
-
- If all slave information has been fetched, slave addresses are
- calculated and EoE processing is started. Then, the state machine is
- restarted. $\rightarrow$~START
-
-\item[READ STATES] If the slave did not respond to the query, it is
- marked as offline. The next slave is queried.
- $\rightarrow$~READ STATES
-
- If the slave responded, it is marked as online. And the next slave
- is queried. $\rightarrow$~READ STATES
-
- If all slave states have been determined, it is checked, if any
- slaves are not in the state they supposed to be. If this is true,
- the slave configuration state machine is started for the first of
- them. $\rightarrow$~CONFIGURE SLAVES
-
- If all slaves are in the correct state, it is checked, if any
- E$^2$PROM write operations are pending. If this is true, the first
- pending operation is executed by starting the SII state machine for
- writing access. $\rightarrow$~WRITE EEPROM
-
- If all these conditions are false, there is nothing to do and the
- state machine is restarted. $\rightarrow$~START
-
-\item[CONFIGURE SLAVES] The slave configuration state machine is
- executed until termination. $\rightarrow$~CONFIGURE SLAVES
-
- After this, it is checked, if another slave needs a state change. If
- this is true, the slave state change state machine is started for
- this slave. $\rightarrow$~CONFIGURE SLAVES
-
- If all slaves are in the correct state, it is determined, if any
- E$^2$PROM write operations are pending. If this is true, the first
- pending operation is executed by starting the SII state machine for
- writing access. $\rightarrow$~WRITE EEPROM
-
- If all prior conditions are false, the state machine is restarted.
- $\rightarrow$~START
-
-\item[WRITE EEPROM] The SII state machine is executed until
- termination. $\rightarrow$~WRITE EEPROM
-
- If the current word has been written successfully, and there are
- still word to write, the SII state machine is started for the next
- word. $\rightarrow$~WRITE EEPROM
-
- If all words have been written successfully, the new E$^2$PROM
- contents are evaluated and the state machine is restarted.
- $\rightarrow$~START
+
+\item[Bus monitoring] The bus topology is monitored. If it changes, the bus is
+(re-)scanned.
+
+\item[Slave configuration] The application-layer states of the slaves are
+monitored. If a slave is not in the state it is supposed to be in, the slave
+is (re-)configured.
+
+\item[Request handling] Requests (either originating from the application or
+from external sources) are handled. A request is a job that the master shall
+process asynchronously, for example an SII access, SDO access, or similar.
\end{description}
@@ -2022,77 +1471,42 @@
\index{FSM!Slave Scan}
The slave scan state machine, which can be seen in
-figure~\ref{fig:fsm-slavescan}, leads through the process of fetching
-all slave information.
+figure~\ref{fig:fsm-slavescan}, leads through the process of reading desired
+slave information.
\begin{figure}[htbp]
\centering
- \includegraphics[width=.6\textwidth]{images/fsm-slavescan}
+ \includegraphics[height=.8\textheight]{graphs/fsm_slave_scan}
\caption{Transition diagram of the slave scan state machine}
\label{fig:fsm-slavescan}
\end{figure}
+The scan process includes the following steps:
+
\begin{description}
-\item[START] In the beginning state of the slave scan state machine,
- the station address is written to the slave, which is always the
- ring position~+~$1$. In this way, the address 0x0000 (default
- address) is not used, which makes it easy to detect unconfigured
- slaves. $\rightarrow$~ADDRESS
-
-\item[ADDRESS] The writing of the station address is verified. After
- that, the slave's ``AL Control Response'' attribute is queried.
- $\rightarrow$~STATE
-
-\item[STATE] The AL state is evaluated. A warning is output, if the
- slave has still the \textit{Change} bit set. After that, the slave's
- ``DL Information'' attribute is queried.
- $\rightarrow$~BASE
-
-\item[BASE] The queried base data are evaluated: Slave type, revision
- and build number, and even more important, the number of supported
- sync managers and FMMUs are stored. After that, the slave's data
- link layer information is read from the ``DL Status'' attribute at
- address 0x0110. $\rightarrow$~DATALINK
-
-\item[DATALINK] In this state, the DL information is evaluated: This
- information about the communication ports contains, if the link is
- up, if the loop has been closed and if there is a carrier detected
- on the RX side of each port.
-
- Then, the state machine starts measuring the size of the slave's
- E$^2$PROM contents. This is done by subsequently reading out each
- category header, until the last category is reached (type 0xFFFF).
- This procedure is started by querying the first category header at
- word address 0x0040 via the SII state machine.
- $\rightarrow$~EEPROM SIZE
-
-\item[EEPROM SIZE] The SII state machine is executed until
- termination. $\rightarrow$~EEPROM SIZE
-
- If the category type does not mark the end of the categories, the
- position of the next category header is determined via the length of
- the current category, and the SII state machine is started again.
- $\rightarrow$~EEPROM SIZE
-
- If the size of the E$^2$PROM contents has been determined, memory is
- allocated, to read all the contents. The SII state machine is
- started to read the first word. $\rightarrow$~EEPROM DATA
-
-\item[EEPROM DATA] The SII state machine is executed until
- termination. $\rightarrow$~EEPROM DATA
-
- Two words have been read. If more than one word is needed, the two
- words are written in the allocated memory. Otherwise only one word
- (the last word) is copied. If more words are to read, the SII state
- machine is started again to read the next two words.
- $\rightarrow$~EEPROM DATA
-
- The complete E$^2$PROM contents have been read. The slave's identity
- object and mailbox information are evaluated. Moreover the category
- types STRINGS, GENERAL, SYNC and PDO are evaluated. The slave
- scanning has been completed. $\rightarrow$~END
-
-\item[END] Slave scanning has been finished.
+
+\item[Node Address] The node address is set for the slave, so that it can be
+node-addressed for all following operations.
+
+\item[AL State] The initial application-layer state is read.
+
+\item[Base Information] Base information (like the number of supported FMMUs)
+is read from the lower physical memory.
+
+\item[Data Link] Information about the physical ports is read.
+
+\item[SII Size] The size of the SII contents is determined to allocate SII
+image memory.
+
+\item[SII Data] The SII contents are read into the master's image.
+
+\item[PREOP] If the slave supports CoE, it is set to PREOP state using the
+State change FSM (see sec.~\ref{sec:fsm-change}) to enable mailbox
+communication and read the PDO configuration via CoE.
+
+\item[PDOs] The PDOs are read via CoE (if supported) using the PDO Reading FSM
+(see sec.~\ref{sec:fsm-pdo}). If this is successful, the PDO information from
+the SII (if any) is overwritten.
\end{description}
@@ -2103,103 +1517,52 @@
\index{FSM!Slave Configuration}
The slave configuration state machine, which can be seen in
-figure~\ref{fig:fsm-slaveconf}, leads through the process of
-configuring a slave and bringing it to a certain state.
+figure~\ref{fig:fsm-slaveconf}, leads through the process of configuring a
+slave and bringing it to a certain application-layer state.
\begin{figure}[htbp]
\centering
- \includegraphics[width=.6\textwidth]{images/fsm-slaveconf}
+ \includegraphics[height=.9\textheight]{graphs/fsm_slave_conf}
\caption{Transition diagram of the slave configuration state
machine}
\label{fig:fsm-slaveconf}
\end{figure}
\begin{description}
-\item[INIT] The state change state machine has been initialized to
- bring the slave into the INIT state. Now, the slave state change
- state machine is executed until termination. $\rightarrow$~INIT
-
- If the slave state change failed, the configuration has to be
- aborted. $\rightarrow$~END
-
- The slave state change succeeded and the slave is now in INIT state.
- If this is the target state, the configuration is finished.
- $\rightarrow$~END
-
- If the slave does not support any sync managers, the sync manager
- configuration can be skipped. The state change state machine is
- started to bring the slave into PREOP state.
- $\rightarrow$~PREOP
-
- Sync managers are configured conforming to the sync manager category
- information provided in the slave's E$^2$PROM. The corresponding
- datagram is issued. $\rightarrow$~SYNC
-
-\item[SYNC] If the sync manager configuration datagram is accepted,
- the sync manager configuration was successful. The slave may now
- enter the PREOP state, and the state change state machine is
- started. $\rightarrow$~PREOP
-
-\item[PREOP] The state change state machine is executed until
- termination. $\rightarrow$~PREOP
-
- If the state change failed, the configuration has to be aborted.
- $\rightarrow$~END
-
- If the PREOP state was the target state, the configuration is
- finished. $\rightarrow$~END
-
- If the slave supports no FMMUs, the FMMU configuration can be
- skipped. If the slave has Sdos to configure, it is begun with
- sending the first Sdo. $\rightarrow$~SDO\_CONF
-
- If no Sdo configurations are provided, the slave can now directly be
- brought into the SAFEOP state and the state change state machine is
- started again. $\rightarrow$~SAFEOP
-
- Otherwise, all supported FMMUs are configured according to the Pdos
- requested via the master's realtime interface. The appropriate
- datagram is issued. $\rightarrow$~FMMU
-
-\item[FMMU] The FMMU configuration datagram was accepted. If the slave
- has Sdos to configure, it is begun with sending the first Sdo.
- $\rightarrow$~SDO\_CONF
-
- Otherwise, the slave can now be brought into the SAFEOP state. The
- state change state machine is started.
- $\rightarrow$~SAFEOP
-
-\item[SDO\_CONF] The CoE state machine is executed until termination.
- $\rightarrow$~SDO\_CONF
-
- If another Sdo has to be configured, a new Sdo download sequence is
- begun. $\rightarrow$~SDO\_CONF
-
- Otherwise, the slave can now be brought into the SAFEOP state. The
- state change state machine is started.
- $\rightarrow$~SAFEOP
-
-\item[SAFEOP] The state change state machine is executed until
- termination. $\rightarrow$~SAFEOP
-
- If the state change failed, the configuration has to be aborted.
- $\rightarrow$~END
-
- If the SAFEOP state was the target state, the configuration is
- finished. $\rightarrow$~END
-
- The slave can now directly be brought into the OP state and the
- state change state machine is started a last time.
- $\rightarrow$~OP
-
-\item[OP] The state change state machine is executed until
- termination. $\rightarrow$~OP
-
- If the state change state machine terminates, the slave
- configuration is finished, regardless of its success.
- $\rightarrow$~END
-
-\item[END] The termination state.
+
+\item[INIT] The state change FSM is used to bring the slave to the INIT state.
+
+\item[FMMU Clearing] To prevent the slave from reacting to any process data,
+the FMMU configuration is cleared. If the slave does not support FMMUs, this
+state is skipped. If INIT is the requested state, the state machine is
+finished.
+
+\item[Mailbox Sync Manager Configuration] If the slave supports mailbox
+communication, the mailbox sync managers are configured. Otherwise this state
+is skipped.
+
+\item[PREOP] The state change FSM is used to bring the slave to PREOP state.
+If this is the requested state, the state machine is finished.
+
+\item[SDO Configuration] If a slave configuration is attached (see
+sec.~\ref{sec:masterconfig}) and the application provided any SDO
+configurations, these are sent to the slave.
+
+\item[PDO Configuration] The PDO configuration state machine is executed to
+apply all necessary PDO configurations.
+
+\item[PDO Sync Manager Configuration] If any PDO sync managers exist, they are
+configured.
+
+\item[FMMU Configuration] If there are FMMU configurations supplied by the
+application (i.\,e.\ if the application registered PDO entries), they are
+applied.
+
+\item[SAFEOP] The state change FSM is used to bring the slave to SAFEOP state.
+If this is the requested state, the state machine is finished.
+
+\item[OP] The state change FSM is used to bring the slave to OP state.
+If this is the requested state, the state machine is finished.
\end{description}
@@ -2210,72 +1573,47 @@
\index{FSM!State Change}
The state change state machine, which can be seen in
-figure~\ref{fig:fsm-change}, leads through the process of changing a
-slave's state. This implements the states and transitions described in
-\cite[section~6.4.1]{alspec}.
+figure~\ref{fig:fsm-change}, leads through the process of changing a slave's
+application-layer state. This implements the states and transitions described
+in \cite[sec.~6.4.1]{alspec}.
\begin{figure}[htbp]
\centering
- \includegraphics[width=.9\textwidth]{images/fsm-change}
- \caption{Transition diagram of the state change state machine}
+ \includegraphics[width=.6\textwidth]{graphs/fsm_change}
+ \caption{Transition Diagram of the State Change State Machine}
\label{fig:fsm-change}
\end{figure}
\begin{description}
-\item[START] The beginning state, where a datagram with the state
- change command is written to the slave's ``AL Control Request''
- attribute. Nothing can fail. $\rightarrow$~CHECK
-
-\item[CHECK] After the state change datagram has been sent, the ``AL
- Control Response'' attribute is queried with a second datagram.
- $\rightarrow$~STATUS
-
-\item[STATUS] The read memory contents are evaluated: While the
- parameter \textit{State} still contains the old slave state, the
- slave is busy with reacting on the state change command. In this
- case, the attribute has to be queried again.
- $\rightarrow$~STATUS
-
- In case of success, the \textit{State} parameter contains the new
- state and the \textit{Change} bit is cleared. The slave is in the
- requested state. $\rightarrow$~END
-
- If the slave can not process the state change, the \textit{Change}
- bit is set: Now the master tries to get the reason for this by
- querying the \textit{AL Status Code} parameter.
- $\rightarrow$~CODE
-
-\item[END] If the state machine ends in this state, the slave's state
- change has been successful.
-
-\item[CODE] The status code query has been sent. Reading the
- \textit{AL Status Code} might fail, because not all slaves support
- this parameter. Anyway, the master has to acknowledge the state
- change error by writing the current slave state to the ``AL Control
- Request'' attribute with the \textit{Acknowledge} bit set.
- $\rightarrow$~ACK
-
-\item[ACK] After that, the ``AL Control Response'' attribute is
- queried for the state of the acknowledgement.
- $\rightarrow$~CHECK ACK
-
-\item[CHECK ACK] If the acknowledgement has been accepted by the
- slave, the old state is kept. Still, the state change was
- unsuccessful. $\rightarrow$~ERROR
-
- If the acknowledgement is ignored by the slave, a timeout happens.
- In any case, the overall state change was unsuccessful.
- $\rightarrow$~ERROR
-
- If there is still now response from the slave, but the timer did not
- run out yet, the slave's ``AL Control Response'' attribute is
- queried again. $\rightarrow$~CHECK ACK
-
-\item[ERROR] If the state machine ends in this state, the slave's
- state change was unsuccessful.
+
+\item[Start] The new application-layer state is requested via the ``AL Control
+Request'' register (see~\cite[sec. 5.3.1]{alspec}).
+
+\item[Check for Response] Some slaves need some time to respond to an AL
+state change command and do not acknowledge it immediately. In this case, the
+command is issued again until it is acknowledged.
+
+\item[Check AL Status] If the AL State change datagram was acknowledged, the
+``AL Control Response'' register (see~\cite[sec. 5.3.2]{alspec}) must be read
+out until the slave changes the AL state.
+
+\item[AL Status Code] If the slave refused the state change command, the
+reason can be read from the ``AL Status Code'' field in the ``AL State
+Changed'' registers (see~\cite[sec. 5.3.3]{alspec}).
+
+\item[Acknowledge State] If the state change was not successful, the master
+has to acknowledge the old state by writing to the ``AL Control Request''
+register again.
+
+\item[Check Acknowledge] After sending the acknowledge command, the master
+has to read out the ``AL Control Response'' register again.
\end{description}
+The ``start\_ack'' state is a shortcut in the state machine for the case that
+the master wants to acknowledge a spontaneous AL state change that was not
+requested.
+
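+For illustration, the handshake can be summarised in sequential pseudo-code;
+the master's FSM performs the same steps spread over several cycles and with
+timeouts. The helper functions and the register offsets \lstinline+0x0120+
+(AL Control Request), \lstinline+0x0130+ (AL Control Response) and
+\lstinline+0x0134+ (AL Status Code) are assumptions made for this sketch.
+
+\begin{lstlisting}[gobble=2,language=C]
+  /* hypothetical helpers issuing FPWR/FPRD datagrams to one slave */
+  void fpwr_u16(uint16_t station, uint16_t reg, uint16_t value);
+  uint16_t fprd_u16(uint16_t station, uint16_t reg);
+
+  int change_state(uint16_t station, uint16_t requested)
+  {
+      uint16_t status;
+
+      fpwr_u16(station, 0x0120, requested);      /* AL Control Request */
+
+      do {                                       /* Check AL Status */
+          status = fprd_u16(station, 0x0130);    /* AL Control Response */
+      } while ((status & 0x000F) != requested && !(status & 0x0010));
+
+      if (status & 0x0010) {                     /* error indication bit */
+          uint16_t code = fprd_u16(station, 0x0134); /* AL Status Code */
+          /* Acknowledge State: write old state with acknowledge bit set */
+          fpwr_u16(station, 0x0120, (status & 0x000F) | 0x0010);
+          return -code;
+      }
+      return 0;                                  /* state change successful */
+  }
+\end{lstlisting}
+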
%------------------------------------------------------------------------------
\section{The SII State Machine}
@@ -2283,68 +1621,111 @@
\index{FSM!SII}
The SII\index{SII} state machine (shown in figure~\ref{fig:fsm-sii})
-implements the process of reading or writing E$^2$PROM data via the
-Slave Information Interface described in \cite[section~5.4]{alspec}.
+implements the process of reading or writing SII data via the Slave
+Information Interface described in \cite[sec.~6.4]{dlspec}.
\begin{figure}[htbp]
\centering
- \includegraphics[width=.9\textwidth]{images/fsm-sii}
- \caption{Transition diagram of the SII state machine}
+ \includegraphics[width=.5\textwidth]{graphs/fsm_sii}
+ \caption{Transition Diagram of the SII State Machine}
\label{fig:fsm-sii}
\end{figure}
+This is how the reading part of the state machine works:
+
\begin{description}
-\item[READ\_START] The beginning state for reading access, where the
- read request and the requested address are written to the SII
- attribute. Nothing can fail up to now.
- $\rightarrow$~READ\_CHECK
-
-\item[READ\_CHECK] When the SII read request has been sent
- successfully, a timer is started. A check/fetch datagram is issued,
- that reads out the SII attribute for state and data.
- $\rightarrow$~READ\_FETCH
-
-\item[READ\_FETCH] Upon reception of the check/fetch datagram, the
- \textit{Read Operation} and \textit{Busy} parameters are checked:
- \begin{itemize}
- \item If the slave is still busy with fetching E$^2$PROM data into
- the interface, the timer is checked. If it timed out, the reading
- is aborted ($\rightarrow$~ERROR), if not, the check/fetch datagram
- is issued again. $\rightarrow$~READ\_FETCH
-
- \item If the slave is ready with reading data, these are copied from
- the datagram and the read cycle is completed.
- $\rightarrow$~END
- \end{itemize}
+
+\item[Start Reading] The read request and the requested word address are
+written to the SII attribute.
+
+\item[Check Read Command] If the SII read request command has been
+acknowledged, a timer is started. A datagram is issued that reads out the SII
+attribute for state and data.
+
+\item[Fetch Data] If the read operation is still busy (the SII is usually
+implemented as an E$^2$PROM), the state is read again. Otherwise the data are
+copied from the datagram.
+
\end{description}
-The write access states behave nearly the same:
+The writing part works nearly the same way:
\begin{description}
-\item[WRITE\_START] The beginning state for writing access,
- respectively. A write request, the target address and the data word
- are written to the SII attribute. Nothing can fail.
- $\rightarrow$~WRITE\_CHECK
-
-\item[WRITE\_CHECK] When the SII write request has been sent
- successfully, the timer is started. A check datagram is issued, that
- reads out the SII attribute for the state of the write operation.
- $\rightarrow$~WRITE\_CHECK2
-
-\item[WRITE\_CHECK2] Upon reception of the check datagram, the
- \textit{Write Operation} and \textit{Busy} parameters are checked:
- \begin{itemize}
- \item If the slave is still busy with writing E$^2$PROM data, the
- timer is checked. If it timed out, the operation is aborted
- ($\rightarrow$~ERROR), if not, the check datagram is issued again.
- $\rightarrow$~WRITE\_CHECK2
- \item If the slave is ready with writing data, the write cycle is
- completed. $\rightarrow$~END
- \end{itemize}
+
+\item[Start Writing] A write request, the target address and the data word are
+written to the SII attribute.
+
+\item[Check Write Command] If the SII write request command has been
+acknowledged, a timer is started. A datagram is issued that reads out the SII
+attribute for the state of the write operation.
+
+\item[Wait while Busy] If the write operation is still busy (determined by a
+minimum wait time and the state of the busy flag), the state machine remains
+in this state, so that no further write operation is issued too early.
+
\end{description}
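+
+A sequential sketch of a single SII read cycle follows. The register offsets
+used (\lstinline+0x0502+ control/status, \lstinline+0x0504+ word address,
+\lstinline+0x0508+ data) and the helper functions are assumptions made for
+illustration; the master's state machine performs the same steps
+asynchronously, one datagram per cycle and with timeouts.
+
+\begin{lstlisting}[gobble=2,language=C]
+  /* hypothetical helpers issuing register accesses to a single slave */
+  void reg_write_u16(uint16_t station, uint16_t reg, uint16_t value);
+  void reg_write_u32(uint16_t station, uint16_t reg, uint32_t value);
+  uint16_t reg_read_u16(uint16_t station, uint16_t reg);
+  uint32_t reg_read_u32(uint16_t station, uint16_t reg);
+
+  uint32_t sii_read(uint16_t station, uint16_t word_address)
+  {
+      /* Start Reading: word address and read command */
+      reg_write_u32(station, 0x0504, word_address);
+      reg_write_u16(station, 0x0502, 0x0100); /* bit 8: read operation */
+
+      /* Check Read Command / Fetch Data: wait while busy (bit 15) */
+      while (reg_read_u16(station, 0x0502) & 0x8000)
+          ; /* the real FSM re-issues a check datagram each cycle */
+
+      return reg_read_u32(station, 0x0508); /* two SII data words */
+  }
+\end{lstlisting}
+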
%------------------------------------------------------------------------------
+\section{The PDO State Machines}
+\label{sec:fsm-pdo}
+\index{FSM!PDO}
+
+The PDO state machines are a set of state machines that read or write the PDO
+assignment and the PDO mapping via the ``CoE Communication Area'' described in
+\cite[sec. 5.6.7.4]{alspec}. For the object access, the CANopen over EtherCAT
+access primitives are used (see sec.~\ref{sec:coe}), so the slave must support
+the CoE mailbox protocol.
+
+\paragraph{PDO Reading FSM} This state machine (fig.~\ref{fig:fsm-pdo-read})
+reads the complete PDO configuration of a slave. It reads the PDO assignment
+for each sync manager and uses the PDO Entry Reading FSM
+(fig.~\ref{fig:fsm-pdo-entry-read}) to read the mapping for each assigned PDO.
+
+\begin{figure}[htbp]
+ \centering
+ \includegraphics[width=.4\textwidth]{graphs/fsm_pdo_read}
+ \caption{Transition Diagram of the PDO Reading State Machine}
+ \label{fig:fsm-pdo-read}
+\end{figure}
+
+Basically, it reads subindex zero (the number of elements) of each sync
+manager's PDO assignment SDO (\lstinline+0x1C1x+) to determine the number of
+assigned PDOs for that sync manager, and then reads the remaining subindices
+of the SDO to get the indices of the assigned PDOs. When a PDO index has been
+read, the PDO Entry Reading FSM is executed to read the PDO's mapped PDO
+entries.
+
+\paragraph{PDO Entry Reading FSM} This state machine
+(fig.~\ref{fig:fsm-pdo-entry-read}) reads the PDO mapping (the PDO entries) of
+a PDO. It reads the respective mapping SDO (\lstinline+0x1600+ --
+\lstinline+0x17ff+, or \lstinline+0x1a00+ -- \lstinline+0x1bff+) for the given
+PDO by reading first the subindex zero (number of elements) to determine the
+number of mapped PDO entries. After that, each subindex is read to get the
+mapped PDO entry index, subindex and bit size.
+
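+The mapping entries read in this way are 32-bit values packing index,
+subindex and bit length of the mapped object, following the usual CoE
+convention. A small decoding sketch (the function name is chosen for
+illustration):
+
+\begin{lstlisting}[gobble=2,language=C]
+  /* decode one PDO mapping entry as read from subindex 1..n of a
+   * mapping SDO (0x1600 -- 0x17ff / 0x1a00 -- 0x1bff) */
+  void decode_pdo_entry(uint32_t value, uint16_t *index,
+                        uint8_t *subindex, uint8_t *bit_length)
+  {
+      *index      = (value >> 16) & 0xFFFF; /* object index */
+      *subindex   = (value >>  8) & 0xFF;   /* object subindex */
+      *bit_length =  value        & 0xFF;   /* size of the entry in bits */
+  }
+\end{lstlisting}
+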
+\begin{figure}[htbp]
+ \centering
+ \includegraphics[width=.4\textwidth]{graphs/fsm_pdo_entry_read}
+ \caption{Transition Diagram of the PDO Entry Reading State Machine}
+ \label{fig:fsm-pdo-entry-read}
+\end{figure}
+
+\begin{figure}[htbp]
+ \centering
+ \includegraphics[width=.9\textwidth]{graphs/fsm_pdo_conf}
+ \caption{Transition Diagram of the PDO Configuration State Machine}
+ \label{fig:fsm-pdo-conf}
+\end{figure}
+
+\begin{figure}[htbp]
+ \centering
+ \includegraphics[width=.4\textwidth]{graphs/fsm_pdo_entry_conf}
+ \caption{Transition Diagram of the PDO Entry Configuration State Machine}
+ \label{fig:fsm-pdo-entry-conf}
+\end{figure}
+
+%------------------------------------------------------------------------------
+
\chapter{Mailbox Protocol Implementations}
\index{Mailbox}
@@ -2353,130 +1734,94 @@
%------------------------------------------------------------------------------
-\section{Ethernet-over-EtherCAT (EoE)}
-\label{sec:eoeimp}
+\section{Ethernet over EtherCAT (EoE)}
+\label{sec:eoe}
\index{EoE}
-The EtherCAT master implements the Ethernet-over-EtherCAT mailbox
-protocol to enable the tunneling of Ethernet frames to special slaves,
-that can either have physical Ethernet ports to forward the frames to,
-or have an own IP stack to receive the frames.
+The EtherCAT master implements the
+Ethernet over EtherCAT\nomenclature{EoE}{Ethernet over EtherCAT, Mailbox
+Protocol} mailbox protocol~\cite[sec.~5.7]{alspec} to enable the tunneling of
+Ethernet frames to special slaves that can either have physical Ethernet
+ports to forward the frames to, or have their own IP stack to receive the
+frames.
\paragraph{Virtual Network Interfaces}
-The master creates a virtual EoE network interface for every
-EoE-capable slave. These interfaces are called \textit{eoeX}, where X
-is a number provided by the kernel on interface registration. Frames
-sent to these interfaces are forwarded to the associated slaves by the
-master. Frames, that are received by the slaves, are fetched by the
-master and forwarded to the virtual interfaces.
+The master creates a virtual EoE network interface for every EoE-capable
+slave. These interfaces are called either
+
+\begin{description}
+
+\item[eoeXsY] for a slave without an alias address (see
+sec.~\ref{sec:ethercat-alias}), where X is the master index and Y is the
+slave's ring position, or
+
+\item[eoeXaY] for a slave with a non-zero alias address, where X is the master
+index and Y is the decimal alias address.
+
+\end{description}
+
+Frames sent to these interfaces are forwarded to the associated slaves by the
+master. Frames received by the slaves are fetched by the master and forwarded
+to the virtual interfaces.
This bears the following advantages:
\begin{itemize}
+
\item Flexibility: The user can decide, how the EoE-capable slaves are
- interconnected with the rest of the world.
-\item Standard tools can be used to monitor the EoE activity and to
- configure the EoE interfaces.
-\item The Linux kernel's layer-2-bridging implementation (according to
- the IEEE 802.1D MAC Bridging standard) can be used natively to
- bridge Ethernet traffic between EoE-capable slaves.
-\item The Linux kernel's network stack can be used to route packets
- between EoE-capable slaves and to track security issues, just like
- having physical network interfaces.
+interconnected with the rest of the world.
+
+\item Standard tools can be used to monitor the EoE activity and to configure
+the EoE interfaces.
+
+\item The Linux kernel's layer-2-bridging implementation (according to the
+IEEE 802.1D MAC Bridging standard) can be used natively to bridge Ethernet
+traffic between EoE-capable slaves.
+
+\item The Linux kernel's network stack can be used to route packets between
+EoE-capable slaves and to track security issues, just like having physical
+network interfaces.
+
\end{itemize}
\paragraph{EoE Handlers}
-The virtual EoE interfaces and the related functionality is encapsulated in the
-\textit{ec\_eoe\_t} class (see section~\ref{sec:class-eoe}). So the master
-does not create the network interfaces directly: This is done inside the
-constructor of the \textit{ec\_eoe\_t} class. An object of this class is called
-``EoE handler'' below. An EoE handler additionally contains a frame queue. Each
-time, the kernel passes a new socket buffer for sending via the interface's
-\textit{hard\_start\_xmit()} callback, the socket buffer is queued for
-transmission by the EoE state machine (see below). If the queue gets filled up,
-the passing of new socket buffers is suspended with a call to
-\textit{netif\_stop\_queue()}.
-
-\paragraph{Static Handler Creation}
-
-The master creates a pool of EoE handlers at startup, that are coupled
-to EoE-capable slaves on demand. The lifetime of the corresponding
-network interfaces is equal to the lifetime of the master module.
-This approach is opposed to creating the virtual network interfaces on
-demand (i.~e. on running across a new EoE-capable slave). The latter
-approach was considered as difficult, because of several reasons:
-
-\begin{itemize}
-\item The \textit{alloc\_netdev()} function can sleep and must be
- called from a non-interrupt context. This reduces the flexibility of
- choosing an appropriate method for cyclic EoE processing.
-\item Unregistering network interfaces requires them to be ``down'',
- which can not be guaranteed upon sudden disappearing of an
- EoE-capable slave.
-\item The connection to the EoE-capable slaves must be as continuous
- as possible. Especially the transition from idle to operation mode
- (and vice versa) causes the rebuilding of the internal data
- structures. These transitions must be as transparent as possible for
- the instances using the network interfaces.
-\end{itemize}
-
-\paragraph{Number of Handlers} % FIXME
-
-The master module has a parameter \textit{ec\_eoeif\_count} to specify
-the number of EoE interfaces (and handlers) per master to create. This
-parameter can either be specified when manually loading the master
-module, or (when using the init script) by setting the
-\$EOE\_INTERFACES variable in the sysconfig file (see
-section~\ref{sec:sysconfig}). Upon loading of the master module, the
-virtual interfaces become available:
-
-\begin{lstlisting}[gobble=2]
- # `\textbf{ifconfig -a}`
- eoe0 Link encap:Ethernet HWaddr 00:11:22:33:44:06
- BROADCAST MULTICAST MTU:1500 Metric:1
- RX packets:0 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
-
- eoe1 Link encap:Ethernet HWaddr 00:11:22:33:44:07
- BROADCAST MULTICAST MTU:1500 Metric:1
- RX packets:0 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
- ...
-\end{lstlisting}
-
-\paragraph{Coupling of EoE Slaves}
-
-During execution of the slave scan state machine (see
-section~\ref{sec:fsm-scan}), the master determines the supported
-mailbox protocols. This is done by examining the ``Supported Mailbox
-Protocols'' mask field at word address 0x001C of the SII\index{SII}.
-If bit 1 is set, the slave supports the EoE protocol. After slave
-scanning, the master runs through all slaves again and couples each
-EoE-capable slave to a free EoE handler. It can happen, that there are
-not enough EoE handlers to cover all EoE-capable slaves. In this case,
-the number of EoE handlers must be increased accordingly.
+The virtual EoE interfaces and the related functionality are encapsulated in
+the \lstinline+ec_eoe_t+ class. An object of this class is called an ``EoE
+handler''. The master does not create the network interfaces directly: This is
+done inside the constructor of an EoE handler. An EoE handler additionally
+contains a frame queue. Each time the kernel passes a new socket buffer for
+sending via the interface's \lstinline+hard_start_xmit()+ callback, the socket
+buffer is queued for transmission by the EoE state machine (see below). If the
+queue fills up, the passing of new socket buffers is suspended with a call to
+\lstinline+netif_stop_queue()+.
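+
+The following simplified sketch illustrates this queueing behaviour; the field
+and constant names are made up and do not reflect the actual
+\lstinline+ec_eoe_t+ implementation:
+
+\begin{lstlisting}[language=C]
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+
+#define EOE_TX_QUEUE_MAX 100 /* assumed queue limit */
+
+struct eoe_handler {              /* hypothetical, reduced handler type */
+    struct sk_buff_head tx_queue; /* frames queued for the EoE FSM */
+};
+
+static int eoe_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+    struct eoe_handler *eoe = netdev_priv(dev);
+
+    /* Only queue the frame; the EoE state machine sends it later. */
+    skb_queue_tail(&eoe->tx_queue, skb);
+
+    /* Suspend the passing of new socket buffers, if the queue is full. */
+    if (skb_queue_len(&eoe->tx_queue) >= EOE_TX_QUEUE_MAX)
+        netif_stop_queue(dev);
+
+    return 0;
+}
+\end{lstlisting}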
+
+\paragraph{Creation of EoE Handlers}
+
+During bus scanning (see sec.~\ref{sec:fsm-scan}), the master determines the
+supported mailbox protocols for each slave. This is done by examining the
+``Supported Mailbox Protocols'' mask field at word address 0x001C of the
+SII\index{SII}. If bit 1 is set, the slave supports the EoE protocol. In this
+case, an EoE handler is created for that slave.
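+
+A minimal sketch of this check (the constant name is an assumption, not
+necessarily the one used in the master sources):
+
+\begin{lstlisting}[language=C]
+#include <stdint.h>
+
+#define MBOX_PROT_EOE 0x02 /* bit 1 of the SII mailbox protocol mask */
+
+/* mbox_protocols: value of the SII word at address 0x001C */
+static int slave_supports_eoe(uint16_t mbox_protocols)
+{
+    return (mbox_protocols & MBOX_PROT_EOE) != 0;
+}
+\end{lstlisting}
+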
\paragraph{EoE State Machine}
\index{FSM!EoE}
-Every EoE handler owns an EoE state machine, that is used to send
-frames to the coupled slave and receive frames from the it via the EoE
+Every EoE handler owns an EoE state machine that is used to send frames to
+the corresponding slave and receive frames from it via the EoE
communication primitives. This state machine is shown in
figure~\ref{fig:fsm-eoe}.
\begin{figure}[htbp]
\centering
- \includegraphics[width=.7\textwidth]{images/fsm-eoe}
- \caption{Transition diagram of the EoE state machine}
+ \includegraphics[width=.7\textwidth]{images/fsm-eoe} % FIXME
+ \caption{Transition Diagram of the EoE State Machine}
\label{fig:fsm-eoe}
\end{figure}
+% FIXME
+
\begin{description}
\item[RX\_START] The beginning state of the EoE state machine. A
mailbox check datagram is sent, to query the slave's mailbox for new
@@ -2524,95 +1869,74 @@
\paragraph{EoE Processing}
-To execute the EoE state machine of every active EoE handler, there
-must be a cyclic process. The easiest thing would be to execute the
-EoE state machines synchronously to the operation state machine (see
-section~\ref{sec:fsm-op}) with every realtime cycle. This approach has
-the following disadvantages:
-
-\begin{itemize}
-
-\item Only one EoE fragment can be sent or received every few cycles. This
+To execute the EoE state machine of every active EoE handler, there must be a
+cyclic process. The easiest solution would be to execute the EoE state
+machines synchronously with the master state machine (see
+sec.~\ref{sec:fsm-master}). This approach has the following disadvantage:
+
+Only one EoE fragment could be sent or received every few cycles, which
causes the data rate to be very low, because the EoE state machines are not
executed in the time between the application cycles. Moreover, the data rate
would be dependent on the period of the application task.
-\item The receiving and forwarding of frames to the kernel requires the dynamic
-allocation of frames. Some realtime extensions do not support calling memory
-allocation functions in realtime context, so the EoE state machine may not be
-executed with each application cycle.
-
-\end{itemize}
-
-To overcome these problems, an own cyclic process is needed to
-asynchronously execute the EoE state machines. For that, the master
-owns a kernel timer, that is executed each timer interrupt. This
-guarantees a constant bandwidth, but poses the new problem of
-concurrent access to the master. The locking mechanisms needed for
-this are introduced in section~\ref{sec:concurr}.
-Section~\ref{sec:concurrency} gives practical implementation examples.
-
-\paragraph{Idle phase}
-
-EoE data must also be exchanged in idle phase, to guarantee the continuous
-availability of the connection to the EoE-capable slaves. Although there is no
-application connected in this case, the master is still accessed by the master
-state machine (see section~\ref{sec:fsm-master}). With the EoE timer running in
-addition, there is still concurrency, that has to be protected by a lock.
-Therefore the master owns an internal spinlock that is used protect master
-access during idle phase.
+To overcome this problem, a separate cyclic process is needed to asynchronously
+execute the EoE state machines. For that, the master owns a kernel timer that
+is executed on each timer interrupt. This guarantees a constant bandwidth, but
+poses the new problem of concurrent access to the master. The locking
+mechanisms needed for this are introduced in sec.~\ref{sec:concurr}.
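+
+A simplified sketch of this timer pattern (ignoring the locking just
+mentioned; the helper function name is made up):
+
+\begin{lstlisting}[language=C]
+#include <linux/timer.h>
+
+static struct timer_list eoe_timer;
+
+static void eoe_run(unsigned long data)
+{
+    ec_master_t *master = (ec_master_t *) data;
+
+    ec_master_run_eoe_handlers(master); /* hypothetical helper */
+
+    /* Kernel timers are one-shot; re-arm for the next timer interrupt. */
+    eoe_timer.expires = jiffies + 1;
+    add_timer(&eoe_timer);
+}
+\end{lstlisting}
+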
\paragraph{Automatic Configuration}
-By default, slaves are left in INIT state during idle mode. If an EoE
-interface is set to running state (i.~e. with the \textit{ifconfig up}
-command), the requested slave state of the related slave is
-automatically set to OP, whereupon the idle state machine will attempt
-to configure the slave and put it into operation.
-
-%------------------------------------------------------------------------------
-
-\section{CANopen-over-EtherCAT (CoE)}
-\label{sec:coeimp}
+By default, slaves are left in PREOP state if no configuration is applied. If
+an EoE interface's link is set to ``up'', the requested application-layer
+state of the corresponding slave is automatically set to OP.
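+
+For example, assuming the first master and a slave with alias address 5 (the
+actual interface name depends on the bus, see above):
+
+\begin{lstlisting}
+# `\textbf{ip link set dev eoe0a5 up}`
+\end{lstlisting}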
+
+%------------------------------------------------------------------------------
+
+\section{CANopen over EtherCAT (CoE)}
+\label{sec:coe}
\index{CoE}
-The CANopen-over-EtherCAT protocol \cite[section~5.6]{alspec} is used
-to configure slaves on application level. Each CoE-capable slave
-provides a list of Sdos for this reason.
-
-\paragraph{Sdo Configuration}
-
-The Sdo configurations have to be provided by the application. This is done
-via the \textit{ecrt\_slave\_conf\_sdo*()} methods (see
-section~\ref{sec:ecrt-slave}), that are part of the realtime interface. The
-slave stores the Sdo configurations in a linked list, but does not apply them
-at once.
-
-\paragraph{Sdo Download State Machine}
-
-The best time to apply Sdo configurations is during the slave's PREOP
-state, because mailbox communication is already possible and slave's
-application will start with updating input data in the succeeding
-SAFEOP state. Therefore the Sdo configuration has to be part of the
-slave configuration state machine (see section~\ref{sec:fsm-conf}): It
-is implemented via an Sdo download state machine, that is executed
-just before entering the slave's SAFEOP state. In this way, it is
-guaranteed that the Sdo configurations are applied each time, the
-slave is reconfigured.
-
-The transition diagram of the Sdo Download state machine can be seen
+The CANopen over EtherCAT\nomenclature{CoE}{CANopen over EtherCAT, Mailbox
+Protocol} protocol~\cite[sec.~5.6]{alspec} is used to configure slaves and
+exchange data objects on application level.
+
+% TODO
+%
+% Download / Upload
+% Expedited / Normal
+% Segmenting
+% SDO Info Services
+%
+
+\ldots
+
+\paragraph{SDO Download State Machine}
+
+The best time to apply SDO configurations is during the slave's PREOP state,
+because mailbox communication is already possible and the slave's application
+will start updating input data in the succeeding SAFEOP state. Therefore the
+SDO configuration has to be part of the slave configuration state machine (see
+sec.~\ref{sec:fsm-conf}): It is implemented via an SDO download state machine
+that is executed just before entering the slave's SAFEOP state. In this way,
+it is guaranteed that the SDO configurations are applied each time the slave
+is reconfigured.
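+
+The SDO configurations themselves are provided by the application in advance.
+A minimal sketch, assuming the \lstinline+ecrt_slave_config_sdo*()+ calls of
+the application interface and arbitrary example numbers for the slave position
+and the object to write:
+
+\begin{lstlisting}[language=C]
+ec_slave_config_t *sc;
+
+/* Get/create the configuration for the slave at alias 0, position 3. */
+if (!(sc = ecrt_master_slave_config(master, 0, 3,
+                                    vendor_id, product_code)))
+    return -1;
+
+/* Request an SDO configuration; it is downloaded by the state machine
+ * each time the slave is (re-)configured. */
+if (ecrt_slave_config_sdo16(sc, 0x8000, 0x01, 1))
+    return -1;
+\end{lstlisting}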
+
+The transition diagram of the SDO Download state machine can be seen
in figure~\ref{fig:fsm-coedown}.
\begin{figure}[htbp]
\centering
- \includegraphics[width=.9\textwidth]{images/fsm-coedown}
+ \includegraphics[width=.9\textwidth]{images/fsm-coedown} % FIXME
\caption{Transition diagram of the CoE download state machine}
\label{fig:fsm-coedown}
\end{figure}
+% FIXME
+
\begin{description}
\item[START] The beginning state of the CoE download state
- machine. The ``Sdo Download Normal Request'' mailbox command is
+ machine. The ``SDO Download Normal Request'' mailbox command is
sent. $\rightarrow$~REQUEST
\item[REQUEST] It is checked, if the CoE download request has been
@@ -2621,7 +1945,7 @@
\item[CHECK] If no mailbox data is available, the timer is checked.
\begin{itemize}
- \item If it timed out, the Sdo download is aborted.
+ \item If it timed out, the SDO download is aborted.
$\rightarrow$~ERROR
\item Otherwise, the mailbox is queried again.
$\rightarrow$~CHECK
@@ -2631,117 +1955,119 @@
$\rightarrow$~RESPONSE
\item[RESPONSE] If the mailbox response could not be fetched, the data
- is invalid, the wrong protocol was received, or a ``Abort Sdo
- Transfer Request'' was received, the Sdo download is aborted.
+ is invalid, the wrong protocol was received, or an ``Abort SDO
+ Transfer Request'' was received, the SDO download is aborted.
$\rightarrow$~ERROR
- If a ``Sdo Download Normal Response'' acknowledgement was received,
- the Sdo download was successful. $\rightarrow$~END
-
-\item[END] The Sdo download was successful.
-
-\item[ERROR] The Sdo download was aborted due to an error.
+ If a ``SDO Download Normal Response'' acknowledgement was received,
+ the SDO download was successful. $\rightarrow$~END
+
+\item[END] The SDO download was successful.
+
+\item[ERROR] The SDO download was aborted due to an error.
\end{description}
%------------------------------------------------------------------------------
-\chapter{User Space}
+\chapter{Userspace Interfaces}
\label{sec:user}
-\index{User space}
-
-For the master runs as a kernel module, accessing it is natively
-limited to analyzing Syslog messages and controlling using modutils.
-
-It is necessary to implement further interfaces, that make it easier
-to access the master from user space and allow a finer influence. It
-should be possible to view and to change special parameters at runtime.
-
-Bus visualization is a second point: For development and debugging
-purposes it would be nice, if one could show the connected slaves with
-a single command.
-
-Another aspect is automatic startup and configuration. If the master
-is to be integrated into a running system, it must be able to
-automatically start with a persistent configuration.
-
-A last thing is monitoring EtherCAT communication. For debugging
-purposes, there had to be a way to analyze EtherCAT datagrams. The
-best way would be with a popular network analyzer, like Wireshark
-\cite{wireshark} (the former Ethereal) or others.
-
-This section covers all those points and introduces the interfaces and
-tools to make all that possible.
+\index{Userspace}
+
+Because the master runs as a kernel module, access to it is natively limited
+to analyzing Syslog messages and controlling it via \textit{modutils}.
+
+It was necessary to implement further interfaces that make it easier to access
+the master from userspace and allow finer-grained control. It should be
+possible to view and change specific parameters at runtime.
+
+Bus visualization is another point: For development and debugging purposes it
+must be possible, for instance, to show the connected slaves with a single
+command (see sec.~\ref{sec:tool}).
+
+Another aspect is automatic startup and configuration. The master must be able
+to automatically start up with a persistent configuration (see
+sec.~\ref{sec:system}).
+
+The last point is monitoring EtherCAT communication. For debugging purposes,
+there has to be a way to analyze EtherCAT datagrams, preferably with a popular
+network analyzer like Wireshark~\cite{wireshark} (formerly Ethereal) or others
+(see sec.~\ref{sec:debug}).
+
+This chapter covers all these points and introduces the interfaces and tools
+to make all that possible.
%------------------------------------------------------------------------------
\section{Command-line Tool}
-\label{sec:ethercat}
-
-% --master
-
-\subsection{Character devices}
+\label{sec:tool}
+
+% TODO --master
+
+\subsection{Character Devices}
\label{sec:cdev}
-Each master instance will get a character device as a user-space interface.
-The devices are named \textit{/dev/EtherCATX}, where $X$ is the index of the
-master.
-
-% FIXME
-% udev
-% rights
-
-%------------------------------------------------------------------------------
-
-\subsection{Setting alias addresses}
+Each master instance will get a character device as a userspace interface.
+The devices are named \textit{/dev/EtherCATx}, where $x \in \{0 \ldots n\}$ is
+the index of the master.
+
+\paragraph{Device Node Creation} The character device nodes are automatically
+created if the \lstinline+udev+ package is installed. See
+sec.~\ref{sec:autonode} for how to install and configure it.
+
+%------------------------------------------------------------------------------
+
+\subsection{Setting Alias Addresses}
+\label{sec:ethercat-alias}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_alias}
%------------------------------------------------------------------------------
-\subsection{Displaying the bus configuration}
+\subsection{Displaying the Bus Configuration}
+\label{sec:ethercat-config}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_config}
%------------------------------------------------------------------------------
-\subsection{Displaying process data}
+\subsection{Displaying Process Data}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_data}
%------------------------------------------------------------------------------
-\subsection{Setting a master's debug level}
+\subsection{Setting a Master's Debug Level}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_debug}
%------------------------------------------------------------------------------
-\subsection{Configured domains}
+\subsection{Configured Domains}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_domains}
%------------------------------------------------------------------------------
-\subsection{Master and Ethernet device information}
+\subsection{Master and Ethernet Devices}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_master}
%------------------------------------------------------------------------------
-\subsection{Showing slaves' sync managers, Pdos and Pdo entries}
+\subsection{Sync Managers, PDOs and PDO Entries}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_pdos}
%------------------------------------------------------------------------------
-\subsection{Displaying the Sdo dictionary}
+\subsection{SDO Dictionary}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_sdos}
%------------------------------------------------------------------------------
-\subsection{Sdo access}
+\subsection{SDO Access}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_download}
@@ -2749,7 +2075,7 @@
%------------------------------------------------------------------------------
-\subsection{Displaying slaves on the bus}
+\subsection{Slaves on the Bus}
Slave information can be gathered with the subcommand \lstinline+slaves+:
@@ -2783,8 +2109,8 @@
\item Some SII data fields have to be altered (like the alias address). Quick
write access must be possible for that.
-\item Through reading access, analyzing category data is possible from user
-space.
+\item Through reading access, analyzing category data is possible from
+userspace.
\end{itemize}
@@ -2794,7 +2120,7 @@
binary format, analysis is easier with a tool like \textit{hexdump}:
\begin{lstlisting}
-$ `\textbf{ethercat sii\_read --slave 3 | hexdump}`
+$ `\textbf{ethercat sii\_read --position 3 | hexdump}`
0000000 0103 0000 0000 0000 0000 0000 0000 008c
0000010 0002 0000 3052 07f0 0000 0000 0000 0000
0000020 0000 0000 0000 0000 0000 0000 0000 0000
@@ -2804,16 +2130,16 @@
Backing up SII contents can easily done with a redirection:
\begin{lstlisting}
-$ `\textbf{ethercat sii\_read --slave 3 > sii-of-slave3.bin}`
+$ `\textbf{ethercat sii\_read --position 3 > sii-of-slave3.bin}`
\end{lstlisting}
To download SII contents to a slave, writing access to the master's character
-device is necessary (see section~\ref{sec:cdev}).
+device is necessary (see sec.~\ref{sec:cdev}).
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_sii_write}
\begin{lstlisting}
-# `\textbf{ethercat sii\_write --slave 3 sii-of-slave3.bin}`
+# `\textbf{ethercat sii\_write --position 3 sii-of-slave3.bin}`
\end{lstlisting}
The SII contents will be checked for validity and then sent to the slave. The
@@ -2821,13 +2147,13 @@
%------------------------------------------------------------------------------
-\subsection{Requesting application-layer states}
+\subsection{Requesting Application-Layer States}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_states}
%------------------------------------------------------------------------------
-\subsection{Generating slave description XML}
+\subsection{Generating Slave Description XML}
\lstinputlisting[basicstyle=\ttfamily\footnotesize]{external/ethercat_xml}
@@ -2845,21 +2171,21 @@
The EtherCAT master init script conforms to the requirements of the ``Linux
Standard Base'' (LSB\index{LSB}, \cite{lsb}). The script is installed to
-\textit{etc/init.d/ethercat} below the installation prefix and has to be copied
-(or better: linked) to the appropriate location (see
-section~\ref{sec:install}), before the master can be inserted as a service.
+\textit{etc/init.d/ethercat} below the installation prefix and has to be
+copied (or better: linked) to the appropriate location (see
+sec.~\ref{sec:installation}), before the master can be inserted as a service.
Please note that the init script depends on the sysconfig file described
below.
-To provide service dependencies (i.~e. which services have to be started before
-others) inside the init script code, LSB defines a special comment block.
-System tools can extract this information to insert the EtherCAT init script at
-the correct place in the startup sequence:
+To provide service dependencies (i.\,e.\ which services have to be started
+before others) inside the init script code, LSB defines a special comment
+block. System tools can extract this information to insert the EtherCAT init
+script at the correct place in the startup sequence:
\lstinputlisting[firstline=38,lastline=48]
{../script/init.d/ethercat}
-\subsection{Sysconfig}
+\subsection{Sysconfig File}
\label{sec:sysconfig}
\index{Sysconfig file}
@@ -2872,7 +2198,7 @@
\lstinputlisting[numbers=left,firstline=9,basicstyle=\ttfamily\scriptsize]
{../script/sysconfig/ethercat}
-\subsection{Service}
+\subsection{Starting the Master as a Service}
\label{sec:service}
\index{Service}
@@ -2898,27 +2224,55 @@
%------------------------------------------------------------------------------
-\section{Monitoring and Debugging}
+\section{Debug Interfaces}
\label{sec:debug}
-\index{Monitoring}
-
-For debugging purposes, every EtherCAT master registers a read-only network
-interface \textit{ecX}, where X is a number, provided by the kernel on device
-registration. While it is ``up'', the master forwards every frame sent and
-received to this interface.
-
-This makes it possible to connect an network monitor (like Wireshark or
-tcpdump) to the debug interface and monitor the EtherCAT frames.
-
-% FIXME schedule()
-It has to be considered, that can be frame rate can be very high. The master
-state machine usually runs every kernel timer interrupt (usually up to
-\unit{1}{\kilo\hertz}) and with a connected application, the rate can be even
-higher.
-
-\paragraph{Attention:} The socket buffers needed for the operation of
-the debugging interface have to be allocated dynamically. Some Linux
-realtime extensions do not allow this in realtime context!
+\index{Debug Interfaces}
+
+EtherCAT buses can always be monitored by inserting a switch between master
+and slaves. This allows, for example, connecting another PC running a network
+monitor like Wireshark~\cite{wireshark}.
+
+For convenience, so-called ``debug interfaces'' are supported. Debug
+interfaces are virtual network interfaces that allow capturing EtherCAT
+traffic with a network monitor (like Wireshark or tcpdump) running on the
+master machine, without using external hardware. To use this functionality,
+the master sources have to be configured with the
+\lstinline+--enable-debug-if+ switch (see sec.~\ref{sec:installation}).
+
+Every EtherCAT master registers two read-only network interfaces,
+corresponding to the physical Ethernet devices. These are
+named \textit{ecdbgmX} (main device) and \textit{ecdbgbX} (backup device, for
+future use), where X is the master index. The below listing shows debug
+interfaces among some standard network interfaces:
+
+\begin{lstlisting}
+# `\textbf{ip link}`
+1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+4: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
+ link/ether 00:04:61:03:d1:01 brd ff:ff:ff:ff:ff:ff
+8: ecdbgm0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast
+ qlen 1000
+ link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
+9: ecdbgb0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
+ link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
+\end{lstlisting}
+
+While a debug interface is enabled, all frames sent to or received from the
+physical device are additionally forwarded to the debug interface by the
+corresponding master. A debug interface can be enabled with the below
+command:
+
+\begin{lstlisting}
+# `\textbf{ip link set dev ecdbgm0 up}`
+\end{lstlisting}
+
+Please note that the frame rate can be very high. With an application
+connected, the debug interface can produce thousands of frames per second.
+
+\paragraph{Attention} The socket buffers needed for the operation of debug
+interfaces have to be allocated dynamically. Some Linux realtime extensions do
+not allow this in realtime context!
%------------------------------------------------------------------------------
@@ -2930,73 +2284,74 @@
%------------------------------------------------------------------------------
-\subsection{Realtime Interface Profiling}
-\label{sec:timing-profile}
-\index{Realtime!Profiling}
+\section{Application Interface Profiling}
+\label{sec:profiling}
+\index{Profiling}
Among the most important timing aspects are the execution times of the
-realtime interface functions, that are called in cyclic context. These
+application interface functions that are called in cyclic context. These
functions make up an important part of the overall timing of the application.
-To measure the timing of the functions, the following code was used:
-
-\begin{lstlisting}[gobble=2,language=C]
- c0 = get_cycles();
- ecrt_master_receive(master);
- c1 = get_cycles();
- ecrt_domain_process(domain1);
- c2 = get_cycles();
- ecrt_master_run(master);
- c3 = get_cycles();
- ecrt_master_send(master);
- c4 = get_cycles();
+To measure the timing of the functions, the below cyclic code was used:
+
+\begin{lstlisting}[language=C]
+c0 = get_cycles();
+ecrt_master_receive(master);
+c1 = get_cycles();
+ecrt_domain_process(domain1);
+c2 = get_cycles();
+ecrt_domain_queue(domain1);
+c3 = get_cycles();
+ecrt_master_send(master);
+c4 = get_cycles();
\end{lstlisting}
Between each call of an interface function, the CPU timestamp counter is read.
-The counter differences are converted to \micro\second\ with help of the
-\lstinline+cpu_khz+ variable, that contains the number of increments per
-\milli\second.
-
-For the actual measuring, a system with a \unit{2.0}{\giga\hertz} CPU was used,
-that ran the above code in an RTAI thread with a period of
-\unit{100}{\micro\second}. The measuring was repeated $n = 100$ times and the
-results were averaged. These can be seen in table~\ref{tab:profile}.
+The counter differences are converted to \micro\second\ via the
+\lstinline+cpu_khz+ variable, which contains the number of counter increments
+per \milli\second\ of the IA32 architecture's timestamp counter.
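+
+A sketch of the conversion (since \lstinline+cpu_khz+ holds increments per
+millisecond, increments per microsecond are \lstinline+cpu_khz+ / 1000):
+
+\begin{lstlisting}[language=C]
+/* Duration of ecrt_master_receive() in microseconds. */
+unsigned long duration_us = (unsigned long) (c1 - c0) * 1000 / cpu_khz;
+\end{lstlisting}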
+
+For the actual measurement, a system with a \unit{2.0}{\giga\hertz} CPU was
+used, that ran the above code in an RTAI thread with a period of
+\unit{1}{\milli\second}. The measurement was repeated $n = 10000$ times and
+the results were averaged. These can be seen in table~\ref{tab:profile}.
\begin{table}[htpb]
\centering
- \caption{Profiling of a Realtime Cycle on a \unit{2.0}{\giga\hertz}
- Processor}
+ \caption{Application Cycle on a \unit{2.0}{\giga\hertz} Processor}
\label{tab:profile}
\vspace{2mm}
\begin{tabular}{l|r|r}
- Element & Mean Duration [\second] & Standard Deviancy [\micro\second] \\
+
+ Function &
+ $\mu(\Delta t)$ [\micro\second] &
+ $\sigma(\Delta t)$ [\micro\second] \\
\hline
- \textit{ecrt\_master\_receive()} & 8.04 & 0.48\\
- \textit{ecrt\_domain\_process()} & 0.14 & 0.03\\
- \textit{ecrt\_master\_run()} & 0.29 & 0.12\\
- \textit{ecrt\_master\_send()} & 2.18 & 0.17\\ \hline
- Complete Cycle & 10.65 & 0.69\\ \hline
+
+ \lstinline+ecrt_master_receive()+ & 6.13 & 1.11\\
+
+ \lstinline+ecrt_domain_process()+ & $<$ 0.01 & 0.07\\
+
+ \lstinline+ecrt_domain_queue()+ & $<$ 0.01 & 0.17\\
+
+ \lstinline+ecrt_master_send()+ & 1.15 & 0.65\\ \hline
+
+ Complete Cycle & 7.28 & 1.31\\ \hline
+
\end{tabular}
\end{table}
-It is obvious, that the functions accessing hardware make up the
-lion's share. The \textit{ec\_master\_receive()} executes the ISR of
-the Ethernet device, analyzes datagrams and copies their contents into
-the memory of the datagram objects. The \textit{ec\_master\_send()}
-assembles a frame out of different datagrams and copies it to the
-hardware buffers. Interestingly, this makes up only a quarter of the
-receiving time.
-
-The functions that only operate on the masters internal data structures are
-very fast ($\Delta t < \unit{1}{\micro\second}$). Interestingly the runtime of
-\textit{ec\_domain\_process()} has a small standard deviancy relative to the
-mean value, while this ratio is about twice as big for
-\textit{ec\_master\_run()}: This probably results from the latter function
-having to execute code depending on the current state and the different state
-functions are more or less complex.
-
-For a realtime cycle makes up about \unit{10}{\micro\second}, the theoretical
-frequency can be up to \unit{100}{\kilo\hertz}. For two reasons, this frequency
-keeps being theoretical:
+It is obvious that the functions accessing hardware make up the lion's share.
+The \lstinline+ecrt_master_receive()+ function executes the ISR of the
+Ethernet device driver, dissects the received frame and copies the datagram
+contents into the memory of the corresponding datagram objects. The
+\lstinline+ecrt_master_send()+ function assembles a frame from different
+datagrams and copies it to the hardware buffers. The functions that only
+operate on the master's internal data structures are very fast
+($\Delta t < \unit{1}{\micro\second}$).
+
+Since a realtime cycle takes about \unit{10}{\micro\second}, the resulting
+theoretical frequency could be up to $1 / \unit{10}{\micro\second} =
+\unit{100}{\kilo\hertz}$. For two reasons, this frequency remains
+theoretical:
\begin{enumerate}
@@ -3005,58 +2360,57 @@
\item The EtherCAT frame must be sent and received, before the next realtime
cycle begins. The determination of the bus cycle time is difficult and covered
-in section~\ref{sec:timing-bus}.
+in sec.~\ref{sec:timing-bus}.
\end{enumerate}
%------------------------------------------------------------------------------
-\subsection{Bus Cycle Measuring}
+\section{Bus Cycle Measurement}
\label{sec:timing-bus}
\index{Bus cycle}
-For measuring the time, a frame is ``on the wire'', two timestamps
-must be be taken:
+To measure the time a frame is ``on the wire'', two timestamps must be
+taken:
\begin{enumerate}
-\item The time, the Ethernet hardware begins with physically sending
- the frame.
-\item The time, the frame is completely received by the Ethernet
- hardware.
+
+\item The time at which the Ethernet hardware begins to physically send the
+frame.
+
+\item The time at which the frame has been completely received by the Ethernet
+hardware.
+
\end{enumerate}
Both times are difficult to determine. The first reason is that the
-interrupts are disabled and the master is not notified, when a frame
-is sent or received (polling would distort the results). The second
-reason is, that even with interrupts enabled, the time from the event
-to the notification is unknown. Therefore the only way to confidently
-determine the bus cycle time is an electrical measuring.
-
-Anyway, the bus cycle time is an important factor when designing realtime code,
-because it limits the maximum frequency for the cyclic task of the application.
-In practice, these timing parameters are highly dependent on the hardware and
+interrupts are disabled and the master is not notified when a frame is sent
+or received (polling would distort the results). The second reason is that,
+even with interrupts enabled, the interrupt latency (i.\,e.\ the time from the
+event to the notification) is unknown. Therefore the only way to confidently
+determine the bus cycle time is an electrical measurement.
+
+In any case, the bus cycle time is an important factor when designing realtime
+applications, because it limits the maximum frequency for the cyclic task. In
+practice, these timing parameters are highly dependent on the hardware and
often a trial and error method must be used to determine the limits of the
system.
-The central question is: What happens, if the cycle frequency is too high? The
-answer is, that the EtherCAT frames that have been sent at the end of the cycle
-are not yet received, when the next cycle starts. First this is noticed by
-\textit{ecrt\_domain\_process()}, because the working counter of the process
-data datagrams were not increased. The function will notify the user via
+An essential question is: What happens if the cycle frequency is too high?
+The EtherCAT frames that have been sent at the end of the cycle might not yet
+have been received when the next cycle starts. This is first noticed by the
+domain, because the working counters of the datagrams are zero. This can be
+queried in realtime context via the application interface and is output via
Syslog\footnote{To limit Syslog output, a mechanism has been implemented that
-outputs a summarized notification at maximum once a second.}. In this case, the
-process data keeps being the same as in the last cycle, because it is not
+outputs a summarized notification at most once a second.}. In this case,
+the process data remains the same as in the last cycle, because it is not
erased by the domain. When the domain datagrams are queued again, the master
notices that they are already queued (and marked as sent). The master will
mark them as unsent again and output a warning that datagrams were
``skipped''.
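+
+A minimal sketch of such a check in the cyclic realtime code, assuming the
+\lstinline+ecrt_domain_state()+ call of the application interface:
+
+\begin{lstlisting}[language=C]
+ec_domain_state_t ds;
+
+ecrt_domain_state(domain1, &ds);
+if (ds.wc_state != EC_WC_COMPLETE)
+    printk(KERN_WARNING "Incomplete process data exchange (WC %u).\n",
+           ds.working_counter);
+\end{lstlisting}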
On the mentioned \unit{2.0}{\giga\hertz} system, the possible cycle frequency
-can be up to \unit{25}{\kilo\hertz} without skipped frames. This value can
-surely be increased by choosing faster hardware. Especially the RealTek network
-hardware could be replaced by a faster one. Besides, implementing a dedicated
-ISR for EtherCAT devices would also contribute to increasing the latency. These
-are two points on the author's to-do list.
+can be up to \unit{25}{\kilo\hertz} without skipped frames. This value is
+highly dependent on the chosen hardware.
%------------------------------------------------------------------------------
@@ -3064,24 +2418,23 @@
\label{sec:installation}
\index{Master!Installation}
-\section{Building the software}
+\section{Building the Software}
The current EtherCAT master code is available at~\cite{etherlab} or can be
obtained from the EtherLab CD. The \textit{tar.bz2} file has to be unpacked
with the commands below (or similar):
\begin{lstlisting}[gobble=2]
- `\$` `\textbf{tar xjf ethercat-\masterversion.tar.bz2}`
- `\$` `\textbf{cd ethercat-\masterversion/}`
+ $ `\textbf{tar xjf ethercat-\masterversion.tar.bz2}`
+ $ `\textbf{cd ethercat-\masterversion/}`
\end{lstlisting}
The tarball was created with GNU Autotools, so the build process
follows the below commands:
\begin{lstlisting}[gobble=2]
- `\$` `\textbf{./configure}`
- `\$` `\textbf{make}`
- `\$` `\textbf{make modules}`
+ $ `\textbf{./configure}`
+ $ `\textbf{make all modules}`
\end{lstlisting}
Table~\ref{tab:config} lists important configuration switches and options.
@@ -3117,22 +2470,10 @@
\lstinline+--with-8139too-kernel+ & 8139too kernel & $\dagger$\\
-\lstinline+--enable-e100+ & Build the e100 driver & no\\
-
-\lstinline+--with-e100-kernel+ & e100 kernel & $\dagger$\\
-
-\lstinline+--enable-forcedeth+ & Enable forcedeth driver & no\\
-
-\lstinline+--with-forcedeth-kernel+ & forcedeth kernel & $\dagger$\\
-
\lstinline+--enable-e1000+ & Enable e1000 driver & no\\
\lstinline+--with-e1000-kernel+ & e1000 kernel & $\dagger$\\
-\lstinline+--enable-r8169+ & Enable r8169 driver & no\\
-
-\lstinline+--with-r8169-kernel+ & r8169 kernel & $\dagger$\\
-
\end{tabular}
\vspace{2mm}
@@ -3145,31 +2486,35 @@
\end{table}
-\section{Building the documentation}
+\section{Building the Interface Documentation}
\label{sec:gendoc}
The source code is documented using Doxygen~\cite{doxygen}. To build the HTML
-documentation, you must have the Doxygen software installed. The below command
+documentation, the Doxygen software has to be installed. The below command
will generate the documents in the subdirectory \textit{doxygen-output}:
\begin{lstlisting}
$ `\textbf{make doc}`
\end{lstlisting}
-To view them, point your browser to \textit{doxygen-output/html/index.html}.
-
-\section{Installation}
-
-The below commands have to be entered as \textit{root}: The first one will
-install the EtherCAT header, init script, sysconfig file and the user space
-tools to the prefix path. The second one will install the kernel modules to the
-kernel's modules directory. The following \lstinline+depmod+ call is necessary
-to include the kernel modules into the \textit{modules.dep} file to make it
-available to the \lstinline+modprobe+ command, used in the init script.
+The interface documentation can be viewed by pointing a browser to the file
+\textit{doxygen-output/html/index.html}. The functions and data structures of
+the application interface are covered by a dedicated module ``Application
+Interface''.
+
+\section{Installing the Software}
+
+The below commands have to be entered as \textit{root}: The
+\lstinline+install+ target will install the EtherCAT header, init script,
+sysconfig file and the userspace tool to the prefix path. The
+\lstinline+modules_install+ target will install the kernel modules to the
+kernel's modules directory. The final \lstinline+depmod+ call is necessary to
+include the kernel modules into the \textit{modules.dep} file to make it
+available to the \lstinline+modprobe+ command, that is used in the init
+script.
\begin{lstlisting}
-# `\textbf{make install}`
-# `\textbf{make modules\_install}`
+# `\textbf{make install modules\_install}`
# `\textbf{depmod}`
\end{lstlisting}
@@ -3186,7 +2531,7 @@
If the EtherCAT master shall be run as a service\footnote{Even if the EtherCAT
master shall not be loaded on system startup, the use of the init script is
-recommended for manual (un-)loading.} (see section~\ref{sec:system}), the init
+recommended for manual (un-)loading.} (see sec.~\ref{sec:system}), the init
script and the sysconfig file have to be copied (or linked) to the appropriate
locations. The below example is suitable for SUSE Linux. It may vary for other
distributions.
@@ -3200,21 +2545,21 @@
\end{lstlisting}
Now the sysconfig file \texttt{/etc/sysconfig/ethercat} (see
-section~\ref{sec:sysconfig}) has to be customized. The minimal customization
-is to set the \lstinline+MASTER0_DEVICE+ variable to the MAC address of the
+sec.~\ref{sec:sysconfig}) has to be customized. The minimal customization is
+to set the \lstinline+MASTER0_DEVICE+ variable to the MAC address of the
Ethernet device to use (or \lstinline+ff:ff:ff:ff:ff:ff+ to use the first
device offered) and selecting the driver(s) to load via the
\lstinline+DEVICE_MODULES+ variable.
-After the basic configuration is done, the master can be started with
-the below command:
+After the basic configuration is done, the master can be started with the
+below command:
\begin{lstlisting}
# `\textbf{/etc/init.d/ethercat start}`
\end{lstlisting}
-The operation of the master can be observed by looking at the
-Syslog\index{Syslog} messages, which should look like the ones below. If
+At this time, the operation of the master can be observed by viewing the
+Syslog\index{Syslog} messages, which should look like the ones below. If
EtherCAT slaves are connected to the master's EtherCAT device, the activity
indicators should begin to flash.
@@ -3253,622 +2598,41 @@
\end{description}
-%------------------------------------------------------------------------------
-
-\chapter{Application examples}
-\label{chapter:examples}
-
-This chapter will give practical examples of how to use the EtherCAT master via
-the realtime interface by writing an application module.
-
-%------------------------------------------------------------------------------
-
-\section{Minimal Example}
-\label{sec:mini}
-\index{Examples!Minimal}
-
-This section will explain the use of the EtherCAT master from a minimal kernel
-module. The complete module code is obtainable as a part of the EtherCAT master
-code release (see~\cite{etherlab}, file \textit{examples/mini/mini.c}).
-
-The minimal example uses a kernel timer (software interrupt) to generate a
-cyclic task. After the timer function is executed, it re-adds itself with a
-delay of one \textit{jiffy}\index{jiffies}, which results in a timer frequency
-of \textit{HZ}\nomenclature{HZ}{Kernel macro containing the timer interrupt
-frequency}
-
-The module-global variables, needed to operate the master can be seen
-in listing~\ref{lst:minivar}.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Minimal
- variables},label=lst:minivar]
- struct timer_list timer;
-
- ec_master_t *master = NULL;
- ec_domain_t *domain1 = NULL;
-
- void *r_dig_in, *r_ana_out;
-
- ec_pdo_reg_t domain1_pdos[] = {
- {"1", Beckhoff_EL1014_Inputs, &r_dig_in},
- {"2", Beckhoff_EL4132_Ouput1, &r_ana_out},
- {}
- };
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{1}] There is a timer object
- declared, that is needed to tell the kernel to install a timer and
- execute a certain function, if it runs out. This is done by a
- variable of the \textit{timer\_list} structure.
-\item[\linenum{3} -- \linenum{4}] There
- is a pointer declared, that will later point to a requested EtherCAT
- master. Additionally there is a pointer to a domain object needed,
- that will manage process data IO.
-\item[\linenum{6}] The pointers \textit{r\_*}
- will later point to the \underline{r}aw process data values inside
- the domain memory. The addresses they point to will be set during a
- call to \textit{ec\_\-master\_\-activate()}, that will create the
- domain memory and configure the mapped process data image.
-\item[\linenum{8} -- \linenum{12}] The
- configuration of the mapping of certain Pdos in a domain can easily
- be done with the help of an initialization array of the
- \textit{ec\_pdo\_reg\_t} type, defined as part of the realtime
- interface. Each record must contain the ASCII bus-address of the
- slave (see section~\ref{sec:addr}), the slave's vendor ID and
- product code, and the index and subindex of the Pdo to map (these
- four fields can be specified in junction, by using one of the
- defines out of the \textit{include/ecdb.h} header). The last field
- has to be the address of the process data pointer, so it can later
- be redirected appropriately. Attention: The initialization array
- must end with an empty record (\textit{\{\}})!
-\end{description}
-
-The initialization of the minimal application is done by the ``Minimal init
-function'' in listing~\ref{lst:miniinit}.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Minimal init
- function},label={lst:miniinit}]
- int __init init_mini_module(void)
- {
- if (!(master = ecrt_request_master(0))) {
- goto out_return;
- }
-
- if (!(domain1 = ecrt_master_create_domain(master))) {
- goto out_release_master;
- }
-
- if (ecrt_domain_register_pdo_list(domain1,
- domain1_pdos)) {
- goto out_release_master;
- }
-
- if (ecrt_master_activate(master)) {
- goto out_release_master;
- }
-
- ecrt_master_prepare(master);
-
- init_timer(&timer);
- timer.function = run;
- timer.expires = jiffies + 10;
- add_timer(&timer);
-
- return 0;
-
- out_release_master:
- ecrt_release_master(master);
- out_return:
- return -1;
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{3}] It is tried to request the
- first EtherCAT master (index 0). On success, the
- \textit{ecrt\_\-request\_\-master()} function returns a pointer to
- the reserved master, that can be used as an object to following
- functions calls. On failure, the function returns \textit{NULL}.
-\item[\linenum{7}] In order to exchange process
- data, a domain object has to be created. The
- \textit{ecrt\_\-master\_\-create\_domain()} function also returns a
- pointer to the created domain, or \textit{NULL} in error case.
-\item[\linenum{11}] The registration of domain
- Pdos with an initialization array results in a single function call.
- Alternatively the data fields could be registered with individual
- calls of \textit{ecrt\_domain\_register\_pdo()}.
-\item[\linenum{16}] After the configuration of
- process data mapping, the master can be activated for cyclic
- operation. This will configure all slaves and bring them into
- OP state.
-\item[\linenum{20}] This call is needed to avoid
- a case differentiation in cyclic operation: The first operation in
- cyclic mode is a receive call. Due to the fact, that there is
- nothing to receive during the first cycle, there had to be an
- \textit{if}-statement to avoid a warning. A call to
- \textit{ec\_master\_prepare()} sends a first datagram containing a
- process data exchange datagram, so that the first receive call will
- not fail.
-\item[\linenum{22} -- \linenum{25}] The
- master is now ready for cyclic operation. The kernel timer that
- cyclically executes the \textit{run()} function is initialized and
- started.
-\end{description}
-
-The coding of a cleanup function fo the minimal module can be seen in
-listing~\ref{lst:miniclean}.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Minimal cleanup
- function},label={lst:miniclean}]
- void __exit cleanup_mini_module(void)
- {
- del_timer_sync(&timer);
- ecrt_master_deactivate(master);
- ecrt_release_master(master);
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{3}] To cleanup the module, it it
- necessary to stop the cyclic processing. This is done by a call to
- \textit{del\_timer\_sync()} which safely removes a queued timer
- object. It is assured, that no cyclic work will be done after this
- call returns.
-\item[\linenum{4}] This call deactivates the
- master, which results in all slaves being brought to their INIT
- state again.
-\item[\linenum{5}] This call releases the master,
- removes any existing configuration and silently starts the idle
- mode. The value of the master pointer is invalid after this call and
- the module can be safely unloaded.
-\end{description}
-
-The final part of the minimal module is that for the cyclic work. Its
-coding can be seen in listing~\ref{lst:minirun}.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Minimal cyclic
- function},label={lst:minirun}]
- void run(unsigned long data)
- {
- static uint8_t dig_in_0;
-
- ecrt_master_receive(master);
- ecrt_domain_process(domain1);
-
- dig_in_0 = EC_READ_BIT(r_dig_in, 0);
- EC_WRITE_S16(r_ana_out, dig_in_0 * 0x3FFF);
-
- ecrt_master_run(master);
- ecrt_master_send(master);
-
- timer.expires += 1; // frequency = HZ
- add_timer(&timer);
- }
-\end{lstlisting}
-
-\begin{description}
-
-\item[\linenum{5}] The cyclic processing starts with receiving datagrams, that
-were sent in the last cycle. The frames containing these datagrams have to be
-received by the network interface card prior to this call.
-
-\item[\linenum{6}] The process data of domain 1 has been automatically copied
-into domain memory while datagram reception. This call checks the working
-counter for changes and re-queues the domain's datagram for sending.
-
-\item[\linenum{8}] This is an example for reading out a bit-oriented process
-data value (i.~e. bit 0) via the \textit{EC\_READ\_BIT()} macro. See
-section~\ref{sec:macros} for more information about those macros.
-
-\item[\linenum{9}] This line shows how to write a signed, 16-bit process data
-value. In this case, the slave is able to output voltages of
-\unit{-10--+10}{\volt} with a resolution of \unit{16}{bit}. This write command
-outputs either \unit{0}{\volt} or \unit{+5}{\volt}, depending of the value of
-\textit{dig\_in\_0}.
-
-\item[\linenum{11}] This call runs the master's operation state machine (see
-section~\ref{sec:fsm-op}). A single state is processed, and datagrams are
-queued. Mainly bus observation is done: The bus state is determined and in case
-of slaves that lost their configuration, reconfiguration is tried.
-
-\item[\linenum{12}] This method sends all queued datagrams, in this case the
-domain's datagram and one of the master state machine. In best case, all
-datagrams fit into one frame.
-
-\item[\linenum{14} -- \linenum{15}] Kernel timers are implemented as
-``one-shot'' timers, so they have to be re-added after each execution. The time
-of the next execution is specified in \textit{jiffies} and will happen at the
-time of the next system timer interrupt. This results in the \textit{run()}
-function being executed with a frequency of \textit{HZ}.
-
-\end{description}
-
-%------------------------------------------------------------------------------
-
-\section{RTAI Example}
-\label{sec:rtai}
-\index{Examples!RTAI}
-
-The whole code can be seen in the EtherCAT master code release
-(see~\cite{etherlab}, file \textit{examples/rtai/rtai\_sample.c}).
-
-Listing~\ref{lst:rtaivar} shows the defines and global variables
-needed for a minimal RTAI module with EtherCAT processing.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI task
- declaration},label={lst:rtaivar}]
- #define FREQUENCY 10000
- #define TIMERTICKS (1000000000 / FREQUENCY)
-
- RT_TASK task;
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{1} -- \linenum{2}] RTAI
- takes the cycle period as nanoseconds, so the easiest way is to
- define a frequency and convert it to a cycle time in nanoseconds.
-\item[\linenum{4}] The \textit{task} variable
- later contains information about the running RTAI task.
-\end{description}
-
-Listing~\ref{lst:rtaiinit} shows the module init function for the RTAI
-module. Most lines are the same as in listing~\ref{lst:miniinit},
-differences come up when starting the cyclic code.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI module init
- function},label={lst:rtaiinit}]
- int __init init_mod(void)
- {
- RTIME requested_ticks, tick_period, now;
-
- if (!(master = ecrt_request_master(0))) {
- goto out_return;
- }
-
- if (!(domain1 = ecrt_master_create_domain(master))) {
- goto out_release_master;
- }
-
- if (ecrt_domain_register_pdo_list(domain1,
- domain1_pdos)) {
- goto out_release_master;
- }
-
- if (ecrt_master_activate(master)) {
- goto out_release_master;
- }
-
- ecrt_master_prepare(master);
-
- requested_ticks = nano2count(TIMERTICKS);
- tick_period = start_rt_timer(requested_ticks);
-
- if (rt_task_init(&task, run, 0, 2000, 0, 1, NULL)) {
- goto out_stop_timer;
- }
-
- now = rt_get_time();
- if (rt_task_make_periodic(&task, now + tick_period,
- tick_period)) {
- goto out_stop_task;
- }
-
- return 0;
-
- out_stop_task:
- rt_task_delete(&task);
- out_stop_timer:
- stop_rt_timer();
- out_deactivate:
- ecrt_master_deactivate(master);
- out_release_master:
- ecrt_release_master(master);
- out_return:
- return -1;
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{24} -- \linenum{25}] The
- nanoseconds are converted to RTAI timer ticks and an RTAI timer is
- started. \textit{tick\_period} will be the ``real'' number of ticks
- used for the timer period (which can be different to the requested
- one).
-\item[\linenum{27}] The RTAI task is initialized
- by specifying the cyclic function, the parameter to hand over, the
- stack size, priority, a flag that tells, if the function will use
- floating point operations and a signal handler.
-\item[\linenum{32}] The task is made periodic by
- specifying a start time and a period.
-\end{description}
-
-The cleanup function of the RTAI module in listing~\ref{lst:rtaiclean}
-is nearly as simple as that of the minimal module.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI module
- cleanup function},label={lst:rtaiclean}]
- void __exit cleanup_mod(void)
- {
- rt_task_delete(&task);
- stop_rt_timer();
- ecrt_master_deactivate(master);
- ecrt_release_master(master);
- rt_sem_delete(&master_sem);
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{2}] The RTAI task will be stopped
- and deleted.
-\item[\linenum{3}] After that, the RTAI timer can
- be stopped.
-\end{description}
-
-The rest is the same as for the minimal module.
-
-Worth to mention is, that the cyclic function of the RTAI module
-(listing~\ref{lst:rtairun}) has a slightly different architecture. The
-function is not executed until returning for every cycle, but has an
-infinite loop in it, that is placed in a waiting state for the rest of
-each cycle.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI module cyclic
- function},label={lst:rtairun}]
- void run(long data)
- {
- while (1) {
- ecrt_master_receive(master);
- ecrt_domain_process(domain1);
-
- k_pos = EC_READ_U32(r_ssi_input);
-
- ecrt_master_run(master);
- ecrt_master_send(master);
-
- rt_task_wait_period();
- }
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{3}] The \textit{while (1)} loop
- executes for the lifetime of the RTAI task.
-\item[\linenum{12}] The
- \textit{rt\_task\_wait\_period()} function sets the process into a
- sleeping state until the beginning of the next cycle. It also
- checks, if the cyclic function has to be terminated.
-\end{description}
-
-%------------------------------------------------------------------------------
-
-\section{Concurrency Example}
-\label{sec:concurrency}
-\index{Examples!Concurrency}
-
-As mentioned before, there can be concurrent access to the EtherCAT master. The
-application and a EoE\index{EoE} process can compete for master access, for
-example. In this case, the module has to provide the locking mechanism, because
-it depends on the module's architecture which lock has to be used. The module
-makes this locking mechanism available to the master through the master's
-locking callbacks.
-
-In case of RTAI, the lock can be an RTAI semaphore, as shown in
-listing~\ref{lst:convar}. A normal Linux semaphore would not be appropriate,
-because it could not block the RTAI task due to RTAI running in a higher domain
-than the Linux kernel (see~\cite{rtai}).
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI semaphore for
- concurrent access},label={lst:convar}]
- SEM master_sem;
-\end{lstlisting}
-
-The module has to implement the two callbacks for requesting and
-releasing the master lock. An exemplary coding can be seen in
-listing~\ref{lst:conlock}.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI locking
- callbacks for concurrent access},label={lst:conlock}]
- int request_lock(void *data)
- {
- rt_sem_wait(&master_sem);
- return 0;
- }
-
- void release_lock(void *data)
- {
- rt_sem_signal(&master_sem);
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{1}] The \textit{request\_lock()}
-  function has a data parameter. The master always passes the value
-  that was specified when registering the callback function. This can
-  be used to hand over the master pointer. Notice that it has an
-  integer return value (see line 4).
-\item[\linenum{3}] The call to
-  \textit{rt\_sem\_wait()} either returns at once, if the semaphore
-  is free, or blocks until the semaphore is freed again. In either case,
-  the semaphore is finally reserved for the process calling the
-  request function.
-\item[\linenum{4}] When the lock was requested
- successfully, the function should return 0. The module can prohibit
- requesting the lock by returning non-zero (see paragraph ``Tuning
- the jitter'' below).
-\item[\linenum{7}] The \textit{release\_lock()}
-  function gets the same argument passed, but has a void return value,
-  because it always succeeds.
-\item[\linenum{9}] The \textit{rt\_sem\_signal()}
-  function frees the semaphore that was previously reserved with
-  \textit{rt\_sem\_wait()}.
-\end{description}
-
-In the module's init function, the semaphore must be initialized, and
-the callbacks must be passed to the EtherCAT master:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Module init
- function for concurrent access},label={lst:coninit}]
- int __init init_mod(void)
- {
- RTIME tick_period, requested_ticks, now;
-
- rt_sem_init(&master_sem, 1);
-
- if (!(master = ecrt_request_master(0))) {
- goto out_return;
- }
-
- ecrt_master_callbacks(master, request_lock,
- release_lock, NULL);
- // ...
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{5}] The call to
- \textit{rt\_sem\_init()} initializes the semaphore and sets its
- value to 1, meaning that only one process can reserve the semaphore
- without blocking.
-\item[\linenum{11}] The callbacks are passed to
- the master with a call to \textit{ecrt\_master\_callbacks()}. The
-  last parameter is the argument that the master should pass with
- each call to a callback function. Here it is not used and set to
- \textit{NULL}.
-\end{description}
-
-Since the cyclic function is only one of several competitors for master access,
-it has to request the lock like any other process. There is no need to
-use the callbacks (which are meant for processes of lower priority),
-so it can access the semaphore directly:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI cyclic
- function for concurrent access},label={lst:conrun}]
- void run(long data)
- {
- while (1) {
- rt_sem_wait(&master_sem);
-
- ecrt_master_receive(master);
- ecrt_domain_process(domain1);
-
- k_pos = EC_READ_U32(r_ssi_input);
-
- ecrt_master_run(master);
- ecrt_master_send(master);
-
- rt_sem_signal(&master_sem);
- rt_task_wait_period();
- }
- }
-\end{lstlisting}
-
-\begin{description}
-
-\item[\linenum{4}] Every access to the master has to be preceded by a call to
-\textit{rt\_sem\_wait()}, because another instance might currently be accessing
-the master.
-
-\item[\linenum{14}] When cyclic processing has finished, the semaphore has to
-be freed again, so that other processes get the chance to access the master.
-
-\end{description}
-
-A little change has to be made to the cleanup function in case of
-concurrent master access.
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={RTAI module
- cleanup function for concurrent access},label={lst:conclean}]
- void __exit cleanup_mod(void)
- {
- rt_task_delete(&task);
- stop_rt_timer();
- ecrt_master_deactivate(master);
- ecrt_release_master(master);
- rt_sem_delete(&master_sem);
- }
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{7}] Upon module cleanup, the
- semaphore has to be deleted, so that memory can be freed.
-\end{description}
-
-\paragraph{Tuning the Jitter}
-\index{Jitter}
-
-Concurrent access leads to higher jitter for the application task, because
-there are situations in which the task has to wait for a process of lower
-priority to finish accessing the master. In most cases this is acceptable,
-because a master access cycle (receive/process/send) only takes
-\unit{10-20}{\micro\second} on recent systems, which would be the maximum
-additional jitter. However, some applications demand minimal jitter. For this
-reason, master access can be prohibited by the application: If the time at
-which another process wants to access the master is too close to the beginning
-of the next application cycle, the module can deny the lock. In this case, the
-request callback has to return a non-zero value, meaning that the lock has not
-been taken. The foreign process must abort its master access and try again
-next time.
-
-This measure helps to significantly reduce the jitter produced by concurrent
-master access. Below are excerpts of example code:
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Variables for
- jitter reduction},label={lst:redvar}]
- #define FREQUENCY 10000 // RTAI task frequency in Hz
- // ...
- cycles_t t_last_cycle = 0;
- const cycles_t t_critical = cpu_khz * 1000 / FREQUENCY
- - cpu_khz * 30 / 1000;
-\end{lstlisting}
-
-\begin{description}
-
-\item[\linenum{3}] The variable \textit{t\_last\_cycle} holds the timer ticks
-at the beginning of the last realtime cycle.
-
-\item[\linenum{4}] \textit{t\_critical} contains the number of ticks that may
-have passed since the beginning of the last cycle before foreign access is no
-longer allowed. It is calculated by subtracting the ticks for
-\unit{30}{\micro\second} from the ticks for a complete cycle.
-
-\end{description}
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Cyclic function
- with reduced jitter},label={lst:redrun}]
- void run(long data)
- {
- while (1) {
- t_last_cycle = get_cycles();
- rt_sem_wait(&master_sem);
- // ...
-\end{lstlisting}
-
-\begin{description}
-\item[\linenum{4}] The tick count at the beginning of
-  the current realtime cycle is taken before reserving the semaphore.
-\end{description}
-
-\begin{lstlisting}[gobble=2,language=C,numbers=left,caption={Request callback
- for reduced jitter},label={lst:redreq}]
- int request_lock(void *data)
- {
- // too close to the next RT cycle: deny access.
- if (get_cycles() - t_last_cycle > t_critical)
- return -1;
-
- // allow access
- rt_sem_wait(&master_sem);
- return 0;
- }
-\end{lstlisting}
-
-\begin{description}
-
-\item[\linenum{4}] If the time of the request is too close to the next realtime
-cycle (here: \unit{<30}{\micro\second} before the estimated beginning), the
-lock is denied. The requesting process must abort its master access for this
-cycle.
-
-\end{description}
+\section{Automatic Device Node Creation}
+\label{sec:autonode}
+
+The \lstinline+ethercat+ command-line tool (see sec.~\ref{sec:tool})
+communicates with the master via a character device. The corresponding device
+nodes are created automatically if the udev daemon is running. Note that on
+some distributions, the \lstinline+udev+ package is not installed by default.
+
+The device nodes will be created with mode \lstinline+0660+ and group
+\lstinline+root+ by default. If ``normal'' users shall have read access, a
+udev rule file (for example \textit{/etc/udev/rules.d/99-EtherCAT.rules}) has
+to be created with the following contents:
+
+\begin{lstlisting}
+KERNEL=="EtherCAT[0-9]*", MODE="0664"
+\end{lstlisting}
+
+After the udev rule file is created and the EtherCAT master is restarted with
+\lstinline[breaklines=true]+/etc/init.d/ethercat restart+, the device node
+will be automatically created with the desired rights:
+
+\begin{lstlisting}
+# `\textbf{ls -l /dev/EtherCAT0}`
+crw-rw-r-- 1 root root 252, 0 2008-09-03 16:19 /dev/EtherCAT0
+\end{lstlisting}
+
+Now, the \lstinline+ethercat+ tool (see sec.~\ref{sec:tool}) can be used even
+by non-root users.
+
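+A non-root user could then, for example, list the slaves on the bus with the
+tool's \lstinline+slaves+ subcommand, which prints a short summary line for
+each slave:
+
+\begin{lstlisting}
+$ `\textbf{ethercat slaves}`
+\end{lstlisting}
+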
+If non-root users shall have write access, the following udev rule can be
+used instead:
+
+\begin{lstlisting}
+KERNEL=="EtherCAT[0-9]*", MODE="0664", GROUP="users"
+\end{lstlisting}
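+
+With this rule, members of the \lstinline+users+ group additionally get write
+access, while all other users keep read access. The result can be checked as
+above; the group of the node changes to \lstinline+users+ (device numbers and
+date are only illustrative):
+
+\begin{lstlisting}
+# `\textbf{ls -l /dev/EtherCAT0}`
+crw-rw-r-- 1 root users 252, 0 2008-09-03 16:19 /dev/EtherCAT0
+\end{lstlisting}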
%------------------------------------------------------------------------------
@@ -3885,18 +2649,18 @@
International Electrotechnical Commission (IEC), 2005.
\bibitem{gpl} GNU General Public License, Version 2.
-\url{http://www.gnu.org/licenses/gpl.txt}. August~9, 2006.
+\url{http://www.gnu.org/licenses/gpl-2.0.html}. October~15, 2008.
\bibitem{lsb} Linux Standard Base.
-\url{http://www.linuxfoundation.org/en/LSB}. August~9, 2006.
+\url{http://www.linuxfoundation.org/en/LSB}. August~9, 2006.
\bibitem{wireshark} Wireshark. \url{http://www.wireshark.org}. 2008.
-\bibitem{automata} {\it Hopcroft, J.~E. / Ullman, J.~D.}: Introduction to
+\bibitem{automata} {\it Hopcroft, J.\,E.\ / Ullman, J.\,D.}: Introduction to
Automata Theory, Languages and Computation. Addison-Wesley, Reading,
Mass.~1979.
-\bibitem{fsmmis} {\it Wagner, F. / Wolstenholme, P.}: State machine
+\bibitem{fsmmis} {\it Wagner, F.\ / Wolstenholme, P.}: State machine
misunderstandings. In: IEE journal ``Computing and Control Engineering'',
2004.
--- a/documentation/graphs/Makefile Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/graphs/Makefile Wed Jan 13 00:04:47 2010 +0100
@@ -5,11 +5,13 @@
#-----------------------------------------------------------------------------
GRAPHS := \
+ fsm_change \
fsm_master \
fsm_pdo_conf \
- fsm_pdo_read \
fsm_pdo_entry_conf \
fsm_pdo_entry_read \
+ fsm_pdo_read \
+ fsm_sii \
fsm_slave_conf \
fsm_slave_scan
@@ -20,13 +22,17 @@
#-----------------------------------------------------------------------------
-all: $(PDF)
+all: pdf
+
+pdf: $(PDF)
+
+ps: $(PS)
%.ps: %.dot
dot -Tps -o $@ $<
%.pdf: %.ps
- ps2pdf $<
+ epstopdf $<
clean:
@rm -f *.ps *.pdf
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/documentation/graphs/fsm_change.dot Wed Jan 13 00:04:47 2010 +0100
@@ -0,0 +1,34 @@
+
+/* $Id$ */
+
+digraph change {
+
+ start [fontname="Helvetica"]
+ start -> check [weight=5]
+
+ check [fontname="Helvetica"]
+ check -> status [weight=5]
+ check -> error [fontname="Helvetica", label="Response\ntimeout"]
+
+ status [fontname="Helvetica"]
+ status -> end [fontname="Helvetica", label="Success", weight=5]
+ status -> code [fontname="Helvetica", label="Refuse", weight=5]
+ status -> error [fontname="Helvetica", label="Change\ntimeout"]
+
+ code [fontname="Helvetica"]
+ code -> ack [weight=2]
+
+ start_ack [fontname="Helvetica"]
+ start_ack -> ack [fontname="Helvetica", label="Ack only"]
+
+ ack [fontname="Helvetica"]
+ ack -> check_ack [weight=2]
+
+ check_ack [fontname="Helvetica"]
+ check_ack -> end [fontname="Helvetica", label="Ack only"]
+ check_ack -> error [weight=2]
+
+ end [fontname="Helvetica"]
+
+ error [fontname="Helvetica"]
+}
--- a/documentation/graphs/fsm_pdo_conf.dot Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/graphs/fsm_pdo_conf.dot Wed Jan 13 00:04:47 2010 +0100
@@ -2,17 +2,14 @@
/* $Id$ */
digraph pdo_conf {
- size="7,9"
- center=1
- ratio=fill
start [fontname="Helvetica"]
start -> action_next_sync [fontname="Helvetica",label="First SM",weight=10]
start -> end [fontname="Helvetica",label="No config"]
action_next_sync [shape=point,label=""]
- action_next_sync -> action_check_assignment [fontname="Helvetica",label="No Pdos"]
- action_next_sync -> action_pdo_mapping [fontname="Helvetica",label="First Pdo",weight=10]
+ action_next_sync -> action_check_assignment [fontname="Helvetica",label="No PDOs"]
+ action_next_sync -> action_pdo_mapping [fontname="Helvetica",label="First PDO",weight=10]
action_next_sync -> end [fontname="Helvetica",label="No more SMs"]
action_pdo_mapping [shape=point,label=""]
@@ -32,22 +29,22 @@
action_next_pdo_mapping [shape=point,label=""]
action_next_pdo_mapping -> action_check_assignment [weight=10]
action_next_pdo_mapping -> action_pdo_mapping
- [fontname="Helvetica",label="Next Pdo"]
+ [fontname="Helvetica",label="Next PDO"]
action_check_assignment [shape=point,label=""]
action_check_assignment -> action_next_sync [fontname="Helvetica",label="Assign ok"]
action_check_assignment -> zero_pdo_count [weight=10]
zero_pdo_count [fontname="Helvetica"]
- zero_pdo_count -> action_next_sync [fontname="Helvetica",label="No Pdos"]
- zero_pdo_count -> action_assign_pdo [fontname="Helvetica",label="First Pdo", weight=10]
+ zero_pdo_count -> action_next_sync [fontname="Helvetica",label="No PDOs"]
+ zero_pdo_count -> action_assign_pdo [fontname="Helvetica",label="First PDO", weight=10]
action_assign_pdo [shape=point,label=""]
action_assign_pdo -> assign_pdo [weight=10]
assign_pdo [fontname="Helvetica"]
- assign_pdo -> set_pdo_count [fontname="Helvetica",label="No more Pdos", weight=10]
- assign_pdo -> action_assign_pdo [fontname="Helvetica",label="Next Pdo"]
+ assign_pdo -> set_pdo_count [fontname="Helvetica",label="No more PDOs", weight=10]
+ assign_pdo -> action_assign_pdo [fontname="Helvetica",label="Next PDO"]
set_pdo_count [fontname="Helvetica"]
set_pdo_count -> action_next_sync
--- a/documentation/graphs/fsm_pdo_entry_conf.dot Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/graphs/fsm_pdo_entry_conf.dot Wed Jan 13 00:04:47 2010 +0100
@@ -2,9 +2,6 @@
/* $Id$ */
digraph pdo_entry_conf {
- size="7,9"
- center=1
- ratio=fill
start [fontname="Helvetica"]
start -> zero_entry_count [weight=10]
--- a/documentation/graphs/fsm_pdo_entry_read.dot Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/graphs/fsm_pdo_entry_read.dot Wed Jan 13 00:04:47 2010 +0100
@@ -2,9 +2,6 @@
/* $Id$ */
digraph pdo_entry_read {
- size="7,9"
- center=1
- ratio=fill
start [fontname="Helvetica"]
start -> count [weight=5]
--- a/documentation/graphs/fsm_pdo_read.dot Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/graphs/fsm_pdo_read.dot Wed Jan 13 00:04:47 2010 +0100
@@ -2,9 +2,6 @@
/* $Id$ */
digraph pdo_read {
- size="7,9"
- center=1
- ratio=fill
start [fontname="Helvetica"]
start -> action_next_sync [fontname="Helvetica", label="First SM", weight=5]
@@ -17,8 +14,8 @@
pdo_count -> action_next_pdo [weight=5]
action_next_pdo [shape=point,label=""]
- action_next_pdo -> pdo [fontname="Helvetica", label="Next Pdo", weight=5]
- action_next_pdo -> action_next_sync [fontname="Helvetica", label="No more Pdos"]
+ action_next_pdo -> pdo [fontname="Helvetica", label="Next PDO", weight=5]
+ action_next_pdo -> action_next_sync [fontname="Helvetica", label="No more PDOs"]
pdo [fontname="Helvetica"]
pdo -> pdo_entries [weight=5]
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/documentation/graphs/fsm_sii.dot Wed Jan 13 00:04:47 2010 +0100
@@ -0,0 +1,33 @@
+
+/* $Id$ */
+
+digraph sii {
+
+ start_reading [fontname="Helvetica"]
+ start_reading -> read_check [weight=5]
+
+ read_check [fontname="Helvetica"]
+ read_check -> error
+ read_check -> read_fetch [weight=5]
+
+ read_fetch [fontname="Helvetica"]
+ read_fetch -> error
+ read_fetch -> end [weight=5]
+ read_fetch -> read_fetch
+
+ start_writing [fontname="Helvetica"]
+ start_writing -> write_check [weight=5]
+
+ write_check [fontname="Helvetica"]
+ write_check -> error
+ write_check -> write_check2 [weight=5]
+
+ write_check2 [fontname="Helvetica"]
+ write_check2 -> error
+ write_check2 -> end [weight=5]
+ write_check2 -> write_check2
+
+ end [fontname="Helvetica"]
+
+ error [fontname="Helvetica"]
+}
--- a/documentation/graphs/fsm_slave_conf.dot Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/graphs/fsm_slave_conf.dot Wed Jan 13 00:04:47 2010 +0100
@@ -2,12 +2,13 @@
/* $Id$ */
digraph slaveconf {
- size="7,9"
- center=1
- ratio=fill
+ size="3,5"
start [fontname="Helvetica"]
- start -> init [weight=10]
+ start -> enter_init [weight=10]
+
+ enter_init [shape=point, label=""]
+ enter_init -> init [weight=10]
init [fontname="Helvetica"]
init -> enter_mbox_sync [fontname="Helvetica", label="No FMMUs"]
@@ -30,23 +31,26 @@
preop -> enter_sdo_conf [weight=10]
enter_sdo_conf [shape=point, label=""]
- enter_sdo_conf -> enter_pdo_conf [fontname="Helvetica", label="No Sdos\nconfigured"]
+ enter_sdo_conf -> enter_pdo_conf [fontname="Helvetica", label="No SDOs\nconfigured"]
enter_sdo_conf -> sdo_conf [weight=10]
sdo_conf [fontname="Helvetica"]
+ sdo_conf -> enter_init [fontname="Helvetica", label="Config\ndetached"]
sdo_conf -> enter_pdo_conf [weight=10]
enter_pdo_conf [shape=point, label=""]
enter_pdo_conf -> pdo_conf [weight=10]
pdo_conf [fontname="Helvetica"]
+ pdo_conf -> enter_init [fontname="Helvetica", label="Config\ndetached"]
pdo_conf -> enter_pdo_sync [weight=10]
enter_pdo_sync [shape=point, label=""]
- enter_pdo_sync -> enter_fmmu [fontname="Helvetica", label="No Pdo SMs"]
+ enter_pdo_sync -> enter_fmmu [fontname="Helvetica", label="No PDO SMs"]
enter_pdo_sync -> pdo_sync [weight=10]
pdo_sync [fontname="Helvetica"]
+ pdo_sync -> enter_init [fontname="Helvetica", label="Config\ndetached"]
pdo_sync -> enter_fmmu [weight=10]
enter_fmmu [shape=point,label=""]
--- a/documentation/graphs/fsm_slave_scan.dot Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/graphs/fsm_slave_scan.dot Wed Jan 13 00:04:47 2010 +0100
@@ -2,27 +2,34 @@
/* $Id$ */
digraph slavescan {
- size="7,9"
- center=1
- ratio=fill
+ size="1,7"
+ start [fontname="Helvetica"]
start -> address [weight=10]
- address -> error
+ address [fontname="Helvetica"]
address -> state [weight=10]
- state -> error
+ state [fontname="Helvetica"]
state -> base [weight=10]
- base -> error
+ base [fontname="Helvetica"]
base -> datalink [weight=10]
- datalink -> error
+ datalink [fontname="Helvetica"]
datalink -> sii_size [weight=10]
- sii_size -> error
+ sii_size [fontname="Helvetica"]
sii_size -> sii_data [weight=10]
- sii_data -> error
- sii_data -> end [weight=10]
+ sii_data [fontname="Helvetica"]
+ sii_data -> preop [weight=10]
+
+ preop [fontname="Helvetica"]
+ preop -> pdos [weight=10]
+
+ pdos [fontname="Helvetica"]
+ pdos -> end [weight=10]
+
+ end [fontname="Helvetica"]
}
--- a/documentation/images/Makefile Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/images/Makefile Wed Jan 13 00:04:47 2010 +0100
@@ -7,15 +7,10 @@
FIGS := \
app-config.fig \
architecture.fig \
+ attach.fig \
fmmus.fig \
- fsm-change.fig \
fsm-coedown.fig \
fsm-eoe.fig \
- fsm-idle.fig \
- fsm-op.fig \
- fsm-sii.fig \
- fsm-slaveconf.fig \
- fsm-slavescan.fig \
interrupt.fig \
master-locks.fig \
masters.fig \
--- a/documentation/images/app-config.fig Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/images/app-config.fig Wed Jan 13 00:04:47 2010 +0100
@@ -95,16 +95,16 @@
4 0 0 50 -1 18 12 0.0000 4 180 1290 3735 2025 Sync Manager\001
4 0 0 50 -1 16 12 0.0000 4 135 450 3735 2385 Index\001
4 0 0 50 -1 16 12 0.0000 4 135 750 3735 2610 Direction\001
-4 0 0 50 -1 18 12 0.0000 4 180 1575 3735 3420 Sdo Configuration\001
+4 0 0 50 -1 18 12 0.0000 4 180 1575 3735 3420 SDO Configuration\001
4 0 0 50 -1 16 12 0.0000 4 135 450 3735 3780 Index\001
4 0 0 50 -1 16 12 0.0000 4 135 780 3735 4005 Subindex\001
4 0 0 50 -1 16 12 0.0000 4 135 390 3735 4230 Data\001
-4 0 0 50 -1 18 12 0.0000 4 180 1140 3735 5040 Sdo Request\001
+4 0 0 50 -1 18 12 0.0000 4 180 1140 3735 5040 SDO Request\001
4 0 0 50 -1 16 12 0.0000 4 135 450 3735 5310 Index\001
4 0 0 50 -1 16 12 0.0000 4 135 780 3735 5535 Subindex\001
-4 0 0 50 -1 18 12 0.0000 4 135 330 6210 2025 Pdo\001
+4 0 0 50 -1 18 12 0.0000 4 135 330 6210 2025 PDO\001
4 0 0 50 -1 16 12 0.0000 4 135 450 6210 2385 Index\001
-4 0 0 50 -1 18 12 0.0000 4 180 885 7785 2025 Pdo Entry\001
+4 0 0 50 -1 18 12 0.0000 4 180 885 7785 2025 PDO Entry\001
4 0 0 50 -1 16 12 0.0000 4 135 450 7785 2340 Index\001
4 0 0 50 -1 16 12 0.0000 4 135 780 7785 2565 Subindex\001
4 0 0 50 -1 16 12 0.0000 4 180 720 7785 2790 Bitlength\001
--- a/documentation/images/architecture.fig Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/images/architecture.fig Wed Jan 13 00:04:47 2010 +0100
@@ -71,9 +71,9 @@
4 1 0 50 -1 16 10 0.0000 4 120 465 5445 5760 Device\001
4 1 0 50 -1 16 10 0.0000 4 120 615 5445 5925 Interface\001
-6
-6 3870 4275 4500 5355
+6 3908 4310 4463 5319
5 1 0 1 0 7 50 -1 -1 0.000 0 0 0 0 3958.125 4815.000 3915 4320 4455 4815 3915 5310
-4 1 0 50 -1 16 10 4.7124 4 120 570 4162 4822 Realtime\001
+4 1 0 50 -1 16 10 4.7124 4 150 765 4162 4822 Application\001
4 1 0 50 -1 16 10 4.7124 4 120 615 3997 4822 Interface\001
-6
6 4538 2648 5813 3293
@@ -84,8 +84,8 @@
6 1575 2430 7785 2925
2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
7785 2655 1575 2655
-4 0 0 50 -1 16 12 0.0000 4 180 1110 1665 2880 Kernel space\001
-4 0 0 50 -1 16 12 0.0000 4 180 945 1665 2565 User space\001
+4 0 0 50 -1 16 12 0.0000 4 180 1110 1665 2880 Kernelspace\001
+4 0 0 50 -1 16 12 0.0000 4 180 945 1665 2565 Userspace\001
-6
6 4673 1298 5677 2302
1 4 0 1 0 7 50 -1 -1 4.000 1 0.0000 5175 1800 495 495 5670 2295 4680 1305
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/documentation/images/attach.fig Wed Jan 13 00:04:47 2010 +0100
@@ -0,0 +1,129 @@
+#FIG 3.2
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+6 450 900 2475 1575
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 450 900 2475 900 2475 1575 450 1575 450 900
+4 0 0 50 -1 16 12 0.0000 4 135 660 495 1080 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 495 1305 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 465 495 1530 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 1080 0x00000001\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 1305 0x00000001\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 1350 1530 0x0000\001
+-6
+6 450 2025 2475 2700
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 450 2025 2475 2025 2475 2700 450 2700 450 2025
+4 0 0 50 -1 16 12 0.0000 4 135 660 495 2205 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 495 2430 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 465 495 2655 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 2205 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 2430 0x00000004\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 1350 2655 0x1000\001
+-6
+6 450 3150 2475 3825
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 450 3150 2475 3150 2475 3825 450 3825 450 3150
+4 0 0 50 -1 16 12 0.0000 4 135 660 495 3330 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 495 3555 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 465 495 3780 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 3330 0x00000001\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 3555 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 1350 3780 0x2000\001
+-6
+6 450 4275 2475 4950
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 450 4275 2475 4275 2475 4950 450 4950 450 4275
+4 0 0 50 -1 16 12 0.0000 4 135 660 495 4455 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 495 4680 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 465 495 4905 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 4455 0x00000001\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 1350 4680 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 1350 4905 0x0000\001
+-6
+6 4500 900 6750 1800
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 4500 900 6750 900 6750 1800 4500 1800 4500 900
+4 0 0 50 -1 16 12 0.0000 4 135 465 4545 1080 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 705 4545 1305 Position:\001
+4 0 0 50 -1 16 12 0.0000 4 135 660 4545 1530 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 4545 1755 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 5400 1080 0x0000\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 1530 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 1755 0x00000004\001
+4 0 0 50 -1 16 12 0.0000 4 135 105 5400 1305 1\001
+-6
+6 4500 2025 6750 2925
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 4500 2025 6750 2025 6750 2925 4500 2925 4500 2025
+4 0 0 50 -1 16 12 0.0000 4 135 465 4545 2205 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 705 4545 2430 Position:\001
+4 0 0 50 -1 16 12 0.0000 4 135 660 4545 2655 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 4545 2880 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 5400 2205 0x0000\001
+4 0 4 50 -1 16 12 0.0000 4 135 1035 5400 2880 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 105 5400 2430 0\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 2655 0x00000001\001
+-6
+6 4500 3150 6750 4050
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 4500 3150 6750 3150 6750 4050 4500 4050 4500 3150
+4 0 0 50 -1 16 12 0.0000 4 135 465 4545 3330 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 705 4545 3555 Position:\001
+4 0 0 50 -1 16 12 0.0000 4 135 660 4545 3780 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 4545 4005 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 5400 3330 0x2000\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 3780 0x00000001\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 4005 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 105 5400 3555 0\001
+-6
+6 4500 4275 6750 5175
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 4500 4275 6750 4275 6750 5175 4500 5175 4500 4275
+4 0 0 50 -1 16 12 0.0000 4 135 465 4545 4455 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 705 4545 4680 Position:\001
+4 0 0 50 -1 16 12 0.0000 4 135 660 4545 4905 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 4545 5130 Product:\001
+4 0 4 50 -1 16 12 0.0000 4 135 615 5400 4455 0x3000\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 4905 0x00000001\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 5130 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 105 5400 4680 0\001
+-6
+6 4500 5400 6750 6300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 4500 5400 6750 5400 6750 6300 4500 6300 4500 5400
+4 0 0 50 -1 16 12 0.0000 4 135 465 4545 5580 Alias:\001
+4 0 0 50 -1 16 12 0.0000 4 135 705 4545 5805 Position:\001
+4 0 0 50 -1 16 12 0.0000 4 135 660 4545 6030 Vendor:\001
+4 0 0 50 -1 16 12 0.0000 4 135 690 4545 6255 Product:\001
+4 0 0 50 -1 16 12 0.0000 4 135 615 5400 5580 0x2000\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 6030 0x00000001\001
+4 0 0 50 -1 16 12 0.0000 4 135 1035 5400 6255 0x00000002\001
+4 0 0 50 -1 16 12 0.0000 4 135 105 5400 5805 1\001
+-6
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4500 1350 2475 2385
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 675 1575 675 2025
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 675 2700 675 3150
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 675 3825 675 4275
+2 1 2 1 0 7 50 -1 -1 3.000 0 0 -1 0 0 2
+ 2475 1215 4500 2475
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2475 3510 4500 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4500 5850 2475 4590
+4 2 0 50 -1 16 12 0.0000 4 135 105 360 4410 3\001
+4 2 0 50 -1 16 12 0.0000 4 135 105 360 1035 0\001
+4 2 0 50 -1 16 12 0.0000 4 135 105 360 2160 1\001
+4 2 0 50 -1 16 12 0.0000 4 135 105 360 3285 2\001
+4 0 0 50 -1 16 14 0.0000 4 165 630 450 765 Slaves\001
+4 0 0 50 -1 16 14 0.0000 4 210 1950 4500 765 Slave Configurations\001
--- a/documentation/images/fmmus.fig Mon Oct 19 14:33:59 2009 +0200
+++ b/documentation/images/fmmus.fig Wed Jan 13 00:04:47 2010 +0100
@@ -10,6 +10,11 @@
5 1 0 1 0 7 50 -1 20 0.000 0 0 0 0 1755.000 1440.000 1215 1440 1755 900 2295 1440
5 1 0 1 0 7 50 -1 20 0.000 0 0 0 0 4095.000 1440.000 3735 1440 4095 1080 4455 1440
5 1 0 1 0 7 50 -1 20 0.000 0 0 0 0 8190.000 1440.000 7650 1440 8190 900 8730 1440
+6 3465 4455 5640 4680
+2 2 0 1 0 7 50 -1 42 0.000 0 0 -1 0 0 5
+ 3465 4455 3645 4455 3645 4680 3465 4680 3465 4455
+4 0 0 50 -1 16 12 0.0000 4 180 1905 3735 4635 Registered PDO Entries\001
+-6
2 1 1 1 0 7 52 -1 46 4.000 0 0 -1 0 0 2
1215 1665 1620 3870
2 1 1 1 0 7 52 -1 46 4.000 0 0 -1 0 0 2
@@ -186,26 +191,6 @@
5310 2880 4185 2880 4185 2520 5310 2520 5310 2880
2 1 1 1 0 7 52 -1 42 4.000 0 0 -1 0 0 2
4455 1665 5850 3870
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
- 1 1 1.00 60.00 120.00
- 1800 4410 1800 4095
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
- 1 1 1.00 60.00 120.00
- 2700 4410 2700 4095
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
- 1 1 1.00 60.00 120.00
- 5130 4410 5130 4095
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
- 1 1 1.00 60.00 120.00
- 6210 4410 6210 4095
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
- 1 1 1.00 60.00 120.00
- 2340 4410 2340 4095
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
- 1 1 1.00 60.00 120.00
- 3645 5265 3330 5265
-2 2 0 1 0 7 50 -1 42 0.000 0 0 -1 0 0 5
- 3465 4815 3645 4815 3645 5040 3465 5040 3465 4815
4 0 0 50 -1 16 12 0.0000 4 135 420 675 1395 RAM\001
4 1 0 50 -1 16 12 0.0000 4 135 390 4095 1350 SM1\001
4 0 0 50 -1 16 12 0.0000 4 135 420 6390 1395 RAM\001
@@ -219,5 +204,3 @@
4 1 0 50 -1 16 12 0.0000 4 135 390 8190 1305 SM3\001
4 0 0 50 -1 16 12 0.0000 4 180 1290 1080 3825 Domain0 Image\001
4 0 0 50 -1 16 12 0.0000 4 180 1290 4050 3825 Domain1 Image\001
-4 0 0 50 -1 16 12 0.0000 4 180 1815 3735 5310 Process data pointers\001
-4 0 0 50 -1 16 12 0.0000 4 180 1890 3735 4995 Registered Pdo entries\001
--- a/documentation/images/fsm-change.fig Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,101 +0,0 @@
-#FIG 3.2
-Portrait
-Center
-Metric
-A4
-100.00
-Single
--2
-1200 2
-0 32 #8e8e8e
-6 398 2378 2122 3112
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 1260 2745 855 360 405 2385 2115 3105
-4 1 0 50 -1 16 12 0.0000 4 120 960 1260 2790 CODE\001
--6
-6 2513 2378 4237 3112
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 3375 2745 855 360 2520 2385 4230 3105
-4 1 0 50 -1 16 12 0.0000 4 120 690 3375 2790 ACK\001
--6
-6 4523 2378 6458 3112
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 5490 2745 855 360 4635 2385 6345 3105
-4 1 0 50 -1 16 12 0.0000 4 120 1935 5490 2790 CHECK ACK\001
--6
-6 6705 2340 8505 3150
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 7605 2745 855 360 6750 2385 8460 3105
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 7605 2738 765 322 6840 2416 8370 3060
-4 1 0 50 -1 16 12 0.0000 4 120 1200 7605 2790 ERROR\001
--6
-6 2513 893 4237 1627
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 3375 1260 855 360 2520 900 4230 1620
-4 1 0 50 -1 16 12 0.0000 4 120 1170 3375 1305 CHECK\001
--6
-6 4628 893 6352 1627
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 5490 1260 855 360 4635 900 6345 1620
-4 1 0 50 -1 16 12 0.0000 4 120 1305 5490 1305 STATUS\001
--6
-6 6705 855 8505 1665
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 7605 1260 855 360 6750 900 8460 1620
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 7605 1261 748 315 6857 946 8353 1576
-4 1 0 50 -1 16 12 0.0000 4 120 705 7605 1305 END\001
--6
-6 360 855 2160 1665
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 1260 1260 855 360 405 900 2115 1620
-4 1 0 50 -1 16 12 0.0000 4 120 1080 1260 1305 START\001
--6
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 495 675 101 101 495 675 585 720
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 2115 1260 2520 1260
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4230 1260 4635 1260
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 6345 1260 6750 1260
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 5670 900 5670 540 5400 540 5355 900
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 2115 2745 2520 2745
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4230 2745 4635 2745
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 6345 2745 6750 2745
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4905 1530 4365 1980 2610 2115 1755 2430
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 0 50 -1 20 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 540 765 675 990
- 0.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4050 1485 4500 1620 6120 1980 6930 2520
- 0.000 -1.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6165 1485 6750 1935 7110 2430
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3915 3060 5490 3285 6795 2925
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 1890 3015 5355 3510 6975 3015
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 5670 2386 5670 2026 5400 2026 5355 2386
- 0.000 -1.000 -1.000 0.000
--- a/documentation/images/fsm-idle.fig Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,110 +0,0 @@
-#FIG 3.2
-Portrait
-Center
-Metric
-A4
-100.00
-Single
--2
-1200 2
-0 32 #8e8e8e
-0 33 #000000
-6 3375 855 5175 1665
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 1260 855 360 3420 900 5130 1620
-4 1 0 50 -1 16 12 0.0000 4 120 1080 4275 1305 START\001
--6
-6 3420 3105 5175 3870
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4282 3503 855 360 3427 3143 5137 3863
-4 1 0 50 -1 16 12 0.0000 4 120 930 4282 3465 READ\001
-4 1 0 50 -1 16 12 0.0000 4 120 1290 4282 3690 STATES\001
--6
-6 1500 4261 3465 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 2482 4628 855 360 1627 4268 3337 4988
-4 1 0 50 -1 16 12 0.0000 4 120 1965 2482 4590 CONFIGURE\001
-4 1 0 50 -1 16 12 0.0000 4 120 1305 2482 4815 SLAVES\001
--6
-6 4950 1800 4950 1800
--6
-6 3240 2011 5310 2745
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 2378 855 360 3420 2018 5130 2738
-4 1 0 50 -1 16 12 0.0000 4 120 2070 4275 2423 BROADCAST\001
--6
-6 5220 4261 6944 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6082 4628 855 360 5227 4268 6937 4988
-4 1 0 50 -1 16 12 0.0000 4 120 1035 6082 4590 WRITE\001
-4 1 0 50 -1 16 12 0.0000 4 120 1425 6082 4815 EEPROM\001
--6
-6 5206 1471 6930 2205
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6068 1838 855 360 5213 1478 6923 2198
-4 1 0 50 -1 16 12 0.0000 4 120 1695 6068 1800 SCAN FOR\001
-4 1 0 50 -1 16 12 0.0000 4 120 1305 6068 2025 SLAVES\001
--6
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3510 675 101 101 3510 675 3600 720
-3 2 0 1 0 0 50 -1 20 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 3555 765 3690 990
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 1620 4275 2025
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 2745 4275 3150
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 3510 3690 3060 3690 3060 3330 3510 3330
- 0.000 -1.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4770 945 4770 630 4455 630 4410 900
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3825 3825 3690 4140 3240 4455
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3870 2070 3735 1845 3915 1575
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3600 3285 3150 2430 3645 1485
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5625 1530 5445 1305 5130 1260
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5130 2385 5445 2385 5625 2160
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 4740 3825 4875 4140 5325 4455
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 3330 4635 5220 4635
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 1710 4815 1260 4815 1260 4455 1710 4455
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 2475 4275 2700 2295 3420 1395
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6660 4365 7245 1620 5085 1125
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 6840 4815 7290 4815 7290 4455 6840 4455
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 6615 1575 6660 1260 6435 1215 6300 1485
- 0.000 -1.000 -1.000 0.000
--- a/documentation/images/fsm-op.fig Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,116 +0,0 @@
-#FIG 3.2
-Portrait
-Center
-Metric
-A4
-100.00
-Single
--2
-1200 2
-0 32 #8e8e8e
-6 3375 855 5175 1665
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 1260 855 360 3420 900 5130 1620
-4 1 0 50 -1 16 12 0.0000 4 120 1080 4275 1305 START\001
--6
-6 3240 2011 5310 2745
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 2378 855 360 3420 2018 5130 2738
-4 1 0 50 -1 16 12 0.0000 4 120 2070 4275 2423 BROADCAST\001
--6
-6 3420 3105 5175 3870
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4282 3503 855 360 3427 3143 5137 3863
-4 1 0 50 -1 16 12 0.0000 4 120 930 4282 3465 READ\001
-4 1 0 50 -1 16 12 0.0000 4 120 1290 4282 3690 STATES\001
--6
-6 5220 4230 6975 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6097 4628 855 360 5242 4268 6952 4988
-4 1 0 50 -1 16 12 0.0000 4 120 1590 6097 4590 VALIDATE\001
-4 1 0 50 -1 16 12 0.0000 4 120 1425 6097 4815 VENDOR\001
--6
-6 5220 5355 6975 6120
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6112 5753 855 360 5257 5393 6967 6113
-4 1 0 50 -1 16 12 0.0000 4 120 1590 6112 5715 VALIDATE\001
-4 1 0 50 -1 16 12 0.0000 4 120 1635 6112 5925 PRODUCT\001
--6
-6 5092 6511 7162 7245
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6127 6878 855 360 5272 6518 6982 7238
-4 1 0 50 -1 16 12 0.0000 4 120 1500 6127 6840 REWRITE\001
-4 1 0 50 -1 16 12 0.0000 4 120 2070 6127 7050 ADDRESSES\001
--6
-6 1485 4261 3450 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 2467 4628 855 360 1612 4268 3322 4988
-4 1 0 50 -1 16 12 0.0000 4 120 1965 2467 4590 CONFIGURE\001
-4 1 0 50 -1 16 12 0.0000 4 120 1305 2467 4800 SLAVES\001
--6
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3510 675 101 101 3510 675 3600 720
-3 2 0 1 0 0 50 -1 20 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 3555 765 3690 990
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 1620 4275 2025
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 2745 4275 3150
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3825 3825 3600 4185 3240 4455
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 4706 3825 4931 4185 5291 4455
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5715 4950 5580 5175 5670 5445
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6525 5445 6660 5220 6570 4950
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 6120 6120 6120 6525
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 2475 4275 2745 2295 3465 1395
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 4815 2070 4950 1845 4770 1575
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 4950 3285 5400 2430 4905 1485
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6120 4275 5850 2610 5040 1440
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 3510 3690 3060 3690 3060 3330 3510 3330
- 0.000 -1.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4770 945 4770 630 4455 630 4410 900
- 0.000 -1.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6840 5580 6930 3600 5130 1350
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6885 6705 7245 3555 5085 1170
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 1710 4815 1260 4815 1260 4455 1710 4455
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 5355 7065 4905 7065 4905 6705 5355 6705
- 0.000 -1.000 -1.000 0.000
--- a/documentation/images/fsm-sii.fig Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,106 +0,0 @@
-#FIG 3.2
-Portrait
-Center
-Metric
-A4
-100.00
-Single
--2
-1200 2
-0 32 #8e8e8e
-6 2235 893 4515 1627
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 3375 1260 855 360 2520 900 4230 1620
-4 1 0 50 -1 16 12 0.0000 4 150 2280 3375 1305 READ_CHECK\001
--6
-6 4388 893 6593 1627
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 5490 1260 855 360 4635 900 6345 1620
-4 1 0 50 -1 16 12 0.0000 4 150 2205 5490 1305 READ_FETCH\001
--6
-6 165 893 2355 1627
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 1260 1260 855 360 405 900 2115 1620
-4 1 0 50 -1 16 12 0.0000 4 150 2190 1260 1305 READ_START\001
--6
-6 6705 1710 8505 2520
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 7605 2115 855 360 6750 1755 8460 2475
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 7605 2116 748 315 6857 1801 8353 2431
-4 1 0 50 -1 16 12 0.0000 4 120 705 7605 2160 END\001
--6
-6 113 2648 2408 3382
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 1260 3015 855 360 405 2655 2115 3375
-4 1 0 50 -1 16 12 0.0000 4 150 2295 1260 3060 WRITE_START\001
--6
-6 2183 2648 4568 3382
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 3375 3015 855 360 2520 2655 4230 3375
-4 1 0 50 -1 16 12 0.0000 4 150 2385 3375 3060 WRITE_CHECK\001
--6
-6 4208 2648 6773 3382
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 5490 3015 855 360 4635 2655 6345 3375
-4 1 0 50 -1 16 12 0.0000 4 150 2565 5490 3060 WRITE_CHECK2\001
--6
-6 3555 1710 5355 2520
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4455 2115 855 360 3600 1755 5310 2475
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4455 2108 765 322 3690 1786 5220 2430
-4 1 0 50 -1 16 12 0.0000 4 120 1200 4455 2160 ERROR\001
--6
-6 360 540 675 990
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 495 675 101 101 495 675 585 720
-3 2 0 1 0 0 50 -1 20 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 540 765 675 990
- 0.000 0.000
--6
-6 360 2295 675 2745
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 495 2430 101 101 495 2430 585 2475
-3 2 0 1 0 0 50 -1 20 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 540 2520 675 2745
- 0.000 0.000
--6
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 2115 1260 2520 1260
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4230 1260 4635 1260
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 2115 3015 2520 3015
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4230 3015 4635 3015
- 0.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3375 1620 3465 1845 3735 1935
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5490 1620 5445 1845 5220 1935
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3375 2610 3465 2385 3735 2295
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5490 2610 5445 2385 5220 2295
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6255 1440 6750 1620 7020 1845
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 6210 2835 6750 2700 7020 2385
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 5670 900 5670 630 5400 630 5400 900
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 5670 3375 5670 3690 5310 3690 5310 3375
- 0.000 -1.000 -1.000 0.000
--- a/documentation/images/fsm-slaveconf.fig Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,132 +0,0 @@
-#FIG 3.2
-Portrait
-Center
-Metric
-A4
-100.00
-Single
--2
-1200 2
-0 32 #8e8e8e
-6 3413 893 5137 1627
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 1260 855 360 3420 900 5130 1620
-4 1 0 50 -1 16 12 0.0000 4 135 330 4275 1305 INIT\001
--6
-6 3413 2011 5137 2745
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 2378 855 360 3420 2018 5130 2738
-4 1 0 50 -1 16 12 0.0000 4 135 525 4275 2423 SYNC\001
--6
-6 3413 3136 5137 3870
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 3503 855 360 3420 3143 5130 3863
-4 1 0 50 -1 16 12 0.0000 4 135 630 4275 3548 PREOP\001
--6
-6 3413 4261 5137 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 4628 855 360 3420 4268 5130 4988
-4 1 0 50 -1 16 12 0.0000 4 135 570 4275 4673 FMMU\001
--6
-6 3360 5386 5190 6120
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 5753 855 360 3420 5393 5130 6113
-4 1 0 50 -1 16 12 0.0000 4 165 1050 4275 5798 SDO_CONF\001
--6
-6 3375 6480 5175 7245
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 6878 855 360 3420 6518 5130 7238
-4 1 0 50 -1 16 12 0.0000 4 135 765 4275 6923 SAFEOP\001
--6
-6 3413 7636 5137 8370
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 8003 855 360 3420 7643 5130 8363
-4 1 0 50 -1 16 12 0.0000 4 135 270 4275 8048 OP\001
--6
-6 6075 4230 7830 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6968 4628 855 360 6113 4268 7823 4988
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6968 4635 748 315 6220 4320 7716 4950
-4 1 0 50 -1 16 12 0.0000 4 135 390 6968 4673 END\001
--6
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3510 675 101 101 3510 675 3600 720
-3 2 0 1 0 0 50 -1 20 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 3555 765 3690 990
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 1620 4275 2025
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 2745 4275 3150
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 3870 4275 4275
- 0.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5085 2520 5940 3060 6705 4275
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5130 4635 5670 4635 6120 4635
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5130 1304 6390 2520 6975 4275
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5130 8010 6255 6840 6975 4995
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5085 3600 5805 3825 6390 4320
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5085 6750 5985 6210 6705 4995
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3510 1440 3105 2385 3510 3330
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3510 3690 3105 4635 3510 5580
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4950 1035 5085 765 4860 675 4725 945
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4950 3285 5085 3015 4860 2925 4725 3195
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 4995 4275 5400
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 6120 4275 6525
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 7245 4275 7650
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4950 6660 5085 6390 4860 6300 4725 6570
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4950 7785 5085 7515 4860 7425 4725 7695
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 3465 3645 2835 5175 3465 6750
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 4950 5535 5085 5265 4860 5175 4725 5445
- 0.000 -1.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5085 5629 5805 5404 6390 4905
- 0.000 -1.000 0.000
--- a/documentation/images/fsm-slavescan.fig Mon Oct 19 14:33:59 2009 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,104 +0,0 @@
-#FIG 3.2
-Portrait
-Center
-Metric
-A4
-100.00
-Single
--2
-1200 2
-0 32 #8e8e8e
-6 3413 893 5137 1627
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 1260 855 360 3420 900 5130 1620
-4 1 0 50 -1 16 12 0.0000 4 120 1080 4275 1305 START\001
--6
-6 3413 3136 5137 3870
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 3503 855 360 3420 3143 5130 3863
-4 1 0 50 -1 16 12 0.0000 4 120 1065 4275 3548 STATE\001
--6
-6 3413 4261 5137 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 4628 855 360 3420 4268 5130 4988
-4 1 0 50 -1 16 12 0.0000 4 120 900 4275 4673 BASE\001
--6
-6 3413 5386 5137 6120
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 5753 855 360 3420 5393 5130 6113
-4 1 0 50 -1 16 12 0.0000 4 120 1605 4275 5798 DATALINK\001
--6
-6 3165 6511 5385 7245
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 6878 855 360 3420 6518 5130 7238
-4 1 0 50 -1 16 12 0.0000 4 120 2220 4275 6923 EEPROM SIZE\001
--6
-6 3413 2011 5137 2745
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 2378 855 360 3420 2018 5130 2738
-4 1 0 50 -1 16 12 0.0000 4 120 1620 4275 2423 ADDRESS\001
--6
-6 3083 7636 5468 8370
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 4275 8003 855 360 3420 7643 5130 8363
-4 1 0 50 -1 16 12 0.0000 4 120 2385 4275 8048 EEPROM DATA\001
--6
-6 6075 4230 7830 4995
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6968 4628 855 360 6113 4268 7823 4988
-1 2 0 1 0 7 50 -1 -1 0.000 1 0.0000 6968 4635 748 315 6220 4320 7716 4950
-4 1 0 50 -1 16 12 0.0000 4 120 705 6968 4673 END\001
--6
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3510 675 101 101 3510 675 3600 720
-3 2 0 1 0 0 50 -1 20 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 3555 765 3690 990
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 1620 4275 2025
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 2745 4275 3150
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 3870 4275 4275
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 4995 4275 5400
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 6120 4275 6525
- 0.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 4275 7245 4275 7650
- 0.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5085 2520 5985 3285 6570 4275
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5130 3600 5805 3960 6300 4410
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 2
- 1 1 1.00 60.00 120.00
- 5130 4635 6120 4635
- 0.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5130 5680 5805 5320 6300 4870
- 0.000 -1.000 0.000
-3 2 1 1 0 7 50 -1 -1 4.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5085 6731 5985 5966 6570 4976
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 3
- 1 1 1.00 60.00 120.00
- 5130 7920 6390 6525 6930 4995
- 0.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 3510 7020 3105 7020 3105 6705 3510 6705
- 0.000 -1.000 -1.000 0.000
-3 2 0 1 0 7 50 -1 -1 0.000 0 1 0 4
- 1 1 1.00 60.00 120.00
- 3510 8145 3105 8145 3105 7830 3510 7830
- 0.000 -1.000 -1.000 0.000
--- a/examples/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/examples/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/examples/mini/Kbuild.in Mon Oct 19 14:33:59 2009 +0200
+++ b/examples/mini/Kbuild.in Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
# vi: syntax=make
#
--- a/examples/mini/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/examples/mini/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -6,32 +6,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/examples/mini/mini.c Mon Oct 19 14:33:59 2009 +0200
+++ b/examples/mini/mini.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -80,7 +73,7 @@
#define Beckhoff_EL3152 0x00000002, 0x0c503052
#define Beckhoff_EL4102 0x00000002, 0x10063052
-// offsets for Pdo entries
+// offsets for PDO entries
static unsigned int off_ana_in;
static unsigned int off_ana_out;
static unsigned int off_dig_out;
@@ -256,12 +249,12 @@
printk(KERN_INFO PFX "Still busy...\n");
break;
case EC_SDO_REQUEST_SUCCESS:
- printk(KERN_INFO PFX "Sdo value: 0x%04X\n",
+ printk(KERN_INFO PFX "SDO value: 0x%04X\n",
EC_READ_U16(ecrt_sdo_request_data(sdo)));
ecrt_sdo_request_read(sdo); // trigger next read
break;
case EC_SDO_REQUEST_ERROR:
- printk(KERN_INFO PFX "Failed to read Sdo!\n");
+ printk(KERN_INFO PFX "Failed to read SDO!\n");
ecrt_sdo_request_read(sdo); // retry reading
break;
}
@@ -296,7 +289,7 @@
check_slave_config_states();
#if SDO_ACCESS
- // read process data Sdo
+ // read process data SDO
read_sdo();
#endif
}
@@ -363,9 +356,9 @@
}
#if CONFIGURE_PDOS
- printk(KERN_INFO PFX "Configuring Pdos...\n");
+ printk(KERN_INFO PFX "Configuring PDOs...\n");
if (ecrt_slave_config_pdos(sc_ana_in, EC_END, el3152_syncs)) {
- printk(KERN_ERR PFX "Failed to configure Pdos.\n");
+ printk(KERN_ERR PFX "Failed to configure PDOs.\n");
goto out_release_master;
}
@@ -376,7 +369,7 @@
}
if (ecrt_slave_config_pdos(sc, EC_END, el4102_syncs)) {
- printk(KERN_ERR PFX "Failed to configure Pdos.\n");
+ printk(KERN_ERR PFX "Failed to configure PDOs.\n");
goto out_release_master;
}
@@ -387,23 +380,23 @@
}
if (ecrt_slave_config_pdos(sc, EC_END, el2004_syncs)) {
- printk(KERN_ERR PFX "Failed to configure Pdos.\n");
+ printk(KERN_ERR PFX "Failed to configure PDOs.\n");
goto out_release_master;
}
#endif
#if SDO_ACCESS
- printk(KERN_INFO PFX "Creating Sdo requests...\n");
+ printk(KERN_INFO PFX "Creating SDO requests...\n");
if (!(sdo = ecrt_slave_config_create_sdo_request(sc_ana_in, 0x3102, 2, 2))) {
- printk(KERN_ERR PFX "Failed to create Sdo request.\n");
+ printk(KERN_ERR PFX "Failed to create SDO request.\n");
goto out_release_master;
}
ecrt_sdo_request_timeout(sdo, 500); // ms
#endif
- printk(KERN_INFO PFX "Registering Pdo entries...\n");
+ printk(KERN_INFO PFX "Registering PDO entries...\n");
if (ecrt_domain_reg_pdo_entry_list(domain1, domain1_regs)) {
- printk(KERN_ERR PFX "Pdo entry registration failed!\n");
+ printk(KERN_ERR PFX "PDO entry registration failed!\n");
goto out_release_master;
}
--- a/examples/rtai/Kbuild.in Mon Oct 19 14:33:59 2009 +0200
+++ b/examples/rtai/Kbuild.in Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
# vi: syntax=make
#
--- a/examples/rtai/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/examples/rtai/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -4,32 +4,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/examples/rtai/rtai_sample.c Mon Oct 19 14:33:59 2009 +0200
+++ b/examples/rtai/rtai_sample.c Wed Jan 13 00:04:47 2010 +0100
@@ -4,32 +4,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -86,7 +79,7 @@
#define Beckhoff_EL2004 0x00000002, 0x07D43052
#define Beckhoff_EL3162 0x00000002, 0x0C5A3052
-static unsigned int off_ana_in; // offsets for Pdo entries
+static unsigned int off_ana_in; // offsets for PDO entries
static unsigned int off_dig_out;
const static ec_pdo_entry_reg_t domain1_regs[] = {
@@ -299,9 +292,9 @@
}
#ifdef CONFIGURE_PDOS
- printk(KERN_INFO PFX "Configuring Pdos...\n");
+ printk(KERN_INFO PFX "Configuring PDOs...\n");
if (ecrt_slave_config_pdos(sc_ana_in, EC_END, el3162_syncs)) {
- printk(KERN_ERR PFX "Failed to configure Pdos.\n");
+ printk(KERN_ERR PFX "Failed to configure PDOs.\n");
goto out_release_master;
}
@@ -311,14 +304,14 @@
}
if (ecrt_slave_config_pdos(sc, EC_END, el2004_syncs)) {
- printk(KERN_ERR PFX "Failed to configure Pdos.\n");
+ printk(KERN_ERR PFX "Failed to configure PDOs.\n");
goto out_release_master;
}
#endif
- printk(KERN_INFO PFX "Registering Pdo entries...\n");
+ printk(KERN_INFO PFX "Registering PDO entries...\n");
if (ecrt_domain_reg_pdo_entry_list(domain1, domain1_regs)) {
- printk(KERN_ERR PFX "Pdo entry registration failed!\n");
+ printk(KERN_ERR PFX "PDO entry registration failed!\n");
goto out_release_master;
}
--- a/globals.h Mon Oct 19 14:33:59 2009 +0200
+++ b/globals.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -61,7 +54,7 @@
/** Master version string
*/
-#define EC_MASTER_VERSION VERSION " " BRANCH " r" EC_STR(SVNREV)
+#define EC_MASTER_VERSION VERSION " r" EC_STR(SVNREV)
/*****************************************************************************/
--- a/include/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/include/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -6,32 +6,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/include/ecrt.h Mon Oct 19 14:33:59 2009 +0200
+++ b/include/ecrt.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -57,18 +50,18 @@
* offers the possibility to use a shared-memory region. Therefore,
* added the domain methods ecrt_domain_size() and
* ecrt_domain_external_memory().
- * - Pdo entry registration functions do not return a process data pointer,
+ * - PDO entry registration functions do not return a process data pointer,
* but an offset in the domain's process data. In addition, an optional bit
* position can be requested. This was necessary for the external domain
* memory. An additional advantage is, that the returned offset is
* immediately valid. If the domain's process data is allocated internally,
* the start address can be retrieved with ecrt_domain_data().
* - Replaced ecrt_slave_pdo_mapping/add/clear() with
- * ecrt_slave_config_pdo_assign_add() to add a Pdo to a sync manager's Pdo
- * assignment and ecrt_slave_config_pdo_mapping_add() to add a Pdo entry to a
- * Pdo's mapping. ecrt_slave_config_pdos() is a convenience function
+ * ecrt_slave_config_pdo_assign_add() to add a PDO to a sync manager's PDO
+ * assignment and ecrt_slave_config_pdo_mapping_add() to add a PDO entry to a
+ * PDO's mapping. ecrt_slave_config_pdos() is a convenience function
* for both, that uses the new data types ec_pdo_info_t and
- * ec_pdo_entry_info_t. Pdo entries, that are mapped with these functions
+ * ec_pdo_entry_info_t. PDO entries, that are mapped with these functions
* can now immediately be registered, even if the bus is offline.
* - Renamed ec_bus_status_t, ec_master_status_t to ec_bus_state_t and
* ec_master_state_t, respectively. Renamed ecrt_master_get_status() to
@@ -76,15 +69,15 @@
* - Added ec_domain_state_t and #ec_wc_state_t for a new output parameter
* of ecrt_domain_state(). The domain state object does now contain
* information, if the process data was exchanged completely.
- * - Former "Pdo registration" meant Pdo entry registration in fact, therefore
+ * - Former "PDO registration" meant PDO entry registration in fact, therefore
* renamed ec_pdo_reg_t to ec_pdo_entry_reg_t and ecrt_domain_register_pdo()
* to ecrt_slave_config_reg_pdo_entry().
* - Removed ecrt_domain_register_pdo_range(), because it's functionality can
- * be reached by specifying an explicit Pdo assignment/mapping and
- * registering the mapped Pdo entries.
- * - Added an Sdo access interface, working with Sdo requests. These can be
+ * be reached by specifying an explicit PDO assignment/mapping and
+ * registering the mapped PDO entries.
+ * - Added an SDO access interface, working with SDO requests. These can be
* scheduled for reading and writing during realtime operation.
- * - Exported ecrt_slave_config_sdo(), the generic Sdo configuration function.
+ * - Exported ecrt_slave_config_sdo(), the generic SDO configuration function.
* - Removed the bus_state and bus_tainted flags from ec_master_state_t.
*
* @{
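
The external-memory domain methods mentioned in the changelog above are intended to be called after all PDO entries have been registered and before the master is activated. A minimal sketch of that call order, assuming a domain domain1 already exists and shared_region is a hypothetical buffer of at least the required size:

    /* After registering all PDO entries, before activating the master: */
    size_t size = ecrt_domain_size(domain1);      /* required process data size */
    uint8_t *pd = shared_region;                  /* hypothetical buffer, >= size bytes */
    ecrt_domain_external_memory(domain1, pd);     /* domain stores its data here */
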
@@ -221,7 +214,7 @@
/*****************************************************************************/
-/** Direction type for Pdo assignment functions.
+/** Direction type for PDO assignment functions.
*/
typedef enum {
EC_DIR_INVALID, /**< Invalid direction. Do not use this value. */
@@ -232,33 +225,33 @@
/*****************************************************************************/
-/** Pdo entry configuration information.
+/** PDO entry configuration information.
*
* This is the data type of the \a entries field in ec_pdo_info_t.
*
* \see ecrt_slave_config_pdos().
*/
typedef struct {
- uint16_t index; /**< Pdo entry index. */
- uint8_t subindex; /**< Pdo entry subindex. */
- uint8_t bit_length; /**< Size of the Pdo entry in bit. */
+ uint16_t index; /**< PDO entry index. */
+ uint8_t subindex; /**< PDO entry subindex. */
+ uint8_t bit_length; /**< Size of the PDO entry in bit. */
} ec_pdo_entry_info_t;
/*****************************************************************************/
-/** Pdo configuration information.
+/** PDO configuration information.
*
* This is the data type of the \a pdos field in ec_sync_info_t.
*
* \see ecrt_slave_config_pdos().
*/
typedef struct {
- uint16_t index; /**< Pdo index. */
- unsigned int n_entries; /**< Number of Pdo entries in \a entries to map.
+ uint16_t index; /**< PDO index. */
+ unsigned int n_entries; /**< Number of PDO entries in \a entries to map.
Zero means, that the default mapping shall be
used (this can only be done if the slave is
present at bus configuration time). */
- ec_pdo_entry_info_t *entries; /**< Array of Pdo entries to map. Can either
+ ec_pdo_entry_info_t *entries; /**< Array of PDO entries to map. Can either
be \a NULL, or must contain at
least \a n_entries values. */
} ec_pdo_info_t;
@@ -267,8 +260,8 @@
/** Sync manager configuration information.
*
- * This can be use to configure multiple sync managers including the Pdo
- * assignment and Pdo mapping. It is used as an input parameter type in
+ * This can be used to configure multiple sync managers including the PDO
+ * assignment and PDO mapping. It is used as an input parameter type in
* ecrt_slave_config_pdos().
*/
typedef struct {
@@ -276,14 +269,14 @@
than #EC_MAX_SYNC_MANAGERS for a valid sync manager,
but can also be \a 0xff to mark the end of the list. */
ec_direction_t dir; /**< Sync manager direction. */
- unsigned int n_pdos; /**< Number of Pdos in \a pdos. */
- ec_pdo_info_t *pdos; /**< Array with Pdos to assign. This must contain
- at least \a n_pdos Pdos. */
+ unsigned int n_pdos; /**< Number of PDOs in \a pdos. */
+ ec_pdo_info_t *pdos; /**< Array with PDOs to assign. This must contain
+ at least \a n_pdos PDOs. */
} ec_sync_info_t;
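
For illustration, the three types above nest as follows. This is a minimal sketch with made-up indices and sizes (real values are slave-specific); EC_DIR_OUTPUT is assumed to be the output value of ec_direction_t:

    /* Illustrative only: one output PDO 0x1600 mapping a single 16-bit entry,
     * assigned to sync manager 2. */
    static ec_pdo_entry_info_t example_pdo_entries[] = {
        {0x7000, 0x01, 16},                  /* index, subindex, bit length */
    };

    static ec_pdo_info_t example_pdos[] = {
        {0x1600, 1, example_pdo_entries},    /* PDO index, entry count, entries */
    };

    static ec_sync_info_t example_syncs[] = {
        {2, EC_DIR_OUTPUT, 1, example_pdos}, /* sync manager 2, one assigned PDO */
        {0xff}                               /* 0xff marks the end of the list */
    };
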
/*****************************************************************************/
-/** List record type for Pdo entry mass-registration.
+/** List record type for PDO entry mass-registration.
*
* This type is used for the array parameter of the
* ecrt_domain_reg_pdo_entry_list()
@@ -293,19 +286,19 @@
uint16_t position; /**< Slave position. */
uint32_t vendor_id; /**< Slave vendor ID. */
uint32_t product_code; /**< Slave product code. */
- uint16_t index; /**< Pdo entry index. */
- uint8_t subindex; /**< Pdo entry subindex. */
- unsigned int *offset; /**< Pointer to a variable to store the Pdo entry's
+ uint16_t index; /**< PDO entry index. */
+ uint8_t subindex; /**< PDO entry subindex. */
+ unsigned int *offset; /**< Pointer to a variable to store the PDO entry's
(byte-)offset in the process data. */
unsigned int *bit_position; /**< Pointer to a variable to store a bit
position (0-7) within the \a offset. Can be
NULL, in which case an error is raised if the
- Pdo entry does not byte-align. */
+ PDO entry does not byte-align. */
} ec_pdo_entry_reg_t;
/*****************************************************************************/
-/** Sdo request state.
+/** SDO request state.
*
* This is used as return type of ecrt_sdo_request_state().
*/
@@ -378,7 +371,7 @@
*
* For process data exchange, at least one process data domain is needed.
* This method creates a new process data domain and returns a pointer to the
- * new domain object. This object can be used for registering Pdos and
+ * new domain object. This object can be used for registering PDOs and
* exchanging them in cyclic operation.
*
* \return Pointer to the new domain on success, else NULL.
@@ -495,7 +488,7 @@
ec_direction_t dir /**< Input/Output. */
);
-/** Add a Pdo to a sync manager's Pdo assignment.
+/** Add a PDO to a sync manager's PDO assignment.
*
* \see ecrt_slave_config_pdos()
* \return zero on success, else non-zero
@@ -504,12 +497,12 @@
ec_slave_config_t *sc, /**< Slave configuration. */
uint8_t sync_index, /**< Sync manager index. Must be less
than #EC_MAX_SYNC_MANAGERS. */
- uint16_t index /**< Index of the Pdo to assign. */
- );
-
-/** Clear a sync manager's Pdo assignment.
- *
- * This can be called before assigning Pdos via
+ uint16_t index /**< Index of the PDO to assign. */
+ );
+
+/** Clear a sync manager's PDO assignment.
+ *
+ * This can be called before assigning PDOs via
* ecrt_slave_config_pdo_assign_add(), to clear the default assignment of a
* sync manager.
*
@@ -521,34 +514,34 @@
than #EC_MAX_SYNC_MANAGERS. */
);
-/** Add a Pdo entry to the given Pdo's mapping.
+/** Add a PDO entry to the given PDO's mapping.
*
* \see ecrt_slave_config_pdos()
* \return zero on success, else non-zero
*/
int ecrt_slave_config_pdo_mapping_add(
ec_slave_config_t *sc, /**< Slave configuration. */
- uint16_t pdo_index, /**< Index of the Pdo. */
- uint16_t entry_index, /**< Index of the Pdo entry to add to the Pdo's
+ uint16_t pdo_index, /**< Index of the PDO. */
+ uint16_t entry_index, /**< Index of the PDO entry to add to the PDO's
mapping. */
- uint8_t entry_subindex, /**< Subindex of the Pdo entry to add to the
- Pdo's mapping. */
- uint8_t entry_bit_length /**< Size of the Pdo entry in bit. */
- );
-
-/** Clear the mapping of a given Pdo.
- *
- * This can be called before mapping Pdo entries via
+ uint8_t entry_subindex, /**< Subindex of the PDO entry to add to the
+ PDO's mapping. */
+ uint8_t entry_bit_length /**< Size of the PDO entry in bit. */
+ );
+
+/** Clear the mapping of a given PDO.
+ *
+ * This can be called before mapping PDO entries via
* ecrt_slave_config_pdo_mapping_add(), to clear the default mapping.
*
* \see ecrt_slave_config_pdos()
*/
void ecrt_slave_config_pdo_mapping_clear(
ec_slave_config_t *sc, /**< Slave configuration. */
- uint16_t pdo_index /**< Index of the Pdo. */
- );
-
-/** Specify a complete Pdo configuration.
+ uint16_t pdo_index /**< Index of the PDO. */
+ );
+
+/** Specify a complete PDO configuration.
*
* This function is a convenience wrapper for the functions
* ecrt_slave_config_sync_manager(), ecrt_slave_config_pdo_assign_clear(),
@@ -557,7 +550,7 @@
* automatic code generation.
*
* The following example shows, how to specify a complete configuration,
- * including the Pdo mappings. With this information, the master is able to
+ * including the PDO mappings. With this information, the master is able to
* reserve the complete process data, even if the slave is not present at
* configuration time:
*
@@ -588,9 +581,9 @@
* }
* \endcode
*
- * The next example shows, how to configure the Pdo assignment only. The
- * entries for each assigned Pdo are taken from the Pdo's default mapping.
- * Please note, that Pdo entry registration will fail, if the Pdo
+ * The next example shows how to configure the PDO assignment only. The
+ * entries for each assigned PDO are taken from the PDO's default mapping.
+ * Please note that PDO entry registration will fail if the PDO
* configuration is left empty and the slave is offline.
*
* \code
@@ -624,46 +617,46 @@
configurations. */
);
-/** Registers a Pdo entry for process data exchange in a domain.
- *
- * Searches the assigned Pdos for the given Pdo entry. An error is raised, if
+/** Registers a PDO entry for process data exchange in a domain.
+ *
+ * Searches the assigned PDOs for the given PDO entry. An error is raised if
* the given entry is not mapped. Otherwise, the corresponding sync manager
* and FMMU configurations are provided for slave configuration and the
- * respective sync manager's assigned Pdos are appended to the given domain,
- * if not already done. The offset of the requested Pdo entry's data inside
- * the domain's process data is returned. Optionally, the Pdo entry bit
+ * respective sync manager's assigned PDOs are appended to the given domain,
+ * if not already done. The offset of the requested PDO entry's data inside
+ * the domain's process data is returned. Optionally, the PDO entry bit
* position (0-7) can be retrieved via the \a bit_position output parameter.
- * This pointer may be \a NULL, in this case an error is raised if the Pdo
+ * This pointer may be \a NULL; in this case an error is raised if the PDO
* entry does not byte-align.
*
- * \retval >=0 Success: Offset of the Pdo entry's process data.
- * \retval -1 Error: Pdo entry not found.
- * \retval -2 Error: Failed to register Pdo entry.
- * \retval -3 Error: Pdo entry is not byte-aligned.
+ * \retval >=0 Success: Offset of the PDO entry's process data.
+ * \retval -1 Error: PDO entry not found.
+ * \retval -2 Error: Failed to register PDO entry.
+ * \retval -3 Error: PDO entry is not byte-aligned.
*/
int ecrt_slave_config_reg_pdo_entry(
ec_slave_config_t *sc, /**< Slave configuration. */
- uint16_t entry_index, /**< Index of the Pdo entry to register. */
- uint8_t entry_subindex, /**< Subindex of the Pdo entry to register. */
+ uint16_t entry_index, /**< Index of the PDO entry to register. */
+ uint8_t entry_subindex, /**< Subindex of the PDO entry to register. */
ec_domain_t *domain, /**< Domain. */
unsigned int *bit_position /**< Optional address if bit addressing
is desired */
);
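
A minimal usage sketch for the function declared above, assuming a slave configuration sc and a domain domain1 already exist; the entry index 0x6000:01 is illustrative:

    static unsigned int off_status;   /* byte offset within the domain's process data */
    unsigned int bit_pos;             /* bit position (0-7), optional */
    int ret;

    ret = ecrt_slave_config_reg_pdo_entry(sc, 0x6000, 0x01, domain1, &bit_pos);
    if (ret < 0) {
        /* entry not mapped, registration failed, or entry not byte-aligned */
    } else {
        off_status = ret;
    }

    /* Later, in the cyclic task, once the domain data has been exchanged: */
    uint16_t status = EC_READ_U16(ecrt_domain_data(domain1) + off_status);
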
-/** Add an Sdo configuration.
- *
- * An Sdo configuration is stored in the slave configuration object and is
+/** Add an SDO configuration.
+ *
+ * An SDO configuration is stored in the slave configuration object and is
* downloaded to the slave whenever the slave is being configured by the
* master. This usually happens once on master activation, but can be repeated
* subsequently, for example after the slave's power supply failed.
*
- * \attention The Sdos for Pdo assignment (\p 0x1C10 - \p 0x1C2F) and Pdo
+ * \attention The SDOs for PDO assignment (\p 0x1C10 - \p 0x1C2F) and PDO
* mapping (\p 0x1600 - \p 0x17FF and \p 0x1A00 - \p 0x1BFF) should not be
* configured with this function, because they are part of the slave
* configuration done by the master. Please use ecrt_slave_config_pdos() and
* friends instead.
*
- * This is the generic function for adding an Sdo configuration. Please note
+ * This is the generic function for adding an SDO configuration. Please note
 * that this function does not do any endianness correction. If
 * datatype-specific functions are needed (that automatically correct the
 * endianness), have a look at ecrt_slave_config_sdo8(),
@@ -673,8 +666,8 @@
*/
int ecrt_slave_config_sdo(
ec_slave_config_t *sc, /**< Slave configuration. */
- uint16_t index, /**< Index of the Sdo to configure. */
- uint8_t subindex, /**< Subindex of the Sdo to configure. */
+ uint16_t index, /**< Index of the SDO to configure. */
+ uint8_t subindex, /**< Subindex of the SDO to configure. */
const uint8_t *data, /**< Pointer to the data. */
size_t size /**< Size of the \a data. */
);
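
A minimal sketch for the generic variant declared above, assuming sc exists; index, subindex and payload are illustrative, and the raw bytes are passed through without endianness correction:

    /* Two-byte payload in the byte order the slave expects. */
    static const uint8_t sdo_data[] = { 0x00, 0x04 };

    if (ecrt_slave_config_sdo(sc, 0x8010, 0x01, sdo_data, sizeof(sdo_data))) {
        /* failed to store the SDO configuration */
    }
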
@@ -715,15 +708,15 @@
uint32_t value /**< Value to set. */
);
-/** Create an Sdo request to exchange Sdos during realtime operation.
- *
- * The created Sdo request object is freed automatically when the master is
+/** Create an SDO request to exchange SDOs during realtime operation.
+ *
+ * The created SDO request object is freed automatically when the master is
* released.
*/
ec_sdo_request_t *ecrt_slave_config_create_sdo_request(
ec_slave_config_t *sc, /**< Slave configuration. */
- uint16_t index, /**< Sdo index. */
- uint8_t subindex, /**< Sdo subindex. */
+ uint16_t index, /**< SDO index. */
+ uint8_t subindex, /**< SDO subindex. */
size_t size /**< Data size to reserve. */
);
@@ -740,7 +733,7 @@
* Domain methods
*****************************************************************************/
-/** Registers a bunch of Pdo entries for a domain.
+/** Registers a bunch of PDO entries for a domain.
*
* \todo doc
* \attention The registration array has to be terminated with an empty
@@ -749,7 +742,7 @@
*/
int ecrt_domain_reg_pdo_entry_list(
ec_domain_t *domain, /**< Domain. */
- const ec_pdo_entry_reg_t *pdo_entry_regs /**< Array of Pdo
+ const ec_pdo_entry_reg_t *pdo_entry_regs /**< Array of PDO
registrations. */
);
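
A minimal sketch of a registration list for the function above, in the spirit of the mini/rtai examples earlier in this patch; the domain domain1, the vendor/product codes and the entry index are illustrative, and designated initializers are used so the sketch does not depend on the full field order:

    static unsigned int off_ana_in;   /* receives the byte offset on registration */

    static const ec_pdo_entry_reg_t domain1_regs[] = {
        {
            .position     = 1,            /* ring position of the slave */
            .vendor_id    = 0x00000002,   /* Beckhoff */
            .product_code = 0x0c5a3052,   /* EL3162 (illustrative) */
            .index        = 0x3101,
            .subindex     = 2,
            .offset       = &off_ana_in,
        },
        {}   /* the array must be terminated with an empty record */
    };

    if (ecrt_domain_reg_pdo_entry_list(domain1, domain1_regs)) {
        /* PDO entry registration failed */
    }
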
@@ -758,16 +751,16 @@
* \return Size of the process data image.
*/
size_t ecrt_domain_size(
- ec_domain_t *domain /**< Domain. */
+ const ec_domain_t *domain /**< Domain. */
);
/** Provide external memory to store the domain's process data.
*
- * Call this after all Pdo entries have been registered and before activating
+ * Call this after all PDO entries have been registered and before activating
* the master.
*
* The size of the allocated memory must be at least ecrt_domain_size(), after
- * all Pdo entries have been registered.
+ * all PDO entries have been registered.
*/
void ecrt_domain_external_memory(
ec_domain_t *domain, /**< Domain. */
@@ -818,10 +811,10 @@
);
/*****************************************************************************
- * Sdo request methods.
+ * SDO request methods.
****************************************************************************/
-/** Set the timeout for an Sdo request.
+/** Set the timeout for an SDO request.
*
 * If the request cannot be processed in the specified time, it will be marked
* as failed.
@@ -830,14 +823,14 @@
* the next call of this method.
*/
void ecrt_sdo_request_timeout(
- ec_sdo_request_t *req, /**< Sdo request. */
+ ec_sdo_request_t *req, /**< SDO request. */
uint32_t timeout /**< Timeout in milliseconds. Zero means no
timeout. */
);
-/** Access to the Sdo request's data.
- *
- * This function returns a pointer to the request's internal Sdo data memory.
+/** Access to the SDO request's data.
+ *
+ * This function returns a pointer to the request's internal SDO data memory.
*
* - After a read operation was successful, integer data can be evaluated using
* the EC_READ_*() macros as usual. Example:
@@ -853,45 +846,45 @@
* \endcode
*
* \attention The return value can be invalid during a read operation, because
- * the internal Sdo data memory could be re-allocated if the read Sdo data do
+ * the internal SDO data memory could be re-allocated if the read SDO data do
* not fit inside.
*
- * \return Pointer to the internal Sdo data memory.
+ * \return Pointer to the internal SDO data memory.
*/
uint8_t *ecrt_sdo_request_data(
- ec_sdo_request_t *req /**< Sdo request. */
- );
-
-/** Returns the current Sdo data size.
- *
- * When the Sdo request is created, the data size is set to the size of the
+ ec_sdo_request_t *req /**< SDO request. */
+ );
+
+/** Returns the current SDO data size.
+ *
+ * When the SDO request is created, the data size is set to the size of the
* reserved memory. After a read operation the size is set to the size of the
* read data. The size is not modified in any other situation.
*
- * \return Sdo data size in bytes.
+ * \return SDO data size in bytes.
*/
size_t ecrt_sdo_request_data_size(
- const ec_sdo_request_t *req /**< Sdo request. */
- );
-
-/** Get the current state of the Sdo request.
+ const ec_sdo_request_t *req /**< SDO request. */
+ );
+
+/** Get the current state of the SDO request.
*
* \return Request state.
*/
ec_sdo_request_state_t ecrt_sdo_request_state(
- const ec_sdo_request_t *req /**< Sdo request. */
+ const ec_sdo_request_t *req /**< SDO request. */
);
-/** Schedule an Sdo write operation.
+/** Schedule an SDO write operation.
*
* \attention This method may not be called while ecrt_sdo_request_state()
* returns EC_SDO_REQUEST_BUSY.
*/
void ecrt_sdo_request_write(
- ec_sdo_request_t *req /**< Sdo request. */
- );
-
-/** Schedule an Sdo read operation.
+ ec_sdo_request_t *req /**< SDO request. */
+ );
+
+/** Schedule an SDO read operation.
*
* \attention This method may not be called while ecrt_sdo_request_state()
* returns EC_SDO_REQUEST_BUSY.
@@ -901,7 +894,7 @@
* ecrt_sdo_request_state() returns EC_SDO_REQUEST_BUSY.
*/
void ecrt_sdo_request_read(
- ec_sdo_request_t *req /**< Sdo request. */
+ ec_sdo_request_t *req /**< SDO request. */
);
/******************************************************************************
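
The SDO request methods above are typically driven from the cyclic task, as in the mini example earlier in this patch. A minimal polling sketch, assuming sdo was created with ecrt_slave_config_create_sdo_request() and holds a 16-bit value:

    uint16_t value;

    switch (ecrt_sdo_request_state(sdo)) {
    case EC_SDO_REQUEST_BUSY:
        break;                                  /* transfer still in progress */
    case EC_SDO_REQUEST_SUCCESS:
        value = EC_READ_U16(ecrt_sdo_request_data(sdo));
        ecrt_sdo_request_read(sdo);             /* schedule the next read */
        break;
    case EC_SDO_REQUEST_ERROR:
        ecrt_sdo_request_read(sdo);             /* retry */
        break;
    default:                                    /* not triggered yet */
        ecrt_sdo_request_read(sdo);
        break;
    }
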
--- a/master/Kbuild.in Mon Oct 19 14:33:59 2009 +0200
+++ b/master/Kbuild.in Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
# vi: syntax=make
#
--- a/master/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/master/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,37 +2,30 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
# HEADERS, because of tags target
-nodist_noinst_HEADERS = \
+noinst_HEADERS = \
Kbuild.in \
cdev.c cdev.h \
datagram.c datagram.h \
@@ -79,4 +72,13 @@
clean-local:
$(MAKE) -C "$(LINUX_SOURCE_DIR)" M="@abs_srcdir@" clean
+modulesdir=@prefix@/modules
+SYMVERS=`echo $(top_builddir)/Module*.symvers`
+
+install-data-local:
+ @test -n "$(SYMVERS)" && \
+ mkdir -p $(DESTDIR)$(modulesdir) && \
+ cp -vf $(SYMVERS) \
+ $(DESTDIR)$(modulesdir)/ec_master.symvers
+
#------------------------------------------------------------------------------
--- a/master/cdev.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/cdev.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -275,7 +268,7 @@
/*****************************************************************************/
-/** Get slave sync manager Pdo information.
+/** Get slave sync manager PDO information.
*/
int ec_cdev_ioctl_slave_sync_pdo(
ec_master_t *master, /**< EtherCAT master. */
@@ -312,7 +305,7 @@
if (!(pdo = ec_pdo_list_find_pdo_by_pos_const(
&sync->pdos, data.pdo_pos))) {
up(&master->master_sem);
- EC_ERR("Sync manager %u does not contain a Pdo with "
+ EC_ERR("Sync manager %u does not contain a PDO with "
"position %u in slave %u!\n", data.sync_index,
data.pdo_pos, data.slave_position);
return -EINVAL;
@@ -332,7 +325,7 @@
/*****************************************************************************/
-/** Get slave sync manager Pdo entry information.
+/** Get slave sync manager PDO entry information.
*/
int ec_cdev_ioctl_slave_sync_pdo_entry(
ec_master_t *master, /**< EtherCAT master. */
@@ -370,7 +363,7 @@
if (!(pdo = ec_pdo_list_find_pdo_by_pos_const(
&sync->pdos, data.pdo_pos))) {
up(&master->master_sem);
- EC_ERR("Sync manager %u does not contain a Pdo with "
+ EC_ERR("Sync manager %u does not contain a PDO with "
"position %u in slave %u!\n", data.sync_index,
data.pdo_pos, data.slave_position);
return -EINVAL;
@@ -379,7 +372,7 @@
if (!(entry = ec_pdo_find_entry_by_pos_const(
pdo, data.entry_pos))) {
up(&master->master_sem);
- EC_ERR("Pdo 0x%04X does not contain an entry with "
+ EC_ERR("PDO 0x%04X does not contain an entry with "
"position %u in slave %u!\n", data.pdo_pos,
data.entry_pos, data.slave_position);
return -EINVAL;
@@ -518,8 +511,10 @@
}
if (copy_to_user((void __user *) data.target, domain->data,
- domain->data_size))
- return -EFAULT;
+ domain->data_size)) {
+ up(&master->master_sem);
+ return -EFAULT;
+ }
up(&master->master_sem);
return 0;
@@ -574,7 +569,7 @@
/*****************************************************************************/
-/** Get slave Sdo information.
+/** Get slave SDO information.
*/
int ec_cdev_ioctl_slave_sdo(
ec_master_t *master, /**< EtherCAT master. */
@@ -602,7 +597,7 @@
if (!(sdo = ec_slave_get_sdo_by_pos_const(
slave, data.sdo_position))) {
up(&master->master_sem);
- EC_ERR("Sdo %u does not exist in slave %u!\n",
+ EC_ERR("SDO %u does not exist in slave %u!\n",
data.sdo_position, data.slave_position);
return -EINVAL;
}
@@ -621,7 +616,7 @@
/*****************************************************************************/
-/** Get slave Sdo entry information.
+/** Get slave SDO entry information.
*/
int ec_cdev_ioctl_slave_sdo_entry(
ec_master_t *master, /**< EtherCAT master. */
@@ -651,7 +646,7 @@
if (!(sdo = ec_slave_get_sdo_by_pos_const(
slave, -data.sdo_spec))) {
up(&master->master_sem);
- EC_ERR("Sdo %u does not exist in slave %u!\n",
+ EC_ERR("SDO %u does not exist in slave %u!\n",
-data.sdo_spec, data.slave_position);
return -EINVAL;
}
@@ -659,7 +654,7 @@
if (!(sdo = ec_slave_get_sdo_const(
slave, data.sdo_spec))) {
up(&master->master_sem);
- EC_ERR("Sdo 0x%04X does not exist in slave %u!\n",
+ EC_ERR("SDO 0x%04X does not exist in slave %u!\n",
data.sdo_spec, data.slave_position);
return -EINVAL;
}
@@ -668,7 +663,7 @@
if (!(entry = ec_sdo_get_entry_const(
sdo, data.sdo_entry_subindex))) {
up(&master->master_sem);
- EC_ERR("Sdo entry 0x%04X:%02X does not exist "
+ EC_ERR("SDO entry 0x%04X:%02X does not exist "
"in slave %u!\n", sdo->index,
data.sdo_entry_subindex, data.slave_position);
return -EINVAL;
@@ -688,7 +683,7 @@
/*****************************************************************************/
-/** Upload Sdo.
+/** Upload SDO.
*/
int ec_cdev_ioctl_slave_sdo_upload(
ec_master_t *master, /**< EtherCAT master. */
@@ -773,7 +768,7 @@
/*****************************************************************************/
-/** Download Sdo.
+/** Download SDO.
*/
int ec_cdev_ioctl_slave_sdo_download(
ec_master_t *master, /**< EtherCAT master. */
@@ -809,7 +804,6 @@
request.req.data_size = data.data_size;
ecrt_sdo_request_write(&request.req);
-
if (down_interruptible(&master->master_sem))
return -EINTR;
@@ -1034,7 +1028,7 @@
/*****************************************************************************/
-/** Get slave configuration Pdo information.
+/** Get slave configuration PDO information.
*/
int ec_cdev_ioctl_config_pdo(
ec_master_t *master, /**< EtherCAT master. */
@@ -1070,7 +1064,7 @@
&sc->sync_configs[data.sync_index].pdos,
data.pdo_pos))) {
up(&master->master_sem);
- EC_ERR("Invalid Pdo position!\n");
+ EC_ERR("Invalid PDO position!\n");
return -EINVAL;
}
@@ -1088,7 +1082,7 @@
/*****************************************************************************/
-/** Get slave configuration Pdo entry information.
+/** Get slave configuration PDO entry information.
*/
int ec_cdev_ioctl_config_pdo_entry(
ec_master_t *master, /**< EtherCAT master. */
@@ -1125,7 +1119,7 @@
&sc->sync_configs[data.sync_index].pdos,
data.pdo_pos))) {
up(&master->master_sem);
- EC_ERR("Invalid Pdo position!\n");
+ EC_ERR("Invalid PDO position!\n");
return -EINVAL;
}
@@ -1151,7 +1145,7 @@
/*****************************************************************************/
-/** Get slave configuration Sdo information.
+/** Get slave configuration SDO information.
*/
int ec_cdev_ioctl_config_sdo(
ec_master_t *master, /**< EtherCAT master. */
@@ -1180,7 +1174,7 @@
if (!(req = ec_slave_config_get_sdo_by_pos_const(
sc, data.sdo_pos))) {
up(&master->master_sem);
- EC_ERR("Invalid Sdo position!\n");
+ EC_ERR("Invalid SDO position!\n");
return -EINVAL;
}
@@ -1238,7 +1232,7 @@
ec_master_t *master = cdev->master;
if (master->debug_level)
- EC_DBG("ioctl(filp = %x, cmd = %u (%u), arg = %x)\n",
+ EC_DBG("ioctl(filp = 0x%x, cmd = 0x%x (0x%x), arg = 0x%x)\n",
(u32) filp, (u32) cmd, (u32) _IOC_NR(cmd), (u32) arg);
switch (cmd) {
--- a/master/cdev.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/cdev.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/datagram.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/datagram.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -120,8 +113,10 @@
*/
void ec_datagram_clear(ec_datagram_t *datagram /**< EtherCAT datagram. */)
{
- if (datagram->data_origin == EC_ORIG_INTERNAL && datagram->data)
+ if (datagram->data_origin == EC_ORIG_INTERNAL && datagram->data) {
kfree(datagram->data);
+ datagram->data = NULL;
+ }
}
/*****************************************************************************/
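The ec_datagram_clear() hunk above does more than reflow the header: after freeing internally allocated datagram memory it also resets the data pointer, so a repeated clear cannot free the same buffer twice. A minimal user-space sketch of that idiom, using malloc/free as stand-ins for the kernel allocators (struct and function names here are illustrative, not the master's API):

    #include <stdlib.h>

    struct buffer {
        void *data;            /* payload, NULL when not allocated */
        int owns_data;         /* nonzero if we allocated it ourselves */
    };

    static void buffer_clear(struct buffer *b)
    {
        if (b->owns_data && b->data) {
            free(b->data);
            b->data = NULL;    /* a second clear becomes a harmless no-op */
        }
    }

    int main(void)
    {
        struct buffer b = { malloc(16), 1 };
        buffer_clear(&b);
        buffer_clear(&b);      /* safe: data is NULL, nothing is freed again */
        return 0;
    }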
--- a/master/datagram.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/datagram.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/debug.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/debug.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/debug.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/debug.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/device.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/device.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/device.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/device.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/domain.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/domain.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -161,20 +154,20 @@
}
// If LRW is used, output FMMUs increment the working counter by 2,
// while input FMMUs increment it by 1.
- domain->expected_working_counter =
+ domain->expected_working_counter +=
used[EC_DIR_OUTPUT] * 2 + used[EC_DIR_INPUT];
} else if (used[EC_DIR_OUTPUT]) { // outputs only
if (ec_datagram_lwr(datagram, logical_offset, data_size, data)) {
kfree(datagram);
return -1;
}
- domain->expected_working_counter = used[EC_DIR_OUTPUT];
+ domain->expected_working_counter += used[EC_DIR_OUTPUT];
} else { // inputs only (or nothing)
if (ec_datagram_lrd(datagram, logical_offset, data_size, data)) {
kfree(datagram);
return -1;
}
- domain->expected_working_counter = used[EC_DIR_INPUT];
+ domain->expected_working_counter += used[EC_DIR_INPUT];
}
list_add_tail(&datagram->list, &domain->datagrams);
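The hunk above switches the expected working counter from assignment to accumulation: a domain whose process data is split over several datagrams now sums the per-datagram contributions (2 per output FMMU and 1 per input FMMU for LRW, 1 per FMMU for LWR or LRD) instead of keeping only the last datagram's value. A small stand-alone sketch of that arithmetic, using a hypothetical two-datagram domain rather than code from the master:

    #include <stdio.h>

    struct datagram_use { unsigned outputs; unsigned inputs; };

    int main(void)
    {
        /* hypothetical domain split over two datagrams */
        struct datagram_use dg[] = { { 2, 1 }, { 0, 3 } };
        unsigned expected_wc = 0;
        unsigned i;

        for (i = 0; i < sizeof(dg) / sizeof(dg[0]); i++) {
            if (dg[i].outputs && dg[i].inputs)        /* LRW datagram */
                expected_wc += dg[i].outputs * 2 + dg[i].inputs;
            else if (dg[i].outputs)                   /* LWR datagram */
                expected_wc += dg[i].outputs;
            else                                      /* LRD datagram */
                expected_wc += dg[i].inputs;
        }

        printf("expected working counter: %u\n", expected_wc); /* 5 + 3 = 8 */
        return 0;
    }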
@@ -343,7 +336,7 @@
/*****************************************************************************/
-size_t ecrt_domain_size(ec_domain_t *domain)
+size_t ecrt_domain_size(const ec_domain_t *domain)
{
return domain->data_size;
}
--- a/master/domain.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/domain.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/doxygen.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/doxygen.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -61,32 +54,25 @@
\section sec_license License
\verbatim
- Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
This file is part of the IgH EtherCAT Master.
- The IgH EtherCAT Master is free software; you can redistribute it
- and/or modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2 of the
- License, or (at your option) any later version.
+ The IgH EtherCAT Master is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License version 2, as
+ published by the Free Software Foundation.
- The IgH EtherCAT Master is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
+ The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ Public License for more details.
- You should have received a copy of the GNU General Public License
- along with the IgH EtherCAT Master; if not, write to the Free Software
+ You should have received a copy of the GNU General Public License along
+ with the IgH EtherCAT Master; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- The right to use EtherCAT Technology is granted and comes free of
- charge under condition of compatibility of product made by
- Licensee. People intending to distribute/sell products based on the
- code, have to sign an agreement to guarantee that products using
- software based on IgH EtherCAT master stay compatible with the actual
- EtherCAT specification (which are released themselves as an open
- standard) as the (only) precondition to have the right to use EtherCAT
- Technology, IP and trade marks.
+ Using the EtherCAT technology and brand is permitted in compliance with the
+ industrial property and similar rights of Beckhoff Automation GmbH.
\endverbatim
*/
--- a/master/ethernet.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/ethernet.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- Ethernet-over-EtherCAT (EoE).
+ Ethernet over EtherCAT (EoE).
*/
/*****************************************************************************/
--- a/master/ethernet.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/ethernet.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- Ethernet-over-EtherCAT (EoE)
+ Ethernet over EtherCAT (EoE)
*/
/*****************************************************************************/
@@ -64,7 +57,7 @@
typedef struct ec_eoe ec_eoe_t; /**< \see ec_eoe */
/**
- Ethernet-over-EtherCAT (EoE) handler.
+ Ethernet over EtherCAT (EoE) handler.
The master creates one of these objects for each slave that supports the
EoE protocol.
*/
--- a/master/fmmu_config.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fmmu_config.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -48,7 +41,7 @@
/** FMMU configuration constructor.
*
* Inits an FMMU configuration, sets the logical start address and adds the
- * process data size for the mapped Pdos of the given direction to the domain
+ * process data size for the mapped PDOs of the given direction to the domain
* data size.
*/
void ec_fmmu_config_init(
@@ -56,7 +49,7 @@
ec_slave_config_t *sc, /**< EtherCAT slave configuration. */
ec_domain_t *domain, /**< EtherCAT domain. */
uint8_t sync_index, /**< Sync manager index to use. */
- ec_direction_t dir /**< Pdo direction. */
+ ec_direction_t dir /**< PDO direction. */
)
{
INIT_LIST_HEAD(&fmmu->list);
--- a/master/fmmu_config.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fmmu_config.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -56,7 +49,7 @@
uint8_t sync_index; /**< Index of sync manager to use. */
ec_direction_t dir; /**< FMMU direction. */
uint32_t logical_start_address; /**< Logical start address. */
- unsigned int data_size; /**< Covered Pdo size. */
+ unsigned int data_size; /**< Covered PDO size. */
} ec_fmmu_config_t;
/*****************************************************************************/
--- a/master/fsm_change.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_change.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/fsm_change.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_change.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/fsm_coe.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_coe.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -81,24 +74,24 @@
/*****************************************************************************/
/**
- Sdo abort messages.
- The "abort Sdo transfer request" supplies an abort code,
+ SDO abort messages.
+ The "abort SDO transfer request" supplies an abort code,
which can be translated to clear text. This table does
the mapping of the codes and messages.
*/
const ec_code_msg_t sdo_abort_messages[] = {
{0x05030000, "Toggle bit not changed"},
- {0x05040000, "Sdo protocol timeout"},
+ {0x05040000, "SDO protocol timeout"},
{0x05040001, "Client/Server command specifier not valid or unknown"},
{0x05040005, "Out of memory"},
{0x06010000, "Unsupported access to an object"},
{0x06010001, "Attempt to read a write-only object"},
{0x06010002, "Attempt to write a read-only object"},
{0x06020000, "This object does not exist in the object directory"},
- {0x06040041, "The object cannot be mapped into the Pdo"},
+ {0x06040041, "The object cannot be mapped into the PDO"},
{0x06040042, "The number and length of the objects to be mapped would"
- " exceed the Pdo length"},
+ " exceed the PDO length"},
{0x06040043, "General parameter incompatibility reason"},
{0x06040047, "Gerneral internal incompatibility in device"},
{0x06060000, "Access failure due to a hardware error"},
@@ -127,7 +120,7 @@
/*****************************************************************************/
/**
- Outputs an Sdo abort message.
+ Outputs an SDO abort message.
*/
void ec_canopen_abort_msg(uint32_t abort_code)
@@ -136,13 +129,13 @@
for (abort_msg = sdo_abort_messages; abort_msg->code; abort_msg++) {
if (abort_msg->code == abort_code) {
- EC_ERR("Sdo abort message 0x%08X: \"%s\".\n",
+ EC_ERR("SDO abort message 0x%08X: \"%s\".\n",
abort_msg->code, abort_msg->message);
return;
}
}
- EC_ERR("Unknown Sdo abort code 0x%08X.\n", abort_code);
+ EC_ERR("Unknown SDO abort code 0x%08X.\n", abort_code);
}
/*****************************************************************************/
@@ -172,7 +165,7 @@
/*****************************************************************************/
/**
- Starts reading a slaves' Sdo dictionary.
+ Starts reading a slaves' SDO dictionary.
*/
void ec_fsm_coe_dictionary(ec_fsm_coe_t *fsm, /**< finite state machine */
@@ -186,13 +179,13 @@
/*****************************************************************************/
/**
- Starts to transfer an Sdo to/from a slave.
+ Starts to transfer an SDO to/from a slave.
*/
void ec_fsm_coe_transfer(
ec_fsm_coe_t *fsm, /**< State machine. */
ec_slave_t *slave, /**< EtherCAT slave. */
- ec_sdo_request_t *request /**< Sdo request. */
+ ec_sdo_request_t *request /**< SDO request. */
)
{
fsm->slave = slave;
@@ -281,16 +274,23 @@
return;
}
+ if (slave->sii.has_general && !slave->sii.coe_details.enable_sdo_info) {
+ EC_ERR("Slave %u does not support SDO information service!\n",
+ slave->ring_position);
+ fsm->state = ec_fsm_coe_error;
+ return;
+ }
+
if (!(data = ec_slave_mbox_prepare_send(slave, datagram, 0x03, 8))) {
fsm->state = ec_fsm_coe_error;
return;
}
- EC_WRITE_U16(data, 0x8 << 12); // Sdo information
+ EC_WRITE_U16(data, 0x8 << 12); // SDO information
EC_WRITE_U8 (data + 2, 0x01); // Get OD List Request
EC_WRITE_U8 (data + 3, 0x00);
EC_WRITE_U16(data + 4, 0x0000);
- EC_WRITE_U16(data + 6, 0x0001); // deliver all Sdos!
+ EC_WRITE_U16(data + 6, 0x0001); // deliver all SDOs!
fsm->retries = EC_FSM_RETRIES;
fsm->state = ec_fsm_coe_dict_request;
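The guard added above consults the CoE detail flags from the slave's SII general category before requesting the object dictionary, so a slave that does not advertise the SDO information service fails fast instead of timing out on a mailbox request it cannot answer. A condensed sketch of that check; the field names mirror the patch, while the surrounding struct is a simplified stand-in:

    #include <stdbool.h>
    #include <stdio.h>

    struct sii_info {
        bool has_general;                 /* SII general category present? */
        struct { bool enable_sdo_info; } coe_details;
    };

    /* true if a "Get OD List" request may be sent to the slave */
    static bool sdo_info_supported(const struct sii_info *sii)
    {
        if (!sii->has_general)
            return true;                  /* no flags available: try anyway */
        return sii->coe_details.enable_sdo_info;
    }

    int main(void)
    {
        struct sii_info sii = { true, { false } };
        if (!sdo_info_supported(&sii))
            fprintf(stderr, "slave does not support the SDO information service\n");
        return 0;
    }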
@@ -369,7 +369,7 @@
(datagram->jiffies_received - fsm->jiffies_start) * 1000 / HZ;
if (diff_ms >= EC_FSM_COE_DICT_TIMEOUT) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Timeout while waiting for Sdo dictionary list response "
+ EC_ERR("Timeout while waiting for SDO dictionary list response "
"on slave %u.\n", slave->ring_position);
return;
}
@@ -442,18 +442,18 @@
}
if (rec_size < 3) {
- EC_ERR("Received corrupted Sdo dictionary response (size %u).\n",
+ EC_ERR("Received corrupted SDO dictionary response (size %u).\n",
rec_size);
fsm->state = ec_fsm_coe_error;
return;
}
- if (EC_READ_U16(data) >> 12 == 0x8 && // Sdo information
+ if (EC_READ_U16(data) >> 12 == 0x8 && // SDO information
(EC_READ_U8(data + 2) & 0x7F) == 0x07) { // error response
- EC_ERR("Sdo information error response at slave %u!\n",
+ EC_ERR("SDO information error response at slave %u!\n",
slave->ring_position);
if (rec_size < 10) {
- EC_ERR("Incomplete Sdo information error response:\n");
+ EC_ERR("Incomplete SDO information error response:\n");
ec_print_data(data, rec_size);
} else {
ec_canopen_abort_msg(EC_READ_U32(data + 6));
@@ -462,10 +462,10 @@
return;
}
- if (EC_READ_U16(data) >> 12 != 0x8 || // Sdo information
+ if (EC_READ_U16(data) >> 12 != 0x8 || // SDO information
(EC_READ_U8 (data + 2) & 0x7F) != 0x02) { // Get OD List response
if (fsm->slave->master->debug_level) {
- EC_DBG("Invalid Sdo list response at slave %u! Retrying...\n",
+ EC_DBG("Invalid SDO list response at slave %u! Retrying...\n",
slave->ring_position);
ec_print_data(data, rec_size);
}
@@ -488,13 +488,13 @@
sdo_index = EC_READ_U16(data + 8 + i * 2);
if (!sdo_index) {
if (slave->master->debug_level)
- EC_WARN("Sdo dictionary of slave %u contains index 0x0000.\n",
+ EC_WARN("SDO dictionary of slave %u contains index 0x0000.\n",
slave->ring_position);
continue;
}
if (!(sdo = (ec_sdo_t *) kmalloc(sizeof(ec_sdo_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate memory for Sdo!\n");
+ EC_ERR("Failed to allocate memory for SDO!\n");
fsm->state = ec_fsm_coe_error;
return;
}
@@ -505,7 +505,7 @@
fragments_left = EC_READ_U16(data + 4);
if (slave->master->debug_level && fragments_left) {
- EC_DBG("Sdo list fragments left: %u\n", fragments_left);
+ EC_DBG("SDO list fragments left: %u\n", fragments_left);
}
if (EC_READ_U8(data + 2) & 0x80 || fragments_left) { // more messages waiting. check again.
@@ -517,12 +517,12 @@
}
if (list_empty(&slave->sdo_dictionary)) {
- // no Sdos in dictionary. finished.
+ // no SDOs in dictionary. finished.
fsm->state = ec_fsm_coe_end; // success
return;
}
- // fetch Sdo descriptions
+ // fetch SDO descriptions
fsm->sdo = list_entry(slave->sdo_dictionary.next, ec_sdo_t, list);
if (!(data = ec_slave_mbox_prepare_send(slave, datagram, 0x03, 8))) {
@@ -530,11 +530,11 @@
return;
}
- EC_WRITE_U16(data, 0x8 << 12); // Sdo information
+ EC_WRITE_U16(data, 0x8 << 12); // SDO information
EC_WRITE_U8 (data + 2, 0x03); // Get object description request
EC_WRITE_U8 (data + 3, 0x00);
EC_WRITE_U16(data + 4, 0x0000);
- EC_WRITE_U16(data + 6, fsm->sdo->index); // Sdo index
+ EC_WRITE_U16(data + 6, fsm->sdo->index); // SDO index
fsm->retries = EC_FSM_RETRIES;
fsm->state = ec_fsm_coe_dict_desc_request;
@@ -557,7 +557,7 @@
if (datagram->state != EC_DATAGRAM_RECEIVED) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Failed to receive CoE Sdo description request datagram for"
+ EC_ERR("Failed to receive CoE SDO description request datagram for"
" slave %u (datagram state %u).\n",
slave->ring_position, datagram->state);
return;
@@ -565,7 +565,7 @@
if (datagram->working_counter != 1) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Reception of CoE Sdo description"
+ EC_ERR("Reception of CoE SDO description"
" request failed on slave %u: ", slave->ring_position);
ec_datagram_print_wc_error(datagram);
return;
@@ -613,7 +613,7 @@
(datagram->jiffies_received - fsm->jiffies_start) * 1000 / HZ;
if (diff_ms >= EC_FSM_COE_DICT_TIMEOUT) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Timeout while waiting for Sdo object description "
+ EC_ERR("Timeout while waiting for SDO object description "
"response on slave %u.\n", slave->ring_position);
return;
}
@@ -650,7 +650,7 @@
if (datagram->state != EC_DATAGRAM_RECEIVED) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Failed to receive CoE Sdo description response datagram from"
+ EC_ERR("Failed to receive CoE SDO description response datagram from"
" slave %u (datagram state %u).\n",
slave->ring_position, datagram->state);
return;
@@ -658,7 +658,7 @@
if (datagram->working_counter != 1) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Reception of CoE Sdo description"
+ EC_ERR("Reception of CoE SDO description"
" response failed on slave %u: ", slave->ring_position);
ec_datagram_print_wc_error(datagram);
return;
@@ -685,16 +685,16 @@
}
if (rec_size < 3) {
- EC_ERR("Received corrupted Sdo description response (size %u).\n",
+ EC_ERR("Received corrupted SDO description response (size %u).\n",
rec_size);
fsm->state = ec_fsm_coe_error;
return;
}
- if (EC_READ_U16(data) >> 12 == 0x8 && // Sdo information
+ if (EC_READ_U16(data) >> 12 == 0x8 && // SDO information
(EC_READ_U8 (data + 2) & 0x7F) == 0x07) { // error response
- EC_ERR("Sdo information error response at slave %u while"
- " fetching Sdo 0x%04X!\n", slave->ring_position,
+ EC_ERR("SDO information error response at slave %u while"
+ " fetching SDO 0x%04X!\n", slave->ring_position,
sdo->index);
ec_canopen_abort_msg(EC_READ_U32(data + 6));
fsm->state = ec_fsm_coe_error;
@@ -702,18 +702,18 @@
}
if (rec_size < 8) {
- EC_ERR("Received corrupted Sdo description response (size %u).\n",
+ EC_ERR("Received corrupted SDO description response (size %u).\n",
rec_size);
fsm->state = ec_fsm_coe_error;
return;
}
- if (EC_READ_U16(data) >> 12 != 0x8 || // Sdo information
+ if (EC_READ_U16(data) >> 12 != 0x8 || // SDO information
(EC_READ_U8 (data + 2) & 0x7F) != 0x04 || // Object desc. response
- EC_READ_U16(data + 6) != sdo->index) { // Sdo index
+ EC_READ_U16(data + 6) != sdo->index) { // SDO index
if (fsm->slave->master->debug_level) {
EC_DBG("Invalid object description response at slave %u while"
- " fetching Sdo 0x%04X!\n", slave->ring_position,
+ " fetching SDO 0x%04X!\n", slave->ring_position,
sdo->index);
ec_print_data(data, rec_size);
}
@@ -737,7 +737,7 @@
name_size = rec_size - 12;
if (name_size) {
if (!(sdo->name = kmalloc(name_size + 1, GFP_KERNEL))) {
- EC_ERR("Failed to allocate Sdo name!\n");
+ EC_ERR("Failed to allocate SDO name!\n");
fsm->state = ec_fsm_coe_error;
return;
}
@@ -761,12 +761,12 @@
return;
}
- EC_WRITE_U16(data, 0x8 << 12); // Sdo information
+ EC_WRITE_U16(data, 0x8 << 12); // SDO information
EC_WRITE_U8 (data + 2, 0x05); // Get entry description request
EC_WRITE_U8 (data + 3, 0x00);
EC_WRITE_U16(data + 4, 0x0000);
- EC_WRITE_U16(data + 6, sdo->index); // Sdo index
- EC_WRITE_U8 (data + 8, fsm->subindex); // Sdo subindex
+ EC_WRITE_U16(data + 6, sdo->index); // SDO index
+ EC_WRITE_U8 (data + 8, fsm->subindex); // SDO subindex
EC_WRITE_U8 (data + 9, 0x00); // value info (no values)
fsm->retries = EC_FSM_RETRIES;
@@ -791,7 +791,7 @@
if (datagram->state != EC_DATAGRAM_RECEIVED) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Failed to receive CoE Sdo entry request datagram for"
+ EC_ERR("Failed to receive CoE SDO entry request datagram for"
" slave %u (datagram state %u).\n",
slave->ring_position, datagram->state);
return;
@@ -799,7 +799,7 @@
if (datagram->working_counter != 1) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Reception of CoE Sdo entry request failed on slave %u: ",
+ EC_ERR("Reception of CoE SDO entry request failed on slave %u: ",
slave->ring_position);
ec_datagram_print_wc_error(datagram);
return;
@@ -848,7 +848,7 @@
(datagram->jiffies_received - fsm->jiffies_start) * 1000 / HZ;
if (diff_ms >= EC_FSM_COE_DICT_TIMEOUT) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Timeout while waiting for Sdo entry description response "
+ EC_ERR("Timeout while waiting for SDO entry description response "
"on slave %u.\n", slave->ring_position);
return;
}
@@ -886,7 +886,7 @@
if (datagram->state != EC_DATAGRAM_RECEIVED) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Failed to receive CoE Sdo description response datagram from"
+ EC_ERR("Failed to receive CoE SDO description response datagram from"
" slave %u (datagram state %u).\n",
slave->ring_position, datagram->state);
return;
@@ -894,7 +894,7 @@
if (datagram->working_counter != 1) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Reception of CoE Sdo description"
+ EC_ERR("Reception of CoE SDO description"
" response failed on slave %u: ", slave->ring_position);
ec_datagram_print_wc_error(datagram);
return;
@@ -921,16 +921,16 @@
}
if (rec_size < 3) {
- EC_ERR("Received corrupted Sdo entry description response "
+ EC_ERR("Received corrupted SDO entry description response "
"(size %u).\n", rec_size);
fsm->state = ec_fsm_coe_error;
return;
}
- if (EC_READ_U16(data) >> 12 == 0x8 && // Sdo information
+ if (EC_READ_U16(data) >> 12 == 0x8 && // SDO information
(EC_READ_U8 (data + 2) & 0x7F) == 0x07) { // error response
- EC_ERR("Sdo information error response at slave %u while"
- " fetching Sdo entry 0x%04X:%02X!\n", slave->ring_position,
+ EC_ERR("SDO information error response at slave %u while"
+ " fetching SDO entry 0x%04X:%02X!\n", slave->ring_position,
sdo->index, fsm->subindex);
ec_canopen_abort_msg(EC_READ_U32(data + 6));
fsm->state = ec_fsm_coe_error;
@@ -938,19 +938,19 @@
}
if (rec_size < 9) {
- EC_ERR("Received corrupted Sdo entry description response "
+ EC_ERR("Received corrupted SDO entry description response "
"(size %u).\n", rec_size);
fsm->state = ec_fsm_coe_error;
return;
}
- if (EC_READ_U16(data) >> 12 != 0x8 || // Sdo information
+ if (EC_READ_U16(data) >> 12 != 0x8 || // SDO information
(EC_READ_U8(data + 2) & 0x7F) != 0x06 || // Entry desc. response
- EC_READ_U16(data + 6) != sdo->index || // Sdo index
- EC_READ_U8(data + 8) != fsm->subindex) { // Sdo subindex
+ EC_READ_U16(data + 6) != sdo->index || // SDO index
+ EC_READ_U8(data + 8) != fsm->subindex) { // SDO subindex
if (fsm->slave->master->debug_level) {
EC_DBG("Invalid entry description response at slave %u while"
- " fetching Sdo entry 0x%04X:%02X!\n", slave->ring_position,
+ " fetching SDO entry 0x%04X:%02X!\n", slave->ring_position,
sdo->index, fsm->subindex);
ec_print_data(data, rec_size);
}
@@ -984,7 +984,7 @@
if (data_size) {
uint8_t *desc;
if (!(desc = kmalloc(data_size + 1, GFP_KERNEL))) {
- EC_ERR("Failed to allocate Sdo entry name!\n");
+ EC_ERR("Failed to allocate SDO entry name!\n");
fsm->state = ec_fsm_coe_error;
return;
}
@@ -1003,12 +1003,12 @@
return;
}
- EC_WRITE_U16(data, 0x8 << 12); // Sdo information
+ EC_WRITE_U16(data, 0x8 << 12); // SDO information
EC_WRITE_U8 (data + 2, 0x05); // Get entry description request
EC_WRITE_U8 (data + 3, 0x00);
EC_WRITE_U16(data + 4, 0x0000);
- EC_WRITE_U16(data + 6, sdo->index); // Sdo index
- EC_WRITE_U8 (data + 8, fsm->subindex); // Sdo subindex
+ EC_WRITE_U16(data + 6, sdo->index); // SDO index
+ EC_WRITE_U8 (data + 8, fsm->subindex); // SDO subindex
EC_WRITE_U8 (data + 9, 0x00); // value info (no values)
fsm->retries = EC_FSM_RETRIES;
@@ -1016,7 +1016,7 @@
return;
}
- // another Sdo description to fetch?
+ // another SDO description to fetch?
if (fsm->sdo->list.next != &slave->sdo_dictionary) {
fsm->sdo = list_entry(fsm->sdo->list.next, ec_sdo_t, list);
@@ -1025,11 +1025,11 @@
return;
}
- EC_WRITE_U16(data, 0x8 << 12); // Sdo information
+ EC_WRITE_U16(data, 0x8 << 12); // SDO information
EC_WRITE_U8 (data + 2, 0x03); // Get object description request
EC_WRITE_U8 (data + 3, 0x00);
EC_WRITE_U16(data + 4, 0x0000);
- EC_WRITE_U16(data + 6, fsm->sdo->index); // Sdo index
+ EC_WRITE_U16(data + 6, fsm->sdo->index); // SDO index
fsm->retries = EC_FSM_RETRIES;
fsm->state = ec_fsm_coe_dict_desc_request;
@@ -1056,7 +1056,7 @@
uint8_t size;
if (fsm->slave->master->debug_level) {
- EC_DBG("Downloading Sdo 0x%04X:%02X to slave %u.\n",
+ EC_DBG("Downloading SDO 0x%04X:%02X to slave %u.\n",
request->index, request->subindex, slave->ring_position);
ec_print_data(request->data, request->data_size);
}
@@ -1075,7 +1075,7 @@
size = 4 - request->data_size;
- EC_WRITE_U16(data, 0x2 << 12); // Sdo request
+ EC_WRITE_U16(data, 0x2 << 12); // SDO request
EC_WRITE_U8 (data + 2, (0x3 // size specified, expedited
| size << 2
| 0x1 << 5)); // Download request
@@ -1090,7 +1090,7 @@
}
else { // request->data_size > 4, use normal transfer type
if (slave->sii.rx_mailbox_size < 6 + 10 + request->data_size) {
- EC_ERR("Sdo fragmenting not supported yet!\n");
+ EC_ERR("SDO fragmenting not supported yet!\n");
fsm->state = ec_fsm_coe_error;
return;
}
@@ -1101,7 +1101,7 @@
return;
}
- EC_WRITE_U16(data, 0x2 << 12); // Sdo request
+ EC_WRITE_U16(data, 0x2 << 12); // SDO request
EC_WRITE_U8 (data + 2, (0x1 // size indicator, normal
| 0x1 << 5)); // Download request
EC_WRITE_U16(data + 3, request->index);
@@ -1149,7 +1149,7 @@
(jiffies - fsm->request->jiffies_sent) * 1000 / HZ;
if (diff_ms < fsm->request->response_timeout) {
if (fsm->slave->master->debug_level) {
- EC_DBG("Slave %u did not respond to Sdo download request. "
+ EC_DBG("Slave %u did not respond to SDO download request. "
"Retrying after %u ms...\n",
slave->ring_position, (u32) diff_ms);
}
@@ -1206,7 +1206,7 @@
(datagram->jiffies_received - fsm->jiffies_start) * 1000 / HZ;
if (diff_ms >= fsm->request->response_timeout) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Timeout while waiting for Sdo download response on "
+ EC_ERR("Timeout while waiting for SDO download response on "
"slave %u.\n", slave->ring_position);
return;
}
@@ -1288,10 +1288,10 @@
return;
}
- if (EC_READ_U16(data) >> 12 == 0x2 && // Sdo request
- EC_READ_U8 (data + 2) >> 5 == 0x4) { // abort Sdo transfer request
- fsm->state = ec_fsm_coe_error;
- EC_ERR("Sdo download 0x%04X:%02X (%u bytes) aborted on slave %u.\n",
+ if (EC_READ_U16(data) >> 12 == 0x2 && // SDO request
+ EC_READ_U8 (data + 2) >> 5 == 0x4) { // abort SDO transfer request
+ fsm->state = ec_fsm_coe_error;
+ EC_ERR("SDO download 0x%04X:%02X (%u bytes) aborted on slave %u.\n",
request->index, request->subindex, request->data_size,
slave->ring_position);
if (rec_size < 10) {
@@ -1304,12 +1304,12 @@
return;
}
- if (EC_READ_U16(data) >> 12 != 0x3 || // Sdo response
+ if (EC_READ_U16(data) >> 12 != 0x3 || // SDO response
EC_READ_U8 (data + 2) >> 5 != 0x3 || // Download response
EC_READ_U16(data + 3) != request->index || // index
EC_READ_U8 (data + 5) != request->subindex) { // subindex
if (slave->master->debug_level) {
- EC_DBG("Invalid Sdo download response at slave %u! Retrying...\n",
+ EC_DBG("Invalid SDO download response at slave %u! Retrying...\n",
slave->ring_position);
ec_print_data(data, rec_size);
}
@@ -1338,7 +1338,7 @@
uint8_t *data;
if (master->debug_level)
- EC_DBG("Uploading Sdo 0x%04X:%02X from slave %u.\n",
+ EC_DBG("Uploading SDO 0x%04X:%02X from slave %u.\n",
request->index, request->subindex, slave->ring_position);
if (!(slave->sii.mailbox_protocols & EC_MBOX_COE)) {
@@ -1352,7 +1352,7 @@
return;
}
- EC_WRITE_U16(data, 0x2 << 12); // Sdo request
+ EC_WRITE_U16(data, 0x2 << 12); // SDO request
EC_WRITE_U8 (data + 2, 0x2 << 5); // initiate upload request
EC_WRITE_U16(data + 3, request->index);
EC_WRITE_U8 (data + 5, request->subindex);
@@ -1397,7 +1397,7 @@
(jiffies - fsm->request->jiffies_sent) * 1000 / HZ;
if (diff_ms < fsm->request->response_timeout) {
if (fsm->slave->master->debug_level) {
- EC_DBG("Slave %u did no respond to Sdo upload request. "
+ EC_DBG("Slave %u did not respond to SDO upload request. "
"Retrying after %u ms...\n",
slave->ring_position, (u32) diff_ms);
}
@@ -1454,7 +1454,7 @@
(datagram->jiffies_received - fsm->jiffies_start) * 1000 / HZ;
if (diff_ms >= fsm->request->response_timeout) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Timeout while waiting for Sdo upload response on "
+ EC_ERR("Timeout while waiting for SDO upload response on "
"slave %u.\n", slave->ring_position);
return;
}
@@ -1485,7 +1485,6 @@
uint8_t *data, mbox_prot;
size_t rec_size, data_size;
ec_sdo_request_t *request = fsm->request;
- uint32_t complete_size;
unsigned int expedited, size_specified;
if (datagram->state == EC_DATAGRAM_TIMED_OUT && fsm->retries--)
@@ -1534,14 +1533,14 @@
if (rec_size < 3) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Received currupted Sdo upload response (%u bytes)!\n", rec_size);
+ EC_ERR("Received currupted SDO upload response (%u bytes)!\n", rec_size);
ec_print_data(data, rec_size);
return;
}
- if (EC_READ_U16(data) >> 12 == 0x2 && // Sdo request
- EC_READ_U8 (data + 2) >> 5 == 0x4) { // abort Sdo transfer request
- EC_ERR("Sdo upload 0x%04X:%02X aborted on slave %u.\n",
+ if (EC_READ_U16(data) >> 12 == 0x2 && // SDO request
+ EC_READ_U8 (data + 2) >> 5 == 0x4) { // abort SDO transfer request
+ EC_ERR("SDO upload 0x%04X:%02X aborted on slave %u.\n",
request->index, request->subindex, slave->ring_position);
if (rec_size >= 10) {
request->abort_code = EC_READ_U32(data + 6);
@@ -1559,18 +1558,18 @@
if (expedited) {
if (rec_size < 7) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Received currupted Sdo expedited upload"
+ EC_ERR("Received currupted SDO expedited upload"
" response (only %u bytes)!\n", rec_size);
ec_print_data(data, rec_size);
return;
}
- if (EC_READ_U16(data) >> 12 != 0x3 || // Sdo response
+ if (EC_READ_U16(data) >> 12 != 0x3 || // SDO response
EC_READ_U8 (data + 2) >> 5 != 0x2 || // upload response
EC_READ_U16(data + 3) != request->index || // index
EC_READ_U8 (data + 5) != request->subindex) { // subindex
if (fsm->slave->master->debug_level) {
- EC_DBG("Invalid Sdo upload expedited response at slave %u!\n",
+ EC_DBG("Invalid SDO upload expedited response at slave %u!\n",
slave->ring_position);
ec_print_data(data, rec_size);
}
@@ -1583,38 +1582,38 @@
size_specified = EC_READ_U8(data + 2) & 0x01;
if (size_specified) {
- complete_size = 4 - ((EC_READ_U8(data + 2) & 0x0C) >> 2);
+ fsm->complete_size = 4 - ((EC_READ_U8(data + 2) & 0x0C) >> 2);
} else {
- complete_size = 4;
- }
-
- if (rec_size < 6 + complete_size) {
+ fsm->complete_size = 4;
+ }
+
+ if (rec_size < 6 + fsm->complete_size) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Received currupted Sdo expedited upload"
+ EC_ERR("Received currupted SDO expedited upload"
" response (only %u bytes)!\n", rec_size);
ec_print_data(data, rec_size);
return;
}
- if (ec_sdo_request_copy_data(request, data + 6, complete_size)) {
+ if (ec_sdo_request_copy_data(request, data + 6, fsm->complete_size)) {
fsm->state = ec_fsm_coe_error;
return;
}
} else { // normal
if (rec_size < 10) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Received currupted Sdo normal upload"
+ EC_ERR("Received currupted SDO normal upload"
" response (only %u bytes)!\n", rec_size);
ec_print_data(data, rec_size);
return;
}
- if (EC_READ_U16(data) >> 12 != 0x3 || // Sdo response
+ if (EC_READ_U16(data) >> 12 != 0x3 || // SDO response
EC_READ_U8 (data + 2) >> 5 != 0x2 || // upload response
EC_READ_U16(data + 3) != request->index || // index
EC_READ_U8 (data + 5) != request->subindex) { // subindex
if (fsm->slave->master->debug_level) {
- EC_DBG("Invalid Sdo normal upload response at slave %u!\n",
+ EC_DBG("Invalid SDO normal upload response at slave %u!\n",
slave->ring_position);
ec_print_data(data, rec_size);
}
@@ -1626,16 +1625,16 @@
}
data_size = rec_size - 10;
- complete_size = EC_READ_U32(data + 6);
-
- if (!complete_size) {
+ fsm->complete_size = EC_READ_U32(data + 6);
+
+ if (!fsm->complete_size) {
fsm->state = ec_fsm_coe_error;
EC_ERR("No complete size supplied!\n");
ec_print_data(data, rec_size);
return;
}
- if (ec_sdo_request_alloc(request, complete_size)) {
+ if (ec_sdo_request_alloc(request, fsm->complete_size)) {
fsm->state = ec_fsm_coe_error;
return;
}
@@ -1647,9 +1646,10 @@
fsm->toggle = 0;
- if (data_size < complete_size) {
- EC_WARN("Sdo data incomplete (%u / %u).\n",
- data_size, complete_size);
+ if (data_size < fsm->complete_size) {
+ if (master->debug_level)
+ EC_DBG("SDO data incomplete (%u / %u). Segmenting...\n",
+ data_size, fsm->complete_size);
if (!(data = ec_slave_mbox_prepare_send(slave, datagram,
0x03, 3))) {
@@ -1657,7 +1657,7 @@
return;
}
- EC_WRITE_U16(data, 0x2 << 12); // Sdo request
+ EC_WRITE_U16(data, 0x2 << 12); // SDO request
EC_WRITE_U8 (data + 2, (fsm->toggle << 4 // toggle
| 0x3 << 5)); // upload segment request
@@ -1753,7 +1753,7 @@
(datagram->jiffies_received - fsm->jiffies_start) * 1000 / HZ;
if (diff_ms >= fsm->request->response_timeout) {
fsm->state = ec_fsm_coe_error;
- EC_ERR("Timeout while waiting for Sdo upload segment response "
+ EC_ERR("Timeout while waiting for SDO upload segment response "
"on slave %u.\n", slave->ring_position);
return;
}
@@ -1774,7 +1774,6 @@
/**
CoE state: UP RESPONSE.
\todo Timeout behavior
- \todo Check for \a data_size exceeding \a complete_size.
*/
void ec_fsm_coe_up_seg_response(ec_fsm_coe_t *fsm /**< finite state machine */)
@@ -1833,15 +1832,15 @@
}
if (rec_size < 10) {
- EC_ERR("Received currupted Sdo upload segment response!\n");
+ EC_ERR("Received currupted SDO upload segment response!\n");
ec_print_data(data, rec_size);
fsm->state = ec_fsm_coe_error;
return;
}
- if (EC_READ_U16(data) >> 12 == 0x2 && // Sdo request
- EC_READ_U8 (data + 2) >> 5 == 0x4) { // abort Sdo transfer request
- EC_ERR("Sdo upload 0x%04X:%02X aborted on slave %u.\n",
+ if (EC_READ_U16(data) >> 12 == 0x2 && // SDO request
+ EC_READ_U8 (data + 2) >> 5 == 0x4) { // abort SDO transfer request
+ EC_ERR("SDO upload 0x%04X:%02X aborted on slave %u.\n",
request->index, request->subindex, slave->ring_position);
request->abort_code = EC_READ_U32(data + 6);
ec_canopen_abort_msg(request->abort_code);
@@ -1849,10 +1848,10 @@
return;
}
- if (EC_READ_U16(data) >> 12 != 0x3 || // Sdo response
+ if (EC_READ_U16(data) >> 12 != 0x3 || // SDO response
EC_READ_U8 (data + 2) >> 5 != 0x0) { // upload segment response
if (fsm->slave->master->debug_level) {
- EC_DBG("Invalid Sdo upload segment response at slave %u!\n",
+ EC_DBG("Invalid SDO upload segment response at slave %u!\n",
slave->ring_position);
ec_print_data(data, rec_size);
}
@@ -1865,12 +1864,19 @@
last_segment = EC_READ_U8(data + 2) & 0x01;
seg_size = (EC_READ_U8(data + 2) & 0xE) >> 1;
- data_size = rec_size - 10;
-
- if (data_size != seg_size) {
- EC_WARN("Sdo segment data invalid (%u / %u)"
- " - Fragmenting not implemented.\n",
- data_size, seg_size);
+ if (rec_size > 10) {
+ data_size = rec_size - 10;
+ } else { // == 10
+ /* seg_size contains the number of trailing bytes to ignore. */
+ data_size = rec_size - seg_size;
+ }
+
+ if (request->data_size + data_size > fsm->complete_size) {
+ EC_ERR("SDO upload 0x%04X:%02X failed on slave %u: Fragment"
+ " exceeding complete size!\n",
+ request->index, request->subindex, slave->ring_position);
+ fsm->state = ec_fsm_coe_error;
+ return;
}
memcpy(request->data + request->data_size, data + 10, data_size);
@@ -1884,7 +1890,7 @@
return;
}
- EC_WRITE_U16(data, 0x2 << 12); // Sdo request
+ EC_WRITE_U16(data, 0x2 << 12); // SDO request
EC_WRITE_U8 (data + 2, (fsm->toggle << 4 // toggle
| 0x3 << 5)); // upload segment request
@@ -1898,6 +1904,13 @@
return;
}
+ if (request->data_size != fsm->complete_size) {
+ EC_WARN("SDO upload 0x%04X:%02X on slave %u: Assembled data"
+ " size (%u) does not match complete size (%u)!\n",
+ request->index, request->subindex, slave->ring_position,
+ request->data_size, fsm->complete_size);
+ }
+
if (master->debug_level) {
EC_DBG("Uploaded data:\n");
ec_print_data(request->data, request->data_size);
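
The upload-segment hunks above introduce fsm->complete_size so that assembled fragments are checked against the size announced in the slave's normal upload response, and a warning is issued if the final assembled size differs. A minimal, standalone sketch of that invariant — illustrative types and names only, not the master's internal API — assuming segments arrive one at a time:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct upload_buffer {
        uint8_t data[64];      /* assembled SDO data */
        size_t data_size;      /* bytes assembled so far */
        size_t complete_size;  /* size announced by the slave */
    };

    /* Append one segment; reject it if it would exceed the announced size. */
    static int append_segment(struct upload_buffer *buf,
            const uint8_t *seg, size_t seg_size)
    {
        if (buf->data_size + seg_size > buf->complete_size)
            return -1; /* fragment exceeding complete size */
        memcpy(buf->data + buf->data_size, seg, seg_size);
        buf->data_size += seg_size;
        return 0;
    }

    int main(void)
    {
        struct upload_buffer buf = { .complete_size = 6 };
        const uint8_t seg1[] = { 1, 2, 3, 4 }, seg2[] = { 5, 6 };

        if (append_segment(&buf, seg1, sizeof(seg1)) ||
                append_segment(&buf, seg2, sizeof(seg2)))
            return 1;

        /* After the last segment, warn if the assembled size differs. */
        if (buf.data_size != buf.complete_size)
            fprintf(stderr, "assembled %zu of %zu bytes\n",
                    buf.data_size, buf.complete_size);
        return 0;
    }
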
--- a/master/fsm_coe.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_coe.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -52,7 +45,7 @@
typedef struct ec_fsm_coe ec_fsm_coe_t; /**< \see ec_fsm_coe */
-/** Finite state machines for the CANopen-over-EtherCAT protocol.
+/** Finite state machines for the CANopen over EtherCAT protocol.
*/
struct ec_fsm_coe {
ec_slave_t *slave; /**< slave the FSM runs on */
@@ -61,9 +54,10 @@
void (*state)(ec_fsm_coe_t *); /**< CoE state function */
unsigned long jiffies_start; /**< CoE timestamp. */
- ec_sdo_t *sdo; /**< current Sdo */
+ ec_sdo_t *sdo; /**< current SDO */
uint8_t subindex; /**< current subindex */
- ec_sdo_request_t *request; /**< Sdo request */
+ ec_sdo_request_t *request; /**< SDO request */
+ uint32_t complete_size; /**< Complete data size of the current upload; used when segmenting. */
uint8_t toggle; /**< toggle bit for segment commands */
};
--- a/master/fsm_master.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_master.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -324,9 +317,9 @@
/*****************************************************************************/
-/** Check for pending Sdo requests and process one.
+/** Check for pending SDO requests and process one.
*
- * \return non-zero, if an Sdo request is processed.
+ * \return non-zero, if an SDO request is processed.
*/
int ec_fsm_master_action_process_sdo(
ec_fsm_master_t *fsm /**< Master state machine. */
@@ -349,7 +342,7 @@
if (ec_sdo_request_timed_out(req)) {
req->state = EC_REQUEST_FAILURE;
if (master->debug_level)
- EC_DBG("Sdo request for slave %u timed out...\n",
+ EC_DBG("SDO request for slave %u timed out...\n",
slave->ring_position);
continue;
}
@@ -361,7 +354,7 @@
req->state = EC_REQUEST_BUSY;
if (master->debug_level)
- EC_DBG("Processing Sdo request for slave %u...\n",
+ EC_DBG("Processing SDO request for slave %u...\n",
slave->ring_position);
fsm->idle = 0;
@@ -388,19 +381,19 @@
slave = request->slave;
if (slave->current_state == EC_SLAVE_STATE_INIT) {
- EC_ERR("Discarding Sdo request, slave %u is in INIT.\n",
+ EC_ERR("Discarding SDO request, slave %u is in INIT.\n",
slave->ring_position);
request->req.state = EC_REQUEST_FAILURE;
wake_up(&master->sdo_queue);
continue;
}
- // Found pending Sdo request. Execute it!
+ // Found pending SDO request. Execute it!
if (master->debug_level)
- EC_DBG("Processing Sdo request for slave %u...\n",
+ EC_DBG("Processing SDO request for slave %u...\n",
slave->ring_position);
- // Start uploading Sdo
+ // Start uploading SDO
fsm->idle = 0;
fsm->sdo_request = &request->req;
fsm->slave = slave;
@@ -426,28 +419,31 @@
ec_master_t *master = fsm->master;
ec_slave_t *slave;
- // Check for pending Sdo requests
+ // Check for pending SDO requests
if (ec_fsm_master_action_process_sdo(fsm))
return;
- // check, if slaves have an Sdo dictionary to read out.
+ // check, if slaves have an SDO dictionary to read out.
for (slave = master->slaves;
slave < master->slaves + master->slave_count;
slave++) {
if (!(slave->sii.mailbox_protocols & EC_MBOX_COE)
+ || (slave->sii.has_general
+ && !slave->sii.coe_details.enable_sdo_info)
|| slave->sdo_dictionary_fetched
|| slave->current_state == EC_SLAVE_STATE_INIT
+ || slave->current_state == EC_SLAVE_STATE_UNKNOWN
|| jiffies - slave->jiffies_preop < EC_WAIT_SDO_DICT * HZ
) continue;
if (master->debug_level) {
- EC_DBG("Fetching Sdo dictionary from slave %u.\n",
+ EC_DBG("Fetching SDO dictionary from slave %u.\n",
slave->ring_position);
}
slave->sdo_dictionary_fetched = 1;
- // start fetching Sdo dictionary
+ // start fetching SDO dictionary
fsm->idle = 0;
fsm->slave = slave;
fsm->state = ec_fsm_master_state_sdo_dictionary;
@@ -725,6 +721,8 @@
if (ec_fsm_slave_config_exec(&fsm->fsm_slave_config))
return;
+ fsm->slave->force_config = 0;
+
// configuration finished
master->config_busy = 0;
wake_up_interruptible(&master->config_queue);
@@ -809,12 +807,12 @@
return;
}
- // Sdo dictionary fetching finished
+ // SDO dictionary fetching finished
if (master->debug_level) {
unsigned int sdo_count, entry_count;
ec_slave_sdo_dict_info(slave, &sdo_count, &entry_count);
- EC_DBG("Fetched %u Sdos and %u entries from slave %u.\n",
+ EC_DBG("Fetched %u SDOs and %u entries from slave %u.\n",
sdo_count, entry_count, slave->ring_position);
}
@@ -835,10 +833,14 @@
ec_master_t *master = fsm->master;
ec_sdo_request_t *request = fsm->sdo_request;
+ // FIXME
+ // Check whether the request still exists (it may have been deleted along
+ // with a slave configuration).
+
if (ec_fsm_coe_exec(&fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(&fsm->fsm_coe)) {
- EC_DBG("Failed to process Sdo request for slave %u.\n",
+ EC_DBG("Failed to process SDO request for slave %u.\n",
fsm->slave->ring_position);
request->state = EC_REQUEST_FAILURE;
wake_up(&master->sdo_queue);
@@ -846,15 +848,15 @@
return;
}
- // Sdo request finished
+ // SDO request finished
request->state = EC_REQUEST_SUCCESS;
wake_up(&master->sdo_queue);
if (master->debug_level)
- EC_DBG("Finished Sdo request for slave %u.\n",
+ EC_DBG("Finished SDO request for slave %u.\n",
fsm->slave->ring_position);
- // check for another Sdo request
+ // check for another SDO request
if (ec_fsm_master_action_process_sdo(fsm))
return; // processing another request
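
The master FSM hunk above tightens the SDO-dictionary check: slaves whose SII general category disables the SDO information service, or that are still in INIT or UNKNOWN, are now skipped. A standalone sketch of that predicate, under invented names rather than the master's data structures:

    #include <stdbool.h>
    #include <stdio.h>

    struct slave_info {
        bool coe_supported;           /* CoE listed in the mailbox protocols */
        bool has_general;             /* SII general category present */
        bool sdo_info_enabled;        /* SDO information service enabled */
        bool dict_fetched;            /* dictionary already read */
        bool in_init_or_unknown;      /* AL state INIT or UNKNOWN */
        unsigned long ms_since_preop; /* time since reaching PREOP */
    };

    /* Return true if the dictionary of this slave should be fetched now. */
    static bool dict_fetch_due(const struct slave_info *s, unsigned long wait_ms)
    {
        if (!s->coe_supported)
            return false;
        if (s->has_general && !s->sdo_info_enabled)
            return false; /* slave explicitly disables SDO info */
        if (s->dict_fetched || s->in_init_or_unknown)
            return false;
        return s->ms_since_preop >= wait_ms;
    }

    int main(void)
    {
        struct slave_info s = { .coe_supported = true, .has_general = true,
            .sdo_info_enabled = true, .ms_since_preop = 2000 };
        printf("fetch due: %d\n", dict_fetch_due(&s, 1000));
        return 0;
    }
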
--- a/master/fsm_master.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_master.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -65,12 +58,12 @@
/*****************************************************************************/
-/** Slave/Sdo request record for master's Sdo request list.
+/** Slave/SDO request record for master's SDO request list.
*/
typedef struct {
struct list_head list; /**< List element. */
ec_slave_t *slave; /**< Slave. */
- ec_sdo_request_t req; /**< Sdo request. */
+ ec_sdo_request_t req; /**< SDO request. */
} ec_master_sdo_request_t;
/*****************************************************************************/
@@ -93,10 +86,10 @@
ec_slave_t *slave; /**< current slave */
ec_sii_write_request_t *sii_request; /**< SII write request */
off_t sii_index; /**< index to SII write request data */
- ec_sdo_request_t *sdo_request; /**< Sdo request to process. */
+ ec_sdo_request_t *sdo_request; /**< SDO request to process. */
ec_fsm_coe_t fsm_coe; /**< CoE state machine */
- ec_fsm_pdo_t fsm_pdo; /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t fsm_pdo; /**< PDO configuration state machine. */
ec_fsm_change_t fsm_change; /**< State change state machine */
ec_fsm_slave_config_t fsm_slave_config; /**< slave state machine */
ec_fsm_slave_scan_t fsm_slave_scan; /**< slave state machine */
--- a/master/fsm_pdo.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_pdo.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,37 +2,30 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/** \file
- * EtherCAT Pdo configuration state machine.
+ * EtherCAT PDO configuration state machine.
*/
/*****************************************************************************/
@@ -76,7 +69,7 @@
/** Constructor.
*/
void ec_fsm_pdo_init(
- ec_fsm_pdo_t *fsm, /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm, /**< PDO configuration state machine. */
ec_fsm_coe_t *fsm_coe /**< CoE state machine to use */
)
{
@@ -92,7 +85,7 @@
/** Destructor.
*/
void ec_fsm_pdo_clear(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
ec_fsm_pdo_entry_clear(&fsm->fsm_pdo_entry);
@@ -103,10 +96,10 @@
/*****************************************************************************/
-/** Start reading the Pdo configuration.
+/** Start reading the PDO configuration.
*/
void ec_fsm_pdo_start_reading(
- ec_fsm_pdo_t *fsm, /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm, /**< PDO configuration state machine. */
ec_slave_t *slave /**< slave to configure */
)
{
@@ -116,10 +109,10 @@
/*****************************************************************************/
-/** Start writing the Pdo configuration.
+/** Start writing the PDO configuration.
*/
void ec_fsm_pdo_start_configuration(
- ec_fsm_pdo_t *fsm, /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm, /**< PDO configuration state machine. */
ec_slave_t *slave /**< slave to configure */
)
{
@@ -134,7 +127,7 @@
* \return false, if state machine has terminated
*/
int ec_fsm_pdo_running(
- const ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ const ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
return fsm->state != ec_fsm_pdo_state_end
@@ -151,7 +144,7 @@
* \return false, if state machine has terminated
*/
int ec_fsm_pdo_exec(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
fsm->state(fsm);
@@ -165,7 +158,7 @@
* \return true, if the state machine terminated gracefully
*/
int ec_fsm_pdo_success(
- const ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ const ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
return fsm->state == ec_fsm_pdo_state_end;
@@ -175,20 +168,20 @@
* Reading state funtions.
*****************************************************************************/
-/** Start reading Pdo assignment.
+/** Start reading PDO assignment.
*/
void ec_fsm_pdo_read_state_start(
ec_fsm_pdo_t *fsm /**< finite state machine */
)
{
- // read Pdo assignment for first sync manager not reserved for mailbox
+ // read PDO assignment for first sync manager not reserved for mailbox
fsm->sync_index = 1; // next is 2
ec_fsm_pdo_read_action_next_sync(fsm);
}
/*****************************************************************************/
-/** Read Pdo assignment of next sync manager.
+/** Read PDO assignment of next sync manager.
*/
void ec_fsm_pdo_read_action_next_sync(
ec_fsm_pdo_t *fsm /**< Finite state machine */
@@ -203,7 +196,7 @@
continue;
if (slave->master->debug_level)
- EC_DBG("Reading Pdo assignment of SM%u.\n", fsm->sync_index);
+ EC_DBG("Reading PDO assignment of SM%u.\n", fsm->sync_index);
ec_pdo_list_clear_pdos(&fsm->pdos);
@@ -216,14 +209,14 @@
}
if (slave->master->debug_level)
- EC_DBG("Reading of Pdo configuration finished.\n");
+ EC_DBG("Reading of PDO configuration finished.\n");
fsm->state = ec_fsm_pdo_state_end;
}
/*****************************************************************************/
-/** Count assigned Pdos.
+/** Count assigned PDOs.
*/
void ec_fsm_pdo_read_state_pdo_count(
ec_fsm_pdo_t *fsm /**< finite state machine */
@@ -232,14 +225,14 @@
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_ERR("Failed to read number of assigned Pdos for SM%u.\n",
+ EC_ERR("Failed to read number of assigned PDOs for SM%u.\n",
fsm->sync_index);
fsm->state = ec_fsm_pdo_state_error;
return;
}
if (fsm->request.data_size != sizeof(uint8_t)) {
- EC_ERR("Invalid data size %u returned when uploading Sdo 0x%04X:%02X "
+ EC_ERR("Invalid data size %u returned when uploading SDO 0x%04X:%02X "
"from slave %u.\n", fsm->request.data_size,
fsm->request.index, fsm->request.subindex,
fsm->slave->ring_position);
@@ -249,16 +242,16 @@
fsm->pdo_count = EC_READ_U8(fsm->request.data);
if (fsm->slave->master->debug_level)
- EC_DBG("%u Pdos assigned.\n", fsm->pdo_count);
-
- // read first Pdo
+ EC_DBG("%u PDOs assigned.\n", fsm->pdo_count);
+
+ // read first PDO
fsm->pdo_pos = 1;
ec_fsm_pdo_read_action_next_pdo(fsm);
}
/*****************************************************************************/
-/** Read next Pdo.
+/** Read next PDO.
*/
void ec_fsm_pdo_read_action_next_pdo(
ec_fsm_pdo_t *fsm /**< finite state machine */
@@ -274,7 +267,7 @@
return;
}
- // finished reading Pdo configuration
+ // finished reading PDO configuration
if (ec_pdo_list_copy(&fsm->sync->pdos, &fsm->pdos)) {
fsm->state = ec_fsm_pdo_state_error;
@@ -289,7 +282,7 @@
/*****************************************************************************/
-/** Fetch Pdo information.
+/** Fetch PDO information.
*/
void ec_fsm_pdo_read_state_pdo(
ec_fsm_pdo_t *fsm /**< finite state machine */
@@ -298,14 +291,14 @@
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_ERR("Failed to read index of assigned Pdo %u from SM%u.\n",
+ EC_ERR("Failed to read index of assigned PDO %u from SM%u.\n",
fsm->pdo_pos, fsm->sync_index);
fsm->state = ec_fsm_pdo_state_error;
return;
}
if (fsm->request.data_size != sizeof(uint16_t)) {
- EC_ERR("Invalid data size %u returned when uploading Sdo 0x%04X:%02X "
+ EC_ERR("Invalid data size %u returned when uploading SDO 0x%04X:%02X "
"from slave %u.\n", fsm->request.data_size,
fsm->request.index, fsm->request.subindex,
fsm->slave->ring_position);
@@ -315,7 +308,7 @@
if (!(fsm->pdo = (ec_pdo_t *)
kmalloc(sizeof(ec_pdo_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate Pdo.\n");
+ EC_ERR("Failed to allocate PDO.\n");
fsm->state = ec_fsm_pdo_state_error;
return;
}
@@ -325,7 +318,7 @@
fsm->pdo->sync_index = fsm->sync_index;
if (fsm->slave->master->debug_level)
- EC_DBG("Pdo 0x%04X.\n", fsm->pdo->index);
+ EC_DBG("PDO 0x%04X.\n", fsm->pdo->index);
list_add_tail(&fsm->pdo->list, &fsm->pdos.list);
@@ -336,7 +329,7 @@
/*****************************************************************************/
-/** Fetch Pdo information.
+/** Fetch PDO information.
*/
void ec_fsm_pdo_read_state_pdo_entries(
ec_fsm_pdo_t *fsm /**< finite state machine */
@@ -346,13 +339,13 @@
return;
if (!ec_fsm_pdo_entry_success(&fsm->fsm_pdo_entry)) {
- EC_ERR("Failed to read mapped Pdo entries for Pdo 0x%04X.\n",
+ EC_ERR("Failed to read mapped PDO entries for PDO 0x%04X.\n",
fsm->pdo->index);
fsm->state = ec_fsm_pdo_state_error;
return;
}
- // next Pdo
+ // next PDO
fsm->pdo_pos++;
ec_fsm_pdo_read_action_next_pdo(fsm);
}
@@ -361,10 +354,10 @@
* Writing state functions.
*****************************************************************************/
-/** Start Pdo configuration.
+/** Start PDO configuration.
*/
void ec_fsm_pdo_conf_state_start(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
if (!fsm->slave->config) {
@@ -378,16 +371,16 @@
/*****************************************************************************/
-/** Assign next Pdo.
+/** Assign next PDO.
*/
ec_pdo_t *ec_fsm_pdo_conf_action_next_pdo(
- const ec_fsm_pdo_t *fsm, /**< Pdo configuration state machine. */
- const struct list_head *list /**< current Pdo list item */
+ const ec_fsm_pdo_t *fsm, /**< PDO configuration state machine. */
+ const struct list_head *list /**< current PDO list item */
)
{
list = list->next;
if (list == &fsm->pdos.list)
- return NULL; // no next Pdo
+ return NULL; // no next PDO
return list_entry(list, ec_pdo_t, list);
}
@@ -396,12 +389,17 @@
/** Get the next sync manager for a pdo configuration.
*/
void ec_fsm_pdo_conf_action_next_sync(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
fsm->sync_index++;
for (; fsm->sync_index < EC_MAX_SYNC_MANAGERS; fsm->sync_index++) {
+ if (!fsm->slave->config) { // slave configuration removed in the meantime
+ fsm->state = ec_fsm_pdo_state_error;
+ return;
+ }
+
if (ec_pdo_list_copy(&fsm->pdos,
&fsm->slave->config->sync_configs[fsm->sync_index].pdos)) {
fsm->state = ec_fsm_pdo_state_error;
@@ -410,13 +408,13 @@
if (!(fsm->sync = ec_slave_get_sync(fsm->slave, fsm->sync_index))) {
if (!list_empty(&fsm->pdos.list))
- EC_WARN("Pdos configured for SM%u, but slave %u does not "
+ EC_WARN("PDOs configured for SM%u, but slave %u does not "
"provide the sync manager information!\n",
fsm->sync_index, fsm->slave->ring_position);
continue;
}
- // get first configured Pdo
+ // get first configured PDO
if (!(fsm->pdo = ec_fsm_pdo_conf_action_next_pdo(fsm, &fsm->pdos.list))) {
// no pdos configured
ec_fsm_pdo_conf_action_check_assignment(fsm);
@@ -435,7 +433,7 @@
/** Check if the mapping has to be read, otherwise start to configure it.
*/
void ec_fsm_pdo_conf_action_pdo_mapping(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
const ec_pdo_t *assigned_pdo;
@@ -444,14 +442,14 @@
if ((assigned_pdo = ec_slave_find_pdo(fsm->slave, fsm->pdo->index))) {
ec_pdo_copy_entries(&fsm->slave_pdo, assigned_pdo);
- } else { // configured Pdo is not assigned and thus unknown
+ } else { // configured PDO is not assigned and thus unknown
ec_pdo_clear_entries(&fsm->slave_pdo);
}
if (list_empty(&fsm->slave_pdo.entries)) {
if (fsm->slave->master->debug_level)
- EC_DBG("Reading mapping of Pdo 0x%04X.\n",
+ EC_DBG("Reading mapping of PDO 0x%04X.\n",
fsm->pdo->index);
// pdo mapping is unknown; start loading it
@@ -468,17 +466,17 @@
/*****************************************************************************/
-/** Execute the Pdo entry state machine to read the current Pdo's mapping.
+/** Execute the PDO entry state machine to read the current PDO's mapping.
*/
void ec_fsm_pdo_conf_state_read_mapping(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
if (ec_fsm_pdo_entry_exec(&fsm->fsm_pdo_entry))
return;
if (!ec_fsm_pdo_entry_success(&fsm->fsm_pdo_entry))
- EC_WARN("Failed to read mapped Pdo entries for Pdo 0x%04X.\n",
+ EC_WARN("Failed to read mapped PDO entries for PDO 0x%04X.\n",
fsm->pdo->index);
// check if the mapping must be re-configured
@@ -492,12 +490,12 @@
* \todo Display mapping differences.
*/
void ec_fsm_pdo_conf_action_check_mapping(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
if (ec_pdo_equal_entries(fsm->pdo, &fsm->slave_pdo)) {
if (fsm->slave->master->debug_level)
- EC_DBG("Mapping of Pdo 0x%04X is already configured correctly.\n",
+ EC_DBG("Mapping of PDO 0x%04X is already configured correctly.\n",
fsm->pdo->index);
ec_fsm_pdo_conf_action_next_pdo_mapping(fsm);
return;
@@ -505,7 +503,7 @@
if (fsm->slave->master->debug_level) {
// TODO display diff
- EC_DBG("Changing mapping of Pdo 0x%04X.\n", fsm->pdo->index);
+ EC_DBG("Changing mapping of PDO 0x%04X.\n", fsm->pdo->index);
}
ec_fsm_pdo_entry_start_configuration(&fsm->fsm_pdo_entry, fsm->slave,
@@ -516,17 +514,17 @@
/*****************************************************************************/
-/** Let the Pdo entry state machine configure the current Pdo's mapping.
+/** Let the PDO entry state machine configure the current PDO's mapping.
*/
void ec_fsm_pdo_conf_state_mapping(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
if (ec_fsm_pdo_entry_exec(&fsm->fsm_pdo_entry))
return;
if (!ec_fsm_pdo_entry_success(&fsm->fsm_pdo_entry))
- EC_WARN("Failed to configure mapping of Pdo 0x%04X.\n",
+ EC_WARN("Failed to configure mapping of PDO 0x%04X.\n",
fsm->pdo->index);
ec_fsm_pdo_conf_action_next_pdo_mapping(fsm);
@@ -534,13 +532,13 @@
/*****************************************************************************/
-/** Check mapping of next Pdo, otherwise configure assignment.
+/** Check mapping of next PDO, otherwise configure assignment.
*/
void ec_fsm_pdo_conf_action_next_pdo_mapping(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
- )
-{
- // get next configured Pdo
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
+ )
+{
+ // get next configured PDO
if (!(fsm->pdo = ec_fsm_pdo_conf_action_next_pdo(fsm, &fsm->pdo->list))) {
// no more configured pdos
ec_fsm_pdo_conf_action_check_assignment(fsm);
@@ -552,17 +550,17 @@
/*****************************************************************************/
-/** Check if the Pdo assignment of the current SM has to be re-configured.
+/** Check if the PDO assignment of the current SM has to be re-configured.
*/
void ec_fsm_pdo_conf_action_check_assignment(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
// check if assignment has to be re-configured
if (ec_pdo_list_equal(&fsm->sync->pdos, &fsm->pdos)) {
if (fsm->slave->master->debug_level)
- EC_DBG("Pdo assignment for SM%u is already configured "
+ EC_DBG("PDO assignment for SM%u is already configured "
"correctly.\n", fsm->sync_index);
ec_fsm_pdo_conf_action_next_sync(fsm);
@@ -570,20 +568,20 @@
}
if (fsm->slave->master->debug_level) {
- EC_DBG("Pdo assignment of SM%u differs:\n", fsm->sync_index);
- EC_DBG("Currently assigned Pdos: ");
+ EC_DBG("PDO assignment of SM%u differs:\n", fsm->sync_index);
+ EC_DBG("Currently assigned PDOs: ");
ec_pdo_list_print(&fsm->sync->pdos);
printk("\n");
- EC_DBG("Pdos to assign: ");
+ EC_DBG("PDOs to assign: ");
ec_pdo_list_print(&fsm->pdos);
printk("\n");
}
- // Pdo assignment has to be changed. Does the slave support this?
+ // PDO assignment has to be changed. Does the slave support this?
if (!(fsm->slave->sii.mailbox_protocols & EC_MBOX_COE)
|| (fsm->slave->sii.has_general
&& !fsm->slave->sii.coe_details.enable_pdo_assign)) {
- EC_WARN("Slave %u does not support assigning Pdos!\n",
+ EC_WARN("Slave %u does not support assigning PDOs!\n",
fsm->slave->ring_position);
ec_fsm_pdo_conf_action_next_sync(fsm);
return;
@@ -594,14 +592,14 @@
return;
}
- // set mapped Pdo count to zero
- EC_WRITE_U8(fsm->request.data, 0); // zero Pdos mapped
+ // set assigned PDO count to zero
+ EC_WRITE_U8(fsm->request.data, 0); // zero PDOs assigned
fsm->request.data_size = 1;
ec_sdo_request_address(&fsm->request, 0x1C10 + fsm->sync_index, 0);
ecrt_sdo_request_write(&fsm->request);
if (fsm->slave->master->debug_level)
- EC_DBG("Setting number of assigned Pdos to zero.\n");
+ EC_DBG("Setting number of assigned PDOs to zero.\n");
fsm->state = ec_fsm_pdo_conf_state_zero_pdo_count;
ec_fsm_coe_transfer(fsm->fsm_coe, fsm->slave, &fsm->request);
@@ -610,48 +608,48 @@
/*****************************************************************************/
-/** Set the number of assigned Pdos to zero.
+/** Set the number of assigned PDOs to zero.
*/
void ec_fsm_pdo_conf_state_zero_pdo_count(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
if (ec_fsm_coe_exec(fsm->fsm_coe))
return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_WARN("Failed to clear Pdo assignment of SM%u.\n", fsm->sync_index);
- fsm->state = ec_fsm_pdo_state_error;
- return;
- }
-
- // the sync manager's assigned Pdos have been cleared
+ EC_WARN("Failed to clear PDO assignment of SM%u.\n", fsm->sync_index);
+ fsm->state = ec_fsm_pdo_state_error;
+ return;
+ }
+
+ // the sync manager's assigned PDOs have been cleared
ec_pdo_list_clear_pdos(&fsm->sync->pdos);
- // assign all Pdos belonging to the current sync manager
+ // assign all PDOs belonging to the current sync manager
- // find first Pdo
+ // find first PDO
if (!(fsm->pdo = ec_fsm_pdo_conf_action_next_pdo(fsm, &fsm->pdos.list))) {
if (fsm->slave->master->debug_level)
- EC_DBG("No Pdos to assign.\n");
+ EC_DBG("No PDOs to assign.\n");
// check for mapping to be altered
ec_fsm_pdo_conf_action_next_sync(fsm);
return;
}
- // assign first Pdo
+ // assign first PDO
fsm->pdo_pos = 1;
ec_fsm_pdo_conf_action_assign_pdo(fsm);
}
/*****************************************************************************/
-/** Assign a Pdo.
+/** Assign a PDO.
*/
void ec_fsm_pdo_conf_action_assign_pdo(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
EC_WRITE_U16(fsm->request.data, fsm->pdo->index);
@@ -661,7 +659,7 @@
ecrt_sdo_request_write(&fsm->request);
if (fsm->slave->master->debug_level)
- EC_DBG("Assigning Pdo 0x%04X at position %u.\n",
+ EC_DBG("Assigning PDO 0x%04X at position %u.\n",
fsm->pdo->index, fsm->pdo_pos);
fsm->state = ec_fsm_pdo_conf_state_assign_pdo;
@@ -671,32 +669,32 @@
/*****************************************************************************/
-/** Add a Pdo to the sync managers Pdo assignment.
+/** Add a PDO to the sync managers PDO assignment.
*/
void ec_fsm_pdo_conf_state_assign_pdo(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_WARN("Failed to assign Pdo 0x%04X at position %u of SM%u.\n",
+ EC_WARN("Failed to assign PDO 0x%04X at position %u of SM%u.\n",
fsm->pdo->index, fsm->pdo_pos, fsm->sync_index);
fsm->state = ec_fsm_pdo_state_error;
return;
}
- // find next Pdo
+ // find next PDO
if (!(fsm->pdo = ec_fsm_pdo_conf_action_next_pdo(fsm, &fsm->pdo->list))) {
- // no more Pdos to assign, set Pdo count
+ // no more PDOs to assign, set PDO count
EC_WRITE_U8(fsm->request.data, fsm->pdo_pos);
fsm->request.data_size = 1;
ec_sdo_request_address(&fsm->request, 0x1C10 + fsm->sync_index, 0);
ecrt_sdo_request_write(&fsm->request);
if (fsm->slave->master->debug_level)
- EC_DBG("Setting number of assigned Pdos to %u.\n", fsm->pdo_pos);
+ EC_DBG("Setting number of assigned PDOs to %u.\n", fsm->pdo_pos);
fsm->state = ec_fsm_pdo_conf_state_set_pdo_count;
ec_fsm_coe_transfer(fsm->fsm_coe, fsm->slave, &fsm->request);
@@ -704,36 +702,36 @@
return;
}
- // add next Pdo to assignment
+ // add next PDO to assignment
fsm->pdo_pos++;
ec_fsm_pdo_conf_action_assign_pdo(fsm);
}
/*****************************************************************************/
-/** Set the number of assigned Pdos.
+/** Set the number of assigned PDOs.
*/
void ec_fsm_pdo_conf_state_set_pdo_count(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_WARN("Failed to set number of assigned Pdos of SM%u.\n",
+ EC_WARN("Failed to set number of assigned PDOs of SM%u.\n",
fsm->sync_index);
fsm->state = ec_fsm_pdo_state_error;
return;
}
- // Pdos have been configured
+ // PDOs have been configured
ec_pdo_list_copy(&fsm->sync->pdos, &fsm->pdos);
if (fsm->slave->master->debug_level)
- EC_DBG("Successfully configured Pdo assignment of SM%u.\n",
+ EC_DBG("Successfully configured PDO assignment of SM%u.\n",
fsm->sync_index);
- // check if Pdo mapping has to be altered
+ // check if PDO mapping has to be altered
ec_fsm_pdo_conf_action_next_sync(fsm);
}
@@ -744,7 +742,7 @@
/** State: ERROR.
*/
void ec_fsm_pdo_state_error(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
)
{
}
@@ -754,9 +752,9 @@
/** State: END.
*/
void ec_fsm_pdo_state_end(
- ec_fsm_pdo_t *fsm /**< Pdo configuration state machine. */
- )
-{
-}
-
-/*****************************************************************************/
+ ec_fsm_pdo_t *fsm /**< PDO configuration state machine. */
+ )
+{
+}
+
+/*****************************************************************************/
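
For reference, the assignment-write sequence performed by the configuration states above — clear the count at subindex 0 of the 0x1C1x assignment object, write one PDO index per subindex, then write the final count — can be sketched as follows. The sdo_write() helper is a hypothetical stand-in, not a real master function:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a CoE SDO download: only prints the access. */
    static int sdo_write(uint16_t index, uint8_t subindex, uint32_t value,
            size_t size)
    {
        printf("SDO download 0x%04X:%02X <- 0x%0*X (%zu bytes)\n",
                (unsigned) index, (unsigned) subindex,
                (int) (size * 2), (unsigned) value, size);
        return 0;
    }

    /* Assign 'count' PDOs (indices in 'pdos') to sync manager 'sync_index'. */
    static int assign_pdos(uint8_t sync_index, const uint16_t *pdos, uint8_t count)
    {
        uint16_t assign_obj = 0x1C10 + sync_index;
        uint8_t i;

        if (sdo_write(assign_obj, 0, 0, 1))        /* clear the assigned count */
            return -1;
        for (i = 0; i < count; i++)                /* one PDO index per subindex */
            if (sdo_write(assign_obj, i + 1, pdos[i], 2))
                return -1;
        return sdo_write(assign_obj, 0, count, 1); /* write the final count */
    }

    int main(void)
    {
        const uint16_t pdos[] = { 0x1A00, 0x1A01 };
        return assign_pdos(3, pdos, 2); /* e.g. SM3 with two TxPDOs */
    }
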
--- a/master/fsm_pdo.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_pdo.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- EtherCAT Pdo configuration state machine structures.
+ EtherCAT PDO configuration state machine structures.
*/
/*****************************************************************************/
@@ -55,23 +48,23 @@
*/
typedef struct ec_fsm_pdo ec_fsm_pdo_t;
-/** Pdo configuration state machine.
+/** PDO configuration state machine.
*/
struct ec_fsm_pdo
{
void (*state)(ec_fsm_pdo_t *); /**< State function. */
ec_fsm_coe_t *fsm_coe; /**< CoE state machine to use. */
- ec_fsm_pdo_entry_t fsm_pdo_entry; /**< Pdo entry state machine. */
- ec_pdo_list_t pdos; /**< Pdo configuration. */
- ec_sdo_request_t request; /**< Sdo request. */
- ec_pdo_t slave_pdo; /**< Pdo actually appearing in a slave. */
+ ec_fsm_pdo_entry_t fsm_pdo_entry; /**< PDO entry state machine. */
+ ec_pdo_list_t pdos; /**< PDO configuration. */
+ ec_sdo_request_t request; /**< SDO request. */
+ ec_pdo_t slave_pdo; /**< PDO actually appearing in a slave. */
ec_slave_t *slave; /**< Slave the FSM runs on. */
uint8_t sync_index; /**< Current sync manager index. */
ec_sync_t *sync; /**< Current sync manager. */
- ec_pdo_t *pdo; /**< Current Pdo. */
- unsigned int pdo_pos; /**< Assignment position of current Pdos. */
- unsigned int pdo_count; /**< Number of assigned Pdos. */
+ ec_pdo_t *pdo; /**< Current PDO. */
+ unsigned int pdo_pos; /**< Assignment position of current PDOs. */
+ unsigned int pdo_count; /**< Number of assigned PDOs. */
};
/*****************************************************************************/
--- a/master/fsm_pdo_entry.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_pdo_entry.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,37 +2,30 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/** \file
- * EtherCAT Pdo mapping state machine.
+ * EtherCAT PDO mapping state machine.
*/
/*****************************************************************************/
@@ -67,7 +60,7 @@
/** Constructor.
*/
void ec_fsm_pdo_entry_init(
- ec_fsm_pdo_entry_t *fsm, /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm, /**< PDO mapping state machine. */
ec_fsm_coe_t *fsm_coe /**< CoE state machine to use. */
)
{
@@ -80,7 +73,7 @@
/** Destructor.
*/
void ec_fsm_pdo_entry_clear(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
ec_sdo_request_clear(&fsm->request);
@@ -88,12 +81,12 @@
/*****************************************************************************/
-/** Start reading a Pdo's entries.
+/** Start reading a PDO's entries.
*/
void ec_fsm_pdo_entry_start_reading(
- ec_fsm_pdo_entry_t *fsm, /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm, /**< PDO mapping state machine. */
ec_slave_t *slave, /**< slave to configure */
- ec_pdo_t *pdo /**< Pdo to read entries for. */
+ ec_pdo_t *pdo /**< PDO to read entries for. */
)
{
fsm->slave = slave;
@@ -106,12 +99,12 @@
/*****************************************************************************/
-/** Start Pdo mapping state machine.
+/** Start PDO mapping state machine.
*/
void ec_fsm_pdo_entry_start_configuration(
- ec_fsm_pdo_entry_t *fsm, /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm, /**< PDO mapping state machine. */
ec_slave_t *slave, /**< slave to configure */
- const ec_pdo_t *pdo /**< Pdo with the desired entries. */
+ const ec_pdo_t *pdo /**< PDO with the desired entries. */
)
{
fsm->slave = slave;
@@ -127,7 +120,7 @@
* \return false, if state machine has terminated
*/
int ec_fsm_pdo_entry_running(
- const ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ const ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
return fsm->state != ec_fsm_pdo_entry_state_end
@@ -141,7 +134,7 @@
* \return false, if state machine has terminated
*/
int ec_fsm_pdo_entry_exec(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
fsm->state(fsm);
@@ -155,7 +148,7 @@
* \return true, if the state machine terminated gracefully
*/
int ec_fsm_pdo_entry_success(
- const ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ const ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
return fsm->state == ec_fsm_pdo_entry_state_end;
@@ -165,10 +158,10 @@
* Reading state functions.
*****************************************************************************/
-/** Request reading the number of mapped Pdo entries.
+/** Request reading the number of mapped PDO entries.
*/
void ec_fsm_pdo_entry_read_state_start(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
ec_sdo_request_address(&fsm->request, fsm->target_pdo->index, 0);
@@ -181,7 +174,7 @@
/*****************************************************************************/
-/** Read number of mapped Pdo entries.
+/** Read number of mapped PDO entries.
*/
void ec_fsm_pdo_entry_read_state_count(
ec_fsm_pdo_entry_t *fsm /**< finite state machine */
@@ -191,13 +184,13 @@
return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_ERR("Failed to read number of mapped Pdo entries.\n");
+ EC_ERR("Failed to read number of mapped PDO entries.\n");
fsm->state = ec_fsm_pdo_entry_state_error;
return;
}
if (fsm->request.data_size != sizeof(uint8_t)) {
- EC_ERR("Invalid data size %u at uploading Sdo 0x%04X:%02X.\n",
+        EC_ERR("Invalid data size %u while uploading SDO 0x%04X:%02X.\n",
fsm->request.data_size, fsm->request.index,
fsm->request.subindex);
fsm->state = ec_fsm_pdo_entry_state_error;
@@ -207,16 +200,16 @@
fsm->entry_count = EC_READ_U8(fsm->request.data);
if (fsm->slave->master->debug_level)
- EC_DBG("%u Pdo entries mapped.\n", fsm->entry_count);
-
- // read first Pdo entry
+ EC_DBG("%u PDO entries mapped.\n", fsm->entry_count);
+
+ // read first PDO entry
fsm->entry_pos = 1;
ec_fsm_pdo_entry_read_action_next(fsm);
}
/*****************************************************************************/
-/** Read next Pdo entry.
+/** Read next PDO entry.
*/
void ec_fsm_pdo_entry_read_action_next(
ec_fsm_pdo_entry_t *fsm /**< finite state machine */
@@ -237,7 +230,7 @@
/*****************************************************************************/
-/** Read Pdo entry information.
+/** Read PDO entry information.
*/
void ec_fsm_pdo_entry_read_state_entry(
ec_fsm_pdo_entry_t *fsm /**< finite state machine */
@@ -246,13 +239,13 @@
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_ERR("Failed to read mapped Pdo entry.\n");
+ EC_ERR("Failed to read mapped PDO entry.\n");
fsm->state = ec_fsm_pdo_entry_state_error;
return;
}
if (fsm->request.data_size != sizeof(uint32_t)) {
- EC_ERR("Invalid data size %u at uploading Sdo 0x%04X:%02X.\n",
+        EC_ERR("Invalid data size %u while uploading SDO 0x%04X:%02X.\n",
fsm->request.data_size, fsm->request.index,
fsm->request.subindex);
fsm->state = ec_fsm_pdo_entry_state_error;
@@ -264,7 +257,7 @@
if (!(pdo_entry = (ec_pdo_entry_t *)
kmalloc(sizeof(ec_pdo_entry_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate Pdo entry.\n");
+ EC_ERR("Failed to allocate PDO entry.\n");
fsm->state = ec_fsm_pdo_entry_state_error;
return;
}
@@ -284,7 +277,7 @@
}
if (fsm->slave->master->debug_level) {
- EC_DBG("Pdo entry 0x%04X:%02X, %u bit, \"%s\".\n",
+ EC_DBG("PDO entry 0x%04X:%02X, %u bit, \"%s\".\n",
pdo_entry->index, pdo_entry->subindex,
pdo_entry->bit_length,
pdo_entry->name ? pdo_entry->name : "???");
@@ -292,7 +285,7 @@
list_add_tail(&pdo_entry->list, &fsm->target_pdo->entries);
- // next Pdo entry
+ // next PDO entry
fsm->entry_pos++;
ec_fsm_pdo_entry_read_action_next(fsm);
}
@@ -302,17 +295,17 @@
* Configuration state functions.
*****************************************************************************/
-/** Start Pdo mapping.
+/** Start PDO mapping.
*/
void ec_fsm_pdo_entry_conf_state_start(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
- )
-{
- // Pdo mapping has to be changed. Does the slave support this?
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
+ )
+{
+ // PDO mapping has to be changed. Does the slave support this?
if (!(fsm->slave->sii.mailbox_protocols & EC_MBOX_COE)
|| (fsm->slave->sii.has_general
&& !fsm->slave->sii.coe_details.enable_pdo_configuration)) {
- EC_WARN("Slave %u does not support changing the Pdo mapping!\n",
+ EC_WARN("Slave %u does not support changing the PDO mapping!\n",
fsm->slave->ring_position);
fsm->state = ec_fsm_pdo_entry_state_error;
return;
@@ -323,7 +316,7 @@
return;
}
- // set mapped Pdo entry count to zero
+ // set mapped PDO entry count to zero
EC_WRITE_U8(fsm->request.data, 0);
fsm->request.data_size = 1;
ec_sdo_request_address(&fsm->request, fsm->source_pdo->index, 0);
@@ -339,10 +332,10 @@
/*****************************************************************************/
-/** Process next Pdo entry.
+/** Process next PDO entry.
*/
ec_pdo_entry_t *ec_fsm_pdo_entry_conf_next_entry(
- const ec_fsm_pdo_entry_t *fsm, /**< Pdo mapping state machine. */
+ const ec_fsm_pdo_entry_t *fsm, /**< PDO mapping state machine. */
const struct list_head *list /**< current entry list item */
)
{
@@ -357,14 +350,14 @@
/** Set the number of mapped entries to zero.
*/
void ec_fsm_pdo_entry_conf_state_zero_entry_count(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
if (ec_fsm_coe_exec(fsm->fsm_coe))
return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_WARN("Failed to clear Pdo mapping.\n");
+ EC_WARN("Failed to clear PDO mapping.\n");
fsm->state = ec_fsm_pdo_entry_state_error;
return;
}
@@ -387,16 +380,16 @@
/*****************************************************************************/
-/** Starts to add a Pdo entry.
+/** Starts to add a PDO entry.
*/
void ec_fsm_pdo_entry_conf_action_map(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
uint32_t value;
if (fsm->slave->master->debug_level)
- EC_DBG("Mapping Pdo entry 0x%04X:%02X (%u bit) at position %u.\n",
+ EC_DBG("Mapping PDO entry 0x%04X:%02X (%u bit) at position %u.\n",
fsm->entry->index, fsm->entry->subindex,
fsm->entry->bit_length, fsm->entry_pos);
@@ -414,16 +407,16 @@
/*****************************************************************************/
-/** Add a Pdo entry.
+/** Add a PDO entry.
*/
void ec_fsm_pdo_entry_conf_state_map_entry(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_WARN("Failed to map Pdo entry 0x%04X:%02X (%u bit) to "
+ EC_WARN("Failed to map PDO entry 0x%04X:%02X (%u bit) to "
"position %u.\n", fsm->entry->index, fsm->entry->subindex,
fsm->entry->bit_length, fsm->entry_pos);
fsm->state = ec_fsm_pdo_entry_state_error;
@@ -441,7 +434,7 @@
ecrt_sdo_request_write(&fsm->request);
if (fsm->slave->master->debug_level)
- EC_DBG("Setting number of Pdo entries to %u.\n", fsm->entry_pos);
+ EC_DBG("Setting number of PDO entries to %u.\n", fsm->entry_pos);
fsm->state = ec_fsm_pdo_entry_conf_state_set_entry_count;
ec_fsm_coe_transfer(fsm->fsm_coe, fsm->slave, &fsm->request);
@@ -459,7 +452,7 @@
/** Set the number of entries.
*/
void ec_fsm_pdo_entry_conf_state_set_entry_count(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
@@ -471,7 +464,7 @@
}
if (fsm->slave->master->debug_level)
- EC_DBG("Successfully configured mapping for Pdo 0x%04X.\n",
+ EC_DBG("Successfully configured mapping for PDO 0x%04X.\n",
fsm->source_pdo->index);
fsm->state = ec_fsm_pdo_entry_state_end; // finished
@@ -484,7 +477,7 @@
/** State: ERROR.
*/
void ec_fsm_pdo_entry_state_error(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
)
{
}
@@ -494,9 +487,9 @@
/** State: END.
*/
void ec_fsm_pdo_entry_state_end(
- ec_fsm_pdo_entry_t *fsm /**< Pdo mapping state machine. */
- )
-{
-}
-
-/*****************************************************************************/
+ ec_fsm_pdo_entry_t *fsm /**< PDO mapping state machine. */
+ )
+{
+}
+
+/*****************************************************************************/
--- a/master/fsm_pdo_entry.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_pdo_entry.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,37 +2,30 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/** \file
- * EtherCAT Pdo entry configuration state machine structures.
+ * EtherCAT PDO entry configuration state machine structures.
*/
/*****************************************************************************/
@@ -52,20 +45,20 @@
*/
typedef struct ec_fsm_pdo_entry ec_fsm_pdo_entry_t;
-/** Pdo configuration state machine.
+/** PDO configuration state machine.
*/
struct ec_fsm_pdo_entry
{
void (*state)(ec_fsm_pdo_entry_t *); /**< state function */
ec_fsm_coe_t *fsm_coe; /**< CoE state machine to use */
- ec_sdo_request_t request; /**< Sdo request. */
+ ec_sdo_request_t request; /**< SDO request. */
ec_slave_t *slave; /**< Slave the FSM runs on. */
- ec_pdo_t *target_pdo; /**< Pdo to read the mapping for. */
- const ec_pdo_t *source_pdo; /**< Pdo with desired mapping. */
+ ec_pdo_t *target_pdo; /**< PDO to read the mapping for. */
+ const ec_pdo_t *source_pdo; /**< PDO with desired mapping. */
const ec_pdo_entry_t *entry; /**< Current entry. */
unsigned int entry_count; /**< Number of entries. */
- unsigned int entry_pos; /**< Position in Pdo mapping. */
+ unsigned int entry_pos; /**< Position in PDO mapping. */
};
/*****************************************************************************/
--- a/master/fsm_sii.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_sii.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/fsm_sii.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_sii.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/fsm_slave_config.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_slave_config.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -58,6 +51,7 @@
void ec_fsm_slave_config_state_safeop(ec_fsm_slave_config_t *);
void ec_fsm_slave_config_state_op(ec_fsm_slave_config_t *);
+void ec_fsm_slave_config_enter_init(ec_fsm_slave_config_t *);
void ec_fsm_slave_config_enter_mbox_sync(ec_fsm_slave_config_t *);
void ec_fsm_slave_config_enter_preop(ec_fsm_slave_config_t *);
void ec_fsm_slave_config_enter_sdo_conf(ec_fsm_slave_config_t *);
@@ -69,6 +63,8 @@
void ec_fsm_slave_config_state_end(ec_fsm_slave_config_t *);
void ec_fsm_slave_config_state_error(ec_fsm_slave_config_t *);
+void ec_fsm_slave_config_reconfigure(ec_fsm_slave_config_t *);
+
/*****************************************************************************/
/** Constructor.
@@ -78,9 +74,11 @@
ec_datagram_t *datagram, /**< datagram structure to use */
ec_fsm_change_t *fsm_change, /**< State change state machine to use. */
ec_fsm_coe_t *fsm_coe, /**< CoE state machine to use. */
- ec_fsm_pdo_t *fsm_pdo /**< Pdo configuration state machine to use. */
- )
-{
+ ec_fsm_pdo_t *fsm_pdo /**< PDO configuration state machine to use. */
+ )
+{
+ ec_sdo_request_init(&fsm->request_copy);
+
fsm->datagram = datagram;
fsm->fsm_change = fsm_change;
fsm->fsm_coe = fsm_coe;
@@ -95,6 +93,7 @@
ec_fsm_slave_config_t *fsm /**< slave state machine */
)
{
+ ec_sdo_request_clear(&fsm->request_copy);
}
/*****************************************************************************/
@@ -172,10 +171,17 @@
EC_DBG("Configuring slave %u...\n", fsm->slave->ring_position);
}
- // configuration will be done immediately; therefore reset the
- // force flag
- fsm->slave->force_config = 0;
-
+ ec_fsm_slave_config_enter_init(fsm);
+}
+
+/*****************************************************************************/
+
+/** Start state change to INIT.
+ */
+void ec_fsm_slave_config_enter_init(
+ ec_fsm_slave_config_t *fsm /**< slave state machine */
+ )
+{
ec_fsm_change_start(fsm->fsm_change, fsm->slave, EC_SLAVE_STATE_INIT);
ec_fsm_change_exec(fsm->fsm_change);
fsm->state = ec_fsm_slave_config_state_init;
@@ -431,7 +437,7 @@
/*****************************************************************************/
-/** Check for Sdo configurations to be applied.
+/** Check for SDO configurations to be applied.
*/
void ec_fsm_slave_config_enter_sdo_conf(
ec_fsm_slave_config_t *fsm /**< slave state machine */
@@ -440,17 +446,18 @@
ec_slave_t *slave = fsm->slave;
// No CoE configuration to be applied?
- if (list_empty(&slave->config->sdo_configs)) { // skip Sdo configuration
+ if (list_empty(&slave->config->sdo_configs)) { // skip SDO configuration
ec_fsm_slave_config_enter_pdo_conf(fsm);
return;
}
- // start Sdo configuration
+ // start SDO configuration
fsm->state = ec_fsm_slave_config_state_sdo_conf;
fsm->request = list_entry(fsm->slave->config->sdo_configs.next,
ec_sdo_request_t, list);
- ecrt_sdo_request_write(fsm->request);
- ec_fsm_coe_transfer(fsm->fsm_coe, fsm->slave, fsm->request);
+ ec_sdo_request_copy(&fsm->request_copy, fsm->request);
+ ecrt_sdo_request_write(&fsm->request_copy);
+ ec_fsm_coe_transfer(fsm->fsm_coe, fsm->slave, &fsm->request_copy);
ec_fsm_coe_exec(fsm->fsm_coe); // execute immediately
}
@@ -465,24 +472,30 @@
if (ec_fsm_coe_exec(fsm->fsm_coe)) return;
if (!ec_fsm_coe_success(fsm->fsm_coe)) {
- EC_ERR("Sdo configuration failed for slave %u.\n",
+ EC_ERR("SDO configuration failed for slave %u.\n",
fsm->slave->ring_position);
fsm->slave->error_flag = 1;
fsm->state = ec_fsm_slave_config_state_error;
return;
}
- // Another Sdo to configure?
+ if (!fsm->slave->config) { // config removed in the meantime
+ ec_fsm_slave_config_reconfigure(fsm);
+ return;
+ }
+
+ // Another SDO to configure?
if (fsm->request->list.next != &fsm->slave->config->sdo_configs) {
- fsm->request = list_entry(fsm->request->list.next, ec_sdo_request_t,
- list);
- ecrt_sdo_request_write(fsm->request);
- ec_fsm_coe_transfer(fsm->fsm_coe, fsm->slave, fsm->request);
+ fsm->request = list_entry(fsm->request->list.next,
+ ec_sdo_request_t, list);
+ ec_sdo_request_copy(&fsm->request_copy, fsm->request);
+ ecrt_sdo_request_write(&fsm->request_copy);
+ ec_fsm_coe_transfer(fsm->fsm_coe, fsm->slave, &fsm->request_copy);
ec_fsm_coe_exec(fsm->fsm_coe); // execute immediately
return;
}
- // All Sdos are now configured.
+ // All SDOs are now configured.
ec_fsm_slave_config_enter_pdo_conf(fsm);
}
@@ -494,7 +507,7 @@
ec_fsm_slave_config_t *fsm /**< slave state machine */
)
{
- // Start configuring Pdos
+ // Start configuring PDOs
ec_fsm_pdo_start_configuration(fsm->fsm_pdo, fsm->slave);
fsm->state = ec_fsm_slave_config_state_pdo_conf;
fsm->state(fsm); // execute immediately
@@ -511,8 +524,13 @@
if (ec_fsm_pdo_exec(fsm->fsm_pdo))
return;
+ if (!fsm->slave->config) { // config removed in the meantime
+ ec_fsm_slave_config_reconfigure(fsm);
+ return;
+ }
+
if (!ec_fsm_pdo_success(fsm->fsm_pdo)) {
- EC_WARN("Pdo configuration failed on slave %u.\n",
+ EC_WARN("PDO configuration failed on slave %u.\n",
fsm->slave->ring_position);
}
@@ -521,7 +539,7 @@
/*****************************************************************************/
-/** Check for Pdo sync managers to be configured.
+/** Check for PDO sync managers to be configured.
*/
void ec_fsm_slave_config_enter_pdo_sync(
ec_fsm_slave_config_t *fsm /**< slave state machine */
@@ -542,7 +560,7 @@
}
if (slave->sii.sync_count <= offset) {
- // no Pdo sync managers to configure
+ // no PDO sync managers to configure
ec_fsm_slave_config_enter_fmmu(fsm);
return;
}
@@ -570,7 +588,7 @@
/*****************************************************************************/
-/** Configure Pdo sync managers.
+/** Configure PDO sync managers.
*/
void ec_fsm_slave_config_state_pdo_sync(
ec_fsm_slave_config_t *fsm /**< slave state machine */
@@ -599,6 +617,11 @@
return;
}
+ if (!fsm->slave->config) { // config removed in the meantime
+ ec_fsm_slave_config_reconfigure(fsm);
+ return;
+ }
+
ec_fsm_slave_config_enter_fmmu(fsm);
}
@@ -639,7 +662,7 @@
if (!(sync = ec_slave_get_sync(slave, fmmu->sync_index))) {
slave->error_flag = 1;
fsm->state = ec_fsm_slave_config_state_error;
- EC_ERR("Failed to determine Pdo sync manager for FMMU on slave"
+ EC_ERR("Failed to determine PDO sync manager for FMMU on slave"
" %u!\n", slave->ring_position);
return;
}
@@ -769,6 +792,22 @@
fsm->state = ec_fsm_slave_config_state_end; // successful
}
+/*****************************************************************************/
+
+/** Reconfigure the slave starting at INIT.
+ */
+void ec_fsm_slave_config_reconfigure(
+ ec_fsm_slave_config_t *fsm /**< slave state machine */
+ )
+{
+ if (fsm->slave->master->debug_level) {
+ EC_DBG("Slave configuration for slave %u detached during "
+                "configuration. Reconfiguring.\n", fsm->slave->ring_position);
+ }
+
+ ec_fsm_slave_config_enter_init(fsm); // reconfigure
+}
+
/******************************************************************************
* Common state functions
*****************************************************************************/
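
The hunks above change how the slave configuration FSM copes with a slave configuration that the application detaches while the FSM is still working on it: after every asynchronous step (SDO configuration, PDO configuration, PDO sync manager setup) the FSM re-checks fsm->slave->config and, if it has gone away, restarts the slave from INIT via ec_fsm_slave_config_reconfigure(). The new request_copy member serves the same purpose: the CoE transfer operates on a private copy instead of a request object owned by that (possibly vanishing) configuration. Below is a minimal standalone sketch of the guard pattern only; every type and name in it is invented for illustration and is not part of the master sources.

    /* Toy model of the "config removed in the meantime" guard; illustrative
     * only, compiled as ordinary userspace C. */
    #include <stdio.h>
    #include <stddef.h>

    struct slave { void *config; };                /* NULL once detached */
    struct cfg_fsm { struct slave *slave; int step; };

    static void enter_init(struct cfg_fsm *fsm)
    {
        fsm->step = 0;                             /* restart configuration at INIT */
        printf("config detached, reconfiguring from INIT\n");
    }

    static void after_async_step(struct cfg_fsm *fsm)
    {
        if (!fsm->slave->config) {                 /* config removed in the meantime */
            enter_init(fsm);
            return;
        }
        fsm->step++;                               /* otherwise continue normally */
    }

    int main(void)
    {
        struct slave s = { NULL };
        struct cfg_fsm f = { &s, 3 };
        after_async_step(&f);                      /* prints the reconfigure message */
        return 0;
    }
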
--- a/master/fsm_slave_config.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_slave_config.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -62,12 +55,13 @@
ec_datagram_t *datagram; /**< Datagram used in the state machine. */
ec_fsm_change_t *fsm_change; /**< State change state machine. */
ec_fsm_coe_t *fsm_coe; /**< CoE state machine. */
- ec_fsm_pdo_t *fsm_pdo; /**< Pdo configuration state machine. */
+ ec_fsm_pdo_t *fsm_pdo; /**< PDO configuration state machine. */
ec_slave_t *slave; /**< Slave the FSM runs on. */
void (*state)(ec_fsm_slave_config_t *); /**< State function. */
unsigned int retries; /**< Retries on datagram timeout. */
- ec_sdo_request_t *request; /**< Sdo request for Sdo configuration. */
+ ec_sdo_request_t *request; /**< SDO request for SDO configuration. */
+ ec_sdo_request_t request_copy; /**< Copied SDO request. */
};
/*****************************************************************************/
--- a/master/fsm_slave_scan.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_slave_scan.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -72,7 +65,7 @@
ec_datagram_t *datagram, /**< Datagram to use. */
ec_fsm_slave_config_t *fsm_slave_config, /**< Slave configuration
state machine to use. */
- ec_fsm_pdo_t *fsm_pdo /**< Pdo configuration machine to use. */
+ ec_fsm_pdo_t *fsm_pdo /**< PDO configuration machine to use. */
)
{
fsm->datagram = datagram;
@@ -540,12 +533,12 @@
break;
case 0x0032:
if (ec_slave_fetch_sii_pdos( slave, (uint8_t *) cat_word,
- cat_size * 2, EC_DIR_INPUT)) // TxPdo
+ cat_size * 2, EC_DIR_INPUT)) // TxPDO
goto end;
break;
case 0x0033:
if (ec_slave_fetch_sii_pdos( slave, (uint8_t *) cat_word,
- cat_size * 2, EC_DIR_OUTPUT)) // RxPdo
+ cat_size * 2, EC_DIR_OUTPUT)) // RxPDO
goto end;
break;
default:
@@ -629,7 +622,7 @@
ec_slave_t *slave = fsm->slave;
if (slave->master->debug_level)
- EC_DBG("Scanning Pdo assignment and mapping of slave %u.\n",
+ EC_DBG("Scanning PDO assignment and mapping of slave %u.\n",
slave->ring_position);
fsm->state = ec_fsm_slave_scan_state_pdos;
ec_fsm_pdo_start_reading(fsm->fsm_pdo, slave);
@@ -652,7 +645,7 @@
return;
}
- // reading Pdo configuration finished
+ // reading PDO configuration finished
fsm->state = ec_fsm_slave_scan_state_end;
}
--- a/master/fsm_slave_scan.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/fsm_slave_scan.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -64,7 +57,7 @@
ec_datagram_t *datagram; /**< Datagram used in the state machine. */
ec_fsm_slave_config_t *fsm_slave_config; /**< Slave configuration state
machine to use. */
- ec_fsm_pdo_t *fsm_pdo; /**< Pdo configuration state machine to use. */
+ ec_fsm_pdo_t *fsm_pdo; /**< PDO configuration state machine to use. */
unsigned int retries; /**< Retries on datagram timeout. */
void (*state)(ec_fsm_slave_scan_t *); /**< State function. */
--- a/master/globals.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/globals.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -57,7 +50,7 @@
/** Number of state machine retries on datagram timeout. */
#define EC_FSM_RETRIES 3
-/** Seconds to wait before fetching Sdo dictionary
+/** Seconds to wait before fetching SDO dictionary
     after the slave has entered PREOP state. */
#define EC_WAIT_SDO_DICT 3
@@ -129,21 +122,21 @@
/** Supported mailbox protocols.
*/
enum {
- EC_MBOX_AOE = 0x01, /**< ADS-over-EtherCAT */
- EC_MBOX_EOE = 0x02, /**< Ethernet-over-EtherCAT */
- EC_MBOX_COE = 0x04, /**< CANopen-over-EtherCAT */
- EC_MBOX_FOE = 0x08, /**< File-Access-over-EtherCAT */
- EC_MBOX_SOE = 0x10, /**< Servo-Profile-over-EtherCAT */
+ EC_MBOX_AOE = 0x01, /**< ADS over EtherCAT */
+ EC_MBOX_EOE = 0x02, /**< Ethernet over EtherCAT */
+ EC_MBOX_COE = 0x04, /**< CANopen over EtherCAT */
+ EC_MBOX_FOE = 0x08, /**< File-Access over EtherCAT */
+ EC_MBOX_SOE = 0x10, /**< Servo-Profile over EtherCAT */
EC_MBOX_VOE = 0x20 /**< Vendor specific */
};
-/** Slave information interface CANopen-over-EtherCAT details flags.
+/** Slave information interface CANopen over EtherCAT details flags.
*/
typedef struct {
- uint8_t enable_sdo : 1; /**< Enable Sdo access. */
+ uint8_t enable_sdo : 1; /**< Enable SDO access. */
uint8_t enable_sdo_info : 1; /**< SDO information service available. */
- uint8_t enable_pdo_assign : 1; /**< Pdo mapping configurable. */
- uint8_t enable_pdo_configuration : 1; /**< Pdo configuration possible. */
+ uint8_t enable_pdo_assign : 1; /**< PDO mapping configurable. */
+ uint8_t enable_pdo_configuration : 1; /**< PDO configuration possible. */
uint8_t enable_upload_at_startup : 1; /**< ?. */
uint8_t enable_sdo_complete_access : 1; /**< Complete access possible. */
} ec_sii_coe_details_t;
--- a/master/ioctl.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/ioctl.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/mailbox.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/mailbox.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/mailbox.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/mailbox.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/master.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/master.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -59,16 +52,46 @@
/*****************************************************************************/
+#ifdef EC_HAVE_CYCLES
+
+/** Frame timeout in cycles.
+ */
+static cycles_t timeout_cycles;
+
+#else
+
+/** Frame timeout in jiffies.
+ */
+static unsigned long timeout_jiffies;
+
+#endif
+
+/*****************************************************************************/
+
void ec_master_clear_slave_configs(ec_master_t *);
void ec_master_clear_domains(ec_master_t *);
-static int ec_master_idle_thread(ec_master_t *);
-static int ec_master_operation_thread(ec_master_t *);
+static int ec_master_idle_thread(void *);
+static int ec_master_operation_thread(void *);
#ifdef EC_EOE
void ec_master_eoe_run(unsigned long);
#endif
/*****************************************************************************/
+/** Static variables initializer.
+*/
+void ec_master_init_static(void)
+{
+#ifdef EC_HAVE_CYCLES
+ timeout_cycles = (cycles_t) EC_IO_TIMEOUT /* us */ * (cpu_khz / 1000);
+#else
+    // one jiffy may always elapse between time measurements
+ timeout_jiffies = max(EC_IO_TIMEOUT * HZ / 1000000, 1);
+#endif
+}
+
+/*****************************************************************************/
+
/**
Master constructor.
\return 0 in case of success, else < 0
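
The ec_master_init_static() hunk above precomputes the frame timeout once, instead of on every ecrt_master_receive() call (see the corresponding hunk further below). EC_IO_TIMEOUT is given in microseconds, so with cycle counters available the limit becomes EC_IO_TIMEOUT * (cpu_khz / 1000) cycles (cpu_khz / 1000 being cycles per microsecond); otherwise it becomes EC_IO_TIMEOUT * HZ / 1000000 jiffies, clamped to at least one jiffy because a full timer tick can always pass between two time stamps. A small worked example of the jiffy conversion follows; the values 500 us and HZ = 250 are assumed for illustration only, the real numbers come from the build configuration.

    #include <stdio.h>

    int main(void)
    {
        unsigned long timeout_us = 500;              /* assumed EC_IO_TIMEOUT */
        unsigned long hz = 250;                      /* assumed tick rate */

        unsigned long timeout_jiffies = timeout_us * hz / 1000000;
        if (timeout_jiffies < 1)
            timeout_jiffies = 1;  /* one jiffy may always elapse between measurements */

        /* 500 us * 250 / 1000000 = 0, so the clamp yields 1 jiffy (4 ms at HZ=250). */
        printf("%lu us -> %lu jiffies\n", timeout_us, timeout_jiffies);
        return 0;
    }
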
@@ -122,6 +145,8 @@
master->stats.output_jiffies = 0;
master->frames_timed_out = 0;
+ master->thread = NULL;
+
#ifdef EC_EOE
init_timer(&master->eoe_timer);
master->eoe_timer.function = ec_master_eoe_run;
@@ -163,14 +188,22 @@
if (ec_cdev_init(&master->cdev, master, device_number))
goto out_clear_fsm;
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 15)
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)
+ master->class_device = device_create(class, NULL,
+ MKDEV(MAJOR(device_number), master->index), NULL,
+ "EtherCAT%u", master->index);
+#elif LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 26)
+ master->class_device = device_create(class, NULL,
+ MKDEV(MAJOR(device_number), master->index),
+ "EtherCAT%u", master->index);
+#elif LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 15)
+ master->class_device = class_device_create(class, NULL,
+ MKDEV(MAJOR(device_number), master->index), NULL,
+ "EtherCAT%u", master->index);
+#else
master->class_device = class_device_create(class,
- MKDEV(MAJOR(device_number), master->index),
- NULL, "EtherCAT%u", master->index);
-#else
- master->class_device = class_device_create(class, NULL,
- MKDEV(MAJOR(device_number), master->index),
- NULL, "EtherCAT%u", master->index);
+ MKDEV(MAJOR(device_number), master->index), NULL,
+ "EtherCAT%u", master->index);
#endif
if (IS_ERR(master->class_device)) {
EC_ERR("Failed to create class device!\n");
@@ -200,7 +233,12 @@
ec_master_t *master /**< EtherCAT master */
)
{
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 26)
+ device_unregister(master->class_device);
+#else
class_device_unregister(master->class_device);
+#endif
+
ec_cdev_clear(&master->cdev);
#ifdef EC_EOE
ec_master_clear_eoe_handlers(master);
@@ -310,16 +348,18 @@
*/
int ec_master_thread_start(
ec_master_t *master, /**< EtherCAT master */
- int (*thread_func)(ec_master_t *) /**< thread function to start */
+ int (*thread_func)(void *), /**< thread function to start */
+ const char *name /**< Thread name. */
)
{
- init_completion(&master->thread_can_terminate);
- init_completion(&master->thread_exit);
-
- EC_INFO("Starting master thread.\n");
- if (!(master->thread_id = kernel_thread((int (*)(void *)) thread_func,
- master, CLONE_KERNEL)))
+ EC_INFO("Starting %s thread.\n", name);
+ master->thread = kthread_run(thread_func, master, name);
+ if (IS_ERR(master->thread)) {
+ EC_ERR("Failed to start master thread (error %i)!\n",
+ (int) PTR_ERR(master->thread));
+ master->thread = NULL;
return -1;
+ }
return 0;
}
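
This hunk and the thread functions below replace the old kernel_thread()/daemonize()/SIGTERM/complete_and_exit() scheme with the kthread API: the thread is created with kthread_run(), loops on kthread_should_stop(), and is terminated with kthread_stop(), which also removes the need for the thread_can_terminate and thread_exit completions. A minimal standalone module sketch of that pattern follows; the names (worker, example-thread, example_init) are invented for the example and are not taken from the master sources.

    #include <linux/module.h>
    #include <linux/kthread.h>
    #include <linux/err.h>
    #include <linux/delay.h>

    static struct task_struct *thread;

    static int worker(void *data)
    {
        while (!kthread_should_stop()) {   /* run until kthread_stop() is called */
            /* ... one cycle of work ... */
            msleep(100);
        }
        return 0;                          /* value returned by kthread_stop() */
    }

    static int __init example_init(void)
    {
        thread = kthread_run(worker, NULL, "example-thread");
        return IS_ERR(thread) ? PTR_ERR(thread) : 0;
    }

    static void __exit example_exit(void)
    {
        kthread_stop(thread);              /* wakes the thread and waits for it */
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");
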
@@ -334,19 +374,16 @@
{
unsigned long sleep_jiffies;
- if (!master->thread_id) {
- EC_WARN("ec_master_thread_stop: Already finished!\n");
+ if (!master->thread) {
+ EC_WARN("ec_master_thread_stop(): Already finished!\n");
return;
}
if (master->debug_level)
EC_DBG("Stopping master thread.\n");
- // wait until thread is ready to receive the SIGTERM
- wait_for_completion(&master->thread_can_terminate);
-
- kill_proc(master->thread_id, SIGTERM, 1);
- wait_for_completion(&master->thread_exit);
+ kthread_stop(master->thread);
+ master->thread = NULL;
EC_INFO("Master thread exited.\n");
if (master->fsm_datagram.state != EC_DATAGRAM_SENT)
@@ -373,7 +410,8 @@
master->cb_data = master;
master->phase = EC_IDLE;
- if (ec_master_thread_start(master, ec_master_idle_thread)) {
+ if (ec_master_thread_start(master, ec_master_idle_thread,
+ "EtherCAT-IDLE")) {
master->phase = EC_ORPHANED;
return -1;
}
@@ -529,7 +567,8 @@
}
#endif
- if (ec_master_thread_start(master, ec_master_idle_thread))
+ if (ec_master_thread_start(master, ec_master_idle_thread,
+ "EtherCAT-IDLE"))
EC_WARN("Failed to restart master thread!\n");
#ifdef EC_EOE
ec_master_eoe_start(master);
@@ -823,13 +862,14 @@
/** Master kernel thread function for IDLE phase.
*/
-static int ec_master_idle_thread(ec_master_t *master)
-{
- daemonize("EtherCAT-IDLE");
- allow_signal(SIGTERM);
- complete(&master->thread_can_terminate);
-
- while (!signal_pending(current)) {
+static int ec_master_idle_thread(void *priv_data)
+{
+ ec_master_t *master = (ec_master_t *) priv_data;
+
+ if (master->debug_level)
+ EC_DBG("Idle thread running.\n");
+
+ while (!kthread_should_stop()) {
ec_datagram_output_stats(&master->fsm_datagram);
// receive
@@ -841,7 +881,8 @@
goto schedule;
// execute master state machine
- down(&master->master_sem);
+ if (down_interruptible(&master->master_sem))
+ break;
ec_fsm_master_exec(&master->fsm);
up(&master->master_sem);
@@ -861,23 +902,23 @@
}
}
- master->thread_id = 0;
if (master->debug_level)
EC_DBG("Master IDLE thread exiting...\n");
- complete_and_exit(&master->thread_exit, 0);
+ return 0;
}
/*****************************************************************************/
 /** Master kernel thread function for OPERATION phase.
*/
-static int ec_master_operation_thread(ec_master_t *master)
-{
- daemonize("EtherCAT-OP");
- allow_signal(SIGTERM);
- complete(&master->thread_can_terminate);
-
- while (!signal_pending(current)) {
+static int ec_master_operation_thread(void *priv_data)
+{
+ ec_master_t *master = (ec_master_t *) priv_data;
+
+ if (master->debug_level)
+ EC_DBG("Operation thread running.\n");
+
+ while (!kthread_should_stop()) {
ec_datagram_output_stats(&master->fsm_datagram);
if (master->injection_seq_rt != master->injection_seq_fsm ||
master->fsm_datagram.state == EC_DATAGRAM_SENT ||
@@ -888,7 +929,8 @@
ec_master_output_stats(master);
// execute master state machine
- down(&master->master_sem);
+ if (down_interruptible(&master->master_sem))
+ break;
ec_fsm_master_exec(&master->fsm);
up(&master->master_sem);
@@ -905,16 +947,15 @@
}
}
- master->thread_id = 0;
if (master->debug_level)
EC_DBG("Master OP thread exiting...\n");
- complete_and_exit(&master->thread_exit, 0);
+ return 0;
}
/*****************************************************************************/
#ifdef EC_EOE
-/** Starts Ethernet-over-EtherCAT processing on demand.
+/** Starts Ethernet over EtherCAT processing on demand.
*/
void ec_master_eoe_start(ec_master_t *master /**< EtherCAT master */)
{
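
Note: inside both cyclic threads the master semaphore is now taken with down_interruptible(), and the loop is left if acquisition does not succeed, instead of blocking uninterruptibly in down(). A rough sketch of the loop shape only; the state-machine work is elided and the private data is simplified to a bare semaphore:

    #include <linux/version.h>
    #include <linux/kthread.h>
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)
    #include <linux/semaphore.h>    /* moved out of asm/ in 2.6.27, as in master.h below */
    #else
    #include <asm/semaphore.h>
    #endif

    static int cyclic_thread(void *priv_data)
    {
        struct semaphore *sem = priv_data;  /* illustrative; the patch passes the master */

        while (!kthread_should_stop()) {
            if (down_interruptible(sem))
                break;                      /* interrupted: stop cycling */
            /* ... execute master state machine ... */
            up(sem);
            /* ... queue datagrams, then sleep until the next cycle ... */
        }
        return 0;
    }
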
@@ -941,7 +982,7 @@
/*****************************************************************************/
-/** Stops the Ethernet-over-EtherCAT processing.
+/** Stops the Ethernet over EtherCAT processing.
*/
void ec_master_eoe_stop(ec_master_t *master /**< EtherCAT master */)
{
@@ -955,7 +996,7 @@
/*****************************************************************************/
-/** Does the Ethernet-over-EtherCAT processing.
+/** Does the Ethernet over EtherCAT processing.
*/
void ec_master_eoe_run(unsigned long data /**< master pointer */)
{
@@ -1294,7 +1335,8 @@
master->release_cb = master->ext_release_cb;
master->cb_data = master->ext_cb_data;
- if (ec_master_thread_start(master, ec_master_operation_thread)) {
+ if (ec_master_thread_start(master, ec_master_operation_thread,
+ "EtherCAT-OP")) {
EC_ERR("Failed to start master thread!\n");
return -1;
}
@@ -1340,33 +1382,21 @@
void ecrt_master_receive(ec_master_t *master)
{
ec_datagram_t *datagram, *next;
-#ifdef EC_HAVE_CYCLES
- cycles_t cycles_timeout;
-#else
- unsigned long diff_ms, timeout_ms;
-#endif
unsigned int frames_timed_out = 0;
// receive datagrams
ec_device_poll(&master->main_device);
-#ifdef EC_HAVE_CYCLES
- cycles_timeout = (cycles_t) EC_IO_TIMEOUT /* us */ * (cpu_khz / 1000);
-#else
- timeout_ms = max(EC_IO_TIMEOUT /* us */ / 1000, 2);
-#endif
-
// dequeue all datagrams that timed out
list_for_each_entry_safe(datagram, next, &master->datagram_queue, queue) {
if (datagram->state != EC_DATAGRAM_SENT) continue;
#ifdef EC_HAVE_CYCLES
if (master->main_device.cycles_poll - datagram->cycles_sent
- > cycles_timeout) {
+ > timeout_cycles) {
#else
- diff_ms = (master->main_device.jiffies_poll
- - datagram->jiffies_sent) * 1000 / HZ;
- if (diff_ms > timeout_ms) {
+ if (master->main_device.jiffies_poll - datagram->jiffies_sent
+ > timeout_jiffies) {
#endif
frames_timed_out = 1;
list_del_init(&datagram->queue);
@@ -1375,16 +1405,16 @@
ec_master_output_stats(master);
if (unlikely(master->debug_level > 0)) {
+ unsigned int time_us;
+#ifdef EC_HAVE_CYCLES
+ time_us = (unsigned int) (master->main_device.cycles_poll -
+ datagram->cycles_sent) * 1000 / cpu_khz;
+#else
+ time_us = (unsigned int) ((master->main_device.jiffies_poll -
+ datagram->jiffies_sent) * 1000000 / HZ);
+#endif
EC_DBG("TIMED OUT datagram %08x, index %02X waited %u us.\n",
- (unsigned int) datagram, datagram->index,
-#ifdef EC_HAVE_CYCLES
- (unsigned int) (master->main_device.cycles_poll
- - datagram->cycles_sent) * 1000 / cpu_khz
-#else
- (unsigned int) (diff_ms * 1000)
-#endif
- );
-
+ (unsigned int) datagram, datagram->index, time_us);
}
}
}
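
Note: ecrt_master_receive() no longer converts EC_IO_TIMEOUT on every call; the hunk above compares against precomputed timeout_cycles / timeout_jiffies values. These are presumably file-scope statics filled in once by the new ec_master_init_static() (declared in master.h below and called from module.c), using the same conversions the removed inline code performed. A sketch under that assumption; EC_IO_TIMEOUT (in microseconds) is assumed to be visible here:

    #include <linux/kernel.h>   /* max() */
    #include <linux/jiffies.h>  /* HZ */
    #include <asm/timex.h>      /* cycles_t, cpu_khz (with EC_HAVE_CYCLES) */

    #ifdef EC_HAVE_CYCLES
    static cycles_t timeout_cycles;
    #else
    static unsigned long timeout_jiffies;
    #endif

    void ec_master_init_static(void)
    {
    #ifdef EC_HAVE_CYCLES
        /* cpu_khz / 1000 = TSC cycles per microsecond */
        timeout_cycles = (cycles_t) EC_IO_TIMEOUT * (cpu_khz / 1000);
    #else
        /* at least one jiffy, since a full jiffy may elapse between measurements */
        timeout_jiffies = max(EC_IO_TIMEOUT * HZ / 1000000, 1);
    #endif
    }

The debug branch converts the elapsed time back to microseconds the same way (cycles / (cpu_khz/1000), or jiffies * 1000000 / HZ) before printing it.
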
@@ -1404,7 +1434,7 @@
if (master->debug_level)
EC_DBG("ecrt_master_slave_config(master = 0x%x, alias = %u, "
- "position = %u, vendor_id = %u, product_code = %u)\n",
+ "position = %u, vendor_id = 0x%x, product_code = 0x%x)\n",
(u32) master, alias, position, vendor_id, product_code);
list_for_each_entry(sc, &master->configs, list) {
--- a/master/master.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/master.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -41,10 +34,16 @@
#ifndef __EC_MASTER_H__
#define __EC_MASTER_H__
+#include <linux/version.h>
#include <linux/list.h>
#include <linux/timer.h>
#include <linux/wait.h>
+#include <linux/kthread.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)
+#include <linux/semaphore.h>
+#else
#include <asm/semaphore.h>
+#endif
#include "device.h"
#include "domain.h"
@@ -87,7 +86,11 @@
unsigned int reserved; /**< \a True, if the master is in use. */
ec_cdev_t cdev; /**< Master character device. */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 26)
+ struct device *class_device; /**< Master class device. */
+#else
struct class_device *class_device; /**< Master class device. */
+#endif
struct semaphore master_sem; /**< Master semaphore. */
ec_device_t main_device; /**< EtherCAT main device. */
@@ -134,20 +137,12 @@
unsigned int frames_timed_out; /**< There were frame timeouts in the last
call to ecrt_master_receive(). */
- int thread_id; /**< Master thread PID. */
- struct completion thread_can_terminate; /**< Thread termination completion
- object. When stopping the
- thread, it must be assured, that
- it 'hears' a SIGTERM, therefore
- the allow_singal() function must
- have been called.
- */
- struct completion thread_exit; /**< Thread completion object. */
+ struct task_struct *thread; /**< Master thread. */
#ifdef EC_EOE
struct timer_list eoe_timer; /**< EoE timer object. */
unsigned int eoe_running; /**< \a True, if EoE processing is active. */
- struct list_head eoe_handlers; /**< Ethernet-over-EtherCAT handlers. */
+ struct list_head eoe_handlers; /**< Ethernet over EtherCAT handlers. */
#endif
spinlock_t internal_lock; /**< Spinlock used in \a IDLE phase. */
@@ -162,13 +157,16 @@
wait_queue_head_t sii_queue; /**< Wait queue for SII
write requests from user space. */
- struct list_head slave_sdo_requests; /**< Sdo access requests. */
- wait_queue_head_t sdo_queue; /**< Wait queue for Sdo access requests
+ struct list_head slave_sdo_requests; /**< SDO access requests. */
+ wait_queue_head_t sdo_queue; /**< Wait queue for SDO access requests
from user space. */
};
/*****************************************************************************/
+// static functions
+void ec_master_init_static(void);
+
// master creation/deletion
int ec_master_init(ec_master_t *, unsigned int, const uint8_t *,
const uint8_t *, dev_t, struct class *);
--- a/master/module.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/module.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -132,6 +125,9 @@
goto out_class;
}
}
+
+ // initialize static master variables
+ ec_master_init_static();
if (master_count) {
if (!(masters = kmalloc(sizeof(ec_master_t) * master_count,
--- a/master/pdo.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/pdo.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -44,10 +37,10 @@
/*****************************************************************************/
-/** Pdo constructor.
+/** PDO constructor.
*/
void ec_pdo_init(
- ec_pdo_t *pdo /**< EtherCAT Pdo */
+ ec_pdo_t *pdo /**< EtherCAT PDO */
)
{
pdo->sync_index = -1; // not assigned
@@ -57,7 +50,7 @@
/*****************************************************************************/
-/** Pdo copy constructor.
+/** PDO copy constructor.
*/
int ec_pdo_init_copy(ec_pdo_t *pdo, const ec_pdo_t *other_pdo)
{
@@ -82,9 +75,9 @@
/*****************************************************************************/
-/** Pdo destructor.
- */
-void ec_pdo_clear(ec_pdo_t *pdo /**< EtherCAT Pdo. */)
+/** PDO destructor.
+ */
+void ec_pdo_clear(ec_pdo_t *pdo /**< EtherCAT PDO. */)
{
if (pdo->name)
kfree(pdo->name);
@@ -94,13 +87,13 @@
/*****************************************************************************/
-/** Clear Pdo entry list.
- */
-void ec_pdo_clear_entries(ec_pdo_t *pdo /**< EtherCAT Pdo. */)
+/** Clear PDO entry list.
+ */
+void ec_pdo_clear_entries(ec_pdo_t *pdo /**< EtherCAT PDO. */)
{
ec_pdo_entry_t *entry, *next;
- // free all Pdo entries
+ // free all PDO entries
list_for_each_entry_safe(entry, next, &pdo->entries, list) {
list_del(&entry->list);
ec_pdo_entry_clear(entry);
@@ -110,10 +103,10 @@
/*****************************************************************************/
-/** Set Pdo name.
+/** Set PDO name.
*/
int ec_pdo_set_name(
- ec_pdo_t *pdo, /**< Pdo. */
+ ec_pdo_t *pdo, /**< PDO. */
const char *name /**< New name. */
)
{
@@ -127,7 +120,7 @@
if (name && (len = strlen(name))) {
if (!(pdo->name = (char *) kmalloc(len + 1, GFP_KERNEL))) {
- EC_ERR("Failed to allocate Pdo name.\n");
+ EC_ERR("Failed to allocate PDO name.\n");
return -1;
}
memcpy(pdo->name, name, len + 1);
@@ -140,7 +133,7 @@
/*****************************************************************************/
-/** Add a new Pdo entry to the configuration.
+/** Add a new PDO entry to the configuration.
*/
ec_pdo_entry_t *ec_pdo_add_entry(
ec_pdo_t *pdo,
@@ -152,7 +145,7 @@
ec_pdo_entry_t *entry;
if (!(entry = kmalloc(sizeof(ec_pdo_entry_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate memory for Pdo entry.\n");
+ EC_ERR("Failed to allocate memory for PDO entry.\n");
return NULL;
}
@@ -166,7 +159,7 @@
/*****************************************************************************/
-/** Copy Pdo entries from another Pdo.
+/** Copy PDO entries from another PDO.
*/
int ec_pdo_copy_entries(ec_pdo_t *pdo, const ec_pdo_t *other)
{
@@ -177,7 +170,7 @@
list_for_each_entry(other_entry, &other->entries, list) {
if (!(entry = (ec_pdo_entry_t *)
kmalloc(sizeof(ec_pdo_entry_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate memory for Pdo entry copy.\n");
+ EC_ERR("Failed to allocate memory for PDO entry copy.\n");
return -1;
}
@@ -194,14 +187,14 @@
/*****************************************************************************/
-/** Compares the entries of two Pdos.
- *
- * \retval 1 The entries of the given Pdos are equal.
- * \retval 0 The entries of the given Pdos differ.
+/** Compares the entries of two PDOs.
+ *
+ * \retval 1 The entries of the given PDOs are equal.
+ * \retval 0 The entries of the given PDOs differ.
*/
int ec_pdo_equal_entries(
- const ec_pdo_t *pdo1, /**< First Pdo. */
- const ec_pdo_t *pdo2 /**< Second Pdo. */
+ const ec_pdo_t *pdo1, /**< First PDO. */
+ const ec_pdo_t *pdo2 /**< Second PDO. */
)
{
const struct list_head *head1, *head2, *item1, *item2;
@@ -230,12 +223,12 @@
/*****************************************************************************/
-/** Get the number of Pdo entries.
- *
- * \return Number of Pdo entries.
+/** Get the number of PDO entries.
+ *
+ * \return Number of PDO entries.
*/
unsigned int ec_pdo_entry_count(
- const ec_pdo_t *pdo /**< Pdo. */
+ const ec_pdo_t *pdo /**< PDO. */
)
{
const ec_pdo_entry_t *entry;
@@ -250,12 +243,12 @@
/*****************************************************************************/
-/** Finds a Pdo entry via its position in the list.
+/** Finds a PDO entry via its position in the list.
*
* Const version.
*/
const ec_pdo_entry_t *ec_pdo_find_entry_by_pos_const(
- const ec_pdo_t *pdo, /**< Pdo. */
+ const ec_pdo_t *pdo, /**< PDO. */
unsigned int pos /**< Position in the list. */
)
{
--- a/master/pdo.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/pdo.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -50,14 +43,14 @@
/*****************************************************************************/
-/** Pdo description.
+/** PDO description.
*/
typedef struct {
struct list_head list; /**< List item. */
- uint16_t index; /**< Pdo index. */
+ uint16_t index; /**< PDO index. */
int8_t sync_index; /**< Assigned sync manager. \todo remove? */
- char *name; /**< Pdo name. */
- struct list_head entries; /**< List of Pdo entries. */
+ char *name; /**< PDO name. */
+ struct list_head entries; /**< List of PDO entries. */
} ec_pdo_t;
/*****************************************************************************/
--- a/master/pdo_entry.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/pdo_entry.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -44,10 +37,10 @@
/*****************************************************************************/
-/** Pdo entry constructor.
+/** PDO entry constructor.
*/
void ec_pdo_entry_init(
- ec_pdo_entry_t *entry /**< Pdo entry. */
+ ec_pdo_entry_t *entry /**< PDO entry. */
)
{
entry->name = NULL;
@@ -55,11 +48,11 @@
/*****************************************************************************/
-/** Pdo entry copy constructor.
+/** PDO entry copy constructor.
*/
int ec_pdo_entry_init_copy(
- ec_pdo_entry_t *entry, /**< Pdo entry. */
- const ec_pdo_entry_t *other /**< Pdo entry to copy from. */
+ ec_pdo_entry_t *entry, /**< PDO entry. */
+ const ec_pdo_entry_t *other /**< PDO entry to copy from. */
)
{
entry->index = other->index;
@@ -75,9 +68,9 @@
/*****************************************************************************/
-/** Pdo entry destructor.
+/** PDO entry destructor.
*/
-void ec_pdo_entry_clear(ec_pdo_entry_t *entry /**< Pdo entry. */)
+void ec_pdo_entry_clear(ec_pdo_entry_t *entry /**< PDO entry. */)
{
if (entry->name)
kfree(entry->name);
@@ -85,10 +78,10 @@
/*****************************************************************************/
-/** Set Pdo entry name.
+/** Set PDO entry name.
*/
int ec_pdo_entry_set_name(
- ec_pdo_entry_t *entry, /**< Pdo entry. */
+ ec_pdo_entry_t *entry, /**< PDO entry. */
const char *name /**< New name. */
)
{
@@ -102,7 +95,7 @@
if (name && (len = strlen(name))) {
if (!(entry->name = (char *) kmalloc(len + 1, GFP_KERNEL))) {
- EC_ERR("Failed to allocate Pdo entry name.\n");
+ EC_ERR("Failed to allocate PDO entry name.\n");
return -1;
}
memcpy(entry->name, name, len + 1);
@@ -115,14 +108,14 @@
/*****************************************************************************/
-/** Compares two Pdo entries.
+/** Compares two PDO entries.
*
* \retval 1 The entries are equal.
* \retval 0 The entries differ.
*/
int ec_pdo_entry_equal(
- const ec_pdo_entry_t *entry1, /**< First Pdo entry. */
- const ec_pdo_entry_t *entry2 /**< Second Pdo entry. */
+ const ec_pdo_entry_t *entry1, /**< First PDO entry. */
+ const ec_pdo_entry_t *entry2 /**< Second PDO entry. */
)
{
return entry1->index == entry2->index
--- a/master/pdo_entry.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/pdo_entry.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -49,12 +42,12 @@
/*****************************************************************************/
-/** Pdo entry description.
+/** PDO entry description.
*/
typedef struct {
struct list_head list; /**< list item */
- uint16_t index; /**< Pdo entry index */
- uint8_t subindex; /**< Pdo entry subindex */
+ uint16_t index; /**< PDO entry index */
+ uint8_t subindex; /**< PDO entry subindex */
char *name; /**< entry name */
uint8_t bit_length; /**< entry length in bit */
} ec_pdo_entry_t;
--- a/master/pdo_list.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/pdo_list.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- EtherCAT Pdo list methods.
+ EtherCAT PDO list methods.
*/
/*****************************************************************************/
@@ -49,10 +42,10 @@
/*****************************************************************************/
-/** Pdo list constructor.
+/** PDO list constructor.
*/
void ec_pdo_list_init(
- ec_pdo_list_t *pl /**< Pdo list. */
+ ec_pdo_list_t *pl /**< PDO list. */
)
{
INIT_LIST_HEAD(&pl->list);
@@ -60,18 +53,18 @@
/*****************************************************************************/
-/** Pdo list destructor.
- */
-void ec_pdo_list_clear(ec_pdo_list_t *pl /**< Pdo list. */)
+/** PDO list destructor.
+ */
+void ec_pdo_list_clear(ec_pdo_list_t *pl /**< PDO list. */)
{
ec_pdo_list_clear_pdos(pl);
}
/*****************************************************************************/
-/** Clears the list of mapped Pdos.
- */
-void ec_pdo_list_clear_pdos(ec_pdo_list_t *pl /**< Pdo list. */)
+/** Clears the list of mapped PDOs.
+ */
+void ec_pdo_list_clear_pdos(ec_pdo_list_t *pl /**< PDO list. */)
{
ec_pdo_t *pdo, *next;
@@ -84,12 +77,12 @@
/*****************************************************************************/
-/** Calculates the total size of the mapped Pdo entries.
+/** Calculates the total size of the mapped PDO entries.
*
* \retval Data size in byte.
*/
uint16_t ec_pdo_list_total_size(
- const ec_pdo_list_t *pl /**< Pdo list. */
+ const ec_pdo_list_t *pl /**< PDO list. */
)
{
unsigned int bit_size;
@@ -114,20 +107,20 @@
/*****************************************************************************/
-/** Add a new Pdo to the list.
- *
- * \retval >0 Pointer to new Pdo.
+/** Add a new PDO to the list.
+ *
+ * \retval >0 Pointer to new PDO.
* \retval NULL No memory.
*/
ec_pdo_t *ec_pdo_list_add_pdo(
- ec_pdo_list_t *pl, /**< Pdo list. */
- uint16_t index /**< Pdo index. */
+ ec_pdo_list_t *pl, /**< PDO list. */
+ uint16_t index /**< PDO index. */
)
{
ec_pdo_t *pdo;
if (!(pdo = (ec_pdo_t *) kmalloc(sizeof(ec_pdo_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate memory for Pdo.\n");
+ EC_ERR("Failed to allocate memory for PDO.\n");
return NULL;
}
@@ -139,26 +132,26 @@
/*****************************************************************************/
-/** Add the copy of an existing Pdo to the list.
+/** Add the copy of an existing PDO to the list.
*
* \return 0 on success, else < 0
*/
int ec_pdo_list_add_pdo_copy(
- ec_pdo_list_t *pl, /**< Pdo list. */
- const ec_pdo_t *pdo /**< Pdo to add. */
+ ec_pdo_list_t *pl, /**< PDO list. */
+ const ec_pdo_t *pdo /**< PDO to add. */
)
{
ec_pdo_t *mapped_pdo;
- // Pdo already mapped?
+ // PDO already mapped?
list_for_each_entry(mapped_pdo, &pl->list, list) {
if (mapped_pdo->index != pdo->index) continue;
- EC_ERR("Pdo 0x%04X is already mapped!\n", pdo->index);
+ EC_ERR("PDO 0x%04X is already mapped!\n", pdo->index);
return -1;
}
if (!(mapped_pdo = kmalloc(sizeof(ec_pdo_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate Pdo memory.\n");
+ EC_ERR("Failed to allocate PDO memory.\n");
return -1;
}
@@ -173,20 +166,20 @@
/*****************************************************************************/
-/** Makes a deep copy of another Pdo list.
+/** Makes a deep copy of another PDO list.
*
* \return 0 on success, else < 0
*/
int ec_pdo_list_copy(
- ec_pdo_list_t *pl, /**< Pdo list. */
- const ec_pdo_list_t *other /**< Pdo list to copy from. */
+ ec_pdo_list_t *pl, /**< PDO list. */
+ const ec_pdo_list_t *other /**< PDO list to copy from. */
)
{
ec_pdo_t *other_pdo;
ec_pdo_list_clear_pdos(pl);
- // Pdo already mapped?
+ // PDO already mapped?
list_for_each_entry(other_pdo, &other->list, list) {
if (ec_pdo_list_add_pdo_copy(pl, other_pdo))
return -1;
@@ -197,13 +190,13 @@
/*****************************************************************************/
-/** Compares two Pdo lists.
- *
- * Only the list is compared, not the Pdo entries (i. e. the Pdo
+/** Compares two PDO lists.
+ *
+ * Only the list is compared, not the PDO entries (i. e. the PDO
* mapping).
*
- * \retval 1 The given Pdo lists are equal.
- * \retval 0 The given Pdo lists differ.
+ * \retval 1 The given PDO lists are equal.
+ * \retval 0 The given PDO lists differ.
*/
int ec_pdo_list_equal(
const ec_pdo_list_t *pl1, /**< First list. */
@@ -237,11 +230,11 @@
/*****************************************************************************/
-/** Finds a Pdo with the given index.
+/** Finds a PDO with the given index.
*/
ec_pdo_t *ec_pdo_list_find_pdo(
- const ec_pdo_list_t *pl, /**< Pdo list. */
- uint16_t index /**< Pdo index. */
+ const ec_pdo_list_t *pl, /**< PDO list. */
+ uint16_t index /**< PDO index. */
)
{
ec_pdo_t *pdo;
@@ -257,11 +250,11 @@
/*****************************************************************************/
-/** Finds a Pdo with the given index and returns a const pointer.
+/** Finds a PDO with the given index and returns a const pointer.
*/
const ec_pdo_t *ec_pdo_list_find_pdo_const(
- const ec_pdo_list_t *pl, /**< Pdo list. */
- uint16_t index /**< Pdo index. */
+ const ec_pdo_list_t *pl, /**< PDO list. */
+ uint16_t index /**< PDO index. */
)
{
const ec_pdo_t *pdo;
@@ -277,12 +270,12 @@
/*****************************************************************************/
-/** Finds a Pdo via its position in the list.
+/** Finds a PDO via its position in the list.
*
* Const version.
*/
const ec_pdo_t *ec_pdo_list_find_pdo_by_pos_const(
- const ec_pdo_list_t *pl, /**< Pdo list. */
+ const ec_pdo_list_t *pl, /**< PDO list. */
unsigned int pos /**< Position in the list. */
)
{
@@ -299,12 +292,12 @@
/*****************************************************************************/
-/** Get the number of Pdos in the list.
- *
- * \return Number of Pdos.
+/** Get the number of PDOs in the list.
+ *
+ * \return Number of PDOs.
*/
unsigned int ec_pdo_list_count(
- const ec_pdo_list_t *pl /**< Pdo list. */
+ const ec_pdo_list_t *pl /**< PDO list. */
)
{
const ec_pdo_t *pdo;
@@ -319,10 +312,10 @@
/*****************************************************************************/
-/** Outputs the Pdos in the list.
+/** Outputs the PDOs in the list.
*/
void ec_pdo_list_print(
- const ec_pdo_list_t *pl /**< Pdo list. */
+ const ec_pdo_list_t *pl /**< PDO list. */
)
{
const ec_pdo_t *pdo;
--- a/master/pdo_list.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/pdo_list.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- EtherCAT Pdo list structure.
+ EtherCAT PDO list structure.
*/
/*****************************************************************************/
@@ -50,10 +43,10 @@
/*****************************************************************************/
-/** EtherCAT Pdo list.
+/** EtherCAT PDO list.
*/
typedef struct {
- struct list_head list; /**< List of Pdos. */
+ struct list_head list; /**< List of PDOs. */
} ec_pdo_list_t;
/*****************************************************************************/
--- a/master/sdo.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sdo.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- CANopen Sdo functions.
+ CANopen SDO functions.
*/
/*****************************************************************************/
@@ -49,9 +42,9 @@
/** Constructor.
*/
void ec_sdo_init(
- ec_sdo_t *sdo, /**< Sdo. */
+ ec_sdo_t *sdo, /**< SDO. */
ec_slave_t *slave, /**< Parent slave. */
- uint16_t index /**< Sdo index. */
+ uint16_t index /**< SDO index. */
)
{
sdo->slave = slave;
@@ -64,12 +57,12 @@
/*****************************************************************************/
-/** Sdo destructor.
+/** SDO destructor.
*
- * Clears and frees an Sdo object.
+ * Clears and frees an SDO object.
*/
void ec_sdo_clear(
- ec_sdo_t *sdo /**< Sdo. */
+ ec_sdo_t *sdo /**< SDO. */
)
{
ec_sdo_entry_t *entry, *next;
@@ -87,13 +80,13 @@
/*****************************************************************************/
-/** Get an Sdo entry from an Sdo via its subindex.
+/** Get an SDO entry from an SDO via its subindex.
*
- * \retval >0 Pointer to the requested Sdo entry.
- * \retval NULL Sdo entry not found.
+ * \retval >0 Pointer to the requested SDO entry.
+ * \retval NULL SDO entry not found.
*/
ec_sdo_entry_t *ec_sdo_get_entry(
- ec_sdo_t *sdo, /**< Sdo. */
+ ec_sdo_t *sdo, /**< SDO. */
uint8_t subindex /**< Entry subindex. */
)
{
@@ -110,15 +103,15 @@
/*****************************************************************************/
-/** Get an Sdo entry from an Sdo via its subindex.
+/** Get an SDO entry from an SDO via its subindex.
*
* const version.
*
- * \retval >0 Pointer to the requested Sdo entry.
- * \retval NULL Sdo entry not found.
+ * \retval >0 Pointer to the requested SDO entry.
+ * \retval NULL SDO entry not found.
*/
const ec_sdo_entry_t *ec_sdo_get_entry_const(
- const ec_sdo_t *sdo, /**< Sdo. */
+ const ec_sdo_t *sdo, /**< SDO. */
uint8_t subindex /**< Entry subindex. */
)
{
--- a/master/sdo.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sdo.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- EtherCAT CANopen Sdo structure.
+ EtherCAT CANopen SDO structure.
*/
/*****************************************************************************/
@@ -48,14 +41,14 @@
/*****************************************************************************/
-/** CANopen Sdo.
+/** CANopen SDO.
*/
struct ec_sdo {
struct list_head list; /**< List item. */
ec_slave_t *slave; /**< Parent slave. */
- uint16_t index; /**< Sdo index. */
+ uint16_t index; /**< SDO index. */
uint8_t object_code; /**< Object code. */
- char *name; /**< Sdo name. */
+ char *name; /**< SDO name. */
uint8_t max_subindex; /**< Maximum subindex. */
struct list_head entries; /**< List of entries. */
};
--- a/master/sdo_entry.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sdo_entry.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- CANopen-over-EtherCAT Sdo entry functions.
+ CANopen over EtherCAT SDO entry functions.
*/
/*****************************************************************************/
@@ -47,8 +40,8 @@
/** Constructor.
*/
void ec_sdo_entry_init(
- ec_sdo_entry_t *entry, /**< Sdo entry. */
- ec_sdo_t *sdo, /**< Parent Sdo. */
+ ec_sdo_entry_t *entry, /**< SDO entry. */
+ ec_sdo_t *sdo, /**< Parent SDO. */
uint8_t subindex /**< Subindex. */
)
{
@@ -64,7 +57,7 @@
/** Destructor.
*/
void ec_sdo_entry_clear(
- ec_sdo_entry_t *entry /**< Sdo entry. */
+ ec_sdo_entry_t *entry /**< SDO entry. */
)
{
--- a/master/sdo_entry.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sdo_entry.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- EtherCAT CANopen Sdo entry structure.
+ EtherCAT CANopen SDO entry structure.
*/
/*****************************************************************************/
@@ -53,11 +46,11 @@
/*****************************************************************************/
-/** CANopen Sdo entry.
+/** CANopen SDO entry.
*/
typedef struct {
struct list_head list; /**< List item. */
- ec_sdo_t *sdo; /**< Parent Sdo. */
+ ec_sdo_t *sdo; /**< Parent SDO. */
uint8_t subindex; /**< Subindex. */
uint16_t data_type; /**< Data type. */
uint16_t bit_length; /**< Data size in bit. */
--- a/master/sdo_request.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sdo_request.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,37 +2,30 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/** \file
- * Canopen-over-EtherCAT Sdo request functions.
+ * CANopen over EtherCAT SDO request functions.
*/
/*****************************************************************************/
@@ -44,7 +37,7 @@
/*****************************************************************************/
-/** Default timeout in ms to wait for Sdo transfer responses.
+/** Default timeout in ms to wait for SDO transfer responses.
*/
#define EC_SDO_REQUEST_RESPONSE_TIMEOUT 3000
@@ -66,10 +59,10 @@
/*****************************************************************************/
-/** Sdo request constructor.
+/** SDO request constructor.
*/
void ec_sdo_request_init(
- ec_sdo_request_t *req /**< Sdo request. */
+ ec_sdo_request_t *req /**< SDO request. */
)
{
req->data = NULL;
@@ -84,10 +77,10 @@
/*****************************************************************************/
-/** Sdo request destructor.
+/** SDO request destructor.
*/
void ec_sdo_request_clear(
- ec_sdo_request_t *req /**< Sdo request. */
+ ec_sdo_request_t *req /**< SDO request. */
)
{
ec_sdo_request_clear_data(req);
@@ -95,10 +88,26 @@
/*****************************************************************************/
-/** Sdo request destructor.
+/** Copy another SDO request.
+ *
+ * \attention Only the index, subindex and data are copied.
+ */
+int ec_sdo_request_copy(
+ ec_sdo_request_t *req, /**< SDO request. */
+ const ec_sdo_request_t *other /**< Other SDO request to copy from. */
+ )
+{
+ req->index = other->index;
+ req->subindex = other->subindex;
+ return ec_sdo_request_copy_data(req, other->data, other->data_size);
+}
+
+/*****************************************************************************/
+
+/** Frees the data memory of an SDO request.
*/
void ec_sdo_request_clear_data(
- ec_sdo_request_t *req /**< Sdo request. */
+ ec_sdo_request_t *req /**< SDO request. */
)
{
if (req->data) {
@@ -112,12 +121,12 @@
/*****************************************************************************/
-/** Set the Sdo address.
+/** Set the SDO address.
*/
void ec_sdo_request_address(
- ec_sdo_request_t *req, /**< Sdo request. */
- uint16_t index, /**< Sdo index. */
- uint8_t subindex /**< Sdo subindex. */
+ ec_sdo_request_t *req, /**< SDO request. */
+ uint16_t index, /**< SDO index. */
+ uint8_t subindex /**< SDO subindex. */
)
{
req->index = index;
@@ -131,7 +140,7 @@
* If the \a mem_size is already bigger than \a size, nothing is done.
*/
int ec_sdo_request_alloc(
- ec_sdo_request_t *req, /**< Sdo request. */
+ ec_sdo_request_t *req, /**< SDO request. */
size_t size /**< Data size to allocate. */
)
{
@@ -141,7 +150,7 @@
ec_sdo_request_clear_data(req);
if (!(req->data = (uint8_t *) kmalloc(size, GFP_KERNEL))) {
- EC_ERR("Failed to allocate %u bytes of Sdo memory.\n", size);
+ EC_ERR("Failed to allocate %u bytes of SDO memory.\n", size);
return -1;
}
@@ -152,12 +161,12 @@
/*****************************************************************************/
-/** Copies Sdo data from an external source.
+/** Copies SDO data from an external source.
*
* If the \a mem_size is too small, new memory is allocated.
*/
int ec_sdo_request_copy_data(
- ec_sdo_request_t *req, /**< Sdo request. */
+ ec_sdo_request_t *req, /**< SDO request. */
const uint8_t *source, /**< Source data. */
size_t size /**< Number of bytes in \a source. */
)
@@ -176,7 +185,7 @@
*
* \return non-zero if the timeout was exceeded, else zero.
*/
-int ec_sdo_request_timed_out(const ec_sdo_request_t *req /**< Sdo request. */)
+int ec_sdo_request_timed_out(const ec_sdo_request_t *req /**< SDO request. */)
{
return req->issue_timeout
&& jiffies - req->jiffies_start > HZ * req->issue_timeout / 1000;
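The timeout test above converts the request's millisecond budget into jiffies before comparing it against the elapsed time. A minimal sketch of the same arithmetic, illustrative only; the helper name is an assumption, not part of the master:

    #include <linux/jiffies.h>
    #include <linux/types.h>

    /* Same conversion as ec_sdo_request_timed_out(): a budget of timeout_ms
     * milliseconds corresponds to HZ * timeout_ms / 1000 jiffies, e.g.
     * 3000 ms -> 750 jiffies at HZ == 250. A budget of zero disables the
     * check. */
    static int budget_exceeded(unsigned long start_jiffies, u32 timeout_ms)
    {
        return timeout_ms &&
            jiffies - start_jiffies > HZ * timeout_ms / 1000;
    }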
--- a/master/sdo_request.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sdo_request.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,38 +2,31 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
/**
\file
- EtherCAT CANopen Sdo request structure.
+ EtherCAT CANopen SDO request structure.
*/
/*****************************************************************************/
@@ -49,15 +42,15 @@
/*****************************************************************************/
-/** CANopen Sdo request.
+/** CANopen SDO request.
*/
struct ec_sdo_request {
struct list_head list; /**< List item. */
- uint16_t index; /**< Sdo index. */
- uint8_t subindex; /**< Sdo subindex. */
- uint8_t *data; /**< Pointer to Sdo data. */
- size_t mem_size; /**< Size of Sdo data memory. */
- size_t data_size; /**< Size of Sdo data. */
+ uint16_t index; /**< SDO index. */
+ uint8_t subindex; /**< SDO subindex. */
+ uint8_t *data; /**< Pointer to SDO data. */
+ size_t mem_size; /**< Size of SDO data memory. */
+ size_t data_size; /**< Size of SDO data. */
uint32_t issue_timeout; /**< Maximum time in ms, the processing of the
request may take. */
uint32_t response_timeout; /**< Maximum time in ms, the transfer is
@@ -65,11 +58,11 @@
ec_direction_t dir; /**< Direction. EC_DIR_OUTPUT means downloading to
the slave, EC_DIR_INPUT means uploading from the
slave. */
- ec_request_state_t state; /**< Sdo request state. */
+ ec_request_state_t state; /**< SDO request state. */
unsigned long jiffies_start; /**< Jiffies, when the request was issued. */
unsigned long jiffies_sent; /**< Jiffies, when the upload/download
request was sent. */
- uint32_t abort_code; /**< Sdo request abort code. Zero on success. */
+ uint32_t abort_code; /**< SDO request abort code. Zero on success. */
};
/*****************************************************************************/
@@ -77,6 +70,7 @@
void ec_sdo_request_init(ec_sdo_request_t *);
void ec_sdo_request_clear(ec_sdo_request_t *);
+int ec_sdo_request_copy(ec_sdo_request_t *, const ec_sdo_request_t *);
void ec_sdo_request_address(ec_sdo_request_t *, uint16_t, uint8_t);
int ec_sdo_request_alloc(ec_sdo_request_t *, size_t);
int ec_sdo_request_copy_data(ec_sdo_request_t *, const uint8_t *, size_t);
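Taken together, the declarations above describe the request life cycle: initialise, set the SDO address, fill the data, optionally poll for a timeout, then clear. A minimal kernel-side sketch, assuming the master's internal headers are available; the function name and the SDO address 0x1C12:00 are examples only:

    #include "sdo_request.h"

    static int example_build_request(void)
    {
        ec_sdo_request_t req;
        uint8_t value = 0x00;

        ec_sdo_request_init(&req);
        ec_sdo_request_address(&req, 0x1C12, 0x00); /* index, subindex */

        /* Allocates memory if needed and copies the payload. */
        if (ec_sdo_request_copy_data(&req, &value, sizeof(value))) {
            ec_sdo_request_clear(&req);
            return -1;
        }

        /* ... queue the request for processing by the master FSM ... */

        ec_sdo_request_clear(&req);
        return 0;
    }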
--- a/master/slave.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/slave.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -147,7 +140,7 @@
if (slave->config)
ec_slave_config_detach(slave->config);
- // free all Sdos
+ // free all SDOs
list_for_each_entry_safe(sdo, next_sdo, &slave->sdo_dictionary, list) {
list_del(&sdo->list);
ec_sdo_clear(sdo);
@@ -164,7 +157,7 @@
// free all sync managers
ec_slave_clear_sync_managers(slave);
- // free all SII Pdos
+ // free all SII PDOs
list_for_each_entry_safe(pdo, next_pdo, &slave->sii.pdos, list) {
list_del(&pdo->list);
ec_pdo_clear(pdo);
@@ -403,7 +396,7 @@
/*****************************************************************************/
/**
- Fetches data from a [RT]XPdo category.
+ Fetches data from a [RT]xPDO category.
\return 0 in case of success, else < 0
*/
@@ -411,7 +404,7 @@
ec_slave_t *slave, /**< EtherCAT slave */
const uint8_t *data, /**< category data */
size_t data_size, /**< number of bytes */
- ec_direction_t dir /**< Pdo direction. */
+ ec_direction_t dir /**< PDO direction. */
)
{
ec_pdo_t *pdo;
@@ -420,7 +413,7 @@
while (data_size >= 8) {
if (!(pdo = kmalloc(sizeof(ec_pdo_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate Pdo memory.\n");
+ EC_ERR("Failed to allocate PDO memory.\n");
return -1;
}
@@ -441,7 +434,7 @@
for (i = 0; i < entry_count; i++) {
if (!(entry = kmalloc(sizeof(ec_pdo_entry_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate Pdo entry memory.\n");
+ EC_ERR("Failed to allocate PDO entry memory.\n");
return -1;
}
@@ -461,12 +454,12 @@
data += 8;
}
- // if sync manager index is positive, the Pdo is mapped by default
+ // if the sync manager index is non-negative, the PDO is mapped by default
if (pdo->sync_index >= 0) {
ec_sync_t *sync;
if (!(sync = ec_slave_get_sync(slave, pdo->sync_index))) {
- EC_ERR("Invalid SM index %i for Pdo 0x%04X in slave %u.",
+ EC_ERR("Invalid SM index %i for PDO 0x%04X in slave %u.",
pdo->sync_index, pdo->index, slave->ring_position);
return -1;
}
@@ -525,11 +518,11 @@
/*****************************************************************************/
/**
- Counts the total number of Sdos and entries in the dictionary.
+ Counts the total number of SDOs and entries in the dictionary.
*/
void ec_slave_sdo_dict_info(const ec_slave_t *slave, /**< EtherCAT slave */
- unsigned int *sdo_count, /**< number of Sdos */
+ unsigned int *sdo_count, /**< number of SDOs */
unsigned int *entry_count /**< total number of
entries */
)
@@ -552,13 +545,13 @@
/*****************************************************************************/
/**
- * Get an Sdo from the dictionary.
- * \returns The desired Sdo, or NULL.
+ * Get an SDO from the dictionary.
+ * \returns The desired SDO, or NULL.
*/
ec_sdo_t *ec_slave_get_sdo(
ec_slave_t *slave, /**< EtherCAT slave */
- uint16_t index /**< Sdo index */
+ uint16_t index /**< SDO index */
)
{
ec_sdo_t *sdo;
@@ -575,16 +568,16 @@
/*****************************************************************************/
/**
- * Get an Sdo from the dictionary.
+ * Get an SDO from the dictionary.
*
* const version.
*
- * \returns The desired Sdo, or NULL.
+ * \returns The desired SDO, or NULL.
*/
const ec_sdo_t *ec_slave_get_sdo_const(
const ec_slave_t *slave, /**< EtherCAT slave */
- uint16_t index /**< Sdo index */
+ uint16_t index /**< SDO index */
)
{
const ec_sdo_t *sdo;
@@ -600,13 +593,13 @@
/*****************************************************************************/
-/** Get an Sdo from the dictionary, given its position in the list.
- * \returns The desired Sdo, or NULL.
+/** Get an SDO from the dictionary, given its position in the list.
+ * \returns The desired SDO, or NULL.
*/
const ec_sdo_t *ec_slave_get_sdo_by_pos_const(
const ec_slave_t *slave, /**< EtherCAT slave. */
- uint16_t sdo_position /**< Sdo list position. */
+ uint16_t sdo_position /**< SDO list position. */
)
{
const ec_sdo_t *sdo;
@@ -622,8 +615,8 @@
/*****************************************************************************/
-/** Get the number of Sdos in the dictionary.
- * \returns Sdo count.
+/** Get the number of SDOs in the dictionary.
+ * \returns SDO count.
*/
uint16_t ec_slave_sdo_count(
@@ -642,12 +635,12 @@
/*****************************************************************************/
-/** Finds a mapped Pdo.
- * \returns The desired Pdo object, or NULL.
+/** Finds a mapped PDO.
+ * \returns The desired PDO object, or NULL.
*/
const ec_pdo_t *ec_slave_find_pdo(
const ec_slave_t *slave, /**< Slave. */
- uint16_t index /**< Pdo index to find. */
+ uint16_t index /**< PDO index to find. */
)
{
unsigned int i;
@@ -668,7 +661,7 @@
/*****************************************************************************/
-/** Find name for a Pdo and its entries.
+/** Find name for a PDO and its entries.
*/
void ec_slave_find_names_for_pdo(
ec_slave_t *slave,
@@ -699,7 +692,7 @@
/*****************************************************************************/
-/** Attach Pdo names.
+/** Attach PDO names.
*/
void ec_slave_attach_pdo_names(
ec_slave_t *slave
--- a/master/slave.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/slave.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -137,7 +130,7 @@
// Slave information interface
ec_sii_t sii; /**< Extracted SII data. */
- struct list_head sdo_dictionary; /**< Sdo dictionary list */
+ struct list_head sdo_dictionary; /**< SDO dictionary list */
uint8_t sdo_dictionary_fetched; /**< Dictionary has been fetched. */
unsigned long jiffies_preop; /**< Time, the slave went to PREOP. */
};
--- a/master/slave_config.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/slave_config.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
+ *
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -98,14 +91,14 @@
for (i = 0; i < EC_MAX_SYNC_MANAGERS; i++)
ec_sync_config_clear(&sc->sync_configs[i]);
- // free all Sdo configurations
+ // free all SDO configurations
list_for_each_entry_safe(req, next_req, &sc->sdo_configs, list) {
list_del(&req->list);
ec_sdo_request_clear(req);
kfree(req);
}
- // free all Sdo requests
+ // free all SDO requests
list_for_each_entry_safe(req, next_req, &sc->sdo_requests, list) {
list_del(&req->list);
ec_sdo_request_clear(req);
@@ -131,7 +124,7 @@
ec_slave_config_t *sc, /**< Slave configuration. */
ec_domain_t *domain, /**< Domain. */
uint8_t sync_index, /**< Sync manager index. */
- ec_direction_t dir /**< Pdo direction. */
+ ec_direction_t dir /**< PDO direction. */
)
{
unsigned int i;
@@ -233,7 +226,7 @@
/*****************************************************************************/
-/** Loads the default Pdo assignment from the slave object.
+/** Loads the default PDO assignment from the slave object.
*/
void ec_slave_config_load_default_sync_config(ec_slave_config_t *sc)
{
@@ -258,7 +251,7 @@
/*****************************************************************************/
-/** Loads the default mapping for a Pdo from the slave object.
+/** Loads the default mapping for a PDO from the slave object.
*/
void ec_slave_config_load_default_mapping(
const ec_slave_config_t *sc,
@@ -273,10 +266,10 @@
return;
if (sc->master->debug_level)
- EC_DBG("Loading default mapping for Pdo 0x%04X in config %u:%u.\n",
+ EC_DBG("Loading default mapping for PDO 0x%04X in config %u:%u.\n",
pdo->index, sc->alias, sc->position);
- // find Pdo in any sync manager (it could be reassigned later)
+ // find PDO in any sync manager (it could be reassigned later)
for (i = 0; i < sc->slave->sii.sync_count; i++) {
sync = &sc->slave->sii.syncs[i];
@@ -286,13 +279,13 @@
if (default_pdo->name) {
if (sc->master->debug_level)
- EC_DBG("Found Pdo name \"%s\".\n", default_pdo->name);
-
- // take Pdo name from assigned one
+ EC_DBG("Found PDO name \"%s\".\n", default_pdo->name);
+
+ // take PDO name from assigned one
ec_pdo_set_name(pdo, default_pdo->name);
}
- // copy entries (= default Pdo mapping)
+ // copy entries (= default PDO mapping)
if (ec_pdo_copy_entries(pdo, default_pdo))
return;
@@ -314,9 +307,9 @@
/*****************************************************************************/
-/** Get the number of Sdo configurations.
- *
- * \return Number of Sdo configurations.
+/** Get the number of SDO configurations.
+ *
+ * \return Number of SDO configurations.
*/
unsigned int ec_slave_config_sdo_count(
const ec_slave_config_t *sc /**< Slave configuration. */
@@ -334,7 +327,7 @@
/*****************************************************************************/
-/** Finds an Sdo configuration via its position in the list.
+/** Finds an SDO configuration via its position in the list.
*
* Const version.
*/
@@ -460,7 +453,7 @@
entry_bit_length) ? 0 : -1;
up(&sc->master->master_sem);
} else {
- EC_ERR("Pdo 0x%04X is not assigned in config %u:%u.\n",
+ EC_ERR("PDO 0x%04X is not assigned in config %u:%u.\n",
pdo_index, sc->alias, sc->position);
}
@@ -489,7 +482,7 @@
ec_pdo_clear_entries(pdo);
up(&sc->master->master_sem);
} else {
- EC_WARN("Pdo 0x%04X is not assigned in config %u:%u.\n",
+ EC_WARN("PDO 0x%04X is not assigned in config %u:%u.\n",
pdo_index, sc->alias, sc->position);
}
}
@@ -591,7 +584,7 @@
if (bit_position) {
*bit_position = bit_pos;
} else if (bit_pos) {
- EC_ERR("Pdo entry 0x%04X:%02X does not byte-align "
+ EC_ERR("PDO entry 0x%04X:%02X does not byte-align "
"in config %u:%u.\n", index, subindex,
sc->alias, sc->position);
return -3;
@@ -608,7 +601,7 @@
}
}
- EC_ERR("Pdo entry 0x%04X:%02X is not mapped in slave config %u:%u.\n",
+ EC_ERR("PDO entry 0x%04X:%02X is not mapped in slave config %u:%u.\n",
index, subindex, sc->alias, sc->position);
return -1;
}
@@ -634,7 +627,7 @@
if (!(req = (ec_sdo_request_t *)
kmalloc(sizeof(ec_sdo_request_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate memory for Sdo configuration!\n");
+ EC_ERR("Failed to allocate memory for SDO configuration!\n");
return -1;
}
@@ -716,7 +709,7 @@
if (!(req = (ec_sdo_request_t *)
kmalloc(sizeof(ec_sdo_request_t), GFP_KERNEL))) {
- EC_ERR("Failed to allocate Sdo request memory!\n");
+ EC_ERR("Failed to allocate SDO request memory!\n");
return NULL;
}
@@ -748,7 +741,8 @@
state->online = sc->slave ? 1 : 0;
if (state->online) {
state->operational =
- sc->slave->current_state == EC_SLAVE_STATE_OP;
+ sc->slave->current_state == EC_SLAVE_STATE_OP
+ && !sc->slave->force_config;
state->al_state = sc->slave->current_state;
} else {
state->operational = 0;
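The added condition means a slave configuration only reports operational once the slave is actually in OP and no forced reconfiguration is pending. A minimal sketch of how an application could evaluate that flag through the public interface, assuming ecrt_slave_config_state() from ecrt.h; the helper name is an example:

    #include "ecrt.h"

    static int config_ready(ec_slave_config_t *sc)
    {
        ec_slave_config_state_t s;

        ecrt_slave_config_state(sc, &s);

        /* 'operational' now stays zero while force_config is set, even if
         * the application-layer state is already OP. */
        return s.online && s.operational;
    }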
--- a/master/slave_config.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/slave_config.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -72,8 +65,8 @@
ec_fmmu_config_t fmmu_configs[EC_MAX_FMMUS]; /**< FMMU configurations. */
uint8_t used_fmmus; /**< Number of FMMUs used. */
- struct list_head sdo_configs; /**< List of Sdo configurations. */
- struct list_head sdo_requests; /**< List of Sdo requests. */
+ struct list_head sdo_configs; /**< List of SDO configurations. */
+ struct list_head sdo_requests; /**< List of SDO requests. */
};
--- a/master/sync.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sync.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -128,13 +121,13 @@
/*****************************************************************************/
-/** Adds a Pdo to the list of known mapped Pdos.
+/** Adds a PDO to the list of known mapped PDOs.
*
* \return 0 on success, else < 0
*/
int ec_sync_add_pdo(
ec_sync_t *sync, /**< EtherCAT sync manager. */
- const ec_pdo_t *pdo /**< Pdo to map. */
+ const ec_pdo_t *pdo /**< PDO to map. */
)
{
return ec_pdo_list_add_pdo_copy(&sync->pdos, pdo);
--- a/master/sync.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sync.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -55,7 +48,7 @@
uint16_t default_length; /**< Data length in bytes. */
uint8_t control_register; /**< Control register value. */
uint8_t enable; /**< Enable bit. */
- ec_pdo_list_t pdos; /**< Current Pdo assignment. */
+ ec_pdo_list_t pdos; /**< Current PDO assignment. */
} ec_sync_t;
/*****************************************************************************/
--- a/master/sync_config.c Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sync_config.c Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
--- a/master/sync_config.h Mon Oct 19 14:33:59 2009 +0200
+++ b/master/sync_config.h Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
*
* $Id$
*
- * Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+ * Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
*
* This file is part of the IgH EtherCAT Master.
*
- * The IgH EtherCAT Master is free software; you can redistribute it
- * and/or modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
+ * The IgH EtherCAT Master is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2, as
+ * published by the Free Software Foundation.
*
- * The IgH EtherCAT Master is distributed in the hope that it will be
- * useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The IgH EtherCAT Master is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with the IgH EtherCAT Master; if not, write to the Free Software
+ * You should have received a copy of the GNU General Public License along
+ * with the IgH EtherCAT Master; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
- * The right to use EtherCAT Technology is granted and comes free of
- * charge under condition of compatibility of product made by
- * Licensee. People intending to distribute/sell products based on the
- * code, have to sign an agreement to guarantee that products using
- * software based on IgH EtherCAT master stay compatible with the actual
- * EtherCAT specification (which are released themselves as an open
- * standard) as the (only) precondition to have the right to use EtherCAT
- * Technology, IP and trade marks.
+ * Using the EtherCAT technology and brand is permitted in compliance with
+ * the industrial property and similar rights of Beckhoff Automation GmbH.
*
*****************************************************************************/
@@ -51,7 +44,7 @@
*/
typedef struct {
ec_direction_t dir; /**< Sync manager direction. */
- ec_pdo_list_t pdos; /**< Current Pdo assignment. */
+ ec_pdo_list_t pdos; /**< Current PDO assignment. */
} ec_sync_config_t;
/*****************************************************************************/
--- a/script/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/script/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -6,32 +6,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/script/ifup-eoe.sh Mon Oct 19 14:33:59 2009 +0200
+++ b/script/ifup-eoe.sh Wed Jan 13 00:04:47 2010 +0100
@@ -4,32 +4,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/script/init.d/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/script/init.d/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/script/init.d/ethercat.in Mon Oct 19 14:33:59 2009 +0200
+++ b/script/init.d/ethercat.in Wed Jan 13 00:04:47 2010 +0100
@@ -6,32 +6,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
-#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
+#
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
@@ -71,8 +64,7 @@
#------------------------------------------------------------------------------
-function exit_success()
-{
+exit_success() {
if [ -r /etc/rc.status ]; then
rc_reset
rc_status -v
@@ -85,8 +77,7 @@
#------------------------------------------------------------------------------
-function exit_fail()
-{
+exit_fail() {
if [ -r /etc/rc.status ]; then
rc_failed
rc_status -v
@@ -99,8 +90,7 @@
#------------------------------------------------------------------------------
-function print_running()
-{
+print_running() {
if [ -r /etc/rc.status ]; then
rc_reset
rc_status -v
@@ -111,8 +101,7 @@
#------------------------------------------------------------------------------
-function print_dead()
-{
+print_dead() {
if [ -r /etc/rc.status ]; then
rc_failed
rc_status -v
@@ -123,8 +112,7 @@
#------------------------------------------------------------------------------
-function parse_mac_address()
-{
+parse_mac_address() {
if [ -z "${1}" ]; then
MAC=""
elif echo ${1} | grep -qE '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$'; then
--- a/script/sysconfig/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/script/sysconfig/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
#------------------------------------------------------------------------------
--- a/script/sysconfig/ethercat Mon Oct 19 14:33:59 2009 +0200
+++ b/script/sysconfig/ethercat Wed Jan 13 00:04:47 2010 +0100
@@ -32,11 +32,11 @@
# and replace them with the EtherCAT-capable ones, respectively. If a certain
# (EtherCAT-capable) driver is not found, a warning will appear.
#
-# Possible values are "8139too", "e100", "e1000", and "forcedeth".
+# Possible values: 8139too, e1000.
# Separate multiple drivers with spaces.
#
-# Note: The e100, e1000 and forcedeth drivers are not built by default. Enable
-# them with the --enable-<driver> configure switches.
+# Note: The e1000 driver is not built by default. Enable it with
+# the --enable-e1000 configure switch.
#
DEVICE_MODULES=""
--- a/tool/Command.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/Command.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -119,7 +119,7 @@
{
stringstream err;
- err << "The slave selection matches " << size << "slaves. '"
+ err << "The slave selection matches " << size << " slaves. '"
<< name << "' requires a single slave.";
throwInvalidUsageException(err);
--- a/tool/CommandConfig.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandConfig.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -47,8 +47,8 @@
<< "| hexadecimal)." << endl
<< "\\- Alias address and relative position (both decimal)." << endl
<< endl
- << "With the --verbose option given, the configured Pdos and" << endl
- << "Sdos are output in addition." << endl
+ << "With the --verbose option given, the configured PDOs and" << endl
+ << "SDOs are output in addition." << endl
<< endl
<< "Configuration selection:" << endl
<< " Slave configurations can be selected with" << endl
@@ -139,13 +139,13 @@
for (k = 0; k < configIter->syncs[j].pdo_count; k++) {
m.getConfigPdo(&pdo, configIter->config_index, j, k);
- cout << " Pdo 0x" << hex << setw(4) << pdo.index << endl;
+ cout << " PDO 0x" << hex << setw(4) << pdo.index << endl;
for (l = 0; l < pdo.entry_count; l++) {
m.getConfigPdoEntry(&entry,
configIter->config_index, j, k, l);
- cout << " Pdo entry 0x" << hex << setfill('0')
+ cout << " PDO entry 0x" << hex << setfill('0')
<< setw(4) << entry.index << ":"
<< setw(2) << (unsigned int) entry.subindex
<< ", " << dec << setfill(' ')
@@ -156,7 +156,7 @@
}
}
- cout << "Sdo configuration:" << endl;
+ cout << "SDO configuration:" << endl;
if (configIter->sdo_count) {
for (j = 0; j < configIter->sdo_count; j++) {
m.getConfigSdo(&sdo, configIter->config_index, j);
--- a/tool/CommandDomains.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandDomains.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -36,7 +36,7 @@
<< "(LRD/LWR/LRW) is displayed followed by the domain's" << endl
<< "process data size in byte. The last values are the current" << endl
<< "datagram working counter sum and the expected working" << endl
- << "counter sum. If the values are equal, all Pdos were" << endl
+ << "counter sum. If the values are equal, all PDOs were" << endl
<< "exchanged during the last cycle." << endl
<< endl
<< "If the --verbose option is given, the participating slave" << endl
--- a/tool/CommandDownload.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandDownload.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -14,7 +14,7 @@
/*****************************************************************************/
CommandDownload::CommandDownload():
- SdoCommand("download", "Write an Sdo entry to a slave.")
+ SdoCommand("download", "Write an SDO entry to a slave.")
{
}
@@ -30,28 +30,29 @@
<< endl
<< "This command requires a single slave to be selected." << endl
<< endl
- << "The data type of the Sdo entry is taken from the Sdo" << endl
+ << "The data type of the SDO entry is taken from the SDO" << endl
<< "dictionary by default. It can be overridden with the" << endl
- << "--type option. If the slave does not support the Sdo" << endl
- << "information service or the Sdo is not in the dictionary," << endl
+ << "--type option. If the slave does not support the SDO" << endl
+ << "information service or the SDO is not in the dictionary," << endl
<< "the --type option is mandatory." << endl
<< endl
- << "These are the valid Sdo entry data types:" << endl
- << " int8, int16, int32, uint8, uint16, uint32, string." << endl
+ << "These are the valid SDO entry data types:" << endl
+ << " int8, int16, int32, uint8, uint16, uint32, string," << endl
+ << " octet_string." << endl
<< endl
<< "Arguments:" << endl
- << " INDEX is the Sdo index and must be an unsigned" << endl
+ << " INDEX is the SDO index and must be an unsigned" << endl
<< " 16 bit number." << endl
- << " SUBINDEX is the Sdo entry subindex and must be an" << endl
+ << " SUBINDEX is the SDO entry subindex and must be an" << endl
<< " unsigned 8 bit number." << endl
<< " VALUE is the value to download and must correspond" << endl
- << " to the Sdo entry datatype (see above)." << endl
+ << " to the SDO entry datatype (see above)." << endl
<< endl
<< "Command-specific options:" << endl
<< " --alias -a <alias>" << endl
<< " --position -p <pos> Slave selection. See the help of" << endl
<< " the 'slaves' command." << endl
- << " --type -t <type> Sdo entry data type (see above)." << endl
+ << " --type -t <type> SDO entry data type (see above)." << endl
<< endl
<< numericInfo();
@@ -78,7 +79,7 @@
>> resetiosflags(ios::basefield) // guess base from prefix
>> data.sdo_index;
if (strIndex.fail()) {
- err << "Invalid Sdo index '" << args[0] << "'!";
+ err << "Invalid SDO index '" << args[0] << "'!";
throwInvalidUsageException(err);
}
@@ -87,7 +88,7 @@
>> resetiosflags(ios::basefield) // guess base from prefix
>> number;
if (strSubIndex.fail() || number > 0xff) {
- err << "Invalid Sdo subindex '" << args[1] << "'!";
+ err << "Invalid SDO subindex '" << args[1] << "'!";
throwInvalidUsageException(err);
}
data.sdo_entry_subindex = number;
@@ -111,12 +112,12 @@
m.getSdoEntry(&entry, data.slave_position,
data.sdo_index, data.sdo_entry_subindex);
} catch (MasterDeviceException &e) {
- err << "Failed to determine Sdo entry data type. "
+ err << "Failed to determine SDO entry data type. "
<< "Please specify --type.";
throwCommandException(err);
}
if (!(dataType = findDataType(entry.data_type))) {
- err << "Pdo entry has unknown data type 0x"
+ err << "PDO entry has unknown data type 0x"
<< hex << setfill('0') << setw(4) << entry.data_type << "!"
<< " Please specify --type.";
throwCommandException(err);
@@ -191,6 +192,14 @@
data.data_size = strValue.str().size();
strValue >> (char *) data.data;
break;
+ case 0x000a: // octet_string
+ if (strValue.str().size() >= data.data_size) {
+ err << "String too large";
+ throwCommandException(err);
+ }
+ data.data_size = strValue.str().size();
+ strValue >> (char *) data.data;
+ break;
default:
delete [] data.data;
@@ -208,7 +217,7 @@
m.sdoDownload(&data);
} catch (MasterDeviceSdoAbortException &e) {
delete [] data.data;
- err << "Sdo transfer aborted with code 0x"
+ err << "SDO transfer aborted with code 0x"
<< setfill('0') << hex << setw(8) << e.abortCode
<< ": " << abortText(e.abortCode);
throwCommandException(err);
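
The octet_string download branch added above is byte-for-byte identical to the existing string branch, so both could share one helper. A minimal sketch under that assumption; buffer and size names are illustrative, and memcpy is used instead of stream extraction so embedded whitespace would survive, which is a deliberate difference from what the tool currently does:

    #include <stdint.h>
    #include <string.h>
    #include <string>

    // Sketch only: shared size check and copy for string/octet_string values.
    static bool copyVisibleString(const std::string &value,
            uint8_t *buffer, size_t &dataSize)
    {
        if (value.size() >= dataSize) // must fit the preallocated buffer
            return false;             // caller reports "String too large"
        memcpy(buffer, value.data(), value.size());
        dataSize = value.size();      // shrink to the actual payload length
        return true;
    }
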
--- a/tool/CommandPdos.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandPdos.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -13,7 +13,7 @@
/*****************************************************************************/
CommandPdos::CommandPdos():
- Command("pdos", "List Sync managers, Pdo assignment and mapping.")
+ Command("pdos", "List Sync managers, PDO assignment and mapping.")
{
}
@@ -37,19 +37,19 @@
<< " SM3: PhysAddr 0x1100, DefaultSize 0, ControlRegister 0x20, "
<< "Enable 1" << endl
<< endl
- << "2) Assigned Pdos - Pdo direction, hexadecimal index and" << endl
- << " the Pdo name, if avaliable. Note that a 'Tx' and 'Rx'" << endl
+ << "2) Assigned PDOs - PDO direction, hexadecimal index and" << endl
+ << " the PDO name, if avaliable. Note that a 'Tx' and 'Rx'" << endl
<< " are seen from the slave's point of view. Example:" << endl
<< endl
- << " TxPdo 0x1a00 \"Channel1\"" << endl
+ << " TxPDO 0x1a00 \"Channel1\"" << endl
<< endl
- << "3) Mapped Pdo entries - Pdo entry index and subindex (both" << endl
+ << "3) Mapped PDO entries - PDO entry index and subindex (both" << endl
<< " hexadecimal), the length in bit and the description, if" << endl
<< " available. Example:" << endl
<< endl
- << " Pdo entry 0x3101:01, 8 bit, \"Status\"" << endl
+ << " PDO entry 0x3101:01, 8 bit, \"Status\"" << endl
<< endl
- << "Note, that the displayed Pdo assignment and Pdo mapping" << endl
+ << "Note, that the displayed PDO assignment and PDO mapping" << endl
<< "information can either originate from the SII or from the" << endl
<< "CoE communication area." << endl
<< endl
@@ -115,7 +115,7 @@
m.getPdo(&pdo, slave.position, i, j);
cout << " " << (sync.control_register & 0x04 ? "R" : "T")
- << "xPdo 0x"
+ << "xPDO 0x"
<< hex << setfill('0')
<< setw(4) << pdo.index
<< " \"" << pdo.name << "\"" << endl;
@@ -126,7 +126,7 @@
for (k = 0; k < pdo.entry_count; k++) {
m.getPdoEntry(&entry, slave.position, i, j, k);
- cout << " Pdo entry 0x"
+ cout << " PDO entry 0x"
<< hex << setfill('0')
<< setw(4) << entry.index
<< ":" << setw(2) << (unsigned int) entry.subindex
--- a/tool/CommandSdos.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandSdos.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -13,7 +13,7 @@
/*****************************************************************************/
CommandSdos::CommandSdos():
- SdoCommand("sdos", "List Sdo dictionaries.")
+ SdoCommand("sdos", "List SDO dictionaries.")
{
}
@@ -27,27 +27,27 @@
<< endl
<< getBriefDescription() << endl
<< endl
- << "Sdo dictionary information is displayed in two layers," << endl
+ << "SDO dictionary information is displayed in two layers," << endl
<< "which are indented accordingly:" << endl
<< endl
- << "1) Sdos - Hexadecimal Sdo index and the name. Example:" << endl
+ << "1) SDOs - Hexadecimal SDO index and the name. Example:" << endl
<< endl
- << " Sdo 0x1018, \"Identity object\"" << endl
+ << " SDO 0x1018, \"Identity object\"" << endl
<< endl
- << "2) Sdo entries - Sdo index and Sdo entry subindex (both" << endl
+ << "2) SDO entries - SDO index and SDO entry subindex (both" << endl
<< " hexadecimal) followed by the data type, the length in" << endl
<< " bit, and the description. Example:" << endl
<< endl
<< " 0x1018:01, uint32, 32 bit, \"Vendor id\"" << endl
<< endl
- << "If the --quiet option is given, only the Sdos are output."
+ << "If the --quiet option is given, only the SDOs are output."
<< endl << endl
<< "Command-specific options:" << endl
<< " --alias -a <alias>" << endl
<< " --position -p <pos> Slave selection. See the help of" << endl
<< " the 'slaves' command." << endl
- << " --quiet -q Only output Sdos (without the" << endl
- << " Sdo entries)." << endl
+ << " --quiet -q Only output SDOs (without the" << endl
+ << " SDO entries)." << endl
<< endl
<< numericInfo();
@@ -90,7 +90,7 @@
for (i = 0; i < slave.sdo_count; i++) {
m.getSdo(&sdo, slave.position, i);
- cout << "Sdo 0x"
+ cout << "SDO 0x"
<< hex << setfill('0')
<< setw(4) << sdo.sdo_index
<< ", \"" << sdo.name << "\"" << endl;
--- a/tool/CommandSlaves.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandSlaves.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -7,6 +7,7 @@
#include <iostream>
#include <iomanip>
#include <list>
+#include <string.h>
using namespace std;
#include "CommandSlaves.h"
@@ -189,8 +190,6 @@
)
{
SlaveList::const_iterator si;
- list<string> protoList;
- list<string>::const_iterator protoIter;
for (si = slaves.begin(); si != slaves.end(); si++) {
cout << "=== Slave " << dec << si->position << " ===" << endl;
@@ -213,6 +212,9 @@
<< setw(8) << si->serial_number << endl;
if (si->mailbox_protocols) {
+ list<string> protoList;
+ list<string>::const_iterator protoIter;
+
cout << "Mailboxes:" << endl
<< " RX: 0x"
<< hex << setw(4) << si->rx_mailbox_offset << "/"
@@ -259,20 +261,20 @@
if (si->mailbox_protocols & EC_MBOX_COE) {
cout << " CoE details:" << endl
- << " Enable Sdo: "
+ << " Enable SDO: "
<< (si->coe_details.enable_sdo ? "yes" : "no") << endl
- << " Enable Sdo Info: "
+ << " Enable SDO Info: "
<< (si->coe_details.enable_sdo_info ? "yes" : "no") << endl
- << " Enable Pdo Assign: "
+ << " Enable PDO Assign: "
<< (si->coe_details.enable_pdo_assign
? "yes" : "no") << endl
- << " Enable Pdo Configuration: "
+ << " Enable PDO Configuration: "
<< (si->coe_details.enable_pdo_configuration
? "yes" : "no") << endl
<< " Enable Upload at startup: "
<< (si->coe_details.enable_upload_at_startup
? "yes" : "no") << endl
- << " Enable Sdo complete access: "
+ << " Enable SDO complete access: "
<< (si->coe_details.enable_sdo_complete_access
? "yes" : "no") << endl;
}
--- a/tool/CommandStates.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandStates.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -5,6 +5,7 @@
****************************************************************************/
#include <iostream>
+#include <algorithm>
using namespace std;
#include "CommandStates.h"
--- a/tool/CommandUpload.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandUpload.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -14,7 +14,7 @@
/*****************************************************************************/
CommandUpload::CommandUpload():
- SdoCommand("upload", "Read an Sdo entry from a slave.")
+ SdoCommand("upload", "Read an SDO entry from a slave.")
{
}
@@ -30,26 +30,27 @@
<< endl
<< "This command requires a single slave to be selected." << endl
<< endl
- << "The data type of the Sdo entry is taken from the Sdo" << endl
+ << "The data type of the SDO entry is taken from the SDO" << endl
<< "dictionary by default. It can be overridden with the" << endl
- << "--type option. If the slave does not support the Sdo" << endl
- << "information service or the Sdo is not in the dictionary," << endl
+ << "--type option. If the slave does not support the SDO" << endl
+ << "information service or the SDO is not in the dictionary," << endl
<< "the --type option is mandatory." << endl
<< endl
- << "These are the valid Sdo entry data types:" << endl
- << " int8, int16, int32, uint8, uint16, uint32, string." << endl
+ << "These are the valid SDO entry data types:" << endl
+ << " int8, int16, int32, uint8, uint16, uint32, string," << endl
+ << " octet_string." << endl
<< endl
<< "Arguments:" << endl
- << " INDEX is the Sdo index and must be an unsigned" << endl
+ << " INDEX is the SDO index and must be an unsigned" << endl
<< " 16 bit number." << endl
- << " SUBINDEX is the Sdo entry subindex and must be an" << endl
+ << " SUBINDEX is the SDO entry subindex and must be an" << endl
<< " unsigned 8 bit number." << endl
<< endl
<< "Command-specific options:" << endl
<< " --alias -a <alias>" << endl
<< " --position -p <pos> Slave selection. See the help of" << endl
<< " the 'slaves' command." << endl
- << " --type -t <type> Sdo entry data type (see above)." << endl
+ << " --type -t <type> SDO entry data type (see above)." << endl
<< endl
<< numericInfo();
@@ -77,7 +78,7 @@
>> resetiosflags(ios::basefield) // guess base from prefix
>> data.sdo_index;
if (strIndex.fail()) {
- err << "Invalid Sdo index '" << args[0] << "'!";
+ err << "Invalid SDO index '" << args[0] << "'!";
throwInvalidUsageException(err);
}
@@ -86,7 +87,7 @@
>> resetiosflags(ios::basefield) // guess base from prefix
>> uval;
if (strSubIndex.fail() || uval > 0xff) {
- err << "Invalid Sdo subindex '" << args[1] << "'!";
+ err << "Invalid SDO subindex '" << args[1] << "'!";
throwInvalidUsageException(err);
}
data.sdo_entry_subindex = uval;
@@ -110,12 +111,12 @@
m.getSdoEntry(&entry, data.slave_position,
data.sdo_index, data.sdo_entry_subindex);
} catch (MasterDeviceException &e) {
- err << "Failed to determine Sdo entry data type. "
+ err << "Failed to determine SDO entry data type. "
<< "Please specify --type.";
throwCommandException(err);
}
if (!(dataType = findDataType(entry.data_type))) {
- err << "Pdo entry has unknown data type 0x"
+ err << "PDO entry has unknown data type 0x"
<< hex << setfill('0') << setw(4) << entry.data_type << "!"
<< " Please specify --type.";
throwCommandException(err);
@@ -134,7 +135,7 @@
m.sdoUpload(&data);
} catch (MasterDeviceSdoAbortException &e) {
delete [] data.target;
- err << "Sdo transfer aborted with code 0x"
+ err << "SDO transfer aborted with code 0x"
<< setfill('0') << hex << setw(8) << e.abortCode
<< ": " << abortText(e.abortCode);
throwCommandException(err);
@@ -182,6 +183,10 @@
cout << string((const char *) data.target, data.data_size)
<< endl;
break;
+ case 0x000a: // octet_string
+ cout << string((const char *) data.target, data.data_size)
+ << endl;
+ break;
default:
printRawData(data.target, data.data_size); // FIXME
break;
--- a/tool/CommandXml.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/CommandXml.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -6,6 +6,7 @@
#include <iostream>
#include <iomanip>
+#include <string.h>
using namespace std;
#include "CommandXml.h"
@@ -27,9 +28,9 @@
<< endl
<< getBriefDescription() << endl
<< endl
- << "Note that the Pdo information can either originate" << endl
+ << "Note that the PDO information can either originate" << endl
<< "from the SII or from the CoE communication area. For" << endl
- << "slaves, that support configuring Pdo assignment and" << endl
+ << "slaves, that support configuring PDO assignment and" << endl
<< "mapping, the output depends on the last configuration." << endl
<< endl
<< "Command-specific options:" << endl
@@ -110,7 +111,7 @@
for (j = 0; j < sync.pdo_count; j++) {
m.getPdo(&pdo, slave.position, i, j);
pdoType = (sync.control_register & 0x04 ? "R" : "T");
- pdoType += "xPdo";
+ pdoType += "xPdo"; // last 2 letters lowercase in XML!
cout
<< " <" << pdoType
--- a/tool/Makefile.am Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/Makefile.am Wed Jan 13 00:04:47 2010 +0100
@@ -2,32 +2,25 @@
#
# $Id$
#
-# Copyright (C) 2006 Florian Pose, Ingenieurgemeinschaft IgH
+# Copyright (C) 2006-2008 Florian Pose, Ingenieurgemeinschaft IgH
#
# This file is part of the IgH EtherCAT Master.
#
-# The IgH EtherCAT Master is free software; you can redistribute it
-# and/or modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
+# The IgH EtherCAT Master is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License version 2, as
+# published by the Free Software Foundation.
#
-# The IgH EtherCAT Master is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# The IgH EtherCAT Master is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details.
#
-# You should have received a copy of the GNU General Public License
-# along with the IgH EtherCAT Master; if not, write to the Free Software
+# You should have received a copy of the GNU General Public License along
+# with the IgH EtherCAT Master; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
-# The right to use EtherCAT Technology is granted and comes free of
-# charge under condition of compatibility of product made by
-# Licensee. People intending to distribute/sell products based on the
-# code, have to sign an agreement to guarantee that products using
-# software based on IgH EtherCAT master stay compatible with the actual
-# EtherCAT specification (which are released themselves as an open
-# standard) as the (only) precondition to have the right to use EtherCAT
-# Technology, IP and trade marks.
+# Using the EtherCAT technology and brand is permitted in compliance with the
+# industrial property and similar rights of Beckhoff Automation GmbH.
#
# vim: syntax=make
#
--- a/tool/MasterDevice.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/MasterDevice.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -8,6 +8,8 @@
#include <fcntl.h>
#include <errno.h>
#include <sys/ioctl.h>
+#include <string.h>
+#include <unistd.h>
#include <sstream>
#include <iomanip>
@@ -145,7 +147,7 @@
if (ioctl(fd, EC_IOCTL_CONFIG_SDO, data) < 0) {
stringstream err;
- err << "Failed to get slave config Sdo: " << strerror(errno);
+ err << "Failed to get slave config SDO: " << strerror(errno);
throw MasterDeviceException(err);
}
}
@@ -291,7 +293,7 @@
if (ioctl(fd, EC_IOCTL_SLAVE_SDO, sdo)) {
stringstream err;
- err << "Failed to get Sdo: " << strerror(errno);
+ err << "Failed to get SDO: " << strerror(errno);
throw MasterDeviceException(err);
}
}
@@ -311,7 +313,7 @@
if (ioctl(fd, EC_IOCTL_SLAVE_SDO_ENTRY, entry)) {
stringstream err;
- err << "Failed to get Sdo entry: " << strerror(errno);
+ err << "Failed to get SDO entry: " << strerror(errno);
throw MasterDeviceException(err);
}
}
@@ -362,7 +364,7 @@
if (errno == EIO && data->abort_code) {
throw MasterDeviceSdoAbortException(data->abort_code);
} else {
- err << "Failed to download Sdo: " << strerror(errno);
+ err << "Failed to download SDO: " << strerror(errno);
throw MasterDeviceException(err);
}
}
@@ -377,7 +379,7 @@
if (errno == EIO && data->abort_code) {
throw MasterDeviceSdoAbortException(data->abort_code);
} else {
- err << "Failed to upload Sdo: " << strerror(errno);
+ err << "Failed to upload SDO: " << strerror(errno);
throw MasterDeviceException(err);
}
}
--- a/tool/MasterDevice.h Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/MasterDevice.h Wed Jan 13 00:04:47 2010 +0100
@@ -46,7 +46,7 @@
protected:
/** Constructor with abort code parameter. */
MasterDeviceSdoAbortException(uint32_t code):
- MasterDeviceException("Sdo transfer aborted.") {
+ MasterDeviceException("SDO transfer aborted.") {
abortCode = code;
};
};
--- a/tool/SdoCommand.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/SdoCommand.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -57,37 +57,38 @@
/****************************************************************************/
const SdoCommand::DataType SdoCommand::dataTypes[] = {
- {"int8", 0x0002, 1},
- {"int16", 0x0003, 2},
- {"int32", 0x0004, 4},
- {"uint8", 0x0005, 1},
- {"uint16", 0x0006, 2},
- {"uint32", 0x0007, 4},
- {"string", 0x0009, 0},
- {"raw", 0xffff, 0},
+ {"int8", 0x0002, 1},
+ {"int16", 0x0003, 2},
+ {"int32", 0x0004, 4},
+ {"uint8", 0x0005, 1},
+ {"uint16", 0x0006, 2},
+ {"uint32", 0x0007, 4},
+ {"string", 0x0009, 0},
+ {"octet_string", 0x000a, 0},
+ {"raw", 0xffff, 0},
{}
};
/*****************************************************************************/
-/** Sdo abort messages.
+/** SDO abort messages.
*
- * The "Abort Sdo transfer request" supplies an abort code, which can be
+ * The "Abort SDO transfer request" supplies an abort code, which can be
* translated to clear text. This table does the mapping of the codes and
* messages.
*/
const SdoCommand::AbortMessage SdoCommand::abortMessages[] = {
{0x05030000, "Toggle bit not changed"},
- {0x05040000, "Sdo protocol timeout"},
+ {0x05040000, "SDO protocol timeout"},
{0x05040001, "Client/Server command specifier not valid or unknown"},
{0x05040005, "Out of memory"},
{0x06010000, "Unsupported access to an object"},
{0x06010001, "Attempt to read a write-only object"},
{0x06010002, "Attempt to write a read-only object"},
{0x06020000, "This object does not exist in the object directory"},
- {0x06040041, "The object cannot be mapped into the Pdo"},
+ {0x06040041, "The object cannot be mapped into the PDO"},
{0x06040042, "The number and length of the objects to be mapped would"
- " exceed the Pdo length"},
+ " exceed the PDO length"},
{0x06040043, "General parameter incompatibility reason"},
{0x06040047, "Gerneral internal incompatibility in device"},
{0x06060000, "Access failure due to a hardware error"},
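
Both tables above are zero-terminated and scanned linearly by the tool's lookup helpers (findDataType(), abortText()). A rough sketch of such a scan; the struct and field names are assumptions for illustration, not the tool's actual declarations:

    #include <stdint.h>
    #include <stddef.h>

    // Sketch only: field names (name, coeCode, byteSize) are assumed.
    struct DataType {
        const char *name;
        uint16_t coeCode;
        unsigned int byteSize;
    };

    static const DataType *findByCode(const DataType *table, uint16_t code)
    {
        for (const DataType *d = table; d->name; d++) // empty {} entry ends the scan
            if (d->coeCode == code)
                return d;
        return NULL; // unknown code: the download/upload commands then require --type
    }
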
--- a/tool/main.cpp Mon Oct 19 14:33:59 2009 +0200
+++ b/tool/main.cpp Wed Jan 13 00:04:47 2010 +0100
@@ -6,6 +6,7 @@
#include <getopt.h>
#include <libgen.h> // basename()
+#include <stdlib.h>
#include <iostream>
#include <iomanip>