Fixed MTP to work with TWRP

This commit is contained in:
awab228 2018-06-19 23:16:04 +02:00
commit f6dfaef42e
50820 changed files with 20846062 additions and 0 deletions

drivers/misc/Kconfig (new file, 559 lines)

@@ -0,0 +1,559 @@
#
# Misc strange devices
#
menu "Misc devices"
config SENSORS_LIS3LV02D
tristate
depends on INPUT
select INPUT_POLLDEV
default n
config AD525X_DPOT
tristate "Analog Devices Digital Potentiometers"
depends on (I2C || SPI) && SYSFS
help
If you say yes here, you get support for the Analog Devices
AD5258, AD5259, AD5251, AD5252, AD5253, AD5254, AD5255
AD5160, AD5161, AD5162, AD5165, AD5200, AD5201, AD5203,
AD5204, AD5206, AD5207, AD5231, AD5232, AD5233, AD5235,
AD5260, AD5262, AD5263, AD5290, AD5291, AD5292, AD5293,
AD7376, AD8400, AD8402, AD8403, ADN2850, AD5241, AD5242,
AD5243, AD5245, AD5246, AD5247, AD5248, AD5280, AD5282,
ADN2860, AD5273, AD5171, AD5170, AD5172, AD5173, AD5270,
AD5271, AD5272, AD5274
digital potentiometer chips.
See Documentation/misc-devices/ad525x_dpot.txt for the
userspace interface.
This driver can also be built as a module. If so, the module
will be called ad525x_dpot.
config AD525X_DPOT_I2C
tristate "support I2C bus connection"
depends on AD525X_DPOT && I2C
help
Say Y here if you have digital potentiometers hooked to an I2C bus.
To compile this driver as a module, choose M here: the
module will be called ad525x_dpot-i2c.
config AD525X_DPOT_SPI
tristate "support SPI bus connection"
depends on AD525X_DPOT && SPI_MASTER
help
Say Y here if you have digital potentiometers hooked to an SPI bus.
If unsure, say N (but it's safe to say "Y").
To compile this driver as a module, choose M here: the
module will be called ad525x_dpot-spi.
config ATMEL_TCLIB
bool "Atmel AT32/AT91 Timer/Counter Library"
depends on (AVR32 || ARCH_AT91)
help
Select this if you want a library to allocate the Timer/Counter
blocks found on many Atmel processors. This facilitates using
these blocks by different drivers despite processor differences.
config ATMEL_TCB_CLKSRC
bool "TC Block Clocksource"
depends on ATMEL_TCLIB
default y
help
Select this to get a high precision clocksource based on a
TC block with a 5+ MHz base clock rate. Two timer channels
are combined to make a single 32-bit timer.
When GENERIC_CLOCKEVENTS is defined, the third timer channel
may be used as a clock event device supporting oneshot mode
(delays of up to two seconds) based on the 32 kHz clock.
config ATMEL_TCB_CLKSRC_BLOCK
int
depends on ATMEL_TCB_CLKSRC
prompt "TC Block" if ARCH_AT91RM9200 || ARCH_AT91SAM9260 || CPU_AT32AP700X
default 0
range 0 1
help
Some chips provide more than one TC block, so you have the
choice of which one to use for the clock framework. The other
TC can be used for other purposes, such as PWM generation and
interval timing.
config DUMMY_IRQ
tristate "Dummy IRQ handler"
default n
---help---
This module accepts a single 'irq' parameter, which it will register for.
The sole purpose of this module is to help with debugging of systems on
which spurious IRQs occur on a disabled IRQ vector.
config IBM_ASM
tristate "Device driver for IBM RSA service processor"
depends on X86 && PCI && INPUT
---help---
This option enables device driver support for in-band access to the
IBM RSA (Condor) service processor in eServer xSeries systems.
The ibmasm device driver allows user space applications to access
ASM (Advanced Systems Management) functions on the service
processor. The driver is meant to be used in conjunction with
a user space API.
The ibmasm driver also enables the OS to use the UART on the
service processor board as a regular serial port. To make use of
this feature serial driver support (CONFIG_SERIAL_8250) must be
enabled.
WARNING: This software may not be supported or function
correctly on your IBM server. Please consult the IBM ServerProven
website <http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/>
for information on the specific driver level and support statement
for your IBM server.
config PHANTOM
tristate "Sensable PHANToM (PCI)"
depends on PCI
help
Say Y here if you want to build a driver for Sensable PHANToM device.
This driver is only for PCI PHANToMs.
If you choose to build it as a module, its name will be phantom. If
unsure, say N here.
config INTEL_MID_PTI
tristate "Parallel Trace Interface for MIPI P1149.7 cJTAG standard"
depends on PCI && TTY && (X86_INTEL_MID || COMPILE_TEST)
default n
help
The PTI (Parallel Trace Interface) driver directs
trace data routed from various parts in the system out
through an Intel Penwell PTI port and out of the mobile
device for analysis with a debugging tool (Lauterbach or Fido).
You should select this driver if the target kernel is meant for
an Intel Atom (non-netbook) mobile device containing a MIPI
P1149.7 standard implementation.
config SGI_IOC4
tristate "SGI IOC4 Base IO support"
depends on PCI
---help---
This option enables basic support for the IOC4 chip on certain
SGI IO controller cards (IO9, IO10, and PCI-RT). This option
does not enable any specific functions on such a card, but provides
necessary infrastructure for other drivers to utilize.
If you have an SGI Altix with an IOC4-based card say Y.
Otherwise say N.
config TIFM_CORE
tristate "TI Flash Media interface support"
depends on PCI
help
If you want support for Texas Instruments(R) Flash Media adapters
you should select this option and then also choose an appropriate
host adapter, such as 'TI Flash Media PCI74xx/PCI76xx host adapter
support', if you have a TI PCI74xx compatible card reader, for
example.
You will also have to select some flash card format drivers. MMC/SD
cards are supported via 'MMC/SD Card support: TI Flash Media MMC/SD
Interface support (MMC_TIFM_SD)'.
To compile this driver as a module, choose M here: the module will
be called tifm_core.
config TIFM_7XX1
tristate "TI Flash Media PCI74xx/PCI76xx host adapter support"
depends on PCI && TIFM_CORE
default TIFM_CORE
help
This option enables support for Texas Instruments(R) PCI74xx and
PCI76xx families of Flash Media adapters, found in many laptops.
To make actual use of the device, you will have to select some
flash card format drivers, as outlined in the TIFM_CORE Help.
To compile this driver as a module, choose M here: the module will
be called tifm_7xx1.
config ICS932S401
tristate "Integrated Circuits ICS932S401"
depends on I2C
help
If you say yes here you get support for the Integrated Circuits
ICS932S401 clock control chips.
This driver can also be built as a module. If so, the module
will be called ics932s401.
config ATMEL_SSC
tristate "Device driver for Atmel SSC peripheral"
depends on HAS_IOMEM && (AVR32 || ARCH_AT91 || COMPILE_TEST)
---help---
This option enables device driver support for Atmel Synchronized
Serial Communication peripheral (SSC).
The SSC peripheral supports a wide variety of serial frame based
communications, such as I2S, SPI, etc.
If unsure, say N.
config ENCLOSURE_SERVICES
tristate "Enclosure Services"
default n
help
Provides support for intelligent enclosures (bays which
contain storage devices). You also need either a host
driver (SCSI/ATA) which supports enclosures
or a SCSI enclosure device (SES) to use these services.
config SGI_XP
tristate "Support communication between SGI SSIs"
depends on NET
depends on (IA64_GENERIC || IA64_SGI_SN2 || IA64_SGI_UV || X86_UV) && SMP
select IA64_UNCACHED_ALLOCATOR if IA64_GENERIC || IA64_SGI_SN2
select GENERIC_ALLOCATOR if IA64_GENERIC || IA64_SGI_SN2
select SGI_GRU if X86_64 && SMP
---help---
An SGI machine can be divided into multiple Single System
Images which act independently of each other and have
hardware based memory protection from the others. Enabling
this feature will allow for direct communication between SSIs
based on a network adapter and DMA messaging.
config CS5535_MFGPT
tristate "CS5535/CS5536 Geode Multi-Function General Purpose Timer (MFGPT) support"
depends on MFD_CS5535
default n
help
This driver provides access to MFGPT functionality for other
drivers that need timers. MFGPTs are available in the CS5535 and
CS5536 companion chips that are found in AMD Geode and several
other platforms. They have a better resolution and max interval
than the generic PIT, and are suitable for use as high-res timers.
You probably don't want to enable this manually; other drivers that
make use of it should enable it.
config CS5535_MFGPT_DEFAULT_IRQ
int
depends on CS5535_MFGPT
default 7
help
MFGPTs on the CS5535 require an interrupt. The selected IRQ
can be overridden as a module option as well as by drivers that
use the cs5535_mfgpt_ API; however, different architectures might
want to use a different IRQ by default. This is here for
architectures to set as necessary.
config CS5535_CLOCK_EVENT_SRC
tristate "CS5535/CS5536 high-res timer (MFGPT) events"
depends on GENERIC_CLOCKEVENTS && CS5535_MFGPT
help
This driver provides a clock event source based on the MFGPT
timer(s) in the CS5535 and CS5536 companion chips.
MFGPTs have a better resolution and max interval than the
generic PIT, and are suitable for use as high-res timers.
config HP_ILO
tristate "Channel interface driver for the HP iLO processor"
depends on PCI
default n
help
The channel interface driver allows applications to communicate
with iLO management processors present on HP ProLiant servers.
Upon loading, the driver creates /dev/hpilo/dXccbN files, which
can be used to gather data from the management processor, via
read and write system calls.
To compile this driver as a module, choose M here: the
module will be called hpilo.
config SGI_GRU
tristate "SGI GRU driver"
depends on X86_UV && SMP
default n
select MMU_NOTIFIER
---help---
The GRU is a hardware resource located in the system chipset. The GRU
contains memory that can be mmapped into the user address space. This memory is
used to communicate with the GRU to perform functions such as load/store,
scatter/gather, bcopy, AMOs, etc. The GRU is directly accessed by user
instructions using user virtual addresses. GRU instructions (e.g., bcopy) use
user virtual addresses for operands.
If you are not running on an SGI UV system, say N.
config SGI_GRU_DEBUG
bool "SGI GRU driver debug"
depends on SGI_GRU
default n
---help---
This option enables additional debugging code for the SGI GRU driver.
If you are unsure, say N.
config APDS9802ALS
tristate "Medfield Avago APDS9802 ALS Sensor module"
depends on I2C
help
If you say yes here you get support for the ALS APDS9802 ambient
light sensor.
This driver can also be built as a module. If so, the module
will be called apds9802als.
config ISL29003
tristate "Intersil ISL29003 ambient light sensor"
depends on I2C && SYSFS
help
If you say yes here you get support for the Intersil ISL29003
ambient light sensor.
This driver can also be built as a module. If so, the module
will be called isl29003.
config KNOX_KAP
bool "Enable KNOX KAP mode"
default n
help
Enable Samsung Knox Active Protection (KAP) mode.
config ISL29020
tristate "Intersil ISL29020 ambient light sensor"
depends on I2C
help
If you say yes here you get support for the Intersil ISL29020
ambient light sensor.
This driver can also be built as a module. If so, the module
will be called isl29020.
config SENSORS_TSL2550
tristate "Taos TSL2550 ambient light sensor"
depends on I2C && SYSFS
help
If you say yes here you get support for the Taos TSL2550
ambient light sensor.
This driver can also be built as a module. If so, the module
will be called tsl2550.
config SENSORS_BH1780
tristate "ROHM BH1780GLI ambient light sensor"
depends on I2C && SYSFS
help
If you say yes here you get support for the ROHM BH1780GLI
ambient light sensor.
This driver can also be built as a module. If so, the module
will be called bh1780gli.
config SENSORS_BH1770
tristate "BH1770GLC / SFH7770 combined ALS - Proximity sensor"
depends on I2C
---help---
Say Y here if you want to build a driver for BH1770GLC (ROHM) or
SFH7770 (Osram) combined ambient light and proximity sensor chip.
To compile this driver as a module, choose M here: the
module will be called bh1770glc. If unsure, say N here.
config SENSORS_APDS990X
tristate "APDS990X combined als and proximity sensors"
depends on I2C
default n
---help---
Say Y here if you want to build a driver for Avago APDS990x
combined ambient light and proximity sensor chip.
To compile this driver as a module, choose M here: the
module will be called apds990x. If unsure, say N here.
config HMC6352
tristate "Honeywell HMC6352 compass"
depends on I2C
help
This driver provides support for the Honeywell HMC6352 compass,
providing configuration and heading data via sysfs.
config DS1682
tristate "Dallas DS1682 Total Elapsed Time Recorder with Alarm"
depends on I2C
help
If you say yes here you get support for Dallas Semiconductor
DS1682 Total Elapsed Time Recorder.
This driver can also be built as a module. If so, the module
will be called ds1682.
config SPEAR13XX_PCIE_GADGET
bool "PCIe gadget support for SPEAr13XX platform"
depends on ARCH_SPEAR13XX && BROKEN
default n
help
This option enables gadget support for the PCIe controller. If the
board file defines any controller as a PCIe endpoint, a sysfs
entry will be created for that controller. These sysfs nodes can be
used to configure the PCIe endpoint as required.
config TI_DAC7512
tristate "Texas Instruments DAC7512"
depends on SPI && SYSFS
help
If you say yes here you get support for the Texas Instruments
DAC7512 16-bit digital-to-analog converter.
This driver can also be built as a module. If so, the module
will be called ti_dac7512.
config UID_STAT
bool "UID based statistics tracking exported to /proc/uid_stat"
default n
config VMWARE_BALLOON
tristate "VMware Balloon Driver"
depends on X86 && HYPERVISOR_GUEST
help
This is the VMware physical memory management driver, which acts
like a "balloon" that can be inflated to reclaim physical pages
by reserving them in the guest and invalidating them in the
monitor, freeing up the underlying machine pages so they can
be allocated to other guests. The balloon can also be deflated
to allow the guest to use more physical memory.
If unsure, say N.
To compile this driver as a module, choose M here: the
module will be called vmw_balloon.
config ARM_CHARLCD
bool "ARM Ltd. Character LCD Driver"
depends on PLAT_VERSATILE
help
This is a driver for the character LCD found on the ARM Ltd.
Versatile and RealView Platform Baseboards. It doesn't do
very much more than display the text "ARM Linux" on the first
line and the Linux version on the second line, but that's
still useful.
config BMP085
bool
depends on SYSFS
config BMP085_I2C
tristate "BMP085 digital pressure sensor on I2C"
select BMP085
select REGMAP_I2C
depends on I2C && SYSFS
help
Say Y here if you want to support Bosch Sensortec's digital pressure
sensor hooked to an I2C bus.
To compile this driver as a module, choose M here: the
module will be called bmp085-i2c.
config BMP085_SPI
tristate "BMP085 digital pressure sensor on SPI"
select BMP085
select REGMAP_SPI
depends on SPI_MASTER && SYSFS
help
Say Y here if you want to support Bosch Sensortec's digital pressure
sensor hooked to an SPI bus.
To compile this driver as a module, choose M here: the
module will be called bmp085-spi.
config PCH_PHUB
tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) PHUB"
select GENERIC_NET_UTILS
depends on PCI && (X86_32 || COMPILE_TEST)
help
This driver is for the PHUB (Packet Hub) of the Intel Topcliff PCH
(Platform Controller Hub), an IOH (Input/Output Hub) for x86 embedded
processors. The Topcliff holds the MAC address and Option ROM data in
SROM, and this driver can access both.
This driver can also be used for LAPIS Semiconductor's IOHs:
ML7213, ML7223 and ML7831.
The ML7213 is for IVI (In-Vehicle Infotainment) use, the ML7223 IOH is
for MP (Media Phone) use, and the ML7831 IOH is for general purpose use.
ML7213/ML7223/ML7831 are companion chips for the Intel Atom E6xx series
and are fully compatible with the Intel EG20T PCH.
To compile this driver as a module, choose M here: the module will
be called pch_phub.
config USB_SWITCH_FSA9480
tristate "FSA9480 USB Switch"
depends on I2C
help
The FSA9480 is a USB port accessory detector and switch.
The FSA9480 is fully controlled using I2C and enables USB data,
stereo and mono audio, video, microphone and UART data to use
a common connector port.
config LATTICE_ECP3_CONFIG
tristate "Lattice ECP3 FPGA bitstream configuration via SPI"
depends on SPI && SYSFS
select FW_LOADER
default n
help
This option enables support for bitstream configuration (programming
or loading) of the Lattice ECP3 FPGA family via SPI.
If unsure, say N.
config SRAM
bool "Generic on-chip SRAM driver"
depends on HAS_IOMEM
select GENERIC_ALLOCATOR
help
This driver allows you to declare a memory region to be managed by
the genalloc API. It is supposed to be used for small on-chip SRAM
areas found on many SoCs.
config VEXPRESS_SYSCFG
bool "Versatile Express System Configuration driver"
depends on VEXPRESS_CONFIG
default y
help
ARM Ltd. Versatile Express uses a specialised platform configuration
bus. The System Configuration interface is one of the possible means
of generating transactions on this bus.
config UID_CPUTIME
tristate "Per-UID cpu time statistics"
depends on PROFILING
help
Per UID based cpu time statistics exported to /proc/uid_cputime
config TIMA_LOG
tristate "Support for dumping TIMA log"
depends on TIMA
default y
help
This option enables support for dumping TIMA log.
source "drivers/misc/c2port/Kconfig"
source "drivers/misc/eeprom/Kconfig"
source "drivers/misc/cb710/Kconfig"
source "drivers/misc/ti-st/Kconfig"
source "drivers/misc/lis3lv02d/Kconfig"
source "drivers/misc/carma/Kconfig"
source "drivers/misc/altera-stapl/Kconfig"
source "drivers/misc/mei/Kconfig"
source "drivers/misc/vmw_vmci/Kconfig"
source "drivers/misc/mic/Kconfig"
source "drivers/misc/genwqe/Kconfig"
source "drivers/misc/echo/Kconfig"
source "drivers/misc/cxl/Kconfig"
source "drivers/misc/mcu_ipc/Kconfig"
source "drivers/misc/uart_sel/Kconfig"
source "drivers/misc/modem_v1/Kconfig"
source "drivers/misc/modem_if/Kconfig"
source "drivers/misc/gnss_if/Kconfig"
source "drivers/misc/samsung/Kconfig"
endmenu
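
The AD525X_DPOT entries above point to Documentation/misc-devices/ad525x_dpot.txt for the userspace interface; the core driver exposes each wiper through sysfs attributes (rdac0, eeprom0, otp0, tolerance0, and so on). A minimal userspace sketch, assuming the part answers at slave address 0x2e on I2C bus 1 (the path is board-specific):

#include <stdio.h>

int main(void)
{
	/* hypothetical device path; depends on the bus number and slave address */
	const char *path = "/sys/bus/i2c/devices/1-002e/rdac0";
	char buf[16];
	FILE *f = fopen(path, "r+");

	if (!f) {
		perror(path);
		return 1;
	}
	fputs("128\n", f);	/* move wiper 0 to mid-scale on a 256-position part */
	fflush(f);
	rewind(f);
	if (fgets(buf, sizeof(buf), f))
		printf("rdac0 = %s", buf);
	fclose(f);
	return 0;
}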

drivers/misc/Makefile (new file, 77 lines)

@@ -0,0 +1,77 @@
#
# Makefile for misc devices that really don't fit anywhere else.
#
obj-$(CONFIG_IBM_ASM) += ibmasm/
obj-$(CONFIG_AD525X_DPOT) += ad525x_dpot.o
obj-$(CONFIG_AD525X_DPOT_I2C) += ad525x_dpot-i2c.o
obj-$(CONFIG_AD525X_DPOT_SPI) += ad525x_dpot-spi.o
obj-$(CONFIG_INTEL_MID_PTI) += pti.o
obj-$(CONFIG_ATMEL_SSC) += atmel-ssc.o
obj-$(CONFIG_ATMEL_TCLIB) += atmel_tclib.o
obj-$(CONFIG_BMP085) += bmp085.o
obj-$(CONFIG_BMP085_I2C) += bmp085-i2c.o
obj-$(CONFIG_BMP085_SPI) += bmp085-spi.o
obj-$(CONFIG_DUMMY_IRQ) += dummy-irq.o
obj-$(CONFIG_ICS932S401) += ics932s401.o
obj-$(CONFIG_LKDTM) += lkdtm.o
obj-$(CONFIG_TIFM_CORE) += tifm_core.o
obj-$(CONFIG_TIFM_7XX1) += tifm_7xx1.o
obj-$(CONFIG_PHANTOM) += phantom.o
obj-$(CONFIG_SENSORS_BH1780) += bh1780gli.o
obj-$(CONFIG_SENSORS_BH1770) += bh1770glc.o
obj-$(CONFIG_SENSORS_APDS990X) += apds990x.o
obj-$(CONFIG_SGI_IOC4) += ioc4.o
obj-$(CONFIG_ENCLOSURE_SERVICES) += enclosure.o
obj-$(CONFIG_KGDB_TESTS) += kgdbts.o
obj-$(CONFIG_SGI_XP) += sgi-xp/
obj-$(CONFIG_SGI_GRU) += sgi-gru/
obj-$(CONFIG_CS5535_MFGPT) += cs5535-mfgpt.o
obj-$(CONFIG_HP_ILO) += hpilo.o
obj-$(CONFIG_APDS9802ALS) += apds9802als.o
obj-$(CONFIG_ISL29003) += isl29003.o
obj-$(CONFIG_ISL29020) += isl29020.o
obj-$(CONFIG_SENSORS_TSL2550) += tsl2550.o
obj-$(CONFIG_DS1682) += ds1682.o
obj-$(CONFIG_TI_DAC7512) += ti_dac7512.o
obj-$(CONFIG_UID_STAT) += uid_stat.o
obj-$(CONFIG_C2PORT) += c2port/
obj-$(CONFIG_HMC6352) += hmc6352.o
obj-y += eeprom/
obj-y += cb710/
obj-$(CONFIG_SPEAR13XX_PCIE_GADGET) += spear13xx_pcie_gadget.o
obj-$(CONFIG_VMWARE_BALLOON) += vmw_balloon.o
obj-$(CONFIG_ARM_CHARLCD) += arm-charlcd.o
obj-$(CONFIG_PCH_PHUB) += pch_phub.o
obj-y += ti-st/
obj-y += lis3lv02d/
obj-y += carma/
obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
obj-$(CONFIG_ALTERA_STAPL) += altera-stapl/
obj-$(CONFIG_INTEL_MEI) += mei/
obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci/
obj-$(CONFIG_LATTICE_ECP3_CONFIG) += lattice-ecp3-config.o
obj-$(CONFIG_SRAM) += sram.o
obj-y += mic/
obj-$(CONFIG_GENWQE) += genwqe/
obj-$(CONFIG_ECHO) += echo/
obj-$(CONFIG_VEXPRESS_SYSCFG) += vexpress-syscfg.o
obj-$(CONFIG_CXL_BASE) += cxl/
obj-$(CONFIG_UID_CPUTIME) += uid_cputime.o
obj-$(CONFIG_MCU_IPC) += mcu_ipc/
obj-$(CONFIG_UART_SEL) += uart_sel/
obj-$(CONFIG_SEC_SIPC_MODEM_IF) += modem_v1/
# Secure OS Mobicore Interface
obj-$(CONFIG_TRUSTONIC_TEE) += tzic64.o
obj-$(CONFIG_SEC_MODEM_IF) += modem_if/
obj-$(CONFIG_GNSS_SHMEM_IF) += gnss_if/
obj-y += samsung/
# Tima
ifeq ($(TIMA_ENABLED),1)
obj-$(CONFIG_TIMA_LOG) += tima_debug_log.o
endif
# for DMVerity
obj-$(CONFIG_DM_VERITY) += dmverity_query.o

drivers/misc/ad525x_dpot-i2c.c (new file, 121 lines)

@@ -0,0 +1,121 @@
/*
* Driver for the Analog Devices digital potentiometers (I2C bus)
*
* Copyright (C) 2010-2011 Michael Hennerich, Analog Devices Inc.
*
* Licensed under the GPL-2 or later.
*/
#include <linux/i2c.h>
#include <linux/module.h>
#include "ad525x_dpot.h"
/* I2C bus functions */
static int write_d8(void *client, u8 val)
{
return i2c_smbus_write_byte(client, val);
}
static int write_r8d8(void *client, u8 reg, u8 val)
{
return i2c_smbus_write_byte_data(client, reg, val);
}
static int write_r8d16(void *client, u8 reg, u16 val)
{
return i2c_smbus_write_word_data(client, reg, val);
}
static int read_d8(void *client)
{
return i2c_smbus_read_byte(client);
}
static int read_r8d8(void *client, u8 reg)
{
return i2c_smbus_read_byte_data(client, reg);
}
static int read_r8d16(void *client, u8 reg)
{
return i2c_smbus_read_word_data(client, reg);
}
static const struct ad_dpot_bus_ops bops = {
.read_d8 = read_d8,
.read_r8d8 = read_r8d8,
.read_r8d16 = read_r8d16,
.write_d8 = write_d8,
.write_r8d8 = write_r8d8,
.write_r8d16 = write_r8d16,
};
static int ad_dpot_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct ad_dpot_bus_data bdata = {
.client = client,
.bops = &bops,
};
if (!i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_WORD_DATA)) {
dev_err(&client->dev, "SMBUS Word Data not Supported\n");
return -EIO;
}
return ad_dpot_probe(&client->dev, &bdata, id->driver_data, id->name);
}
static int ad_dpot_i2c_remove(struct i2c_client *client)
{
return ad_dpot_remove(&client->dev);
}
static const struct i2c_device_id ad_dpot_id[] = {
{"ad5258", AD5258_ID},
{"ad5259", AD5259_ID},
{"ad5251", AD5251_ID},
{"ad5252", AD5252_ID},
{"ad5253", AD5253_ID},
{"ad5254", AD5254_ID},
{"ad5255", AD5255_ID},
{"ad5241", AD5241_ID},
{"ad5242", AD5242_ID},
{"ad5243", AD5243_ID},
{"ad5245", AD5245_ID},
{"ad5246", AD5246_ID},
{"ad5247", AD5247_ID},
{"ad5248", AD5248_ID},
{"ad5280", AD5280_ID},
{"ad5282", AD5282_ID},
{"adn2860", ADN2860_ID},
{"ad5273", AD5273_ID},
{"ad5161", AD5161_ID},
{"ad5171", AD5171_ID},
{"ad5170", AD5170_ID},
{"ad5172", AD5172_ID},
{"ad5173", AD5173_ID},
{"ad5272", AD5272_ID},
{"ad5274", AD5274_ID},
{}
};
MODULE_DEVICE_TABLE(i2c, ad_dpot_id);
static struct i2c_driver ad_dpot_i2c_driver = {
.driver = {
.name = "ad_dpot",
.owner = THIS_MODULE,
},
.probe = ad_dpot_i2c_probe,
.remove = ad_dpot_i2c_remove,
.id_table = ad_dpot_id,
};
module_i2c_driver(ad_dpot_i2c_driver);
MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>");
MODULE_DESCRIPTION("digital potentiometer I2C bus driver");
MODULE_LICENSE("GPL");
MODULE_ALIAS("i2c:ad_dpot");

drivers/misc/ad525x_dpot-spi.c (new file, 143 lines)

@@ -0,0 +1,143 @@
/*
* Driver for the Analog Devices digital potentiometers (SPI bus)
*
* Copyright (C) 2010-2011 Michael Hennerich, Analog Devices Inc.
*
* Licensed under the GPL-2 or later.
*/
#include <linux/spi/spi.h>
#include <linux/module.h>
#include "ad525x_dpot.h"
/* SPI bus functions */
static int write8(void *client, u8 val)
{
u8 data = val;
return spi_write(client, &data, 1);
}
static int write16(void *client, u8 reg, u8 val)
{
u8 data[2] = {reg, val};
return spi_write(client, data, 2);
}
static int write24(void *client, u8 reg, u16 val)
{
u8 data[3] = {reg, val >> 8, val};
return spi_write(client, data, 3);
}
static int read8(void *client)
{
int ret;
u8 data;
ret = spi_read(client, &data, 1);
if (ret < 0)
return ret;
return data;
}
static int read16(void *client, u8 reg)
{
int ret;
u8 buf_rx[2];
write16(client, reg, 0);
ret = spi_read(client, buf_rx, 2);
if (ret < 0)
return ret;
return (buf_rx[0] << 8) | buf_rx[1];
}
static int read24(void *client, u8 reg)
{
int ret;
u8 buf_rx[3];
write24(client, reg, 0);
ret = spi_read(client, buf_rx, 3);
if (ret < 0)
return ret;
return (buf_rx[1] << 8) | buf_rx[2];
}
static const struct ad_dpot_bus_ops bops = {
.read_d8 = read8,
.read_r8d8 = read16,
.read_r8d16 = read24,
.write_d8 = write8,
.write_r8d8 = write16,
.write_r8d16 = write24,
};
static int ad_dpot_spi_probe(struct spi_device *spi)
{
struct ad_dpot_bus_data bdata = {
.client = spi,
.bops = &bops,
};
return ad_dpot_probe(&spi->dev, &bdata,
spi_get_device_id(spi)->driver_data,
spi_get_device_id(spi)->name);
}
static int ad_dpot_spi_remove(struct spi_device *spi)
{
return ad_dpot_remove(&spi->dev);
}
static const struct spi_device_id ad_dpot_spi_id[] = {
{"ad5160", AD5160_ID},
{"ad5161", AD5161_ID},
{"ad5162", AD5162_ID},
{"ad5165", AD5165_ID},
{"ad5200", AD5200_ID},
{"ad5201", AD5201_ID},
{"ad5203", AD5203_ID},
{"ad5204", AD5204_ID},
{"ad5206", AD5206_ID},
{"ad5207", AD5207_ID},
{"ad5231", AD5231_ID},
{"ad5232", AD5232_ID},
{"ad5233", AD5233_ID},
{"ad5235", AD5235_ID},
{"ad5260", AD5260_ID},
{"ad5262", AD5262_ID},
{"ad5263", AD5263_ID},
{"ad5290", AD5290_ID},
{"ad5291", AD5291_ID},
{"ad5292", AD5292_ID},
{"ad5293", AD5293_ID},
{"ad7376", AD7376_ID},
{"ad8400", AD8400_ID},
{"ad8402", AD8402_ID},
{"ad8403", AD8403_ID},
{"adn2850", ADN2850_ID},
{"ad5270", AD5270_ID},
{"ad5271", AD5271_ID},
{}
};
MODULE_DEVICE_TABLE(spi, ad_dpot_spi_id);
static struct spi_driver ad_dpot_spi_driver = {
.driver = {
.name = "ad_dpot",
.owner = THIS_MODULE,
},
.probe = ad_dpot_spi_probe,
.remove = ad_dpot_spi_remove,
.id_table = ad_dpot_spi_id,
};
module_spi_driver(ad_dpot_spi_driver);
MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>");
MODULE_DESCRIPTION("digital potentiometer SPI bus driver");
MODULE_LICENSE("GPL");
MODULE_ALIAS("spi:ad_dpot");

drivers/misc/ad525x_dpot.c (new file, 770 lines)

@@ -0,0 +1,770 @@
/*
* ad525x_dpot: Driver for the Analog Devices digital potentiometers
* Copyright (c) 2009-2010 Analog Devices, Inc.
* Author: Michael Hennerich <hennerich@blackfin.uclinux.org>
*
* DEVID #Wipers #Positions Resistor Options (kOhm)
* AD5258 1 64 1, 10, 50, 100
* AD5259 1 256 5, 10, 50, 100
* AD5251 2 64 1, 10, 50, 100
* AD5252 2 256 1, 10, 50, 100
* AD5255 3 512 25, 250
* AD5253 4 64 1, 10, 50, 100
* AD5254 4 256 1, 10, 50, 100
* AD5160 1 256 5, 10, 50, 100
* AD5161 1 256 5, 10, 50, 100
* AD5162 2 256 2.5, 10, 50, 100
* AD5165 1 256 100
* AD5200 1 256 10, 50
* AD5201 1 33 10, 50
* AD5203 4 64 10, 100
* AD5204 4 256 10, 50, 100
* AD5206 6 256 10, 50, 100
* AD5207 2 256 10, 50, 100
* AD5231 1 1024 10, 50, 100
* AD5232 2 256 10, 50, 100
* AD5233 4 64 10, 50, 100
* AD5235 2 1024 25, 250
* AD5260 1 256 20, 50, 200
* AD5262 2 256 20, 50, 200
* AD5263 4 256 20, 50, 200
* AD5290 1 256 10, 50, 100
* AD5291 1 256 20, 50, 100 (20-TP)
* AD5292 1 1024 20, 50, 100 (20-TP)
* AD5293 1 1024 20, 50, 100
* AD7376 1 128 10, 50, 100, 1M
* AD8400 1 256 1, 10, 50, 100
* AD8402 2 256 1, 10, 50, 100
* AD8403 4 256 1, 10, 50, 100
* ADN2850 3 512 25, 250
* AD5241 1 256 10, 100, 1M
* AD5246 1 128 5, 10, 50, 100
* AD5247 1 128 5, 10, 50, 100
* AD5245 1 256 5, 10, 50, 100
* AD5243 2 256 2.5, 10, 50, 100
* AD5248 2 256 2.5, 10, 50, 100
* AD5242 2 256 20, 50, 200
* AD5280 1 256 20, 50, 200
* AD5282 2 256 20, 50, 200
* ADN2860 3 512 25, 250
* AD5273 1 64 1, 10, 50, 100 (OTP)
* AD5171 1 64 5, 10, 50, 100 (OTP)
* AD5170 1 256 2.5, 10, 50, 100 (OTP)
* AD5172 2 256 2.5, 10, 50, 100 (OTP)
* AD5173 2 256 2.5, 10, 50, 100 (OTP)
* AD5270 1 1024 20, 50, 100 (50-TP)
* AD5271 1 256 20, 50, 100 (50-TP)
* AD5272 1 1024 20, 50, 100 (50-TP)
* AD5274 1 256 20, 50, 100 (50-TP)
*
* See Documentation/misc-devices/ad525x_dpot.txt for more info.
*
* derived from ad5258.c
* Copyright (c) 2009 Cyber Switching, Inc.
* Author: Chris Verges <chrisv@cyberswitching.com>
*
* derived from ad5252.c
* Copyright (c) 2006-2011 Michael Hennerich <hennerich@blackfin.uclinux.org>
*
* Licensed under the GPL-2 or later.
*/
#include <linux/module.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/slab.h>
#include "ad525x_dpot.h"
/*
* Client data (each client gets its own)
*/
struct dpot_data {
struct ad_dpot_bus_data bdata;
struct mutex update_lock;
unsigned rdac_mask;
unsigned max_pos;
unsigned long devid;
unsigned uid;
unsigned feat;
unsigned wipers;
u16 rdac_cache[MAX_RDACS];
DECLARE_BITMAP(otp_en_mask, MAX_RDACS);
};
static inline int dpot_read_d8(struct dpot_data *dpot)
{
return dpot->bdata.bops->read_d8(dpot->bdata.client);
}
static inline int dpot_read_r8d8(struct dpot_data *dpot, u8 reg)
{
return dpot->bdata.bops->read_r8d8(dpot->bdata.client, reg);
}
static inline int dpot_read_r8d16(struct dpot_data *dpot, u8 reg)
{
return dpot->bdata.bops->read_r8d16(dpot->bdata.client, reg);
}
static inline int dpot_write_d8(struct dpot_data *dpot, u8 val)
{
return dpot->bdata.bops->write_d8(dpot->bdata.client, val);
}
static inline int dpot_write_r8d8(struct dpot_data *dpot, u8 reg, u16 val)
{
return dpot->bdata.bops->write_r8d8(dpot->bdata.client, reg, val);
}
static inline int dpot_write_r8d16(struct dpot_data *dpot, u8 reg, u16 val)
{
return dpot->bdata.bops->write_r8d16(dpot->bdata.client, reg, val);
}
static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg)
{
unsigned ctrl = 0;
int value;
if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD))) {
if (dpot->feat & F_RDACS_WONLY)
return dpot->rdac_cache[reg & DPOT_RDAC_MASK];
if (dpot->uid == DPOT_UID(AD5291_ID) ||
dpot->uid == DPOT_UID(AD5292_ID) ||
dpot->uid == DPOT_UID(AD5293_ID)) {
value = dpot_read_r8d8(dpot,
DPOT_AD5291_READ_RDAC << 2);
if (dpot->uid == DPOT_UID(AD5291_ID))
value = value >> 2;
return value;
} else if (dpot->uid == DPOT_UID(AD5270_ID) ||
dpot->uid == DPOT_UID(AD5271_ID)) {
value = dpot_read_r8d8(dpot,
DPOT_AD5270_1_2_4_READ_RDAC << 2);
if (value < 0)
return value;
if (dpot->uid == DPOT_UID(AD5271_ID))
value = value >> 2;
return value;
}
ctrl = DPOT_SPI_READ_RDAC;
} else if (reg & DPOT_ADDR_EEPROM) {
ctrl = DPOT_SPI_READ_EEPROM;
}
if (dpot->feat & F_SPI_16BIT)
return dpot_read_r8d8(dpot, ctrl);
else if (dpot->feat & F_SPI_24BIT)
return dpot_read_r8d16(dpot, ctrl);
return -EFAULT;
}
static s32 dpot_read_i2c(struct dpot_data *dpot, u8 reg)
{
int value;
unsigned ctrl = 0;
switch (dpot->uid) {
case DPOT_UID(AD5246_ID):
case DPOT_UID(AD5247_ID):
return dpot_read_d8(dpot);
case DPOT_UID(AD5245_ID):
case DPOT_UID(AD5241_ID):
case DPOT_UID(AD5242_ID):
case DPOT_UID(AD5243_ID):
case DPOT_UID(AD5248_ID):
case DPOT_UID(AD5280_ID):
case DPOT_UID(AD5282_ID):
ctrl = ((reg & DPOT_RDAC_MASK) == DPOT_RDAC0) ?
0 : DPOT_AD5282_RDAC_AB;
return dpot_read_r8d8(dpot, ctrl);
case DPOT_UID(AD5170_ID):
case DPOT_UID(AD5171_ID):
case DPOT_UID(AD5273_ID):
return dpot_read_d8(dpot);
case DPOT_UID(AD5172_ID):
case DPOT_UID(AD5173_ID):
ctrl = ((reg & DPOT_RDAC_MASK) == DPOT_RDAC0) ?
0 : DPOT_AD5172_3_A0;
return dpot_read_r8d8(dpot, ctrl);
case DPOT_UID(AD5272_ID):
case DPOT_UID(AD5274_ID):
dpot_write_r8d8(dpot,
(DPOT_AD5270_1_2_4_READ_RDAC << 2), 0);
value = dpot_read_r8d16(dpot,
DPOT_AD5270_1_2_4_RDAC << 2);
if (value < 0)
return value;
/*
* AD5272/AD5274 returns high byte first, however
* underlying smbus expects low byte first.
*/
value = swab16(value);
if (dpot->uid == DPOT_UID(AD5274_ID))
value = value >> 2;
return value;
default:
if ((reg & DPOT_REG_TOL) || (dpot->max_pos > 256))
return dpot_read_r8d16(dpot, (reg & 0xF8) |
((reg & 0x7) << 1));
else
return dpot_read_r8d8(dpot, reg);
}
}
static s32 dpot_read(struct dpot_data *dpot, u8 reg)
{
if (dpot->feat & F_SPI)
return dpot_read_spi(dpot, reg);
else
return dpot_read_i2c(dpot, reg);
}
static s32 dpot_write_spi(struct dpot_data *dpot, u8 reg, u16 value)
{
unsigned val = 0;
if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD | DPOT_ADDR_OTP))) {
if (dpot->feat & F_RDACS_WONLY)
dpot->rdac_cache[reg & DPOT_RDAC_MASK] = value;
if (dpot->feat & F_AD_APPDATA) {
if (dpot->feat & F_SPI_8BIT) {
val = ((reg & DPOT_RDAC_MASK) <<
DPOT_MAX_POS(dpot->devid)) |
value;
return dpot_write_d8(dpot, val);
} else if (dpot->feat & F_SPI_16BIT) {
val = ((reg & DPOT_RDAC_MASK) <<
DPOT_MAX_POS(dpot->devid)) |
value;
return dpot_write_r8d8(dpot, val >> 8,
val & 0xFF);
} else
BUG();
} else {
if (dpot->uid == DPOT_UID(AD5291_ID) ||
dpot->uid == DPOT_UID(AD5292_ID) ||
dpot->uid == DPOT_UID(AD5293_ID)) {
dpot_write_r8d8(dpot, DPOT_AD5291_CTRLREG << 2,
DPOT_AD5291_UNLOCK_CMD);
if (dpot->uid == DPOT_UID(AD5291_ID))
value = value << 2;
return dpot_write_r8d8(dpot,
(DPOT_AD5291_RDAC << 2) |
(value >> 8), value & 0xFF);
} else if (dpot->uid == DPOT_UID(AD5270_ID) ||
dpot->uid == DPOT_UID(AD5271_ID)) {
dpot_write_r8d8(dpot,
DPOT_AD5270_1_2_4_CTRLREG << 2,
DPOT_AD5270_1_2_4_UNLOCK_CMD);
if (dpot->uid == DPOT_UID(AD5271_ID))
value = value << 2;
return dpot_write_r8d8(dpot,
(DPOT_AD5270_1_2_4_RDAC << 2) |
(value >> 8), value & 0xFF);
}
val = DPOT_SPI_RDAC | (reg & DPOT_RDAC_MASK);
}
} else if (reg & DPOT_ADDR_EEPROM) {
val = DPOT_SPI_EEPROM | (reg & DPOT_RDAC_MASK);
} else if (reg & DPOT_ADDR_CMD) {
switch (reg) {
case DPOT_DEC_ALL_6DB:
val = DPOT_SPI_DEC_ALL_6DB;
break;
case DPOT_INC_ALL_6DB:
val = DPOT_SPI_INC_ALL_6DB;
break;
case DPOT_DEC_ALL:
val = DPOT_SPI_DEC_ALL;
break;
case DPOT_INC_ALL:
val = DPOT_SPI_INC_ALL;
break;
}
} else if (reg & DPOT_ADDR_OTP) {
if (dpot->uid == DPOT_UID(AD5291_ID) ||
dpot->uid == DPOT_UID(AD5292_ID)) {
return dpot_write_r8d8(dpot,
DPOT_AD5291_STORE_XTPM << 2, 0);
} else if (dpot->uid == DPOT_UID(AD5270_ID) ||
dpot->uid == DPOT_UID(AD5271_ID)) {
return dpot_write_r8d8(dpot,
DPOT_AD5270_1_2_4_STORE_XTPM << 2, 0);
}
} else
BUG();
if (dpot->feat & F_SPI_16BIT)
return dpot_write_r8d8(dpot, val, value);
else if (dpot->feat & F_SPI_24BIT)
return dpot_write_r8d16(dpot, val, value);
return -EFAULT;
}
static s32 dpot_write_i2c(struct dpot_data *dpot, u8 reg, u16 value)
{
/* Only write the instruction byte for certain commands */
unsigned tmp = 0, ctrl = 0;
switch (dpot->uid) {
case DPOT_UID(AD5246_ID):
case DPOT_UID(AD5247_ID):
return dpot_write_d8(dpot, value);
break;
case DPOT_UID(AD5245_ID):
case DPOT_UID(AD5241_ID):
case DPOT_UID(AD5242_ID):
case DPOT_UID(AD5243_ID):
case DPOT_UID(AD5248_ID):
case DPOT_UID(AD5280_ID):
case DPOT_UID(AD5282_ID):
ctrl = ((reg & DPOT_RDAC_MASK) == DPOT_RDAC0) ?
0 : DPOT_AD5282_RDAC_AB;
return dpot_write_r8d8(dpot, ctrl, value);
break;
case DPOT_UID(AD5171_ID):
case DPOT_UID(AD5273_ID):
if (reg & DPOT_ADDR_OTP) {
tmp = dpot_read_d8(dpot);
if (tmp >> 6) /* Ready to Program? */
return -EFAULT;
ctrl = DPOT_AD5273_FUSE;
}
return dpot_write_r8d8(dpot, ctrl, value);
break;
case DPOT_UID(AD5172_ID):
case DPOT_UID(AD5173_ID):
ctrl = ((reg & DPOT_RDAC_MASK) == DPOT_RDAC0) ?
0 : DPOT_AD5172_3_A0;
if (reg & DPOT_ADDR_OTP) {
tmp = dpot_read_r8d16(dpot, ctrl);
if (tmp >> 14) /* Ready to Program? */
return -EFAULT;
ctrl |= DPOT_AD5170_2_3_FUSE;
}
return dpot_write_r8d8(dpot, ctrl, value);
break;
case DPOT_UID(AD5170_ID):
if (reg & DPOT_ADDR_OTP) {
tmp = dpot_read_r8d16(dpot, tmp);
if (tmp >> 14) /* Ready to Program? */
return -EFAULT;
ctrl = DPOT_AD5170_2_3_FUSE;
}
return dpot_write_r8d8(dpot, ctrl, value);
break;
case DPOT_UID(AD5272_ID):
case DPOT_UID(AD5274_ID):
dpot_write_r8d8(dpot, DPOT_AD5270_1_2_4_CTRLREG << 2,
DPOT_AD5270_1_2_4_UNLOCK_CMD);
if (reg & DPOT_ADDR_OTP)
return dpot_write_r8d8(dpot,
DPOT_AD5270_1_2_4_STORE_XTPM << 2, 0);
if (dpot->uid == DPOT_UID(AD5274_ID))
value = value << 2;
return dpot_write_r8d8(dpot, (DPOT_AD5270_1_2_4_RDAC << 2) |
(value >> 8), value & 0xFF);
break;
default:
if (reg & DPOT_ADDR_CMD)
return dpot_write_d8(dpot, reg);
if (dpot->max_pos > 256)
return dpot_write_r8d16(dpot, (reg & 0xF8) |
((reg & 0x7) << 1), value);
else
/* All other registers require instruction + data bytes */
return dpot_write_r8d8(dpot, reg, value);
}
}
static s32 dpot_write(struct dpot_data *dpot, u8 reg, u16 value)
{
if (dpot->feat & F_SPI)
return dpot_write_spi(dpot, reg, value);
else
return dpot_write_i2c(dpot, reg, value);
}
/* sysfs functions */
static ssize_t sysfs_show_reg(struct device *dev,
struct device_attribute *attr,
char *buf, u32 reg)
{
struct dpot_data *data = dev_get_drvdata(dev);
s32 value;
if (reg & DPOT_ADDR_OTP_EN)
return sprintf(buf, "%s\n",
test_bit(DPOT_RDAC_MASK & reg, data->otp_en_mask) ?
"enabled" : "disabled");
mutex_lock(&data->update_lock);
value = dpot_read(data, reg);
mutex_unlock(&data->update_lock);
if (value < 0)
return -EINVAL;
/*
* Let someone else deal with converting this ...
* the tolerance is a two-byte value where the MSB
* is a sign + integer value, and the LSB is a
* decimal value. See page 18 of the AD5258
* datasheet (Rev. A) for more details.
*/
if (reg & DPOT_REG_TOL)
return sprintf(buf, "0x%04x\n", value & 0xFFFF);
else
return sprintf(buf, "%u\n", value & data->rdac_mask);
}
static ssize_t sysfs_set_reg(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count, u32 reg)
{
struct dpot_data *data = dev_get_drvdata(dev);
unsigned long value;
int err;
if (reg & DPOT_ADDR_OTP_EN) {
if (!strncmp(buf, "enabled", sizeof("enabled")))
set_bit(DPOT_RDAC_MASK & reg, data->otp_en_mask);
else
clear_bit(DPOT_RDAC_MASK & reg, data->otp_en_mask);
return count;
}
if ((reg & DPOT_ADDR_OTP) &&
!test_bit(DPOT_RDAC_MASK & reg, data->otp_en_mask))
return -EPERM;
err = kstrtoul(buf, 10, &value);
if (err)
return err;
if (value > data->rdac_mask)
value = data->rdac_mask;
mutex_lock(&data->update_lock);
dpot_write(data, reg, value);
if (reg & DPOT_ADDR_EEPROM)
msleep(26); /* Sleep while the EEPROM updates */
else if (reg & DPOT_ADDR_OTP)
msleep(400); /* Sleep while the OTP updates */
mutex_unlock(&data->update_lock);
return count;
}
static ssize_t sysfs_do_cmd(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count, u32 reg)
{
struct dpot_data *data = dev_get_drvdata(dev);
mutex_lock(&data->update_lock);
dpot_write(data, reg, 0);
mutex_unlock(&data->update_lock);
return count;
}
/* ------------------------------------------------------------------------- */
#define DPOT_DEVICE_SHOW(_name, _reg) static ssize_t \
show_##_name(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
return sysfs_show_reg(dev, attr, buf, _reg); \
}
#define DPOT_DEVICE_SET(_name, _reg) static ssize_t \
set_##_name(struct device *dev, \
struct device_attribute *attr, \
const char *buf, size_t count) \
{ \
return sysfs_set_reg(dev, attr, buf, count, _reg); \
}
#define DPOT_DEVICE_SHOW_SET(name, reg) \
DPOT_DEVICE_SHOW(name, reg) \
DPOT_DEVICE_SET(name, reg) \
static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, set_##name);
#define DPOT_DEVICE_SHOW_ONLY(name, reg) \
DPOT_DEVICE_SHOW(name, reg) \
static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, NULL);
DPOT_DEVICE_SHOW_SET(rdac0, DPOT_ADDR_RDAC | DPOT_RDAC0);
DPOT_DEVICE_SHOW_SET(eeprom0, DPOT_ADDR_EEPROM | DPOT_RDAC0);
DPOT_DEVICE_SHOW_ONLY(tolerance0, DPOT_ADDR_EEPROM | DPOT_TOL_RDAC0);
DPOT_DEVICE_SHOW_SET(otp0, DPOT_ADDR_OTP | DPOT_RDAC0);
DPOT_DEVICE_SHOW_SET(otp0en, DPOT_ADDR_OTP_EN | DPOT_RDAC0);
DPOT_DEVICE_SHOW_SET(rdac1, DPOT_ADDR_RDAC | DPOT_RDAC1);
DPOT_DEVICE_SHOW_SET(eeprom1, DPOT_ADDR_EEPROM | DPOT_RDAC1);
DPOT_DEVICE_SHOW_ONLY(tolerance1, DPOT_ADDR_EEPROM | DPOT_TOL_RDAC1);
DPOT_DEVICE_SHOW_SET(otp1, DPOT_ADDR_OTP | DPOT_RDAC1);
DPOT_DEVICE_SHOW_SET(otp1en, DPOT_ADDR_OTP_EN | DPOT_RDAC1);
DPOT_DEVICE_SHOW_SET(rdac2, DPOT_ADDR_RDAC | DPOT_RDAC2);
DPOT_DEVICE_SHOW_SET(eeprom2, DPOT_ADDR_EEPROM | DPOT_RDAC2);
DPOT_DEVICE_SHOW_ONLY(tolerance2, DPOT_ADDR_EEPROM | DPOT_TOL_RDAC2);
DPOT_DEVICE_SHOW_SET(otp2, DPOT_ADDR_OTP | DPOT_RDAC2);
DPOT_DEVICE_SHOW_SET(otp2en, DPOT_ADDR_OTP_EN | DPOT_RDAC2);
DPOT_DEVICE_SHOW_SET(rdac3, DPOT_ADDR_RDAC | DPOT_RDAC3);
DPOT_DEVICE_SHOW_SET(eeprom3, DPOT_ADDR_EEPROM | DPOT_RDAC3);
DPOT_DEVICE_SHOW_ONLY(tolerance3, DPOT_ADDR_EEPROM | DPOT_TOL_RDAC3);
DPOT_DEVICE_SHOW_SET(otp3, DPOT_ADDR_OTP | DPOT_RDAC3);
DPOT_DEVICE_SHOW_SET(otp3en, DPOT_ADDR_OTP_EN | DPOT_RDAC3);
DPOT_DEVICE_SHOW_SET(rdac4, DPOT_ADDR_RDAC | DPOT_RDAC4);
DPOT_DEVICE_SHOW_SET(eeprom4, DPOT_ADDR_EEPROM | DPOT_RDAC4);
DPOT_DEVICE_SHOW_ONLY(tolerance4, DPOT_ADDR_EEPROM | DPOT_TOL_RDAC4);
DPOT_DEVICE_SHOW_SET(otp4, DPOT_ADDR_OTP | DPOT_RDAC4);
DPOT_DEVICE_SHOW_SET(otp4en, DPOT_ADDR_OTP_EN | DPOT_RDAC4);
DPOT_DEVICE_SHOW_SET(rdac5, DPOT_ADDR_RDAC | DPOT_RDAC5);
DPOT_DEVICE_SHOW_SET(eeprom5, DPOT_ADDR_EEPROM | DPOT_RDAC5);
DPOT_DEVICE_SHOW_ONLY(tolerance5, DPOT_ADDR_EEPROM | DPOT_TOL_RDAC5);
DPOT_DEVICE_SHOW_SET(otp5, DPOT_ADDR_OTP | DPOT_RDAC5);
DPOT_DEVICE_SHOW_SET(otp5en, DPOT_ADDR_OTP_EN | DPOT_RDAC5);
static const struct attribute *dpot_attrib_wipers[] = {
&dev_attr_rdac0.attr,
&dev_attr_rdac1.attr,
&dev_attr_rdac2.attr,
&dev_attr_rdac3.attr,
&dev_attr_rdac4.attr,
&dev_attr_rdac5.attr,
NULL
};
static const struct attribute *dpot_attrib_eeprom[] = {
&dev_attr_eeprom0.attr,
&dev_attr_eeprom1.attr,
&dev_attr_eeprom2.attr,
&dev_attr_eeprom3.attr,
&dev_attr_eeprom4.attr,
&dev_attr_eeprom5.attr,
NULL
};
static const struct attribute *dpot_attrib_otp[] = {
&dev_attr_otp0.attr,
&dev_attr_otp1.attr,
&dev_attr_otp2.attr,
&dev_attr_otp3.attr,
&dev_attr_otp4.attr,
&dev_attr_otp5.attr,
NULL
};
static const struct attribute *dpot_attrib_otp_en[] = {
&dev_attr_otp0en.attr,
&dev_attr_otp1en.attr,
&dev_attr_otp2en.attr,
&dev_attr_otp3en.attr,
&dev_attr_otp4en.attr,
&dev_attr_otp5en.attr,
NULL
};
static const struct attribute *dpot_attrib_tolerance[] = {
&dev_attr_tolerance0.attr,
&dev_attr_tolerance1.attr,
&dev_attr_tolerance2.attr,
&dev_attr_tolerance3.attr,
&dev_attr_tolerance4.attr,
&dev_attr_tolerance5.attr,
NULL
};
/* ------------------------------------------------------------------------- */
#define DPOT_DEVICE_DO_CMD(_name, _cmd) static ssize_t \
set_##_name(struct device *dev, \
struct device_attribute *attr, \
const char *buf, size_t count) \
{ \
return sysfs_do_cmd(dev, attr, buf, count, _cmd); \
} \
static DEVICE_ATTR(_name, S_IWUSR | S_IRUGO, NULL, set_##_name);
DPOT_DEVICE_DO_CMD(inc_all, DPOT_INC_ALL);
DPOT_DEVICE_DO_CMD(dec_all, DPOT_DEC_ALL);
DPOT_DEVICE_DO_CMD(inc_all_6db, DPOT_INC_ALL_6DB);
DPOT_DEVICE_DO_CMD(dec_all_6db, DPOT_DEC_ALL_6DB);
static struct attribute *ad525x_attributes_commands[] = {
&dev_attr_inc_all.attr,
&dev_attr_dec_all.attr,
&dev_attr_inc_all_6db.attr,
&dev_attr_dec_all_6db.attr,
NULL
};
static const struct attribute_group ad525x_group_commands = {
.attrs = ad525x_attributes_commands,
};
static int ad_dpot_add_files(struct device *dev,
unsigned features, unsigned rdac)
{
int err = sysfs_create_file(&dev->kobj,
dpot_attrib_wipers[rdac]);
if (features & F_CMD_EEP)
err |= sysfs_create_file(&dev->kobj,
dpot_attrib_eeprom[rdac]);
if (features & F_CMD_TOL)
err |= sysfs_create_file(&dev->kobj,
dpot_attrib_tolerance[rdac]);
if (features & F_CMD_OTP) {
err |= sysfs_create_file(&dev->kobj,
dpot_attrib_otp_en[rdac]);
err |= sysfs_create_file(&dev->kobj,
dpot_attrib_otp[rdac]);
}
if (err)
dev_err(dev, "failed to register sysfs hooks for RDAC%d\n",
rdac);
return err;
}
static inline void ad_dpot_remove_files(struct device *dev,
unsigned features, unsigned rdac)
{
sysfs_remove_file(&dev->kobj,
dpot_attrib_wipers[rdac]);
if (features & F_CMD_EEP)
sysfs_remove_file(&dev->kobj,
dpot_attrib_eeprom[rdac]);
if (features & F_CMD_TOL)
sysfs_remove_file(&dev->kobj,
dpot_attrib_tolerance[rdac]);
if (features & F_CMD_OTP) {
sysfs_remove_file(&dev->kobj,
dpot_attrib_otp_en[rdac]);
sysfs_remove_file(&dev->kobj,
dpot_attrib_otp[rdac]);
}
}
int ad_dpot_probe(struct device *dev,
struct ad_dpot_bus_data *bdata, unsigned long devid,
const char *name)
{
struct dpot_data *data;
int i, err = 0;
data = kzalloc(sizeof(struct dpot_data), GFP_KERNEL);
if (!data) {
err = -ENOMEM;
goto exit;
}
dev_set_drvdata(dev, data);
mutex_init(&data->update_lock);
data->bdata = *bdata;
data->devid = devid;
data->max_pos = 1 << DPOT_MAX_POS(devid);
data->rdac_mask = data->max_pos - 1;
data->feat = DPOT_FEAT(devid);
data->uid = DPOT_UID(devid);
data->wipers = DPOT_WIPERS(devid);
for (i = DPOT_RDAC0; i < MAX_RDACS; i++)
if (data->wipers & (1 << i)) {
err = ad_dpot_add_files(dev, data->feat, i);
if (err)
goto exit_remove_files;
/* power-up midscale */
if (data->feat & F_RDACS_WONLY)
data->rdac_cache[i] = data->max_pos / 2;
}
if (data->feat & F_CMD_INC)
err = sysfs_create_group(&dev->kobj, &ad525x_group_commands);
if (err) {
dev_err(dev, "failed to register sysfs hooks\n");
goto exit_free;
}
dev_info(dev, "%s %d-Position Digital Potentiometer registered\n",
name, data->max_pos);
return 0;
exit_remove_files:
for (i = DPOT_RDAC0; i < MAX_RDACS; i++)
if (data->wipers & (1 << i))
ad_dpot_remove_files(dev, data->feat, i);
exit_free:
kfree(data);
dev_set_drvdata(dev, NULL);
exit:
dev_err(dev, "failed to create client for %s ID 0x%lX\n",
name, devid);
return err;
}
EXPORT_SYMBOL(ad_dpot_probe);
int ad_dpot_remove(struct device *dev)
{
struct dpot_data *data = dev_get_drvdata(dev);
int i;
for (i = DPOT_RDAC0; i < MAX_RDACS; i++)
if (data->wipers & (1 << i))
ad_dpot_remove_files(dev, data->feat, i);
kfree(data);
return 0;
}
EXPORT_SYMBOL(ad_dpot_remove);
MODULE_AUTHOR("Chris Verges <chrisv@cyberswitching.com>, "
"Michael Hennerich <hennerich@blackfin.uclinux.org>");
MODULE_DESCRIPTION("Digital potentiometer driver");
MODULE_LICENSE("GPL");

drivers/misc/ad525x_dpot.h (new file, 215 lines)

@@ -0,0 +1,215 @@
/*
* Driver for the Analog Devices digital potentiometers
*
* Copyright (C) 2010 Michael Hennerich, Analog Devices Inc.
*
* Licensed under the GPL-2 or later.
*/
#ifndef _AD_DPOT_H_
#define _AD_DPOT_H_
#include <linux/types.h>
#define DPOT_CONF(features, wipers, max_pos, uid) \
(((features) << 18) | (((wipers) & 0xFF) << 10) | \
((max_pos & 0xF) << 6) | (uid & 0x3F))
#define DPOT_UID(conf) (conf & 0x3F)
#define DPOT_MAX_POS(conf) ((conf >> 6) & 0xF)
#define DPOT_WIPERS(conf) ((conf >> 10) & 0xFF)
#define DPOT_FEAT(conf) (conf >> 18)
#define BRDAC0 (1 << 0)
#define BRDAC1 (1 << 1)
#define BRDAC2 (1 << 2)
#define BRDAC3 (1 << 3)
#define BRDAC4 (1 << 4)
#define BRDAC5 (1 << 5)
#define MAX_RDACS 6
#define F_CMD_INC (1 << 0) /* Features INC/DEC ALL, 6dB */
#define F_CMD_EEP (1 << 1) /* Features EEPROM */
#define F_CMD_OTP (1 << 2) /* Features OTP */
#define F_CMD_TOL (1 << 3) /* RDACS feature Tolerance REG */
#define F_RDACS_RW (1 << 4) /* RDACS are Read/Write */
#define F_RDACS_WONLY (1 << 5) /* RDACS are Write only */
#define F_AD_APPDATA (1 << 6) /* RDAC Address append to data */
#define F_SPI_8BIT (1 << 7) /* All SPI XFERS are 8-bit */
#define F_SPI_16BIT (1 << 8) /* All SPI XFERS are 16-bit */
#define F_SPI_24BIT (1 << 9) /* All SPI XFERS are 24-bit */
#define F_RDACS_RW_TOL (F_RDACS_RW | F_CMD_EEP | F_CMD_TOL)
#define F_RDACS_RW_EEP (F_RDACS_RW | F_CMD_EEP)
#define F_SPI (F_SPI_8BIT | F_SPI_16BIT | F_SPI_24BIT)
enum dpot_devid {
AD5258_ID = DPOT_CONF(F_RDACS_RW_TOL, BRDAC0, 6, 0), /* I2C */
AD5259_ID = DPOT_CONF(F_RDACS_RW_TOL, BRDAC0, 8, 1),
AD5251_ID = DPOT_CONF(F_RDACS_RW_TOL | F_CMD_INC,
BRDAC1 | BRDAC3, 6, 2),
AD5252_ID = DPOT_CONF(F_RDACS_RW_TOL | F_CMD_INC,
BRDAC1 | BRDAC3, 8, 3),
AD5253_ID = DPOT_CONF(F_RDACS_RW_TOL | F_CMD_INC,
BRDAC0 | BRDAC1 | BRDAC2 | BRDAC3, 6, 4),
AD5254_ID = DPOT_CONF(F_RDACS_RW_TOL | F_CMD_INC,
BRDAC0 | BRDAC1 | BRDAC2 | BRDAC3, 8, 5),
AD5255_ID = DPOT_CONF(F_RDACS_RW_TOL | F_CMD_INC,
BRDAC0 | BRDAC1 | BRDAC2, 9, 6),
AD5160_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 8, 7), /* SPI */
AD5161_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 8, 8),
AD5162_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1, 8, 9),
AD5165_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 8, 10),
AD5200_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 8, 11),
AD5201_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 5, 12),
AD5203_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0 | BRDAC1 | BRDAC2 | BRDAC3, 6, 13),
AD5204_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1 | BRDAC2 | BRDAC3, 8, 14),
AD5206_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1 | BRDAC2 | BRDAC3 | BRDAC4 | BRDAC5,
8, 15),
AD5207_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1, 8, 16),
AD5231_ID = DPOT_CONF(F_RDACS_RW_EEP | F_CMD_INC | F_SPI_24BIT,
BRDAC0, 10, 17),
AD5232_ID = DPOT_CONF(F_RDACS_RW_EEP | F_CMD_INC | F_SPI_16BIT,
BRDAC0 | BRDAC1, 8, 18),
AD5233_ID = DPOT_CONF(F_RDACS_RW_EEP | F_CMD_INC | F_SPI_16BIT,
BRDAC0 | BRDAC1 | BRDAC2 | BRDAC3, 6, 19),
AD5235_ID = DPOT_CONF(F_RDACS_RW_EEP | F_CMD_INC | F_SPI_24BIT,
BRDAC0 | BRDAC1, 10, 20),
AD5260_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 8, 21),
AD5262_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1, 8, 22),
AD5263_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1 | BRDAC2 | BRDAC3, 8, 23),
AD5290_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 8, 24),
AD5291_ID = DPOT_CONF(F_RDACS_RW | F_SPI_16BIT | F_CMD_OTP,
BRDAC0, 8, 25),
AD5292_ID = DPOT_CONF(F_RDACS_RW | F_SPI_16BIT | F_CMD_OTP,
BRDAC0, 10, 26),
AD5293_ID = DPOT_CONF(F_RDACS_RW | F_SPI_16BIT, BRDAC0, 10, 27),
AD7376_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT,
BRDAC0, 7, 28),
AD8400_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0, 8, 29),
AD8402_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1, 8, 30),
AD8403_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT,
BRDAC0 | BRDAC1 | BRDAC2, 8, 31),
ADN2850_ID = DPOT_CONF(F_RDACS_RW_EEP | F_CMD_INC | F_SPI_24BIT,
BRDAC0 | BRDAC1, 10, 32),
AD5241_ID = DPOT_CONF(F_RDACS_RW, BRDAC0, 8, 33),
AD5242_ID = DPOT_CONF(F_RDACS_RW, BRDAC0 | BRDAC1, 8, 34),
AD5243_ID = DPOT_CONF(F_RDACS_RW, BRDAC0 | BRDAC1, 8, 35),
AD5245_ID = DPOT_CONF(F_RDACS_RW, BRDAC0, 8, 36),
AD5246_ID = DPOT_CONF(F_RDACS_RW, BRDAC0, 7, 37),
AD5247_ID = DPOT_CONF(F_RDACS_RW, BRDAC0, 7, 38),
AD5248_ID = DPOT_CONF(F_RDACS_RW, BRDAC0 | BRDAC1, 8, 39),
AD5280_ID = DPOT_CONF(F_RDACS_RW, BRDAC0, 8, 40),
AD5282_ID = DPOT_CONF(F_RDACS_RW, BRDAC0 | BRDAC1, 8, 41),
ADN2860_ID = DPOT_CONF(F_RDACS_RW_TOL | F_CMD_INC,
BRDAC0 | BRDAC1 | BRDAC2, 9, 42),
AD5273_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP, BRDAC0, 6, 43),
AD5171_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP, BRDAC0, 6, 44),
AD5170_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP, BRDAC0, 8, 45),
AD5172_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP, BRDAC0 | BRDAC1, 8, 46),
AD5173_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP, BRDAC0 | BRDAC1, 8, 47),
AD5270_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP | F_SPI_16BIT,
BRDAC0, 10, 48),
AD5271_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP | F_SPI_16BIT,
BRDAC0, 8, 49),
AD5272_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP, BRDAC0, 10, 50),
AD5274_ID = DPOT_CONF(F_RDACS_RW | F_CMD_OTP, BRDAC0, 8, 51),
};
#define DPOT_RDAC0 0
#define DPOT_RDAC1 1
#define DPOT_RDAC2 2
#define DPOT_RDAC3 3
#define DPOT_RDAC4 4
#define DPOT_RDAC5 5
#define DPOT_RDAC_MASK 0x1F
#define DPOT_REG_TOL 0x18
#define DPOT_TOL_RDAC0 (DPOT_REG_TOL | DPOT_RDAC0)
#define DPOT_TOL_RDAC1 (DPOT_REG_TOL | DPOT_RDAC1)
#define DPOT_TOL_RDAC2 (DPOT_REG_TOL | DPOT_RDAC2)
#define DPOT_TOL_RDAC3 (DPOT_REG_TOL | DPOT_RDAC3)
#define DPOT_TOL_RDAC4 (DPOT_REG_TOL | DPOT_RDAC4)
#define DPOT_TOL_RDAC5 (DPOT_REG_TOL | DPOT_RDAC5)
/* RDAC-to-EEPROM Interface Commands */
#define DPOT_ADDR_RDAC (0x0 << 5)
#define DPOT_ADDR_EEPROM (0x1 << 5)
#define DPOT_ADDR_OTP (0x1 << 6)
#define DPOT_ADDR_CMD (0x1 << 7)
#define DPOT_ADDR_OTP_EN (0x1 << 9)
#define DPOT_DEC_ALL_6DB (DPOT_ADDR_CMD | (0x4 << 3))
#define DPOT_INC_ALL_6DB (DPOT_ADDR_CMD | (0x9 << 3))
#define DPOT_DEC_ALL (DPOT_ADDR_CMD | (0x6 << 3))
#define DPOT_INC_ALL (DPOT_ADDR_CMD | (0xB << 3))
#define DPOT_SPI_RDAC 0xB0
#define DPOT_SPI_EEPROM 0x30
#define DPOT_SPI_READ_RDAC 0xA0
#define DPOT_SPI_READ_EEPROM 0x90
#define DPOT_SPI_DEC_ALL_6DB 0x50
#define DPOT_SPI_INC_ALL_6DB 0xD0
#define DPOT_SPI_DEC_ALL 0x70
#define DPOT_SPI_INC_ALL 0xF0
/* AD5291/2/3 use special commands */
#define DPOT_AD5291_RDAC 0x01
#define DPOT_AD5291_READ_RDAC 0x02
#define DPOT_AD5291_STORE_XTPM 0x03
#define DPOT_AD5291_CTRLREG 0x06
#define DPOT_AD5291_UNLOCK_CMD 0x03
/* AD5270/1/2/4 use special commands */
#define DPOT_AD5270_1_2_4_RDAC 0x01
#define DPOT_AD5270_1_2_4_READ_RDAC 0x02
#define DPOT_AD5270_1_2_4_STORE_XTPM 0x03
#define DPOT_AD5270_1_2_4_CTRLREG 0x07
#define DPOT_AD5270_1_2_4_UNLOCK_CMD 0x03
#define DPOT_AD5282_RDAC_AB 0x80
#define DPOT_AD5273_FUSE 0x80
#define DPOT_AD5170_2_3_FUSE 0x20
#define DPOT_AD5170_2_3_OW 0x08
#define DPOT_AD5172_3_A0 0x08
#define DPOT_AD5170_2FUSE 0x80
struct dpot_data;
struct ad_dpot_bus_ops {
int (*read_d8) (void *client);
int (*read_r8d8) (void *client, u8 reg);
int (*read_r8d16) (void *client, u8 reg);
int (*write_d8) (void *client, u8 val);
int (*write_r8d8) (void *client, u8 reg, u8 val);
int (*write_r8d16) (void *client, u8 reg, u16 val);
};
struct ad_dpot_bus_data {
void *client;
const struct ad_dpot_bus_ops *bops;
};
int ad_dpot_probe(struct device *dev, struct ad_dpot_bus_data *bdata,
unsigned long devid, const char *name);
int ad_dpot_remove(struct device *dev);
#endif
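
The DPOT_CONF() packing above is easiest to see with one concrete entry. Decoding AD5252_ID (features F_RDACS_RW_TOL | F_CMD_INC, wipers BRDAC1 | BRDAC3, max_pos 8, uid 3), with the macros copied here so the sketch compiles on its own:

#include <stdio.h>

#define DPOT_CONF(features, wipers, max_pos, uid) \
	(((features) << 18) | (((wipers) & 0xFF) << 10) | \
	((max_pos & 0xF) << 6) | (uid & 0x3F))
#define DPOT_UID(conf)		(conf & 0x3F)
#define DPOT_MAX_POS(conf)	((conf >> 6) & 0xF)
#define DPOT_WIPERS(conf)	((conf >> 10) & 0xFF)
#define DPOT_FEAT(conf)		(conf >> 18)

int main(void)
{
	/* AD5252: F_RDACS_RW_TOL | F_CMD_INC == 0x1b, BRDAC1 | BRDAC3 == 0x0a */
	unsigned long conf = DPOT_CONF(0x1bUL, 0x0a, 8, 3);

	printf("uid=%lu wipers=0x%02lx positions=%u feat=0x%lx\n",
	       DPOT_UID(conf), DPOT_WIPERS(conf),
	       1u << DPOT_MAX_POS(conf), DPOT_FEAT(conf));
	/* prints: uid=3 wipers=0x0a positions=256 feat=0x1b,
	 * matching the AD5252 row in the ad525x_dpot.c device table */
	return 0;
}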

drivers/misc/altera-stapl/Kconfig (new file, 8 lines)

@@ -0,0 +1,8 @@
comment "Altera FPGA firmware download module"
config ALTERA_STAPL
tristate "Altera FPGA firmware download module"
depends on I2C
default n
help
An Altera FPGA module. Say Y when you want to support this tool.

drivers/misc/altera-stapl/Makefile (new file, 3 lines)

@@ -0,0 +1,3 @@
altera-stapl-objs = altera-lpt.o altera-jtag.o altera-comp.o altera.o
obj-$(CONFIG_ALTERA_STAPL) += altera-stapl.o

drivers/misc/altera-stapl/altera-comp.c (new file, 142 lines)

@@ -0,0 +1,142 @@
/*
* altera-comp.c
*
* altera FPGA driver
*
* Copyright (C) Altera Corporation 1998-2001
* Copyright (C) 2010 NetUP Inc.
* Copyright (C) 2010 Igor M. Liplianin <liplianin@netup.ru>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
*
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/kernel.h>
#include "altera-exprt.h"
#define SHORT_BITS 16
#define CHAR_BITS 8
#define DATA_BLOB_LENGTH 3
#define MATCH_DATA_LENGTH 8192
#define ALTERA_REQUEST_SIZE 1024
#define ALTERA_BUFFER_SIZE (MATCH_DATA_LENGTH + ALTERA_REQUEST_SIZE)
static u32 altera_bits_req(u32 n)
{
u32 result = SHORT_BITS;
if (n == 0)
result = 1;
else {
/* Look for the highest non-zero bit position */
while ((n & (1 << (SHORT_BITS - 1))) == 0) {
n <<= 1;
--result;
}
}
return result;
}
static u32 altera_read_packed(u8 *buffer, u32 bits, u32 *bits_avail,
u32 *in_index)
{
u32 result = 0;
u32 shift = 0;
u32 databyte = 0;
while (bits > 0) {
databyte = buffer[*in_index];
result |= (((databyte >> (CHAR_BITS - *bits_avail))
& (0xff >> (CHAR_BITS - *bits_avail))) << shift);
if (bits <= *bits_avail) {
result &= (0xffff >> (SHORT_BITS - (bits + shift)));
*bits_avail -= bits;
bits = 0;
} else {
++(*in_index);
shift += *bits_avail;
bits -= *bits_avail;
*bits_avail = CHAR_BITS;
}
}
return result;
}
u32 altera_shrink(u8 *in, u32 in_length, u8 *out, u32 out_length, s32 version)
{
u32 i, j, data_length = 0L;
u32 offset, length;
u32 match_data_length = MATCH_DATA_LENGTH;
u32 bits_avail = CHAR_BITS;
u32 in_index = 0L;
if (version > 0)
--match_data_length;
for (i = 0; i < out_length; ++i)
out[i] = 0;
/* Read number of bytes in data. */
for (i = 0; i < sizeof(in_length); ++i) {
data_length = data_length | (
altera_read_packed(in,
CHAR_BITS,
&bits_avail,
&in_index) << (i * CHAR_BITS));
}
if (data_length > out_length) {
data_length = 0L;
return data_length;
}
i = 0;
while (i < data_length) {
/* A 0 bit indicates literal data. */
if (altera_read_packed(in, 1, &bits_avail,
&in_index) == 0) {
for (j = 0; j < DATA_BLOB_LENGTH; ++j) {
if (i < data_length) {
out[i] = (u8)altera_read_packed(in,
CHAR_BITS,
&bits_avail,
&in_index);
i++;
}
}
} else {
/* A 1 bit indicates offset/length to follow. */
offset = altera_read_packed(in, altera_bits_req((s16)
(i > match_data_length ?
match_data_length : i)),
&bits_avail,
&in_index);
length = altera_read_packed(in, CHAR_BITS,
&bits_avail,
&in_index);
for (j = 0; j < length; ++j) {
if (i < data_length) {
out[i] = out[i - offset];
i++;
}
}
}
}
return data_length;
}
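A minimal caller-side sketch of altera_shrink(); the helper name and buffer handling are assumptions for illustration, not part of the altera-stapl code. altera_shrink() returns the decompressed length, or 0 when the output buffer is too small:
#include <linux/slab.h>
#include "altera-exprt.h"

static u8 *example_decompress(u8 *packed, u32 packed_len,
			      u32 unpacked_len, s32 version)
{
	u8 *out = kzalloc(unpacked_len, GFP_KERNEL);

	if (!out)
		return NULL;
	if (altera_shrink(packed, packed_len, out,
			  unpacked_len, version) == 0) {
		/* output buffer too small (or empty input) */
		kfree(out);
		return NULL;
	}
	return out;
}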

View file

@ -0,0 +1,33 @@
/*
* altera-exprt.h
*
* altera FPGA driver
*
* Copyright (C) Altera Corporation 1998-2001
* Copyright (C) 2010 NetUP Inc.
* Copyright (C) 2010 Igor M. Liplianin <liplianin@netup.ru>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef ALTERA_EXPRT_H
#define ALTERA_EXPRT_H
u32 altera_shrink(u8 *in, u32 in_length, u8 *out, u32 out_length, s32 version);
int netup_jtag_io_lpt(void *device, int tms, int tdi, int read_tdo);
#endif /* ALTERA_EXPRT_H */

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,113 @@
/*
* altera-jtag.h
*
* altera FPGA driver
*
* Copyright (C) Altera Corporation 1998-2001
* Copyright (C) 2010 NetUP Inc.
* Copyright (C) 2010 Igor M. Liplianin <liplianin@netup.ru>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef ALTERA_JTAG_H
#define ALTERA_JTAG_H
/* Function Prototypes */
enum altera_jtag_state {
ILLEGAL_JTAG_STATE = -1,
RESET = 0,
IDLE = 1,
DRSELECT = 2,
DRCAPTURE = 3,
DRSHIFT = 4,
DREXIT1 = 5,
DRPAUSE = 6,
DREXIT2 = 7,
DRUPDATE = 8,
IRSELECT = 9,
IRCAPTURE = 10,
IRSHIFT = 11,
IREXIT1 = 12,
IRPAUSE = 13,
IREXIT2 = 14,
IRUPDATE = 15
};
struct altera_jtag {
/* Global variable to store the current JTAG state */
enum altera_jtag_state jtag_state;
/* Store current stop-state for DR and IR scan commands */
enum altera_jtag_state drstop_state;
enum altera_jtag_state irstop_state;
/* Store current padding values */
u32 dr_pre;
u32 dr_post;
u32 ir_pre;
u32 ir_post;
u32 dr_length;
u32 ir_length;
u8 *dr_pre_data;
u8 *dr_post_data;
u8 *ir_pre_data;
u8 *ir_post_data;
u8 *dr_buffer;
u8 *ir_buffer;
};
#define ALTERA_STACK_SIZE 128
#define ALTERA_MESSAGE_LENGTH 1024
struct altera_state {
struct altera_config *config;
struct altera_jtag js;
char msg_buff[ALTERA_MESSAGE_LENGTH + 1];
long stack[ALTERA_STACK_SIZE];
};
int altera_jinit(struct altera_state *astate);
int altera_set_drstop(struct altera_jtag *js, enum altera_jtag_state state);
int altera_set_irstop(struct altera_jtag *js, enum altera_jtag_state state);
int altera_set_dr_pre(struct altera_jtag *js, u32 count, u32 start_index,
u8 *preamble_data);
int altera_set_ir_pre(struct altera_jtag *js, u32 count, u32 start_index,
u8 *preamble_data);
int altera_set_dr_post(struct altera_jtag *js, u32 count, u32 start_index,
u8 *postamble_data);
int altera_set_ir_post(struct altera_jtag *js, u32 count, u32 start_index,
u8 *postamble_data);
int altera_goto_jstate(struct altera_state *astate,
enum altera_jtag_state state);
int altera_wait_cycles(struct altera_state *astate, s32 cycles,
enum altera_jtag_state wait_state);
int altera_wait_msecs(struct altera_state *astate, s32 microseconds,
enum altera_jtag_state wait_state);
int altera_irscan(struct altera_state *astate, u32 count,
u8 *tdi_data, u32 start_index);
int altera_swap_ir(struct altera_state *astate,
u32 count, u8 *in_data,
u32 in_index, u8 *out_data,
u32 out_index);
int altera_drscan(struct altera_state *astate, u32 count,
u8 *tdi_data, u32 start_index);
int altera_swap_dr(struct altera_state *astate, u32 count,
u8 *in_data, u32 in_index,
u8 *out_data, u32 out_index);
void altera_free_buffers(struct altera_state *astate);
#endif /* ALTERA_JTAG_H */
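A hedged consumer-side sketch of the scan helpers declared above, assuming an already-initialised struct altera_state and the usual 0-on-success return convention; the instruction value and register widths are invented for illustration:
static int example_idcode_scan(struct altera_state *astate)
{
	u8 instr = 0x06;	/* hypothetical instruction code */
	u8 dr_in[4] = { 0 };
	u8 dr_out[4] = { 0 };
	int ret;

	ret = altera_irscan(astate, 8, &instr, 0);
	if (ret)
		return ret;
	/* shift 32 zero bits in, capture the 32-bit response */
	return altera_swap_dr(astate, 32, dr_in, 0, dr_out, 0);
}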

View file

@ -0,0 +1,70 @@
/*
* altera-lpt.c
*
* altera FPGA driver
*
* Copyright (C) Altera Corporation 1998-2001
* Copyright (C) 2010 NetUP Inc.
* Copyright (C) 2010 Abylay Ospan <aospan@netup.ru>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/io.h>
#include <linux/kernel.h>
#include "altera-exprt.h"
static int lpt_hardware_initialized;
static void byteblaster_write(int port, int data)
{
outb((u8)data, (u16)(port + 0x378));
};
static int byteblaster_read(int port)
{
int data = 0;
data = inb((u16)(port + 0x378));
return data & 0xff;
};
int netup_jtag_io_lpt(void *device, int tms, int tdi, int read_tdo)
{
int data = 0;
int tdo = 0;
int initial_lpt_ctrl = 0;
if (!lpt_hardware_initialized) {
initial_lpt_ctrl = byteblaster_read(2);
byteblaster_write(2, (initial_lpt_ctrl | 0x02) & 0xdf);
lpt_hardware_initialized = 1;
}
data = ((tdi ? 0x40 : 0) | (tms ? 0x02 : 0));
byteblaster_write(0, data);
if (read_tdo) {
tdo = byteblaster_read(1);
tdo = ((tdo & 0x80) ? 0 : 1);
}
byteblaster_write(0, data | 0x01);
byteblaster_write(0, data);
return tdo;
}
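For illustration: five TCK pulses with TMS held high force the JTAG TAP controller into Test-Logic-Reset. A hypothetical helper built on netup_jtag_io_lpt() (the device argument is unused by this implementation, so NULL is passed):
static void __maybe_unused example_tap_reset(void)
{
	int i;

	/* TMS = 1, TDI = don't care, no TDO read-back */
	for (i = 0; i < 5; i++)
		netup_jtag_io_lpt(NULL, 1, 0, 0);
}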

File diff suppressed because it is too large Load diff

322
drivers/misc/apds9802als.c Normal file
View file

@ -0,0 +1,322 @@
/*
* apds9802als.c - apds9802 ALS Driver
*
* Copyright (C) 2009 Intel Corp
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
*/
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/i2c.h>
#include <linux/err.h>
#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/sysfs.h>
#include <linux/pm_runtime.h>
#define ALS_MIN_RANGE_VAL 1
#define ALS_MAX_RANGE_VAL 2
#define POWER_STA_ENABLE 1
#define POWER_STA_DISABLE 0
#define DRIVER_NAME "apds9802als"
struct als_data {
struct mutex mutex;
};
static ssize_t als_sensing_range_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct i2c_client *client = to_i2c_client(dev);
int val;
val = i2c_smbus_read_byte_data(client, 0x81);
if (val < 0)
return val;
if (val & 1)
return sprintf(buf, "4095\n");
else
return sprintf(buf, "65535\n");
}
static int als_wait_for_data_ready(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
int ret;
int retry = 10;
do {
msleep(30);
ret = i2c_smbus_read_byte_data(client, 0x86);
} while (!(ret & 0x80) && retry--);
if (retry < 0) {
dev_warn(dev, "timeout waiting for data ready\n");
return -ETIMEDOUT;
}
return 0;
}
static ssize_t als_lux0_input_data_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct i2c_client *client = to_i2c_client(dev);
struct als_data *data = i2c_get_clientdata(client);
int ret_val;
int temp;
/* Protect against parallel reads */
pm_runtime_get_sync(dev);
mutex_lock(&data->mutex);
/* clear EOC interrupt status */
i2c_smbus_write_byte(client, 0x40);
/* start measurement */
temp = i2c_smbus_read_byte_data(client, 0x81);
i2c_smbus_write_byte_data(client, 0x81, temp | 0x08);
ret_val = als_wait_for_data_ready(dev);
if (ret_val < 0)
goto failed;
temp = i2c_smbus_read_byte_data(client, 0x8C); /* LSB data */
if (temp < 0) {
ret_val = temp;
goto failed;
}
ret_val = i2c_smbus_read_byte_data(client, 0x8D); /* MSB data */
if (ret_val < 0)
goto failed;
mutex_unlock(&data->mutex);
pm_runtime_put_sync(dev);
temp = (ret_val << 8) | temp;
return sprintf(buf, "%d\n", temp);
failed:
mutex_unlock(&data->mutex);
pm_runtime_put_sync(dev);
return ret_val;
}
static ssize_t als_sensing_range_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct i2c_client *client = to_i2c_client(dev);
struct als_data *data = i2c_get_clientdata(client);
int ret_val;
unsigned long val;
ret_val = kstrtoul(buf, 10, &val);
if (ret_val)
return ret_val;
if (val < 4096)
val = 1;
else if (val < 65536)
val = 2;
else
return -ERANGE;
pm_runtime_get_sync(dev);
/* Make sure nobody else reads/modifies/writes 0x81 while we
are active */
mutex_lock(&data->mutex);
ret_val = i2c_smbus_read_byte_data(client, 0x81);
if (ret_val < 0)
goto fail;
/* Reset the bits before setting them */
ret_val = ret_val & 0xFA;
if (val == 1) /* Setting detection range up to 4k LUX */
ret_val = (ret_val | 0x01);
else /* Setting detection range up to 64k LUX */
ret_val = (ret_val | 0x00);
ret_val = i2c_smbus_write_byte_data(client, 0x81, ret_val);
if (ret_val >= 0) {
/* All OK */
mutex_unlock(&data->mutex);
pm_runtime_put_sync(dev);
return count;
}
fail:
mutex_unlock(&data->mutex);
pm_runtime_put_sync(dev);
return ret_val;
}
static int als_set_power_state(struct i2c_client *client, bool on_off)
{
int ret_val;
struct als_data *data = i2c_get_clientdata(client);
mutex_lock(&data->mutex);
ret_val = i2c_smbus_read_byte_data(client, 0x80);
if (ret_val < 0)
goto fail;
if (on_off)
ret_val = ret_val | 0x01;
else
ret_val = ret_val & 0xFE;
ret_val = i2c_smbus_write_byte_data(client, 0x80, ret_val);
fail:
mutex_unlock(&data->mutex);
return ret_val;
}
static DEVICE_ATTR(lux0_sensor_range, S_IRUGO | S_IWUSR,
als_sensing_range_show, als_sensing_range_store);
static DEVICE_ATTR(lux0_input, S_IRUGO, als_lux0_input_data_show, NULL);
static struct attribute *mid_att_als[] = {
&dev_attr_lux0_sensor_range.attr,
&dev_attr_lux0_input.attr,
NULL
};
static struct attribute_group m_als_gr = {
.name = "apds9802als",
.attrs = mid_att_als
};
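The attributes above land in a sysfs group named "apds9802als" under the I2C client's device directory. A small userspace sketch of reading the ambient light value; the "1-0029" bus/address component of the path is only an example and depends on how the sensor is wired:
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/bus/i2c/devices/1-0029/apds9802als/lux0_input";
	FILE *f = fopen(path, "r");
	int counts;

	if (!f)
		return 1;
	if (fscanf(f, "%d", &counts) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("ambient light: %d\n", counts);
	return 0;
}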
static int als_set_default_config(struct i2c_client *client)
{
int ret_val;
/* Write the command and then switch on */
ret_val = i2c_smbus_write_byte_data(client, 0x80, 0x01);
if (ret_val < 0) {
dev_err(&client->dev, "failed default switch on write\n");
return ret_val;
}
/* detection range: 1~64K Lux, manual measurement */
ret_val = i2c_smbus_write_byte_data(client, 0x81, 0x08);
if (ret_val < 0)
dev_err(&client->dev, "failed default LUX on write\n");
/* We always get 0 for the 1st measurement after system power on,
* so make sure it is finished before user asks for data.
*/
als_wait_for_data_ready(&client->dev);
return ret_val;
}
static int apds9802als_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
int res;
struct als_data *data;
data = kzalloc(sizeof(struct als_data), GFP_KERNEL);
if (data == NULL) {
dev_err(&client->dev, "Memory allocation failed\n");
return -ENOMEM;
}
i2c_set_clientdata(client, data);
res = sysfs_create_group(&client->dev.kobj, &m_als_gr);
if (res) {
dev_err(&client->dev, "device create file failed\n");
goto als_error1;
}
dev_info(&client->dev, "ALS chip found\n");
als_set_default_config(client);
mutex_init(&data->mutex);
pm_runtime_set_active(&client->dev);
pm_runtime_enable(&client->dev);
return res;
als_error1:
kfree(data);
return res;
}
static int apds9802als_remove(struct i2c_client *client)
{
struct als_data *data = i2c_get_clientdata(client);
pm_runtime_get_sync(&client->dev);
als_set_power_state(client, false);
sysfs_remove_group(&client->dev.kobj, &m_als_gr);
pm_runtime_disable(&client->dev);
pm_runtime_set_suspended(&client->dev);
pm_runtime_put_noidle(&client->dev);
kfree(data);
return 0;
}
#ifdef CONFIG_PM
static int apds9802als_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
als_set_power_state(client, false);
return 0;
}
static int apds9802als_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
als_set_power_state(client, true);
return 0;
}
static UNIVERSAL_DEV_PM_OPS(apds9802als_pm_ops, apds9802als_suspend,
apds9802als_resume, NULL);
#define APDS9802ALS_PM_OPS (&apds9802als_pm_ops)
#else /* CONFIG_PM */
#define APDS9802ALS_PM_OPS NULL
#endif /* CONFIG_PM */
static struct i2c_device_id apds9802als_id[] = {
{ DRIVER_NAME, 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, apds9802als_id);
static struct i2c_driver apds9802als_driver = {
.driver = {
.name = DRIVER_NAME,
.pm = APDS9802ALS_PM_OPS,
},
.probe = apds9802als_probe,
.remove = apds9802als_remove,
.id_table = apds9802als_id,
};
module_i2c_driver(apds9802als_driver);
MODULE_AUTHOR("Anantha Narayanan <Anantha.Narayanan@intel.com");
MODULE_DESCRIPTION("Avago apds9802als ALS Driver");
MODULE_LICENSE("GPL v2");

1290
drivers/misc/apds990x.c Normal file

File diff suppressed because it is too large Load diff

389
drivers/misc/arm-charlcd.c Normal file
View file

@ -0,0 +1,389 @@
/*
* Driver for the on-board character LCD found on some ARM reference boards
* This is basically a Hitachi HD44780 LCD with a custom IP block to drive it
* http://en.wikipedia.org/wiki/HD44780_Character_LCD
* Currently it will just display the text "ARM Linux" and the Linux version
*
* License terms: GNU General Public License (GPL) version 2
* Author: Linus Walleij <triad@df.lth.se>
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#include <generated/utsrelease.h>
#define DRIVERNAME "arm-charlcd"
#define CHARLCD_TIMEOUT (msecs_to_jiffies(1000))
/* Offsets to registers */
#define CHAR_COM 0x00U
#define CHAR_DAT 0x04U
#define CHAR_RD 0x08U
#define CHAR_RAW 0x0CU
#define CHAR_MASK 0x10U
#define CHAR_STAT 0x14U
#define CHAR_RAW_CLEAR 0x00000000U
#define CHAR_RAW_VALID 0x00000100U
/* Hitachi HD44780 display commands */
#define HD_CLEAR 0x01U
#define HD_HOME 0x02U
#define HD_ENTRYMODE 0x04U
#define HD_ENTRYMODE_INCREMENT 0x02U
#define HD_ENTRYMODE_SHIFT 0x01U
#define HD_DISPCTRL 0x08U
#define HD_DISPCTRL_ON 0x04U
#define HD_DISPCTRL_CURSOR_ON 0x02U
#define HD_DISPCTRL_CURSOR_BLINK 0x01U
#define HD_CRSR_SHIFT 0x10U
#define HD_CRSR_SHIFT_DISPLAY 0x08U
#define HD_CRSR_SHIFT_DISPLAY_RIGHT 0x04U
#define HD_FUNCSET 0x20U
#define HD_FUNCSET_8BIT 0x10U
#define HD_FUNCSET_2_LINES 0x08U
#define HD_FUNCSET_FONT_5X10 0x04U
#define HD_SET_CGRAM 0x40U
#define HD_SET_DDRAM 0x80U
#define HD_BUSY_FLAG 0x80U
/**
* @dev: a pointer back to containing device
* @phybase: the offset to the controller in physical memory
* @physize: the size of the physical page
* @virtbase: the offset to the controller in virtual memory
* @irq: reserved interrupt number
* @complete: completion structure for the last LCD command
*/
struct charlcd {
struct device *dev;
u32 phybase;
u32 physize;
void __iomem *virtbase;
int irq;
struct completion complete;
struct delayed_work init_work;
};
static irqreturn_t charlcd_interrupt(int irq, void *data)
{
struct charlcd *lcd = data;
u8 status;
status = readl(lcd->virtbase + CHAR_STAT) & 0x01;
/* Clear IRQ */
writel(CHAR_RAW_CLEAR, lcd->virtbase + CHAR_RAW);
if (status)
complete(&lcd->complete);
else
dev_info(lcd->dev, "Spurious IRQ (%02x)\n", status);
return IRQ_HANDLED;
}
static void charlcd_wait_complete_irq(struct charlcd *lcd)
{
int ret;
ret = wait_for_completion_interruptible_timeout(&lcd->complete,
CHARLCD_TIMEOUT);
/* Disable IRQ after completion */
writel(0x00, lcd->virtbase + CHAR_MASK);
if (ret < 0) {
dev_err(lcd->dev,
"wait_for_completion_interruptible_timeout() "
"returned %d waiting for ready\n", ret);
return;
}
if (ret == 0) {
dev_err(lcd->dev, "charlcd controller timed out "
"waiting for ready\n");
return;
}
}
static u8 charlcd_4bit_read_char(struct charlcd *lcd)
{
u8 data;
u32 val;
int i;
/* If we can, use an IRQ to wait for the data, else poll */
if (lcd->irq >= 0)
charlcd_wait_complete_irq(lcd);
else {
i = 0;
val = 0;
while (!(val & CHAR_RAW_VALID) && i < 10) {
udelay(100);
val = readl(lcd->virtbase + CHAR_RAW);
i++;
}
writel(CHAR_RAW_CLEAR, lcd->virtbase + CHAR_RAW);
}
msleep(1);
/* Read the 4 high bits of the data */
data = readl(lcd->virtbase + CHAR_RD) & 0xf0;
/*
* The second read for the low bits does not trigger an IRQ
* so in this case we have to poll for the 4 lower bits
*/
i = 0;
val = 0;
while (!(val & CHAR_RAW_VALID) && i < 10) {
udelay(100);
val = readl(lcd->virtbase + CHAR_RAW);
i++;
}
writel(CHAR_RAW_CLEAR, lcd->virtbase + CHAR_RAW);
msleep(1);
/* Read the 4 low bits of the data */
data |= (readl(lcd->virtbase + CHAR_RD) >> 4) & 0x0f;
return data;
}
static bool charlcd_4bit_read_bf(struct charlcd *lcd)
{
if (lcd->irq >= 0) {
/*
* If we'll use IRQs to wait for the busyflag, clear any
* pending flag and enable IRQ
*/
writel(CHAR_RAW_CLEAR, lcd->virtbase + CHAR_RAW);
init_completion(&lcd->complete);
writel(0x01, lcd->virtbase + CHAR_MASK);
}
readl(lcd->virtbase + CHAR_COM);
return charlcd_4bit_read_char(lcd) & HD_BUSY_FLAG ? true : false;
}
static void charlcd_4bit_wait_busy(struct charlcd *lcd)
{
int retries = 50;
udelay(100);
while (charlcd_4bit_read_bf(lcd) && retries)
retries--;
if (!retries)
dev_err(lcd->dev, "timeout waiting for busyflag\n");
}
static void charlcd_4bit_command(struct charlcd *lcd, u8 cmd)
{
u32 cmdlo = (cmd << 4) & 0xf0;
u32 cmdhi = (cmd & 0xf0);
writel(cmdhi, lcd->virtbase + CHAR_COM);
udelay(10);
writel(cmdlo, lcd->virtbase + CHAR_COM);
charlcd_4bit_wait_busy(lcd);
}
static void charlcd_4bit_char(struct charlcd *lcd, u8 ch)
{
u32 chlo = (ch << 4) & 0xf0;
u32 chhi = (ch & 0xf0);
writel(chhi, lcd->virtbase + CHAR_DAT);
udelay(10);
writel(chlo, lcd->virtbase + CHAR_DAT);
charlcd_4bit_wait_busy(lcd);
}
static void charlcd_4bit_print(struct charlcd *lcd, int line, const char *str)
{
u8 offset;
int i;
/*
* We support lines 0 and 1:
* line 0 occupies DDRAM 0x00..0x27,
* line 1 occupies DDRAM 0x28..0x4f
*/
if (line == 0)
offset = 0;
else if (line == 1)
offset = 0x28;
else
return;
/* Set offset */
charlcd_4bit_command(lcd, HD_SET_DDRAM | offset);
/* Send string */
for (i = 0; i < strlen(str) && i < 0x28; i++)
charlcd_4bit_char(lcd, str[i]);
}
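As a sketch of how the two display lines are addressed, another part of this driver could overwrite both lines like this (the function name and message text are arbitrary):
static void __maybe_unused example_update_display(struct charlcd *lcd)
{
	charlcd_4bit_print(lcd, 0, "ARM Linux");	/* line 0, DDRAM 0x00 */
	charlcd_4bit_print(lcd, 1, "status: ok");	/* line 1, DDRAM 0x28 */
}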
static void charlcd_4bit_init(struct charlcd *lcd)
{
/* These commands cannot be checked with the busy flag */
writel(HD_FUNCSET | HD_FUNCSET_8BIT, lcd->virtbase + CHAR_COM);
msleep(5);
writel(HD_FUNCSET | HD_FUNCSET_8BIT, lcd->virtbase + CHAR_COM);
udelay(100);
writel(HD_FUNCSET | HD_FUNCSET_8BIT, lcd->virtbase + CHAR_COM);
udelay(100);
/* Go to 4bit mode */
writel(HD_FUNCSET, lcd->virtbase + CHAR_COM);
udelay(100);
/*
* 4bit mode, 2 lines, 5x8 font, after this the number of lines
* and the font cannot be changed until the next initialization sequence
*/
charlcd_4bit_command(lcd, HD_FUNCSET | HD_FUNCSET_2_LINES);
charlcd_4bit_command(lcd, HD_DISPCTRL | HD_DISPCTRL_ON);
charlcd_4bit_command(lcd, HD_ENTRYMODE | HD_ENTRYMODE_INCREMENT);
charlcd_4bit_command(lcd, HD_CLEAR);
charlcd_4bit_command(lcd, HD_HOME);
/* Put something useful in the display */
charlcd_4bit_print(lcd, 0, "ARM Linux");
charlcd_4bit_print(lcd, 1, UTS_RELEASE);
}
static void charlcd_init_work(struct work_struct *work)
{
struct charlcd *lcd =
container_of(work, struct charlcd, init_work.work);
charlcd_4bit_init(lcd);
}
static int __init charlcd_probe(struct platform_device *pdev)
{
int ret;
struct charlcd *lcd;
struct resource *res;
lcd = kzalloc(sizeof(struct charlcd), GFP_KERNEL);
if (!lcd)
return -ENOMEM;
lcd->dev = &pdev->dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
ret = -ENOENT;
goto out_no_resource;
}
lcd->phybase = res->start;
lcd->physize = resource_size(res);
if (request_mem_region(lcd->phybase, lcd->physize,
DRIVERNAME) == NULL) {
ret = -EBUSY;
goto out_no_memregion;
}
lcd->virtbase = ioremap(lcd->phybase, lcd->physize);
if (!lcd->virtbase) {
ret = -ENOMEM;
goto out_no_memregion;
}
lcd->irq = platform_get_irq(pdev, 0);
/* If no IRQ is supplied, we'll survive without it */
if (lcd->irq >= 0) {
if (request_irq(lcd->irq, charlcd_interrupt, 0,
DRIVERNAME, lcd)) {
ret = -EIO;
goto out_no_irq;
}
}
platform_set_drvdata(pdev, lcd);
/*
* Initialize the display in a delayed work, because
* it is VERY slow and would slow down the boot of the system.
*/
INIT_DELAYED_WORK(&lcd->init_work, charlcd_init_work);
schedule_delayed_work(&lcd->init_work, 0);
dev_info(&pdev->dev, "initialized ARM character LCD at %08x\n",
lcd->phybase);
return 0;
out_no_irq:
iounmap(lcd->virtbase);
out_no_memregion:
release_mem_region(lcd->phybase, SZ_4K);
out_no_resource:
kfree(lcd);
return ret;
}
static int __exit charlcd_remove(struct platform_device *pdev)
{
struct charlcd *lcd = platform_get_drvdata(pdev);
if (lcd) {
free_irq(lcd->irq, lcd);
iounmap(lcd->virtbase);
release_mem_region(lcd->phybase, lcd->physize);
kfree(lcd);
}
return 0;
}
static int charlcd_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct charlcd *lcd = platform_get_drvdata(pdev);
/* Power the display off */
charlcd_4bit_command(lcd, HD_DISPCTRL);
return 0;
}
static int charlcd_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct charlcd *lcd = platform_get_drvdata(pdev);
/* Turn the display back on */
charlcd_4bit_command(lcd, HD_DISPCTRL | HD_DISPCTRL_ON);
return 0;
}
static const struct dev_pm_ops charlcd_pm_ops = {
.suspend = charlcd_suspend,
.resume = charlcd_resume,
};
static const struct of_device_id charlcd_match[] = {
{ .compatible = "arm,versatile-lcd", },
{}
};
static struct platform_driver charlcd_driver = {
.driver = {
.name = DRIVERNAME,
.owner = THIS_MODULE,
.pm = &charlcd_pm_ops,
.of_match_table = of_match_ptr(charlcd_match),
},
.remove = __exit_p(charlcd_remove),
};
module_platform_driver_probe(charlcd_driver, charlcd_probe);
MODULE_AUTHOR("Linus Walleij <triad@df.lth.se>");
MODULE_DESCRIPTION("ARM Character LCD Driver");
MODULE_LICENSE("GPL v2");

235
drivers/misc/atmel-ssc.c Normal file
View file

@ -0,0 +1,235 @@
/*
* Atmel SSC driver
*
* Copyright (C) 2007 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/platform_device.h>
#include <linux/list.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/atmel-ssc.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/of.h>
/* Serialize access to ssc_list and user count */
static DEFINE_SPINLOCK(user_lock);
static LIST_HEAD(ssc_list);
struct ssc_device *ssc_request(unsigned int ssc_num)
{
int ssc_valid = 0;
struct ssc_device *ssc;
spin_lock(&user_lock);
list_for_each_entry(ssc, &ssc_list, list) {
if (ssc->pdev->dev.of_node) {
if (of_alias_get_id(ssc->pdev->dev.of_node, "ssc")
== ssc_num) {
ssc_valid = 1;
break;
}
} else if (ssc->pdev->id == ssc_num) {
ssc_valid = 1;
break;
}
}
if (!ssc_valid) {
spin_unlock(&user_lock);
pr_err("ssc: ssc%d platform device is missing\n", ssc_num);
return ERR_PTR(-ENODEV);
}
if (ssc->user) {
spin_unlock(&user_lock);
dev_dbg(&ssc->pdev->dev, "module busy\n");
return ERR_PTR(-EBUSY);
}
ssc->user++;
spin_unlock(&user_lock);
clk_prepare_enable(ssc->clk);
return ssc;
}
EXPORT_SYMBOL(ssc_request);
void ssc_free(struct ssc_device *ssc)
{
bool disable_clk = true;
spin_lock(&user_lock);
if (ssc->user)
ssc->user--;
else {
disable_clk = false;
dev_dbg(&ssc->pdev->dev, "device already free\n");
}
spin_unlock(&user_lock);
if (disable_clk)
clk_disable_unprepare(ssc->clk);
}
EXPORT_SYMBOL(ssc_free);
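A minimal sketch of the request/free pairing a client driver (typically an audio driver) is expected to follow; the function name and SSC instance number 0 are assumptions:
#include <linux/atmel-ssc.h>
#include <linux/err.h>

static int example_ssc_client_init(void)
{
	struct ssc_device *ssc = ssc_request(0);

	if (IS_ERR(ssc))
		return PTR_ERR(ssc);	/* -ENODEV or -EBUSY */
	/* ... program the SSC through ssc->regs here ... */
	ssc_free(ssc);			/* also disables the clock again */
	return 0;
}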
static struct atmel_ssc_platform_data at91rm9200_config = {
.use_dma = 0,
.has_fslen_ext = 0,
};
static struct atmel_ssc_platform_data at91sam9rl_config = {
.use_dma = 0,
.has_fslen_ext = 1,
};
static struct atmel_ssc_platform_data at91sam9g45_config = {
.use_dma = 1,
.has_fslen_ext = 1,
};
static const struct platform_device_id atmel_ssc_devtypes[] = {
{
.name = "at91rm9200_ssc",
.driver_data = (unsigned long) &at91rm9200_config,
}, {
.name = "at91sam9rl_ssc",
.driver_data = (unsigned long) &at91sam9rl_config,
}, {
.name = "at91sam9g45_ssc",
.driver_data = (unsigned long) &at91sam9g45_config,
}, {
/* sentinel */
}
};
#ifdef CONFIG_OF
static const struct of_device_id atmel_ssc_dt_ids[] = {
{
.compatible = "atmel,at91rm9200-ssc",
.data = &at91rm9200_config,
}, {
.compatible = "atmel,at91sam9rl-ssc",
.data = &at91sam9rl_config,
}, {
.compatible = "atmel,at91sam9g45-ssc",
.data = &at91sam9g45_config,
}, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(of, atmel_ssc_dt_ids);
#endif
static inline const struct atmel_ssc_platform_data * __init
atmel_ssc_get_driver_data(struct platform_device *pdev)
{
if (pdev->dev.of_node) {
const struct of_device_id *match;
match = of_match_node(atmel_ssc_dt_ids, pdev->dev.of_node);
if (match == NULL)
return NULL;
return match->data;
}
return (struct atmel_ssc_platform_data *)
platform_get_device_id(pdev)->driver_data;
}
static int ssc_probe(struct platform_device *pdev)
{
struct resource *regs;
struct ssc_device *ssc;
const struct atmel_ssc_platform_data *plat_dat;
ssc = devm_kzalloc(&pdev->dev, sizeof(struct ssc_device), GFP_KERNEL);
if (!ssc) {
dev_dbg(&pdev->dev, "out of memory\n");
return -ENOMEM;
}
ssc->pdev = pdev;
plat_dat = atmel_ssc_get_driver_data(pdev);
if (!plat_dat)
return -ENODEV;
ssc->pdata = (struct atmel_ssc_platform_data *)plat_dat;
if (pdev->dev.of_node) {
struct device_node *np = pdev->dev.of_node;
ssc->clk_from_rk_pin =
of_property_read_bool(np, "atmel,clk-from-rk-pin");
}
regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ssc->regs = devm_ioremap_resource(&pdev->dev, regs);
if (IS_ERR(ssc->regs))
return PTR_ERR(ssc->regs);
ssc->phybase = regs->start;
ssc->clk = devm_clk_get(&pdev->dev, "pclk");
if (IS_ERR(ssc->clk)) {
dev_dbg(&pdev->dev, "no pclk clock defined\n");
return -ENXIO;
}
/* disable all interrupts */
clk_prepare_enable(ssc->clk);
ssc_writel(ssc->regs, IDR, -1);
ssc_readl(ssc->regs, SR);
clk_disable_unprepare(ssc->clk);
ssc->irq = platform_get_irq(pdev, 0);
if (!ssc->irq) {
dev_dbg(&pdev->dev, "could not get irq\n");
return -ENXIO;
}
spin_lock(&user_lock);
list_add_tail(&ssc->list, &ssc_list);
spin_unlock(&user_lock);
platform_set_drvdata(pdev, ssc);
dev_info(&pdev->dev, "Atmel SSC device at 0x%p (irq %d)\n",
ssc->regs, ssc->irq);
return 0;
}
static int ssc_remove(struct platform_device *pdev)
{
struct ssc_device *ssc = platform_get_drvdata(pdev);
spin_lock(&user_lock);
list_del(&ssc->list);
spin_unlock(&user_lock);
return 0;
}
static struct platform_driver ssc_driver = {
.driver = {
.name = "ssc",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(atmel_ssc_dt_ids),
},
.id_table = atmel_ssc_devtypes,
.probe = ssc_probe,
.remove = ssc_remove,
};
module_platform_driver(ssc_driver);
MODULE_AUTHOR("Hans-Christian Egtvedt <hcegtvedt@atmel.com>");
MODULE_DESCRIPTION("SSC driver for Atmel AVR32 and AT91");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:ssc");

194
drivers/misc/atmel_tclib.c Normal file
View file

@ -0,0 +1,194 @@
#include <linux/atmel_tc.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/of.h>
/*
* This is a thin library to solve the problem of how to portably allocate
* one of the TC blocks. For simplicity, it doesn't currently expect to
* share individual timers between different drivers.
*/
#if defined(CONFIG_AVR32)
/* AVR32 has these divide PBB */
const u8 atmel_tc_divisors[5] = { 0, 4, 8, 16, 32, };
EXPORT_SYMBOL(atmel_tc_divisors);
#elif defined(CONFIG_ARCH_AT91)
/* AT91 has these divide MCK */
const u8 atmel_tc_divisors[5] = { 2, 8, 32, 128, 0, };
EXPORT_SYMBOL(atmel_tc_divisors);
#endif
static DEFINE_SPINLOCK(tc_list_lock);
static LIST_HEAD(tc_list);
/**
* atmel_tc_alloc - allocate a specified TC block
* @block: which block to allocate
*
* Caller allocates a block. If it is available, a pointer to a
* pre-initialized struct atmel_tc is returned. The caller can access
* the registers directly through the "regs" field.
*/
struct atmel_tc *atmel_tc_alloc(unsigned block)
{
struct atmel_tc *tc;
struct platform_device *pdev = NULL;
spin_lock(&tc_list_lock);
list_for_each_entry(tc, &tc_list, node) {
if (tc->allocated)
continue;
if ((tc->pdev->dev.of_node && tc->id == block) ||
(tc->pdev->id == block)) {
pdev = tc->pdev;
tc->allocated = true;
break;
}
}
spin_unlock(&tc_list_lock);
return pdev ? tc : NULL;
}
EXPORT_SYMBOL_GPL(atmel_tc_alloc);
/**
* atmel_tc_free - release a specified TC block
* @tc: Timer/counter block that was returned by atmel_tc_alloc()
*
* This reverses the effect of atmel_tc_alloc(), invalidating the resource
* returned by that routine and making the TC available to other drivers.
*/
void atmel_tc_free(struct atmel_tc *tc)
{
spin_lock(&tc_list_lock);
if (tc->allocated)
tc->allocated = false;
spin_unlock(&tc_list_lock);
}
EXPORT_SYMBOL_GPL(atmel_tc_free);
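A minimal consumer-side sketch of the allocation pattern described in the kernel-doc above, assuming TC block 0; the helper name is invented for illustration:
#include <linux/atmel_tc.h>

static struct atmel_tc *example_claim_tc(void)
{
	struct atmel_tc *tc = atmel_tc_alloc(0);

	if (!tc)
		return NULL;	/* block 0 absent or already allocated */
	/* channels are programmed via tc->regs + ATMEL_TC_REG(channel, reg) */
	return tc;		/* release later with atmel_tc_free(tc) */
}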
#if defined(CONFIG_OF)
static struct atmel_tcb_config tcb_rm9200_config = {
.counter_width = 16,
};
static struct atmel_tcb_config tcb_sam9x5_config = {
.counter_width = 32,
};
static const struct of_device_id atmel_tcb_dt_ids[] = {
{
.compatible = "atmel,at91rm9200-tcb",
.data = &tcb_rm9200_config,
}, {
.compatible = "atmel,at91sam9x5-tcb",
.data = &tcb_sam9x5_config,
}, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(of, atmel_tcb_dt_ids);
#endif
static int __init tc_probe(struct platform_device *pdev)
{
struct atmel_tc *tc;
struct clk *clk;
int irq;
struct resource *r;
unsigned int i;
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return -EINVAL;
tc = devm_kzalloc(&pdev->dev, sizeof(struct atmel_tc), GFP_KERNEL);
if (!tc)
return -ENOMEM;
tc->pdev = pdev;
clk = devm_clk_get(&pdev->dev, "t0_clk");
if (IS_ERR(clk))
return PTR_ERR(clk);
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
tc->regs = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(tc->regs))
return PTR_ERR(tc->regs);
/* Now take SoC information if available */
if (pdev->dev.of_node) {
const struct of_device_id *match;
match = of_match_node(atmel_tcb_dt_ids, pdev->dev.of_node);
if (match)
tc->tcb_config = match->data;
tc->id = of_alias_get_id(tc->pdev->dev.of_node, "tcb");
} else {
tc->id = pdev->id;
}
tc->clk[0] = clk;
tc->clk[1] = devm_clk_get(&pdev->dev, "t1_clk");
if (IS_ERR(tc->clk[1]))
tc->clk[1] = clk;
tc->clk[2] = devm_clk_get(&pdev->dev, "t2_clk");
if (IS_ERR(tc->clk[2]))
tc->clk[2] = clk;
tc->irq[0] = irq;
tc->irq[1] = platform_get_irq(pdev, 1);
if (tc->irq[1] < 0)
tc->irq[1] = irq;
tc->irq[2] = platform_get_irq(pdev, 2);
if (tc->irq[2] < 0)
tc->irq[2] = irq;
for (i = 0; i < 3; i++)
writel(ATMEL_TC_ALL_IRQ, tc->regs + ATMEL_TC_REG(i, IDR));
spin_lock(&tc_list_lock);
list_add_tail(&tc->node, &tc_list);
spin_unlock(&tc_list_lock);
platform_set_drvdata(pdev, tc);
return 0;
}
static void tc_shutdown(struct platform_device *pdev)
{
int i;
struct atmel_tc *tc = platform_get_drvdata(pdev);
for (i = 0; i < 3; i++)
writel(ATMEL_TC_ALL_IRQ, tc->regs + ATMEL_TC_REG(i, IDR));
}
static struct platform_driver tc_driver = {
.driver = {
.name = "atmel_tcb",
.of_match_table = of_match_ptr(atmel_tcb_dt_ids),
},
.shutdown = tc_shutdown,
};
static int __init tc_init(void)
{
return platform_driver_probe(&tc_driver, tc_probe);
}
arch_initcall(tc_init);

1411
drivers/misc/bh1770glc.c Normal file

File diff suppressed because it is too large Load diff

257
drivers/misc/bh1780gli.c Normal file
View file

@ -0,0 +1,257 @@
/*
* bh1780gli.c
* ROHM Ambient Light Sensor Driver
*
* Copyright (C) 2010 Texas Instruments
* Author: Hemanth V <hemanthv@ti.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/i2c.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/of.h>
#define BH1780_REG_CONTROL 0x80
#define BH1780_REG_PARTID 0x8A
#define BH1780_REG_MANFID 0x8B
#define BH1780_REG_DLOW 0x8C
#define BH1780_REG_DHIGH 0x8D
#define BH1780_REVMASK (0xf)
#define BH1780_POWMASK (0x3)
#define BH1780_POFF (0x0)
#define BH1780_PON (0x3)
/* power on settling time in ms */
#define BH1780_PON_DELAY 2
struct bh1780_data {
struct i2c_client *client;
int power_state;
/* lock for sysfs operations */
struct mutex lock;
};
static int bh1780_write(struct bh1780_data *ddata, u8 reg, u8 val, char *msg)
{
int ret = i2c_smbus_write_byte_data(ddata->client, reg, val);
if (ret < 0)
dev_err(&ddata->client->dev,
"i2c_smbus_write_byte_data failed error %d Register (%s)\n",
ret, msg);
return ret;
}
static int bh1780_read(struct bh1780_data *ddata, u8 reg, char *msg)
{
int ret = i2c_smbus_read_byte_data(ddata->client, reg);
if (ret < 0)
dev_err(&ddata->client->dev,
"i2c_smbus_read_byte_data failed error %d Register (%s)\n",
ret, msg);
return ret;
}
static ssize_t bh1780_show_lux(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct platform_device *pdev = to_platform_device(dev);
struct bh1780_data *ddata = platform_get_drvdata(pdev);
int lsb, msb;
lsb = bh1780_read(ddata, BH1780_REG_DLOW, "DLOW");
if (lsb < 0)
return lsb;
msb = bh1780_read(ddata, BH1780_REG_DHIGH, "DHIGH");
if (msb < 0)
return msb;
return sprintf(buf, "%d\n", (msb << 8) | lsb);
}
static ssize_t bh1780_show_power_state(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct platform_device *pdev = to_platform_device(dev);
struct bh1780_data *ddata = platform_get_drvdata(pdev);
int state;
state = bh1780_read(ddata, BH1780_REG_CONTROL, "CONTROL");
if (state < 0)
return state;
return sprintf(buf, "%d\n", state & BH1780_POWMASK);
}
static ssize_t bh1780_store_power_state(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct platform_device *pdev = to_platform_device(dev);
struct bh1780_data *ddata = platform_get_drvdata(pdev);
unsigned long val;
int error;
error = kstrtoul(buf, 0, &val);
if (error)
return error;
if (val < BH1780_POFF || val > BH1780_PON)
return -EINVAL;
mutex_lock(&ddata->lock);
error = bh1780_write(ddata, BH1780_REG_CONTROL, val, "CONTROL");
if (error < 0) {
mutex_unlock(&ddata->lock);
return error;
}
msleep(BH1780_PON_DELAY);
ddata->power_state = val;
mutex_unlock(&ddata->lock);
return count;
}
static DEVICE_ATTR(lux, S_IRUGO, bh1780_show_lux, NULL);
static DEVICE_ATTR(power_state, S_IWUSR | S_IRUGO,
bh1780_show_power_state, bh1780_store_power_state);
static struct attribute *bh1780_attributes[] = {
&dev_attr_power_state.attr,
&dev_attr_lux.attr,
NULL
};
static const struct attribute_group bh1780_attr_group = {
.attrs = bh1780_attributes,
};
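From userspace the sensor is powered up by writing 3 (BH1780_PON) to power_state and the illuminance is then read from lux. A sketch; the "2-0029" part of the path is an assumption that depends on the I2C bus and address:
#include <stdio.h>

int main(void)
{
	const char *dir = "/sys/bus/i2c/devices/2-0029";
	char path[128];
	FILE *f;
	int lux;

	snprintf(path, sizeof(path), "%s/power_state", dir);
	f = fopen(path, "w");
	if (!f)
		return 1;
	fputs("3\n", f);
	fclose(f);

	snprintf(path, sizeof(path), "%s/lux", dir);
	f = fopen(path, "r");
	if (!f)
		return 1;
	if (fscanf(f, "%d", &lux) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("lux: %d\n", lux);
	return 0;
}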
static int bh1780_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
int ret;
struct bh1780_data *ddata;
struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE))
return -EIO;
ddata = devm_kzalloc(&client->dev, sizeof(struct bh1780_data),
GFP_KERNEL);
if (ddata == NULL)
return -ENOMEM;
ddata->client = client;
i2c_set_clientdata(client, ddata);
ret = bh1780_read(ddata, BH1780_REG_PARTID, "PART ID");
if (ret < 0)
return ret;
dev_info(&client->dev, "Ambient Light Sensor, Rev : %d\n",
(ret & BH1780_REVMASK));
mutex_init(&ddata->lock);
return sysfs_create_group(&client->dev.kobj, &bh1780_attr_group);
}
static int bh1780_remove(struct i2c_client *client)
{
sysfs_remove_group(&client->dev.kobj, &bh1780_attr_group);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int bh1780_suspend(struct device *dev)
{
struct bh1780_data *ddata;
int state, ret;
struct i2c_client *client = to_i2c_client(dev);
ddata = i2c_get_clientdata(client);
state = bh1780_read(ddata, BH1780_REG_CONTROL, "CONTROL");
if (state < 0)
return state;
ddata->power_state = state & BH1780_POWMASK;
ret = bh1780_write(ddata, BH1780_REG_CONTROL, BH1780_POFF,
"CONTROL");
if (ret < 0)
return ret;
return 0;
}
static int bh1780_resume(struct device *dev)
{
struct bh1780_data *ddata;
int state, ret;
struct i2c_client *client = to_i2c_client(dev);
ddata = i2c_get_clientdata(client);
state = ddata->power_state;
ret = bh1780_write(ddata, BH1780_REG_CONTROL, state,
"CONTROL");
if (ret < 0)
return ret;
return 0;
}
#endif /* CONFIG_PM_SLEEP */
static SIMPLE_DEV_PM_OPS(bh1780_pm, bh1780_suspend, bh1780_resume);
static const struct i2c_device_id bh1780_id[] = {
{ "bh1780", 0 },
{ },
};
#ifdef CONFIG_OF
static const struct of_device_id of_bh1780_match[] = {
{ .compatible = "rohm,bh1780gli", },
{},
};
MODULE_DEVICE_TABLE(of, of_bh1780_match);
#endif
static struct i2c_driver bh1780_driver = {
.probe = bh1780_probe,
.remove = bh1780_remove,
.id_table = bh1780_id,
.driver = {
.name = "bh1780",
.pm = &bh1780_pm,
.of_match_table = of_match_ptr(of_bh1780_match),
},
};
module_i2c_driver(bh1780_driver);
MODULE_DESCRIPTION("BH1780GLI Ambient Light Sensor Driver");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Hemanth V <hemanthv@ti.com>");

84
drivers/misc/bmp085-i2c.c Normal file
View file

@ -0,0 +1,84 @@
/*
* Copyright (c) 2012 Bosch Sensortec GmbH
* Copyright (c) 2012 Unixphere AB
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/err.h>
#include "bmp085.h"
#define BMP085_I2C_ADDRESS 0x77
static const unsigned short normal_i2c[] = { BMP085_I2C_ADDRESS,
I2C_CLIENT_END };
static int bmp085_i2c_detect(struct i2c_client *client,
struct i2c_board_info *info)
{
if (client->addr != BMP085_I2C_ADDRESS)
return -ENODEV;
return bmp085_detect(&client->dev);
}
static int bmp085_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
int err;
struct regmap *regmap = devm_regmap_init_i2c(client,
&bmp085_regmap_config);
if (IS_ERR(regmap)) {
err = PTR_ERR(regmap);
dev_err(&client->dev, "Failed to init regmap: %d\n", err);
return err;
}
return bmp085_probe(&client->dev, regmap, client->irq);
}
static int bmp085_i2c_remove(struct i2c_client *client)
{
return bmp085_remove(&client->dev);
}
static const struct i2c_device_id bmp085_id[] = {
{ BMP085_NAME, 0 },
{ "bmp180", 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, bmp085_id);
static struct i2c_driver bmp085_i2c_driver = {
.driver = {
.owner = THIS_MODULE,
.name = BMP085_NAME,
},
.id_table = bmp085_id,
.probe = bmp085_i2c_probe,
.remove = bmp085_i2c_remove,
.detect = bmp085_i2c_detect,
.address_list = normal_i2c
};
module_i2c_driver(bmp085_i2c_driver);
MODULE_AUTHOR("Eric Andersson <eric.andersson@unixphere.com>");
MODULE_DESCRIPTION("BMP085 I2C bus driver");
MODULE_LICENSE("GPL");

80
drivers/misc/bmp085-spi.c Normal file
View file

@ -0,0 +1,80 @@
/*
* Copyright (c) 2012 Bosch Sensortec GmbH
* Copyright (c) 2012 Unixphere AB
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/module.h>
#include <linux/spi/spi.h>
#include <linux/err.h>
#include "bmp085.h"
static int bmp085_spi_probe(struct spi_device *client)
{
int err;
struct regmap *regmap;
client->bits_per_word = 8;
err = spi_setup(client);
if (err < 0) {
dev_err(&client->dev, "spi_setup failed!\n");
return err;
}
regmap = devm_regmap_init_spi(client, &bmp085_regmap_config);
if (IS_ERR(regmap)) {
err = PTR_ERR(regmap);
dev_err(&client->dev, "Failed to init regmap: %d\n", err);
return err;
}
return bmp085_probe(&client->dev, regmap, client->irq);
}
static int bmp085_spi_remove(struct spi_device *client)
{
return bmp085_remove(&client->dev);
}
static const struct of_device_id bmp085_of_match[] = {
{ .compatible = "bosch,bmp085", },
{ },
};
MODULE_DEVICE_TABLE(of, bmp085_of_match);
static const struct spi_device_id bmp085_id[] = {
{ "bmp180", 0 },
{ "bmp181", 0 },
{ }
};
MODULE_DEVICE_TABLE(spi, bmp085_id);
static struct spi_driver bmp085_spi_driver = {
.driver = {
.owner = THIS_MODULE,
.name = BMP085_NAME,
.of_match_table = bmp085_of_match
},
.id_table = bmp085_id,
.probe = bmp085_spi_probe,
.remove = bmp085_spi_remove
};
module_spi_driver(bmp085_spi_driver);
MODULE_AUTHOR("Eric Andersson <eric.andersson@unixphere.com>");
MODULE_DESCRIPTION("BMP085 SPI bus driver");
MODULE_LICENSE("GPL");

506
drivers/misc/bmp085.c Normal file
View file

@ -0,0 +1,506 @@
/* Copyright (c) 2010 Christoph Mair <christoph.mair@gmail.com>
* Copyright (c) 2012 Bosch Sensortec GmbH
* Copyright (c) 2012 Unixphere AB
*
* This driver supports the bmp085 and bmp18x digital barometric pressure
* and temperature sensors from Bosch Sensortec. The datasheets
* are available from their website:
* http://www.bosch-sensortec.com/content/language1/downloads/BST-BMP085-DS000-05.pdf
* http://www.bosch-sensortec.com/content/language1/downloads/BST-BMP180-DS000-07.pdf
*
* A pressure measurement is issued by reading from pressure0_input.
* The return value ranges from 30000 to 110000 pascal with a resolution
* of 1 pascal (0.01 millibar) which enables measurements from 9000m above
* to 500m below sea level.
*
* The temperature can be read from temp0_input. Values range from
* -400 to 850, representing the ambient temperature in degrees celsius
* multiplied by 10. The resolution is 0.1 celsius.
*
* Because ambient pressure is temperature dependent, a temperature
* measurement will be executed automatically even if the user is reading
* from pressure0_input. This happens if the last temperature measurement
* has been executed more than one second ago.
*
* To decrease RMS noise from pressure measurements, the bmp085 can
* autonomously average up to eight samples. This is configured by
* writing to the oversampling sysfs file. Accepted values are 0, 1, 2
* and 3; writing a value x selects 2^x samples for calculating the
* ambient pressure. RMS noise is specified as six pascal (without
* averaging) and decreases to three pascal at an oversampling setting
* of 3.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/module.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/of.h>
#include "bmp085.h"
#include <linux/interrupt.h>
#include <linux/completion.h>
#include <linux/gpio.h>
#define BMP085_CHIP_ID 0x55
#define BMP085_CALIBRATION_DATA_START 0xAA
#define BMP085_CALIBRATION_DATA_LENGTH 11 /* 16 bit values */
#define BMP085_CHIP_ID_REG 0xD0
#define BMP085_CTRL_REG 0xF4
#define BMP085_TEMP_MEASUREMENT 0x2E
#define BMP085_PRESSURE_MEASUREMENT 0x34
#define BMP085_CONVERSION_REGISTER_MSB 0xF6
#define BMP085_CONVERSION_REGISTER_LSB 0xF7
#define BMP085_CONVERSION_REGISTER_XLSB 0xF8
#define BMP085_TEMP_CONVERSION_TIME 5
struct bmp085_calibration_data {
s16 AC1, AC2, AC3;
u16 AC4, AC5, AC6;
s16 B1, B2;
s16 MB, MC, MD;
};
struct bmp085_data {
struct device *dev;
struct regmap *regmap;
struct mutex lock;
struct bmp085_calibration_data calibration;
u8 oversampling_setting;
u32 raw_temperature;
u32 raw_pressure;
u32 temp_measurement_period;
unsigned long last_temp_measurement;
u8 chip_id;
s32 b6; /* calculated temperature correction coefficient */
int irq;
struct completion done;
};
static irqreturn_t bmp085_eoc_isr(int irq, void *devid)
{
struct bmp085_data *data = devid;
complete(&data->done);
return IRQ_HANDLED;
}
static s32 bmp085_read_calibration_data(struct bmp085_data *data)
{
u16 tmp[BMP085_CALIBRATION_DATA_LENGTH];
struct bmp085_calibration_data *cali = &(data->calibration);
s32 status = regmap_bulk_read(data->regmap,
BMP085_CALIBRATION_DATA_START, (u8 *)tmp,
(BMP085_CALIBRATION_DATA_LENGTH << 1));
if (status < 0)
return status;
cali->AC1 = be16_to_cpu(tmp[0]);
cali->AC2 = be16_to_cpu(tmp[1]);
cali->AC3 = be16_to_cpu(tmp[2]);
cali->AC4 = be16_to_cpu(tmp[3]);
cali->AC5 = be16_to_cpu(tmp[4]);
cali->AC6 = be16_to_cpu(tmp[5]);
cali->B1 = be16_to_cpu(tmp[6]);
cali->B2 = be16_to_cpu(tmp[7]);
cali->MB = be16_to_cpu(tmp[8]);
cali->MC = be16_to_cpu(tmp[9]);
cali->MD = be16_to_cpu(tmp[10]);
return 0;
}
static s32 bmp085_update_raw_temperature(struct bmp085_data *data)
{
u16 tmp;
s32 status;
mutex_lock(&data->lock);
init_completion(&data->done);
status = regmap_write(data->regmap, BMP085_CTRL_REG,
BMP085_TEMP_MEASUREMENT);
if (status < 0) {
dev_err(data->dev,
"Error while requesting temperature measurement.\n");
goto exit;
}
wait_for_completion_timeout(&data->done, 1 + msecs_to_jiffies(
BMP085_TEMP_CONVERSION_TIME));
status = regmap_bulk_read(data->regmap, BMP085_CONVERSION_REGISTER_MSB,
&tmp, sizeof(tmp));
if (status < 0) {
dev_err(data->dev,
"Error while reading temperature measurement result\n");
goto exit;
}
data->raw_temperature = be16_to_cpu(tmp);
data->last_temp_measurement = jiffies;
status = 0; /* everything ok, return 0 */
exit:
mutex_unlock(&data->lock);
return status;
}
static s32 bmp085_update_raw_pressure(struct bmp085_data *data)
{
u32 tmp = 0;
s32 status;
mutex_lock(&data->lock);
init_completion(&data->done);
status = regmap_write(data->regmap, BMP085_CTRL_REG,
BMP085_PRESSURE_MEASUREMENT +
(data->oversampling_setting << 6));
if (status < 0) {
dev_err(data->dev,
"Error while requesting pressure measurement.\n");
goto exit;
}
/* wait for the end of conversion */
wait_for_completion_timeout(&data->done, 1 + msecs_to_jiffies(
2+(3 << data->oversampling_setting)));
/* copy data into a u32 (4 bytes), but skip the first byte. */
status = regmap_bulk_read(data->regmap, BMP085_CONVERSION_REGISTER_MSB,
((u8 *)&tmp)+1, 3);
if (status < 0) {
dev_err(data->dev,
"Error while reading pressure measurement results\n");
goto exit;
}
data->raw_pressure = be32_to_cpu((tmp));
data->raw_pressure >>= (8-data->oversampling_setting);
status = 0; /* everything ok, return 0 */
exit:
mutex_unlock(&data->lock);
return status;
}
/*
* This function starts the temperature measurement and returns the value
* in tenths of a degree celsius.
*/
static s32 bmp085_get_temperature(struct bmp085_data *data, int *temperature)
{
struct bmp085_calibration_data *cali = &data->calibration;
long x1, x2;
int status;
status = bmp085_update_raw_temperature(data);
if (status < 0)
goto exit;
x1 = ((data->raw_temperature - cali->AC6) * cali->AC5) >> 15;
x2 = (cali->MC << 11) / (x1 + cali->MD);
data->b6 = x1 + x2 - 4000;
/* if NULL just update b6. Used for pressure only measurements */
if (temperature != NULL)
*temperature = (x1+x2+8) >> 4;
exit:
return status;
}
/*
* This function starts the pressure measurement and returns the value
* in pascal. Since the pressure depends on the ambient temperature,
* a temperature measurement is executed according to the given temperature
* measurement period (default is 1 sec boundary). This period could vary
* and needs to be adjusted according to the sensor environment, i.e. if big
* temperature variations then the temperature needs to be read out often.
*/
static s32 bmp085_get_pressure(struct bmp085_data *data, int *pressure)
{
struct bmp085_calibration_data *cali = &data->calibration;
s32 x1, x2, x3, b3;
u32 b4, b7;
s32 p;
int status;
/* at least once a second, force an update of the ambient temperature */
if ((data->last_temp_measurement == 0) ||
time_is_before_jiffies(data->last_temp_measurement + 1*HZ)) {
status = bmp085_get_temperature(data, NULL);
if (status < 0)
return status;
}
status = bmp085_update_raw_pressure(data);
if (status < 0)
return status;
x1 = (data->b6 * data->b6) >> 12;
x1 *= cali->B2;
x1 >>= 11;
x2 = cali->AC2 * data->b6;
x2 >>= 11;
x3 = x1 + x2;
b3 = (((((s32)cali->AC1) * 4 + x3) << data->oversampling_setting) + 2);
b3 >>= 2;
x1 = (cali->AC3 * data->b6) >> 13;
x2 = (cali->B1 * ((data->b6 * data->b6) >> 12)) >> 16;
x3 = (x1 + x2 + 2) >> 2;
b4 = (cali->AC4 * (u32)(x3 + 32768)) >> 15;
b7 = ((u32)data->raw_pressure - b3) *
(50000 >> data->oversampling_setting);
p = ((b7 < 0x80000000) ? ((b7 << 1) / b4) : ((b7 / b4) * 2));
x1 = p >> 8;
x1 *= x1;
x1 = (x1 * 3038) >> 16;
x2 = (-7357 * p) >> 16;
p += (x1 + x2 + 3791) >> 4;
*pressure = p;
return 0;
}
/*
* This function sets the chip-internal oversampling. Valid values are 0..3.
* The chip will use 2^oversampling samples for internal averaging.
* This influences the measurement time and the accuracy; larger values
* increase both. The datasheet gives an overview on how measurement time,
* accuracy and noise correlate.
*/
static void bmp085_set_oversampling(struct bmp085_data *data,
unsigned char oversampling)
{
if (oversampling > 3)
oversampling = 3;
data->oversampling_setting = oversampling;
}
/*
* Returns the currently selected oversampling. Range: 0..3
*/
static unsigned char bmp085_get_oversampling(struct bmp085_data *data)
{
return data->oversampling_setting;
}
/* sysfs callbacks */
static ssize_t set_oversampling(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct bmp085_data *data = dev_get_drvdata(dev);
unsigned long oversampling;
int err = kstrtoul(buf, 10, &oversampling);
if (err == 0) {
mutex_lock(&data->lock);
bmp085_set_oversampling(data, oversampling);
mutex_unlock(&data->lock);
return count;
}
return err;
}
static ssize_t show_oversampling(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct bmp085_data *data = dev_get_drvdata(dev);
return sprintf(buf, "%u\n", bmp085_get_oversampling(data));
}
static DEVICE_ATTR(oversampling, S_IWUSR | S_IRUGO,
show_oversampling, set_oversampling);
static ssize_t show_temperature(struct device *dev,
struct device_attribute *attr, char *buf)
{
int temperature;
int status;
struct bmp085_data *data = dev_get_drvdata(dev);
status = bmp085_get_temperature(data, &temperature);
if (status < 0)
return status;
else
return sprintf(buf, "%d\n", temperature);
}
static DEVICE_ATTR(temp0_input, S_IRUGO, show_temperature, NULL);
static ssize_t show_pressure(struct device *dev,
struct device_attribute *attr, char *buf)
{
int pressure;
int status;
struct bmp085_data *data = dev_get_drvdata(dev);
status = bmp085_get_pressure(data, &pressure);
if (status < 0)
return status;
else
return sprintf(buf, "%d\n", pressure);
}
static DEVICE_ATTR(pressure0_input, S_IRUGO, show_pressure, NULL);
static struct attribute *bmp085_attributes[] = {
&dev_attr_temp0_input.attr,
&dev_attr_pressure0_input.attr,
&dev_attr_oversampling.attr,
NULL
};
static const struct attribute_group bmp085_attr_group = {
.attrs = bmp085_attributes,
};
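Tying this back to the header comment: pressure0_input reports pascal, so dividing by 100 gives millibar, and oversampling accepts 0..3. A userspace sketch; the "1-0077" path component is only an example (0x77 is the chip's I2C address):
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/bus/i2c/devices/1-0077/pressure0_input";
	FILE *f = fopen(path, "r");
	long pa;

	if (!f)
		return 1;
	if (fscanf(f, "%ld", &pa) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("pressure: %ld Pa (%ld.%02ld mbar)\n", pa, pa / 100, pa % 100);
	return 0;
}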
int bmp085_detect(struct device *dev)
{
struct bmp085_data *data = dev_get_drvdata(dev);
unsigned int id;
int ret;
ret = regmap_read(data->regmap, BMP085_CHIP_ID_REG, &id);
if (ret < 0)
return ret;
if (id != data->chip_id)
return -ENODEV;
return 0;
}
EXPORT_SYMBOL_GPL(bmp085_detect);
static void bmp085_get_of_properties(struct bmp085_data *data)
{
#ifdef CONFIG_OF
struct device_node *np = data->dev->of_node;
u32 prop;
if (!np)
return;
if (!of_property_read_u32(np, "chip-id", &prop))
data->chip_id = prop & 0xff;
if (!of_property_read_u32(np, "temp-measurement-period", &prop))
data->temp_measurement_period = (prop/100)*HZ;
if (!of_property_read_u32(np, "default-oversampling", &prop))
data->oversampling_setting = prop & 0xff;
#endif
}
static int bmp085_init_client(struct bmp085_data *data)
{
int status = bmp085_read_calibration_data(data);
if (status < 0)
return status;
/* default settings */
data->chip_id = BMP085_CHIP_ID;
data->last_temp_measurement = 0;
data->temp_measurement_period = 1*HZ;
data->oversampling_setting = 3;
bmp085_get_of_properties(data);
mutex_init(&data->lock);
return 0;
}
struct regmap_config bmp085_regmap_config = {
.reg_bits = 8,
.val_bits = 8
};
EXPORT_SYMBOL_GPL(bmp085_regmap_config);
int bmp085_probe(struct device *dev, struct regmap *regmap, int irq)
{
struct bmp085_data *data;
int err = 0;
data = kzalloc(sizeof(struct bmp085_data), GFP_KERNEL);
if (!data) {
err = -ENOMEM;
goto exit;
}
dev_set_drvdata(dev, data);
data->dev = dev;
data->regmap = regmap;
data->irq = irq;
if (data->irq > 0) {
err = devm_request_irq(dev, data->irq, bmp085_eoc_isr,
IRQF_TRIGGER_RISING, "bmp085",
data);
if (err < 0)
goto exit_free;
}
/* Initialize the BMP085 chip */
err = bmp085_init_client(data);
if (err < 0)
goto exit_free;
err = bmp085_detect(dev);
if (err < 0) {
dev_err(dev, "%s: chip_id failed!\n", BMP085_NAME);
goto exit_free;
}
/* Register sysfs hooks */
err = sysfs_create_group(&dev->kobj, &bmp085_attr_group);
if (err)
goto exit_free;
dev_info(dev, "Successfully initialized %s!\n", BMP085_NAME);
return 0;
exit_free:
kfree(data);
exit:
return err;
}
EXPORT_SYMBOL_GPL(bmp085_probe);
int bmp085_remove(struct device *dev)
{
struct bmp085_data *data = dev_get_drvdata(dev);
sysfs_remove_group(&data->dev->kobj, &bmp085_attr_group);
kfree(data);
return 0;
}
EXPORT_SYMBOL_GPL(bmp085_remove);
MODULE_AUTHOR("Christoph Mair <christoph.mair@gmail.com>");
MODULE_DESCRIPTION("BMP085 driver");
MODULE_LICENSE("GPL");

33
drivers/misc/bmp085.h Normal file
View file

@ -0,0 +1,33 @@
/*
* Copyright (c) 2012 Bosch Sensortec GmbH
* Copyright (c) 2012 Unixphere AB
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef _BMP085_H
#define _BMP085_H
#include <linux/regmap.h>
#define BMP085_NAME "bmp085"
extern struct regmap_config bmp085_regmap_config;
int bmp085_probe(struct device *dev, struct regmap *regmap, int irq);
int bmp085_remove(struct device *dev);
int bmp085_detect(struct device *dev);
#endif
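/*
 * Usage sketch (not part of this header): a minimal I2C glue driver built on
 * the exported bmp085 core API above. The function names below are
 * hypothetical illustrations, not code from this commit.
 */
#if 0 /* example only */
#include <linux/i2c.h>
#include "bmp085.h"

static int example_bmp085_i2c_probe(struct i2c_client *client,
                                    const struct i2c_device_id *id)
{
        struct regmap *regmap;

        /* bmp085_regmap_config describes the chip's 8-bit register layout */
        regmap = devm_regmap_init_i2c(client, &bmp085_regmap_config);
        if (IS_ERR(regmap))
                return PTR_ERR(regmap);

        /* hand the regmap and the (optional) EOC interrupt to the core */
        return bmp085_probe(&client->dev, regmap, client->irq);
}

static int example_bmp085_i2c_remove(struct i2c_client *client)
{
        return bmp085_remove(&client->dev);
}
#endif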

View file

@ -0,0 +1,34 @@
#
# C2 port devices
#
menuconfig C2PORT
tristate "Silicon Labs C2 port support"
default n
help
This option enables support for the Silicon Labs C2 port used to
program Silicon Labs microcontroller chips (and other 8051-compatible
devices).
If your board has no such microcontrollers you don't need this
interface at all.
To compile this driver as a module, choose M here: the module will
be called c2port_core. Note that you also need a client module
usually called c2port-*.
If you are not sure, say N here.
if C2PORT
config C2PORT_DURAMAR_2150
tristate "C2 port support for Eurotech's Duramar 2150"
depends on X86
default n
help
This option enables C2 support for the Eurotech Duramar 2150
on-board microcontroller.
To compile this driver as a module, choose M here: the module will
be called c2port-duramar2150.
endif # C2PORT

View file

@ -0,0 +1,3 @@
obj-$(CONFIG_C2PORT) += core.o
obj-$(CONFIG_C2PORT_DURAMAR_2150) += c2port-duramar2150.o

View file

@ -0,0 +1,159 @@
/*
* Silicon Labs C2 port Linux support for Eurotech Duramar 2150
*
* Copyright (c) 2008 Rodolfo Giometti <giometti@linux.it>
* Copyright (c) 2008 Eurotech S.p.A. <info@eurotech.it>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation
*/
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/c2port.h>
#define DATA_PORT 0x325
#define DIR_PORT 0x326
#define C2D (1 << 0)
#define C2CK (1 << 1)
static DEFINE_MUTEX(update_lock);
/*
* C2 port operations
*/
static void duramar2150_c2port_access(struct c2port_device *dev, int status)
{
u8 v;
mutex_lock(&update_lock);
v = inb(DIR_PORT);
/* 0 = input, 1 = output */
if (status)
outb(v | (C2D | C2CK), DIR_PORT);
else
/* When access is "off" is important that both lines are set
* as inputs or hi-impedance */
outb(v & ~(C2D | C2CK), DIR_PORT);
mutex_unlock(&update_lock);
}
static void duramar2150_c2port_c2d_dir(struct c2port_device *dev, int dir)
{
u8 v;
mutex_lock(&update_lock);
v = inb(DIR_PORT);
if (dir)
outb(v & ~C2D, DIR_PORT);
else
outb(v | C2D, DIR_PORT);
mutex_unlock(&update_lock);
}
static int duramar2150_c2port_c2d_get(struct c2port_device *dev)
{
return inb(DATA_PORT) & C2D;
}
static void duramar2150_c2port_c2d_set(struct c2port_device *dev, int status)
{
u8 v;
mutex_lock(&update_lock);
v = inb(DATA_PORT);
if (status)
outb(v | C2D, DATA_PORT);
else
outb(v & ~C2D, DATA_PORT);
mutex_unlock(&update_lock);
}
static void duramar2150_c2port_c2ck_set(struct c2port_device *dev, int status)
{
u8 v;
mutex_lock(&update_lock);
v = inb(DATA_PORT);
if (status)
outb(v | C2CK, DATA_PORT);
else
outb(v & ~C2CK, DATA_PORT);
mutex_unlock(&update_lock);
}
static struct c2port_ops duramar2150_c2port_ops = {
.block_size = 512, /* bytes */
.blocks_num = 30, /* total flash size: 15360 bytes */
.access = duramar2150_c2port_access,
.c2d_dir = duramar2150_c2port_c2d_dir,
.c2d_get = duramar2150_c2port_c2d_get,
.c2d_set = duramar2150_c2port_c2d_set,
.c2ck_set = duramar2150_c2port_c2ck_set,
};
static struct c2port_device *duramar2150_c2port_dev;
/*
* Module stuff
*/
static int __init duramar2150_c2port_init(void)
{
struct resource *res;
int ret = 0;
res = request_region(0x325, 2, "c2port");
if (!res)
return -EBUSY;
duramar2150_c2port_dev = c2port_device_register("uc",
&duramar2150_c2port_ops, NULL);
if (!duramar2150_c2port_dev) {
ret = -ENODEV;
goto free_region;
}
return 0;
free_region:
release_region(0x325, 2);
return ret;
}
static void __exit duramar2150_c2port_exit(void)
{
/* Setup the GPIOs as input by default (access = 0) */
duramar2150_c2port_access(duramar2150_c2port_dev, 0);
c2port_device_unregister(duramar2150_c2port_dev);
release_region(0x325, 2);
}
module_init(duramar2150_c2port_init);
module_exit(duramar2150_c2port_exit);
MODULE_AUTHOR("Rodolfo Giometti <giometti@linux.it>");
MODULE_DESCRIPTION("Silicon Labs C2 port Linux support for Duramar 2150");
MODULE_LICENSE("GPL");

1009
drivers/misc/c2port/core.c Normal file

File diff suppressed because it is too large

View file

@ -0,0 +1,17 @@
config CARMA_FPGA
tristate "CARMA DATA-FPGA Access Driver"
depends on FSL_SOC && PPC_83xx && MEDIA_SUPPORT && HAS_DMA && FSL_DMA
select VIDEOBUF_DMA_SG
default n
help
Say Y here to include support for communicating with the data
processing FPGAs on the OVRO CARMA board.
config CARMA_FPGA_PROGRAM
tristate "CARMA DATA-FPGA Programmer"
depends on FSL_SOC && PPC_83xx && MEDIA_SUPPORT && HAS_DMA && FSL_DMA
select VIDEOBUF_DMA_SG
default n
help
Say Y here to include support for programming the data processing
FPGAs on the OVRO CARMA board.

View file

@ -0,0 +1,2 @@
obj-$(CONFIG_CARMA_FPGA) += carma-fpga.o
obj-$(CONFIG_CARMA_FPGA_PROGRAM) += carma-fpga-program.o

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -0,0 +1,25 @@
config CB710_CORE
tristate "ENE CB710/720 Flash memory card reader support"
depends on PCI
help
This option enables support for the PCI ENE CB710/720 Flash memory card
reader found in some laptops (e.g. some versions of the HP Compaq nx9500).
You will also have to select some flash card format drivers (MMC/SD,
MemoryStick).
This driver can also be built as a module. If so, the module
will be called cb710.
config CB710_DEBUG
bool "Enable driver debugging"
depends on CB710_CORE != n
default n
help
This is an option for use by developers; most people should
say N here. This adds a lot of debugging output to dmesg.
config CB710_DEBUG_ASSUMPTIONS
bool
depends on CB710_CORE != n
default y

View file

@ -0,0 +1,6 @@
ccflags-$(CONFIG_CB710_DEBUG) := -DDEBUG
obj-$(CONFIG_CB710_CORE) += cb710.o
cb710-y := core.o sgbuf2.o
cb710-$(CONFIG_CB710_DEBUG) += debug.o

359
drivers/misc/cb710/core.c Normal file
View file

@ -0,0 +1,359 @@
/*
* cb710/core.c
*
* Copyright by Michał Mirosław, 2008-2009
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/spinlock.h>
#include <linux/idr.h>
#include <linux/cb710.h>
#include <linux/gfp.h>
static DEFINE_IDA(cb710_ida);
static DEFINE_SPINLOCK(cb710_ida_lock);
void cb710_pci_update_config_reg(struct pci_dev *pdev,
int reg, uint32_t mask, uint32_t xor)
{
u32 rval;
pci_read_config_dword(pdev, reg, &rval);
rval = (rval & mask) ^ xor;
pci_write_config_dword(pdev, reg, rval);
}
EXPORT_SYMBOL_GPL(cb710_pci_update_config_reg);
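/*
 * Worked example (illustrative): the call in cb710_pci_configure() below,
 * cb710_pci_update_config_reg(pdev, 0x48, ~0x000000FF, 0x0000003F), reads
 * config dword 0x48, clears its low byte via the mask and then XORs in 0x3F,
 * i.e. it rewrites the low byte as 0x3F while leaving the upper 24 bits
 * untouched.
 */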
/* Some magic writes based on Windows driver init code */
static int cb710_pci_configure(struct pci_dev *pdev)
{
unsigned int devfn = PCI_DEVFN(PCI_SLOT(pdev->devfn), 0);
struct pci_dev *pdev0;
u32 val;
cb710_pci_update_config_reg(pdev, 0x48,
~0x000000FF, 0x0000003F);
pci_read_config_dword(pdev, 0x48, &val);
if (val & 0x80000000)
return 0;
pdev0 = pci_get_slot(pdev->bus, devfn);
if (!pdev0)
return -ENODEV;
if (pdev0->vendor == PCI_VENDOR_ID_ENE
&& pdev0->device == PCI_DEVICE_ID_ENE_720) {
cb710_pci_update_config_reg(pdev0, 0x8C,
~0x00F00000, 0x00100000);
cb710_pci_update_config_reg(pdev0, 0xB0,
~0x08000000, 0x08000000);
}
cb710_pci_update_config_reg(pdev0, 0x8C,
~0x00000F00, 0x00000200);
cb710_pci_update_config_reg(pdev0, 0x90,
~0x00060000, 0x00040000);
pci_dev_put(pdev0);
return 0;
}
static irqreturn_t cb710_irq_handler(int irq, void *data)
{
struct cb710_chip *chip = data;
struct cb710_slot *slot = &chip->slot[0];
irqreturn_t handled = IRQ_NONE;
unsigned nr;
spin_lock(&chip->irq_lock); /* incl. smp_rmb() */
for (nr = chip->slots; nr; ++slot, --nr) {
cb710_irq_handler_t handler_func = slot->irq_handler;
if (handler_func && handler_func(slot))
handled = IRQ_HANDLED;
}
spin_unlock(&chip->irq_lock);
return handled;
}
static void cb710_release_slot(struct device *dev)
{
#ifdef CONFIG_CB710_DEBUG_ASSUMPTIONS
struct cb710_slot *slot = cb710_pdev_to_slot(to_platform_device(dev));
struct cb710_chip *chip = cb710_slot_to_chip(slot);
/* slot struct can be freed now */
atomic_dec(&chip->slot_refs_count);
#endif
}
static int cb710_register_slot(struct cb710_chip *chip,
unsigned slot_mask, unsigned io_offset, const char *name)
{
int nr = chip->slots;
struct cb710_slot *slot = &chip->slot[nr];
int err;
dev_dbg(cb710_chip_dev(chip),
"register: %s.%d; slot %d; mask %d; IO offset: 0x%02X\n",
name, chip->platform_id, nr, slot_mask, io_offset);
/* slot->irq_handler == NULL here; this needs to be
* seen before platform_device_register() */
++chip->slots;
smp_wmb();
slot->iobase = chip->iobase + io_offset;
slot->pdev.name = name;
slot->pdev.id = chip->platform_id;
slot->pdev.dev.parent = &chip->pdev->dev;
slot->pdev.dev.release = cb710_release_slot;
err = platform_device_register(&slot->pdev);
#ifdef CONFIG_CB710_DEBUG_ASSUMPTIONS
atomic_inc(&chip->slot_refs_count);
#endif
if (err) {
/* device_initialize() called from platform_device_register()
* wants this on error path */
platform_device_put(&slot->pdev);
/* slot->irq_handler == NULL here anyway, so no lock needed */
--chip->slots;
return err;
}
chip->slot_mask |= slot_mask;
return 0;
}
static void cb710_unregister_slot(struct cb710_chip *chip,
unsigned slot_mask)
{
int nr = chip->slots - 1;
if (!(chip->slot_mask & slot_mask))
return;
platform_device_unregister(&chip->slot[nr].pdev);
/* complementary to spin_unlock() in cb710_set_irq_handler() */
smp_rmb();
BUG_ON(chip->slot[nr].irq_handler != NULL);
/* slot->irq_handler == NULL here, so no lock needed */
--chip->slots;
chip->slot_mask &= ~slot_mask;
}
void cb710_set_irq_handler(struct cb710_slot *slot,
cb710_irq_handler_t handler)
{
struct cb710_chip *chip = cb710_slot_to_chip(slot);
unsigned long flags;
spin_lock_irqsave(&chip->irq_lock, flags);
slot->irq_handler = handler;
spin_unlock_irqrestore(&chip->irq_lock, flags);
}
EXPORT_SYMBOL_GPL(cb710_set_irq_handler);
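/*
 * Usage sketch (assumption, not part of this file): a slot driver such as
 * cb710-mmc installs its handler once its slot device is bound and clears it
 * again before unbinding. A non-zero return from the handler tells
 * cb710_irq_handler() above that the interrupt belonged to that slot.
 */
#if 0 /* example only; the function names are hypothetical */
static int example_slot_irq(struct cb710_slot *slot)
{
        /* check and acknowledge the slot's status registers here */
        return 1;
}

static void example_slot_bind(struct cb710_slot *slot)
{
        cb710_set_irq_handler(slot, example_slot_irq);
}

static void example_slot_unbind(struct cb710_slot *slot)
{
        cb710_set_irq_handler(slot, NULL);
}
#endif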
#ifdef CONFIG_PM
static int cb710_suspend(struct pci_dev *pdev, pm_message_t state)
{
struct cb710_chip *chip = pci_get_drvdata(pdev);
devm_free_irq(&pdev->dev, pdev->irq, chip);
pci_save_state(pdev);
pci_disable_device(pdev);
if (state.event & PM_EVENT_SLEEP)
pci_set_power_state(pdev, PCI_D3hot);
return 0;
}
static int cb710_resume(struct pci_dev *pdev)
{
struct cb710_chip *chip = pci_get_drvdata(pdev);
int err;
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
err = pcim_enable_device(pdev);
if (err)
return err;
return devm_request_irq(&pdev->dev, pdev->irq,
cb710_irq_handler, IRQF_SHARED, KBUILD_MODNAME, chip);
}
#endif /* CONFIG_PM */
static int cb710_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
struct cb710_chip *chip;
unsigned long flags;
u32 val;
int err;
int n = 0;
err = cb710_pci_configure(pdev);
if (err)
return err;
/* this is actually magic... */
pci_read_config_dword(pdev, 0x48, &val);
if (!(val & 0x80000000)) {
pci_write_config_dword(pdev, 0x48, val|0x71000000);
pci_read_config_dword(pdev, 0x48, &val);
}
dev_dbg(&pdev->dev, "PCI config[0x48] = 0x%08X\n", val);
if (!(val & 0x70000000))
return -ENODEV;
val = (val >> 28) & 7;
if (val & CB710_SLOT_MMC)
++n;
if (val & CB710_SLOT_MS)
++n;
if (val & CB710_SLOT_SM)
++n;
chip = devm_kzalloc(&pdev->dev,
sizeof(*chip) + n * sizeof(*chip->slot), GFP_KERNEL);
if (!chip)
return -ENOMEM;
err = pcim_enable_device(pdev);
if (err)
return err;
err = pcim_iomap_regions(pdev, 0x0001, KBUILD_MODNAME);
if (err)
return err;
spin_lock_init(&chip->irq_lock);
chip->pdev = pdev;
chip->iobase = pcim_iomap_table(pdev)[0];
pci_set_drvdata(pdev, chip);
err = devm_request_irq(&pdev->dev, pdev->irq,
cb710_irq_handler, IRQF_SHARED, KBUILD_MODNAME, chip);
if (err)
return err;
do {
if (!ida_pre_get(&cb710_ida, GFP_KERNEL))
return -ENOMEM;
spin_lock_irqsave(&cb710_ida_lock, flags);
err = ida_get_new(&cb710_ida, &chip->platform_id);
spin_unlock_irqrestore(&cb710_ida_lock, flags);
if (err && err != -EAGAIN)
return err;
} while (err);
dev_info(&pdev->dev, "id %d, IO 0x%p, IRQ %d\n",
chip->platform_id, chip->iobase, pdev->irq);
if (val & CB710_SLOT_MMC) { /* MMC/SD slot */
err = cb710_register_slot(chip,
CB710_SLOT_MMC, 0x00, "cb710-mmc");
if (err)
return err;
}
if (val & CB710_SLOT_MS) { /* MemoryStick slot */
err = cb710_register_slot(chip,
CB710_SLOT_MS, 0x40, "cb710-ms");
if (err)
goto unreg_mmc;
}
if (val & CB710_SLOT_SM) { /* SmartMedia slot */
err = cb710_register_slot(chip,
CB710_SLOT_SM, 0x60, "cb710-sm");
if (err)
goto unreg_ms;
}
return 0;
unreg_ms:
cb710_unregister_slot(chip, CB710_SLOT_MS);
unreg_mmc:
cb710_unregister_slot(chip, CB710_SLOT_MMC);
#ifdef CONFIG_CB710_DEBUG_ASSUMPTIONS
BUG_ON(atomic_read(&chip->slot_refs_count) != 0);
#endif
return err;
}
static void cb710_remove_one(struct pci_dev *pdev)
{
struct cb710_chip *chip = pci_get_drvdata(pdev);
unsigned long flags;
cb710_unregister_slot(chip, CB710_SLOT_SM);
cb710_unregister_slot(chip, CB710_SLOT_MS);
cb710_unregister_slot(chip, CB710_SLOT_MMC);
#ifdef CONFIG_CB710_DEBUG_ASSUMPTIONS
BUG_ON(atomic_read(&chip->slot_refs_count) != 0);
#endif
spin_lock_irqsave(&cb710_ida_lock, flags);
ida_remove(&cb710_ida, chip->platform_id);
spin_unlock_irqrestore(&cb710_ida_lock, flags);
}
static const struct pci_device_id cb710_pci_tbl[] = {
{ PCI_VENDOR_ID_ENE, PCI_DEVICE_ID_ENE_CB710_FLASH,
PCI_ANY_ID, PCI_ANY_ID, },
{ 0, }
};
static struct pci_driver cb710_driver = {
.name = KBUILD_MODNAME,
.id_table = cb710_pci_tbl,
.probe = cb710_probe,
.remove = cb710_remove_one,
#ifdef CONFIG_PM
.suspend = cb710_suspend,
.resume = cb710_resume,
#endif
};
static int __init cb710_init_module(void)
{
return pci_register_driver(&cb710_driver);
}
static void __exit cb710_cleanup_module(void)
{
pci_unregister_driver(&cb710_driver);
ida_destroy(&cb710_ida);
}
module_init(cb710_init_module);
module_exit(cb710_cleanup_module);
MODULE_AUTHOR("Michał Mirosław <mirq-linux@rere.qmqm.pl>");
MODULE_DESCRIPTION("ENE CB710 memory card reader driver");
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(pci, cb710_pci_tbl);

118
drivers/misc/cb710/debug.c Normal file
View file

@ -0,0 +1,118 @@
/*
* cb710/debug.c
*
* Copyright by Michał Mirosław, 2008-2009
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/cb710.h>
#include <linux/kernel.h>
#include <linux/module.h>
#define CB710_REG_COUNT 0x80
static const u16 allow[CB710_REG_COUNT/16] = {
0xFFF0, 0xFFFF, 0xFFFF, 0xFFFF,
0xFFF0, 0xFFFF, 0xFFFF, 0xFFFF,
};
static const char *const prefix[ARRAY_SIZE(allow)] = {
"MMC", "MMC", "MMC", "MMC",
"MS?", "MS?", "SM?", "SM?"
};
static inline int allow_reg_read(unsigned block, unsigned offset, unsigned bits)
{
unsigned mask = (1 << bits/8) - 1;
offset *= bits/8;
return ((allow[block] >> offset) & mask) == mask;
}
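/*
 * Example (illustrative): for an 8-bit read in block 0 ("MMC"), mask is 0x1
 * and offset is the register index itself, so allow[0] == 0xFFF0 permits
 * reads of registers 0x04..0x0F and hides 0x00..0x03. For a 32-bit read,
 * mask is 0xF and offset is scaled by four, so all four registers in an
 * aligned group must be readable for the access to be allowed.
 */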
#define CB710_READ_REGS_TEMPLATE(t) \
static void cb710_read_regs_##t(void __iomem *iobase, \
u##t *reg, unsigned select) \
{ \
unsigned i, j; \
\
for (i = 0; i < ARRAY_SIZE(allow); ++i, reg += 16/(t/8)) { \
if (!(select & (1 << i))) \
continue; \
\
for (j = 0; j < 0x10/(t/8); ++j) { \
if (!allow_reg_read(i, j, t)) \
continue; \
reg[j] = ioread##t(iobase \
+ (i << 4) + (j * (t/8))); \
} \
} \
}
static const char cb710_regf_8[] = "%02X";
static const char cb710_regf_16[] = "%04X";
static const char cb710_regf_32[] = "%08X";
static const char cb710_xes[] = "xxxxxxxx";
#define CB710_DUMP_REGS_TEMPLATE(t) \
static void cb710_dump_regs_##t(struct device *dev, \
const u##t *reg, unsigned select) \
{ \
const char *const xp = &cb710_xes[8 - t/4]; \
const char *const format = cb710_regf_##t; \
\
char msg[100], *p; \
unsigned i, j; \
\
for (i = 0; i < ARRAY_SIZE(allow); ++i, reg += 16/(t/8)) { \
if (!(select & (1 << i))) \
continue; \
p = msg; \
for (j = 0; j < 0x10/(t/8); ++j) { \
*p++ = ' '; \
if (j == 8/(t/8)) \
*p++ = ' '; \
if (allow_reg_read(i, j, t)) \
p += sprintf(p, format, reg[j]); \
else \
p += sprintf(p, "%s", xp); \
} \
dev_dbg(dev, "%s 0x%02X %s\n", prefix[i], i << 4, msg); \
} \
}
#define CB710_READ_AND_DUMP_REGS_TEMPLATE(t) \
static void cb710_read_and_dump_regs_##t(struct cb710_chip *chip, \
unsigned select) \
{ \
u##t regs[CB710_REG_COUNT/sizeof(u##t)]; \
\
memset(&regs, 0, sizeof(regs)); \
cb710_read_regs_##t(chip->iobase, regs, select); \
cb710_dump_regs_##t(cb710_chip_dev(chip), regs, select); \
}
#define CB710_REG_ACCESS_TEMPLATES(t) \
CB710_READ_REGS_TEMPLATE(t) \
CB710_DUMP_REGS_TEMPLATE(t) \
CB710_READ_AND_DUMP_REGS_TEMPLATE(t)
CB710_REG_ACCESS_TEMPLATES(8)
CB710_REG_ACCESS_TEMPLATES(16)
CB710_REG_ACCESS_TEMPLATES(32)
void cb710_dump_regs(struct cb710_chip *chip, unsigned select)
{
if (!(select & CB710_DUMP_REGS_MASK))
select = CB710_DUMP_REGS_ALL;
if (!(select & CB710_DUMP_ACCESS_MASK))
select |= CB710_DUMP_ACCESS_8;
if (select & CB710_DUMP_ACCESS_32)
cb710_read_and_dump_regs_32(chip, select);
if (select & CB710_DUMP_ACCESS_16)
cb710_read_and_dump_regs_16(chip, select);
if (select & CB710_DUMP_ACCESS_8)
cb710_read_and_dump_regs_8(chip, select);
}
EXPORT_SYMBOL_GPL(cb710_dump_regs);

146
drivers/misc/cb710/sgbuf2.c Normal file
View file

@ -0,0 +1,146 @@
/*
* cb710/sgbuf2.c
*
* Copyright by Michał Mirosław, 2008-2009
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/cb710.h>
static bool sg_dwiter_next(struct sg_mapping_iter *miter)
{
if (sg_miter_next(miter)) {
miter->consumed = 0;
return true;
} else
return false;
}
static bool sg_dwiter_is_at_end(struct sg_mapping_iter *miter)
{
return miter->length == miter->consumed && !sg_dwiter_next(miter);
}
static uint32_t sg_dwiter_read_buffer(struct sg_mapping_iter *miter)
{
size_t len, left = 4;
uint32_t data;
void *addr = &data;
do {
len = min(miter->length - miter->consumed, left);
memcpy(addr, miter->addr + miter->consumed, len);
miter->consumed += len;
left -= len;
if (!left)
return data;
addr += len;
} while (sg_dwiter_next(miter));
memset(addr, 0, left);
return data;
}
static inline bool needs_unaligned_copy(const void *ptr)
{
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
return false;
#else
return ((ptr - NULL) & 3) != 0;
#endif
}
static bool sg_dwiter_get_next_block(struct sg_mapping_iter *miter, uint32_t **ptr)
{
size_t len;
if (sg_dwiter_is_at_end(miter))
return true;
len = miter->length - miter->consumed;
if (likely(len >= 4 && !needs_unaligned_copy(
miter->addr + miter->consumed))) {
*ptr = miter->addr + miter->consumed;
miter->consumed += 4;
return true;
}
return false;
}
/**
* cb710_sg_dwiter_read_next_block() - get next 32-bit word from sg buffer
* @miter: sg mapping iterator used for reading
*
* Description:
* Returns 32-bit word starting at byte pointed to by @miter@
* handling any alignment issues. Bytes past the buffer's end
* are not accessed (read) but are returned as zeroes. @miter@
* is advanced by 4 bytes or to the end of buffer whichever is
* closer.
*
* Context:
* Same requirements as in sg_miter_next().
*
* Returns:
* 32-bit word just read.
*/
uint32_t cb710_sg_dwiter_read_next_block(struct sg_mapping_iter *miter)
{
uint32_t *ptr = NULL;
if (likely(sg_dwiter_get_next_block(miter, &ptr)))
return ptr ? *ptr : 0;
return sg_dwiter_read_buffer(miter);
}
EXPORT_SYMBOL_GPL(cb710_sg_dwiter_read_next_block);
static void sg_dwiter_write_slow(struct sg_mapping_iter *miter, uint32_t data)
{
size_t len, left = 4;
void *addr = &data;
do {
len = min(miter->length - miter->consumed, left);
memcpy(miter->addr, addr, len);
miter->consumed += len;
left -= len;
if (!left)
return;
addr += len;
} while (sg_dwiter_next(miter));
}
/**
* cb710_sg_dwiter_write_next_block() - write next 32-bit word to sg buffer
* @miter: sg mapping iterator used for writing
*
* Description:
* Writes 32-bit word starting at byte pointed to by @miter@
* handling any alignment issues. Bytes which would be written
* past the buffer's end are silently discarded. @miter@ is
* advanced by 4 bytes or to the end of buffer whichever is closer.
*
* Context:
* Same requirements as in sg_miter_next().
*/
void cb710_sg_dwiter_write_next_block(struct sg_mapping_iter *miter, uint32_t data)
{
uint32_t *ptr = NULL;
if (likely(sg_dwiter_get_next_block(miter, &ptr))) {
if (ptr)
*ptr = data;
else
return;
} else
sg_dwiter_write_slow(miter, data);
}
EXPORT_SYMBOL_GPL(cb710_sg_dwiter_write_next_block);
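/*
 * Usage sketch (assumption, not code from this commit): a host driver moving
 * data from a 32-bit PIO FIFO into a scatterlist can wrap these helpers
 * around the standard sg_miter API. "fifo_reg" is a hypothetical device
 * register.
 */
#if 0 /* example only */
static void example_read_fifo_to_sg(struct scatterlist *sg,
                                    unsigned int sg_len, size_t words,
                                    void __iomem *fifo_reg)
{
        struct sg_mapping_iter miter;

        sg_miter_start(&miter, sg, sg_len, SG_MITER_TO_SG);
        while (words--)
                cb710_sg_dwiter_write_next_block(&miter, ioread32(fifo_reg));
        sg_miter_stop(&miter);
}
#endif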

384
drivers/misc/cs5535-mfgpt.c Normal file
View file

@ -0,0 +1,384 @@
/*
* Driver for the CS5535/CS5536 Multi-Function General Purpose Timers (MFGPT)
*
* Copyright (C) 2006, Advanced Micro Devices, Inc.
* Copyright (C) 2007 Andres Salomon <dilinger@debian.org>
* Copyright (C) 2009 Andres Salomon <dilinger@collabora.co.uk>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General Public License
* as published by the Free Software Foundation.
*
* The MFGPTs are documented in AMD Geode CS5536 Companion Device Data Book.
*/
#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/cs5535.h>
#include <linux/slab.h>
#define DRV_NAME "cs5535-mfgpt"
static int mfgpt_reset_timers;
module_param_named(mfgptfix, mfgpt_reset_timers, int, 0644);
MODULE_PARM_DESC(mfgptfix, "Try to reset the MFGPT timers during init; "
"required by some broken BIOSes (ie, TinyBIOS < 0.99) or kexec "
"(1 = reset the MFGPT using an undocumented bit, "
"2 = perform a soft reset by unconfiguring all timers); "
"use what works best for you.");
struct cs5535_mfgpt_timer {
struct cs5535_mfgpt_chip *chip;
int nr;
};
static struct cs5535_mfgpt_chip {
DECLARE_BITMAP(avail, MFGPT_MAX_TIMERS);
resource_size_t base;
struct platform_device *pdev;
spinlock_t lock;
int initialized;
} cs5535_mfgpt_chip;
int cs5535_mfgpt_toggle_event(struct cs5535_mfgpt_timer *timer, int cmp,
int event, int enable)
{
uint32_t msr, mask, value, dummy;
int shift = (cmp == MFGPT_CMP1) ? 0 : 8;
if (!timer) {
WARN_ON(1);
return -EIO;
}
/*
* The register maps for these are described in sections 6.17.1.x of
* the AMD Geode CS5536 Companion Device Data Book.
*/
switch (event) {
case MFGPT_EVENT_RESET:
/*
* XXX: According to the docs, we cannot reset timers above
* 6; that is, resets for 7 and 8 will be ignored. Is this
* a problem? -dilinger
*/
msr = MSR_MFGPT_NR;
mask = 1 << (timer->nr + 24);
break;
case MFGPT_EVENT_NMI:
msr = MSR_MFGPT_NR;
mask = 1 << (timer->nr + shift);
break;
case MFGPT_EVENT_IRQ:
msr = MSR_MFGPT_IRQ;
mask = 1 << (timer->nr + shift);
break;
default:
return -EIO;
}
rdmsr(msr, value, dummy);
if (enable)
value |= mask;
else
value &= ~mask;
wrmsr(msr, value, dummy);
return 0;
}
EXPORT_SYMBOL_GPL(cs5535_mfgpt_toggle_event);
int cs5535_mfgpt_set_irq(struct cs5535_mfgpt_timer *timer, int cmp, int *irq,
int enable)
{
uint32_t zsel, lpc, dummy;
int shift;
if (!timer) {
WARN_ON(1);
return -EIO;
}
/*
* Unfortunately, MFGPTs come in pairs sharing their IRQ lines. If VSA
* is using the same CMP of the timer's Siamese twin, the IRQ is set to
* 2, and we must not use or change it.
* XXX: Likewise, 2 Linux drivers might clash if the 2nd overwrites the
* IRQ of the 1st. This can only happen when forcing an IRQ; calling this
* with *irq==0 is safe. Currently there _are_ no 2 drivers.
*/
rdmsr(MSR_PIC_ZSEL_LOW, zsel, dummy);
shift = ((cmp == MFGPT_CMP1 ? 0 : 4) + timer->nr % 4) * 4;
if (((zsel >> shift) & 0xF) == 2)
return -EIO;
/* Choose IRQ: if none supplied, keep IRQ already set or use default */
if (!*irq)
*irq = (zsel >> shift) & 0xF;
if (!*irq)
*irq = CONFIG_CS5535_MFGPT_DEFAULT_IRQ;
/* Can't use IRQ if it's 0 (=disabled), 2, or routed to LPC */
if (*irq < 1 || *irq == 2 || *irq > 15)
return -EIO;
rdmsr(MSR_PIC_IRQM_LPC, lpc, dummy);
if (lpc & (1 << *irq))
return -EIO;
/* All chosen and checked - go for it */
if (cs5535_mfgpt_toggle_event(timer, cmp, MFGPT_EVENT_IRQ, enable))
return -EIO;
if (enable) {
zsel = (zsel & ~(0xF << shift)) | (*irq << shift);
wrmsr(MSR_PIC_ZSEL_LOW, zsel, dummy);
}
return 0;
}
EXPORT_SYMBOL_GPL(cs5535_mfgpt_set_irq);
struct cs5535_mfgpt_timer *cs5535_mfgpt_alloc_timer(int timer_nr, int domain)
{
struct cs5535_mfgpt_chip *mfgpt = &cs5535_mfgpt_chip;
struct cs5535_mfgpt_timer *timer = NULL;
unsigned long flags;
int max;
if (!mfgpt->initialized)
goto done;
/* only allocate timers from the working domain if requested */
if (domain == MFGPT_DOMAIN_WORKING)
max = 6;
else
max = MFGPT_MAX_TIMERS;
if (timer_nr >= max) {
/* programmer error. silly programmers! */
WARN_ON(1);
goto done;
}
spin_lock_irqsave(&mfgpt->lock, flags);
if (timer_nr < 0) {
unsigned long t;
/* try to find any available timer */
t = find_first_bit(mfgpt->avail, max);
/* set timer_nr to -1 if no timers available */
timer_nr = t < max ? (int) t : -1;
} else {
/* check if the requested timer's available */
if (!test_bit(timer_nr, mfgpt->avail))
timer_nr = -1;
}
if (timer_nr >= 0)
/* if timer_nr is not -1, it's an available timer */
__clear_bit(timer_nr, mfgpt->avail);
spin_unlock_irqrestore(&mfgpt->lock, flags);
if (timer_nr < 0)
goto done;
timer = kmalloc(sizeof(*timer), GFP_KERNEL);
if (!timer) {
/* aw hell */
spin_lock_irqsave(&mfgpt->lock, flags);
__set_bit(timer_nr, mfgpt->avail);
spin_unlock_irqrestore(&mfgpt->lock, flags);
goto done;
}
timer->chip = mfgpt;
timer->nr = timer_nr;
dev_info(&mfgpt->pdev->dev, "registered timer %d\n", timer_nr);
done:
return timer;
}
EXPORT_SYMBOL_GPL(cs5535_mfgpt_alloc_timer);
/*
* XXX: This frees the timer memory, but never resets the actual hardware
* timer. The old geode_mfgpt code did this; it would be good to figure
* out a way to actually release the hardware timer. See comments below.
*/
void cs5535_mfgpt_free_timer(struct cs5535_mfgpt_timer *timer)
{
unsigned long flags;
uint16_t val;
/* timer can be made available again only if never set up */
val = cs5535_mfgpt_read(timer, MFGPT_REG_SETUP);
if (!(val & MFGPT_SETUP_SETUP)) {
spin_lock_irqsave(&timer->chip->lock, flags);
__set_bit(timer->nr, timer->chip->avail);
spin_unlock_irqrestore(&timer->chip->lock, flags);
}
kfree(timer);
}
EXPORT_SYMBOL_GPL(cs5535_mfgpt_free_timer);
uint16_t cs5535_mfgpt_read(struct cs5535_mfgpt_timer *timer, uint16_t reg)
{
return inw(timer->chip->base + reg + (timer->nr * 8));
}
EXPORT_SYMBOL_GPL(cs5535_mfgpt_read);
void cs5535_mfgpt_write(struct cs5535_mfgpt_timer *timer, uint16_t reg,
uint16_t value)
{
outw(value, timer->chip->base + reg + (timer->nr * 8));
}
EXPORT_SYMBOL_GPL(cs5535_mfgpt_write);
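/*
 * Usage sketch (assumption, mirroring how a consumer such as a clockevent
 * driver would use this API; not code from this commit):
 */
#if 0 /* example only */
static struct cs5535_mfgpt_timer *example_timer;
static int example_irq;        /* 0 = let cs5535_mfgpt_set_irq() pick one */

static int example_setup(void)
{
        /* grab any free timer from the working domain (timers 0-5) */
        example_timer = cs5535_mfgpt_alloc_timer(MFGPT_TIMER_ANY,
                                                 MFGPT_DOMAIN_WORKING);
        if (!example_timer)
                return -ENODEV;

        /* route comparator 2 events to an IRQ and enable them */
        if (cs5535_mfgpt_set_irq(example_timer, MFGPT_CMP2, &example_irq, 1)) {
                cs5535_mfgpt_free_timer(example_timer);
                return -EIO;
        }

        /* program the comparator and reset the counter; a real consumer
         * would then write MFGPT_REG_SETUP to start the timer */
        cs5535_mfgpt_write(example_timer, MFGPT_REG_CMP2, 0xffff);
        cs5535_mfgpt_write(example_timer, MFGPT_REG_COUNTER, 0);
        return 0;
}
#endif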
/*
* This is a sledgehammer that resets all MFGPT timers. This is required by
* some broken BIOSes which leave the system in an unstable state
* (TinyBIOS 0.98, for example; fixed in 0.99). It's uncertain as to
* whether or not this secret MSR can be used to release individual timers.
* Jordan tells me that he and Mitch once played w/ it, but it's unclear
* what the results of that were (and they experienced some instability).
*/
static void reset_all_timers(void)
{
uint32_t val, dummy;
/* The following undocumented bit resets the MFGPT timers */
val = 0xFF; dummy = 0;
wrmsr(MSR_MFGPT_SETUP, val, dummy);
}
/*
* This is another sledgehammer to reset all MFGPT timers.
* Instead of using the undocumented bit method it clears
* IRQ, NMI and RESET settings.
*/
static void soft_reset(void)
{
int i;
struct cs5535_mfgpt_timer t;
for (i = 0; i < MFGPT_MAX_TIMERS; i++) {
t.nr = i;
cs5535_mfgpt_toggle_event(&t, MFGPT_CMP1, MFGPT_EVENT_RESET, 0);
cs5535_mfgpt_toggle_event(&t, MFGPT_CMP2, MFGPT_EVENT_RESET, 0);
cs5535_mfgpt_toggle_event(&t, MFGPT_CMP1, MFGPT_EVENT_NMI, 0);
cs5535_mfgpt_toggle_event(&t, MFGPT_CMP2, MFGPT_EVENT_NMI, 0);
cs5535_mfgpt_toggle_event(&t, MFGPT_CMP1, MFGPT_EVENT_IRQ, 0);
cs5535_mfgpt_toggle_event(&t, MFGPT_CMP2, MFGPT_EVENT_IRQ, 0);
}
}
/*
* Check whether any MFGPTs are available for the kernel to use. In most
* cases, firmware that uses AMD's VSA code will claim all timers during
* bootup; we certainly don't want to take them if they're already in use.
* In other cases (such as with VSAless OpenFirmware), the system firmware
* leaves timers available for us to use.
*/
static int scan_timers(struct cs5535_mfgpt_chip *mfgpt)
{
struct cs5535_mfgpt_timer timer = { .chip = mfgpt };
unsigned long flags;
int timers = 0;
uint16_t val;
int i;
/* bios workaround */
if (mfgpt_reset_timers == 1)
reset_all_timers();
else if (mfgpt_reset_timers == 2)
soft_reset();
/* just to be safe, protect this section w/ lock */
spin_lock_irqsave(&mfgpt->lock, flags);
for (i = 0; i < MFGPT_MAX_TIMERS; i++) {
timer.nr = i;
val = cs5535_mfgpt_read(&timer, MFGPT_REG_SETUP);
if (!(val & MFGPT_SETUP_SETUP) || mfgpt_reset_timers == 2) {
__set_bit(i, mfgpt->avail);
timers++;
}
}
spin_unlock_irqrestore(&mfgpt->lock, flags);
return timers;
}
static int cs5535_mfgpt_probe(struct platform_device *pdev)
{
struct resource *res;
int err = -EIO, t;
if (mfgpt_reset_timers < 0 || mfgpt_reset_timers > 2) {
dev_err(&pdev->dev, "Bad mfgpt_reset_timers value: %i\n",
mfgpt_reset_timers);
goto done;
}
/* There are two ways to get the MFGPT base address; one is by
* fetching it from MSR_LBAR_MFGPT, the other is by reading the
* PCI BAR info. The latter method is easier (especially across
* different architectures), so we'll stick with that for now. If
* it turns out to be unreliable in the face of crappy BIOSes, we
* can always go back to using MSRs.. */
res = platform_get_resource(pdev, IORESOURCE_IO, 0);
if (!res) {
dev_err(&pdev->dev, "can't fetch device resource info\n");
goto done;
}
if (!request_region(res->start, resource_size(res), pdev->name)) {
dev_err(&pdev->dev, "can't request region\n");
goto done;
}
/* set up the driver-specific struct */
cs5535_mfgpt_chip.base = res->start;
cs5535_mfgpt_chip.pdev = pdev;
spin_lock_init(&cs5535_mfgpt_chip.lock);
dev_info(&pdev->dev, "reserved resource region %pR\n", res);
/* detect the available timers */
t = scan_timers(&cs5535_mfgpt_chip);
dev_info(&pdev->dev, "%d MFGPT timers available\n", t);
cs5535_mfgpt_chip.initialized = 1;
return 0;
done:
return err;
}
static struct platform_driver cs5535_mfgpt_driver = {
.driver = {
.name = DRV_NAME,
.owner = THIS_MODULE,
},
.probe = cs5535_mfgpt_probe,
};
static int __init cs5535_mfgpt_init(void)
{
return platform_driver_register(&cs5535_mfgpt_driver);
}
module_init(cs5535_mfgpt_init);
MODULE_AUTHOR("Andres Salomon <dilinger@queued.net>");
MODULE_DESCRIPTION("CS5535/CS5536 MFGPT timer driver");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:" DRV_NAME);

25
drivers/misc/cxl/Kconfig Normal file
View file

@ -0,0 +1,25 @@
#
# IBM Coherent Accelerator (CXL) compatible devices
#
config CXL_BASE
bool
default n
select PPC_COPRO_BASE
config CXL
tristate "Support for IBM Coherent Accelerators (CXL)"
depends on PPC_POWERNV && PCI_MSI
select CXL_BASE
default m
help
Select this option to enable driver support for IBM Coherent
Accelerators (CXL). CXL is otherwise known as Coherent Accelerator
Processor Interface (CAPI). CAPI allows accelerators in FPGAs to be
coherently attached to a CPU via an MMU. This driver enables
userspace programs to access these accelerators via /dev/cxl/afuM.N
devices.
CAPI adapters are found in POWER8 based systems.
If unsure, say N.

View file

@ -0,0 +1,3 @@
cxl-y += main.o file.o irq.o fault.o native.o context.o sysfs.o debugfs.o pci.o
obj-$(CONFIG_CXL) += cxl.o
obj-$(CONFIG_CXL_BASE) += base.o

86
drivers/misc/cxl/base.c Normal file
View file

@ -0,0 +1,86 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/rcupdate.h>
#include <asm/errno.h>
#include <misc/cxl.h>
#include "cxl.h"
/* protected by rcu */
static struct cxl_calls *cxl_calls;
atomic_t cxl_use_count = ATOMIC_INIT(0);
EXPORT_SYMBOL(cxl_use_count);
#ifdef CONFIG_CXL_MODULE
static inline struct cxl_calls *cxl_calls_get(void)
{
struct cxl_calls *calls = NULL;
rcu_read_lock();
calls = rcu_dereference(cxl_calls);
if (calls && !try_module_get(calls->owner))
calls = NULL;
rcu_read_unlock();
return calls;
}
static inline void cxl_calls_put(struct cxl_calls *calls)
{
BUG_ON(calls != cxl_calls);
/* we don't need to rcu this, as we hold a reference to the module */
module_put(cxl_calls->owner);
}
#else /* !defined CONFIG_CXL_MODULE */
static inline struct cxl_calls *cxl_calls_get(void)
{
return cxl_calls;
}
static inline void cxl_calls_put(struct cxl_calls *calls) { }
#endif /* CONFIG_CXL_MODULE */
void cxl_slbia(struct mm_struct *mm)
{
struct cxl_calls *calls;
calls = cxl_calls_get();
if (!calls)
return;
if (cxl_ctx_in_use())
calls->cxl_slbia(mm);
cxl_calls_put(calls);
}
int register_cxl_calls(struct cxl_calls *calls)
{
if (cxl_calls)
return -EBUSY;
rcu_assign_pointer(cxl_calls, calls);
return 0;
}
EXPORT_SYMBOL_GPL(register_cxl_calls);
void unregister_cxl_calls(struct cxl_calls *calls)
{
BUG_ON(cxl_calls->owner != calls->owner);
RCU_INIT_POINTER(cxl_calls, NULL);
synchronize_rcu();
}
EXPORT_SYMBOL_GPL(unregister_cxl_calls);
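/*
 * Usage sketch (assumption): the main cxl module hands its callbacks to this
 * base module roughly like so (the names below are illustrative):
 */
#if 0 /* example only */
static void example_cxl_slbia(struct mm_struct *mm)
{
        /* invalidate any segment table entries cached for this mm */
}

static struct cxl_calls example_calls = {
        .cxl_slbia = example_cxl_slbia,
        .owner = THIS_MODULE,
};

static int __init example_init(void)
{
        return register_cxl_calls(&example_calls);
}

static void __exit example_exit(void)
{
        unregister_cxl_calls(&example_calls);
}
#endif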

203
drivers/misc/cxl/context.c Normal file
View file

@ -0,0 +1,203 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/bitmap.h>
#include <linux/sched.h>
#include <linux/pid.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/idr.h>
#include <asm/cputable.h>
#include <asm/current.h>
#include <asm/copro.h>
#include "cxl.h"
/*
* Allocates space for a CXL context.
*/
struct cxl_context *cxl_context_alloc(void)
{
return kzalloc(sizeof(struct cxl_context), GFP_KERNEL);
}
/*
* Initialises a CXL context.
*/
int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master,
struct address_space *mapping)
{
int i;
spin_lock_init(&ctx->sste_lock);
ctx->afu = afu;
ctx->master = master;
ctx->pid = NULL; /* Set in start work ioctl */
mutex_init(&ctx->mapping_lock);
ctx->mapping = mapping;
/*
* Allocate the segment table before we put it in the IDR so that we
* can always access it when dereferenced from IDR. For the same
* reason, the segment table is only destroyed after the context is
* removed from the IDR. Access to this in the IOCTL is protected by
* Linux filesystem semantics (can't IOCTL until open is complete).
*/
i = cxl_alloc_sst(ctx);
if (i)
return i;
INIT_WORK(&ctx->fault_work, cxl_handle_fault);
init_waitqueue_head(&ctx->wq);
spin_lock_init(&ctx->lock);
ctx->irq_bitmap = NULL;
ctx->pending_irq = false;
ctx->pending_fault = false;
ctx->pending_afu_err = false;
/*
* When we have to destroy all contexts in cxl_context_detach_all() we
* end up with afu_release_irqs() called from inside an
* idr_for_each_entry(). Hence we need to make sure that anything
* dereferenced from this IDR is ok before we allocate the IDR here.
* This clears out the IRQ ranges to ensure this.
*/
for (i = 0; i < CXL_IRQ_RANGES; i++)
ctx->irqs.range[i] = 0;
mutex_init(&ctx->status_mutex);
ctx->status = OPENED;
/*
* Allocating the IDR! We had better make sure everything that
* dereferences from it is set up.
*/
mutex_lock(&afu->contexts_lock);
idr_preload(GFP_KERNEL);
i = idr_alloc(&ctx->afu->contexts_idr, ctx, 0,
ctx->afu->num_procs, GFP_NOWAIT);
idr_preload_end();
mutex_unlock(&afu->contexts_lock);
if (i < 0)
return i;
ctx->pe = i;
ctx->elem = &ctx->afu->spa[i];
ctx->pe_inserted = false;
return 0;
}
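/*
 * Usage sketch (assumption): callers such as the character-device open path
 * are expected to pair these helpers roughly as follows:
 *
 *      ctx = cxl_context_alloc();
 *      if (!ctx)
 *              return -ENOMEM;
 *      rc = cxl_context_init(ctx, afu, master, inode->i_mapping);
 *      ...
 *      cxl_context_detach(ctx);        (on release)
 *      cxl_context_free(ctx);
 */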
/*
* Map a per-context mmio space into the given vma.
*/
int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
{
u64 len = vma->vm_end - vma->vm_start;
len = min(len, ctx->psn_size);
if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
return vm_iomap_memory(vma, ctx->afu->psn_phys, ctx->afu->adapter->ps_size);
}
/* make sure there is a valid per process space for this AFU */
if ((ctx->master && !ctx->afu->psa) || (!ctx->afu->pp_psa)) {
pr_devel("AFU doesn't support mmio space\n");
return -EINVAL;
}
/* Can't mmap until the AFU is enabled */
if (!ctx->afu->enabled)
return -EBUSY;
pr_devel("%s: mmio physical: %llx pe: %i master:%i\n", __func__,
ctx->psn_phys, ctx->pe, ctx->master);
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
return vm_iomap_memory(vma, ctx->psn_phys, len);
}
/*
* Detach a context from the hardware. This disables interrupts and doesn't
* return until all outstanding interrupts for this context have completed. The
* hardware should no longer access *ctx after this has returned.
*/
static void __detach_context(struct cxl_context *ctx)
{
enum cxl_context_status status;
mutex_lock(&ctx->status_mutex);
status = ctx->status;
ctx->status = CLOSED;
mutex_unlock(&ctx->status_mutex);
if (status != STARTED)
return;
WARN_ON(cxl_detach_process(ctx));
afu_release_irqs(ctx);
flush_work(&ctx->fault_work); /* Only needed for dedicated process */
wake_up_all(&ctx->wq);
/* Release Problem State Area mapping */
mutex_lock(&ctx->mapping_lock);
if (ctx->mapping)
unmap_mapping_range(ctx->mapping, 0, 0, 1);
mutex_unlock(&ctx->mapping_lock);
}
/*
* Detach the given context from the AFU. This doesn't actually
* free the context but it should stop the context running in hardware
* (ie. prevent this context from generating any further interrupts
* so that it can be freed).
*/
void cxl_context_detach(struct cxl_context *ctx)
{
__detach_context(ctx);
}
/*
* Detach all contexts on the given AFU.
*/
void cxl_context_detach_all(struct cxl_afu *afu)
{
struct cxl_context *ctx;
int tmp;
mutex_lock(&afu->contexts_lock);
idr_for_each_entry(&afu->contexts_idr, ctx, tmp) {
/*
* Anything done in here needs to be setup before the IDR is
* created and torn down after the IDR removed
*/
__detach_context(ctx);
}
mutex_unlock(&afu->contexts_lock);
}
void cxl_context_free(struct cxl_context *ctx)
{
mutex_lock(&ctx->afu->contexts_lock);
idr_remove(&ctx->afu->contexts_idr, ctx->pe);
mutex_unlock(&ctx->afu->contexts_lock);
synchronize_rcu();
free_page((u64)ctx->sstp);
ctx->sstp = NULL;
put_pid(ctx->pid);
kfree(ctx);
}

635
drivers/misc/cxl/cxl.h Normal file
View file

@ -0,0 +1,635 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _CXL_H_
#define _CXL_H_
#include <linux/interrupt.h>
#include <linux/semaphore.h>
#include <linux/device.h>
#include <linux/types.h>
#include <linux/cdev.h>
#include <linux/pid.h>
#include <linux/io.h>
#include <linux/pci.h>
#include <asm/cputable.h>
#include <asm/mmu.h>
#include <asm/reg.h>
#include <misc/cxl.h>
#include <uapi/misc/cxl.h>
extern uint cxl_verbose;
#define CXL_TIMEOUT 5
/*
* Bump version each time a user API change is made, whether it is
* backwards compatible or not.
*/
#define CXL_API_VERSION 1
#define CXL_API_VERSION_COMPATIBLE 1
/*
* Opaque types to avoid accidentally passing registers for the wrong MMIO
*
* At the end of the day, I'm not married to using typedef here, but it might
* (and has!) help avoid bugs like mixing up CXL_PSL_CtxTime and
* CXL_PSL_CtxTime_An, or calling cxl_p1n_write instead of cxl_p1_write.
*
* I'm quite happy if these are changed back to #defines before upstreaming; it
* should be little more than a regexp search+replace operation in this file.
*/
typedef struct {
const int x;
} cxl_p1_reg_t;
typedef struct {
const int x;
} cxl_p1n_reg_t;
typedef struct {
const int x;
} cxl_p2n_reg_t;
#define cxl_reg_off(reg) \
(reg.x)
/* Memory maps. Ref CXL Appendix A */
/* PSL Privilege 1 Memory Map */
/* Configuration and Control area */
static const cxl_p1_reg_t CXL_PSL_CtxTime = {0x0000};
static const cxl_p1_reg_t CXL_PSL_ErrIVTE = {0x0008};
static const cxl_p1_reg_t CXL_PSL_KEY1 = {0x0010};
static const cxl_p1_reg_t CXL_PSL_KEY2 = {0x0018};
static const cxl_p1_reg_t CXL_PSL_Control = {0x0020};
/* Downloading */
static const cxl_p1_reg_t CXL_PSL_DLCNTL = {0x0060};
static const cxl_p1_reg_t CXL_PSL_DLADDR = {0x0068};
/* PSL Lookaside Buffer Management Area */
static const cxl_p1_reg_t CXL_PSL_LBISEL = {0x0080};
static const cxl_p1_reg_t CXL_PSL_SLBIE = {0x0088};
static const cxl_p1_reg_t CXL_PSL_SLBIA = {0x0090};
static const cxl_p1_reg_t CXL_PSL_TLBIE = {0x00A0};
static const cxl_p1_reg_t CXL_PSL_TLBIA = {0x00A8};
static const cxl_p1_reg_t CXL_PSL_AFUSEL = {0x00B0};
/* 0x00C0:7EFF Implementation dependent area */
static const cxl_p1_reg_t CXL_PSL_FIR1 = {0x0100};
static const cxl_p1_reg_t CXL_PSL_FIR2 = {0x0108};
static const cxl_p1_reg_t CXL_PSL_VERSION = {0x0118};
static const cxl_p1_reg_t CXL_PSL_RESLCKTO = {0x0128};
static const cxl_p1_reg_t CXL_PSL_FIR_CNTL = {0x0148};
static const cxl_p1_reg_t CXL_PSL_DSNDCTL = {0x0150};
static const cxl_p1_reg_t CXL_PSL_SNWRALLOC = {0x0158};
static const cxl_p1_reg_t CXL_PSL_TRACE = {0x0170};
/* 0x7F00:7FFF Reserved PCIe MSI-X Pending Bit Array area */
/* 0x8000:FFFF Reserved PCIe MSI-X Table Area */
/* PSL Slice Privilege 1 Memory Map */
/* Configuration Area */
static const cxl_p1n_reg_t CXL_PSL_SR_An = {0x00};
static const cxl_p1n_reg_t CXL_PSL_LPID_An = {0x08};
static const cxl_p1n_reg_t CXL_PSL_AMBAR_An = {0x10};
static const cxl_p1n_reg_t CXL_PSL_SPOffset_An = {0x18};
static const cxl_p1n_reg_t CXL_PSL_ID_An = {0x20};
static const cxl_p1n_reg_t CXL_PSL_SERR_An = {0x28};
/* Memory Management and Lookaside Buffer Management */
static const cxl_p1n_reg_t CXL_PSL_SDR_An = {0x30};
static const cxl_p1n_reg_t CXL_PSL_AMOR_An = {0x38};
/* Pointer Area */
static const cxl_p1n_reg_t CXL_HAURP_An = {0x80};
static const cxl_p1n_reg_t CXL_PSL_SPAP_An = {0x88};
static const cxl_p1n_reg_t CXL_PSL_LLCMD_An = {0x90};
/* Control Area */
static const cxl_p1n_reg_t CXL_PSL_SCNTL_An = {0xA0};
static const cxl_p1n_reg_t CXL_PSL_CtxTime_An = {0xA8};
static const cxl_p1n_reg_t CXL_PSL_IVTE_Offset_An = {0xB0};
static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An = {0xB8};
/* 0xC0:FF Implementation Dependent Area */
static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An = {0xC0};
static const cxl_p1n_reg_t CXL_AFU_DEBUG_An = {0xC8};
static const cxl_p1n_reg_t CXL_PSL_APCALLOC_A = {0xD0};
static const cxl_p1n_reg_t CXL_PSL_COALLOC_A = {0xD8};
static const cxl_p1n_reg_t CXL_PSL_RXCTL_A = {0xE0};
static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE = {0xE8};
/* PSL Slice Privilege 2 Memory Map */
/* Configuration and Control Area */
static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
static const cxl_p2n_reg_t CXL_CSRP_An = {0x008};
static const cxl_p2n_reg_t CXL_AURP0_An = {0x010};
static const cxl_p2n_reg_t CXL_AURP1_An = {0x018};
static const cxl_p2n_reg_t CXL_SSTP0_An = {0x020};
static const cxl_p2n_reg_t CXL_SSTP1_An = {0x028};
static const cxl_p2n_reg_t CXL_PSL_AMR_An = {0x030};
/* Segment Lookaside Buffer Management */
static const cxl_p2n_reg_t CXL_SLBIE_An = {0x040};
static const cxl_p2n_reg_t CXL_SLBIA_An = {0x048};
static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
/* Interrupt Registers */
static const cxl_p2n_reg_t CXL_PSL_DSISR_An = {0x060};
static const cxl_p2n_reg_t CXL_PSL_DAR_An = {0x068};
static const cxl_p2n_reg_t CXL_PSL_DSR_An = {0x070};
static const cxl_p2n_reg_t CXL_PSL_TFC_An = {0x078};
static const cxl_p2n_reg_t CXL_PSL_PEHandle_An = {0x080};
static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088};
/* AFU Registers */
static const cxl_p2n_reg_t CXL_AFU_Cntl_An = {0x090};
static const cxl_p2n_reg_t CXL_AFU_ERR_An = {0x098};
/* Work Element Descriptor */
static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
/* 0x0C0:FFF Implementation Dependent Area */
#define CXL_PSL_SPAP_Addr 0x0ffffffffffff000ULL
#define CXL_PSL_SPAP_Size 0x0000000000000ff0ULL
#define CXL_PSL_SPAP_Size_Shift 4
#define CXL_PSL_SPAP_V 0x0000000000000001ULL
/****** CXL_PSL_DLCNTL *****************************************************/
#define CXL_PSL_DLCNTL_D (0x1ull << (63-28))
#define CXL_PSL_DLCNTL_C (0x1ull << (63-29))
#define CXL_PSL_DLCNTL_E (0x1ull << (63-30))
#define CXL_PSL_DLCNTL_S (0x1ull << (63-31))
#define CXL_PSL_DLCNTL_CE (CXL_PSL_DLCNTL_C | CXL_PSL_DLCNTL_E)
#define CXL_PSL_DLCNTL_DCES (CXL_PSL_DLCNTL_D | CXL_PSL_DLCNTL_CE | CXL_PSL_DLCNTL_S)
/****** CXL_PSL_SR_An ******************************************************/
#define CXL_PSL_SR_An_SF MSR_SF /* 64bit */
#define CXL_PSL_SR_An_TA (1ull << (63-1)) /* Tags active, GA1: 0 */
#define CXL_PSL_SR_An_HV MSR_HV /* Hypervisor, GA1: 0 */
#define CXL_PSL_SR_An_PR MSR_PR /* Problem state, GA1: 1 */
#define CXL_PSL_SR_An_ISL (1ull << (63-53)) /* Ignore Segment Large Page */
#define CXL_PSL_SR_An_TC (1ull << (63-54)) /* Page Table secondary hash */
#define CXL_PSL_SR_An_US (1ull << (63-56)) /* User state, GA1: X */
#define CXL_PSL_SR_An_SC (1ull << (63-58)) /* Segment Table secondary hash */
#define CXL_PSL_SR_An_R MSR_DR /* Relocate, GA1: 1 */
#define CXL_PSL_SR_An_MP (1ull << (63-62)) /* Master Process */
#define CXL_PSL_SR_An_LE (1ull << (63-63)) /* Little Endian */
/****** CXL_PSL_LLCMD_An ****************************************************/
#define CXL_LLCMD_TERMINATE 0x0001000000000000ULL
#define CXL_LLCMD_REMOVE 0x0002000000000000ULL
#define CXL_LLCMD_SUSPEND 0x0003000000000000ULL
#define CXL_LLCMD_RESUME 0x0004000000000000ULL
#define CXL_LLCMD_ADD 0x0005000000000000ULL
#define CXL_LLCMD_UPDATE 0x0006000000000000ULL
#define CXL_LLCMD_HANDLE_MASK 0x000000000000ffffULL
/****** CXL_PSL_ID_An ****************************************************/
#define CXL_PSL_ID_An_F (1ull << (63-31))
#define CXL_PSL_ID_An_L (1ull << (63-30))
/****** CXL_PSL_SCNTL_An ****************************************************/
#define CXL_PSL_SCNTL_An_CR (0x1ull << (63-15))
/* Programming Modes: */
#define CXL_PSL_SCNTL_An_PM_MASK (0xffffull << (63-31))
#define CXL_PSL_SCNTL_An_PM_Shared (0x0000ull << (63-31))
#define CXL_PSL_SCNTL_An_PM_OS (0x0001ull << (63-31))
#define CXL_PSL_SCNTL_An_PM_Process (0x0002ull << (63-31))
#define CXL_PSL_SCNTL_An_PM_AFU (0x0004ull << (63-31))
#define CXL_PSL_SCNTL_An_PM_AFU_PBT (0x0104ull << (63-31))
/* Purge Status (ro) */
#define CXL_PSL_SCNTL_An_Ps_MASK (0x3ull << (63-39))
#define CXL_PSL_SCNTL_An_Ps_Pending (0x1ull << (63-39))
#define CXL_PSL_SCNTL_An_Ps_Complete (0x3ull << (63-39))
/* Purge */
#define CXL_PSL_SCNTL_An_Pc (0x1ull << (63-48))
/* Suspend Status (ro) */
#define CXL_PSL_SCNTL_An_Ss_MASK (0x3ull << (63-55))
#define CXL_PSL_SCNTL_An_Ss_Pending (0x1ull << (63-55))
#define CXL_PSL_SCNTL_An_Ss_Complete (0x3ull << (63-55))
/* Suspend Control */
#define CXL_PSL_SCNTL_An_Sc (0x1ull << (63-63))
/* AFU Slice Enable Status (ro) */
#define CXL_AFU_Cntl_An_ES_MASK (0x7ull << (63-2))
#define CXL_AFU_Cntl_An_ES_Disabled (0x0ull << (63-2))
#define CXL_AFU_Cntl_An_ES_Enabled (0x4ull << (63-2))
/* AFU Slice Enable */
#define CXL_AFU_Cntl_An_E (0x1ull << (63-3))
/* AFU Slice Reset status (ro) */
#define CXL_AFU_Cntl_An_RS_MASK (0x3ull << (63-5))
#define CXL_AFU_Cntl_An_RS_Pending (0x1ull << (63-5))
#define CXL_AFU_Cntl_An_RS_Complete (0x2ull << (63-5))
/* AFU Slice Reset */
#define CXL_AFU_Cntl_An_RA (0x1ull << (63-7))
/****** CXL_SSTP0/1_An ******************************************************/
/* These top bits are for the segment that CONTAINS the segment table */
#define CXL_SSTP0_An_B_SHIFT SLB_VSID_SSIZE_SHIFT
#define CXL_SSTP0_An_KS (1ull << (63-2))
#define CXL_SSTP0_An_KP (1ull << (63-3))
#define CXL_SSTP0_An_N (1ull << (63-4))
#define CXL_SSTP0_An_L (1ull << (63-5))
#define CXL_SSTP0_An_C (1ull << (63-6))
#define CXL_SSTP0_An_TA (1ull << (63-7))
#define CXL_SSTP0_An_LP_SHIFT (63-9) /* 2 Bits */
/* And finally, the virtual address & size of the segment table: */
#define CXL_SSTP0_An_SegTableSize_SHIFT (63-31) /* 12 Bits */
#define CXL_SSTP0_An_SegTableSize_MASK \
(((1ull << 12) - 1) << CXL_SSTP0_An_SegTableSize_SHIFT)
#define CXL_SSTP0_An_STVA_U_MASK ((1ull << (63-49))-1)
#define CXL_SSTP1_An_STVA_L_MASK (~((1ull << (63-55))-1))
#define CXL_SSTP1_An_V (1ull << (63-63))
/****** CXL_PSL_SLBIE_[An] **************************************************/
/* write: */
#define CXL_SLBIE_C PPC_BIT(36) /* Class */
#define CXL_SLBIE_SS PPC_BITMASK(37, 38) /* Segment Size */
#define CXL_SLBIE_SS_SHIFT PPC_BITLSHIFT(38)
#define CXL_SLBIE_TA PPC_BIT(38) /* Tags Active */
/* read: */
#define CXL_SLBIE_MAX PPC_BITMASK(24, 31)
#define CXL_SLBIE_PENDING PPC_BITMASK(56, 63)
/****** Common to all CXL_TLBIA/SLBIA_[An] **********************************/
#define CXL_TLB_SLB_P (1ull) /* Pending (read) */
/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers **********************/
#define CXL_TLB_SLB_IQ_ALL (0ull) /* Inv qualifier */
#define CXL_TLB_SLB_IQ_LPID (1ull) /* Inv qualifier */
#define CXL_TLB_SLB_IQ_LPIDPID (3ull) /* Inv qualifier */
/****** CXL_PSL_AFUSEL ******************************************************/
#define CXL_PSL_AFUSEL_A (1ull << (63-55)) /* Adapter wide invalidates affect all AFUs */
/****** CXL_PSL_DSISR_An ****************************************************/
#define CXL_PSL_DSISR_An_DS (1ull << (63-0)) /* Segment not found */
#define CXL_PSL_DSISR_An_DM (1ull << (63-1)) /* PTE not found (See also: M) or protection fault */
#define CXL_PSL_DSISR_An_ST (1ull << (63-2)) /* Segment Table PTE not found */
#define CXL_PSL_DSISR_An_UR (1ull << (63-3)) /* AURP PTE not found */
#define CXL_PSL_DSISR_TRANS (CXL_PSL_DSISR_An_DS | CXL_PSL_DSISR_An_DM | CXL_PSL_DSISR_An_ST | CXL_PSL_DSISR_An_UR)
#define CXL_PSL_DSISR_An_PE (1ull << (63-4)) /* PSL Error (implementation specific) */
#define CXL_PSL_DSISR_An_AE (1ull << (63-5)) /* AFU Error */
#define CXL_PSL_DSISR_An_OC (1ull << (63-6)) /* OS Context Warning */
/* NOTE: Bits 32:63 are undefined if DSISR[DS] = 1 */
#define CXL_PSL_DSISR_An_M DSISR_NOHPTE /* PTE not found */
#define CXL_PSL_DSISR_An_P DSISR_PROTFAULT /* Storage protection violation */
#define CXL_PSL_DSISR_An_A (1ull << (63-37)) /* AFU lock access to write through or cache inhibited storage */
#define CXL_PSL_DSISR_An_S DSISR_ISSTORE /* Access was afu_wr or afu_zero */
#define CXL_PSL_DSISR_An_K DSISR_KEYFAULT /* Access not permitted by virtual page class key protection */
/****** CXL_PSL_TFC_An ******************************************************/
#define CXL_PSL_TFC_An_A (1ull << (63-28)) /* Acknowledge non-translation fault */
#define CXL_PSL_TFC_An_C (1ull << (63-29)) /* Continue (abort transaction) */
#define CXL_PSL_TFC_An_AE (1ull << (63-30)) /* Restart PSL with address error */
#define CXL_PSL_TFC_An_R (1ull << (63-31)) /* Restart PSL transaction */
/* cxl_process_element->software_status */
#define CXL_PE_SOFTWARE_STATE_V (1ul << (31 - 0)) /* Valid */
#define CXL_PE_SOFTWARE_STATE_C (1ul << (31 - 29)) /* Complete */
#define CXL_PE_SOFTWARE_STATE_S (1ul << (31 - 30)) /* Suspend */
#define CXL_PE_SOFTWARE_STATE_T (1ul << (31 - 31)) /* Terminate */
/* SPA->sw_command_status */
#define CXL_SPA_SW_CMD_MASK 0xffff000000000000ULL
#define CXL_SPA_SW_CMD_TERMINATE 0x0001000000000000ULL
#define CXL_SPA_SW_CMD_REMOVE 0x0002000000000000ULL
#define CXL_SPA_SW_CMD_SUSPEND 0x0003000000000000ULL
#define CXL_SPA_SW_CMD_RESUME 0x0004000000000000ULL
#define CXL_SPA_SW_CMD_ADD 0x0005000000000000ULL
#define CXL_SPA_SW_CMD_UPDATE 0x0006000000000000ULL
#define CXL_SPA_SW_STATE_MASK 0x0000ffff00000000ULL
#define CXL_SPA_SW_STATE_TERMINATED 0x0000000100000000ULL
#define CXL_SPA_SW_STATE_REMOVED 0x0000000200000000ULL
#define CXL_SPA_SW_STATE_SUSPENDED 0x0000000300000000ULL
#define CXL_SPA_SW_STATE_RESUMED 0x0000000400000000ULL
#define CXL_SPA_SW_STATE_ADDED 0x0000000500000000ULL
#define CXL_SPA_SW_STATE_UPDATED 0x0000000600000000ULL
#define CXL_SPA_SW_PSL_ID_MASK 0x00000000ffff0000ULL
#define CXL_SPA_SW_LINK_MASK 0x000000000000ffffULL
#define CXL_MAX_SLICES 4
#define MAX_AFU_MMIO_REGS 3
#define CXL_MODE_DEDICATED 0x1
#define CXL_MODE_DIRECTED 0x2
#define CXL_MODE_TIME_SLICED 0x4
#define CXL_SUPPORTED_MODES (CXL_MODE_DEDICATED | CXL_MODE_DIRECTED)
enum cxl_context_status {
CLOSED,
OPENED,
STARTED
};
enum prefault_modes {
CXL_PREFAULT_NONE,
CXL_PREFAULT_WED,
CXL_PREFAULT_ALL,
};
struct cxl_sste {
__be64 esid_data;
__be64 vsid_data;
};
#define to_cxl_adapter(d) container_of(d, struct cxl, dev)
#define to_cxl_afu(d) container_of(d, struct cxl_afu, dev)
struct cxl_afu {
irq_hw_number_t psl_hwirq;
irq_hw_number_t serr_hwirq;
unsigned int serr_virq;
void __iomem *p1n_mmio;
void __iomem *p2n_mmio;
phys_addr_t psn_phys;
u64 pp_offset;
u64 pp_size;
void __iomem *afu_desc_mmio;
struct cxl *adapter;
struct device dev;
struct cdev afu_cdev_s, afu_cdev_m, afu_cdev_d;
struct device *chardev_s, *chardev_m, *chardev_d;
struct idr contexts_idr;
struct dentry *debugfs;
struct mutex contexts_lock;
struct mutex spa_mutex;
spinlock_t afu_cntl_lock;
/*
* Only the first part of the SPA is used for the process element
* linked list. The only other part that software needs to worry about
* is sw_command_status, which we store a separate pointer to.
* Everything else in the SPA is only used by hardware
*/
struct cxl_process_element *spa;
__be64 *sw_command_status;
unsigned int spa_size;
int spa_order;
int spa_max_procs;
unsigned int psl_virq;
int pp_irqs;
int irqs_max;
int num_procs;
int max_procs_virtualised;
int slice;
int modes_supported;
int current_mode;
enum prefault_modes prefault_mode;
bool psa;
bool pp_psa;
bool enabled;
};
/*
* This is a cxl context. If the PSL is in dedicated mode, there will be one
* of these per AFU. If in AFU directed there can be lots of these.
*/
struct cxl_context {
struct cxl_afu *afu;
/* Problem state MMIO */
phys_addr_t psn_phys;
u64 psn_size;
/* Used to unmap any mmaps when force detaching */
struct address_space *mapping;
struct mutex mapping_lock;
spinlock_t sste_lock; /* Protects segment table entries */
struct cxl_sste *sstp;
u64 sstp0, sstp1;
unsigned int sst_size, sst_lru;
wait_queue_head_t wq;
struct pid *pid;
spinlock_t lock; /* Protects pending_irq_mask, pending_fault and fault_addr */
/* Only used in PR mode */
u64 process_token;
unsigned long *irq_bitmap; /* Accessed from IRQ context */
struct cxl_irq_ranges irqs;
u64 fault_addr;
u64 fault_dsisr;
u64 afu_err;
/*
* This status and its lock protect start and detach context
* from racing. It also prevents detach from racing with
* itself.
*/
enum cxl_context_status status;
struct mutex status_mutex;
/* XXX: Is it possible to need multiple work items at once? */
struct work_struct fault_work;
u64 dsisr;
u64 dar;
struct cxl_process_element *elem;
int pe; /* process element handle */
u32 irq_count;
bool pe_inserted;
bool master;
bool kernel;
bool pending_irq;
bool pending_fault;
bool pending_afu_err;
};
struct cxl {
void __iomem *p1_mmio;
void __iomem *p2_mmio;
irq_hw_number_t err_hwirq;
unsigned int err_virq;
spinlock_t afu_list_lock;
struct cxl_afu *afu[CXL_MAX_SLICES];
struct device dev;
struct dentry *trace;
struct dentry *psl_err_chk;
struct dentry *debugfs;
struct bin_attribute cxl_attr;
int adapter_num;
int user_irqs;
u64 afu_desc_off;
u64 afu_desc_size;
u64 ps_off;
u64 ps_size;
u16 psl_rev;
u16 base_image;
u8 vsec_status;
u8 caia_major;
u8 caia_minor;
u8 slices;
bool user_image_loaded;
bool perst_loads_image;
bool perst_select_user;
};
int cxl_alloc_one_irq(struct cxl *adapter);
void cxl_release_one_irq(struct cxl *adapter, int hwirq);
int cxl_alloc_irq_ranges(struct cxl_irq_ranges *irqs, struct cxl *adapter, unsigned int num);
void cxl_release_irq_ranges(struct cxl_irq_ranges *irqs, struct cxl *adapter);
int cxl_setup_irq(struct cxl *adapter, unsigned int hwirq, unsigned int virq);
int cxl_update_image_control(struct cxl *adapter);
/* common == phyp + powernv */
struct cxl_process_element_common {
__be32 tid;
__be32 pid;
__be64 csrp;
__be64 aurp0;
__be64 aurp1;
__be64 sstp0;
__be64 sstp1;
__be64 amr;
u8 reserved3[4];
__be64 wed;
} __packed;
/* just powernv */
struct cxl_process_element {
__be64 sr;
__be64 SPOffset;
__be64 sdr;
__be64 haurp;
__be32 ctxtime;
__be16 ivte_offsets[4];
__be16 ivte_ranges[4];
__be32 lpid;
struct cxl_process_element_common common;
__be32 software_state;
} __packed;
static inline void __iomem *_cxl_p1_addr(struct cxl *cxl, cxl_p1_reg_t reg)
{
WARN_ON(!cpu_has_feature(CPU_FTR_HVMODE));
return cxl->p1_mmio + cxl_reg_off(reg);
}
#define cxl_p1_write(cxl, reg, val) \
out_be64(_cxl_p1_addr(cxl, reg), val)
#define cxl_p1_read(cxl, reg) \
in_be64(_cxl_p1_addr(cxl, reg))
static inline void __iomem *_cxl_p1n_addr(struct cxl_afu *afu, cxl_p1n_reg_t reg)
{
WARN_ON(!cpu_has_feature(CPU_FTR_HVMODE));
return afu->p1n_mmio + cxl_reg_off(reg);
}
#define cxl_p1n_write(afu, reg, val) \
out_be64(_cxl_p1n_addr(afu, reg), val)
#define cxl_p1n_read(afu, reg) \
in_be64(_cxl_p1n_addr(afu, reg))
static inline void __iomem *_cxl_p2n_addr(struct cxl_afu *afu, cxl_p2n_reg_t reg)
{
return afu->p2n_mmio + cxl_reg_off(reg);
}
#define cxl_p2n_write(afu, reg, val) \
out_be64(_cxl_p2n_addr(afu, reg), val)
#define cxl_p2n_read(afu, reg) \
in_be64(_cxl_p2n_addr(afu, reg))
struct cxl_calls {
void (*cxl_slbia)(struct mm_struct *mm);
struct module *owner;
};
int register_cxl_calls(struct cxl_calls *calls);
void unregister_cxl_calls(struct cxl_calls *calls);
int cxl_alloc_adapter_nr(struct cxl *adapter);
void cxl_remove_adapter_nr(struct cxl *adapter);
int cxl_file_init(void);
void cxl_file_exit(void);
int cxl_register_adapter(struct cxl *adapter);
int cxl_register_afu(struct cxl_afu *afu);
int cxl_chardev_d_afu_add(struct cxl_afu *afu);
int cxl_chardev_m_afu_add(struct cxl_afu *afu);
int cxl_chardev_s_afu_add(struct cxl_afu *afu);
void cxl_chardev_afu_remove(struct cxl_afu *afu);
void cxl_context_detach_all(struct cxl_afu *afu);
void cxl_context_free(struct cxl_context *ctx);
void cxl_context_detach(struct cxl_context *ctx);
int cxl_sysfs_adapter_add(struct cxl *adapter);
void cxl_sysfs_adapter_remove(struct cxl *adapter);
int cxl_sysfs_afu_add(struct cxl_afu *afu);
void cxl_sysfs_afu_remove(struct cxl_afu *afu);
int cxl_sysfs_afu_m_add(struct cxl_afu *afu);
void cxl_sysfs_afu_m_remove(struct cxl_afu *afu);
int cxl_afu_activate_mode(struct cxl_afu *afu, int mode);
int _cxl_afu_deactivate_mode(struct cxl_afu *afu, int mode);
int cxl_afu_deactivate_mode(struct cxl_afu *afu);
int cxl_afu_select_best_mode(struct cxl_afu *afu);
unsigned int cxl_map_irq(struct cxl *adapter, irq_hw_number_t hwirq,
irq_handler_t handler, void *cookie);
void cxl_unmap_irq(unsigned int virq, void *cookie);
int cxl_register_psl_irq(struct cxl_afu *afu);
void cxl_release_psl_irq(struct cxl_afu *afu);
int cxl_register_psl_err_irq(struct cxl *adapter);
void cxl_release_psl_err_irq(struct cxl *adapter);
int cxl_register_serr_irq(struct cxl_afu *afu);
void cxl_release_serr_irq(struct cxl_afu *afu);
int afu_register_irqs(struct cxl_context *ctx, u32 count);
void afu_release_irqs(struct cxl_context *ctx);
irqreturn_t cxl_slice_irq_err(int irq, void *data);
int cxl_debugfs_init(void);
void cxl_debugfs_exit(void);
int cxl_debugfs_adapter_add(struct cxl *adapter);
void cxl_debugfs_adapter_remove(struct cxl *adapter);
int cxl_debugfs_afu_add(struct cxl_afu *afu);
void cxl_debugfs_afu_remove(struct cxl_afu *afu);
void cxl_handle_fault(struct work_struct *work);
void cxl_prefault(struct cxl_context *ctx, u64 wed);
struct cxl *get_cxl_adapter(int num);
int cxl_alloc_sst(struct cxl_context *ctx);
void init_cxl_native(void);
struct cxl_context *cxl_context_alloc(void);
int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master,
struct address_space *mapping);
void cxl_context_free(struct cxl_context *ctx);
int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma);
/* This matches the layout of the H_COLLECT_CA_INT_INFO retbuf */
struct cxl_irq_info {
u64 dsisr;
u64 dar;
u64 dsr;
u32 pid;
u32 tid;
u64 afu_err;
u64 errstat;
u64 padding[3]; /* to match the expected retbuf size for plpar_hcall9 */
};
int cxl_attach_process(struct cxl_context *ctx, bool kernel, u64 wed,
u64 amr);
int cxl_detach_process(struct cxl_context *ctx);
int cxl_get_irq(struct cxl_context *ctx, struct cxl_irq_info *info);
int cxl_ack_irq(struct cxl_context *ctx, u64 tfc, u64 psl_reset_mask);
int cxl_check_error(struct cxl_afu *afu);
int cxl_afu_slbia(struct cxl_afu *afu);
int cxl_tlb_slb_invalidate(struct cxl *adapter);
int cxl_afu_disable(struct cxl_afu *afu);
int cxl_afu_reset(struct cxl_afu *afu);
int cxl_psl_purge(struct cxl_afu *afu);
void cxl_stop_trace(struct cxl *cxl);
extern struct pci_driver cxl_pci_driver;
#endif

132
drivers/misc/cxl/debugfs.c Normal file
View file

@ -0,0 +1,132 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/debugfs.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include "cxl.h"
static struct dentry *cxl_debugfs;
void cxl_stop_trace(struct cxl *adapter)
{
int slice;
/* Stop the trace */
cxl_p1_write(adapter, CXL_PSL_TRACE, 0x8000000000000017LL);
/* Stop the slice traces */
spin_lock(&adapter->afu_list_lock);
for (slice = 0; slice < adapter->slices; slice++) {
if (adapter->afu[slice])
cxl_p1n_write(adapter->afu[slice], CXL_PSL_SLICE_TRACE, 0x8000000000000000LL);
}
spin_unlock(&adapter->afu_list_lock);
}
/* Helpers to export CXL mmapped IO registers via debugfs */
static int debugfs_io_u64_get(void *data, u64 *val)
{
*val = in_be64((u64 __iomem *)data);
return 0;
}
static int debugfs_io_u64_set(void *data, u64 val)
{
out_be64((u64 __iomem *)data, val);
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(fops_io_x64, debugfs_io_u64_get, debugfs_io_u64_set, "0x%016llx\n");
static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
struct dentry *parent, u64 __iomem *value)
{
return debugfs_create_file(name, mode, parent, (void *)value, &fops_io_x64);
}
int cxl_debugfs_adapter_add(struct cxl *adapter)
{
struct dentry *dir;
char buf[32];
if (!cxl_debugfs)
return -ENODEV;
snprintf(buf, 32, "card%i", adapter->adapter_num);
dir = debugfs_create_dir(buf, cxl_debugfs);
if (IS_ERR(dir))
return PTR_ERR(dir);
adapter->debugfs = dir;
debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
debugfs_create_io_x64("fir_cntl", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR_CNTL));
debugfs_create_io_x64("err_ivte", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_ErrIVTE));
debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_TRACE));
return 0;
}
void cxl_debugfs_adapter_remove(struct cxl *adapter)
{
debugfs_remove_recursive(adapter->debugfs);
}
int cxl_debugfs_afu_add(struct cxl_afu *afu)
{
struct dentry *dir;
char buf[32];
if (!afu->adapter->debugfs)
return -ENODEV;
snprintf(buf, 32, "psl%i.%i", afu->adapter->adapter_num, afu->slice);
dir = debugfs_create_dir(buf, afu->adapter->debugfs);
if (IS_ERR(dir))
return PTR_ERR(dir);
afu->debugfs = dir;
debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
debugfs_create_io_x64("afu_debug", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_AFU_DEBUG_An));
debugfs_create_io_x64("sr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SR_An));
debugfs_create_io_x64("dsisr", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DSISR_An));
debugfs_create_io_x64("dar", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DAR_An));
debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SLICE_TRACE));
return 0;
}
void cxl_debugfs_afu_remove(struct cxl_afu *afu)
{
debugfs_remove_recursive(afu->debugfs);
}
int __init cxl_debugfs_init(void)
{
struct dentry *ent;
ent = debugfs_create_dir("cxl", NULL);
if (IS_ERR(ent))
return PTR_ERR(ent);
cxl_debugfs = ent;
return 0;
}
void cxl_debugfs_exit(void)
{
debugfs_remove_recursive(cxl_debugfs);
}

295
drivers/misc/cxl/fault.c Normal file
View file

@ -0,0 +1,295 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/workqueue.h>
#include <linux/sched.h>
#include <linux/pid.h>
#include <linux/mm.h>
#include <linux/moduleparam.h>
#undef MODULE_PARAM_PREFIX
#define MODULE_PARAM_PREFIX "cxl" "."
#include <asm/current.h>
#include <asm/copro.h>
#include <asm/mmu.h>
#include "cxl.h"
static bool sste_matches(struct cxl_sste *sste, struct copro_slb *slb)
{
return ((sste->vsid_data == cpu_to_be64(slb->vsid)) &&
(sste->esid_data == cpu_to_be64(slb->esid)));
}
/*
* This finds a free SSTE for the given SLB, or returns NULL if it's already in
* the segment table.
*/
static struct cxl_sste* find_free_sste(struct cxl_context *ctx,
struct copro_slb *slb)
{
struct cxl_sste *primary, *sste, *ret = NULL;
unsigned int mask = (ctx->sst_size >> 7) - 1; /* SSTP0[SegTableSize] */
unsigned int entry;
unsigned int hash;
if (slb->vsid & SLB_VSID_B_1T)
hash = (slb->esid >> SID_SHIFT_1T) & mask;
else /* 256M */
hash = (slb->esid >> SID_SHIFT) & mask;
primary = ctx->sstp + (hash << 3);
for (entry = 0, sste = primary; entry < 8; entry++, sste++) {
if (!ret && !(be64_to_cpu(sste->esid_data) & SLB_ESID_V))
ret = sste;
if (sste_matches(sste, slb))
return NULL;
}
if (ret)
return ret;
/* Nothing free, select an entry to cast out */
ret = primary + ctx->sst_lru;
ctx->sst_lru = (ctx->sst_lru + 1) & 0x7;
return ret;
}
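/*
 * Illustrative sketch (not part of the driver): with the single-page segment
 * table allocated by cxl_alloc_sst() (sst_size == 4096), the group mask above
 * is (4096 >> 7) - 1 == 31, i.e. 32 groups of eight 16-byte SSTEs. A 256M
 * segment therefore hashes to group (esid >> SID_SHIFT) & 31 and the loop
 * scans that group's eight entries for a free or matching slot:
 *
 *	mask    = (4096 >> 7) - 1;              // 31
 *	hash    = (slb->esid >> SID_SHIFT) & mask;
 *	primary = ctx->sstp + (hash << 3);      // first of 8 candidate SSTEs
 */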
static void cxl_load_segment(struct cxl_context *ctx, struct copro_slb *slb)
{
/* mask is the group index, we search primary and secondary here. */
struct cxl_sste *sste;
unsigned long flags;
spin_lock_irqsave(&ctx->sste_lock, flags);
sste = find_free_sste(ctx, slb);
if (!sste)
goto out_unlock;
pr_devel("CXL Populating SST[%li]: %#llx %#llx\n",
sste - ctx->sstp, slb->vsid, slb->esid);
sste->vsid_data = cpu_to_be64(slb->vsid);
sste->esid_data = cpu_to_be64(slb->esid);
out_unlock:
spin_unlock_irqrestore(&ctx->sste_lock, flags);
}
static int cxl_fault_segment(struct cxl_context *ctx, struct mm_struct *mm,
u64 ea)
{
struct copro_slb slb = {0,0};
int rc;
if (!(rc = copro_calculate_slb(mm, ea, &slb))) {
cxl_load_segment(ctx, &slb);
}
return rc;
}
static void cxl_ack_ae(struct cxl_context *ctx)
{
unsigned long flags;
cxl_ack_irq(ctx, CXL_PSL_TFC_An_AE, 0);
spin_lock_irqsave(&ctx->lock, flags);
ctx->pending_fault = true;
ctx->fault_addr = ctx->dar;
ctx->fault_dsisr = ctx->dsisr;
spin_unlock_irqrestore(&ctx->lock, flags);
wake_up_all(&ctx->wq);
}
static int cxl_handle_segment_miss(struct cxl_context *ctx,
struct mm_struct *mm, u64 ea)
{
int rc;
pr_devel("CXL interrupt: Segment fault pe: %i ea: %#llx\n", ctx->pe, ea);
if ((rc = cxl_fault_segment(ctx, mm, ea)))
cxl_ack_ae(ctx);
else {
mb(); /* Order seg table write to TFC MMIO write */
cxl_ack_irq(ctx, CXL_PSL_TFC_An_R, 0);
}
return IRQ_HANDLED;
}
static void cxl_handle_page_fault(struct cxl_context *ctx,
struct mm_struct *mm, u64 dsisr, u64 dar)
{
unsigned flt = 0;
int result;
unsigned long access, flags;
if ((result = copro_handle_mm_fault(mm, dar, dsisr, &flt))) {
pr_devel("copro_handle_mm_fault failed: %#x\n", result);
return cxl_ack_ae(ctx);
}
/*
* update_mmu_cache() will not have loaded the hash since current->trap
* is not a 0x400 or 0x300, so just call hash_page_mm() here.
*/
access = _PAGE_PRESENT;
if (dsisr & CXL_PSL_DSISR_An_S)
access |= _PAGE_RW;
if ((!ctx->kernel) || !(dar & (1ULL << 63))) /* user addresses have the top bit clear */
access |= _PAGE_USER;
local_irq_save(flags);
hash_page_mm(mm, dar, access, 0x300);
local_irq_restore(flags);
pr_devel("Page fault successfully handled for pe: %i!\n", ctx->pe);
cxl_ack_irq(ctx, CXL_PSL_TFC_An_R, 0);
}
void cxl_handle_fault(struct work_struct *fault_work)
{
struct cxl_context *ctx =
container_of(fault_work, struct cxl_context, fault_work);
u64 dsisr = ctx->dsisr;
u64 dar = ctx->dar;
struct task_struct *task;
struct mm_struct *mm;
if (cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An) != dsisr ||
cxl_p2n_read(ctx->afu, CXL_PSL_DAR_An) != dar ||
cxl_p2n_read(ctx->afu, CXL_PSL_PEHandle_An) != ctx->pe) {
/* Most likely explanation is harmless - a dedicated process
* has detached and these were cleared by the PSL purge, but
* warn about it just in case */
dev_notice(&ctx->afu->dev, "cxl_handle_fault: Translation fault regs changed\n");
return;
}
pr_devel("CXL BOTTOM HALF handling fault for afu pe: %i. "
"DSISR: %#llx DAR: %#llx\n", ctx->pe, dsisr, dar);
if (!(task = get_pid_task(ctx->pid, PIDTYPE_PID))) {
pr_devel("cxl_handle_fault unable to get task %i\n",
pid_nr(ctx->pid));
cxl_ack_ae(ctx);
return;
}
if (!(mm = get_task_mm(task))) {
pr_devel("cxl_handle_fault unable to get mm %i\n",
pid_nr(ctx->pid));
cxl_ack_ae(ctx);
goto out;
}
if (dsisr & CXL_PSL_DSISR_An_DS)
cxl_handle_segment_miss(ctx, mm, dar);
else if (dsisr & CXL_PSL_DSISR_An_DM)
cxl_handle_page_fault(ctx, mm, dsisr, dar);
else
WARN(1, "cxl_handle_fault has nothing to handle\n");
mmput(mm);
out:
put_task_struct(task);
}
static void cxl_prefault_one(struct cxl_context *ctx, u64 ea)
{
int rc;
struct task_struct *task;
struct mm_struct *mm;
if (!(task = get_pid_task(ctx->pid, PIDTYPE_PID))) {
pr_devel("cxl_prefault_one unable to get task %i\n",
pid_nr(ctx->pid));
return;
}
if (!(mm = get_task_mm(task))) {
pr_devel("cxl_prefault_one unable to get mm %i\n",
pid_nr(ctx->pid));
put_task_struct(task);
return;
}
rc = cxl_fault_segment(ctx, mm, ea);
mmput(mm);
put_task_struct(task);
}
static u64 next_segment(u64 ea, u64 vsid)
{
if (vsid & SLB_VSID_B_1T)
ea |= (1ULL << 40) - 1;
else
ea |= (1ULL << 28) - 1;
return ea + 1;
}
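/*
 * Worked example (illustrative only): for an ordinary 256M segment the helper
 * rounds the address up to the next 256MB boundary, and for a 1T segment to
 * the next 1TB boundary:
 *
 *	next_segment(0x12345678, vsid);                  returns 0x20000000
 *	next_segment(0x12345678, vsid | SLB_VSID_B_1T);  returns 0x10000000000
 */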
static void cxl_prefault_vma(struct cxl_context *ctx)
{
u64 ea, last_esid = 0;
struct copro_slb slb;
struct vm_area_struct *vma;
int rc;
struct task_struct *task;
struct mm_struct *mm;
if (!(task = get_pid_task(ctx->pid, PIDTYPE_PID))) {
pr_devel("cxl_prefault_vma unable to get task %i\n",
pid_nr(ctx->pid));
return;
}
if (!(mm = get_task_mm(task))) {
pr_devel("cxl_prefault_vm unable to get mm %i\n",
pid_nr(ctx->pid));
goto out1;
}
down_read(&mm->mmap_sem);
for (vma = mm->mmap; vma; vma = vma->vm_next) {
for (ea = vma->vm_start; ea < vma->vm_end;
ea = next_segment(ea, slb.vsid)) {
rc = copro_calculate_slb(mm, ea, &slb);
if (rc)
continue;
if (last_esid == slb.esid)
continue;
cxl_load_segment(ctx, &slb);
last_esid = slb.esid;
}
}
up_read(&mm->mmap_sem);
mmput(mm);
out1:
put_task_struct(task);
}
void cxl_prefault(struct cxl_context *ctx, u64 wed)
{
switch (ctx->afu->prefault_mode) {
case CXL_PREFAULT_WED:
cxl_prefault_one(ctx, wed);
break;
case CXL_PREFAULT_ALL:
cxl_prefault_vma(ctx);
break;
default:
break;
}
}

522
drivers/misc/cxl/file.c Normal file
View file

@ -0,0 +1,522 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/spinlock.h>
#include <linux/module.h>
#include <linux/export.h>
#include <linux/kernel.h>
#include <linux/bitmap.h>
#include <linux/sched.h>
#include <linux/poll.h>
#include <linux/pid.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <asm/cputable.h>
#include <asm/current.h>
#include <asm/copro.h>
#include "cxl.h"
#define CXL_NUM_MINORS 256 /* Total to reserve */
#define CXL_DEV_MINORS 13 /* 1 control + 4 AFUs * 3 (dedicated/master/shared) */
#define CXL_CARD_MINOR(adapter) (adapter->adapter_num * CXL_DEV_MINORS)
#define CXL_AFU_MINOR_D(afu) (CXL_CARD_MINOR(afu->adapter) + 1 + (3 * afu->slice))
#define CXL_AFU_MINOR_M(afu) (CXL_AFU_MINOR_D(afu) + 1)
#define CXL_AFU_MINOR_S(afu) (CXL_AFU_MINOR_D(afu) + 2)
#define CXL_AFU_MKDEV_D(afu) MKDEV(MAJOR(cxl_dev), CXL_AFU_MINOR_D(afu))
#define CXL_AFU_MKDEV_M(afu) MKDEV(MAJOR(cxl_dev), CXL_AFU_MINOR_M(afu))
#define CXL_AFU_MKDEV_S(afu) MKDEV(MAJOR(cxl_dev), CXL_AFU_MINOR_S(afu))
#define CXL_DEVT_ADAPTER(dev) (MINOR(dev) / CXL_DEV_MINORS)
#define CXL_DEVT_AFU(dev) ((MINOR(dev) % CXL_DEV_MINORS - 1) / 3)
#define CXL_DEVT_IS_CARD(dev) (MINOR(dev) % CXL_DEV_MINORS == 0)
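/*
 * Minor number layout, derived from the macros above: each adapter owns
 * CXL_DEV_MINORS (13) consecutive minors. For adapter N, minor 13*N is the
 * card itself and each AFU slice s then takes three minors:
 *
 *	dedicated: 13*N + 1 + 3*s
 *	master:    13*N + 2 + 3*s
 *	shared:    13*N + 3 + 3*s
 *
 * e.g. adapter 0, slice 1 uses minors 4 (dedicated), 5 (master) and 6 (shared).
 */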
static dev_t cxl_dev;
static struct class *cxl_class;
static int __afu_open(struct inode *inode, struct file *file, bool master)
{
struct cxl *adapter;
struct cxl_afu *afu;
struct cxl_context *ctx;
int adapter_num = CXL_DEVT_ADAPTER(inode->i_rdev);
int slice = CXL_DEVT_AFU(inode->i_rdev);
int rc = -ENODEV;
pr_devel("afu_open afu%i.%i\n", slice, adapter_num);
if (!(adapter = get_cxl_adapter(adapter_num)))
return -ENODEV;
if (slice > adapter->slices)
goto err_put_adapter;
spin_lock(&adapter->afu_list_lock);
if (!(afu = adapter->afu[slice])) {
spin_unlock(&adapter->afu_list_lock);
goto err_put_adapter;
}
get_device(&afu->dev);
spin_unlock(&adapter->afu_list_lock);
if (!afu->current_mode)
goto err_put_afu;
if (!(ctx = cxl_context_alloc())) {
rc = -ENOMEM;
goto err_put_afu;
}
if ((rc = cxl_context_init(ctx, afu, master, inode->i_mapping)))
goto err_put_afu;
pr_devel("afu_open pe: %i\n", ctx->pe);
file->private_data = ctx;
cxl_ctx_get();
/* Our ref on the AFU will now hold the adapter */
put_device(&adapter->dev);
return 0;
err_put_afu:
put_device(&afu->dev);
err_put_adapter:
put_device(&adapter->dev);
return rc;
}
static int afu_open(struct inode *inode, struct file *file)
{
return __afu_open(inode, file, false);
}
static int afu_master_open(struct inode *inode, struct file *file)
{
return __afu_open(inode, file, true);
}
static int afu_release(struct inode *inode, struct file *file)
{
struct cxl_context *ctx = file->private_data;
pr_devel("%s: closing cxl file descriptor. pe: %i\n",
__func__, ctx->pe);
cxl_context_detach(ctx);
mutex_lock(&ctx->mapping_lock);
ctx->mapping = NULL;
mutex_unlock(&ctx->mapping_lock);
put_device(&ctx->afu->dev);
/*
* At this point all bottom halves have finished and we should be
* getting no more IRQs from the hardware for this context. Once it's
* removed from the IDR (and RCU synchronised) it's safe to free the
* sstp and context.
*/
cxl_context_free(ctx);
cxl_ctx_put();
return 0;
}
static long afu_ioctl_start_work(struct cxl_context *ctx,
struct cxl_ioctl_start_work __user *uwork)
{
struct cxl_ioctl_start_work work;
u64 amr = 0;
int rc;
pr_devel("%s: pe: %i\n", __func__, ctx->pe);
mutex_lock(&ctx->status_mutex);
if (ctx->status != OPENED) {
rc = -EIO;
goto out;
}
if (copy_from_user(&work, uwork,
sizeof(struct cxl_ioctl_start_work))) {
rc = -EFAULT;
goto out;
}
/*
* if any of the reserved fields are set or any of the unused
* flags are set it's invalid
*/
if (work.reserved1 || work.reserved2 || work.reserved3 ||
work.reserved4 || work.reserved5 || work.reserved6 ||
(work.flags & ~CXL_START_WORK_ALL)) {
rc = -EINVAL;
goto out;
}
if (!(work.flags & CXL_START_WORK_NUM_IRQS))
work.num_interrupts = ctx->afu->pp_irqs;
else if ((work.num_interrupts < ctx->afu->pp_irqs) ||
(work.num_interrupts > ctx->afu->irqs_max)) {
rc = -EINVAL;
goto out;
}
if ((rc = afu_register_irqs(ctx, work.num_interrupts)))
goto out;
if (work.flags & CXL_START_WORK_AMR)
amr = work.amr & mfspr(SPRN_UAMOR);
/*
* We grab the PID here and not in the file open to allow for the case
* where a process (master, some daemon, etc) has opened the chardev on
* behalf of another process, so the AFU's mm gets bound to the process
* that performs this ioctl and not the process that opened the file.
*/
ctx->pid = get_pid(get_task_pid(current, PIDTYPE_PID));
if ((rc = cxl_attach_process(ctx, false, work.work_element_descriptor,
amr)))
goto out;
ctx->status = STARTED;
rc = 0;
out:
mutex_unlock(&ctx->status_mutex);
return rc;
}
static long afu_ioctl_process_element(struct cxl_context *ctx,
int __user *upe)
{
pr_devel("%s: pe: %i\n", __func__, ctx->pe);
if (copy_to_user(upe, &ctx->pe, sizeof(__u32)))
return -EFAULT;
return 0;
}
static long afu_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct cxl_context *ctx = file->private_data;
if (ctx->status == CLOSED)
return -EIO;
pr_devel("afu_ioctl\n");
switch (cmd) {
case CXL_IOCTL_START_WORK:
return afu_ioctl_start_work(ctx, (struct cxl_ioctl_start_work __user *)arg);
case CXL_IOCTL_GET_PROCESS_ELEMENT:
return afu_ioctl_process_element(ctx, (__u32 __user *)arg);
}
return -EINVAL;
}
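/*
 * Userspace usage sketch (illustrative only, assuming the uapi definitions in
 * include/uapi/misc/cxl.h that ship with this driver; wed, pe and size are
 * placeholders): a process opens the shared chardev, starts work with a work
 * element descriptor and then mmaps the problem state area:
 *
 *	struct cxl_ioctl_start_work work = { 0 };
 *	int fd = open("/dev/cxl/afu0.0s", O_RDWR);
 *
 *	work.work_element_descriptor = wed;	// AFU specific WED
 *	ioctl(fd, CXL_IOCTL_START_WORK, &work);
 *	ioctl(fd, CXL_IOCTL_GET_PROCESS_ELEMENT, &pe);
 *	mmio = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 */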
static long afu_compat_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
return afu_ioctl(file, cmd, arg);
}
static int afu_mmap(struct file *file, struct vm_area_struct *vm)
{
struct cxl_context *ctx = file->private_data;
/* AFU must be started before we can MMIO */
if (ctx->status != STARTED)
return -EIO;
return cxl_context_iomap(ctx, vm);
}
static unsigned int afu_poll(struct file *file, struct poll_table_struct *poll)
{
struct cxl_context *ctx = file->private_data;
int mask = 0;
unsigned long flags;
poll_wait(file, &ctx->wq, poll);
pr_devel("afu_poll wait done pe: %i\n", ctx->pe);
spin_lock_irqsave(&ctx->lock, flags);
if (ctx->pending_irq || ctx->pending_fault ||
ctx->pending_afu_err)
mask |= POLLIN | POLLRDNORM;
else if (ctx->status == CLOSED)
/* Only error on closed when there are no further events pending
*/
mask |= POLLERR;
spin_unlock_irqrestore(&ctx->lock, flags);
pr_devel("afu_poll pe: %i returning %#x\n", ctx->pe, mask);
return mask;
}
static inline int ctx_event_pending(struct cxl_context *ctx)
{
return (ctx->pending_irq || ctx->pending_fault ||
ctx->pending_afu_err || (ctx->status == CLOSED));
}
static ssize_t afu_read(struct file *file, char __user *buf, size_t count,
loff_t *off)
{
struct cxl_context *ctx = file->private_data;
struct cxl_event event;
unsigned long flags;
int rc;
DEFINE_WAIT(wait);
if (count < CXL_READ_MIN_SIZE)
return -EINVAL;
spin_lock_irqsave(&ctx->lock, flags);
for (;;) {
prepare_to_wait(&ctx->wq, &wait, TASK_INTERRUPTIBLE);
if (ctx_event_pending(ctx))
break;
if (file->f_flags & O_NONBLOCK) {
rc = -EAGAIN;
goto out;
}
if (signal_pending(current)) {
rc = -ERESTARTSYS;
goto out;
}
spin_unlock_irqrestore(&ctx->lock, flags);
pr_devel("afu_read going to sleep...\n");
schedule();
pr_devel("afu_read woken up\n");
spin_lock_irqsave(&ctx->lock, flags);
}
finish_wait(&ctx->wq, &wait);
memset(&event, 0, sizeof(event));
event.header.process_element = ctx->pe;
event.header.size = sizeof(struct cxl_event_header);
if (ctx->pending_irq) {
pr_devel("afu_read delivering AFU interrupt\n");
event.header.size += sizeof(struct cxl_event_afu_interrupt);
event.header.type = CXL_EVENT_AFU_INTERRUPT;
event.irq.irq = find_first_bit(ctx->irq_bitmap, ctx->irq_count) + 1;
clear_bit(event.irq.irq - 1, ctx->irq_bitmap);
if (bitmap_empty(ctx->irq_bitmap, ctx->irq_count))
ctx->pending_irq = false;
} else if (ctx->pending_fault) {
pr_devel("afu_read delivering data storage fault\n");
event.header.size += sizeof(struct cxl_event_data_storage);
event.header.type = CXL_EVENT_DATA_STORAGE;
event.fault.addr = ctx->fault_addr;
event.fault.dsisr = ctx->fault_dsisr;
ctx->pending_fault = false;
} else if (ctx->pending_afu_err) {
pr_devel("afu_read delivering afu error\n");
event.header.size += sizeof(struct cxl_event_afu_error);
event.header.type = CXL_EVENT_AFU_ERROR;
event.afu_error.error = ctx->afu_err;
ctx->pending_afu_err = false;
} else if (ctx->status == CLOSED) {
pr_devel("afu_read fatal error\n");
spin_unlock_irqrestore(&ctx->lock, flags);
return -EIO;
} else
WARN(1, "afu_read must be buggy\n");
spin_unlock_irqrestore(&ctx->lock, flags);
if (copy_to_user(buf, &event, event.header.size))
return -EFAULT;
return event.header.size;
out:
finish_wait(&ctx->wq, &wait);
spin_unlock_irqrestore(&ctx->lock, flags);
return rc;
}
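/*
 * Matching userspace read loop sketch (illustrative only): events arrive as a
 * struct cxl_event whose header carries the type and total size. The reader
 * must supply a buffer of at least CXL_READ_MIN_SIZE bytes and can then
 * switch on the header type:
 *
 *	char buf[CXL_READ_MIN_SIZE];
 *	struct cxl_event *ev = (struct cxl_event *)buf;
 *
 *	while (read(fd, buf, sizeof(buf)) > 0) {
 *		switch (ev->header.type) {
 *		case CXL_EVENT_AFU_INTERRUPT:	// ev->irq.irq
 *			break;
 *		case CXL_EVENT_DATA_STORAGE:	// ev->fault.addr, ev->fault.dsisr
 *			break;
 *		case CXL_EVENT_AFU_ERROR:	// ev->afu_error.error
 *			break;
 *		}
 *	}
 */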
static const struct file_operations afu_fops = {
.owner = THIS_MODULE,
.open = afu_open,
.poll = afu_poll,
.read = afu_read,
.release = afu_release,
.unlocked_ioctl = afu_ioctl,
.compat_ioctl = afu_compat_ioctl,
.mmap = afu_mmap,
};
static const struct file_operations afu_master_fops = {
.owner = THIS_MODULE,
.open = afu_master_open,
.poll = afu_poll,
.read = afu_read,
.release = afu_release,
.unlocked_ioctl = afu_ioctl,
.compat_ioctl = afu_compat_ioctl,
.mmap = afu_mmap,
};
static char *cxl_devnode(struct device *dev, umode_t *mode)
{
if (CXL_DEVT_IS_CARD(dev->devt)) {
/*
* These minor numbers will eventually be used to program the
* PSL and AFUs once we have dynamic reprogramming support
*/
return NULL;
}
return kasprintf(GFP_KERNEL, "cxl/%s", dev_name(dev));
}
extern struct class *cxl_class;
static int cxl_add_chardev(struct cxl_afu *afu, dev_t devt, struct cdev *cdev,
struct device **chardev, char *postfix, char *desc,
const struct file_operations *fops)
{
struct device *dev;
int rc;
cdev_init(cdev, fops);
if ((rc = cdev_add(cdev, devt, 1))) {
dev_err(&afu->dev, "Unable to add %s chardev: %i\n", desc, rc);
return rc;
}
dev = device_create(cxl_class, &afu->dev, devt, afu,
"afu%i.%i%s", afu->adapter->adapter_num, afu->slice, postfix);
if (IS_ERR(dev)) {
dev_err(&afu->dev, "Unable to create %s chardev in sysfs: %i\n", desc, rc);
rc = PTR_ERR(dev);
goto err;
}
*chardev = dev;
return 0;
err:
cdev_del(cdev);
return rc;
}
int cxl_chardev_d_afu_add(struct cxl_afu *afu)
{
return cxl_add_chardev(afu, CXL_AFU_MKDEV_D(afu), &afu->afu_cdev_d,
&afu->chardev_d, "d", "dedicated",
&afu_master_fops); /* Uses master fops */
}
int cxl_chardev_m_afu_add(struct cxl_afu *afu)
{
return cxl_add_chardev(afu, CXL_AFU_MKDEV_M(afu), &afu->afu_cdev_m,
&afu->chardev_m, "m", "master",
&afu_master_fops);
}
int cxl_chardev_s_afu_add(struct cxl_afu *afu)
{
return cxl_add_chardev(afu, CXL_AFU_MKDEV_S(afu), &afu->afu_cdev_s,
&afu->chardev_s, "s", "shared",
&afu_fops);
}
void cxl_chardev_afu_remove(struct cxl_afu *afu)
{
if (afu->chardev_d) {
cdev_del(&afu->afu_cdev_d);
device_unregister(afu->chardev_d);
afu->chardev_d = NULL;
}
if (afu->chardev_m) {
cdev_del(&afu->afu_cdev_m);
device_unregister(afu->chardev_m);
afu->chardev_m = NULL;
}
if (afu->chardev_s) {
cdev_del(&afu->afu_cdev_s);
device_unregister(afu->chardev_s);
afu->chardev_s = NULL;
}
}
int cxl_register_afu(struct cxl_afu *afu)
{
afu->dev.class = cxl_class;
return device_register(&afu->dev);
}
int cxl_register_adapter(struct cxl *adapter)
{
adapter->dev.class = cxl_class;
/*
* Future: When we support dynamically reprogramming the PSL & AFU we
* will expose the interface to do that via a chardev:
* adapter->dev.devt = CXL_CARD_MKDEV(adapter);
*/
return device_register(&adapter->dev);
}
int __init cxl_file_init(void)
{
int rc;
/*
* If these change we really need to update API. Either change some
* flags or update API version number CXL_API_VERSION.
*/
BUILD_BUG_ON(CXL_API_VERSION != 1);
BUILD_BUG_ON(sizeof(struct cxl_ioctl_start_work) != 64);
BUILD_BUG_ON(sizeof(struct cxl_event_header) != 8);
BUILD_BUG_ON(sizeof(struct cxl_event_afu_interrupt) != 8);
BUILD_BUG_ON(sizeof(struct cxl_event_data_storage) != 32);
BUILD_BUG_ON(sizeof(struct cxl_event_afu_error) != 16);
if ((rc = alloc_chrdev_region(&cxl_dev, 0, CXL_NUM_MINORS, "cxl"))) {
pr_err("Unable to allocate CXL major number: %i\n", rc);
return rc;
}
pr_devel("CXL device allocated, MAJOR %i\n", MAJOR(cxl_dev));
cxl_class = class_create(THIS_MODULE, "cxl");
if (IS_ERR(cxl_class)) {
pr_err("Unable to create CXL class\n");
rc = PTR_ERR(cxl_class);
goto err;
}
cxl_class->devnode = cxl_devnode;
return 0;
err:
unregister_chrdev_region(cxl_dev, CXL_NUM_MINORS);
return rc;
}
void cxl_file_exit(void)
{
unregister_chrdev_region(cxl_dev, CXL_NUM_MINORS);
class_destroy(cxl_class);
}

403
drivers/misc/cxl/irq.c Normal file
View file

@ -0,0 +1,403 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/slab.h>
#include <linux/pid.h>
#include <asm/cputable.h>
#include <misc/cxl.h>
#include "cxl.h"
/* XXX: This is implementation specific */
static irqreturn_t handle_psl_slice_error(struct cxl_context *ctx, u64 dsisr, u64 errstat)
{
u64 fir1, fir2, fir_slice, serr, afu_debug;
fir1 = cxl_p1_read(ctx->afu->adapter, CXL_PSL_FIR1);
fir2 = cxl_p1_read(ctx->afu->adapter, CXL_PSL_FIR2);
fir_slice = cxl_p1n_read(ctx->afu, CXL_PSL_FIR_SLICE_An);
serr = cxl_p1n_read(ctx->afu, CXL_PSL_SERR_An);
afu_debug = cxl_p1n_read(ctx->afu, CXL_AFU_DEBUG_An);
dev_crit(&ctx->afu->dev, "PSL ERROR STATUS: 0x%.16llx\n", errstat);
dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%.16llx\n", fir1);
dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%.16llx\n", fir2);
dev_crit(&ctx->afu->dev, "PSL_SERR_An: 0x%.16llx\n", serr);
dev_crit(&ctx->afu->dev, "PSL_FIR_SLICE_An: 0x%.16llx\n", fir_slice);
dev_crit(&ctx->afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%.16llx\n", afu_debug);
dev_crit(&ctx->afu->dev, "STOPPING CXL TRACE\n");
cxl_stop_trace(ctx->afu->adapter);
return cxl_ack_irq(ctx, 0, errstat);
}
irqreturn_t cxl_slice_irq_err(int irq, void *data)
{
struct cxl_afu *afu = data;
u64 fir_slice, errstat, serr, afu_debug;
WARN(irq, "CXL SLICE ERROR interrupt %i\n", irq);
serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
dev_crit(&afu->dev, "PSL_SERR_An: 0x%.16llx\n", serr);
dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%.16llx\n", fir_slice);
dev_crit(&afu->dev, "CXL_PSL_ErrStat_An: 0x%.16llx\n", errstat);
dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%.16llx\n", afu_debug);
cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
return IRQ_HANDLED;
}
static irqreturn_t cxl_irq_err(int irq, void *data)
{
struct cxl *adapter = data;
u64 fir1, fir2, err_ivte;
WARN(1, "CXL ERROR interrupt %i\n", irq);
err_ivte = cxl_p1_read(adapter, CXL_PSL_ErrIVTE);
dev_crit(&adapter->dev, "PSL_ErrIVTE: 0x%.16llx\n", err_ivte);
dev_crit(&adapter->dev, "STOPPING CXL TRACE\n");
cxl_stop_trace(adapter);
fir1 = cxl_p1_read(adapter, CXL_PSL_FIR1);
fir2 = cxl_p1_read(adapter, CXL_PSL_FIR2);
dev_crit(&adapter->dev, "PSL_FIR1: 0x%.16llx\nPSL_FIR2: 0x%.16llx\n", fir1, fir2);
return IRQ_HANDLED;
}
static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 dar)
{
ctx->dsisr = dsisr;
ctx->dar = dar;
schedule_work(&ctx->fault_work);
return IRQ_HANDLED;
}
static irqreturn_t cxl_irq(int irq, void *data)
{
struct cxl_context *ctx = data;
struct cxl_irq_info irq_info;
u64 dsisr, dar;
int result;
if ((result = cxl_get_irq(ctx, &irq_info))) {
WARN(1, "Unable to get CXL IRQ Info: %i\n", result);
return IRQ_HANDLED;
}
dsisr = irq_info.dsisr;
dar = irq_info.dar;
pr_devel("CXL interrupt %i for afu pe: %i DSISR: %#llx DAR: %#llx\n", irq, ctx->pe, dsisr, dar);
if (dsisr & CXL_PSL_DSISR_An_DS) {
/*
* We don't inherently need to sleep to handle this, but we do
* need to get a ref to the task's mm, which we can't do from
* irq context without the potential for a deadlock since it
* takes the task_lock. An alternate option would be to keep a
* reference to the task's mm the entire time it has cxl open,
* but to do that we need to solve the issue where we hold a
* ref to the mm, but the mm can hold a ref to the fd after an
* mmap preventing anything from being cleaned up.
*/
pr_devel("Scheduling segment miss handling for later pe: %i\n", ctx->pe);
return schedule_cxl_fault(ctx, dsisr, dar);
}
if (dsisr & CXL_PSL_DSISR_An_M)
pr_devel("CXL interrupt: PTE not found\n");
if (dsisr & CXL_PSL_DSISR_An_P)
pr_devel("CXL interrupt: Storage protection violation\n");
if (dsisr & CXL_PSL_DSISR_An_A)
pr_devel("CXL interrupt: AFU lock access to write through or cache inhibited storage\n");
if (dsisr & CXL_PSL_DSISR_An_S)
pr_devel("CXL interrupt: Access was afu_wr or afu_zero\n");
if (dsisr & CXL_PSL_DSISR_An_K)
pr_devel("CXL interrupt: Access not permitted by virtual page class key protection\n");
if (dsisr & CXL_PSL_DSISR_An_DM) {
/*
* In some cases we might be able to handle the fault
* immediately if hash_page would succeed, but we still need
* the task's mm, which as above we can't get without a lock
*/
pr_devel("Scheduling page fault handling for later pe: %i\n", ctx->pe);
return schedule_cxl_fault(ctx, dsisr, dar);
}
if (dsisr & CXL_PSL_DSISR_An_ST)
WARN(1, "CXL interrupt: Segment Table PTE not found\n");
if (dsisr & CXL_PSL_DSISR_An_UR)
pr_devel("CXL interrupt: AURP PTE not found\n");
if (dsisr & CXL_PSL_DSISR_An_PE)
return handle_psl_slice_error(ctx, dsisr, irq_info.errstat);
if (dsisr & CXL_PSL_DSISR_An_AE) {
pr_devel("CXL interrupt: AFU Error %.llx\n", irq_info.afu_err);
if (ctx->pending_afu_err) {
/*
* This shouldn't happen - the PSL treats these errors
* as fatal and will have reset the AFU, so there's not
* much point buffering multiple AFU errors.
* OTOH if we DO ever see a storm of these come in it's
* probably best that we log them somewhere:
*/
dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error "
"undelivered to pe %i: %.llx\n",
ctx->pe, irq_info.afu_err);
} else {
spin_lock(&ctx->lock);
ctx->afu_err = irq_info.afu_err;
ctx->pending_afu_err = 1;
spin_unlock(&ctx->lock);
wake_up_all(&ctx->wq);
}
cxl_ack_irq(ctx, CXL_PSL_TFC_An_A, 0);
return IRQ_HANDLED;
}
if (dsisr & CXL_PSL_DSISR_An_OC)
pr_devel("CXL interrupt: OS Context Warning\n");
WARN(1, "Unhandled CXL PSL IRQ\n");
return IRQ_HANDLED;
}
static irqreturn_t cxl_irq_multiplexed(int irq, void *data)
{
struct cxl_afu *afu = data;
struct cxl_context *ctx;
int ph = cxl_p2n_read(afu, CXL_PSL_PEHandle_An) & 0xffff;
int ret;
rcu_read_lock();
ctx = idr_find(&afu->contexts_idr, ph);
if (ctx) {
ret = cxl_irq(irq, ctx);
rcu_read_unlock();
return ret;
}
rcu_read_unlock();
WARN(1, "Unable to demultiplex CXL PSL IRQ\n");
return IRQ_HANDLED;
}
static irqreturn_t cxl_irq_afu(int irq, void *data)
{
struct cxl_context *ctx = data;
irq_hw_number_t hwirq = irqd_to_hwirq(irq_get_irq_data(irq));
int irq_off, afu_irq = 1;
__u16 range;
int r;
for (r = 1; r < CXL_IRQ_RANGES; r++) {
irq_off = hwirq - ctx->irqs.offset[r];
range = ctx->irqs.range[r];
if (irq_off >= 0 && irq_off < range) {
afu_irq += irq_off;
break;
}
afu_irq += range;
}
if (unlikely(r >= CXL_IRQ_RANGES)) {
WARN(1, "Recieved AFU IRQ out of range for pe %i (virq %i hwirq %lx)\n",
ctx->pe, irq, hwirq);
return IRQ_HANDLED;
}
pr_devel("Received AFU interrupt %i for pe: %i (virq %i hwirq %lx)\n",
afu_irq, ctx->pe, irq, hwirq);
if (unlikely(!ctx->irq_bitmap)) {
WARN(1, "Recieved AFU IRQ for context with no IRQ bitmap\n");
return IRQ_HANDLED;
}
spin_lock(&ctx->lock);
set_bit(afu_irq - 1, ctx->irq_bitmap);
ctx->pending_irq = true;
spin_unlock(&ctx->lock);
wake_up_all(&ctx->wq);
return IRQ_HANDLED;
}
unsigned int cxl_map_irq(struct cxl *adapter, irq_hw_number_t hwirq,
irq_handler_t handler, void *cookie)
{
unsigned int virq;
int result;
/* IRQ Domain? */
virq = irq_create_mapping(NULL, hwirq);
if (!virq) {
dev_warn(&adapter->dev, "cxl_map_irq: irq_create_mapping failed\n");
return 0;
}
cxl_setup_irq(adapter, hwirq, virq);
pr_devel("hwirq %#lx mapped to virq %u\n", hwirq, virq);
result = request_irq(virq, handler, 0, "cxl", cookie);
if (result) {
dev_warn(&adapter->dev, "cxl_map_irq: request_irq failed: %i\n", result);
return 0;
}
return virq;
}
void cxl_unmap_irq(unsigned int virq, void *cookie)
{
free_irq(virq, cookie);
irq_dispose_mapping(virq);
}
static int cxl_register_one_irq(struct cxl *adapter,
irq_handler_t handler,
void *cookie,
irq_hw_number_t *dest_hwirq,
unsigned int *dest_virq)
{
int hwirq, virq;
if ((hwirq = cxl_alloc_one_irq(adapter)) < 0)
return hwirq;
if (!(virq = cxl_map_irq(adapter, hwirq, handler, cookie)))
goto err;
*dest_hwirq = hwirq;
*dest_virq = virq;
return 0;
err:
cxl_release_one_irq(adapter, hwirq);
return -ENOMEM;
}
int cxl_register_psl_err_irq(struct cxl *adapter)
{
int rc;
if ((rc = cxl_register_one_irq(adapter, cxl_irq_err, adapter,
&adapter->err_hwirq,
&adapter->err_virq)))
return rc;
cxl_p1_write(adapter, CXL_PSL_ErrIVTE, adapter->err_hwirq & 0xffff);
return 0;
}
void cxl_release_psl_err_irq(struct cxl *adapter)
{
cxl_p1_write(adapter, CXL_PSL_ErrIVTE, 0x0000000000000000);
cxl_unmap_irq(adapter->err_virq, adapter);
cxl_release_one_irq(adapter, adapter->err_hwirq);
}
int cxl_register_serr_irq(struct cxl_afu *afu)
{
u64 serr;
int rc;
if ((rc = cxl_register_one_irq(afu->adapter, cxl_slice_irq_err, afu,
&afu->serr_hwirq,
&afu->serr_virq)))
return rc;
serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
return 0;
}
void cxl_release_serr_irq(struct cxl_afu *afu)
{
cxl_p1n_write(afu, CXL_PSL_SERR_An, 0x0000000000000000);
cxl_unmap_irq(afu->serr_virq, afu);
cxl_release_one_irq(afu->adapter, afu->serr_hwirq);
}
int cxl_register_psl_irq(struct cxl_afu *afu)
{
return cxl_register_one_irq(afu->adapter, cxl_irq_multiplexed, afu,
&afu->psl_hwirq, &afu->psl_virq);
}
void cxl_release_psl_irq(struct cxl_afu *afu)
{
cxl_unmap_irq(afu->psl_virq, afu);
cxl_release_one_irq(afu->adapter, afu->psl_hwirq);
}
int afu_register_irqs(struct cxl_context *ctx, u32 count)
{
irq_hw_number_t hwirq;
int rc, r, i;
if ((rc = cxl_alloc_irq_ranges(&ctx->irqs, ctx->afu->adapter, count)))
return rc;
/* Multiplexed PSL Interrupt */
ctx->irqs.offset[0] = ctx->afu->psl_hwirq;
ctx->irqs.range[0] = 1;
ctx->irq_count = count;
ctx->irq_bitmap = kcalloc(BITS_TO_LONGS(count),
sizeof(*ctx->irq_bitmap), GFP_KERNEL);
if (!ctx->irq_bitmap)
return -ENOMEM;
for (r = 1; r < CXL_IRQ_RANGES; r++) {
hwirq = ctx->irqs.offset[r];
for (i = 0; i < ctx->irqs.range[r]; hwirq++, i++) {
cxl_map_irq(ctx->afu->adapter, hwirq,
cxl_irq_afu, ctx);
}
}
return 0;
}
void afu_release_irqs(struct cxl_context *ctx)
{
irq_hw_number_t hwirq;
unsigned int virq;
int r, i;
for (r = 1; r < CXL_IRQ_RANGES; r++) {
hwirq = ctx->irqs.offset[r];
for (i = 0; i < ctx->irqs.range[r]; hwirq++, i++) {
virq = irq_find_mapping(NULL, hwirq);
if (virq)
cxl_unmap_irq(virq, ctx);
}
}
cxl_release_irq_ranges(&ctx->irqs, ctx->afu->adapter);
}

230
drivers/misc/cxl/main.c Normal file
View file

@ -0,0 +1,230 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/spinlock.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/of.h>
#include <linux/slab.h>
#include <linux/idr.h>
#include <linux/pci.h>
#include <asm/cputable.h>
#include <misc/cxl.h>
#include "cxl.h"
static DEFINE_SPINLOCK(adapter_idr_lock);
static DEFINE_IDR(cxl_adapter_idr);
uint cxl_verbose;
module_param_named(verbose, cxl_verbose, uint, 0600);
MODULE_PARM_DESC(verbose, "Enable verbose dmesg output");
static inline void _cxl_slbia(struct cxl_context *ctx, struct mm_struct *mm)
{
struct task_struct *task;
unsigned long flags;
if (!(task = get_pid_task(ctx->pid, PIDTYPE_PID))) {
pr_devel("%s unable to get task %i\n",
__func__, pid_nr(ctx->pid));
return;
}
if (task->mm != mm)
goto out_put;
pr_devel("%s matched mm - card: %i afu: %i pe: %i\n", __func__,
ctx->afu->adapter->adapter_num, ctx->afu->slice, ctx->pe);
spin_lock_irqsave(&ctx->sste_lock, flags);
memset(ctx->sstp, 0, ctx->sst_size);
spin_unlock_irqrestore(&ctx->sste_lock, flags);
mb();
cxl_afu_slbia(ctx->afu);
out_put:
put_task_struct(task);
}
static inline void cxl_slbia_core(struct mm_struct *mm)
{
struct cxl *adapter;
struct cxl_afu *afu;
struct cxl_context *ctx;
int card, slice, id;
pr_devel("%s called\n", __func__);
spin_lock(&adapter_idr_lock);
idr_for_each_entry(&cxl_adapter_idr, adapter, card) {
/* XXX: Make this lookup faster with link from mm to ctx */
spin_lock(&adapter->afu_list_lock);
for (slice = 0; slice < adapter->slices; slice++) {
afu = adapter->afu[slice];
if (!afu->enabled)
continue;
rcu_read_lock();
idr_for_each_entry(&afu->contexts_idr, ctx, id)
_cxl_slbia(ctx, mm);
rcu_read_unlock();
}
spin_unlock(&adapter->afu_list_lock);
}
spin_unlock(&adapter_idr_lock);
}
static struct cxl_calls cxl_calls = {
.cxl_slbia = cxl_slbia_core,
.owner = THIS_MODULE,
};
int cxl_alloc_sst(struct cxl_context *ctx)
{
unsigned long vsid;
u64 ea_mask, size, sstp0, sstp1;
sstp0 = 0;
sstp1 = 0;
ctx->sst_size = PAGE_SIZE;
ctx->sst_lru = 0;
ctx->sstp = (struct cxl_sste *)get_zeroed_page(GFP_KERNEL);
if (!ctx->sstp) {
pr_err("cxl_alloc_sst: Unable to allocate segment table\n");
return -ENOMEM;
}
pr_devel("SSTP allocated at 0x%p\n", ctx->sstp);
vsid = get_kernel_vsid((u64)ctx->sstp, mmu_kernel_ssize) << 12;
sstp0 |= (u64)mmu_kernel_ssize << CXL_SSTP0_An_B_SHIFT;
sstp0 |= (SLB_VSID_KERNEL | mmu_psize_defs[mmu_linear_psize].sllp) << 50;
size = (((u64)ctx->sst_size >> 8) - 1) << CXL_SSTP0_An_SegTableSize_SHIFT;
if (unlikely(size & ~CXL_SSTP0_An_SegTableSize_MASK)) {
WARN(1, "Impossible segment table size\n");
return -EINVAL;
}
sstp0 |= size;
if (mmu_kernel_ssize == MMU_SEGSIZE_256M)
ea_mask = 0xfffff00ULL;
else
ea_mask = 0xffffffff00ULL;
sstp0 |= vsid >> (50-14); /* Top 14 bits of VSID */
sstp1 |= (vsid << (64-(50-14))) & ~ea_mask;
sstp1 |= (u64)ctx->sstp & ea_mask;
sstp1 |= CXL_SSTP1_An_V;
pr_devel("Looked up %#llx: slbfee. %#llx (ssize: %x, vsid: %#lx), copied to SSTP0: %#llx, SSTP1: %#llx\n",
(u64)ctx->sstp, (u64)ctx->sstp & ESID_MASK, mmu_kernel_ssize, vsid, sstp0, sstp1);
/* Store calculated sstp hardware pointers for use later */
ctx->sstp0 = sstp0;
ctx->sstp1 = sstp1;
return 0;
}
/* Find a CXL adapter by its number and increase its refcount */
struct cxl *get_cxl_adapter(int num)
{
struct cxl *adapter;
spin_lock(&adapter_idr_lock);
if ((adapter = idr_find(&cxl_adapter_idr, num)))
get_device(&adapter->dev);
spin_unlock(&adapter_idr_lock);
return adapter;
}
int cxl_alloc_adapter_nr(struct cxl *adapter)
{
int i;
idr_preload(GFP_KERNEL);
spin_lock(&adapter_idr_lock);
i = idr_alloc(&cxl_adapter_idr, adapter, 0, 0, GFP_NOWAIT);
spin_unlock(&adapter_idr_lock);
idr_preload_end();
if (i < 0)
return i;
adapter->adapter_num = i;
return 0;
}
void cxl_remove_adapter_nr(struct cxl *adapter)
{
idr_remove(&cxl_adapter_idr, adapter->adapter_num);
}
int cxl_afu_select_best_mode(struct cxl_afu *afu)
{
if (afu->modes_supported & CXL_MODE_DIRECTED)
return cxl_afu_activate_mode(afu, CXL_MODE_DIRECTED);
if (afu->modes_supported & CXL_MODE_DEDICATED)
return cxl_afu_activate_mode(afu, CXL_MODE_DEDICATED);
dev_warn(&afu->dev, "No supported programming modes available\n");
/* We don't fail this so the user can inspect sysfs */
return 0;
}
static int __init init_cxl(void)
{
int rc = 0;
if (!cpu_has_feature(CPU_FTR_HVMODE))
return -EPERM;
if ((rc = cxl_file_init()))
return rc;
cxl_debugfs_init();
if ((rc = register_cxl_calls(&cxl_calls)))
goto err;
if ((rc = pci_register_driver(&cxl_pci_driver)))
goto err1;
return 0;
err1:
unregister_cxl_calls(&cxl_calls);
err:
cxl_debugfs_exit();
cxl_file_exit();
return rc;
}
static void exit_cxl(void)
{
pci_unregister_driver(&cxl_pci_driver);
cxl_debugfs_exit();
cxl_file_exit();
unregister_cxl_calls(&cxl_calls);
}
module_init(init_cxl);
module_exit(exit_cxl);
MODULE_DESCRIPTION("IBM Coherent Accelerator");
MODULE_AUTHOR("Ian Munsie <imunsie@au1.ibm.com>");
MODULE_LICENSE("GPL");

681
drivers/misc/cxl/native.c Normal file
View file

@ -0,0 +1,681 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
#include <asm/synch.h>
#include <misc/cxl.h>
#include "cxl.h"
static int afu_control(struct cxl_afu *afu, u64 command,
u64 result, u64 mask, bool enabled)
{
u64 AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
spin_lock(&afu->afu_cntl_lock);
pr_devel("AFU command starting: %llx\n", command);
cxl_p2n_write(afu, CXL_AFU_Cntl_An, AFU_Cntl | command);
AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
while ((AFU_Cntl & mask) != result) {
if (time_after_eq(jiffies, timeout)) {
dev_warn(&afu->dev, "WARNING: AFU control timed out!\n");
spin_unlock(&afu->afu_cntl_lock);
return -EBUSY;
}
pr_devel_ratelimited("AFU control... (0x%.16llx)\n",
AFU_Cntl | command);
cpu_relax();
AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
}
pr_devel("AFU command complete: %llx\n", command);
afu->enabled = enabled;
spin_unlock(&afu->afu_cntl_lock);
return 0;
}
static int afu_enable(struct cxl_afu *afu)
{
pr_devel("AFU enable request\n");
return afu_control(afu, CXL_AFU_Cntl_An_E,
CXL_AFU_Cntl_An_ES_Enabled,
CXL_AFU_Cntl_An_ES_MASK, true);
}
int cxl_afu_disable(struct cxl_afu *afu)
{
pr_devel("AFU disable request\n");
return afu_control(afu, 0, CXL_AFU_Cntl_An_ES_Disabled,
CXL_AFU_Cntl_An_ES_MASK, false);
}
/* This will disable as well as reset */
int cxl_afu_reset(struct cxl_afu *afu)
{
pr_devel("AFU reset request\n");
return afu_control(afu, CXL_AFU_Cntl_An_RA,
CXL_AFU_Cntl_An_RS_Complete | CXL_AFU_Cntl_An_ES_Disabled,
CXL_AFU_Cntl_An_RS_MASK | CXL_AFU_Cntl_An_ES_MASK,
false);
}
static int afu_check_and_enable(struct cxl_afu *afu)
{
if (afu->enabled)
return 0;
return afu_enable(afu);
}
int cxl_psl_purge(struct cxl_afu *afu)
{
u64 PSL_CNTL = cxl_p1n_read(afu, CXL_PSL_SCNTL_An);
u64 AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
u64 dsisr, dar;
u64 start, end;
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
pr_devel("PSL purge request\n");
if ((AFU_Cntl & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) {
WARN(1, "psl_purge request while AFU not disabled!\n");
cxl_afu_disable(afu);
}
cxl_p1n_write(afu, CXL_PSL_SCNTL_An,
PSL_CNTL | CXL_PSL_SCNTL_An_Pc);
start = local_clock();
PSL_CNTL = cxl_p1n_read(afu, CXL_PSL_SCNTL_An);
while ((PSL_CNTL & CXL_PSL_SCNTL_An_Ps_MASK)
== CXL_PSL_SCNTL_An_Ps_Pending) {
if (time_after_eq(jiffies, timeout)) {
dev_warn(&afu->dev, "WARNING: PSL Purge timed out!\n");
return -EBUSY;
}
dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%.16llx PSL_DSISR: 0x%.16llx\n", PSL_CNTL, dsisr);
if (dsisr & CXL_PSL_DSISR_TRANS) {
dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%.16llx, DAR: 0x%.16llx\n", dsisr, dar);
cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
} else if (dsisr) {
dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%.16llx\n", dsisr);
cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
} else {
cpu_relax();
}
PSL_CNTL = cxl_p1n_read(afu, CXL_PSL_SCNTL_An);
}
end = local_clock();
pr_devel("PSL purged in %lld ns\n", end - start);
cxl_p1n_write(afu, CXL_PSL_SCNTL_An,
PSL_CNTL & ~CXL_PSL_SCNTL_An_Pc);
return 0;
}
static int spa_max_procs(int spa_size)
{
/*
* From the CAIA:
* end_of_SPA_area = SPA_Base + ((n+4) * 128) + (( ((n*8) + 127) >> 7) * 128) + 255
* Most of that junk is really just an overly-complicated way of saying
* the last 256 bytes are __aligned(128), so it's really:
* end_of_SPA_area = end_of_PSL_queue_area + __aligned(128) 255
* and
* end_of_PSL_queue_area = SPA_Base + ((n+4) * 128) + (n*8) - 1
* so
* sizeof(SPA) = ((n+4) * 128) + (n*8) + __aligned(128) 256
* Ignore the alignment (which is safe in this case as long as we are
* careful with our rounding) and solve for n:
*/
return ((spa_size / 8) - 96) / 17;
}
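/*
 * Sketch of the algebra (not in the driver): ignoring the 128-byte alignment
 * slack, the CAIA sizing above reduces to
 *
 *	spa_size >= ((n + 4) * 128) + (n * 8) + 256 = 136*n + 768
 *
 * so n <= (spa_size - 768) / 136, which equals ((spa_size / 8) - 96) / 17
 * once numerator and denominator are divided by 8. A single 4K page therefore
 * holds (4096 - 768) / 136 = 24 process elements.
 */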
static int alloc_spa(struct cxl_afu *afu)
{
u64 spap;
/* Work out how many pages to allocate */
afu->spa_order = 0;
do {
afu->spa_order++;
afu->spa_size = (1 << afu->spa_order) * PAGE_SIZE;
afu->spa_max_procs = spa_max_procs(afu->spa_size);
} while (afu->spa_max_procs < afu->num_procs);
WARN_ON(afu->spa_size > 0x100000); /* Max size supported by the hardware */
if (!(afu->spa = (struct cxl_process_element *)
__get_free_pages(GFP_KERNEL | __GFP_ZERO, afu->spa_order))) {
pr_err("cxl_alloc_spa: Unable to allocate scheduled process area\n");
return -ENOMEM;
}
pr_devel("spa pages: %i afu->spa_max_procs: %i afu->num_procs: %i\n",
1<<afu->spa_order, afu->spa_max_procs, afu->num_procs);
afu->sw_command_status = (__be64 *)((char *)afu->spa +
((afu->spa_max_procs + 3) * 128));
spap = virt_to_phys(afu->spa) & CXL_PSL_SPAP_Addr;
spap |= ((afu->spa_size >> (12 - CXL_PSL_SPAP_Size_Shift)) - 1) & CXL_PSL_SPAP_Size;
spap |= CXL_PSL_SPAP_V;
pr_devel("cxl: SPA allocated at 0x%p. Max processes: %i, sw_command_status: 0x%p CXL_PSL_SPAP_An=0x%016llx\n", afu->spa, afu->spa_max_procs, afu->sw_command_status, spap);
cxl_p1n_write(afu, CXL_PSL_SPAP_An, spap);
return 0;
}
static void release_spa(struct cxl_afu *afu)
{
free_pages((unsigned long) afu->spa, afu->spa_order);
}
int cxl_tlb_slb_invalidate(struct cxl *adapter)
{
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
pr_devel("CXL adapter wide TLBIA & SLBIA\n");
cxl_p1_write(adapter, CXL_PSL_AFUSEL, CXL_PSL_AFUSEL_A);
cxl_p1_write(adapter, CXL_PSL_TLBIA, CXL_TLB_SLB_IQ_ALL);
while (cxl_p1_read(adapter, CXL_PSL_TLBIA) & CXL_TLB_SLB_P) {
if (time_after_eq(jiffies, timeout)) {
dev_warn(&adapter->dev, "WARNING: CXL adapter wide TLBIA timed out!\n");
return -EBUSY;
}
cpu_relax();
}
cxl_p1_write(adapter, CXL_PSL_SLBIA, CXL_TLB_SLB_IQ_ALL);
while (cxl_p1_read(adapter, CXL_PSL_SLBIA) & CXL_TLB_SLB_P) {
if (time_after_eq(jiffies, timeout)) {
dev_warn(&adapter->dev, "WARNING: CXL adapter wide SLBIA timed out!\n");
return -EBUSY;
}
cpu_relax();
}
return 0;
}
int cxl_afu_slbia(struct cxl_afu *afu)
{
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
pr_devel("cxl_afu_slbia issuing SLBIA command\n");
cxl_p2n_write(afu, CXL_SLBIA_An, CXL_TLB_SLB_IQ_ALL);
while (cxl_p2n_read(afu, CXL_SLBIA_An) & CXL_TLB_SLB_P) {
if (time_after_eq(jiffies, timeout)) {
dev_warn(&afu->dev, "WARNING: CXL AFU SLBIA timed out!\n");
return -EBUSY;
}
cpu_relax();
}
return 0;
}
static int cxl_write_sstp(struct cxl_afu *afu, u64 sstp0, u64 sstp1)
{
int rc;
/* 1. Disable SSTP by writing 0 to SSTP1[V] */
cxl_p2n_write(afu, CXL_SSTP1_An, 0);
/* 2. Invalidate all SLB entries */
if ((rc = cxl_afu_slbia(afu)))
return rc;
/* 3. Set SSTP0_An */
cxl_p2n_write(afu, CXL_SSTP0_An, sstp0);
/* 4. Set SSTP1_An */
cxl_p2n_write(afu, CXL_SSTP1_An, sstp1);
return 0;
}
/* Using per slice version may improve performance here. (ie. SLBIA_An) */
static void slb_invalid(struct cxl_context *ctx)
{
struct cxl *adapter = ctx->afu->adapter;
u64 slbia;
WARN_ON(!mutex_is_locked(&ctx->afu->spa_mutex));
cxl_p1_write(adapter, CXL_PSL_LBISEL,
((u64)be32_to_cpu(ctx->elem->common.pid) << 32) |
be32_to_cpu(ctx->elem->lpid));
cxl_p1_write(adapter, CXL_PSL_SLBIA, CXL_TLB_SLB_IQ_LPIDPID);
while (1) {
slbia = cxl_p1_read(adapter, CXL_PSL_SLBIA);
if (!(slbia & CXL_TLB_SLB_P))
break;
cpu_relax();
}
}
static int do_process_element_cmd(struct cxl_context *ctx,
u64 cmd, u64 pe_state)
{
u64 state;
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
WARN_ON(!ctx->afu->enabled);
ctx->elem->software_state = cpu_to_be32(pe_state);
smp_wmb();
*(ctx->afu->sw_command_status) = cpu_to_be64(cmd | 0 | ctx->pe);
smp_mb();
cxl_p1n_write(ctx->afu, CXL_PSL_LLCMD_An, cmd | ctx->pe);
while (1) {
if (time_after_eq(jiffies, timeout)) {
dev_warn(&ctx->afu->dev, "WARNING: Process Element Command timed out!\n");
return -EBUSY;
}
state = be64_to_cpup(ctx->afu->sw_command_status);
if (state == ~0ULL) {
pr_err("cxl: Error adding process element to AFU\n");
return -1;
}
if ((state & (CXL_SPA_SW_CMD_MASK | CXL_SPA_SW_STATE_MASK | CXL_SPA_SW_LINK_MASK)) ==
(cmd | (cmd >> 16) | ctx->pe))
break;
/*
* The command won't finish in the PSL if there are
* outstanding DSIs. Hence we need to yield here in
* case there are outstanding DSIs that we need to
* service. Tuning possibility: we could wait for a
* while before scheduling.
*/
schedule();
}
return 0;
}
static int add_process_element(struct cxl_context *ctx)
{
int rc = 0;
mutex_lock(&ctx->afu->spa_mutex);
pr_devel("%s Adding pe: %i started\n", __func__, ctx->pe);
if (!(rc = do_process_element_cmd(ctx, CXL_SPA_SW_CMD_ADD, CXL_PE_SOFTWARE_STATE_V)))
ctx->pe_inserted = true;
pr_devel("%s Adding pe: %i finished\n", __func__, ctx->pe);
mutex_unlock(&ctx->afu->spa_mutex);
return rc;
}
static int terminate_process_element(struct cxl_context *ctx)
{
int rc = 0;
/* fast path terminate if it's already invalid */
if (!(ctx->elem->software_state & cpu_to_be32(CXL_PE_SOFTWARE_STATE_V)))
return rc;
mutex_lock(&ctx->afu->spa_mutex);
pr_devel("%s Terminate pe: %i started\n", __func__, ctx->pe);
rc = do_process_element_cmd(ctx, CXL_SPA_SW_CMD_TERMINATE,
CXL_PE_SOFTWARE_STATE_V | CXL_PE_SOFTWARE_STATE_T);
ctx->elem->software_state = 0; /* Remove Valid bit */
pr_devel("%s Terminate pe: %i finished\n", __func__, ctx->pe);
mutex_unlock(&ctx->afu->spa_mutex);
return rc;
}
static int remove_process_element(struct cxl_context *ctx)
{
int rc = 0;
mutex_lock(&ctx->afu->spa_mutex);
pr_devel("%s Remove pe: %i started\n", __func__, ctx->pe);
if (!(rc = do_process_element_cmd(ctx, CXL_SPA_SW_CMD_REMOVE, 0)))
ctx->pe_inserted = false;
slb_invalid(ctx);
pr_devel("%s Remove pe: %i finished\n", __func__, ctx->pe);
mutex_unlock(&ctx->afu->spa_mutex);
return rc;
}
static void assign_psn_space(struct cxl_context *ctx)
{
if (!ctx->afu->pp_size || ctx->master) {
ctx->psn_phys = ctx->afu->psn_phys;
ctx->psn_size = ctx->afu->adapter->ps_size;
} else {
ctx->psn_phys = ctx->afu->psn_phys +
(ctx->afu->pp_offset + ctx->afu->pp_size * ctx->pe);
ctx->psn_size = ctx->afu->pp_size;
}
}
static int activate_afu_directed(struct cxl_afu *afu)
{
int rc;
dev_info(&afu->dev, "Activating AFU directed mode\n");
if (alloc_spa(afu))
return -ENOMEM;
cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_AFU);
cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
afu->current_mode = CXL_MODE_DIRECTED;
afu->num_procs = afu->max_procs_virtualised;
if ((rc = cxl_chardev_m_afu_add(afu)))
return rc;
if ((rc = cxl_sysfs_afu_m_add(afu)))
goto err;
if ((rc = cxl_chardev_s_afu_add(afu)))
goto err1;
return 0;
err1:
cxl_sysfs_afu_m_remove(afu);
err:
cxl_chardev_afu_remove(afu);
return rc;
}
#ifdef CONFIG_CPU_LITTLE_ENDIAN
#define set_endian(sr) ((sr) |= CXL_PSL_SR_An_LE)
#else
#define set_endian(sr) ((sr) &= ~(CXL_PSL_SR_An_LE))
#endif
static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
{
u64 sr;
int r, result;
assign_psn_space(ctx);
ctx->elem->ctxtime = 0; /* disable */
ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
ctx->elem->haurp = 0; /* disable */
ctx->elem->sdr = cpu_to_be64(mfspr(SPRN_SDR1));
sr = 0;
if (ctx->master)
sr |= CXL_PSL_SR_An_MP;
if (mfspr(SPRN_LPCR) & LPCR_TC)
sr |= CXL_PSL_SR_An_TC;
/* HV=0, PR=1, R=1 for userspace
* For kernel contexts: this would need to change
*/
sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
set_endian(sr);
sr &= ~(CXL_PSL_SR_An_HV);
if (!test_tsk_thread_flag(current, TIF_32BIT))
sr |= CXL_PSL_SR_An_SF;
ctx->elem->common.pid = cpu_to_be32(current->pid);
ctx->elem->common.tid = 0;
ctx->elem->sr = cpu_to_be64(sr);
ctx->elem->common.csrp = 0; /* disable */
ctx->elem->common.aurp0 = 0; /* disable */
ctx->elem->common.aurp1 = 0; /* disable */
cxl_prefault(ctx, wed);
ctx->elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
ctx->elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
for (r = 0; r < CXL_IRQ_RANGES; r++) {
ctx->elem->ivte_offsets[r] = cpu_to_be16(ctx->irqs.offset[r]);
ctx->elem->ivte_ranges[r] = cpu_to_be16(ctx->irqs.range[r]);
}
ctx->elem->common.amr = cpu_to_be64(amr);
ctx->elem->common.wed = cpu_to_be64(wed);
/* first guy needs to enable */
if ((result = afu_check_and_enable(ctx->afu)))
return result;
add_process_element(ctx);
return 0;
}
static int deactivate_afu_directed(struct cxl_afu *afu)
{
dev_info(&afu->dev, "Deactivating AFU directed mode\n");
afu->current_mode = 0;
afu->num_procs = 0;
cxl_sysfs_afu_m_remove(afu);
cxl_chardev_afu_remove(afu);
cxl_afu_reset(afu);
cxl_afu_disable(afu);
cxl_psl_purge(afu);
release_spa(afu);
return 0;
}
static int activate_dedicated_process(struct cxl_afu *afu)
{
dev_info(&afu->dev, "Activating dedicated process mode\n");
cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_Process);
cxl_p1n_write(afu, CXL_PSL_CtxTime_An, 0); /* disable */
cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0); /* disable */
cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
cxl_p1n_write(afu, CXL_PSL_LPID_An, mfspr(SPRN_LPID));
cxl_p1n_write(afu, CXL_HAURP_An, 0); /* disable */
cxl_p1n_write(afu, CXL_PSL_SDR_An, mfspr(SPRN_SDR1));
cxl_p2n_write(afu, CXL_CSRP_An, 0); /* disable */
cxl_p2n_write(afu, CXL_AURP0_An, 0); /* disable */
cxl_p2n_write(afu, CXL_AURP1_An, 0); /* disable */
afu->current_mode = CXL_MODE_DEDICATED;
afu->num_procs = 1;
return cxl_chardev_d_afu_add(afu);
}
static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
{
struct cxl_afu *afu = ctx->afu;
u64 sr;
int rc;
sr = 0;
set_endian(sr);
if (ctx->master)
sr |= CXL_PSL_SR_An_MP;
if (mfspr(SPRN_LPCR) & LPCR_TC)
sr |= CXL_PSL_SR_An_TC;
sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
if (!test_tsk_thread_flag(current, TIF_32BIT))
sr |= CXL_PSL_SR_An_SF;
cxl_p2n_write(afu, CXL_PSL_PID_TID_An, (u64)current->pid << 32);
cxl_p1n_write(afu, CXL_PSL_SR_An, sr);
if ((rc = cxl_write_sstp(afu, ctx->sstp0, ctx->sstp1)))
return rc;
cxl_prefault(ctx, wed);
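/*
 * The four interrupt ranges are packed into single 64-bit registers,
 * 16 bits per range, with offset[0]/range[0] in the top half-word.  For
 * example (values illustrative): irqs.offset = {0, 16, 0, 0} and
 * irqs.range = {4, 8, 0, 0} give 0x0000001000000000 in the offset
 * register and 0x0004000800000000 in the limit register.
 */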
cxl_p1n_write(afu, CXL_PSL_IVTE_Offset_An,
(((u64)ctx->irqs.offset[0] & 0xffff) << 48) |
(((u64)ctx->irqs.offset[1] & 0xffff) << 32) |
(((u64)ctx->irqs.offset[2] & 0xffff) << 16) |
((u64)ctx->irqs.offset[3] & 0xffff));
cxl_p1n_write(afu, CXL_PSL_IVTE_Limit_An, (u64)
(((u64)ctx->irqs.range[0] & 0xffff) << 48) |
(((u64)ctx->irqs.range[1] & 0xffff) << 32) |
(((u64)ctx->irqs.range[2] & 0xffff) << 16) |
((u64)ctx->irqs.range[3] & 0xffff));
cxl_p2n_write(afu, CXL_PSL_AMR_An, amr);
/* master only context for dedicated */
assign_psn_space(ctx);
if ((rc = cxl_afu_reset(afu)))
return rc;
cxl_p2n_write(afu, CXL_PSL_WED_An, wed);
return afu_enable(afu);
}
static int deactivate_dedicated_process(struct cxl_afu *afu)
{
dev_info(&afu->dev, "Deactivating dedicated process mode\n");
afu->current_mode = 0;
afu->num_procs = 0;
cxl_chardev_afu_remove(afu);
return 0;
}
int _cxl_afu_deactivate_mode(struct cxl_afu *afu, int mode)
{
if (mode == CXL_MODE_DIRECTED)
return deactivate_afu_directed(afu);
if (mode == CXL_MODE_DEDICATED)
return deactivate_dedicated_process(afu);
return 0;
}
int cxl_afu_deactivate_mode(struct cxl_afu *afu)
{
return _cxl_afu_deactivate_mode(afu, afu->current_mode);
}
int cxl_afu_activate_mode(struct cxl_afu *afu, int mode)
{
if (!mode)
return 0;
if (!(mode & afu->modes_supported))
return -EINVAL;
if (mode == CXL_MODE_DIRECTED)
return activate_afu_directed(afu);
if (mode == CXL_MODE_DEDICATED)
return activate_dedicated_process(afu);
return -EINVAL;
}
int cxl_attach_process(struct cxl_context *ctx, bool kernel, u64 wed, u64 amr)
{
ctx->kernel = kernel;
if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
return attach_afu_directed(ctx, wed, amr);
if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
return attach_dedicated(ctx, wed, amr);
return -EINVAL;
}
static inline int detach_process_native_dedicated(struct cxl_context *ctx)
{
cxl_afu_reset(ctx->afu);
cxl_afu_disable(ctx->afu);
cxl_psl_purge(ctx->afu);
return 0;
}
static inline int detach_process_native_afu_directed(struct cxl_context *ctx)
{
if (!ctx->pe_inserted)
return 0;
if (terminate_process_element(ctx))
return -1;
if (remove_process_element(ctx))
return -1;
return 0;
}
int cxl_detach_process(struct cxl_context *ctx)
{
if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
return detach_process_native_dedicated(ctx);
return detach_process_native_afu_directed(ctx);
}
int cxl_get_irq(struct cxl_context *ctx, struct cxl_irq_info *info)
{
u64 pidtid;
info->dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
info->dar = cxl_p2n_read(ctx->afu, CXL_PSL_DAR_An);
info->dsr = cxl_p2n_read(ctx->afu, CXL_PSL_DSR_An);
pidtid = cxl_p2n_read(ctx->afu, CXL_PSL_PID_TID_An);
info->pid = pidtid >> 32;
info->tid = pidtid & 0xffffffff;
info->afu_err = cxl_p2n_read(ctx->afu, CXL_AFU_ERR_An);
info->errstat = cxl_p2n_read(ctx->afu, CXL_PSL_ErrStat_An);
return 0;
}
static void recover_psl_err(struct cxl_afu *afu, u64 errstat)
{
u64 dsisr;
pr_devel("RECOVERING FROM PSL ERROR... (0x%.16llx)\n", errstat);
/* Clear PSL_DSISR[PE] */
dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
cxl_p2n_write(afu, CXL_PSL_DSISR_An, dsisr & ~CXL_PSL_DSISR_An_PE);
/* Write 1s to clear error status bits */
cxl_p2n_write(afu, CXL_PSL_ErrStat_An, errstat);
}
int cxl_ack_irq(struct cxl_context *ctx, u64 tfc, u64 psl_reset_mask)
{
if (tfc)
cxl_p2n_write(ctx->afu, CXL_PSL_TFC_An, tfc);
if (psl_reset_mask)
recover_psl_err(ctx->afu, psl_reset_mask);
return 0;
}
int cxl_check_error(struct cxl_afu *afu)
{
return (cxl_p1n_read(afu, CXL_PSL_SCNTL_An) == ~0ULL);
}

1038
drivers/misc/cxl/pci.c Normal file

File diff suppressed because it is too large

385
drivers/misc/cxl/sysfs.c Normal file
View file

@ -0,0 +1,385 @@
/*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/sysfs.h>
#include "cxl.h"
#define to_afu_chardev_m(d) dev_get_drvdata(d)
/********* Adapter attributes **********************************************/
static ssize_t caia_version_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl *adapter = to_cxl_adapter(device);
return scnprintf(buf, PAGE_SIZE, "%i.%i\n", adapter->caia_major,
adapter->caia_minor);
}
static ssize_t psl_revision_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl *adapter = to_cxl_adapter(device);
return scnprintf(buf, PAGE_SIZE, "%i\n", adapter->psl_rev);
}
static ssize_t base_image_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl *adapter = to_cxl_adapter(device);
return scnprintf(buf, PAGE_SIZE, "%i\n", adapter->base_image);
}
static ssize_t image_loaded_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl *adapter = to_cxl_adapter(device);
if (adapter->user_image_loaded)
return scnprintf(buf, PAGE_SIZE, "user\n");
return scnprintf(buf, PAGE_SIZE, "factory\n");
}
static struct device_attribute adapter_attrs[] = {
__ATTR_RO(caia_version),
__ATTR_RO(psl_revision),
__ATTR_RO(base_image),
__ATTR_RO(image_loaded),
};
/********* AFU master specific attributes **********************************/
static ssize_t mmio_size_show_master(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_afu_chardev_m(device);
return scnprintf(buf, PAGE_SIZE, "%llu\n", afu->adapter->ps_size);
}
static ssize_t pp_mmio_off_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_afu_chardev_m(device);
return scnprintf(buf, PAGE_SIZE, "%llu\n", afu->pp_offset);
}
static ssize_t pp_mmio_len_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_afu_chardev_m(device);
return scnprintf(buf, PAGE_SIZE, "%llu\n", afu->pp_size);
}
static struct device_attribute afu_master_attrs[] = {
__ATTR(mmio_size, S_IRUGO, mmio_size_show_master, NULL),
__ATTR_RO(pp_mmio_off),
__ATTR_RO(pp_mmio_len),
};
/********* AFU attributes **************************************************/
static ssize_t mmio_size_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_cxl_afu(device);
if (afu->pp_size)
return scnprintf(buf, PAGE_SIZE, "%llu\n", afu->pp_size);
return scnprintf(buf, PAGE_SIZE, "%llu\n", afu->adapter->ps_size);
}
static ssize_t reset_store_afu(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct cxl_afu *afu = to_cxl_afu(device);
int rc;
/* Not safe to reset if it is currently in use */
mutex_lock(&afu->contexts_lock);
if (!idr_is_empty(&afu->contexts_idr)) {
rc = -EBUSY;
goto err;
}
if ((rc = cxl_afu_reset(afu)))
goto err;
rc = count;
err:
mutex_unlock(&afu->contexts_lock);
return rc;
}
static ssize_t irqs_min_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_cxl_afu(device);
return scnprintf(buf, PAGE_SIZE, "%i\n", afu->pp_irqs);
}
static ssize_t irqs_max_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_cxl_afu(device);
return scnprintf(buf, PAGE_SIZE, "%i\n", afu->irqs_max);
}
static ssize_t irqs_max_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct cxl_afu *afu = to_cxl_afu(device);
ssize_t ret;
int irqs_max;
ret = sscanf(buf, "%i", &irqs_max);
if (ret != 1)
return -EINVAL;
if (irqs_max < afu->pp_irqs)
return -EINVAL;
if (irqs_max > afu->adapter->user_irqs)
return -EINVAL;
afu->irqs_max = irqs_max;
return count;
}
static ssize_t modes_supported_show(struct device *device,
struct device_attribute *attr, char *buf)
{
struct cxl_afu *afu = to_cxl_afu(device);
char *p = buf, *end = buf + PAGE_SIZE;
if (afu->modes_supported & CXL_MODE_DEDICATED)
p += scnprintf(p, end - p, "dedicated_process\n");
if (afu->modes_supported & CXL_MODE_DIRECTED)
p += scnprintf(p, end - p, "afu_directed\n");
return (p - buf);
}
static ssize_t prefault_mode_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_cxl_afu(device);
switch (afu->prefault_mode) {
case CXL_PREFAULT_WED:
return scnprintf(buf, PAGE_SIZE, "work_element_descriptor\n");
case CXL_PREFAULT_ALL:
return scnprintf(buf, PAGE_SIZE, "all\n");
default:
return scnprintf(buf, PAGE_SIZE, "none\n");
}
}
static ssize_t prefault_mode_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct cxl_afu *afu = to_cxl_afu(device);
enum prefault_modes mode = -1;
if (!strncmp(buf, "work_element_descriptor", 23))
mode = CXL_PREFAULT_WED;
if (!strncmp(buf, "all", 3))
mode = CXL_PREFAULT_ALL;
if (!strncmp(buf, "none", 4))
mode = CXL_PREFAULT_NONE;
if (mode == -1)
return -EINVAL;
afu->prefault_mode = mode;
return count;
}
static ssize_t mode_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct cxl_afu *afu = to_cxl_afu(device);
if (afu->current_mode == CXL_MODE_DEDICATED)
return scnprintf(buf, PAGE_SIZE, "dedicated_process\n");
if (afu->current_mode == CXL_MODE_DIRECTED)
return scnprintf(buf, PAGE_SIZE, "afu_directed\n");
return scnprintf(buf, PAGE_SIZE, "none\n");
}
static ssize_t mode_store(struct device *device, struct device_attribute *attr,
const char *buf, size_t count)
{
struct cxl_afu *afu = to_cxl_afu(device);
int old_mode, mode = -1;
int rc = -EBUSY;
/* can't change this if we have a user */
mutex_lock(&afu->contexts_lock);
if (!idr_is_empty(&afu->contexts_idr))
goto err;
if (!strncmp(buf, "dedicated_process", 17))
mode = CXL_MODE_DEDICATED;
if (!strncmp(buf, "afu_directed", 12))
mode = CXL_MODE_DIRECTED;
if (!strncmp(buf, "none", 4))
mode = 0;
if (mode == -1) {
rc = -EINVAL;
goto err;
}
/*
* cxl_afu_deactivate_mode needs to be done outside the lock, prevent
* other contexts coming in before we are ready:
*/
old_mode = afu->current_mode;
afu->current_mode = 0;
afu->num_procs = 0;
mutex_unlock(&afu->contexts_lock);
if ((rc = _cxl_afu_deactivate_mode(afu, old_mode)))
return rc;
if ((rc = cxl_afu_activate_mode(afu, mode)))
return rc;
return count;
err:
mutex_unlock(&afu->contexts_lock);
return rc;
}
static ssize_t api_version_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%i\n", CXL_API_VERSION);
}
static ssize_t api_version_compatible_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%i\n", CXL_API_VERSION_COMPATIBLE);
}
static struct device_attribute afu_attrs[] = {
__ATTR_RO(mmio_size),
__ATTR_RO(irqs_min),
__ATTR_RW(irqs_max),
__ATTR_RO(modes_supported),
__ATTR_RW(mode),
__ATTR_RW(prefault_mode),
__ATTR_RO(api_version),
__ATTR_RO(api_version_compatible),
__ATTR(reset, S_IWUSR, NULL, reset_store_afu),
};
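/*
 * Example usage from userspace (paths illustrative, they depend on the
 * card/AFU naming):
 *
 *   cat /sys/class/cxl/afu0.0/modes_supported
 *   echo afu_directed > /sys/class/cxl/afu0.0/mode
 *   echo 16 > /sys/class/cxl/afu0.0/irqs_max
 *
 * Changing "mode" while contexts are attached fails with -EBUSY, see
 * mode_store() above.
 */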
int cxl_sysfs_adapter_add(struct cxl *adapter)
{
int i, rc;
for (i = 0; i < ARRAY_SIZE(adapter_attrs); i++) {
if ((rc = device_create_file(&adapter->dev, &adapter_attrs[i])))
goto err;
}
return 0;
err:
for (i--; i >= 0; i--)
device_remove_file(&adapter->dev, &adapter_attrs[i]);
return rc;
}
void cxl_sysfs_adapter_remove(struct cxl *adapter)
{
int i;
for (i = 0; i < ARRAY_SIZE(adapter_attrs); i++)
device_remove_file(&adapter->dev, &adapter_attrs[i]);
}
int cxl_sysfs_afu_add(struct cxl_afu *afu)
{
int i, rc;
for (i = 0; i < ARRAY_SIZE(afu_attrs); i++) {
if ((rc = device_create_file(&afu->dev, &afu_attrs[i])))
goto err;
}
return 0;
err:
for (i--; i >= 0; i--)
device_remove_file(&afu->dev, &afu_attrs[i]);
return rc;
}
void cxl_sysfs_afu_remove(struct cxl_afu *afu)
{
int i;
for (i = 0; i < ARRAY_SIZE(afu_attrs); i++)
device_remove_file(&afu->dev, &afu_attrs[i]);
}
int cxl_sysfs_afu_m_add(struct cxl_afu *afu)
{
int i, rc;
for (i = 0; i < ARRAY_SIZE(afu_master_attrs); i++) {
if ((rc = device_create_file(afu->chardev_m, &afu_master_attrs[i])))
goto err;
}
return 0;
err:
for (i--; i >= 0; i--)
device_remove_file(afu->chardev_m, &afu_master_attrs[i]);
return rc;
}
void cxl_sysfs_afu_m_remove(struct cxl_afu *afu)
{
int i;
for (i = 0; i < ARRAY_SIZE(afu_master_attrs); i++)
device_remove_file(afu->chardev_m, &afu_master_attrs[i]);
}

114
drivers/misc/dmverity_query.c Executable file
View file

@ -0,0 +1,114 @@
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/mm.h>
#include <linux/types.h>
#include <linux/highmem.h>
#ifdef CONFIG_RKP_CFP_FIX_SMC_BUG
#include <linux/rkp_cfp.h>
#endif
#define CMD_READ_SYSTEM_IMAGE_CHECK_STATUS 3
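/*
 * Issue an SMC to the secure monitor: cmd goes in x0, up to three
 * arguments in x1-x3, and the result comes back in x0.  verity_scm_call()
 * below uses SMC function ID 0x83000006 with
 * CMD_READ_SYSTEM_IMAGE_CHECK_STATUS as its first argument.
 */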
static inline u64 exynos_smc_verity(u64 cmd, u64 arg1, u64 arg2, u64 arg3)
{
register u64 reg0 __asm__("x0") = cmd;
register u64 reg1 __asm__("x1") = arg1;
register u64 reg2 __asm__("x2") = arg2;
register u64 reg3 __asm__("x3") = arg3;
__asm__ volatile (
#ifdef CONFIG_RKP_CFP_FIX_SMC_BUG
PRE_SMC_INLINE
#endif
"dsb sy\n"
"smc 0\n"
#ifdef CONFIG_RKP_CFP_FIX_SMC_BUG
POST_SMC_INLINE
#endif
: "+r"(reg0), "+r"(reg1), "+r"(reg2), "+r"(reg3)
);
return reg0;
}
static int verity_scm_call(void)
{
return exynos_smc_verity(0x83000006, CMD_READ_SYSTEM_IMAGE_CHECK_STATUS, 0, 0);
}
#define DRIVER_DESC "Read whether odin flash succeeded"
ssize_t dmverity_read(struct file *filep, char __user *buf, size_t size, loff_t *offset)
{
uint32_t odin_flag;
//int ret;
/* First check is to get rid of integer overflow exploits */
if (size < sizeof(uint32_t)) {
printk(KERN_ERR"Size must be atleast %d\n", (int)sizeof(uint32_t));
return -EINVAL;
}
odin_flag = verity_scm_call();
printk(KERN_INFO"dmverity: odin flag: %x\n", odin_flag);
if (copy_to_user(buf, &odin_flag, sizeof(uint32_t))) {
printk(KERN_ERR"Copy to user failed\n");
return -1;
} else
return sizeof(uint32_t);
}
static const struct file_operations dmverity_proc_fops = {
.read = dmverity_read,
};
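/*
 * Userspace reads the flag as a raw 32-bit value, e.g. (illustrative):
 *
 *   int fd = open("/proc/dmverity_odin_flag", O_RDONLY);
 *   uint32_t flag;
 *   read(fd, &flag, sizeof(flag));
 *
 * Reads shorter than sizeof(uint32_t) are rejected with -EINVAL by
 * dmverity_read() above.
 */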
/**
* dmverity_odin_flag_read_init - Initialization function for DMVERITY
*
* It creates and initializes dmverity proc entry with initialized read handler
*/
static int __init dmverity_odin_flag_read_init(void)
{
//extern int boot_mode_recovery;
if (/* boot_mode_recovery == */ 1) {
/* Only create this in recovery mode. Not sure why I am doing this */
if (proc_create("dmverity_odin_flag", 0644,NULL, &dmverity_proc_fops) == NULL) {
printk(KERN_ERR"dmverity_odin_flag_read_init: Error creating proc entry\n");
goto error_return;
}
printk(KERN_INFO"dmverity_odin_flag_read_init:: Registering /proc/dmverity_odin_flag Interface \n");
} else {
printk(KERN_INFO"dmverity_odin_flag_read_init:: not enabling in non-recovery mode\n");
goto error_return;
}
return 0;
error_return:
return -1;
}
/**
* dmverity_odin_flag_read_exit - Cleanup Code for DMVERITY
*
* It removes /proc/dmverity proc entry and does the required cleanup operations
*/
static void __exit dmverity_odin_flag_read_exit(void)
{
remove_proc_entry("dmverity_odin_flag", NULL);
printk(KERN_INFO"Deregistering /proc/dmverity_odin_flag interface\n");
}
module_init(dmverity_odin_flag_read_init);
module_exit(dmverity_odin_flag_read_exit);
MODULE_DESCRIPTION(DRIVER_DESC);

255
drivers/misc/ds1682.c Normal file
View file

@ -0,0 +1,255 @@
/*
* Dallas Semiconductor DS1682 Elapsed Time Recorder device driver
*
* Written by: Grant Likely <grant.likely@secretlab.ca>
*
* Copyright (C) 2007 Secret Lab Technologies Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/*
* The DS1682 elapsed timer recorder is a simple device that implements
* one elapsed time counter, one event counter, an alarm signal and 10
* bytes of general purpose EEPROM.
*
* This driver provides access to the DS1682 counters and user data via
* the sysfs. The following attributes are added to the device node:
* elapsed_time (u32): Total elapsed event time in ms resolution
* alarm_time (u32): When elapsed time exceeds the value in alarm_time,
* then the alarm pin is asserted.
* event_count (u16): number of times the event pin has gone low.
* eeprom (u8[10]): general purpose EEPROM
*
* Counter registers and user data are both read/write unless the device
* has been write protected. This driver does not support turning off write
* protection. Once write protection is turned on, it is impossible to
* turn it off again, so I have left the feature out of this driver to avoid
* accidental enabling, but it is trivial to add write protect support.
*
*/
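/*
 * Example sysfs usage (the i2c bus/address below are illustrative):
 *
 *   cat /sys/bus/i2c/devices/0-006b/elapsed_time    # milliseconds
 *   echo 0 > /sys/bus/i2c/devices/0-006b/elapsed_time
 *   cat /sys/bus/i2c/devices/0-006b/event_count
 */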
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/string.h>
#include <linux/list.h>
#include <linux/sysfs.h>
#include <linux/ctype.h>
#include <linux/hwmon-sysfs.h>
/* Device registers */
#define DS1682_REG_CONFIG 0x00
#define DS1682_REG_ALARM 0x01
#define DS1682_REG_ELAPSED 0x05
#define DS1682_REG_EVT_CNTR 0x09
#define DS1682_REG_EEPROM 0x0b
#define DS1682_REG_RESET 0x1d
#define DS1682_REG_WRITE_DISABLE 0x1e
#define DS1682_REG_WRITE_MEM_DISABLE 0x1f
#define DS1682_EEPROM_SIZE 10
/*
* Generic counter attributes
*/
static ssize_t ds1682_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
struct i2c_client *client = to_i2c_client(dev);
__le32 val = 0;
int rc;
dev_dbg(dev, "ds1682_show() called on %s\n", attr->attr.name);
/* Read the register */
rc = i2c_smbus_read_i2c_block_data(client, sattr->index, sattr->nr,
(u8 *) & val);
if (rc < 0)
return -EIO;
/* Special case: the 32 bit regs are time values with 1/4s
* resolution, scale them up to milliseconds */
if (sattr->nr == 4)
return sprintf(buf, "%llu\n",
((unsigned long long)le32_to_cpu(val)) * 250);
/* Format the output string and return # of bytes */
return sprintf(buf, "%li\n", (long)le32_to_cpu(val));
}
static ssize_t ds1682_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
struct i2c_client *client = to_i2c_client(dev);
u64 val;
__le32 val_le;
int rc;
dev_dbg(dev, "ds1682_store() called on %s\n", attr->attr.name);
/* Decode input */
rc = kstrtoull(buf, 0, &val);
if (rc < 0) {
dev_dbg(dev, "input string not a number\n");
return -EINVAL;
}
/* Special case: the 32 bit regs are time values with 1/4s
* resolution, scale input down to quarter-seconds */
if (sattr->nr == 4)
do_div(val, 250);
/* write out the value */
val_le = cpu_to_le32(val);
rc = i2c_smbus_write_i2c_block_data(client, sattr->index, sattr->nr,
(u8 *) & val_le);
if (rc < 0) {
dev_err(dev, "register write failed; reg=0x%x, size=%i\n",
sattr->index, sattr->nr);
return -EIO;
}
return count;
}
/*
* Simple register attributes
*/
static SENSOR_DEVICE_ATTR_2(elapsed_time, S_IRUGO | S_IWUSR, ds1682_show,
ds1682_store, 4, DS1682_REG_ELAPSED);
static SENSOR_DEVICE_ATTR_2(alarm_time, S_IRUGO | S_IWUSR, ds1682_show,
ds1682_store, 4, DS1682_REG_ALARM);
static SENSOR_DEVICE_ATTR_2(event_count, S_IRUGO | S_IWUSR, ds1682_show,
ds1682_store, 2, DS1682_REG_EVT_CNTR);
static const struct attribute_group ds1682_group = {
.attrs = (struct attribute *[]) {
&sensor_dev_attr_elapsed_time.dev_attr.attr,
&sensor_dev_attr_alarm_time.dev_attr.attr,
&sensor_dev_attr_event_count.dev_attr.attr,
NULL,
},
};
/*
* User data attribute
*/
static ssize_t ds1682_eeprom_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *attr,
char *buf, loff_t off, size_t count)
{
struct i2c_client *client = kobj_to_i2c_client(kobj);
int rc;
dev_dbg(&client->dev, "ds1682_eeprom_read(p=%p, off=%lli, c=%zi)\n",
buf, off, count);
if (off >= DS1682_EEPROM_SIZE)
return 0;
if (off + count > DS1682_EEPROM_SIZE)
count = DS1682_EEPROM_SIZE - off;
rc = i2c_smbus_read_i2c_block_data(client, DS1682_REG_EEPROM + off,
count, buf);
if (rc < 0)
return -EIO;
return count;
}
static ssize_t ds1682_eeprom_write(struct file *filp, struct kobject *kobj,
struct bin_attribute *attr,
char *buf, loff_t off, size_t count)
{
struct i2c_client *client = kobj_to_i2c_client(kobj);
dev_dbg(&client->dev, "ds1682_eeprom_write(p=%p, off=%lli, c=%zi)\n",
buf, off, count);
if (off >= DS1682_EEPROM_SIZE)
return -ENOSPC;
if (off + count > DS1682_EEPROM_SIZE)
count = DS1682_EEPROM_SIZE - off;
/* Write out to the device */
if (i2c_smbus_write_i2c_block_data(client, DS1682_REG_EEPROM + off,
count, buf) < 0)
return -EIO;
return count;
}
static struct bin_attribute ds1682_eeprom_attr = {
.attr = {
.name = "eeprom",
.mode = S_IRUGO | S_IWUSR,
},
.size = DS1682_EEPROM_SIZE,
.read = ds1682_eeprom_read,
.write = ds1682_eeprom_write,
};
/*
* Called when a ds1682 device is matched with this driver
*/
static int ds1682_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
int rc;
if (!i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_I2C_BLOCK)) {
dev_err(&client->dev, "i2c bus does not support the ds1682\n");
rc = -ENODEV;
goto exit;
}
rc = sysfs_create_group(&client->dev.kobj, &ds1682_group);
if (rc)
goto exit;
rc = sysfs_create_bin_file(&client->dev.kobj, &ds1682_eeprom_attr);
if (rc)
goto exit_bin_attr;
return 0;
exit_bin_attr:
sysfs_remove_group(&client->dev.kobj, &ds1682_group);
exit:
return rc;
}
static int ds1682_remove(struct i2c_client *client)
{
sysfs_remove_bin_file(&client->dev.kobj, &ds1682_eeprom_attr);
sysfs_remove_group(&client->dev.kobj, &ds1682_group);
return 0;
}
static const struct i2c_device_id ds1682_id[] = {
{ "ds1682", 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, ds1682_id);
static struct i2c_driver ds1682_driver = {
.driver = {
.name = "ds1682",
},
.probe = ds1682_probe,
.remove = ds1682_remove,
.id_table = ds1682_id,
};
module_i2c_driver(ds1682_driver);
MODULE_AUTHOR("Grant Likely <grant.likely@secretlab.ca>");
MODULE_DESCRIPTION("DS1682 Elapsed Time Indicator driver");
MODULE_LICENSE("GPL");

64
drivers/misc/dummy-irq.c Normal file
View file

@ -0,0 +1,64 @@
/*
* Dummy IRQ handler driver.
*
* This module only registers itself as a handler that is specified to it
* by the 'irq' parameter.
*
* The sole purpose of this module is to help with debugging of systems on
* which spurious IRQs would happen on disabled IRQ vector.
*
* Copyright (C) 2013 Jiri Kosina
*/
/*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*/
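/*
 * Example (illustrative): to watch for spurious interrupts on IRQ 21,
 * load the module with
 *
 *   modprobe dummy-irq irq=21
 *
 * Only the first interrupt is logged; the handler always returns
 * IRQ_NONE, so it never claims the interrupt.
 */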
#include <linux/module.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
static int irq = -1;
static irqreturn_t dummy_interrupt(int irq, void *dev_id)
{
static int count = 0;
if (count == 0) {
printk(KERN_INFO "dummy-irq: interrupt occurred on IRQ %d\n",
irq);
count++;
}
return IRQ_NONE;
}
static int __init dummy_irq_init(void)
{
if (irq < 0) {
printk(KERN_ERR "dummy-irq: no IRQ given. Use irq=N\n");
return -EIO;
}
if (request_irq(irq, &dummy_interrupt, IRQF_SHARED, "dummy_irq", &irq)) {
printk(KERN_ERR "dummy-irq: cannot register IRQ %d\n", irq);
return -EIO;
}
printk(KERN_INFO "dummy-irq: registered for IRQ %d\n", irq);
return 0;
}
static void __exit dummy_irq_exit(void)
{
printk(KERN_INFO "dummy-irq unloaded\n");
free_irq(irq, &irq);
}
module_init(dummy_irq_init);
module_exit(dummy_irq_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Jiri Kosina");
module_param(irq, int, 0444);
MODULE_PARM_DESC(irq, "The IRQ to register for");
MODULE_DESCRIPTION("Dummy IRQ handler driver");

9
drivers/misc/echo/Kconfig Normal file
View file

@ -0,0 +1,9 @@
config ECHO
tristate "Line Echo Canceller support"
default n
---help---
This driver provides line echo cancelling support for mISDN and
Zaptel drivers.
To compile this driver as a module, choose M here. The module
will be called echo.

1
drivers/misc/echo/Makefile Normal file
View file

@ -0,0 +1 @@
obj-$(CONFIG_ECHO) += echo.o

674
drivers/misc/echo/echo.c Normal file
View file

@ -0,0 +1,674 @@
/*
* SpanDSP - a series of DSP components for telephony
*
* echo.c - A line echo canceller. This code is being developed
* against and partially complies with G168.
*
* Written by Steve Underwood <steveu@coppice.org>
* and David Rowe <david_at_rowetel_dot_com>
*
* Copyright (C) 2001, 2003 Steve Underwood, 2007 David Rowe
*
* Based on a bit from here, a bit from there, eye of toad, ear of
* bat, 15 years of failed attempts by David and a few fried brain
* cells.
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*! \file */
/* Implementation Notes
David Rowe
April 2007
This code started life as Steve's NLMS algorithm with a tap
rotation algorithm to handle divergence during double talk. I
added a Geigel Double Talk Detector (DTD) [2] and performed some
G168 tests. However I had trouble meeting the G168 requirements,
especially for double talk - there were always cases where my DTD
failed, for example where near end speech was under the 6dB
threshold required for declaring double talk.
So I tried a two path algorithm [1], which has so far given better
results. The original tap rotation/Geigel algorithm is available
in SVN http://svn.rowetel.com/software/oslec/tags/before_16bit.
It's probably possible to make it work if someone wants to put some
serious work into it.
At present no special treatment is provided for tones, which
generally cause NLMS algorithms to diverge. Initial runs of a
subset of the G168 tests for tones (e.g. ./echo_test 6) show the
current algorithm is passing OK, which is kind of surprising. The
full set of tests needs to be performed to confirm this result.
One other interesting change is that I have managed to get the NLMS
code to work with 16 bit coefficients, rather than the original 32
bit coefficients. This reduces the MIPs and storage required.
I evaluated the 16 bit port using g168_tests.sh and listening tests
on 4 real-world samples.
I also attempted the implementation of a block based NLMS update
[2] but although this passes g168_tests.sh it didn't converge well
on the real-world samples. I have no idea why, perhaps a scaling
problem. The block based code is also available in SVN
http://svn.rowetel.com/software/oslec/tags/before_16bit. If this
code can be debugged, it will lead to further reduction in MIPS, as
the block update code maps nicely onto DSP instruction sets (it's a
dot product) compared to the current sample-by-sample update.
Steve also has some nice notes on echo cancellers in echo.h
References:
[1] Ochiai, Areseki, and Ogihara, "Echo Canceller with Two Echo
Path Models", IEEE Transactions on communications, COM-25,
No. 6, June
1977.
http://www.rowetel.com/images/echo/dual_path_paper.pdf
[2] The classic, very useful paper that tells you how to
actually build a real world echo canceller:
Messerschmitt, Hedberg, Cole, Haoui, Winship, "Digital Voice
Echo Canceller with a TMS320020,
http://www.rowetel.com/images/echo/spra129.pdf
[3] I have written a series of blog posts on this work, here is
Part 1: http://www.rowetel.com/blog/?p=18
[4] The source code http://svn.rowetel.com/software/oslec/
[5] A nice reference on LMS filters:
http://en.wikipedia.org/wiki/Least_mean_squares_filter
Credits:
Thanks to Steve Underwood, Jean-Marc Valin, and Ramakrishnan
Muthukrishnan for their suggestions and email discussions. Thanks
also to those people who collected echo samples for me such as
Mark, Pawel, and Pavel.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include "echo.h"
#define MIN_TX_POWER_FOR_ADAPTION 64
#define MIN_RX_POWER_FOR_ADAPTION 64
#define DTD_HANGOVER 600 /* 600 samples, or 75ms */
#define DC_LOG2BETA 3 /* log2() of DC filter Beta */
/* adapting coeffs using the traditional stochastic descent (N)LMS algorithm */
#ifdef __bfin__
static inline void lms_adapt_bg(struct oslec_state *ec, int clean, int shift)
{
int i;
int offset1;
int offset2;
int factor;
int exp;
int16_t *phist;
int n;
if (shift > 0)
factor = clean << shift;
else
factor = clean >> -shift;
/* Update the FIR taps */
offset2 = ec->curr_pos;
offset1 = ec->taps - offset2;
phist = &ec->fir_state_bg.history[offset2];
/* st: and en: help us locate the assembler in echo.s */
/* asm("st:"); */
n = ec->taps;
for (i = 0; i < n; i++) {
exp = *phist++ * factor;
ec->fir_taps16[1][i] += (int16_t) ((exp + (1 << 14)) >> 15);
}
/* asm("en:"); */
/* Note the asm for the inner loop above generated by Blackfin gcc
4.1.1 is pretty good (note even parallel instructions used):
R0 = W [P0++] (X);
R0 *= R2;
R0 = R0 + R3 (NS) ||
R1 = W [P1] (X) ||
nop;
R0 >>>= 15;
R0 = R0 + R1;
W [P1++] = R0;
A block based update algorithm would be much faster but the
above can't be improved on much. Every instruction saved in
the loop above is 2 MIPs/ch! The for loop above is where the
Blackfin spends most of its time - about 17 MIPs/ch measured
with speedtest.c with 256 taps (32ms). Write-back and
Write-through cache gave about the same performance.
*/
}
/*
IDEAS for further optimisation of lms_adapt_bg():
1/ The rounding is quite costly. Could we keep as 32 bit coeffs
then make filter pluck the MS 16-bits of the coeffs when filtering?
However this would lower potential optimisation of filter, as I
think the dual-MAC architecture requires packed 16 bit coeffs.
2/ Block based update would be more efficient, as per comments above,
could use dual MAC architecture.
3/ Look for same sample Blackfin LMS code, see if we can get dual-MAC
packing.
4/ Execute the whole e/c in a block of say 20ms rather than sample
by sample. Processing a few samples every ms is inefficient.
*/
#else
static inline void lms_adapt_bg(struct oslec_state *ec, int clean, int shift)
{
int i;
int offset1;
int offset2;
int factor;
int exp;
if (shift > 0)
factor = clean << shift;
else
factor = clean >> -shift;
/* Update the FIR taps */
offset2 = ec->curr_pos;
offset1 = ec->taps - offset2;
for (i = ec->taps - 1; i >= offset1; i--) {
exp = (ec->fir_state_bg.history[i - offset1] * factor);
ec->fir_taps16[1][i] += (int16_t) ((exp + (1 << 14)) >> 15);
}
for (; i >= 0; i--) {
exp = (ec->fir_state_bg.history[i + offset2] * factor);
ec->fir_taps16[1][i] += (int16_t) ((exp + (1 << 14)) >> 15);
}
}
#endif
static inline int top_bit(unsigned int bits)
{
if (bits == 0)
return -1;
else
return (int)fls((int32_t) bits) - 1;
}
struct oslec_state *oslec_create(int len, int adaption_mode)
{
struct oslec_state *ec;
int i;
const int16_t *history;
ec = kzalloc(sizeof(*ec), GFP_KERNEL);
if (!ec)
return NULL;
ec->taps = len;
ec->log2taps = top_bit(len);
ec->curr_pos = ec->taps - 1;
ec->fir_taps16[0] =
kcalloc(ec->taps, sizeof(int16_t), GFP_KERNEL);
if (!ec->fir_taps16[0])
goto error_oom_0;
ec->fir_taps16[1] =
kcalloc(ec->taps, sizeof(int16_t), GFP_KERNEL);
if (!ec->fir_taps16[1])
goto error_oom_1;
history = fir16_create(&ec->fir_state, ec->fir_taps16[0], ec->taps);
if (!history)
goto error_state;
history = fir16_create(&ec->fir_state_bg, ec->fir_taps16[1], ec->taps);
if (!history)
goto error_state_bg;
for (i = 0; i < 5; i++)
ec->xvtx[i] = ec->yvtx[i] = ec->xvrx[i] = ec->yvrx[i] = 0;
ec->cng_level = 1000;
oslec_adaption_mode(ec, adaption_mode);
ec->snapshot = kcalloc(ec->taps, sizeof(int16_t), GFP_KERNEL);
if (!ec->snapshot)
goto error_snap;
ec->cond_met = 0;
ec->pstates = 0;
ec->ltxacc = ec->lrxacc = ec->lcleanacc = ec->lclean_bgacc = 0;
ec->ltx = ec->lrx = ec->lclean = ec->lclean_bg = 0;
ec->tx_1 = ec->tx_2 = ec->rx_1 = ec->rx_2 = 0;
ec->lbgn = ec->lbgn_acc = 0;
ec->lbgn_upper = 200;
ec->lbgn_upper_acc = ec->lbgn_upper << 13;
return ec;
error_snap:
fir16_free(&ec->fir_state_bg);
error_state_bg:
fir16_free(&ec->fir_state);
error_state:
kfree(ec->fir_taps16[1]);
error_oom_1:
kfree(ec->fir_taps16[0]);
error_oom_0:
kfree(ec);
return NULL;
}
EXPORT_SYMBOL_GPL(oslec_create);
void oslec_free(struct oslec_state *ec)
{
int i;
fir16_free(&ec->fir_state);
fir16_free(&ec->fir_state_bg);
for (i = 0; i < 2; i++)
kfree(ec->fir_taps16[i]);
kfree(ec->snapshot);
kfree(ec);
}
EXPORT_SYMBOL_GPL(oslec_free);
void oslec_adaption_mode(struct oslec_state *ec, int adaption_mode)
{
ec->adaption_mode = adaption_mode;
}
EXPORT_SYMBOL_GPL(oslec_adaption_mode);
void oslec_flush(struct oslec_state *ec)
{
int i;
ec->ltxacc = ec->lrxacc = ec->lcleanacc = ec->lclean_bgacc = 0;
ec->ltx = ec->lrx = ec->lclean = ec->lclean_bg = 0;
ec->tx_1 = ec->tx_2 = ec->rx_1 = ec->rx_2 = 0;
ec->lbgn = ec->lbgn_acc = 0;
ec->lbgn_upper = 200;
ec->lbgn_upper_acc = ec->lbgn_upper << 13;
ec->nonupdate_dwell = 0;
fir16_flush(&ec->fir_state);
fir16_flush(&ec->fir_state_bg);
ec->fir_state.curr_pos = ec->taps - 1;
ec->fir_state_bg.curr_pos = ec->taps - 1;
for (i = 0; i < 2; i++)
memset(ec->fir_taps16[i], 0, ec->taps * sizeof(int16_t));
ec->curr_pos = ec->taps - 1;
ec->pstates = 0;
}
EXPORT_SYMBOL_GPL(oslec_flush);
void oslec_snapshot(struct oslec_state *ec)
{
memcpy(ec->snapshot, ec->fir_taps16[0], ec->taps * sizeof(int16_t));
}
EXPORT_SYMBOL_GPL(oslec_snapshot);
/* Dual Path Echo Canceller */
int16_t oslec_update(struct oslec_state *ec, int16_t tx, int16_t rx)
{
int32_t echo_value;
int clean_bg;
int tmp;
int tmp1;
/*
* Input scaling was found be required to prevent problems when tx
* starts clipping. Another possible way to handle this would be the
* filter coefficient scaling.
*/
ec->tx = tx;
ec->rx = rx;
tx >>= 1;
rx >>= 1;
/*
* Filter DC, 3dB point is 160Hz (I think), note 32 bit precision
* required otherwise values do not track down to 0. Zero at DC, Pole
* at (1-Beta) on real axis. Some chip sets (like Si labs) don't
* need this, but something like a $10 X100P card does. Any DC really
* slows down convergence.
*
* Note: removes some low frequency from the signal, this reduces the
* speech quality when listening to samples through headphones but may
* not be obvious through a telephone handset.
*
* Note that the 3dB frequency in radians is approx Beta, e.g. for Beta
* = 2^(-3) = 0.125, 3dB freq is 0.125 rads = 159Hz.
*/
if (ec->adaption_mode & ECHO_CAN_USE_RX_HPF) {
tmp = rx << 15;
/*
* Make sure the gain of the HPF is 1.0. This can still
* saturate a little under impulse conditions, and it might
* roll to 32768 and need clipping on sustained peak level
* signals. However, the scale of such clipping is small, and
* the error due to any saturation should not markedly affect
* the downstream processing.
*/
tmp -= (tmp >> 4);
ec->rx_1 += -(ec->rx_1 >> DC_LOG2BETA) + tmp - ec->rx_2;
/*
* hard limit filter to prevent clipping. Note that at this
* stage rx should be limited to +/- 16383 due to right shift
* above
*/
tmp1 = ec->rx_1 >> 15;
if (tmp1 > 16383)
tmp1 = 16383;
if (tmp1 < -16383)
tmp1 = -16383;
rx = tmp1;
ec->rx_2 = tmp;
}
/* Block average of power in the filter states. Used for
adaption power calculation. */
{
int new, old;
/* efficient "out with the old and in with the new" algorithm so
we don't have to recalculate over the whole block of
samples. */
new = (int)tx * (int)tx;
old = (int)ec->fir_state.history[ec->fir_state.curr_pos] *
(int)ec->fir_state.history[ec->fir_state.curr_pos];
ec->pstates +=
((new - old) + (1 << (ec->log2taps - 1))) >> ec->log2taps;
if (ec->pstates < 0)
ec->pstates = 0;
}
/* Calculate short term average levels using simple single pole IIRs */
ec->ltxacc += abs(tx) - ec->ltx;
ec->ltx = (ec->ltxacc + (1 << 4)) >> 5;
ec->lrxacc += abs(rx) - ec->lrx;
ec->lrx = (ec->lrxacc + (1 << 4)) >> 5;
/* Foreground filter */
ec->fir_state.coeffs = ec->fir_taps16[0];
echo_value = fir16(&ec->fir_state, tx);
ec->clean = rx - echo_value;
ec->lcleanacc += abs(ec->clean) - ec->lclean;
ec->lclean = (ec->lcleanacc + (1 << 4)) >> 5;
/* Background filter */
echo_value = fir16(&ec->fir_state_bg, tx);
clean_bg = rx - echo_value;
ec->lclean_bgacc += abs(clean_bg) - ec->lclean_bg;
ec->lclean_bg = (ec->lclean_bgacc + (1 << 4)) >> 5;
/* Background Filter adaption */
/* Almost always adapt bg filter, just simple DT and energy
detection to minimise adaption in cases of strong double talk.
However this is not critical for the dual path algorithm.
*/
ec->factor = 0;
ec->shift = 0;
if ((ec->nonupdate_dwell == 0)) {
int p, logp, shift;
/* Determine:
f = Beta * clean_bg_rx/P ------ (1)
where P is the total power in the filter states.
The Boffins have shown that if we obey (1) we converge
quickly and avoid instability.
The correct factor f must be in Q30, as this is the fixed
point format required by the lms_adapt_bg() function,
therefore the scaled version of (1) is:
(2^30) * f = (2^30) * Beta * clean_bg_rx/P
factor = (2^30) * Beta * clean_bg_rx/P ----- (2)
We have chosen Beta = 0.25 by experiment, so:
factor = (2^30) * (2^-2) * clean_bg_rx/P
(30 - 2 - log2(P))
factor = clean_bg_rx 2 ----- (3)
To avoid a divide we approximate log2(P) as top_bit(P),
which returns the position of the highest non-zero bit in
P. This approximation introduces an error as large as a
factor of 2, but the algorithm seems to handle it OK.
Come to think of it a divide may not be a big deal on a
modern DSP, so its probably worth checking out the cycles
for a divide versus a top_bit() implementation.
*/
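/*
 * Worked example (numbers illustrative): with 256 taps (log2taps = 8)
 * and pstates = 960, p = 64 + 960 = 1024, top_bit(p) = 10, so
 * logp = 18 and shift = 30 - 2 - 18 = 10.  lms_adapt_bg() then scales
 * clean_bg up by 2^10 before the per-tap multiply, approximating the
 * Q30 factor (2^30 * Beta * clean_bg / P) from the comment above with
 * a power-of-two estimate of P.
 */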
p = MIN_TX_POWER_FOR_ADAPTION + ec->pstates;
logp = top_bit(p) + ec->log2taps;
shift = 30 - 2 - logp;
ec->shift = shift;
lms_adapt_bg(ec, clean_bg, shift);
}
/* very simple DTD to make sure we don't try to adapt with strong
near end speech */
ec->adapt = 0;
if ((ec->lrx > MIN_RX_POWER_FOR_ADAPTION) && (ec->lrx > ec->ltx))
ec->nonupdate_dwell = DTD_HANGOVER;
if (ec->nonupdate_dwell)
ec->nonupdate_dwell--;
/* Transfer logic */
/* These conditions are from the dual path paper [1], I messed with
them a bit to improve performance. */
if ((ec->adaption_mode & ECHO_CAN_USE_ADAPTION) &&
(ec->nonupdate_dwell == 0) &&
/* (ec->Lclean_bg < 0.875*ec->Lclean) */
(8 * ec->lclean_bg < 7 * ec->lclean) &&
/* (ec->Lclean_bg < 0.125*ec->Ltx) */
(8 * ec->lclean_bg < ec->ltx)) {
if (ec->cond_met == 6) {
/*
* BG filter has had better results for 6 consecutive
* samples
*/
ec->adapt = 1;
memcpy(ec->fir_taps16[0], ec->fir_taps16[1],
ec->taps * sizeof(int16_t));
} else
ec->cond_met++;
} else
ec->cond_met = 0;
/* Non-Linear Processing */
ec->clean_nlp = ec->clean;
if (ec->adaption_mode & ECHO_CAN_USE_NLP) {
/*
* Non-linear processor - a fancy way to say "zap small
* signals, to avoid residual echo due to (uLaw/ALaw)
* non-linearity in the channel.".
*/
if ((16 * ec->lclean < ec->ltx)) {
/*
* Our e/c has improved echo by at least 24 dB (each
* factor of 2 is 6dB, so 2*2*2*2=16 is the same as
* 6+6+6+6=24dB)
*/
if (ec->adaption_mode & ECHO_CAN_USE_CNG) {
ec->cng_level = ec->lbgn;
/*
* Very elementary comfort noise generation.
* Just random numbers rolled off very vaguely
* Hoth-like. DR: This noise doesn't sound
* quite right to me - I suspect there are some
* overflow issues in the filtering as it's too
* "crackly".
* TODO: debug this, maybe just play noise at
* high level or look at spectrum.
*/
ec->cng_rndnum =
1664525U * ec->cng_rndnum + 1013904223U;
ec->cng_filter =
((ec->cng_rndnum & 0xFFFF) - 32768 +
5 * ec->cng_filter) >> 3;
ec->clean_nlp =
(ec->cng_filter * ec->cng_level * 8) >> 14;
} else if (ec->adaption_mode & ECHO_CAN_USE_CLIP) {
/* This sounds much better than CNG */
if (ec->clean_nlp > ec->lbgn)
ec->clean_nlp = ec->lbgn;
if (ec->clean_nlp < -ec->lbgn)
ec->clean_nlp = -ec->lbgn;
} else {
/*
* just mute the residual, doesn't sound very
* good, used mainly in G168 tests
*/
ec->clean_nlp = 0;
}
} else {
/*
* Background noise estimator. I tried a few
* algorithms here without much luck. This very simple
* one seems to work best, we just average the level
* using a slow (1 sec time const) filter if the
* current level is less than an (experimentally
* derived) constant. This means we don't include high
* level signals like near end speech. When combined
* with CNG or especially CLIP seems to work OK.
*/
if (ec->lclean < 40) {
ec->lbgn_acc += abs(ec->clean) - ec->lbgn;
ec->lbgn = (ec->lbgn_acc + (1 << 11)) >> 12;
}
}
}
/* Roll around the taps buffer */
if (ec->curr_pos <= 0)
ec->curr_pos = ec->taps;
ec->curr_pos--;
if (ec->adaption_mode & ECHO_CAN_DISABLE)
ec->clean_nlp = rx;
/* Output scaled back up again to match input scaling */
return (int16_t) ec->clean_nlp << 1;
}
EXPORT_SYMBOL_GPL(oslec_update);
/* This function is separated from the echo canceller as it is usually called
as part of the tx process. See rx HP (DC blocking) filter above, it's
the same design.
Some soft phones send speech signals with a lot of low frequency
energy, e.g. down to 20Hz. This can make the hybrid non-linear
which causes the echo canceller to fall over. This filter can help
by removing any low frequency before it gets to the tx port of the
hybrid.
It can also help by removing any DC in the tx signal. DC is bad
for LMS algorithms.
This is one of the classic DC removal filters, adjusted to provide
sufficient bass rolloff to meet the above requirement to protect hybrids
from things that upset them. The difference between successive samples
produces a lousy HPF, and then a suitably placed pole flattens things out.
The final result is a nicely rolled off bass end. The filtering is
implemented with extended fractional precision, which noise shapes things,
giving very clean DC removal.
*/
int16_t oslec_hpf_tx(struct oslec_state *ec, int16_t tx)
{
int tmp;
int tmp1;
if (ec->adaption_mode & ECHO_CAN_USE_TX_HPF) {
tmp = tx << 15;
/*
* Make sure the gain of the HPF is 1.0. The first can still
* saturate a little under impulse conditions, and it might
* roll to 32768 and need clipping on sustained peak level
* signals. However, the scale of such clipping is small, and
* the error due to any saturation should not markedly affect
* the downstream processing.
*/
tmp -= (tmp >> 4);
ec->tx_1 += -(ec->tx_1 >> DC_LOG2BETA) + tmp - ec->tx_2;
tmp1 = ec->tx_1 >> 15;
if (tmp1 > 32767)
tmp1 = 32767;
if (tmp1 < -32767)
tmp1 = -32767;
tx = tmp1;
ec->tx_2 = tmp;
}
return tx;
}
EXPORT_SYMBOL_GPL(oslec_hpf_tx);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("David Rowe");
MODULE_DESCRIPTION("Open Source Line Echo Canceller");
MODULE_VERSION("0.3.0");

187
drivers/misc/echo/echo.h Normal file
View file

@ -0,0 +1,187 @@
/*
* SpanDSP - a series of DSP components for telephony
*
* echo.c - A line echo canceller. This code is being developed
* against and partially complies with G168.
*
* Written by Steve Underwood <steveu@coppice.org>
* and David Rowe <david_at_rowetel_dot_com>
*
* Copyright (C) 2001 Steve Underwood and 2007 David Rowe
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef __ECHO_H
#define __ECHO_H
/*
Line echo cancellation for voice
What does it do?
This module aims to provide G.168-2002 compliant echo cancellation, to remove
electrical echoes (e.g. from 2-4 wire hybrids) from voice calls.
How does it work?
The heart of the echo canceller is a FIR filter. This is adapted to match the
echo impulse response of the telephone line. It must be long enough to
adequately cover the duration of that impulse response. The signal transmitted
to the telephone line is passed through the FIR filter. Once the FIR is
properly adapted, the resulting output is an estimate of the echo signal
received from the line. This is subtracted from the received signal. The result
is an estimate of the signal which originated at the far end of the line, free
from echoes of our own transmitted signal.
The least mean squares (LMS) algorithm is attributed to Widrow and Hoff, and
was introduced in 1960. It is the commonest form of filter adaption used in
things like modem line equalisers and line echo cancellers. There it works very
well. However, it only works well for signals of constant amplitude. It works
very poorly for things like speech echo cancellation, where the signal level
varies widely. This is quite easy to fix. If the signal level is normalised -
similar to applying AGC - LMS can work as well for a signal of varying
amplitude as it does for a modem signal. This normalised least mean squares
(NLMS) algorithm is the commonest one used for speech echo cancellation. Many
other algorithms exist - e.g. RLS (essentially the same as Kalman filtering),
FAP, etc. Some perform significantly better than NLMS. However, factors such
as computational complexity and patents favour the use of NLMS.
A simple refinement to NLMS can improve its performance with speech. NLMS tends
to adapt best to the strongest parts of a signal. If the signal is white noise,
the NLMS algorithm works very well. However, speech has more low frequency than
high frequency content. Pre-whitening (i.e. filtering the signal to flatten its
spectrum) the echo signal improves the adapt rate for speech, and ensures the
final residual signal is not heavily biased towards high frequencies. A very
low complexity filter is adequate for this, so pre-whitening adds little to the
compute requirements of the echo canceller.
An FIR filter adapted using pre-whitened NLMS performs well, provided certain
conditions are met:
- The transmitted signal has poor self-correlation.
- There is no signal being generated within the environment being
cancelled.
The difficulty is that neither of these can be guaranteed.
If the adaption is performed while transmitting noise (or something fairly
noise like, such as voice) the adaption works very well. If the adaption is
performed while transmitting something highly correlative (typically narrow
band energy such as signalling tones or DTMF), the adaption can go seriously
wrong. The reason is there is only one solution for the adaption on a near
random signal - the impulse response of the line. For a repetitive signal,
there are any number of solutions which converge the adaption, and nothing
guides the adaption to choose the generalised one. Allowing an untrained
canceller to converge on this kind of narrowband energy is probably a good thing,
since at least it cancels the tones. Allowing a well converged canceller to
continue converging on such energy is just a way to ruin its generalised
adaption. A narrowband detector is needed, so adaption can be suspended at
appropriate times.
The adaption process is based on trying to eliminate the received signal. When
there is any signal from within the environment being cancelled it may upset
the adaption process. Similarly, if the signal we are transmitting is small,
noise may dominate and disturb the adaption process. If we can ensure that the
adaption is only performed when we are transmitting a significant signal level,
and the environment is not, things will be OK. Clearly, it is easy to tell when
we are sending a significant signal. Telling if the environment is generating
a significant signal, and doing it with sufficient speed that the adaption will
not have diverged too much before we stop it, is a little harder.
The key problem in detecting when the environment is sourcing significant
energy is that we must do this very quickly. Given a reasonably long sample of
the received signal, there are a number of strategies which may be used to
assess whether that signal contains a strong far end component. However, by the
time that assessment is complete the far end signal will have already caused
major mis-convergence in the adaption process. An assessment algorithm is
needed which produces a fairly accurate result from a very short burst of far
end energy.
How do I use it?
The echo canceller processes both the transmit and receive streams sample by
sample. The processing function is not declared inline. Unfortunately,
cancellation requires many operations per sample, so the call overhead is only
a minor burden.
*/
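/*
 * For reference, the basic NLMS coefficient update discussed above is,
 * in conventional notation (not code from this driver):
 *
 *   h_i(n+1) = h_i(n) + (mu / (eps + ||x(n)||^2)) * e(n) * x(n-i)
 *
 * where e(n) is the residual (clean) signal and x(n) the transmitted
 * reference.  oslec_update() in echo.c implements a fixed point variant
 * of this idea with mu (Beta) = 0.25, the power estimate kept
 * incrementally in pstates, and the regularisation term provided by
 * MIN_TX_POWER_FOR_ADAPTION.
 */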
#include "fir.h"
#include "oslec.h"
/*
G.168 echo canceller descriptor. This defines the working state for a line
echo canceller.
*/
struct oslec_state {
int16_t tx;
int16_t rx;
int16_t clean;
int16_t clean_nlp;
int nonupdate_dwell;
int curr_pos;
int taps;
int log2taps;
int adaption_mode;
int cond_met;
int32_t pstates;
int16_t adapt;
int32_t factor;
int16_t shift;
/* Average levels and averaging filter states */
int ltxacc;
int lrxacc;
int lcleanacc;
int lclean_bgacc;
int ltx;
int lrx;
int lclean;
int lclean_bg;
int lbgn;
int lbgn_acc;
int lbgn_upper;
int lbgn_upper_acc;
/* foreground and background filter states */
struct fir16_state_t fir_state;
struct fir16_state_t fir_state_bg;
int16_t *fir_taps16[2];
/* DC blocking filter states */
int tx_1;
int tx_2;
int rx_1;
int rx_2;
/* optional High Pass Filter states */
int32_t xvtx[5];
int32_t yvtx[5];
int32_t xvrx[5];
int32_t yvrx[5];
/* Parameters for the optional Hoth noise generator */
int cng_level;
int cng_rndnum;
int cng_filter;
/* snapshot sample of coeffs used for development */
int16_t *snapshot;
};
#endif /* __ECHO_H */

216
drivers/misc/echo/fir.h Normal file
View file

@ -0,0 +1,216 @@
/*
* SpanDSP - a series of DSP components for telephony
*
* fir.h - General telephony FIR routines
*
* Written by Steve Underwood <steveu@coppice.org>
*
* Copyright (C) 2002 Steve Underwood
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#if !defined(_FIR_H_)
#define _FIR_H_
/*
Blackfin NOTES & IDEAS:
A simple dot product function is used to implement the filter. This performs
just one MAC/cycle which is inefficient but was easy to implement as a first
pass. The current Blackfin code also uses an unrolled form of the filter
history to avoid 0 length hardware loop issues. This is wasteful of
memory.
Ideas for improvement:
1/ Rewrite filter for dual MAC inner loop. The issue here is handling
history sample offsets that are 16 bit aligned - the dual MAC needs
32 bit alignment. There are some good examples in libbfdsp.
2/ Use the hardware circular buffer facility to halve memory usage.
3/ Consider using internal memory.
Using less memory might also improve speed as cache misses will be
reduced. A drop in MIPs and memory approaching 50% should be
possible.
The foreground and background filters currently use a total of
about 10 MIPs/ch as measured with speedtest.c on a 256 TAP echo
can.
*/
/*
* 16 bit integer FIR descriptor. This defines the working state for a single
* instance of an FIR filter using 16 bit integer coefficients.
*/
struct fir16_state_t {
int taps;
int curr_pos;
const int16_t *coeffs;
int16_t *history;
};
/*
* 32 bit integer FIR descriptor. This defines the working state for a single
* instance of an FIR filter using 32 bit integer coefficients, and filtering
* 16 bit integer data.
*/
struct fir32_state_t {
int taps;
int curr_pos;
const int32_t *coeffs;
int16_t *history;
};
/*
* Floating point FIR descriptor. This defines the working state for a single
* instance of an FIR filter using floating point coefficients and data.
*/
struct fir_float_state_t {
int taps;
int curr_pos;
const float *coeffs;
float *history;
};
static inline const int16_t *fir16_create(struct fir16_state_t *fir,
const int16_t *coeffs, int taps)
{
fir->taps = taps;
fir->curr_pos = taps - 1;
fir->coeffs = coeffs;
#if defined(__bfin__)
fir->history = kcalloc(2 * taps, sizeof(int16_t), GFP_KERNEL);
#else
fir->history = kcalloc(taps, sizeof(int16_t), GFP_KERNEL);
#endif
return fir->history;
}
static inline void fir16_flush(struct fir16_state_t *fir)
{
#if defined(__bfin__)
memset(fir->history, 0, 2 * fir->taps * sizeof(int16_t));
#else
memset(fir->history, 0, fir->taps * sizeof(int16_t));
#endif
}
static inline void fir16_free(struct fir16_state_t *fir)
{
kfree(fir->history);
}
#ifdef __bfin__
static inline int32_t dot_asm(short *x, short *y, int len)
{
int dot;
len--;
__asm__("I0 = %1;\n\t"
"I1 = %2;\n\t"
"A0 = 0;\n\t"
"R0.L = W[I0++] || R1.L = W[I1++];\n\t"
"LOOP dot%= LC0 = %3;\n\t"
"LOOP_BEGIN dot%=;\n\t"
"A0 += R0.L * R1.L (IS) || R0.L = W[I0++] || R1.L = W[I1++];\n\t"
"LOOP_END dot%=;\n\t"
"A0 += R0.L*R1.L (IS);\n\t"
"R0 = A0;\n\t"
"%0 = R0;\n\t"
: "=&d"(dot)
: "a"(x), "a"(y), "a"(len)
: "I0", "I1", "A1", "A0", "R0", "R1"
);
return dot;
}
#endif
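/*
 * fir16()/fir32() below compute y[n] = sum(coeffs[i] * x[n - i]) over a
 * circular history buffer.  curr_pos walks backwards through the buffer,
 * so the convolution is split into two loops: one reading history from
 * the start of the buffer and one reading from curr_pos onwards.
 * Example (illustrative): taps = 4, curr_pos = 1 gives offset2 = 1 and
 * offset1 = 3, so coeffs[3] pairs with history[0] and coeffs[2], [1], [0]
 * pair with history[3], [2], [1]; coeffs[0] always multiplies the newest
 * sample.
 */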
static inline int16_t fir16(struct fir16_state_t *fir, int16_t sample)
{
int32_t y;
#if defined(__bfin__)
fir->history[fir->curr_pos] = sample;
fir->history[fir->curr_pos + fir->taps] = sample;
y = dot_asm((int16_t *) fir->coeffs, &fir->history[fir->curr_pos],
fir->taps);
#else
int i;
int offset1;
int offset2;
fir->history[fir->curr_pos] = sample;
offset2 = fir->curr_pos;
offset1 = fir->taps - offset2;
y = 0;
for (i = fir->taps - 1; i >= offset1; i--)
y += fir->coeffs[i] * fir->history[i - offset1];
for (; i >= 0; i--)
y += fir->coeffs[i] * fir->history[i + offset2];
#endif
if (fir->curr_pos <= 0)
fir->curr_pos = fir->taps;
fir->curr_pos--;
return (int16_t) (y >> 15);
}
static inline const int16_t *fir32_create(struct fir32_state_t *fir,
const int32_t *coeffs, int taps)
{
fir->taps = taps;
fir->curr_pos = taps - 1;
fir->coeffs = coeffs;
fir->history = kcalloc(taps, sizeof(int16_t), GFP_KERNEL);
return fir->history;
}
static inline void fir32_flush(struct fir32_state_t *fir)
{
memset(fir->history, 0, fir->taps * sizeof(int16_t));
}
static inline void fir32_free(struct fir32_state_t *fir)
{
kfree(fir->history);
}
static inline int16_t fir32(struct fir32_state_t *fir, int16_t sample)
{
int i;
int32_t y;
int offset1;
int offset2;
fir->history[fir->curr_pos] = sample;
offset2 = fir->curr_pos;
offset1 = fir->taps - offset2;
y = 0;
for (i = fir->taps - 1; i >= offset1; i--)
y += fir->coeffs[i] * fir->history[i - offset1];
for (; i >= 0; i--)
y += fir->coeffs[i] * fir->history[i + offset2];
if (fir->curr_pos <= 0)
fir->curr_pos = fir->taps;
fir->curr_pos--;
return (int16_t) (y >> 15);
}
#endif

94
drivers/misc/echo/oslec.h Normal file
View file

@ -0,0 +1,94 @@
/*
* OSLEC - A line echo canceller. This code is being developed
* against and partially complies with G168. Using code from SpanDSP
*
* Written by Steve Underwood <steveu@coppice.org>
* and David Rowe <david_at_rowetel_dot_com>
*
* Copyright (C) 2001 Steve Underwood and 2007-2008 David Rowe
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
*/
#ifndef __OSLEC_H
#define __OSLEC_H
/* Mask bits for the adaption mode */
#define ECHO_CAN_USE_ADAPTION 0x01
#define ECHO_CAN_USE_NLP 0x02
#define ECHO_CAN_USE_CNG 0x04
#define ECHO_CAN_USE_CLIP 0x08
#define ECHO_CAN_USE_TX_HPF 0x10
#define ECHO_CAN_USE_RX_HPF 0x20
#define ECHO_CAN_DISABLE 0x40
/**
* oslec_state: G.168 echo canceller descriptor.
*
* This defines the working state for a line echo canceller.
*/
struct oslec_state;
/**
* oslec_create - Create a voice echo canceller context.
* @len: The length of the canceller, in samples.
* @adaption_mode: A bitwise OR of the ECHO_CAN_* adaption mode flags.
* @return: The new canceller context, or NULL if the canceller could not be
* created.
*/
struct oslec_state *oslec_create(int len, int adaption_mode);
/**
* oslec_free - Free a voice echo canceller context.
* @ec: The echo canceller context.
*/
void oslec_free(struct oslec_state *ec);
/**
* oslec_flush - Flush (reinitialise) a voice echo canceller context.
* @ec: The echo canceller context.
*/
void oslec_flush(struct oslec_state *ec);
/**
* oslec_adaption_mode - Set the adaption mode of a voice echo canceller context.
* @ec: The echo canceller context.
* @adaption_mode: The mode.
*/
void oslec_adaption_mode(struct oslec_state *ec, int adaption_mode);
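/**
 * oslec_snapshot - Take a snapshot of the working state of a voice echo
 * canceller context (intended for debugging/monitoring; the exact
 * semantics are defined by the implementation in oslec.c).
 * @ec: The echo canceller context.
 */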
void oslec_snapshot(struct oslec_state *ec);
/**
* oslec_update - Process a sample through a voice echo canceller.
* @ec: The echo canceller context.
* @tx: The transmitted audio sample.
* @rx: The received audio sample.
*
* The return value is the clean (echo cancelled) received sample.
*/
int16_t oslec_update(struct oslec_state *ec, int16_t tx, int16_t rx);
/**
* oslec_hpf_tx - High-pass filter the tx signal.
* @ec: The echo canceller context.
* @tx: The transmitted audio sample.
*
* The return value is the high-pass filtered transmit sample; send this to your D/A.
*/
int16_t oslec_hpf_tx(struct oslec_state *ec, int16_t tx);
#endif /* __OSLEC_H */
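A minimal usage sketch (illustration only; the 256-tap length and the helper
names below are assumptions, not part of this API):

	#include "oslec.h"

	/* 256 taps = a 32 ms echo tail at 8 kHz sampling */
	static struct oslec_state *example_ec_init(void)
	{
		return oslec_create(256,
				ECHO_CAN_USE_ADAPTION | ECHO_CAN_USE_NLP);
	}

	/* Hypothetical per-block processing in the audio path */
	static void example_cancel_block(struct oslec_state *ec,
					 int16_t *tx, int16_t *rx, int len)
	{
		int i;

		for (i = 0; i < len; i++) {
			/* optional tx high-pass filter before the D/A */
			tx[i] = oslec_hpf_tx(ec, tx[i]);
			/* echo-cancelled rx sample */
			rx[i] = oslec_update(ec, tx[i], rx[i]);
		}
	}

	/* Tear down with oslec_free(ec) when the channel closes. */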

112
drivers/misc/eeprom/Kconfig Normal file
View file

@ -0,0 +1,112 @@
menu "EEPROM support"
config EEPROM_AT24
tristate "I2C EEPROMs / RAMs / ROMs from most vendors"
depends on I2C && SYSFS
help
Enable this driver to get read/write support for most I2C EEPROMs
and compatible devices like FRAMs, SRAMs, ROMs etc., after you
configure the driver to know about each chip on your target
board. Use these generic chip names instead of vendor-specific
ones like at24c64, 24lc02 or fm24c04:
24c00, 24c01, 24c02, spd (readonly 24c02), 24c04, 24c08,
24c16, 24c32, 24c64, 24c128, 24c256, 24c512, 24c1024
Unless you like data loss puzzles, always be sure that any chip
you configure as a 24c32 (32 kbit) or larger is NOT really a
24c16 (16 kbit) or smaller, and vice versa. Marking the chip
as read-only won't help recover from this. Also, if your chip
has any software write-protect mechanism you may want to review the
code to make sure this driver won't turn it on by accident.
If you use this with an SMBus adapter instead of an I2C adapter,
full functionality is not available. Only smaller devices are
supported (24c16 and below, max 4 kByte).
This driver can also be built as a module. If so, the module
will be called at24.
config EEPROM_AT25
tristate "SPI EEPROMs from most vendors"
depends on SPI && SYSFS
help
Enable this driver to get read/write support to most SPI EEPROMs,
after you configure the board init code to know about each eeprom
on your target board.
This driver can also be built as a module. If so, the module
will be called at25.
config EEPROM_LEGACY
tristate "Old I2C EEPROM reader"
depends on I2C && SYSFS
help
If you say yes here you get read-only access to the EEPROM data
available on modern memory DIMMs and Sony Vaio laptops via I2C. Such
EEPROMs could theoretically be available on other devices as well.
This driver can also be built as a module. If so, the module
will be called eeprom.
config EEPROM_MAX6875
tristate "Maxim MAX6874/5 power supply supervisor"
depends on I2C
help
If you say yes here you get read-only support for the user EEPROM of
the Maxim MAX6874/5 EEPROM-programmable, quad power-supply
sequencer/supervisor.
All other features of this chip should be accessed via i2c-dev.
This driver can also be built as a module. If so, the module
will be called max6875.
config EEPROM_93CX6
tristate "EEPROM 93CX6 support"
help
This is a driver for the EEPROM chipsets 93c46 and 93c66.
The driver supports both read and write commands.
If unsure, say N.
config EEPROM_93XX46
tristate "Microwire EEPROM 93XX46 support"
depends on SPI && SYSFS
help
Driver for the microwire EEPROM chipsets 93xx46x. The driver
supports both read and write commands and also the command to
erase the whole EEPROM.
This driver can also be built as a module. If so, the module
will be called eeprom_93xx46.
If unsure, say N.
config EEPROM_DIGSY_MTC_CFG
bool "DigsyMTC display configuration EEPROMs device"
depends on GPIO_MPC5200 && SPI_GPIO
help
This option enables access to the display configuration EEPROMs
on the digsy_mtc board. You also have to select the Microwire
EEPROM 93XX46 driver. Sysfs entries will be created for that
EEPROM, allowing you to read/write the configuration data or to
erase the whole EEPROM.
If unsure, say N.
config EEPROM_SUNXI_SID
tristate "Allwinner sunxi security ID support"
depends on ARCH_SUNXI && SYSFS
help
This is a driver for the 'security ID' available on various Allwinner
devices.
Due to the potential risks involved with changing e-fuses,
this driver is read-only.
This driver can also be built as a module. If so, the module
will be called sunxi_sid.
endmenu

8
drivers/misc/eeprom/Makefile Normal file
View file

@ -0,0 +1,8 @@
obj-$(CONFIG_EEPROM_AT24) += at24.o
obj-$(CONFIG_EEPROM_AT25) += at25.o
obj-$(CONFIG_EEPROM_LEGACY) += eeprom.o
obj-$(CONFIG_EEPROM_MAX6875) += max6875.o
obj-$(CONFIG_EEPROM_93CX6) += eeprom_93cx6.o
obj-$(CONFIG_EEPROM_93XX46) += eeprom_93xx46.o
obj-$(CONFIG_EEPROM_SUNXI_SID) += sunxi_sid.o
obj-$(CONFIG_EEPROM_DIGSY_MTC_CFG) += digsy_mtc_eeprom.o

696
drivers/misc/eeprom/at24.c Normal file
View file

@ -0,0 +1,696 @@
/*
* at24.c - handle most I2C EEPROMs
*
* Copyright (C) 2005-2007 David Brownell
* Copyright (C) 2008 Wolfram Sang, Pengutronix
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/sysfs.h>
#include <linux/mod_devicetable.h>
#include <linux/log2.h>
#include <linux/bitops.h>
#include <linux/jiffies.h>
#include <linux/of.h>
#include <linux/i2c.h>
#include <linux/platform_data/at24.h>
/*
* I2C EEPROMs from most vendors are inexpensive and mostly interchangeable.
* Differences between different vendor product lines (like Atmel AT24C or
* MicroChip 24LC, etc) won't much matter for typical read/write access.
* There are also I2C RAM chips, likewise interchangeable. One example
* would be the PCF8570, which acts like a 24c02 EEPROM (256 bytes).
*
* However, misconfiguration can lose data. "Set 16-bit memory address"
* to a part with 8-bit addressing will overwrite data. Writing with too
* big a page size also loses data. And it's not safe to assume that the
* conventional addresses 0x50..0x57 only hold eeproms; a PCF8563 RTC
* uses 0x51, for just one example.
*
* Accordingly, explicit board-specific configuration data should be used
* in almost all cases. (One partial exception is an SMBus used to access
* "SPD" data for DRAM sticks. Those only use 24c02 EEPROMs.)
*
* So this driver uses "new style" I2C driver binding, expecting to be
* told what devices exist. That may be in arch/X/mach-Y/board-Z.c or
* similar kernel-resident tables; or, configuration data coming from
* a bootloader.
*
* Other than binding model, current differences from "eeprom" driver are
* that this one handles write access and isn't restricted to 24c02 devices.
* It also handles larger devices (32 kbit and up) with two-byte addresses,
* which won't work on pure SMBus systems.
*/
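/*
 * Illustration only (hypothetical board code, not part of this driver):
 * a platform would typically describe the chip like this, assuming a
 * 24c32 on bus 0 at slave address 0x50:
 *
 *	static struct at24_platform_data board_eeprom = {
 *		.byte_len	= 4096,
 *		.page_size	= 32,
 *		.flags		= AT24_FLAG_ADDR16,
 *	};
 *
 *	static struct i2c_board_info board_i2c[] __initdata = {
 *		{
 *			I2C_BOARD_INFO("24c32", 0x50),
 *			.platform_data = &board_eeprom,
 *		},
 *	};
 *
 * and register it from the board's init code with
 * i2c_register_board_info(0, board_i2c, ARRAY_SIZE(board_i2c)).
 */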
struct at24_data {
struct at24_platform_data chip;
struct memory_accessor macc;
int use_smbus;
/*
* Lock protects against activities from other Linux tasks,
* but not from changes by other I2C masters.
*/
struct mutex lock;
struct bin_attribute bin;
u8 *writebuf;
unsigned write_max;
unsigned num_addresses;
/*
* Some chips tie up multiple I2C addresses; dummy devices reserve
* them for us, and we'll use them with SMBus calls.
*/
struct i2c_client *client[];
};
/*
* This parameter is to help this driver avoid blocking other drivers out
* of I2C for potentially troublesome amounts of time. With a 100 kHz I2C
* clock, one 256 byte read takes about 1/43 second which is excessive;
* but the 1/170 second it takes at 400 kHz may be quite reasonable; and
* at 1 MHz (Fm+) a 1/430 second delay could easily be invisible.
*
* This value is forced to be a power of two so that writes align on pages.
*/
static unsigned io_limit = 128;
module_param(io_limit, uint, 0);
MODULE_PARM_DESC(io_limit, "Maximum bytes per I/O (default 128)");
/*
* Specs often allow 5 msec for a page write, sometimes 20 msec;
* it's important to recover from write timeouts.
*/
static unsigned write_timeout = 25;
module_param(write_timeout, uint, 0);
MODULE_PARM_DESC(write_timeout, "Time (in ms) to try writes (default 25)");
#define AT24_SIZE_BYTELEN 5
#define AT24_SIZE_FLAGS 8
#define AT24_BITMASK(x) (BIT(x) - 1)
/* create non-zero magic value for given eeprom parameters */
#define AT24_DEVICE_MAGIC(_len, _flags) \
((1 << AT24_SIZE_FLAGS | (_flags)) \
<< AT24_SIZE_BYTELEN | ilog2(_len))
static const struct i2c_device_id at24_ids[] = {
/* needs 8 addresses as A0-A2 are ignored */
{ "24c00", AT24_DEVICE_MAGIC(128 / 8, AT24_FLAG_TAKE8ADDR) },
/* old variants can't be handled with this generic entry! */
{ "24c01", AT24_DEVICE_MAGIC(1024 / 8, 0) },
{ "24c02", AT24_DEVICE_MAGIC(2048 / 8, 0) },
/* spd is a 24c02 in memory DIMMs */
{ "spd", AT24_DEVICE_MAGIC(2048 / 8,
AT24_FLAG_READONLY | AT24_FLAG_IRUGO) },
{ "24c04", AT24_DEVICE_MAGIC(4096 / 8, 0) },
/* 24rf08 quirk is handled at i2c-core */
{ "24c08", AT24_DEVICE_MAGIC(8192 / 8, 0) },
{ "24c16", AT24_DEVICE_MAGIC(16384 / 8, 0) },
{ "24c32", AT24_DEVICE_MAGIC(32768 / 8, AT24_FLAG_ADDR16) },
{ "24c64", AT24_DEVICE_MAGIC(65536 / 8, AT24_FLAG_ADDR16) },
{ "24c128", AT24_DEVICE_MAGIC(131072 / 8, AT24_FLAG_ADDR16) },
{ "24c256", AT24_DEVICE_MAGIC(262144 / 8, AT24_FLAG_ADDR16) },
{ "24c512", AT24_DEVICE_MAGIC(524288 / 8, AT24_FLAG_ADDR16) },
{ "24c1024", AT24_DEVICE_MAGIC(1048576 / 8, AT24_FLAG_ADDR16) },
{ "at24", 0 },
{ /* END OF LIST */ }
};
MODULE_DEVICE_TABLE(i2c, at24_ids);
/*-------------------------------------------------------------------------*/
/*
* This routine supports chips which consume multiple I2C addresses. It
* computes the addressing information to be used for a given r/w request.
* Assumes that sanity checks for offset happened at sysfs-layer.
*/
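/*
 * Worked example (hypothetical): a 24c04 without AT24_FLAG_ADDR16 spans
 * two slave addresses (e.g. 0x50 and 0x51). For *offset == 0x1a3 this
 * returns at24->client[1] and rewrites *offset to 0xa3.
 */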
static struct i2c_client *at24_translate_offset(struct at24_data *at24,
unsigned *offset)
{
unsigned i;
if (at24->chip.flags & AT24_FLAG_ADDR16) {
i = *offset >> 16;
*offset &= 0xffff;
} else {
i = *offset >> 8;
*offset &= 0xff;
}
return at24->client[i];
}
static ssize_t at24_eeprom_read(struct at24_data *at24, char *buf,
unsigned offset, size_t count)
{
struct i2c_msg msg[2];
u8 msgbuf[2];
struct i2c_client *client;
unsigned long timeout, read_time;
int status, i;
memset(msg, 0, sizeof(msg));
/*
* REVISIT some multi-address chips don't rollover page reads to
* the next slave address, so we may need to truncate the count.
* Those chips might need another quirk flag.
*
* If the real hardware used four adjacent 24c02 chips and that
* were misconfigured as one 24c08, that would be a similar effect:
* one "eeprom" file not four, but larger reads would fail when
* they crossed certain pages.
*/
/*
* Slave address and byte offset derive from the offset. Always
* set the byte address; on a multi-master board, another master
* may have changed the chip's "current" address pointer.
*/
client = at24_translate_offset(at24, &offset);
if (count > io_limit)
count = io_limit;
switch (at24->use_smbus) {
case I2C_SMBUS_I2C_BLOCK_DATA:
/* Smaller eeproms can work given some SMBus extension calls */
if (count > I2C_SMBUS_BLOCK_MAX)
count = I2C_SMBUS_BLOCK_MAX;
break;
case I2C_SMBUS_WORD_DATA:
count = 2;
break;
case I2C_SMBUS_BYTE_DATA:
count = 1;
break;
default:
/*
* When we have a better choice than SMBus calls, use a
* combined I2C message. Write address; then read up to
* io_limit data bytes. Note that read page rollover helps us
* here (unlike writes). msgbuf is u8 and will cast to our
* needs.
*/
i = 0;
if (at24->chip.flags & AT24_FLAG_ADDR16)
msgbuf[i++] = offset >> 8;
msgbuf[i++] = offset;
msg[0].addr = client->addr;
msg[0].buf = msgbuf;
msg[0].len = i;
msg[1].addr = client->addr;
msg[1].flags = I2C_M_RD;
msg[1].buf = buf;
msg[1].len = count;
}
/*
* Reads fail if the previous write didn't complete yet. We may
* loop a few times until this one succeeds, waiting at least
* long enough for one entire page write to work.
*/
timeout = jiffies + msecs_to_jiffies(write_timeout);
do {
read_time = jiffies;
switch (at24->use_smbus) {
case I2C_SMBUS_I2C_BLOCK_DATA:
status = i2c_smbus_read_i2c_block_data(client, offset,
count, buf);
break;
case I2C_SMBUS_WORD_DATA:
status = i2c_smbus_read_word_data(client, offset);
if (status >= 0) {
buf[0] = status & 0xff;
buf[1] = status >> 8;
status = count;
}
break;
case I2C_SMBUS_BYTE_DATA:
status = i2c_smbus_read_byte_data(client, offset);
if (status >= 0) {
buf[0] = status;
status = count;
}
break;
default:
status = i2c_transfer(client->adapter, msg, 2);
if (status == 2)
status = count;
}
dev_dbg(&client->dev, "read %zu@%d --> %d (%ld)\n",
count, offset, status, jiffies);
if (status == count)
return count;
/* REVISIT: at HZ=100, this is sloooow */
msleep(1);
} while (time_before(read_time, timeout));
return -ETIMEDOUT;
}
static ssize_t at24_read(struct at24_data *at24,
char *buf, loff_t off, size_t count)
{
ssize_t retval = 0;
if (unlikely(!count))
return count;
/*
* Read data from chip, protecting against concurrent updates
* from this host, but not from other I2C masters.
*/
mutex_lock(&at24->lock);
while (count) {
ssize_t status;
status = at24_eeprom_read(at24, buf, off, count);
if (status <= 0) {
if (retval == 0)
retval = status;
break;
}
buf += status;
off += status;
count -= status;
retval += status;
}
mutex_unlock(&at24->lock);
return retval;
}
static ssize_t at24_bin_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *attr,
char *buf, loff_t off, size_t count)
{
struct at24_data *at24;
at24 = dev_get_drvdata(container_of(kobj, struct device, kobj));
return at24_read(at24, buf, off, count);
}
/*
* Note that if the hardware write-protect pin is pulled high, the whole
* chip is normally write protected. But there are plenty of product
* variants here, including OTP fuses and partial chip protect.
*
* We only use page mode writes; the alternative is sloooow. This routine
* writes at most one page.
*/
static ssize_t at24_eeprom_write(struct at24_data *at24, const char *buf,
unsigned offset, size_t count)
{
struct i2c_client *client;
struct i2c_msg msg;
ssize_t status;
unsigned long timeout, write_time;
unsigned next_page;
/* Get corresponding I2C address and adjust offset */
client = at24_translate_offset(at24, &offset);
/* write_max is at most a page */
if (count > at24->write_max)
count = at24->write_max;
/* Never roll over backwards, to the start of this page */
next_page = roundup(offset + 1, at24->chip.page_size);
if (offset + count > next_page)
count = next_page - offset;
/* If we'll use I2C calls for I/O, set up the message */
if (!at24->use_smbus) {
int i = 0;
msg.addr = client->addr;
msg.flags = 0;
/* msg.buf is u8 and casts will mask the values */
msg.buf = at24->writebuf;
if (at24->chip.flags & AT24_FLAG_ADDR16)
msg.buf[i++] = offset >> 8;
msg.buf[i++] = offset;
memcpy(&msg.buf[i], buf, count);
msg.len = i + count;
}
/*
* Writes fail if the previous one didn't complete yet. We may
* loop a few times until this one succeeds, waiting at least
* long enough for one entire page write to work.
*/
timeout = jiffies + msecs_to_jiffies(write_timeout);
do {
write_time = jiffies;
if (at24->use_smbus) {
status = i2c_smbus_write_i2c_block_data(client,
offset, count, buf);
if (status == 0)
status = count;
} else {
status = i2c_transfer(client->adapter, &msg, 1);
if (status == 1)
status = count;
}
dev_dbg(&client->dev, "write %zu@%d --> %zd (%ld)\n",
count, offset, status, jiffies);
if (status == count)
return count;
/* REVISIT: at HZ=100, this is sloooow */
msleep(1);
} while (time_before(write_time, timeout));
return -ETIMEDOUT;
}
static ssize_t at24_write(struct at24_data *at24, const char *buf, loff_t off,
size_t count)
{
ssize_t retval = 0;
if (unlikely(!count))
return count;
/*
* Write data to chip, protecting against concurrent updates
* from this host, but not from other I2C masters.
*/
mutex_lock(&at24->lock);
while (count) {
ssize_t status;
status = at24_eeprom_write(at24, buf, off, count);
if (status <= 0) {
if (retval == 0)
retval = status;
break;
}
buf += status;
off += status;
count -= status;
retval += status;
}
mutex_unlock(&at24->lock);
return retval;
}
static ssize_t at24_bin_write(struct file *filp, struct kobject *kobj,
struct bin_attribute *attr,
char *buf, loff_t off, size_t count)
{
struct at24_data *at24;
if (unlikely(off >= attr->size))
return -EFBIG;
at24 = dev_get_drvdata(container_of(kobj, struct device, kobj));
return at24_write(at24, buf, off, count);
}
/*-------------------------------------------------------------------------*/
/*
* This lets other kernel code access the eeprom data. For example, it
* might hold a board's Ethernet address, or board-specific calibration
* data generated on the manufacturing floor.
*/
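/*
 * Illustration only (hypothetical board code): the setup() hook in
 * at24_platform_data hands this accessor back to the platform, which can
 * then pull, say, a MAC address out of the EEPROM at probe time:
 *
 *	static void board_eeprom_setup(struct memory_accessor *macc, void *ctx)
 *	{
 *		char mac[6];
 *
 *		if (macc->read(macc, mac, 0x00, sizeof(mac)) == sizeof(mac))
 *			board_store_mac(mac);
 *	}
 *
 * where board_store_mac() is a hypothetical platform helper.
 */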
static ssize_t at24_macc_read(struct memory_accessor *macc, char *buf,
off_t offset, size_t count)
{
struct at24_data *at24 = container_of(macc, struct at24_data, macc);
return at24_read(at24, buf, offset, count);
}
static ssize_t at24_macc_write(struct memory_accessor *macc, const char *buf,
off_t offset, size_t count)
{
struct at24_data *at24 = container_of(macc, struct at24_data, macc);
return at24_write(at24, buf, offset, count);
}
/*-------------------------------------------------------------------------*/
#ifdef CONFIG_OF
static void at24_get_ofdata(struct i2c_client *client,
struct at24_platform_data *chip)
{
const __be32 *val;
struct device_node *node = client->dev.of_node;
if (node) {
if (of_get_property(node, "read-only", NULL))
chip->flags |= AT24_FLAG_READONLY;
val = of_get_property(node, "pagesize", NULL);
if (val)
chip->page_size = be32_to_cpup(val);
}
}
#else
static void at24_get_ofdata(struct i2c_client *client,
struct at24_platform_data *chip)
{ }
#endif /* CONFIG_OF */
static int at24_probe(struct i2c_client *client, const struct i2c_device_id *id)
{
struct at24_platform_data chip;
bool writable;
int use_smbus = 0;
struct at24_data *at24;
int err;
unsigned i, num_addresses;
kernel_ulong_t magic;
if (client->dev.platform_data) {
chip = *(struct at24_platform_data *)client->dev.platform_data;
} else {
if (!id->driver_data)
return -ENODEV;
magic = id->driver_data;
chip.byte_len = BIT(magic & AT24_BITMASK(AT24_SIZE_BYTELEN));
magic >>= AT24_SIZE_BYTELEN;
chip.flags = magic & AT24_BITMASK(AT24_SIZE_FLAGS);
/*
* This is slow, but we can't know all eeproms, so we better
* play safe. Specifying custom eeprom-types via platform_data
* is recommended anyhow.
*/
chip.page_size = 1;
/* update chipdata if OF is present */
at24_get_ofdata(client, &chip);
chip.setup = NULL;
chip.context = NULL;
}
if (!is_power_of_2(chip.byte_len))
dev_warn(&client->dev,
"byte_len looks suspicious (no power of 2)!\n");
if (!chip.page_size) {
dev_err(&client->dev, "page_size must not be 0!\n");
return -EINVAL;
}
if (!is_power_of_2(chip.page_size))
dev_warn(&client->dev,
"page_size looks suspicious (no power of 2)!\n");
/* Use I2C operations unless we're stuck with SMBus extensions. */
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
if (chip.flags & AT24_FLAG_ADDR16)
return -EPFNOSUPPORT;
if (i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_READ_I2C_BLOCK)) {
use_smbus = I2C_SMBUS_I2C_BLOCK_DATA;
} else if (i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_READ_WORD_DATA)) {
use_smbus = I2C_SMBUS_WORD_DATA;
} else if (i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_READ_BYTE_DATA)) {
use_smbus = I2C_SMBUS_BYTE_DATA;
} else {
return -EPFNOSUPPORT;
}
}
if (chip.flags & AT24_FLAG_TAKE8ADDR)
num_addresses = 8;
else
num_addresses = DIV_ROUND_UP(chip.byte_len,
(chip.flags & AT24_FLAG_ADDR16) ? 65536 : 256);
at24 = devm_kzalloc(&client->dev, sizeof(struct at24_data) +
num_addresses * sizeof(struct i2c_client *), GFP_KERNEL);
if (!at24)
return -ENOMEM;
mutex_init(&at24->lock);
at24->use_smbus = use_smbus;
at24->chip = chip;
at24->num_addresses = num_addresses;
/*
* Export the EEPROM bytes through sysfs, since that's convenient.
* By default, only root should see the data (maybe passwords etc)
*/
sysfs_bin_attr_init(&at24->bin);
at24->bin.attr.name = "eeprom";
at24->bin.attr.mode = chip.flags & AT24_FLAG_IRUGO ? S_IRUGO : S_IRUSR;
at24->bin.read = at24_bin_read;
at24->bin.size = chip.byte_len;
at24->macc.read = at24_macc_read;
writable = !(chip.flags & AT24_FLAG_READONLY);
if (writable) {
if (!use_smbus || i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_WRITE_I2C_BLOCK)) {
unsigned write_max = chip.page_size;
at24->macc.write = at24_macc_write;
at24->bin.write = at24_bin_write;
at24->bin.attr.mode |= S_IWUSR;
if (write_max > io_limit)
write_max = io_limit;
if (use_smbus && write_max > I2C_SMBUS_BLOCK_MAX)
write_max = I2C_SMBUS_BLOCK_MAX;
at24->write_max = write_max;
/* buffer (data + address at the beginning) */
at24->writebuf = devm_kzalloc(&client->dev,
write_max + 2, GFP_KERNEL);
if (!at24->writebuf)
return -ENOMEM;
} else {
dev_warn(&client->dev,
"cannot write due to controller restrictions.");
}
}
at24->client[0] = client;
/* use dummy devices for multiple-address chips */
for (i = 1; i < num_addresses; i++) {
at24->client[i] = i2c_new_dummy(client->adapter,
client->addr + i);
if (!at24->client[i]) {
dev_err(&client->dev, "address 0x%02x unavailable\n",
client->addr + i);
err = -EADDRINUSE;
goto err_clients;
}
}
err = sysfs_create_bin_file(&client->dev.kobj, &at24->bin);
if (err)
goto err_clients;
i2c_set_clientdata(client, at24);
dev_info(&client->dev, "%zu byte %s EEPROM, %s, %u bytes/write\n",
at24->bin.size, client->name,
writable ? "writable" : "read-only", at24->write_max);
if (use_smbus == I2C_SMBUS_WORD_DATA ||
use_smbus == I2C_SMBUS_BYTE_DATA) {
dev_notice(&client->dev, "Falling back to %s reads, "
"performance will suffer\n", use_smbus ==
I2C_SMBUS_WORD_DATA ? "word" : "byte");
}
/* export data to kernel code */
if (chip.setup)
chip.setup(&at24->macc, chip.context);
return 0;
err_clients:
for (i = 1; i < num_addresses; i++)
if (at24->client[i])
i2c_unregister_device(at24->client[i]);
return err;
}
static int at24_remove(struct i2c_client *client)
{
struct at24_data *at24;
int i;
at24 = i2c_get_clientdata(client);
sysfs_remove_bin_file(&client->dev.kobj, &at24->bin);
for (i = 1; i < at24->num_addresses; i++)
i2c_unregister_device(at24->client[i]);
return 0;
}
/*-------------------------------------------------------------------------*/
static struct i2c_driver at24_driver = {
.driver = {
.name = "at24",
.owner = THIS_MODULE,
},
.probe = at24_probe,
.remove = at24_remove,
.id_table = at24_ids,
};
static int __init at24_init(void)
{
if (!io_limit) {
pr_err("at24: io_limit must not be 0!\n");
return -EINVAL;
}
io_limit = rounddown_pow_of_two(io_limit);
return i2c_add_driver(&at24_driver);
}
module_init(at24_init);
static void __exit at24_exit(void)
{
i2c_del_driver(&at24_driver);
}
module_exit(at24_exit);
MODULE_DESCRIPTION("Driver for most I2C EEPROMs");
MODULE_AUTHOR("David Brownell and Wolfram Sang");
MODULE_LICENSE("GPL");

485
drivers/misc/eeprom/at25.c Normal file
View file

@ -0,0 +1,485 @@
/*
* at25.c -- support most SPI EEPROMs, such as Atmel AT25 models
*
* Copyright (C) 2006 David Brownell
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/sched.h>
#include <linux/spi/spi.h>
#include <linux/spi/eeprom.h>
#include <linux/of.h>
/*
* NOTE: this is an *EEPROM* driver. The vagaries of product naming
* mean that some AT25 products are EEPROMs, and others are FLASH.
* Handle FLASH chips with the drivers/mtd/devices/m25p80.c driver,
* not this one!
*/
struct at25_data {
struct spi_device *spi;
struct memory_accessor mem;
struct mutex lock;
struct spi_eeprom chip;
struct bin_attribute bin;
unsigned addrlen;
};
#define AT25_WREN 0x06 /* latch the write enable */
#define AT25_WRDI 0x04 /* reset the write enable */
#define AT25_RDSR 0x05 /* read status register */
#define AT25_WRSR 0x01 /* write status register */
#define AT25_READ 0x03 /* read byte(s) */
#define AT25_WRITE 0x02 /* write byte(s)/sector */
#define AT25_SR_nRDY 0x01 /* nRDY = write-in-progress */
#define AT25_SR_WEN 0x02 /* write enable (latched) */
#define AT25_SR_BP0 0x04 /* BP for software writeprotect */
#define AT25_SR_BP1 0x08
#define AT25_SR_WPEN 0x80 /* writeprotect enable */
#define AT25_INSTR_BIT3 0x08 /* Additional address bit in instr */
#define EE_MAXADDRLEN 3 /* 24 bit addresses, up to 2 MBytes */
/* Specs often allow 5 msec for a page write, sometimes 20 msec;
* it's important to recover from write timeouts.
*/
#define EE_TIMEOUT 25
/*-------------------------------------------------------------------------*/
#define io_limit PAGE_SIZE /* bytes */
static ssize_t
at25_ee_read(
struct at25_data *at25,
char *buf,
unsigned offset,
size_t count
)
{
u8 command[EE_MAXADDRLEN + 1];
u8 *cp;
ssize_t status;
struct spi_transfer t[2];
struct spi_message m;
u8 instr;
if (unlikely(offset >= at25->bin.size))
return 0;
if ((offset + count) > at25->bin.size)
count = at25->bin.size - offset;
if (unlikely(!count))
return count;
cp = command;
instr = AT25_READ;
if (at25->chip.flags & EE_INSTR_BIT3_IS_ADDR)
if (offset >= (1U << (at25->addrlen * 8)))
instr |= AT25_INSTR_BIT3;
*cp++ = instr;
/* 8/16/24-bit address is written MSB first */
switch (at25->addrlen) {
default: /* case 3 */
*cp++ = offset >> 16;
case 2:
*cp++ = offset >> 8;
case 1:
case 0: /* can't happen: for better codegen */
*cp++ = offset >> 0;
}
spi_message_init(&m);
memset(t, 0, sizeof t);
t[0].tx_buf = command;
t[0].len = at25->addrlen + 1;
spi_message_add_tail(&t[0], &m);
t[1].rx_buf = buf;
t[1].len = count;
spi_message_add_tail(&t[1], &m);
mutex_lock(&at25->lock);
/* Read it all at once.
*
* REVISIT that's potentially a problem with large chips, if
* other devices on the bus need to be accessed regularly or
* this chip is clocked very slowly
*/
status = spi_sync(at25->spi, &m);
dev_dbg(&at25->spi->dev,
"read %Zd bytes at %d --> %d\n",
count, offset, (int) status);
mutex_unlock(&at25->lock);
return status ? status : count;
}
static ssize_t
at25_bin_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
struct device *dev;
struct at25_data *at25;
dev = container_of(kobj, struct device, kobj);
at25 = dev_get_drvdata(dev);
return at25_ee_read(at25, buf, off, count);
}
static ssize_t
at25_ee_write(struct at25_data *at25, const char *buf, loff_t off,
size_t count)
{
ssize_t status = 0;
unsigned written = 0;
unsigned buf_size;
u8 *bounce;
if (unlikely(off >= at25->bin.size))
return -EFBIG;
if ((off + count) > at25->bin.size)
count = at25->bin.size - off;
if (unlikely(!count))
return count;
/* Temp buffer starts with command and address */
buf_size = at25->chip.page_size;
if (buf_size > io_limit)
buf_size = io_limit;
bounce = kmalloc(buf_size + at25->addrlen + 1, GFP_KERNEL);
if (!bounce)
return -ENOMEM;
/* For write, rollover is within the page ... so we write at
* most one page, then manually roll over to the next page.
*/
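	/*
	 * Worked example (hypothetical): with a 32-byte page, off == 30 and
	 * count == 10, the first pass writes 2 bytes (offsets 30..31), then
	 * the next pass starts a fresh page at offset 32 and writes the
	 * remaining 8 bytes.
	 */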
mutex_lock(&at25->lock);
do {
unsigned long timeout, retries;
unsigned segment;
unsigned offset = (unsigned) off;
u8 *cp = bounce;
int sr;
u8 instr;
*cp = AT25_WREN;
status = spi_write(at25->spi, cp, 1);
if (status < 0) {
dev_dbg(&at25->spi->dev, "WREN --> %d\n",
(int) status);
break;
}
instr = AT25_WRITE;
if (at25->chip.flags & EE_INSTR_BIT3_IS_ADDR)
if (offset >= (1U << (at25->addrlen * 8)))
instr |= AT25_INSTR_BIT3;
*cp++ = instr;
/* 8/16/24-bit address is written MSB first */
switch (at25->addrlen) {
default: /* case 3 */
*cp++ = offset >> 16;
case 2:
*cp++ = offset >> 8;
case 1:
case 0: /* can't happen: for better codegen */
*cp++ = offset >> 0;
}
/* Write as much of a page as we can */
segment = buf_size - (offset % buf_size);
if (segment > count)
segment = count;
memcpy(cp, buf, segment);
status = spi_write(at25->spi, bounce,
segment + at25->addrlen + 1);
dev_dbg(&at25->spi->dev,
"write %u bytes at %u --> %d\n",
segment, offset, (int) status);
if (status < 0)
break;
/* REVISIT this should detect (or prevent) failed writes
* to readonly sections of the EEPROM...
*/
/* Wait for non-busy status */
timeout = jiffies + msecs_to_jiffies(EE_TIMEOUT);
retries = 0;
do {
sr = spi_w8r8(at25->spi, AT25_RDSR);
if (sr < 0 || (sr & AT25_SR_nRDY)) {
dev_dbg(&at25->spi->dev,
"rdsr --> %d (%02x)\n", sr, sr);
/* at HZ=100, this is sloooow */
msleep(1);
continue;
}
if (!(sr & AT25_SR_nRDY))
break;
} while (retries++ < 3 || time_before_eq(jiffies, timeout));
if ((sr < 0) || (sr & AT25_SR_nRDY)) {
dev_err(&at25->spi->dev,
"write %d bytes offset %d, "
"timeout after %u msecs\n",
segment, offset,
jiffies_to_msecs(jiffies -
(timeout - EE_TIMEOUT)));
status = -ETIMEDOUT;
break;
}
off += segment;
buf += segment;
count -= segment;
written += segment;
} while (count > 0);
mutex_unlock(&at25->lock);
kfree(bounce);
return written ? written : status;
}
static ssize_t
at25_bin_write(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
struct device *dev;
struct at25_data *at25;
dev = container_of(kobj, struct device, kobj);
at25 = dev_get_drvdata(dev);
return at25_ee_write(at25, buf, off, count);
}
/*-------------------------------------------------------------------------*/
/* Let in-kernel code access the eeprom data. */
static ssize_t at25_mem_read(struct memory_accessor *mem, char *buf,
off_t offset, size_t count)
{
struct at25_data *at25 = container_of(mem, struct at25_data, mem);
return at25_ee_read(at25, buf, offset, count);
}
static ssize_t at25_mem_write(struct memory_accessor *mem, const char *buf,
off_t offset, size_t count)
{
struct at25_data *at25 = container_of(mem, struct at25_data, mem);
return at25_ee_write(at25, buf, offset, count);
}
/*-------------------------------------------------------------------------*/
static int at25_np_to_chip(struct device *dev,
struct device_node *np,
struct spi_eeprom *chip)
{
u32 val;
memset(chip, 0, sizeof(*chip));
strncpy(chip->name, np->name, sizeof(chip->name));
if (of_property_read_u32(np, "size", &val) == 0 ||
of_property_read_u32(np, "at25,byte-len", &val) == 0) {
chip->byte_len = val;
} else {
dev_err(dev, "Error: missing \"size\" property\n");
return -ENODEV;
}
if (of_property_read_u32(np, "pagesize", &val) == 0 ||
of_property_read_u32(np, "at25,page-size", &val) == 0) {
chip->page_size = (u16)val;
} else {
dev_err(dev, "Error: missing \"pagesize\" property\n");
return -ENODEV;
}
if (of_property_read_u32(np, "at25,addr-mode", &val) == 0) {
chip->flags = (u16)val;
} else {
if (of_property_read_u32(np, "address-width", &val)) {
dev_err(dev,
"Error: missing \"address-width\" property\n");
return -ENODEV;
}
switch (val) {
case 8:
chip->flags |= EE_ADDR1;
break;
case 16:
chip->flags |= EE_ADDR2;
break;
case 24:
chip->flags |= EE_ADDR3;
break;
default:
dev_err(dev,
"Error: bad \"address-width\" property: %u\n",
val);
return -ENODEV;
}
if (of_find_property(np, "read-only", NULL))
chip->flags |= EE_READONLY;
}
return 0;
}
static int at25_probe(struct spi_device *spi)
{
struct at25_data *at25 = NULL;
struct spi_eeprom chip;
struct device_node *np = spi->dev.of_node;
int err;
int sr;
int addrlen;
/* Chip description */
if (!spi->dev.platform_data) {
if (np) {
err = at25_np_to_chip(&spi->dev, np, &chip);
if (err)
return err;
} else {
dev_err(&spi->dev, "Error: no chip description\n");
return -ENODEV;
}
} else
chip = *(struct spi_eeprom *)spi->dev.platform_data;
/* For now we only support 8/16/24 bit addressing */
if (chip.flags & EE_ADDR1)
addrlen = 1;
else if (chip.flags & EE_ADDR2)
addrlen = 2;
else if (chip.flags & EE_ADDR3)
addrlen = 3;
else {
dev_dbg(&spi->dev, "unsupported address type\n");
return -EINVAL;
}
/* Ping the chip ... the status register is pretty portable,
* unlike probing manufacturer IDs. We do expect that system
* firmware didn't write it in the past few milliseconds!
*/
sr = spi_w8r8(spi, AT25_RDSR);
if (sr < 0 || sr & AT25_SR_nRDY) {
dev_dbg(&spi->dev, "rdsr --> %d (%02x)\n", sr, sr);
return -ENXIO;
}
at25 = devm_kzalloc(&spi->dev, sizeof(struct at25_data), GFP_KERNEL);
if (!at25)
return -ENOMEM;
mutex_init(&at25->lock);
at25->chip = chip;
at25->spi = spi_dev_get(spi);
spi_set_drvdata(spi, at25);
at25->addrlen = addrlen;
/* Export the EEPROM bytes through sysfs, since that's convenient.
* And maybe to other kernel code; it might hold a board's Ethernet
* address, or board-specific calibration data generated on the
* manufacturing floor.
*
* Default to root-only access to the data; EEPROMs often hold data
* that's sensitive for read and/or write, like ethernet addresses,
* security codes, board-specific manufacturing calibrations, etc.
*/
sysfs_bin_attr_init(&at25->bin);
at25->bin.attr.name = "eeprom";
at25->bin.attr.mode = S_IRUSR;
at25->bin.read = at25_bin_read;
at25->mem.read = at25_mem_read;
at25->bin.size = at25->chip.byte_len;
if (!(chip.flags & EE_READONLY)) {
at25->bin.write = at25_bin_write;
at25->bin.attr.mode |= S_IWUSR;
at25->mem.write = at25_mem_write;
}
err = sysfs_create_bin_file(&spi->dev.kobj, &at25->bin);
if (err)
return err;
if (chip.setup)
chip.setup(&at25->mem, chip.context);
dev_info(&spi->dev, "%Zd %s %s eeprom%s, pagesize %u\n",
(at25->bin.size < 1024)
? at25->bin.size
: (at25->bin.size / 1024),
(at25->bin.size < 1024) ? "Byte" : "KByte",
at25->chip.name,
(chip.flags & EE_READONLY) ? " (readonly)" : "",
at25->chip.page_size);
return 0;
}
static int at25_remove(struct spi_device *spi)
{
struct at25_data *at25;
at25 = spi_get_drvdata(spi);
sysfs_remove_bin_file(&spi->dev.kobj, &at25->bin);
return 0;
}
/*-------------------------------------------------------------------------*/
static const struct of_device_id at25_of_match[] = {
{ .compatible = "atmel,at25", },
{ }
};
MODULE_DEVICE_TABLE(of, at25_of_match);
static struct spi_driver at25_driver = {
.driver = {
.name = "at25",
.owner = THIS_MODULE,
.of_match_table = at25_of_match,
},
.probe = at25_probe,
.remove = at25_remove,
};
module_spi_driver(at25_driver);
MODULE_DESCRIPTION("Driver for most SPI EEPROMs");
MODULE_AUTHOR("David Brownell");
MODULE_LICENSE("GPL");
MODULE_ALIAS("spi:at25");
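A minimal registration sketch (hypothetical board code; the AT25640-style
geometry below is illustrative and not taken from this driver):

	#include <linux/spi/spi.h>
	#include <linux/spi/eeprom.h>

	static struct spi_eeprom board_eeprom = {
		.name		= "at25640",
		.byte_len	= 8192,		/* 64 Kbit */
		.page_size	= 32,
		.flags		= EE_ADDR2,	/* 16-bit addressing */
	};

	static struct spi_board_info board_spi[] __initdata = {
		{
			.modalias	= "at25",
			.platform_data	= &board_eeprom,
			.max_speed_hz	= 1000000,
			.bus_num	= 0,
			.chip_select	= 0,
			.mode		= SPI_MODE_0,
		},
	};

	static int __init board_register_eeprom(void)
	{
		return spi_register_board_info(board_spi,
					       ARRAY_SIZE(board_spi));
	}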

85
drivers/misc/eeprom/digsy_mtc_eeprom.c Normal file
View file

@ -0,0 +1,85 @@
/*
* EEPROMs access control driver for display configuration EEPROMs
* on DigsyMTC board.
*
* (C) 2011 DENX Software Engineering, Anatolij Gustschin <agust@denx.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/gpio.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>
#include <linux/spi/spi_gpio.h>
#include <linux/eeprom_93xx46.h>
#define GPIO_EEPROM_CLK 216
#define GPIO_EEPROM_CS 210
#define GPIO_EEPROM_DI 217
#define GPIO_EEPROM_DO 249
#define GPIO_EEPROM_OE 255
#define EE_SPI_BUS_NUM 1
static void digsy_mtc_op_prepare(void *p)
{
/* enable */
gpio_set_value(GPIO_EEPROM_OE, 0);
}
static void digsy_mtc_op_finish(void *p)
{
/* disable */
gpio_set_value(GPIO_EEPROM_OE, 1);
}
struct eeprom_93xx46_platform_data digsy_mtc_eeprom_data = {
.flags = EE_ADDR8,
.prepare = digsy_mtc_op_prepare,
.finish = digsy_mtc_op_finish,
};
static struct spi_gpio_platform_data eeprom_spi_gpio_data = {
.sck = GPIO_EEPROM_CLK,
.mosi = GPIO_EEPROM_DI,
.miso = GPIO_EEPROM_DO,
.num_chipselect = 1,
};
static struct platform_device digsy_mtc_eeprom = {
.name = "spi_gpio",
.id = EE_SPI_BUS_NUM,
.dev = {
.platform_data = &eeprom_spi_gpio_data,
},
};
static struct spi_board_info digsy_mtc_eeprom_info[] __initdata = {
{
.modalias = "93xx46",
.max_speed_hz = 1000000,
.bus_num = EE_SPI_BUS_NUM,
.chip_select = 0,
.mode = SPI_MODE_0,
.controller_data = (void *)GPIO_EEPROM_CS,
.platform_data = &digsy_mtc_eeprom_data,
},
};
static int __init digsy_mtc_eeprom_devices_init(void)
{
int ret;
ret = gpio_request_one(GPIO_EEPROM_OE, GPIOF_OUT_INIT_HIGH,
"93xx46 EEPROMs OE");
if (ret) {
pr_err("can't request gpio %d\n", GPIO_EEPROM_OE);
return ret;
}
spi_register_board_info(digsy_mtc_eeprom_info,
ARRAY_SIZE(digsy_mtc_eeprom_info));
return platform_device_register(&digsy_mtc_eeprom);
}
device_initcall(digsy_mtc_eeprom_devices_init);

226
drivers/misc/eeprom/eeprom.c Normal file
View file

@ -0,0 +1,226 @@
/*
* Copyright (C) 1998, 1999 Frodo Looijaard <frodol@dds.nl> and
* Philip Edelbrock <phil@netroedge.com>
* Copyright (C) 2003 Greg Kroah-Hartman <greg@kroah.com>
* Copyright (C) 2003 IBM Corp.
* Copyright (C) 2004 Jean Delvare <jdelvare@suse.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/jiffies.h>
#include <linux/i2c.h>
#include <linux/mutex.h>
/* Addresses to scan */
static const unsigned short normal_i2c[] = { 0x50, 0x51, 0x52, 0x53, 0x54,
0x55, 0x56, 0x57, I2C_CLIENT_END };
/* Size of EEPROM in bytes */
#define EEPROM_SIZE 256
/* possible types of eeprom devices */
enum eeprom_nature {
UNKNOWN,
VAIO,
};
/* Each client has this additional data */
struct eeprom_data {
struct mutex update_lock;
u8 valid; /* bitfield, bit!=0 if slice is valid */
unsigned long last_updated[8]; /* In jiffies, 8 slices */
u8 data[EEPROM_SIZE]; /* Register values */
enum eeprom_nature nature;
};
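/*
 * The 256-byte EEPROM is cached in eight 32-byte slices; a slice is only
 * re-read from the chip when it has never been read or its cached copy is
 * more than 300 seconds old (see eeprom_update_client() below).
 */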
static void eeprom_update_client(struct i2c_client *client, u8 slice)
{
struct eeprom_data *data = i2c_get_clientdata(client);
int i;
mutex_lock(&data->update_lock);
if (!(data->valid & (1 << slice)) ||
time_after(jiffies, data->last_updated[slice] + 300 * HZ)) {
dev_dbg(&client->dev, "Starting eeprom update, slice %u\n", slice);
if (i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_READ_I2C_BLOCK)) {
for (i = slice << 5; i < (slice + 1) << 5; i += 32)
if (i2c_smbus_read_i2c_block_data(client, i,
32, data->data + i)
!= 32)
goto exit;
} else {
for (i = slice << 5; i < (slice + 1) << 5; i += 2) {
int word = i2c_smbus_read_word_data(client, i);
if (word < 0)
goto exit;
data->data[i] = word & 0xff;
data->data[i + 1] = word >> 8;
}
}
data->last_updated[slice] = jiffies;
data->valid |= (1 << slice);
}
exit:
mutex_unlock(&data->update_lock);
}
static ssize_t eeprom_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
struct i2c_client *client = to_i2c_client(container_of(kobj, struct device, kobj));
struct eeprom_data *data = i2c_get_clientdata(client);
u8 slice;
if (off > EEPROM_SIZE)
return 0;
if (off + count > EEPROM_SIZE)
count = EEPROM_SIZE - off;
/* Only refresh slices which contain requested bytes */
for (slice = off >> 5; slice <= (off + count - 1) >> 5; slice++)
eeprom_update_client(client, slice);
/* Hide Vaio private settings to regular users:
- BIOS passwords: bytes 0x00 to 0x0f
- UUID: bytes 0x10 to 0x1f
- Serial number: 0xc0 to 0xdf */
if (data->nature == VAIO && !capable(CAP_SYS_ADMIN)) {
int i;
for (i = 0; i < count; i++) {
if ((off + i <= 0x1f) ||
(off + i >= 0xc0 && off + i <= 0xdf))
buf[i] = 0;
else
buf[i] = data->data[off + i];
}
} else {
memcpy(buf, &data->data[off], count);
}
return count;
}
static struct bin_attribute eeprom_attr = {
.attr = {
.name = "eeprom",
.mode = S_IRUGO,
},
.size = EEPROM_SIZE,
.read = eeprom_read,
};
/* Return 0 if detection is successful, -ENODEV otherwise */
static int eeprom_detect(struct i2c_client *client, struct i2c_board_info *info)
{
struct i2c_adapter *adapter = client->adapter;
/* EDID EEPROMs are often 24C00 EEPROMs, which answer to all
addresses 0x50-0x57, but we only care about 0x50. So decline
attaching to addresses >= 0x51 on DDC buses */
if (!(adapter->class & I2C_CLASS_SPD) && client->addr >= 0x51)
return -ENODEV;
/* There are four ways we can read the EEPROM data:
(1) I2C block reads (faster, but unsupported by most adapters)
(2) Word reads (128% overhead)
(3) Consecutive byte reads (88% overhead, unsafe)
(4) Regular byte data reads (265% overhead)
The third and fourth methods are not implemented by this driver
because all known adapters support one of the first two. */
if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_READ_WORD_DATA)
&& !i2c_check_functionality(adapter, I2C_FUNC_SMBUS_READ_I2C_BLOCK))
return -ENODEV;
strlcpy(info->type, "eeprom", I2C_NAME_SIZE);
return 0;
}
static int eeprom_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct i2c_adapter *adapter = client->adapter;
struct eeprom_data *data;
data = devm_kzalloc(&client->dev, sizeof(struct eeprom_data),
GFP_KERNEL);
if (!data)
return -ENOMEM;
memset(data->data, 0xff, EEPROM_SIZE);
i2c_set_clientdata(client, data);
mutex_init(&data->update_lock);
data->nature = UNKNOWN;
/* Detect the Vaio nature of EEPROMs.
We use the "PCG-" or "VGN-" prefix as the signature. */
if (client->addr == 0x57
&& i2c_check_functionality(adapter, I2C_FUNC_SMBUS_READ_BYTE_DATA)) {
char name[4];
name[0] = i2c_smbus_read_byte_data(client, 0x80);
name[1] = i2c_smbus_read_byte_data(client, 0x81);
name[2] = i2c_smbus_read_byte_data(client, 0x82);
name[3] = i2c_smbus_read_byte_data(client, 0x83);
if (!memcmp(name, "PCG-", 4) || !memcmp(name, "VGN-", 4)) {
dev_info(&client->dev, "Vaio EEPROM detected, "
"enabling privacy protection\n");
data->nature = VAIO;
}
}
/* create the sysfs eeprom file */
return sysfs_create_bin_file(&client->dev.kobj, &eeprom_attr);
}
static int eeprom_remove(struct i2c_client *client)
{
sysfs_remove_bin_file(&client->dev.kobj, &eeprom_attr);
return 0;
}
static const struct i2c_device_id eeprom_id[] = {
{ "eeprom", 0 },
{ }
};
static struct i2c_driver eeprom_driver = {
.driver = {
.name = "eeprom",
},
.probe = eeprom_probe,
.remove = eeprom_remove,
.id_table = eeprom_id,
.class = I2C_CLASS_DDC | I2C_CLASS_SPD,
.detect = eeprom_detect,
.address_list = normal_i2c,
};
module_i2c_driver(eeprom_driver);
MODULE_AUTHOR("Frodo Looijaard <frodol@dds.nl> and "
"Philip Edelbrock <phil@netroedge.com> and "
"Greg Kroah-Hartman <greg@kroah.com>");
MODULE_DESCRIPTION("I2C EEPROM driver");
MODULE_LICENSE("GPL");

321
drivers/misc/eeprom/eeprom_93cx6.c Normal file
View file

@ -0,0 +1,321 @@
/*
* Copyright (C) 2004 - 2006 rt2x00 SourceForge Project
* <http://rt2x00.serialmonkey.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Module: eeprom_93cx6
* Abstract: EEPROM reader routines for 93cx6 chipsets.
* Supported chipsets: 93c46 & 93c66.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/eeprom_93cx6.h>
MODULE_AUTHOR("http://rt2x00.serialmonkey.com");
MODULE_VERSION("1.0");
MODULE_DESCRIPTION("EEPROM 93cx6 chip driver");
MODULE_LICENSE("GPL");
static inline void eeprom_93cx6_pulse_high(struct eeprom_93cx6 *eeprom)
{
eeprom->reg_data_clock = 1;
eeprom->register_write(eeprom);
/*
* Add a short delay for the pulse to work.
* According to the specifications the "maximum minimum"
* time should be 450ns.
*/
ndelay(450);
}
static inline void eeprom_93cx6_pulse_low(struct eeprom_93cx6 *eeprom)
{
eeprom->reg_data_clock = 0;
eeprom->register_write(eeprom);
/*
* Add a short delay for the pulse to work.
* According to the specifications the "maximum minimum"
* time should be 450ns.
*/
ndelay(450);
}
static void eeprom_93cx6_startup(struct eeprom_93cx6 *eeprom)
{
/*
* Clear all flags, and enable chip select.
*/
eeprom->register_read(eeprom);
eeprom->reg_data_in = 0;
eeprom->reg_data_out = 0;
eeprom->reg_data_clock = 0;
eeprom->reg_chip_select = 1;
eeprom->drive_data = 1;
eeprom->register_write(eeprom);
/*
* kick a pulse.
*/
eeprom_93cx6_pulse_high(eeprom);
eeprom_93cx6_pulse_low(eeprom);
}
static void eeprom_93cx6_cleanup(struct eeprom_93cx6 *eeprom)
{
/*
* Clear chip_select and data_in flags.
*/
eeprom->register_read(eeprom);
eeprom->reg_data_in = 0;
eeprom->reg_chip_select = 0;
eeprom->register_write(eeprom);
/*
* kick a pulse.
*/
eeprom_93cx6_pulse_high(eeprom);
eeprom_93cx6_pulse_low(eeprom);
}
static void eeprom_93cx6_write_bits(struct eeprom_93cx6 *eeprom,
const u16 data, const u16 count)
{
unsigned int i;
eeprom->register_read(eeprom);
/*
* Clear data flags.
*/
eeprom->reg_data_in = 0;
eeprom->reg_data_out = 0;
eeprom->drive_data = 1;
/*
* Start writing all bits.
*/
for (i = count; i > 0; i--) {
/*
* Check if this bit needs to be set.
*/
eeprom->reg_data_in = !!(data & (1 << (i - 1)));
/*
* Write the bit to the eeprom register.
*/
eeprom->register_write(eeprom);
/*
* Kick a pulse.
*/
eeprom_93cx6_pulse_high(eeprom);
eeprom_93cx6_pulse_low(eeprom);
}
eeprom->reg_data_in = 0;
eeprom->register_write(eeprom);
}
static void eeprom_93cx6_read_bits(struct eeprom_93cx6 *eeprom,
u16 *data, const u16 count)
{
unsigned int i;
u16 buf = 0;
eeprom->register_read(eeprom);
/*
* Clear data flags.
*/
eeprom->reg_data_in = 0;
eeprom->reg_data_out = 0;
eeprom->drive_data = 0;
/*
* Start reading all bits.
*/
for (i = count; i > 0; i--) {
eeprom_93cx6_pulse_high(eeprom);
eeprom->register_read(eeprom);
/*
* Clear data_in flag.
*/
eeprom->reg_data_in = 0;
/*
* Read if the bit has been set.
*/
if (eeprom->reg_data_out)
buf |= (1 << (i - 1));
eeprom_93cx6_pulse_low(eeprom);
}
*data = buf;
}
/**
* eeprom_93cx6_read - Read a word from eeprom
* @eeprom: Pointer to eeprom structure
* @word: Word index of the word to read
* @data: target pointer where the information will have to be stored
*
* This function will read the eeprom data as host-endian word
* into the given data pointer.
*/
void eeprom_93cx6_read(struct eeprom_93cx6 *eeprom, const u8 word,
u16 *data)
{
u16 command;
/*
* Initialize the eeprom register
*/
eeprom_93cx6_startup(eeprom);
/*
* Select the read opcode and the word to be read.
*/
command = (PCI_EEPROM_READ_OPCODE << eeprom->width) | word;
eeprom_93cx6_write_bits(eeprom, command,
PCI_EEPROM_WIDTH_OPCODE + eeprom->width);
/*
* Read the requested 16 bits.
*/
eeprom_93cx6_read_bits(eeprom, data, 16);
/*
* Cleanup eeprom register.
*/
eeprom_93cx6_cleanup(eeprom);
}
EXPORT_SYMBOL_GPL(eeprom_93cx6_read);
/**
* eeprom_93cx6_multiread - Read multiple words from eeprom
* @eeprom: Pointer to eeprom structure
* @word: Word index from where we should start reading
* @data: target pointer where the information will have to be stored
* @words: Number of words that should be read.
*
* This function will read all requested words from the eeprom,
* this is done by calling eeprom_93cx6_read() multiple times.
* Unlike eeprom_93cx6_read(), which returns host-ordered words, this
* function returns little-endian words.
*/
void eeprom_93cx6_multiread(struct eeprom_93cx6 *eeprom, const u8 word,
__le16 *data, const u16 words)
{
unsigned int i;
u16 tmp;
for (i = 0; i < words; i++) {
tmp = 0;
eeprom_93cx6_read(eeprom, word + i, &tmp);
data[i] = cpu_to_le16(tmp);
}
}
EXPORT_SYMBOL_GPL(eeprom_93cx6_multiread);
/**
* eeprom_93cx6_wren - set the write enable state
* @eeprom: Pointer to eeprom structure
* @enable: true to enable writes, otherwise disable writes
*
* Set the EEPROM write enable state to either allow or deny
* writes depending on the @enable value.
*/
void eeprom_93cx6_wren(struct eeprom_93cx6 *eeprom, bool enable)
{
u16 command;
/* start the command */
eeprom_93cx6_startup(eeprom);
/* create command to enable/disable */
command = enable ? PCI_EEPROM_EWEN_OPCODE : PCI_EEPROM_EWDS_OPCODE;
command <<= (eeprom->width - 2);
eeprom_93cx6_write_bits(eeprom, command,
PCI_EEPROM_WIDTH_OPCODE + eeprom->width);
eeprom_93cx6_cleanup(eeprom);
}
EXPORT_SYMBOL_GPL(eeprom_93cx6_wren);
/**
* eeprom_93cx6_write - write data to the EEPROM
* @eeprom: Pointer to eeprom structure
* @addr: Address to write data to.
* @data: The data to write to address @addr.
*
* Write @data to the specified @addr in the EEPROM and wait for the
* device to finish writing.
*
* Note: since we do not expect a large number of write operations, we
* delay between parts of the operation to avoid using excessive
* amounts of CPU time busy waiting.
*/
void eeprom_93cx6_write(struct eeprom_93cx6 *eeprom, u8 addr, u16 data)
{
int timeout = 100;
u16 command;
/* start the command */
eeprom_93cx6_startup(eeprom);
command = PCI_EEPROM_WRITE_OPCODE << eeprom->width;
command |= addr;
/* send write command */
eeprom_93cx6_write_bits(eeprom, command,
PCI_EEPROM_WIDTH_OPCODE + eeprom->width);
/* send data */
eeprom_93cx6_write_bits(eeprom, data, 16);
/* get ready to check for busy */
eeprom->drive_data = 0;
eeprom->reg_chip_select = 1;
eeprom->register_write(eeprom);
/* wait at least 250 ns for DO to become the busy signal */
usleep_range(1000, 2000);
/* wait for DO to go high to signify finish */
while (true) {
eeprom->register_read(eeprom);
if (eeprom->reg_data_out)
break;
usleep_range(1000, 2000);
if (--timeout <= 0) {
printk(KERN_ERR "%s: timeout\n", __func__);
break;
}
}
eeprom_93cx6_cleanup(eeprom);
}
EXPORT_SYMBOL_GPL(eeprom_93cx6_write);
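A consumer sketch (hypothetical: the register callbacks are placeholders for
whatever bus accessors the owning driver provides, and PCI_EEPROM_WIDTH_93C46
is assumed to come from <linux/eeprom_93cx6.h>):

	#include <linux/eeprom_93cx6.h>

	static void drv_eeprom_register_read(struct eeprom_93cx6 *eeprom)
	{
		/* sample the EEPROM lines into eeprom->reg_data_out etc. */
	}

	static void drv_eeprom_register_write(struct eeprom_93cx6 *eeprom)
	{
		/* drive the EEPROM lines from the eeprom->reg_* fields */
	}

	static void drv_read_mac(__le16 *mac_words)
	{
		struct eeprom_93cx6 eeprom = {
			.data		= NULL,	/* driver-private pointer, if needed */
			.register_read	= drv_eeprom_register_read,
			.register_write	= drv_eeprom_register_write,
			.width		= PCI_EEPROM_WIDTH_93C46,
		};

		/* read three little-endian words starting at word 0 */
		eeprom_93cx6_multiread(&eeprom, 0x00, mac_words, 3);
	}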

398
drivers/misc/eeprom/eeprom_93xx46.c Normal file
View file

@ -0,0 +1,398 @@
/*
* Driver for 93xx46 EEPROMs
*
* (C) 2011 DENX Software Engineering, Anatolij Gustschin <agust@denx.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/spi/spi.h>
#include <linux/sysfs.h>
#include <linux/eeprom_93xx46.h>
#define OP_START 0x4
#define OP_WRITE (OP_START | 0x1)
#define OP_READ (OP_START | 0x2)
#define ADDR_EWDS 0x00
#define ADDR_ERAL 0x20
#define ADDR_EWEN 0x30
struct eeprom_93xx46_dev {
struct spi_device *spi;
struct eeprom_93xx46_platform_data *pdata;
struct bin_attribute bin;
struct mutex lock;
int addrlen;
};
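/*
 * addrlen is 7 for chips wired for x8 organisation (128 one-byte cells)
 * and 6 for x16 organisation (64 two-byte cells); it sets how many
 * address bits follow the start bit and opcode in each command word.
 */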
static ssize_t
eeprom_93xx46_bin_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
struct eeprom_93xx46_dev *edev;
struct device *dev;
struct spi_message m;
struct spi_transfer t[2];
int bits, ret;
u16 cmd_addr;
dev = container_of(kobj, struct device, kobj);
edev = dev_get_drvdata(dev);
if (unlikely(off >= edev->bin.size))
return 0;
if ((off + count) > edev->bin.size)
count = edev->bin.size - off;
if (unlikely(!count))
return count;
cmd_addr = OP_READ << edev->addrlen;
if (edev->addrlen == 7) {
cmd_addr |= off & 0x7f;
bits = 10;
} else {
cmd_addr |= off & 0x3f;
bits = 9;
}
dev_dbg(&edev->spi->dev, "read cmd 0x%x, %d Hz\n",
cmd_addr, edev->spi->max_speed_hz);
spi_message_init(&m);
memset(t, 0, sizeof(t));
t[0].tx_buf = (char *)&cmd_addr;
t[0].len = 2;
t[0].bits_per_word = bits;
spi_message_add_tail(&t[0], &m);
t[1].rx_buf = buf;
t[1].len = count;
t[1].bits_per_word = 8;
spi_message_add_tail(&t[1], &m);
mutex_lock(&edev->lock);
if (edev->pdata->prepare)
edev->pdata->prepare(edev);
ret = spi_sync(edev->spi, &m);
/* have to wait at least Tcsl ns */
ndelay(250);
if (ret) {
dev_err(&edev->spi->dev, "read %zu bytes at %d: err. %d\n",
count, (int)off, ret);
}
if (edev->pdata->finish)
edev->pdata->finish(edev);
mutex_unlock(&edev->lock);
return ret ? : count;
}
static int eeprom_93xx46_ew(struct eeprom_93xx46_dev *edev, int is_on)
{
struct spi_message m;
struct spi_transfer t;
int bits, ret;
u16 cmd_addr;
cmd_addr = OP_START << edev->addrlen;
if (edev->addrlen == 7) {
cmd_addr |= (is_on ? ADDR_EWEN : ADDR_EWDS) << 1;
bits = 10;
} else {
cmd_addr |= (is_on ? ADDR_EWEN : ADDR_EWDS);
bits = 9;
}
dev_dbg(&edev->spi->dev, "ew cmd 0x%04x\n", cmd_addr);
spi_message_init(&m);
memset(&t, 0, sizeof(t));
t.tx_buf = &cmd_addr;
t.len = 2;
t.bits_per_word = bits;
spi_message_add_tail(&t, &m);
mutex_lock(&edev->lock);
if (edev->pdata->prepare)
edev->pdata->prepare(edev);
ret = spi_sync(edev->spi, &m);
/* have to wait at least Tcsl ns */
ndelay(250);
if (ret)
dev_err(&edev->spi->dev, "erase/write %sable error %d\n",
is_on ? "en" : "dis", ret);
if (edev->pdata->finish)
edev->pdata->finish(edev);
mutex_unlock(&edev->lock);
return ret;
}
static ssize_t
eeprom_93xx46_write_word(struct eeprom_93xx46_dev *edev,
const char *buf, unsigned off)
{
struct spi_message m;
struct spi_transfer t[2];
int bits, data_len, ret;
u16 cmd_addr;
cmd_addr = OP_WRITE << edev->addrlen;
if (edev->addrlen == 7) {
cmd_addr |= off & 0x7f;
bits = 10;
data_len = 1;
} else {
cmd_addr |= off & 0x3f;
bits = 9;
data_len = 2;
}
dev_dbg(&edev->spi->dev, "write cmd 0x%x\n", cmd_addr);
spi_message_init(&m);
memset(t, 0, sizeof(t));
t[0].tx_buf = (char *)&cmd_addr;
t[0].len = 2;
t[0].bits_per_word = bits;
spi_message_add_tail(&t[0], &m);
t[1].tx_buf = buf;
t[1].len = data_len;
t[1].bits_per_word = 8;
spi_message_add_tail(&t[1], &m);
ret = spi_sync(edev->spi, &m);
/* have to wait program cycle time Twc ms */
mdelay(6);
return ret;
}
static ssize_t
eeprom_93xx46_bin_write(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
struct eeprom_93xx46_dev *edev;
struct device *dev;
int i, ret, step = 1;
dev = container_of(kobj, struct device, kobj);
edev = dev_get_drvdata(dev);
if (unlikely(off >= edev->bin.size))
return -EFBIG;
if ((off + count) > edev->bin.size)
count = edev->bin.size - off;
if (unlikely(!count))
return count;
/* only write even number of bytes on 16-bit devices */
if (edev->addrlen == 6) {
step = 2;
count &= ~1;
}
/* erase/write enable */
ret = eeprom_93xx46_ew(edev, 1);
if (ret)
return ret;
mutex_lock(&edev->lock);
if (edev->pdata->prepare)
edev->pdata->prepare(edev);
for (i = 0; i < count; i += step) {
ret = eeprom_93xx46_write_word(edev, &buf[i], off + i);
if (ret) {
dev_err(&edev->spi->dev, "write failed at %d: %d\n",
(int)off + i, ret);
break;
}
}
if (edev->pdata->finish)
edev->pdata->finish(edev);
mutex_unlock(&edev->lock);
/* erase/write disable */
eeprom_93xx46_ew(edev, 0);
return ret ? : count;
}
static int eeprom_93xx46_eral(struct eeprom_93xx46_dev *edev)
{
struct eeprom_93xx46_platform_data *pd = edev->pdata;
struct spi_message m;
struct spi_transfer t;
int bits, ret;
u16 cmd_addr;
cmd_addr = OP_START << edev->addrlen;
if (edev->addrlen == 7) {
cmd_addr |= ADDR_ERAL << 1;
bits = 10;
} else {
cmd_addr |= ADDR_ERAL;
bits = 9;
}
spi_message_init(&m);
memset(&t, 0, sizeof(t));
t.tx_buf = &cmd_addr;
t.len = 2;
t.bits_per_word = bits;
spi_message_add_tail(&t, &m);
mutex_lock(&edev->lock);
if (edev->pdata->prepare)
edev->pdata->prepare(edev);
ret = spi_sync(edev->spi, &m);
if (ret)
dev_err(&edev->spi->dev, "erase error %d\n", ret);
/* have to wait erase cycle time Tec ms */
mdelay(6);
if (pd->finish)
pd->finish(edev);
mutex_unlock(&edev->lock);
return ret;
}
static ssize_t eeprom_93xx46_store_erase(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct eeprom_93xx46_dev *edev = dev_get_drvdata(dev);
int erase = 0, ret;
sscanf(buf, "%d", &erase);
if (erase) {
ret = eeprom_93xx46_ew(edev, 1);
if (ret)
return ret;
ret = eeprom_93xx46_eral(edev);
if (ret)
return ret;
ret = eeprom_93xx46_ew(edev, 0);
if (ret)
return ret;
}
return count;
}
static DEVICE_ATTR(erase, S_IWUSR, NULL, eeprom_93xx46_store_erase);
static int eeprom_93xx46_probe(struct spi_device *spi)
{
struct eeprom_93xx46_platform_data *pd;
struct eeprom_93xx46_dev *edev;
int err;
pd = spi->dev.platform_data;
if (!pd) {
dev_err(&spi->dev, "missing platform data\n");
return -ENODEV;
}
edev = kzalloc(sizeof(*edev), GFP_KERNEL);
if (!edev)
return -ENOMEM;
if (pd->flags & EE_ADDR8)
edev->addrlen = 7;
else if (pd->flags & EE_ADDR16)
edev->addrlen = 6;
else {
dev_err(&spi->dev, "unspecified address type\n");
err = -EINVAL;
goto fail;
}
mutex_init(&edev->lock);
edev->spi = spi_dev_get(spi);
edev->pdata = pd;
sysfs_bin_attr_init(&edev->bin);
edev->bin.attr.name = "eeprom";
edev->bin.attr.mode = S_IRUSR;
edev->bin.read = eeprom_93xx46_bin_read;
edev->bin.size = 128;
if (!(pd->flags & EE_READONLY)) {
edev->bin.write = eeprom_93xx46_bin_write;
edev->bin.attr.mode |= S_IWUSR;
}
err = sysfs_create_bin_file(&spi->dev.kobj, &edev->bin);
if (err)
goto fail;
dev_info(&spi->dev, "%d-bit eeprom %s\n",
(pd->flags & EE_ADDR8) ? 8 : 16,
(pd->flags & EE_READONLY) ? "(readonly)" : "");
if (!(pd->flags & EE_READONLY)) {
if (device_create_file(&spi->dev, &dev_attr_erase))
dev_err(&spi->dev, "can't create erase interface\n");
}
spi_set_drvdata(spi, edev);
return 0;
fail:
kfree(edev);
return err;
}
static int eeprom_93xx46_remove(struct spi_device *spi)
{
struct eeprom_93xx46_dev *edev = spi_get_drvdata(spi);
if (!(edev->pdata->flags & EE_READONLY))
device_remove_file(&spi->dev, &dev_attr_erase);
sysfs_remove_bin_file(&spi->dev.kobj, &edev->bin);
kfree(edev);
return 0;
}
static struct spi_driver eeprom_93xx46_driver = {
.driver = {
.name = "93xx46",
.owner = THIS_MODULE,
},
.probe = eeprom_93xx46_probe,
.remove = eeprom_93xx46_remove,
};
module_spi_driver(eeprom_93xx46_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Driver for 93xx46 EEPROMs");
MODULE_AUTHOR("Anatolij Gustschin <agust@denx.de>");
MODULE_ALIAS("spi:93xx46");

View file

@ -0,0 +1,214 @@
/*
* max6875.c - driver for MAX6874/MAX6875
*
* Copyright (C) 2005 Ben Gardner <bgardner@wabtec.com>
*
* Based on eeprom.c
*
* The MAX6875 has a bank of registers and two banks of EEPROM.
* Address ranges are defined as follows:
* * 0x0000 - 0x0046 = configuration registers
* * 0x8000 - 0x8046 = configuration EEPROM
* * 0x8100 - 0x82FF = user EEPROM
*
* This driver makes the user EEPROM available for read.
*
* The registers & config EEPROM should be accessed via i2c-dev.
*
* The MAX6875 ignores the lowest address bit, so each chip responds to
* two addresses - 0x50/0x51 and 0x52/0x53.
*
* Note that the MAX6875 uses i2c_smbus_write_byte_data() to set the read
* address, so this driver is destructive if loaded for the wrong EEPROM chip.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/i2c.h>
#include <linux/mutex.h>
/* The MAX6875 can only read/write 16 bytes at a time */
#define SLICE_SIZE 16
#define SLICE_BITS 4
/* USER EEPROM is at addresses 0x8100 - 0x82FF */
#define USER_EEPROM_BASE 0x8100
#define USER_EEPROM_SIZE 0x0200
#define USER_EEPROM_SLICES 32
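/*
 * Each slice covers SLICE_SIZE (16) bytes, so the 0x200 byte user
 * EEPROM splits into 0x200 / 16 = 32 slices; slice N starts at
 * USER_EEPROM_BASE + (N << SLICE_BITS).
 */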
/* MAX6875 commands */
#define MAX6875_CMD_BLK_READ 0x84
/* Each client has this additional data */
struct max6875_data {
struct i2c_client *fake_client;
struct mutex update_lock;
u32 valid;
u8 data[USER_EEPROM_SIZE];
unsigned long last_updated[USER_EEPROM_SLICES];
};
static void max6875_update_slice(struct i2c_client *client, int slice)
{
struct max6875_data *data = i2c_get_clientdata(client);
int i, j, addr;
u8 *buf;
if (slice >= USER_EEPROM_SLICES)
return;
mutex_lock(&data->update_lock);
buf = &data->data[slice << SLICE_BITS];
if (!(data->valid & (1 << slice)) ||
time_after(jiffies, data->last_updated[slice])) {
dev_dbg(&client->dev, "Starting update of slice %u\n", slice);
data->valid &= ~(1 << slice);
addr = USER_EEPROM_BASE + (slice << SLICE_BITS);
/* select the eeprom address */
if (i2c_smbus_write_byte_data(client, addr >> 8, addr & 0xFF)) {
dev_err(&client->dev, "address set failed\n");
goto exit_up;
}
if (i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_READ_I2C_BLOCK)) {
if (i2c_smbus_read_i2c_block_data(client,
MAX6875_CMD_BLK_READ,
SLICE_SIZE,
buf) != SLICE_SIZE) {
goto exit_up;
}
} else {
for (i = 0; i < SLICE_SIZE; i++) {
j = i2c_smbus_read_byte(client);
if (j < 0) {
goto exit_up;
}
buf[i] = j;
}
}
data->last_updated[slice] = jiffies;
data->valid |= (1 << slice);
}
exit_up:
mutex_unlock(&data->update_lock);
}
static ssize_t max6875_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
struct i2c_client *client = kobj_to_i2c_client(kobj);
struct max6875_data *data = i2c_get_clientdata(client);
int slice, max_slice;
if (off > USER_EEPROM_SIZE)
return 0;
if (off + count > USER_EEPROM_SIZE)
count = USER_EEPROM_SIZE - off;
/* refresh slices which contain requested bytes */
max_slice = (off + count - 1) >> SLICE_BITS;
for (slice = (off >> SLICE_BITS); slice <= max_slice; slice++)
max6875_update_slice(client, slice);
memcpy(buf, &data->data[off], count);
return count;
}
static struct bin_attribute user_eeprom_attr = {
.attr = {
.name = "eeprom",
.mode = S_IRUGO,
},
.size = USER_EEPROM_SIZE,
.read = max6875_read,
};
static int max6875_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct i2c_adapter *adapter = client->adapter;
struct max6875_data *data;
int err;
if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_WRITE_BYTE_DATA
| I2C_FUNC_SMBUS_READ_BYTE))
return -ENODEV;
/* Only bind to even addresses */
if (client->addr & 1)
return -ENODEV;
if (!(data = kzalloc(sizeof(struct max6875_data), GFP_KERNEL)))
return -ENOMEM;
/* A fake client is created on the odd address */
data->fake_client = i2c_new_dummy(client->adapter, client->addr + 1);
if (!data->fake_client) {
err = -ENOMEM;
goto exit_kfree;
}
/* Init real i2c_client */
i2c_set_clientdata(client, data);
mutex_init(&data->update_lock);
err = sysfs_create_bin_file(&client->dev.kobj, &user_eeprom_attr);
if (err)
goto exit_remove_fake;
return 0;
exit_remove_fake:
i2c_unregister_device(data->fake_client);
exit_kfree:
kfree(data);
return err;
}
static int max6875_remove(struct i2c_client *client)
{
struct max6875_data *data = i2c_get_clientdata(client);
i2c_unregister_device(data->fake_client);
sysfs_remove_bin_file(&client->dev.kobj, &user_eeprom_attr);
kfree(data);
return 0;
}
static const struct i2c_device_id max6875_id[] = {
{ "max6875", 0 },
{ }
};
static struct i2c_driver max6875_driver = {
.driver = {
.name = "max6875",
},
.probe = max6875_probe,
.remove = max6875_remove,
.id_table = max6875_id,
};
module_i2c_driver(max6875_driver);
MODULE_AUTHOR("Ben Gardner <bgardner@wabtec.com>");
MODULE_DESCRIPTION("MAX6875 driver");
MODULE_LICENSE("GPL");

View file

@ -0,0 +1,157 @@
/*
* Copyright (c) 2013 Oliver Schinagl <oliver@schinagl.nl>
* http://www.linux-sunxi.org
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* This driver exposes the Allwinner security ID, efuses exported in byte-
* sized chunks.
*/
#include <linux/compiler.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/sysfs.h>
#include <linux/types.h>
#define DRV_NAME "sunxi-sid"
struct sunxi_sid_data {
void __iomem *reg_base;
unsigned int keysize;
};
/* We read the entire key, due to a 32 bit read alignment requirement. Since we
* want to return the requested byte, this results in somewhat slower code and
 * uses four times more reads than needed but keeps the code simpler. Since the SID is
* only very rarely probed, this is not really an issue.
*/
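/*
 * Worked example: a read at offset 6 fetches the 32-bit word at
 * offset 4 (big-endian), shifts it right by (6 % 4) * 8 = 16 bits
 * and returns the low byte.
 */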
static u8 sunxi_sid_read_byte(const struct sunxi_sid_data *sid_data,
const unsigned int offset)
{
u32 sid_key;
if (offset >= sid_data->keysize)
return 0;
sid_key = ioread32be(sid_data->reg_base + round_down(offset, 4));
sid_key >>= (offset % 4) * 8;
return sid_key; /* Only return the last byte */
}
static ssize_t sid_read(struct file *fd, struct kobject *kobj,
struct bin_attribute *attr, char *buf,
loff_t pos, size_t size)
{
struct platform_device *pdev;
struct sunxi_sid_data *sid_data;
int i;
pdev = to_platform_device(kobj_to_dev(kobj));
sid_data = platform_get_drvdata(pdev);
if (pos < 0 || pos >= sid_data->keysize)
return 0;
if (size > sid_data->keysize - pos)
size = sid_data->keysize - pos;
for (i = 0; i < size; i++)
buf[i] = sunxi_sid_read_byte(sid_data, pos + i);
return i;
}
static struct bin_attribute sid_bin_attr = {
.attr = { .name = "eeprom", .mode = S_IRUGO, },
.read = sid_read,
};
static int sunxi_sid_remove(struct platform_device *pdev)
{
device_remove_bin_file(&pdev->dev, &sid_bin_attr);
dev_dbg(&pdev->dev, "driver unloaded\n");
return 0;
}
static const struct of_device_id sunxi_sid_of_match[] = {
{ .compatible = "allwinner,sun4i-a10-sid", .data = (void *)16},
{ .compatible = "allwinner,sun7i-a20-sid", .data = (void *)512},
{/* sentinel */},
};
MODULE_DEVICE_TABLE(of, sunxi_sid_of_match);
static int sunxi_sid_probe(struct platform_device *pdev)
{
struct sunxi_sid_data *sid_data;
struct resource *res;
const struct of_device_id *of_dev_id;
u8 *entropy;
unsigned int i;
sid_data = devm_kzalloc(&pdev->dev, sizeof(struct sunxi_sid_data),
GFP_KERNEL);
if (!sid_data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
sid_data->reg_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(sid_data->reg_base))
return PTR_ERR(sid_data->reg_base);
of_dev_id = of_match_device(sunxi_sid_of_match, &pdev->dev);
if (!of_dev_id)
return -ENODEV;
sid_data->keysize = (int)of_dev_id->data;
platform_set_drvdata(pdev, sid_data);
sid_bin_attr.size = sid_data->keysize;
if (device_create_bin_file(&pdev->dev, &sid_bin_attr))
return -ENODEV;
	entropy = kzalloc(sizeof(u8) * sid_data->keysize, GFP_KERNEL);
	if (entropy) {
		for (i = 0; i < sid_data->keysize; i++)
			entropy[i] = sunxi_sid_read_byte(sid_data, i);
		/* seed the entropy pool with the SID contents */
		add_device_randomness(entropy, sid_data->keysize);
		kfree(entropy);
	}
dev_dbg(&pdev->dev, "loaded\n");
return 0;
}
static struct platform_driver sunxi_sid_driver = {
.probe = sunxi_sid_probe,
.remove = sunxi_sid_remove,
.driver = {
.name = DRV_NAME,
.owner = THIS_MODULE,
.of_match_table = sunxi_sid_of_match,
},
};
module_platform_driver(sunxi_sid_driver);
MODULE_AUTHOR("Oliver Schinagl <oliver@schinagl.nl>");
MODULE_DESCRIPTION("Allwinner sunxi security id driver");
MODULE_LICENSE("GPL");

570
drivers/misc/enclosure.c Normal file
View file

@ -0,0 +1,570 @@
/*
* Enclosure Services
*
* Copyright (C) 2008 James Bottomley <James.Bottomley@HansenPartnership.com>
*
**-----------------------------------------------------------------------------
**
** This program is free software; you can redistribute it and/or
** modify it under the terms of the GNU General Public License
** version 2 as published by the Free Software Foundation.
**
** This program is distributed in the hope that it will be useful,
** but WITHOUT ANY WARRANTY; without even the implied warranty of
** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
** GNU General Public License for more details.
**
** You should have received a copy of the GNU General Public License
** along with this program; if not, write to the Free Software
** Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
**
**-----------------------------------------------------------------------------
*/
#include <linux/device.h>
#include <linux/enclosure.h>
#include <linux/err.h>
#include <linux/list.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
static LIST_HEAD(container_list);
static DEFINE_MUTEX(container_list_lock);
static struct class enclosure_class;
/**
* enclosure_find - find an enclosure given a parent device
* @dev: the parent to match against
* @start: Optional enclosure device to start from (NULL if none)
*
* Looks through the list of registered enclosures to find all those
* with @dev as a parent. Returns NULL if no enclosure is
* found. @start can be used as a starting point to obtain multiple
* enclosures per parent (should begin with NULL and then be set to
* each returned enclosure device). Obtains a reference to the
 * enclosure class device which must be released with put_device().
* If @start is not NULL, a reference must be taken on it which is
* released before returning (this allows a loop through all
* enclosures to exit with only the reference on the enclosure of
* interest held). Note that the @dev may correspond to the actual
* device housing the enclosure, in which case no iteration via @start
* is required.
*/
struct enclosure_device *enclosure_find(struct device *dev,
struct enclosure_device *start)
{
struct enclosure_device *edev;
mutex_lock(&container_list_lock);
edev = list_prepare_entry(start, &container_list, node);
if (start)
put_device(&start->edev);
list_for_each_entry_continue(edev, &container_list, node) {
struct device *parent = edev->edev.parent;
/* parent might not be immediate, so iterate up to
* the root of the tree if necessary */
while (parent) {
if (parent == dev) {
get_device(&edev->edev);
mutex_unlock(&container_list_lock);
return edev;
}
parent = parent->parent;
}
}
mutex_unlock(&container_list_lock);
return NULL;
}
EXPORT_SYMBOL_GPL(enclosure_find);
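/*
 * Illustrative caller sketch (not part of this driver): walking every
 * enclosure whose parent chain contains @dev. enclosure_find() drops
 * the reference on the previous @start itself, so a loop run to
 * completion leaves no reference held; a caller that breaks out early
 * still owns a reference on the enclosure it kept and must release it
 * with put_device().
 */
#if 0
static void example_walk_enclosures(struct device *dev)
{
	struct enclosure_device *edev = NULL;

	while ((edev = enclosure_find(dev, edev)) != NULL)
		dev_info(&edev->edev, "enclosure found for %s\n",
			 dev_name(dev));
}
#endif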
/**
* enclosure_for_each_device - calls a function for each enclosure
* @fn: the function to call
* @data: the data to pass to each call
*
* Loops over all the enclosures calling the function.
*
* Note, this function uses a mutex which will be held across calls to
 * @fn, so it must have non-atomic context, and @fn may (although it
 * should not) sleep or otherwise cause the mutex to be held for
 * indefinite periods.
*/
int enclosure_for_each_device(int (*fn)(struct enclosure_device *, void *),
void *data)
{
int error = 0;
struct enclosure_device *edev;
mutex_lock(&container_list_lock);
list_for_each_entry(edev, &container_list, node) {
error = fn(edev, data);
if (error)
break;
}
mutex_unlock(&container_list_lock);
return error;
}
EXPORT_SYMBOL_GPL(enclosure_for_each_device);
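/*
 * Minimal callback sketch for enclosure_for_each_device() (hypothetical,
 * for illustration only): counts the registered enclosures.
 */
#if 0
static int example_count_enclosures(struct enclosure_device *edev, void *data)
{
	(*(int *)data)++;
	return 0;	/* a non-zero return stops the iteration */
}
#endif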
/**
* enclosure_register - register device as an enclosure
*
 * @dev: device containing the enclosure
 * @name: name of the enclosure
 * @components: number of components in the enclosure
 * @cb: platform callbacks for operating on the enclosure components
 *
 * This sets up the device for being an enclosure. Note that @dev does
 * not have to be a dedicated enclosure device. It may be some other type
 * of device that additionally responds to enclosure services.
 */
struct enclosure_device *
enclosure_register(struct device *dev, const char *name, int components,
struct enclosure_component_callbacks *cb)
{
struct enclosure_device *edev =
kzalloc(sizeof(struct enclosure_device) +
sizeof(struct enclosure_component)*components,
GFP_KERNEL);
int err, i;
BUG_ON(!cb);
if (!edev)
return ERR_PTR(-ENOMEM);
edev->components = components;
edev->edev.class = &enclosure_class;
edev->edev.parent = get_device(dev);
edev->cb = cb;
dev_set_name(&edev->edev, "%s", name);
err = device_register(&edev->edev);
if (err)
goto err;
for (i = 0; i < components; i++)
edev->component[i].number = -1;
mutex_lock(&container_list_lock);
list_add_tail(&edev->node, &container_list);
mutex_unlock(&container_list_lock);
return edev;
err:
put_device(edev->edev.parent);
kfree(edev);
return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(enclosure_register);
static struct enclosure_component_callbacks enclosure_null_callbacks;
/**
* enclosure_unregister - remove an enclosure
*
* @edev: the registered enclosure to remove;
*/
void enclosure_unregister(struct enclosure_device *edev)
{
int i;
mutex_lock(&container_list_lock);
list_del(&edev->node);
mutex_unlock(&container_list_lock);
for (i = 0; i < edev->components; i++)
if (edev->component[i].number != -1)
device_unregister(&edev->component[i].cdev);
/* prevent any callbacks into service user */
edev->cb = &enclosure_null_callbacks;
device_unregister(&edev->edev);
}
EXPORT_SYMBOL_GPL(enclosure_unregister);
#define ENCLOSURE_NAME_SIZE 64
static void enclosure_link_name(struct enclosure_component *cdev, char *name)
{
strcpy(name, "enclosure_device:");
strcat(name, dev_name(&cdev->cdev));
}
static void enclosure_remove_links(struct enclosure_component *cdev)
{
char name[ENCLOSURE_NAME_SIZE];
/*
* In odd circumstances, like multipath devices, something else may
* already have removed the links, so check for this condition first.
*/
if (!cdev->dev->kobj.sd)
return;
enclosure_link_name(cdev, name);
sysfs_remove_link(&cdev->dev->kobj, name);
sysfs_remove_link(&cdev->cdev.kobj, "device");
}
static int enclosure_add_links(struct enclosure_component *cdev)
{
int error;
char name[ENCLOSURE_NAME_SIZE];
error = sysfs_create_link(&cdev->cdev.kobj, &cdev->dev->kobj, "device");
if (error)
return error;
enclosure_link_name(cdev, name);
error = sysfs_create_link(&cdev->dev->kobj, &cdev->cdev.kobj, name);
if (error)
sysfs_remove_link(&cdev->cdev.kobj, "device");
return error;
}
static void enclosure_release(struct device *cdev)
{
struct enclosure_device *edev = to_enclosure_device(cdev);
put_device(cdev->parent);
kfree(edev);
}
static void enclosure_component_release(struct device *dev)
{
struct enclosure_component *cdev = to_enclosure_component(dev);
if (cdev->dev) {
enclosure_remove_links(cdev);
put_device(cdev->dev);
}
put_device(dev->parent);
}
static const struct attribute_group *enclosure_component_groups[];
/**
* enclosure_component_register - add a particular component to an enclosure
 * @edev: the enclosure to add the component to
 * @number: the device number
* @type: the type of component being added
* @name: an optional name to appear in sysfs (leave NULL if none)
*
* Registers the component. The name is optional for enclosures that
* give their components a unique name. If not, leave the field NULL
* and a name will be assigned.
*
* Returns a pointer to the enclosure component or an error.
*/
struct enclosure_component *
enclosure_component_register(struct enclosure_device *edev,
unsigned int number,
enum enclosure_component_type type,
const char *name)
{
struct enclosure_component *ecomp;
struct device *cdev;
int err;
if (number >= edev->components)
return ERR_PTR(-EINVAL);
ecomp = &edev->component[number];
if (ecomp->number != -1)
return ERR_PTR(-EINVAL);
ecomp->type = type;
ecomp->number = number;
cdev = &ecomp->cdev;
cdev->parent = get_device(&edev->edev);
if (name && name[0])
dev_set_name(cdev, "%s", name);
else
dev_set_name(cdev, "%u", number);
cdev->release = enclosure_component_release;
cdev->groups = enclosure_component_groups;
err = device_register(cdev);
if (err) {
ecomp->number = -1;
put_device(cdev);
return ERR_PTR(err);
}
return ecomp;
}
EXPORT_SYMBOL_GPL(enclosure_component_register);
/**
* enclosure_add_device - add a device as being part of an enclosure
* @edev: the enclosure device being added to.
 * @component: the number of the component
* @dev: the device being added
*
* Declares a real device to reside in slot (or identifier) @num of an
* enclosure. This will cause the relevant sysfs links to appear.
* This function may also be used to change a device associated with
* an enclosure without having to call enclosure_remove_device() in
* between.
*
* Returns zero on success or an error.
*/
int enclosure_add_device(struct enclosure_device *edev, int component,
struct device *dev)
{
struct enclosure_component *cdev;
if (!edev || component >= edev->components)
return -EINVAL;
cdev = &edev->component[component];
if (cdev->dev == dev)
return -EEXIST;
if (cdev->dev)
enclosure_remove_links(cdev);
put_device(cdev->dev);
cdev->dev = get_device(dev);
return enclosure_add_links(cdev);
}
EXPORT_SYMBOL_GPL(enclosure_add_device);
/**
* enclosure_remove_device - remove a device from an enclosure
* @edev: the enclosure device
 * @dev: the device to remove from the enclosure
*
* Returns zero on success or an error.
*
*/
int enclosure_remove_device(struct enclosure_device *edev, struct device *dev)
{
struct enclosure_component *cdev;
int i;
if (!edev || !dev)
return -EINVAL;
for (i = 0; i < edev->components; i++) {
cdev = &edev->component[i];
if (cdev->dev == dev) {
enclosure_remove_links(cdev);
device_del(&cdev->cdev);
put_device(dev);
cdev->dev = NULL;
return device_add(&cdev->cdev);
}
}
return -ENODEV;
}
EXPORT_SYMBOL_GPL(enclosure_remove_device);
/*
* sysfs pieces below
*/
static ssize_t components_show(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct enclosure_device *edev = to_enclosure_device(cdev);
return snprintf(buf, 40, "%d\n", edev->components);
}
static DEVICE_ATTR_RO(components);
static struct attribute *enclosure_class_attrs[] = {
&dev_attr_components.attr,
NULL,
};
ATTRIBUTE_GROUPS(enclosure_class);
static struct class enclosure_class = {
.name = "enclosure",
.owner = THIS_MODULE,
.dev_release = enclosure_release,
.dev_groups = enclosure_class_groups,
};
static const char *const enclosure_status [] = {
[ENCLOSURE_STATUS_UNSUPPORTED] = "unsupported",
[ENCLOSURE_STATUS_OK] = "OK",
[ENCLOSURE_STATUS_CRITICAL] = "critical",
[ENCLOSURE_STATUS_NON_CRITICAL] = "non-critical",
[ENCLOSURE_STATUS_UNRECOVERABLE] = "unrecoverable",
[ENCLOSURE_STATUS_NOT_INSTALLED] = "not installed",
[ENCLOSURE_STATUS_UNKNOWN] = "unknown",
[ENCLOSURE_STATUS_UNAVAILABLE] = "unavailable",
[ENCLOSURE_STATUS_MAX] = NULL,
};
static const char *const enclosure_type [] = {
[ENCLOSURE_COMPONENT_DEVICE] = "device",
[ENCLOSURE_COMPONENT_ARRAY_DEVICE] = "array device",
};
static ssize_t get_component_fault(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
if (edev->cb->get_fault)
edev->cb->get_fault(edev, ecomp);
return snprintf(buf, 40, "%d\n", ecomp->fault);
}
static ssize_t set_component_fault(struct device *cdev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
int val = simple_strtoul(buf, NULL, 0);
if (edev->cb->set_fault)
edev->cb->set_fault(edev, ecomp, val);
return count;
}
static ssize_t get_component_status(struct device *cdev,
struct device_attribute *attr,char *buf)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
if (edev->cb->get_status)
edev->cb->get_status(edev, ecomp);
return snprintf(buf, 40, "%s\n", enclosure_status[ecomp->status]);
}
static ssize_t set_component_status(struct device *cdev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
int i;
for (i = 0; enclosure_status[i]; i++) {
if (strncmp(buf, enclosure_status[i],
strlen(enclosure_status[i])) == 0 &&
(buf[strlen(enclosure_status[i])] == '\n' ||
buf[strlen(enclosure_status[i])] == '\0'))
break;
}
if (enclosure_status[i] && edev->cb->set_status) {
edev->cb->set_status(edev, ecomp, i);
return count;
} else
return -EINVAL;
}
static ssize_t get_component_active(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
if (edev->cb->get_active)
edev->cb->get_active(edev, ecomp);
return snprintf(buf, 40, "%d\n", ecomp->active);
}
static ssize_t set_component_active(struct device *cdev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
int val = simple_strtoul(buf, NULL, 0);
if (edev->cb->set_active)
edev->cb->set_active(edev, ecomp, val);
return count;
}
static ssize_t get_component_locate(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
if (edev->cb->get_locate)
edev->cb->get_locate(edev, ecomp);
return snprintf(buf, 40, "%d\n", ecomp->locate);
}
static ssize_t set_component_locate(struct device *cdev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct enclosure_device *edev = to_enclosure_device(cdev->parent);
struct enclosure_component *ecomp = to_enclosure_component(cdev);
int val = simple_strtoul(buf, NULL, 0);
if (edev->cb->set_locate)
edev->cb->set_locate(edev, ecomp, val);
return count;
}
static ssize_t get_component_type(struct device *cdev,
struct device_attribute *attr, char *buf)
{
struct enclosure_component *ecomp = to_enclosure_component(cdev);
return snprintf(buf, 40, "%s\n", enclosure_type[ecomp->type]);
}
static DEVICE_ATTR(fault, S_IRUGO | S_IWUSR, get_component_fault,
set_component_fault);
static DEVICE_ATTR(status, S_IRUGO | S_IWUSR, get_component_status,
set_component_status);
static DEVICE_ATTR(active, S_IRUGO | S_IWUSR, get_component_active,
set_component_active);
static DEVICE_ATTR(locate, S_IRUGO | S_IWUSR, get_component_locate,
set_component_locate);
static DEVICE_ATTR(type, S_IRUGO, get_component_type, NULL);
static struct attribute *enclosure_component_attrs[] = {
&dev_attr_fault.attr,
&dev_attr_status.attr,
&dev_attr_active.attr,
&dev_attr_locate.attr,
&dev_attr_type.attr,
NULL
};
ATTRIBUTE_GROUPS(enclosure_component);
static int __init enclosure_init(void)
{
int err;
err = class_register(&enclosure_class);
if (err)
return err;
return 0;
}
static void __exit enclosure_exit(void)
{
class_unregister(&enclosure_class);
}
module_init(enclosure_init);
module_exit(enclosure_exit);
MODULE_AUTHOR("James Bottomley");
MODULE_DESCRIPTION("Enclosure Services");
MODULE_LICENSE("GPL v2");

549
drivers/misc/fsa9480.c Normal file
View file

@ -0,0 +1,549 @@
/*
* fsa9480.c - FSA9480 micro USB switch device driver
*
* Copyright (C) 2010 Samsung Electronics
* Minkyu Kang <mk7.kang@samsung.com>
* Wonguk Jeong <wonguk.jeong@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/i2c.h>
#include <linux/platform_data/fsa9480.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
/* FSA9480 I2C registers */
#define FSA9480_REG_DEVID 0x01
#define FSA9480_REG_CTRL 0x02
#define FSA9480_REG_INT1 0x03
#define FSA9480_REG_INT2 0x04
#define FSA9480_REG_INT1_MASK 0x05
#define FSA9480_REG_INT2_MASK 0x06
#define FSA9480_REG_ADC 0x07
#define FSA9480_REG_TIMING1 0x08
#define FSA9480_REG_TIMING2 0x09
#define FSA9480_REG_DEV_T1 0x0a
#define FSA9480_REG_DEV_T2 0x0b
#define FSA9480_REG_BTN1 0x0c
#define FSA9480_REG_BTN2 0x0d
#define FSA9480_REG_CK 0x0e
#define FSA9480_REG_CK_INT1 0x0f
#define FSA9480_REG_CK_INT2 0x10
#define FSA9480_REG_CK_INTMASK1 0x11
#define FSA9480_REG_CK_INTMASK2 0x12
#define FSA9480_REG_MANSW1 0x13
#define FSA9480_REG_MANSW2 0x14
/* Control */
#define CON_SWITCH_OPEN (1 << 4)
#define CON_RAW_DATA (1 << 3)
#define CON_MANUAL_SW (1 << 2)
#define CON_WAIT (1 << 1)
#define CON_INT_MASK (1 << 0)
#define CON_MASK (CON_SWITCH_OPEN | CON_RAW_DATA | \
CON_MANUAL_SW | CON_WAIT)
/* Device Type 1 */
#define DEV_USB_OTG (1 << 7)
#define DEV_DEDICATED_CHG (1 << 6)
#define DEV_USB_CHG (1 << 5)
#define DEV_CAR_KIT (1 << 4)
#define DEV_UART (1 << 3)
#define DEV_USB (1 << 2)
#define DEV_AUDIO_2 (1 << 1)
#define DEV_AUDIO_1 (1 << 0)
#define DEV_T1_USB_MASK (DEV_USB_OTG | DEV_USB)
#define DEV_T1_UART_MASK (DEV_UART)
#define DEV_T1_CHARGER_MASK (DEV_DEDICATED_CHG | DEV_USB_CHG)
/* Device Type 2 */
#define DEV_AV (1 << 6)
#define DEV_TTY (1 << 5)
#define DEV_PPD (1 << 4)
#define DEV_JIG_UART_OFF (1 << 3)
#define DEV_JIG_UART_ON (1 << 2)
#define DEV_JIG_USB_OFF (1 << 1)
#define DEV_JIG_USB_ON (1 << 0)
#define DEV_T2_USB_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON)
#define DEV_T2_UART_MASK (DEV_JIG_UART_OFF | DEV_JIG_UART_ON)
#define DEV_T2_JIG_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON | \
DEV_JIG_UART_OFF | DEV_JIG_UART_ON)
/*
* Manual Switch
* D- [7:5] / D+ [4:2]
* 000: Open all / 001: USB / 010: AUDIO / 011: UART / 100: V_AUDIO
*/
#define SW_VAUDIO ((4 << 5) | (4 << 2))
#define SW_UART ((3 << 5) | (3 << 2))
#define SW_AUDIO ((2 << 5) | (2 << 2))
#define SW_DHOST ((1 << 5) | (1 << 2))
#define SW_AUTO ((0 << 5) | (0 << 2))
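/*
 * For example, SW_VAUDIO routes both lines to V_AUDIO:
 * (4 << 5) | (4 << 2) = 0x90, i.e. D- [7:5] = 100 and D+ [4:2] = 100.
 */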
/* Interrupt 1 */
#define INT_DETACH (1 << 1)
#define INT_ATTACH (1 << 0)
struct fsa9480_usbsw {
struct i2c_client *client;
struct fsa9480_platform_data *pdata;
int dev1;
int dev2;
int mansw;
};
static struct fsa9480_usbsw *chip;
static int fsa9480_write_reg(struct i2c_client *client,
int reg, int value)
{
int ret;
ret = i2c_smbus_write_byte_data(client, reg, value);
if (ret < 0)
dev_err(&client->dev, "%s: err %d\n", __func__, ret);
return ret;
}
static int fsa9480_read_reg(struct i2c_client *client, int reg)
{
int ret;
ret = i2c_smbus_read_byte_data(client, reg);
if (ret < 0)
dev_err(&client->dev, "%s: err %d\n", __func__, ret);
return ret;
}
static int fsa9480_read_irq(struct i2c_client *client, int *value)
{
int ret;
ret = i2c_smbus_read_i2c_block_data(client,
FSA9480_REG_INT1, 2, (u8 *)value);
*value &= 0xffff;
if (ret < 0)
dev_err(&client->dev, "%s: err %d\n", __func__, ret);
return ret;
}
static void fsa9480_set_switch(const char *buf)
{
struct fsa9480_usbsw *usbsw = chip;
struct i2c_client *client = usbsw->client;
unsigned int value;
unsigned int path = 0;
value = fsa9480_read_reg(client, FSA9480_REG_CTRL);
if (!strncmp(buf, "VAUDIO", 6)) {
path = SW_VAUDIO;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "UART", 4)) {
path = SW_UART;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "AUDIO", 5)) {
path = SW_AUDIO;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "DHOST", 5)) {
path = SW_DHOST;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "AUTO", 4)) {
path = SW_AUTO;
value |= CON_MANUAL_SW;
} else {
printk(KERN_ERR "Wrong command\n");
return;
}
usbsw->mansw = path;
fsa9480_write_reg(client, FSA9480_REG_MANSW1, path);
fsa9480_write_reg(client, FSA9480_REG_CTRL, value);
}
static ssize_t fsa9480_get_switch(char *buf)
{
struct fsa9480_usbsw *usbsw = chip;
struct i2c_client *client = usbsw->client;
unsigned int value;
value = fsa9480_read_reg(client, FSA9480_REG_MANSW1);
if (value == SW_VAUDIO)
return sprintf(buf, "VAUDIO\n");
else if (value == SW_UART)
return sprintf(buf, "UART\n");
else if (value == SW_AUDIO)
return sprintf(buf, "AUDIO\n");
else if (value == SW_DHOST)
return sprintf(buf, "DHOST\n");
else if (value == SW_AUTO)
return sprintf(buf, "AUTO\n");
else
return sprintf(buf, "%x", value);
}
static ssize_t fsa9480_show_device(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct fsa9480_usbsw *usbsw = dev_get_drvdata(dev);
struct i2c_client *client = usbsw->client;
int dev1, dev2;
dev1 = fsa9480_read_reg(client, FSA9480_REG_DEV_T1);
dev2 = fsa9480_read_reg(client, FSA9480_REG_DEV_T2);
if (!dev1 && !dev2)
return sprintf(buf, "NONE\n");
/* USB */
if (dev1 & DEV_T1_USB_MASK || dev2 & DEV_T2_USB_MASK)
return sprintf(buf, "USB\n");
/* UART */
if (dev1 & DEV_T1_UART_MASK || dev2 & DEV_T2_UART_MASK)
return sprintf(buf, "UART\n");
/* CHARGER */
if (dev1 & DEV_T1_CHARGER_MASK)
return sprintf(buf, "CHARGER\n");
/* JIG */
if (dev2 & DEV_T2_JIG_MASK)
return sprintf(buf, "JIG\n");
return sprintf(buf, "UNKNOWN\n");
}
static ssize_t fsa9480_show_manualsw(struct device *dev,
struct device_attribute *attr, char *buf)
{
return fsa9480_get_switch(buf);
}
static ssize_t fsa9480_set_manualsw(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
fsa9480_set_switch(buf);
return count;
}
static DEVICE_ATTR(device, S_IRUGO, fsa9480_show_device, NULL);
static DEVICE_ATTR(switch, S_IRUGO | S_IWUSR,
fsa9480_show_manualsw, fsa9480_set_manualsw);
static struct attribute *fsa9480_attributes[] = {
&dev_attr_device.attr,
&dev_attr_switch.attr,
NULL
};
static const struct attribute_group fsa9480_group = {
.attrs = fsa9480_attributes,
};
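/*
 * Illustrative sysfs usage (the exact device path depends on the I2C
 * bus number and slave address, so "3-0025" below is only an example):
 *
 *   cat /sys/bus/i2c/devices/3-0025/device          -> USB, UART, ...
 *   echo UART > /sys/bus/i2c/devices/3-0025/switch
 *
 * Valid values for the switch attribute are VAUDIO, UART, AUDIO,
 * DHOST and AUTO.
 */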
static void fsa9480_detect_dev(struct fsa9480_usbsw *usbsw, int intr)
{
int val1, val2, ctrl;
struct fsa9480_platform_data *pdata = usbsw->pdata;
struct i2c_client *client = usbsw->client;
val1 = fsa9480_read_reg(client, FSA9480_REG_DEV_T1);
val2 = fsa9480_read_reg(client, FSA9480_REG_DEV_T2);
ctrl = fsa9480_read_reg(client, FSA9480_REG_CTRL);
dev_info(&client->dev, "intr: 0x%x, dev1: 0x%x, dev2: 0x%x\n",
intr, val1, val2);
if (!intr)
goto out;
if (intr & INT_ATTACH) { /* Attached */
/* USB */
if (val1 & DEV_T1_USB_MASK || val2 & DEV_T2_USB_MASK) {
if (pdata->usb_cb)
pdata->usb_cb(FSA9480_ATTACHED);
if (usbsw->mansw) {
fsa9480_write_reg(client,
FSA9480_REG_MANSW1, usbsw->mansw);
}
}
/* UART */
if (val1 & DEV_T1_UART_MASK || val2 & DEV_T2_UART_MASK) {
if (pdata->uart_cb)
pdata->uart_cb(FSA9480_ATTACHED);
if (!(ctrl & CON_MANUAL_SW)) {
fsa9480_write_reg(client,
FSA9480_REG_MANSW1, SW_UART);
}
}
/* CHARGER */
if (val1 & DEV_T1_CHARGER_MASK) {
if (pdata->charger_cb)
pdata->charger_cb(FSA9480_ATTACHED);
}
/* JIG */
if (val2 & DEV_T2_JIG_MASK) {
if (pdata->jig_cb)
pdata->jig_cb(FSA9480_ATTACHED);
}
} else if (intr & INT_DETACH) { /* Detached */
/* USB */
if (usbsw->dev1 & DEV_T1_USB_MASK ||
usbsw->dev2 & DEV_T2_USB_MASK) {
if (pdata->usb_cb)
pdata->usb_cb(FSA9480_DETACHED);
}
/* UART */
if (usbsw->dev1 & DEV_T1_UART_MASK ||
usbsw->dev2 & DEV_T2_UART_MASK) {
if (pdata->uart_cb)
pdata->uart_cb(FSA9480_DETACHED);
}
/* CHARGER */
if (usbsw->dev1 & DEV_T1_CHARGER_MASK) {
if (pdata->charger_cb)
pdata->charger_cb(FSA9480_DETACHED);
}
/* JIG */
if (usbsw->dev2 & DEV_T2_JIG_MASK) {
if (pdata->jig_cb)
pdata->jig_cb(FSA9480_DETACHED);
}
}
usbsw->dev1 = val1;
usbsw->dev2 = val2;
out:
ctrl &= ~CON_INT_MASK;
fsa9480_write_reg(client, FSA9480_REG_CTRL, ctrl);
}
static irqreturn_t fsa9480_irq_handler(int irq, void *data)
{
struct fsa9480_usbsw *usbsw = data;
struct i2c_client *client = usbsw->client;
int intr;
/* clear interrupt */
fsa9480_read_irq(client, &intr);
/* device detection */
fsa9480_detect_dev(usbsw, intr);
return IRQ_HANDLED;
}
static int fsa9480_irq_init(struct fsa9480_usbsw *usbsw)
{
struct fsa9480_platform_data *pdata = usbsw->pdata;
struct i2c_client *client = usbsw->client;
int ret;
int intr;
unsigned int ctrl = CON_MASK;
/* clear interrupt */
fsa9480_read_irq(client, &intr);
/* unmask interrupt (attach/detach only) */
fsa9480_write_reg(client, FSA9480_REG_INT1_MASK, 0xfc);
fsa9480_write_reg(client, FSA9480_REG_INT2_MASK, 0x1f);
usbsw->mansw = fsa9480_read_reg(client, FSA9480_REG_MANSW1);
if (usbsw->mansw)
ctrl &= ~CON_MANUAL_SW; /* Manual Switching Mode */
fsa9480_write_reg(client, FSA9480_REG_CTRL, ctrl);
if (pdata && pdata->cfg_gpio)
pdata->cfg_gpio();
if (client->irq) {
ret = request_threaded_irq(client->irq, NULL,
fsa9480_irq_handler,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
"fsa9480 micro USB", usbsw);
if (ret) {
dev_err(&client->dev, "failed to request IRQ\n");
return ret;
}
if (pdata)
device_init_wakeup(&client->dev, pdata->wakeup);
}
return 0;
}
static int fsa9480_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
struct fsa9480_usbsw *usbsw;
int ret = 0;
if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
return -EIO;
usbsw = kzalloc(sizeof(struct fsa9480_usbsw), GFP_KERNEL);
if (!usbsw) {
dev_err(&client->dev, "failed to allocate driver data\n");
return -ENOMEM;
}
usbsw->client = client;
usbsw->pdata = client->dev.platform_data;
chip = usbsw;
i2c_set_clientdata(client, usbsw);
ret = fsa9480_irq_init(usbsw);
if (ret)
goto fail1;
ret = sysfs_create_group(&client->dev.kobj, &fsa9480_group);
if (ret) {
dev_err(&client->dev,
"failed to create fsa9480 attribute group\n");
goto fail2;
}
/* ADC Detect Time: 500ms */
fsa9480_write_reg(client, FSA9480_REG_TIMING1, 0x6);
	if (chip->pdata && chip->pdata->reset_cb)
chip->pdata->reset_cb();
/* device detection */
fsa9480_detect_dev(usbsw, INT_ATTACH);
pm_runtime_set_active(&client->dev);
return 0;
fail2:
if (client->irq)
free_irq(client->irq, usbsw);
fail1:
kfree(usbsw);
return ret;
}
static int fsa9480_remove(struct i2c_client *client)
{
struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client);
if (client->irq)
free_irq(client->irq, usbsw);
sysfs_remove_group(&client->dev.kobj, &fsa9480_group);
device_init_wakeup(&client->dev, 0);
kfree(usbsw);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int fsa9480_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client);
struct fsa9480_platform_data *pdata = usbsw->pdata;
if (device_may_wakeup(&client->dev) && client->irq)
enable_irq_wake(client->irq);
	if (pdata && pdata->usb_power)
pdata->usb_power(0);
return 0;
}
static int fsa9480_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client);
int dev1, dev2;
if (device_may_wakeup(&client->dev) && client->irq)
disable_irq_wake(client->irq);
/*
 * Clear pending interrupts. Note that fsa9480_detect_dev() does what
 * the interrupt handler does, so we don't miss a pending event and
 * we re-enable the interrupt if there is one.
*/
fsa9480_read_reg(client, FSA9480_REG_INT1);
fsa9480_read_reg(client, FSA9480_REG_INT2);
dev1 = fsa9480_read_reg(client, FSA9480_REG_DEV_T1);
dev2 = fsa9480_read_reg(client, FSA9480_REG_DEV_T2);
/* device detection */
fsa9480_detect_dev(usbsw, (dev1 || dev2) ? INT_ATTACH : INT_DETACH);
return 0;
}
static SIMPLE_DEV_PM_OPS(fsa9480_pm_ops, fsa9480_suspend, fsa9480_resume);
#define FSA9480_PM_OPS (&fsa9480_pm_ops)
#else
#define FSA9480_PM_OPS NULL
#endif /* CONFIG_PM_SLEEP */
static const struct i2c_device_id fsa9480_id[] = {
{"fsa9480", 0},
{}
};
MODULE_DEVICE_TABLE(i2c, fsa9480_id);
static struct i2c_driver fsa9480_i2c_driver = {
.driver = {
.name = "fsa9480",
.pm = FSA9480_PM_OPS,
},
.probe = fsa9480_probe,
.remove = fsa9480_remove,
.id_table = fsa9480_id,
};
module_i2c_driver(fsa9480_i2c_driver);
MODULE_AUTHOR("Minkyu Kang <mk7.kang@samsung.com>");
MODULE_DESCRIPTION("FSA9480 USB Switch driver");
MODULE_LICENSE("GPL");

View file

@ -0,0 +1 @@
obj-$(CONFIG_ARCH_TEGRA) += tegra/

View file

@ -0,0 +1,19 @@
#
# IBM Accelerator Family 'GenWQE'
#
menuconfig GENWQE
tristate "GenWQE PCIe Accelerator"
depends on PCI && 64BIT
select CRC_ITU_T
default n
help
Enables PCIe card driver for IBM GenWQE accelerators.
The user-space interface is described in
include/linux/genwqe/genwqe_card.h.
config GENWQE_PLATFORM_ERROR_RECOVERY
int "Use platform recovery procedures (0=off, 1=on)"
depends on GENWQE
default 1 if PPC64
default 0

View file

@ -0,0 +1,7 @@
#
# Makefile for GenWQE driver
#
obj-$(CONFIG_GENWQE) := genwqe_card.o
genwqe_card-objs := card_base.o card_dev.o card_ddcb.o card_sysfs.o \
card_debugfs.o card_utils.o

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,584 @@
#ifndef __CARD_BASE_H__
#define __CARD_BASE_H__
/**
* IBM Accelerator Family 'GenWQE'
*
* (C) Copyright IBM Corp. 2013
*
* Author: Frank Haverkamp <haver@linux.vnet.ibm.com>
* Author: Joerg-Stephan Vogt <jsvogt@de.ibm.com>
* Author: Michael Jung <mijung@gmx.net>
* Author: Michael Ruettger <michael@ibmra.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
/*
* Interfaces within the GenWQE module. Defines genwqe_card and
* ddcb_queue as well as ddcb_requ.
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/cdev.h>
#include <linux/stringify.h>
#include <linux/pci.h>
#include <linux/semaphore.h>
#include <linux/uaccess.h>
#include <linux/io.h>
#include <linux/version.h>
#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/genwqe/genwqe_card.h>
#include "genwqe_driver.h"
#define GENWQE_MSI_IRQS 4 /* Just one supported, no MSIx */
#define GENWQE_FLAG_MSI_ENABLED (1 << 0)
#define GENWQE_MAX_VFS 15 /* maximum 15 VFs are possible */
#define GENWQE_MAX_FUNCS 16 /* 1 PF and 15 VFs */
#define GENWQE_CARD_NO_MAX (16 * GENWQE_MAX_FUNCS)
/* Compile parameters, some of them appear in debugfs for later adjustment */
#define genwqe_ddcb_max 32 /* DDCBs on the work-queue */
#define genwqe_polling_enabled 0 /* in case of irqs not working */
#define genwqe_ddcb_software_timeout 10 /* timeout per DDCB in seconds */
#define genwqe_kill_timeout 8 /* time until process gets killed */
#define genwqe_vf_jobtimeout_msec 250 /* 250 msec */
#define genwqe_pf_jobtimeout_msec 8000 /* 8 sec should be ok */
#define genwqe_health_check_interval 4 /* <= 0: disabled */
/* Sysfs attribute groups used when we create the genwqe device */
extern const struct attribute_group *genwqe_attribute_groups[];
/*
* Config space for Genwqe5 A7:
* 00:[14 10 4b 04]40 00 10 00[00 00 00 12]00 00 00 00
* 10: 0c 00 00 f0 07 3c 00 00 00 00 00 00 00 00 00 00
* 20: 00 00 00 00 00 00 00 00 00 00 00 00[14 10 4b 04]
* 30: 00 00 00 00 50 00 00 00 00 00 00 00 00 00 00 00
*/
#define PCI_DEVICE_GENWQE 0x044b /* Genwqe DeviceID */
#define PCI_SUBSYSTEM_ID_GENWQE5 0x035f /* Genwqe A5 Subsystem-ID */
#define PCI_SUBSYSTEM_ID_GENWQE5_NEW 0x044b /* Genwqe A5 Subsystem-ID */
#define PCI_CLASSCODE_GENWQE5 0x1200 /* UNKNOWN */
#define PCI_SUBVENDOR_ID_IBM_SRIOV 0x0000
#define PCI_SUBSYSTEM_ID_GENWQE5_SRIOV 0x0000 /* Genwqe A5 Subsystem-ID */
#define PCI_CLASSCODE_GENWQE5_SRIOV 0x1200 /* UNKNOWN */
#define GENWQE_SLU_ARCH_REQ 2 /* Required SLU architecture level */
/**
* struct genwqe_reg - Genwqe data dump functionality
*/
struct genwqe_reg {
u32 addr;
u32 idx;
u64 val;
};
/*
* enum genwqe_dbg_type - Specify chip unit to dump/debug
*/
enum genwqe_dbg_type {
GENWQE_DBG_UNIT0 = 0, /* captured before prev errs cleared */
GENWQE_DBG_UNIT1 = 1,
GENWQE_DBG_UNIT2 = 2,
GENWQE_DBG_UNIT3 = 3,
GENWQE_DBG_UNIT4 = 4,
GENWQE_DBG_UNIT5 = 5,
GENWQE_DBG_UNIT6 = 6,
GENWQE_DBG_UNIT7 = 7,
GENWQE_DBG_REGS = 8,
GENWQE_DBG_DMA = 9,
GENWQE_DBG_UNITS = 10, /* max number of possible debug units */
};
/* Software error injection to simulate card failures */
#define GENWQE_INJECT_HARDWARE_FAILURE 0x00000001 /* injects -1 reg reads */
#define GENWQE_INJECT_BUS_RESET_FAILURE 0x00000002 /* pci_bus_reset fail */
#define GENWQE_INJECT_GFIR_FATAL 0x00000004 /* GFIR = 0x0000ffff */
#define GENWQE_INJECT_GFIR_INFO 0x00000008 /* GFIR = 0xffff0000 */
/*
* Genwqe card description and management data.
*
* Error-handling in case of card malfunction
* ------------------------------------------
*
* If the card is detected to be defective the outside environment
* will cause the PCI layer to call deinit (the cleanup function for
 * probe). This has the same effect as doing an unbind/bind operation
* on the card.
*
* The genwqe card driver implements a health checking thread which
 * verifies the card function. If this detects a problem, the card's
 * device is shut down and restarted, along with a reset of
* the card and queue.
*
* All functions accessing the card device return either -EIO or -ENODEV
* code to indicate the malfunction to the user. The user has to close
* the file descriptor and open a new one, once the card becomes
* available again.
*
 * If the open file descriptor is set up to receive SIGIO, the signal is
 * generated for the application, which has to provide a handler to
 * react to it. If the application does not close the open
 * file descriptor, a SIGKILL is sent to enforce freeing the card's
 * resources.
*
* I did not find a different way to prevent kernel problems due to
* reference counters for the cards character devices getting out of
* sync. The character device deallocation does not block, even if
* there is still an open file descriptor pending. If this pending
* descriptor is closed, the data structures used by the character
* device is reinstantiated, which will lead to the reference counter
* dropping below the allowed values.
*
* Card recovery
* -------------
*
* To test the internal driver recovery the following command can be used:
* sudo sh -c 'echo 0xfffff > /sys/class/genwqe/genwqe0_card/err_inject'
*/
/**
 * enum dma_mapping_type - Mapping type definition
 *
 * To avoid memcpying data around we use user memory directly. To do
* this we need to pin/swap-in the memory and request a DMA address
* for it.
*/
enum dma_mapping_type {
	GENWQE_MAPPING_RAW = 0,		/* contiguous memory buffer */
GENWQE_MAPPING_SGL_TEMP, /* sglist dynamically used */
GENWQE_MAPPING_SGL_PINNED, /* sglist used with pinning */
};
/**
* struct dma_mapping - Information about memory mappings done by the driver
*/
struct dma_mapping {
enum dma_mapping_type type;
void *u_vaddr; /* user-space vaddr/non-aligned */
void *k_vaddr; /* kernel-space vaddr/non-aligned */
dma_addr_t dma_addr; /* physical DMA address */
struct page **page_list; /* list of pages used by user buff */
dma_addr_t *dma_list; /* list of dma addresses per page */
unsigned int nr_pages; /* number of pages */
unsigned int size; /* size in bytes */
struct list_head card_list; /* list of usr_maps for card */
struct list_head pin_list; /* list of pinned memory for dev */
};
static inline void genwqe_mapping_init(struct dma_mapping *m,
enum dma_mapping_type type)
{
memset(m, 0, sizeof(*m));
m->type = type;
}
/**
* struct ddcb_queue - DDCB queue data
* @ddcb_max: Number of DDCBs on the queue
* @ddcb_next: Next free DDCB
* @ddcb_act: Next DDCB supposed to finish
* @ddcb_seq: Sequence number of last DDCB
* @ddcbs_in_flight: Currently enqueued DDCBs
* @ddcbs_completed: Number of already completed DDCBs
* @return_on_busy: Number of -EBUSY returns on full queue
* @wait_on_busy: Number of waits on full queue
* @ddcb_daddr: DMA address of first DDCB in the queue
* @ddcb_vaddr: Kernel virtual address of first DDCB in the queue
* @ddcb_req: Associated requests (one per DDCB)
* @ddcb_waitqs: Associated wait queues (one per DDCB)
* @ddcb_lock: Lock to protect queuing operations
* @ddcb_waitq: Wait on next DDCB finishing
*/
struct ddcb_queue {
int ddcb_max; /* amount of DDCBs */
int ddcb_next; /* next available DDCB num */
int ddcb_act; /* DDCB to be processed */
u16 ddcb_seq; /* slc seq num */
unsigned int ddcbs_in_flight; /* number of ddcbs in processing */
unsigned int ddcbs_completed;
unsigned int ddcbs_max_in_flight;
unsigned int return_on_busy; /* how many times -EBUSY? */
unsigned int wait_on_busy;
dma_addr_t ddcb_daddr; /* DMA address */
struct ddcb *ddcb_vaddr; /* kernel virtual addr for DDCBs */
struct ddcb_requ **ddcb_req; /* ddcb processing parameter */
wait_queue_head_t *ddcb_waitqs; /* waitqueue per ddcb */
spinlock_t ddcb_lock; /* exclusive access to queue */
wait_queue_head_t busy_waitq; /* wait for ddcb processing */
/* registers or the respective queue to be used */
u32 IO_QUEUE_CONFIG;
u32 IO_QUEUE_STATUS;
u32 IO_QUEUE_SEGMENT;
u32 IO_QUEUE_INITSQN;
u32 IO_QUEUE_WRAP;
u32 IO_QUEUE_OFFSET;
u32 IO_QUEUE_WTIME;
u32 IO_QUEUE_ERRCNTS;
u32 IO_QUEUE_LRW;
};
/*
* GFIR, SLU_UNITCFG, APP_UNITCFG
* 8 Units with FIR/FEC + 64 * 2ndary FIRS/FEC.
*/
#define GENWQE_FFDC_REGS (3 + (8 * (2 + 2 * 64)))
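/* i.e. 3 + 8 * (2 + 128) = 1043 register dump entries */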
struct genwqe_ffdc {
unsigned int entries;
struct genwqe_reg *regs;
};
/**
* struct genwqe_dev - GenWQE device information
* @card_state: Card operation state, see above
* @ffdc: First Failure Data Capture buffers for each unit
* @card_thread: Working thread to operate the DDCB queue
* @card_waitq: Wait queue used in card_thread
* @queue: DDCB queue
* @health_thread: Card monitoring thread (only for PFs)
* @health_waitq: Wait queue used in health_thread
* @pci_dev: Associated PCI device (function)
* @mmio: Base address of 64-bit register space
* @mmio_len: Length of register area
* @file_lock: Lock to protect access to file_list
* @file_list: List of all processes with open GenWQE file descriptors
*
* This struct contains all information needed to communicate with a
* GenWQE card. It is initialized when a GenWQE device is found and
* destroyed when it goes away. It holds data to maintain the queue as
* well as data needed to feed the user interfaces.
*/
struct genwqe_dev {
enum genwqe_card_state card_state;
spinlock_t print_lock;
int card_idx; /* card index 0..CARD_NO_MAX-1 */
u64 flags; /* general flags */
/* FFDC data gathering */
struct genwqe_ffdc ffdc[GENWQE_DBG_UNITS];
/* DDCB workqueue */
struct task_struct *card_thread;
wait_queue_head_t queue_waitq;
struct ddcb_queue queue; /* genwqe DDCB queue */
unsigned int irqs_processed;
/* Card health checking thread */
struct task_struct *health_thread;
wait_queue_head_t health_waitq;
int use_platform_recovery; /* use platform recovery mechanisms */
/* char device */
dev_t devnum_genwqe; /* major/minor num card */
struct class *class_genwqe; /* reference to class object */
struct device *dev; /* for device creation */
struct cdev cdev_genwqe; /* char device for card */
struct dentry *debugfs_root; /* debugfs card root directory */
struct dentry *debugfs_genwqe; /* debugfs driver root directory */
/* pci resources */
struct pci_dev *pci_dev; /* PCI device */
void __iomem *mmio; /* BAR-0 MMIO start */
unsigned long mmio_len;
int num_vfs;
u32 vf_jobtimeout_msec[GENWQE_MAX_VFS];
int is_privileged; /* access to all regs possible */
/* config regs which we need often */
u64 slu_unitcfg;
u64 app_unitcfg;
u64 softreset;
u64 err_inject;
u64 last_gfir;
char app_name[5];
spinlock_t file_lock; /* lock for open files */
struct list_head file_list; /* list of open files */
/* debugfs parameters */
int ddcb_software_timeout; /* wait until DDCB times out */
int skip_recovery; /* circumvention if recovery fails */
int kill_timeout; /* wait after sending SIGKILL */
};
/**
* enum genwqe_requ_state - State of a DDCB execution request
*/
enum genwqe_requ_state {
GENWQE_REQU_NEW = 0,
GENWQE_REQU_ENQUEUED = 1,
GENWQE_REQU_TAPPED = 2,
GENWQE_REQU_FINISHED = 3,
GENWQE_REQU_STATE_MAX,
};
/**
* struct genwqe_sgl - Scatter gather list describing user-space memory
* @sgl: scatter gather list needs to be 128 byte aligned
* @sgl_dma_addr: dma address of sgl
* @sgl_size: size of area used for sgl
* @user_addr: user-space address of memory area
* @user_size: size of user-space memory area
* @page: buffer for partial pages if needed
* @page_dma_addr: dma address partial pages
*/
struct genwqe_sgl {
dma_addr_t sgl_dma_addr;
struct sg_entry *sgl;
size_t sgl_size; /* size of sgl */
void __user *user_addr; /* user-space base-address */
size_t user_size; /* size of memory area */
unsigned long nr_pages;
unsigned long fpage_offs;
size_t fpage_size;
size_t lpage_size;
void *fpage;
dma_addr_t fpage_dma_addr;
void *lpage;
dma_addr_t lpage_dma_addr;
};
int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl,
void __user *user_addr, size_t user_size);
int genwqe_setup_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl,
dma_addr_t *dma_list);
int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl);
/**
* struct ddcb_requ - Kernel internal representation of the DDCB request
* @cmd: User space representation of the DDCB execution request
*/
struct ddcb_requ {
/* kernel specific content */
enum genwqe_requ_state req_state; /* request status */
int num; /* ddcb_no for this request */
struct ddcb_queue *queue; /* associated queue */
struct dma_mapping dma_mappings[DDCB_FIXUPS];
struct genwqe_sgl sgls[DDCB_FIXUPS];
/* kernel/user shared content */
struct genwqe_ddcb_cmd cmd; /* ddcb_no for this request */
struct genwqe_debug_data debug_data;
};
/**
* struct genwqe_file - Information for open GenWQE devices
*/
struct genwqe_file {
struct genwqe_dev *cd;
struct genwqe_driver *client;
struct file *filp;
struct fasync_struct *async_queue;
struct task_struct *owner;
struct list_head list; /* entry in list of open files */
spinlock_t map_lock; /* lock for dma_mappings */
struct list_head map_list; /* list of dma_mappings */
spinlock_t pin_lock; /* lock for pinned memory */
struct list_head pin_list; /* list of pinned memory */
};
int genwqe_setup_service_layer(struct genwqe_dev *cd); /* for PF only */
int genwqe_finish_queue(struct genwqe_dev *cd);
int genwqe_release_service_layer(struct genwqe_dev *cd);
/**
* genwqe_get_slu_id() - Read Service Layer Unit Id
* Return: 0x00: Development code
* 0x01: SLC1 (old)
* 0x02: SLC2 (sept2012)
* 0x03: SLC2 (feb2013, generic driver)
*/
static inline int genwqe_get_slu_id(struct genwqe_dev *cd)
{
return (int)((cd->slu_unitcfg >> 32) & 0xff);
}
int genwqe_ddcbs_in_flight(struct genwqe_dev *cd);
u8 genwqe_card_type(struct genwqe_dev *cd);
int genwqe_card_reset(struct genwqe_dev *cd);
int genwqe_set_interrupt_capability(struct genwqe_dev *cd, int count);
void genwqe_reset_interrupt_capability(struct genwqe_dev *cd);
int genwqe_device_create(struct genwqe_dev *cd);
int genwqe_device_remove(struct genwqe_dev *cd);
/* debugfs */
int genwqe_init_debugfs(struct genwqe_dev *cd);
void genqwe_exit_debugfs(struct genwqe_dev *cd);
int genwqe_read_softreset(struct genwqe_dev *cd);
/* Hardware Circumventions */
int genwqe_recovery_on_fatal_gfir_required(struct genwqe_dev *cd);
int genwqe_flash_readback_fails(struct genwqe_dev *cd);
/**
* genwqe_write_vreg() - Write register in VF window
* @cd: genwqe device
* @reg: register address
* @val: value to write
* @func: 0: PF, 1: VF0, ..., 15: VF14
*/
int genwqe_write_vreg(struct genwqe_dev *cd, u32 reg, u64 val, int func);
/**
* genwqe_read_vreg() - Read register in VF window
* @cd: genwqe device
* @reg: register address
* @func: 0: PF, 1: VF0, ..., 15: VF14
*
* Return: content of the register
*/
u64 genwqe_read_vreg(struct genwqe_dev *cd, u32 reg, int func);
/* FFDC Buffer Management */
int genwqe_ffdc_buff_size(struct genwqe_dev *cd, int unit_id);
int genwqe_ffdc_buff_read(struct genwqe_dev *cd, int unit_id,
struct genwqe_reg *regs, unsigned int max_regs);
int genwqe_read_ffdc_regs(struct genwqe_dev *cd, struct genwqe_reg *regs,
unsigned int max_regs, int all);
int genwqe_ffdc_dump_dma(struct genwqe_dev *cd,
struct genwqe_reg *regs, unsigned int max_regs);
int genwqe_init_debug_data(struct genwqe_dev *cd,
struct genwqe_debug_data *d);
void genwqe_init_crc32(void);
int genwqe_read_app_id(struct genwqe_dev *cd, char *app_name, int len);
/* Memory allocation/deallocation; dma address handling */
int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m,
void *uaddr, unsigned long size,
struct ddcb_requ *req);
int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m,
struct ddcb_requ *req);
static inline bool dma_mapping_used(struct dma_mapping *m)
{
if (!m)
return 0;
return m->size != 0;
}
/**
* __genwqe_execute_ddcb() - Execute DDCB request with addr translation
*
* This function will do the address translation changes to the DDCBs
* according to the definitions required by the ATS field. It looks up
* the memory allocation buffer or does vmap/vunmap for the respective
* user-space buffers, inclusive page pinning and scatter gather list
* buildup and teardown.
*/
int __genwqe_execute_ddcb(struct genwqe_dev *cd,
struct genwqe_ddcb_cmd *cmd, unsigned int f_flags);
/**
* __genwqe_execute_raw_ddcb() - Execute DDCB request without addr translation
*
 * This version will not do address translation or any modification of
* the DDCB data. It is used e.g. for the MoveFlash DDCB which is
* entirely prepared by the driver itself. That means the appropriate
* DMA addresses are already in the DDCB and do not need any
* modification.
*/
int __genwqe_execute_raw_ddcb(struct genwqe_dev *cd,
struct genwqe_ddcb_cmd *cmd,
unsigned int f_flags);
int __genwqe_enqueue_ddcb(struct genwqe_dev *cd,
struct ddcb_requ *req,
unsigned int f_flags);
int __genwqe_wait_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req);
int __genwqe_purge_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req);
/* register access */
int __genwqe_writeq(struct genwqe_dev *cd, u64 byte_offs, u64 val);
u64 __genwqe_readq(struct genwqe_dev *cd, u64 byte_offs);
int __genwqe_writel(struct genwqe_dev *cd, u64 byte_offs, u32 val);
u32 __genwqe_readl(struct genwqe_dev *cd, u64 byte_offs);
void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size,
dma_addr_t *dma_handle);
void __genwqe_free_consistent(struct genwqe_dev *cd, size_t size,
void *vaddr, dma_addr_t dma_handle);
/* Base clock frequency in MHz */
int genwqe_base_clock_frequency(struct genwqe_dev *cd);
/* Before FFDC is captured the traps should be stopped. */
void genwqe_stop_traps(struct genwqe_dev *cd);
void genwqe_start_traps(struct genwqe_dev *cd);
/* Hardware circumvention */
bool genwqe_need_err_masking(struct genwqe_dev *cd);
/**
* genwqe_is_privileged() - Determine operation mode for PCI function
*
* On Intel with SRIOV support we see:
* PF: is_physfn = 1 is_virtfn = 0
* VF: is_physfn = 0 is_virtfn = 1
*
 * On systems with no SRIOV support _and_ on virtualized systems we get:
* is_physfn = 0 is_virtfn = 0
*
* Other vendors have individual pci device ids to distinguish between
* virtual function drivers and physical function drivers. GenWQE
 * unfortunately has just one pci device id for both VFs and the PF.
*
* The following code is used to distinguish if the card is running in
* privileged mode, either as true PF or in a virtualized system with
* full register access e.g. currently on PowerPC.
*
* if (pci_dev->is_virtfn)
* cd->is_privileged = 0;
* else
* cd->is_privileged = (__genwqe_readq(cd, IO_SLU_BITSTREAM)
* != IO_ILLEGAL_VALUE);
*/
static inline int genwqe_is_privileged(struct genwqe_dev *cd)
{
return cd->is_privileged;
}
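/*
 * Usage sketch (illustrative only, not part of the original interface):
 * callers gate PF-only MMIO accesses on this check, for instance when
 * reading the bitstream register which VFs must not touch:
 *
 *   if (genwqe_is_privileged(cd))
 *           bitstream = __genwqe_readq(cd, IO_SLU_BITSTREAM);
 */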
#endif /* __CARD_BASE_H__ */

File diff suppressed because it is too large

View file

@ -0,0 +1,188 @@
#ifndef __CARD_DDCB_H__
#define __CARD_DDCB_H__
/**
* IBM Accelerator Family 'GenWQE'
*
* (C) Copyright IBM Corp. 2013
*
* Author: Frank Haverkamp <haver@linux.vnet.ibm.com>
* Author: Joerg-Stephan Vogt <jsvogt@de.ibm.com>
* Author: Michael Jung <mijung@gmx.net>
* Author: Michael Ruettger <michael@ibmra.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/types.h>
#include <asm/byteorder.h>
#include "genwqe_driver.h"
#include "card_base.h"
/**
* struct ddcb - Device Driver Control Block DDCB
* @hsi: Hardware software interlock
* @shi: Software hardware interlock. Hsi and shi are used to interlock
* software and hardware activities. We are using a compare and
* swap operation to ensure that there are no races when
* activating new DDCBs on the queue, or when we need to
* purge a DDCB from a running queue.
* @acfunc: Accelerator function addresses a unit within the chip
* @cmd: Command to work on
* @cmdopts_16: Options for the command
* @asiv: Input data
* @asv: Output data
*
 * The DDCB data format is big endian. Multiple consecutive DDCBs form
* a DDCB queue.
*/
#define ASIV_LENGTH 104 /* Old specification without ATS field */
#define ASIV_LENGTH_ATS 96 /* New specification with ATS field */
#define ASV_LENGTH 64
struct ddcb {
union {
__be32 icrc_hsi_shi_32; /* iCRC, Hardware/SW interlock */
struct {
__be16 icrc_16;
u8 hsi;
u8 shi;
};
};
u8 pre; /* Preamble */
u8 xdir; /* Execution Directives */
__be16 seqnum_16; /* Sequence Number */
u8 acfunc; /* Accelerator Function.. */
u8 cmd; /* Command. */
__be16 cmdopts_16; /* Command Options */
u8 sur; /* Status Update Rate */
u8 psp; /* Protection Section Pointer */
__be16 rsvd_0e_16; /* Reserved invariant */
__be64 fwiv_64; /* Firmware Invariant. */
union {
struct {
__be64 ats_64; /* Address Translation Spec */
u8 asiv[ASIV_LENGTH_ATS]; /* New ASIV */
} n;
u8 __asiv[ASIV_LENGTH]; /* obsolete */
};
u8 asv[ASV_LENGTH]; /* Appl Spec Variant */
__be16 rsvd_c0_16; /* Reserved Variant */
__be16 vcrc_16; /* Variant CRC */
__be32 rsvd_32; /* Reserved unprotected */
__be64 deque_ts_64; /* Deque Time Stamp. */
__be16 retc_16; /* Return Code */
__be16 attn_16; /* Attention/Extended Error Codes */
__be32 progress_32; /* Progress indicator. */
__be64 cmplt_ts_64; /* Completion Time Stamp. */
/* The following layout matches the new service layer format */
__be32 ibdc_32; /* Inbound Data Count (* 256) */
__be32 obdc_32; /* Outbound Data Count (* 256) */
__be64 rsvd_SLH_64; /* Reserved for hardware */
union { /* private data for driver */
u8 priv[8];
__be64 priv_64;
};
__be64 disp_ts_64; /* Dispatch TimeStamp */
} __attribute__((__packed__));
/* CRC polynomials for DDCB */
#define CRC16_POLYNOMIAL 0x1021
/*
* SHI: Software to Hardware Interlock
* This 1 byte field is written by software to interlock the
* movement of one queue entry to another with the hardware in the
* chip.
*/
#define DDCB_SHI_INTR 0x04 /* Bit 2 */
#define DDCB_SHI_PURGE 0x02 /* Bit 1 */
#define DDCB_SHI_NEXT 0x01 /* Bit 0 */
/*
* HSI: Hardware to Software interlock
* This 1 byte field is written by hardware to interlock the movement
* of one queue entry to another with the software in the chip.
*/
#define DDCB_HSI_COMPLETED 0x40 /* Bit 6 */
#define DDCB_HSI_FETCHED 0x04 /* Bit 2 */
/*
* Accessing HSI/SHI is done 32-bit wide
 * Normally 16-bit access would work too, but on some platforms a
 * 16-bit compare and swap operation is not supported. Therefore we
 * switch to 32-bit access such that those platforms will work too.
*
* iCRC HSI/SHI
*/
#define DDCB_INTR_BE32 cpu_to_be32(0x00000004)
#define DDCB_PURGE_BE32 cpu_to_be32(0x00000002)
#define DDCB_NEXT_BE32 cpu_to_be32(0x00000001)
#define DDCB_COMPLETED_BE32 cpu_to_be32(0x00004000)
#define DDCB_FETCHED_BE32 cpu_to_be32(0x00000400)
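/*
 * Usage sketch (illustrative, relying only on the definitions above):
 * since these constants are already byte-swapped via cpu_to_be32(), the
 * completion bit can be tested directly on the combined 32-bit interlock
 * word, independent of host endianness:
 *
 *   if (ddcb->icrc_hsi_shi_32 & DDCB_COMPLETED_BE32)
 *           ... hardware has completed this DDCB ...
 */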
/* Definitions of DDCB presets */
#define DDCB_PRESET_PRE 0x80
#define ICRC_LENGTH(n) ((n) + 8 + 8 + 8) /* used ASIV + hdr fields */
#define VCRC_LENGTH(n) ((n)) /* used ASV */
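/*
 * Worked example (sketch): for the new format with ATS field the iCRC
 * covers the used ASIV plus the header fields, i.e.
 * ICRC_LENGTH(ASIV_LENGTH_ATS) = 96 + 24 = 120 bytes, whereas the vCRC
 * covers only the used ASV, i.e. VCRC_LENGTH(ASV_LENGTH) = 64 bytes.
 */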
/*
* Genwqe Scatter Gather list
* Each element has up to 8 entries.
 * The chaining element is element 0 because of prefetching needs.
*/
/*
* 0b0110 Chained descriptor. The descriptor is describing the next
* descriptor list.
*/
#define SG_CHAINED (0x6)
/*
* 0b0010 First entry of a descriptor list. Start from a Buffer-Empty
* condition.
*/
#define SG_DATA (0x2)
/*
* 0b0000 Early terminator. This is the last entry on the list
 * regardless of the length indicated.
*/
#define SG_END_LIST (0x0)
/**
 * struct sg_entry - Scatter gather list entry
 * @target_addr: Either a dma addr of memory to work on or a
 *               dma addr of a subsequent sglist block.
* @len: Length of the data block.
* @flags: See above.
*
* Depending on the command the GenWQE card can use a scatter gather
* list to describe the memory it works on. Always 8 sg_entry's form
* a block.
*/
struct sg_entry {
__be64 target_addr;
__be32 len;
__be32 flags;
};
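/*
 * Illustrative sketch (not part of the original driver): terminating a
 * scatter gather block early with the flag values defined above. All
 * fields are stored big endian, matching struct sg_entry.
 */
static inline void genwqe_sg_mark_end_example(struct sg_entry *entry)
{
	entry->target_addr = cpu_to_be64(0);	 /* no data or follow-on block */
	entry->len = cpu_to_be32(0);
	entry->flags = cpu_to_be32(SG_END_LIST); /* early terminator */
}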
#endif /* __CARD_DDCB_H__ */

View file

@ -0,0 +1,508 @@
/**
* IBM Accelerator Family 'GenWQE'
*
* (C) Copyright IBM Corp. 2013
*
* Author: Frank Haverkamp <haver@linux.vnet.ibm.com>
* Author: Joerg-Stephan Vogt <jsvogt@de.ibm.com>
* Author: Michael Jung <mijung@gmx.net>
* Author: Michael Ruettger <michael@ibmra.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
/*
 * Debugfs interfaces for the GenWQE card. Helps to debug potential
* problems. Dump internal chip state for debugging and failure
* determination.
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/uaccess.h>
#include "card_base.h"
#include "card_ddcb.h"
#define GENWQE_DEBUGFS_RO(_name, _showfn) \
static int genwqe_debugfs_##_name##_open(struct inode *inode, \
struct file *file) \
{ \
return single_open(file, _showfn, inode->i_private); \
} \
static const struct file_operations genwqe_##_name##_fops = { \
.open = genwqe_debugfs_##_name##_open, \
.read = seq_read, \
.llseek = seq_lseek, \
.release = single_release, \
}
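/*
 * Expansion sketch (as used further below): GENWQE_DEBUGFS_RO(info,
 * genwqe_info_show) defines genwqe_debugfs_info_open() plus a read-only
 * struct file_operations called genwqe_info_fops, which is then handed to
 * debugfs_create_file("info", ...) in genwqe_init_debugfs().
 */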
static void dbg_uidn_show(struct seq_file *s, struct genwqe_reg *regs,
int entries)
{
unsigned int i;
u32 v_hi, v_lo;
for (i = 0; i < entries; i++) {
v_hi = (regs[i].val >> 32) & 0xffffffff;
v_lo = (regs[i].val) & 0xffffffff;
seq_printf(s, " 0x%08x 0x%08x 0x%08x 0x%08x EXT_ERR_REC\n",
regs[i].addr, regs[i].idx, v_hi, v_lo);
}
}
static int curr_dbg_uidn_show(struct seq_file *s, void *unused, int uid)
{
struct genwqe_dev *cd = s->private;
int entries;
struct genwqe_reg *regs;
entries = genwqe_ffdc_buff_size(cd, uid);
if (entries < 0)
return -EINVAL;
if (entries == 0)
return 0;
regs = kcalloc(entries, sizeof(*regs), GFP_KERNEL);
if (regs == NULL)
return -ENOMEM;
genwqe_stop_traps(cd); /* halt the traps while dumping data */
genwqe_ffdc_buff_read(cd, uid, regs, entries);
genwqe_start_traps(cd);
dbg_uidn_show(s, regs, entries);
kfree(regs);
return 0;
}
static int genwqe_curr_dbg_uid0_show(struct seq_file *s, void *unused)
{
return curr_dbg_uidn_show(s, unused, 0);
}
GENWQE_DEBUGFS_RO(curr_dbg_uid0, genwqe_curr_dbg_uid0_show);
static int genwqe_curr_dbg_uid1_show(struct seq_file *s, void *unused)
{
return curr_dbg_uidn_show(s, unused, 1);
}
GENWQE_DEBUGFS_RO(curr_dbg_uid1, genwqe_curr_dbg_uid1_show);
static int genwqe_curr_dbg_uid2_show(struct seq_file *s, void *unused)
{
return curr_dbg_uidn_show(s, unused, 2);
}
GENWQE_DEBUGFS_RO(curr_dbg_uid2, genwqe_curr_dbg_uid2_show);
static int prev_dbg_uidn_show(struct seq_file *s, void *unused, int uid)
{
struct genwqe_dev *cd = s->private;
dbg_uidn_show(s, cd->ffdc[uid].regs, cd->ffdc[uid].entries);
return 0;
}
static int genwqe_prev_dbg_uid0_show(struct seq_file *s, void *unused)
{
return prev_dbg_uidn_show(s, unused, 0);
}
GENWQE_DEBUGFS_RO(prev_dbg_uid0, genwqe_prev_dbg_uid0_show);
static int genwqe_prev_dbg_uid1_show(struct seq_file *s, void *unused)
{
return prev_dbg_uidn_show(s, unused, 1);
}
GENWQE_DEBUGFS_RO(prev_dbg_uid1, genwqe_prev_dbg_uid1_show);
static int genwqe_prev_dbg_uid2_show(struct seq_file *s, void *unused)
{
return prev_dbg_uidn_show(s, unused, 2);
}
GENWQE_DEBUGFS_RO(prev_dbg_uid2, genwqe_prev_dbg_uid2_show);
static int genwqe_curr_regs_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int i;
struct genwqe_reg *regs;
regs = kcalloc(GENWQE_FFDC_REGS, sizeof(*regs), GFP_KERNEL);
if (regs == NULL)
return -ENOMEM;
genwqe_stop_traps(cd);
genwqe_read_ffdc_regs(cd, regs, GENWQE_FFDC_REGS, 1);
genwqe_start_traps(cd);
for (i = 0; i < GENWQE_FFDC_REGS; i++) {
if (regs[i].addr == 0xffffffff)
break; /* invalid entries */
if (regs[i].val == 0x0ull)
continue; /* do not print 0x0 FIRs */
seq_printf(s, " 0x%08x 0x%016llx\n",
regs[i].addr, regs[i].val);
}
return 0;
}
GENWQE_DEBUGFS_RO(curr_regs, genwqe_curr_regs_show);
static int genwqe_prev_regs_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int i;
struct genwqe_reg *regs = cd->ffdc[GENWQE_DBG_REGS].regs;
if (regs == NULL)
return -EINVAL;
for (i = 0; i < GENWQE_FFDC_REGS; i++) {
if (regs[i].addr == 0xffffffff)
break; /* invalid entries */
if (regs[i].val == 0x0ull)
continue; /* do not print 0x0 FIRs */
seq_printf(s, " 0x%08x 0x%016llx\n",
regs[i].addr, regs[i].val);
}
return 0;
}
GENWQE_DEBUGFS_RO(prev_regs, genwqe_prev_regs_show);
static int genwqe_jtimer_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int vf_num;
u64 jtimer;
jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, 0);
seq_printf(s, " PF 0x%016llx %d msec\n", jtimer,
genwqe_pf_jobtimeout_msec);
for (vf_num = 0; vf_num < cd->num_vfs; vf_num++) {
jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT,
vf_num + 1);
seq_printf(s, " VF%-2d 0x%016llx %d msec\n", vf_num, jtimer,
cd->vf_jobtimeout_msec[vf_num]);
}
return 0;
}
GENWQE_DEBUGFS_RO(jtimer, genwqe_jtimer_show);
static int genwqe_queue_working_time_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int vf_num;
u64 t;
t = genwqe_read_vreg(cd, IO_SLC_VF_QUEUE_WTIME, 0);
seq_printf(s, " PF 0x%016llx\n", t);
for (vf_num = 0; vf_num < cd->num_vfs; vf_num++) {
t = genwqe_read_vreg(cd, IO_SLC_VF_QUEUE_WTIME, vf_num + 1);
seq_printf(s, " VF%-2d 0x%016llx\n", vf_num, t);
}
return 0;
}
GENWQE_DEBUGFS_RO(queue_working_time, genwqe_queue_working_time_show);
static int genwqe_ddcb_info_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int i;
struct ddcb_queue *queue;
struct ddcb *pddcb;
queue = &cd->queue;
seq_puts(s, "DDCB QUEUE:\n");
seq_printf(s, " ddcb_max: %d\n"
" ddcb_daddr: %016llx - %016llx\n"
" ddcb_vaddr: %016llx\n"
" ddcbs_in_flight: %u\n"
" ddcbs_max_in_flight: %u\n"
" ddcbs_completed: %u\n"
" return_on_busy: %u\n"
" wait_on_busy: %u\n"
" irqs_processed: %u\n",
queue->ddcb_max, (long long)queue->ddcb_daddr,
(long long)queue->ddcb_daddr +
(queue->ddcb_max * DDCB_LENGTH),
(long long)queue->ddcb_vaddr, queue->ddcbs_in_flight,
queue->ddcbs_max_in_flight, queue->ddcbs_completed,
queue->return_on_busy, queue->wait_on_busy,
cd->irqs_processed);
/* Hardware State */
seq_printf(s, " 0x%08x 0x%016llx IO_QUEUE_CONFIG\n"
" 0x%08x 0x%016llx IO_QUEUE_STATUS\n"
" 0x%08x 0x%016llx IO_QUEUE_SEGMENT\n"
" 0x%08x 0x%016llx IO_QUEUE_INITSQN\n"
" 0x%08x 0x%016llx IO_QUEUE_WRAP\n"
" 0x%08x 0x%016llx IO_QUEUE_OFFSET\n"
" 0x%08x 0x%016llx IO_QUEUE_WTIME\n"
" 0x%08x 0x%016llx IO_QUEUE_ERRCNTS\n"
" 0x%08x 0x%016llx IO_QUEUE_LRW\n",
queue->IO_QUEUE_CONFIG,
__genwqe_readq(cd, queue->IO_QUEUE_CONFIG),
queue->IO_QUEUE_STATUS,
__genwqe_readq(cd, queue->IO_QUEUE_STATUS),
queue->IO_QUEUE_SEGMENT,
__genwqe_readq(cd, queue->IO_QUEUE_SEGMENT),
queue->IO_QUEUE_INITSQN,
__genwqe_readq(cd, queue->IO_QUEUE_INITSQN),
queue->IO_QUEUE_WRAP,
__genwqe_readq(cd, queue->IO_QUEUE_WRAP),
queue->IO_QUEUE_OFFSET,
__genwqe_readq(cd, queue->IO_QUEUE_OFFSET),
queue->IO_QUEUE_WTIME,
__genwqe_readq(cd, queue->IO_QUEUE_WTIME),
queue->IO_QUEUE_ERRCNTS,
__genwqe_readq(cd, queue->IO_QUEUE_ERRCNTS),
queue->IO_QUEUE_LRW,
__genwqe_readq(cd, queue->IO_QUEUE_LRW));
seq_printf(s, "DDCB list (ddcb_act=%d/ddcb_next=%d):\n",
queue->ddcb_act, queue->ddcb_next);
pddcb = queue->ddcb_vaddr;
for (i = 0; i < queue->ddcb_max; i++) {
seq_printf(s, " %-3d: RETC=%03x SEQ=%04x HSI/SHI=%02x/%02x ",
i, be16_to_cpu(pddcb->retc_16),
be16_to_cpu(pddcb->seqnum_16),
pddcb->hsi, pddcb->shi);
seq_printf(s, "PRIV=%06llx CMD=%02x\n",
be64_to_cpu(pddcb->priv_64), pddcb->cmd);
pddcb++;
}
return 0;
}
GENWQE_DEBUGFS_RO(ddcb_info, genwqe_ddcb_info_show);
static int genwqe_info_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
u16 val16, type;
u64 app_id, slu_id, bitstream = -1;
struct pci_dev *pci_dev = cd->pci_dev;
slu_id = __genwqe_readq(cd, IO_SLU_UNITCFG);
app_id = __genwqe_readq(cd, IO_APP_UNITCFG);
if (genwqe_is_privileged(cd))
bitstream = __genwqe_readq(cd, IO_SLU_BITSTREAM);
val16 = (u16)(slu_id & 0x0fLLU);
type = (u16)((slu_id >> 20) & 0xffLLU);
seq_printf(s, "%s driver version: %s\n"
" Device Name/Type: %s %s CardIdx: %d\n"
" SLU/APP Config : 0x%016llx/0x%016llx\n"
" Build Date : %u/%x/%u\n"
" Base Clock : %u MHz\n"
" Arch/SVN Release: %u/%llx\n"
" Bitstream : %llx\n",
GENWQE_DEVNAME, DRV_VERSION, dev_name(&pci_dev->dev),
genwqe_is_privileged(cd) ?
"Physical" : "Virtual or no SR-IOV",
cd->card_idx, slu_id, app_id,
(u16)((slu_id >> 12) & 0x0fLLU), /* month */
(u16)((slu_id >> 4) & 0xffLLU), /* day */
(u16)((slu_id >> 16) & 0x0fLLU) + 2010, /* year */
genwqe_base_clock_frequency(cd),
(u16)((slu_id >> 32) & 0xffLLU), slu_id >> 40,
bitstream);
return 0;
}
GENWQE_DEBUGFS_RO(info, genwqe_info_show);
int genwqe_init_debugfs(struct genwqe_dev *cd)
{
struct dentry *root;
struct dentry *file;
int ret;
char card_name[64];
char name[64];
unsigned int i;
sprintf(card_name, "%s%d_card", GENWQE_DEVNAME, cd->card_idx);
root = debugfs_create_dir(card_name, cd->debugfs_genwqe);
if (!root) {
ret = -ENOMEM;
goto err0;
}
/* non privileged interfaces are done here */
file = debugfs_create_file("ddcb_info", S_IRUGO, root, cd,
&genwqe_ddcb_info_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("info", S_IRUGO, root, cd,
&genwqe_info_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_x64("err_inject", 0666, root, &cd->err_inject);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_u32("ddcb_software_timeout", 0666, root,
&cd->ddcb_software_timeout);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_u32("kill_timeout", 0666, root,
&cd->kill_timeout);
if (!file) {
ret = -ENOMEM;
goto err1;
}
/* privileged interfaces follow here */
if (!genwqe_is_privileged(cd)) {
cd->debugfs_root = root;
return 0;
}
file = debugfs_create_file("curr_regs", S_IRUGO, root, cd,
&genwqe_curr_regs_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("curr_dbg_uid0", S_IRUGO, root, cd,
&genwqe_curr_dbg_uid0_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("curr_dbg_uid1", S_IRUGO, root, cd,
&genwqe_curr_dbg_uid1_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("curr_dbg_uid2", S_IRUGO, root, cd,
&genwqe_curr_dbg_uid2_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_regs", S_IRUGO, root, cd,
&genwqe_prev_regs_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_dbg_uid0", S_IRUGO, root, cd,
&genwqe_prev_dbg_uid0_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_dbg_uid1", S_IRUGO, root, cd,
&genwqe_prev_dbg_uid1_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_dbg_uid2", S_IRUGO, root, cd,
&genwqe_prev_dbg_uid2_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
for (i = 0; i < GENWQE_MAX_VFS; i++) {
sprintf(name, "vf%u_jobtimeout_msec", i);
file = debugfs_create_u32(name, 0666, root,
&cd->vf_jobtimeout_msec[i]);
if (!file) {
ret = -ENOMEM;
goto err1;
}
}
file = debugfs_create_file("jobtimer", S_IRUGO, root, cd,
&genwqe_jtimer_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("queue_working_time", S_IRUGO, root, cd,
&genwqe_queue_working_time_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_u32("skip_recovery", 0666, root,
&cd->skip_recovery);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_u32("use_platform_recovery", 0666, root,
&cd->use_platform_recovery);
if (!file) {
ret = -ENOMEM;
goto err1;
}
cd->debugfs_root = root;
return 0;
err1:
debugfs_remove_recursive(root);
err0:
return ret;
}
void genqwe_exit_debugfs(struct genwqe_dev *cd)
{
debugfs_remove_recursive(cd->debugfs_root);
}

File diff suppressed because it is too large

View file

@ -0,0 +1,304 @@
/**
* IBM Accelerator Family 'GenWQE'
*
* (C) Copyright IBM Corp. 2013
*
* Author: Frank Haverkamp <haver@linux.vnet.ibm.com>
* Author: Joerg-Stephan Vogt <jsvogt@de.ibm.com>
* Author: Michael Jung <mijung@gmx.net>
* Author: Michael Ruettger <michael@ibmra.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
/*
* Sysfs interfaces for the GenWQE card. There are attributes to query
* the version of the bitstream as well as some for the driver. For
* debugging, please also see the debugfs interfaces of this driver.
*/
#include <linux/version.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/string.h>
#include <linux/fs.h>
#include <linux/sysfs.h>
#include <linux/ctype.h>
#include <linux/device.h>
#include "card_base.h"
#include "card_ddcb.h"
static const char * const genwqe_types[] = {
[GENWQE_TYPE_ALTERA_230] = "GenWQE4-230",
[GENWQE_TYPE_ALTERA_530] = "GenWQE4-530",
[GENWQE_TYPE_ALTERA_A4] = "GenWQE5-A4",
[GENWQE_TYPE_ALTERA_A7] = "GenWQE5-A7",
};
static ssize_t status_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct genwqe_dev *cd = dev_get_drvdata(dev);
const char *cs[GENWQE_CARD_STATE_MAX] = { "unused", "used", "error" };
return sprintf(buf, "%s\n", cs[cd->card_state]);
}
static DEVICE_ATTR_RO(status);
static ssize_t appid_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
char app_name[5];
struct genwqe_dev *cd = dev_get_drvdata(dev);
genwqe_read_app_id(cd, app_name, sizeof(app_name));
return sprintf(buf, "%s\n", app_name);
}
static DEVICE_ATTR_RO(appid);
static ssize_t version_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
u64 slu_id, app_id;
struct genwqe_dev *cd = dev_get_drvdata(dev);
slu_id = __genwqe_readq(cd, IO_SLU_UNITCFG);
app_id = __genwqe_readq(cd, IO_APP_UNITCFG);
return sprintf(buf, "%016llx.%016llx\n", slu_id, app_id);
}
static DEVICE_ATTR_RO(version);
static ssize_t type_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
u8 card_type;
struct genwqe_dev *cd = dev_get_drvdata(dev);
card_type = genwqe_card_type(cd);
return sprintf(buf, "%s\n", (card_type >= ARRAY_SIZE(genwqe_types)) ?
"invalid" : genwqe_types[card_type]);
}
static DEVICE_ATTR_RO(type);
static ssize_t tempsens_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
u64 tempsens;
struct genwqe_dev *cd = dev_get_drvdata(dev);
tempsens = __genwqe_readq(cd, IO_SLU_TEMPERATURE_SENSOR);
return sprintf(buf, "%016llx\n", tempsens);
}
static DEVICE_ATTR_RO(tempsens);
static ssize_t freerunning_timer_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
u64 t;
struct genwqe_dev *cd = dev_get_drvdata(dev);
t = __genwqe_readq(cd, IO_SLC_FREE_RUNNING_TIMER);
return sprintf(buf, "%016llx\n", t);
}
static DEVICE_ATTR_RO(freerunning_timer);
static ssize_t queue_working_time_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
u64 t;
struct genwqe_dev *cd = dev_get_drvdata(dev);
t = __genwqe_readq(cd, IO_SLC_QUEUE_WTIME);
return sprintf(buf, "%016llx\n", t);
}
static DEVICE_ATTR_RO(queue_working_time);
static ssize_t base_clock_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
u64 base_clock;
struct genwqe_dev *cd = dev_get_drvdata(dev);
base_clock = genwqe_base_clock_frequency(cd);
return sprintf(buf, "%lld\n", base_clock);
}
static DEVICE_ATTR_RO(base_clock);
/**
* curr_bitstream_show() - Show the current bitstream id
*
* There is a bug in some old versions of the CPLD which selects the
* bitstream, which causes the IO_SLU_BITSTREAM register to report
 * unreliable data in very rare cases. This makes this sysfs
 * entry unreliable up to the point where a new CPLD version is being used.
 *
 * Unfortunately there is no automatic way yet to query the CPLD
 * version, so you need to manually ensure via programming
* tools that you have a recent version of the CPLD software.
*
* The proposed circumvention is to use a special recovery bitstream
* on the backup partition (0) to identify problems while loading the
* image.
*/
static ssize_t curr_bitstream_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int curr_bitstream;
struct genwqe_dev *cd = dev_get_drvdata(dev);
curr_bitstream = __genwqe_readq(cd, IO_SLU_BITSTREAM) & 0x1;
return sprintf(buf, "%d\n", curr_bitstream);
}
static DEVICE_ATTR_RO(curr_bitstream);
/**
* next_bitstream_show() - Show the next activated bitstream
*
* IO_SLC_CFGREG_SOFTRESET: This register can only be accessed by the PF.
*/
static ssize_t next_bitstream_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int next_bitstream;
struct genwqe_dev *cd = dev_get_drvdata(dev);
switch ((cd->softreset & 0xc) >> 2) {
case 0x2:
next_bitstream = 0;
break;
case 0x3:
next_bitstream = 1;
break;
default:
next_bitstream = -1;
break; /* error */
}
return sprintf(buf, "%d\n", next_bitstream);
}
static ssize_t next_bitstream_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int partition;
struct genwqe_dev *cd = dev_get_drvdata(dev);
if (kstrtoint(buf, 0, &partition) < 0)
return -EINVAL;
switch (partition) {
case 0x0:
cd->softreset = 0x78;
break;
case 0x1:
cd->softreset = 0x7c;
break;
default:
return -EINVAL;
}
__genwqe_writeq(cd, IO_SLC_CFGREG_SOFTRESET, cd->softreset);
return count;
}
static DEVICE_ATTR_RW(next_bitstream);
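/*
 * Worked example (sketch, derived from the show/store pair above): writing
 * "0" sets cd->softreset = 0x78, whose bits [3:2] are 0b10, so a subsequent
 * read reports next_bitstream = 0; writing "1" sets 0x7c (bits [3:2] =
 * 0b11), which reads back as 1. Any other value written is rejected with
 * -EINVAL.
 */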
static ssize_t reload_bitstream_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int reload;
struct genwqe_dev *cd = dev_get_drvdata(dev);
if (kstrtoint(buf, 0, &reload) < 0)
return -EINVAL;
if (reload == 0x1) {
if (cd->card_state == GENWQE_CARD_UNUSED ||
cd->card_state == GENWQE_CARD_USED)
cd->card_state = GENWQE_CARD_RELOAD_BITSTREAM;
else
return -EIO;
} else {
return -EINVAL;
}
return count;
}
static DEVICE_ATTR_WO(reload_bitstream);
/*
* Create device_attribute structures / params: name, mode, show, store
* additional flag if valid in VF
*/
static struct attribute *genwqe_attributes[] = {
&dev_attr_tempsens.attr,
&dev_attr_next_bitstream.attr,
&dev_attr_curr_bitstream.attr,
&dev_attr_base_clock.attr,
&dev_attr_type.attr,
&dev_attr_version.attr,
&dev_attr_appid.attr,
&dev_attr_status.attr,
&dev_attr_freerunning_timer.attr,
&dev_attr_queue_working_time.attr,
&dev_attr_reload_bitstream.attr,
NULL,
};
static struct attribute *genwqe_normal_attributes[] = {
&dev_attr_type.attr,
&dev_attr_version.attr,
&dev_attr_appid.attr,
&dev_attr_status.attr,
&dev_attr_freerunning_timer.attr,
&dev_attr_queue_working_time.attr,
NULL,
};
/**
* genwqe_is_visible() - Determine if sysfs attribute should be visible or not
*
* VFs have restricted mmio capabilities, so not all sysfs entries
* are allowed in VFs.
*/
static umode_t genwqe_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
unsigned int j;
struct device *dev = container_of(kobj, struct device, kobj);
struct genwqe_dev *cd = dev_get_drvdata(dev);
umode_t mode = attr->mode;
if (genwqe_is_privileged(cd))
return mode;
for (j = 0; genwqe_normal_attributes[j] != NULL; j++)
if (genwqe_normal_attributes[j] == attr)
return mode;
return 0;
}
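/*
 * Example (sketch): dev_attr_tempsens and dev_attr_next_bitstream are
 * listed only in genwqe_attributes[], not in genwqe_normal_attributes[],
 * so on a non-privileged (VF) card genwqe_is_visible() returns 0 for them
 * and the corresponding sysfs files are hidden.
 */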
static struct attribute_group genwqe_attribute_group = {
.is_visible = genwqe_is_visible,
.attrs = genwqe_attributes,
};
const struct attribute_group *genwqe_attribute_groups[] = {
&genwqe_attribute_group,
NULL,
};

File diff suppressed because it is too large

View file

@ -0,0 +1,77 @@
#ifndef __GENWQE_DRIVER_H__
#define __GENWQE_DRIVER_H__
/**
* IBM Accelerator Family 'GenWQE'
*
* (C) Copyright IBM Corp. 2013
*
* Author: Frank Haverkamp <haver@linux.vnet.ibm.com>
* Author: Joerg-Stephan Vogt <jsvogt@de.ibm.com>
* Author: Michael Jung <mijung@gmx.net>
* Author: Michael Ruettger <michael@ibmra.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/cdev.h>
#include <linux/list.h>
#include <linux/kthread.h>
#include <linux/scatterlist.h>
#include <linux/iommu.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
#include <linux/printk.h>
#include <asm/byteorder.h>
#include <linux/genwqe/genwqe_card.h>
#define DRV_VERSION "2.0.25"
/*
 * Static minor number assignment, until we decide/implement
* something dynamic.
*/
#define GENWQE_MAX_MINOR 128 /* up to 128 possible genwqe devices */
/**
 * ddcb_requ_alloc() - Allocate a new DDCB execution request
 *
 * This data structure contains the user visible fields of the DDCB
* to be executed.
*
* Return: ptr to genwqe_ddcb_cmd data structure
*/
struct genwqe_ddcb_cmd *ddcb_requ_alloc(void);
/**
* ddcb_requ_free() - Free DDCB execution request.
* @req: ptr to genwqe_ddcb_cmd data structure.
*/
void ddcb_requ_free(struct genwqe_ddcb_cmd *req);
u32 genwqe_crc32(u8 *buff, size_t len, u32 init);
static inline void genwqe_hexdump(struct pci_dev *pci_dev,
const void *buff, unsigned int size)
{
char prefix[32];
scnprintf(prefix, sizeof(prefix), "%s %s: ",
GENWQE_DEVNAME, pci_name(pci_dev));
print_hex_dump_debug(prefix, DUMP_PREFIX_OFFSET, 16, 1, buff,
size, true);
}
#endif /* __GENWQE_DRIVER_H__ */

View file

@ -0,0 +1,6 @@
config GNSS_SHMEM_IF
bool "Samsung Shared memory Interface for GNSS"
depends on MCU_IPC
default n
---help---
Samsung Shared Memory Interface for GNSS.

View file

@ -0,0 +1,10 @@
# Makefile of gnss_if
# obj-$(CONFIG_GNSS_SHMEM_IF) += gnss_main.o gnss_io_device.o gnss_link_device_shmem.o \
# gnss_keplerctl_device.o gnss_utils.o
obj-$(CONFIG_GNSS_SHMEM_IF) += gnss_main.o gnss_io_device.o \
gnss_keplerctl_device.o \
gnss_link_device_shmem.o \
gnss_link_device_memory.o pmu-gnss.o \
gnss_utils.o

View file

@ -0,0 +1,825 @@
/*
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/irq.h>
#include <linux/gpio.h>
#include <linux/if_arp.h>
#include <linux/ip.h>
#include <linux/if_ether.h>
#include <linux/etherdevice.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include "gnss_prj.h"
#include "gnss_utils.h"
#define WAKE_TIME (HZ/2) /* 500 msec */
static void exynos_build_header(struct io_device *iod, struct link_device *ld,
u8 *buff, u16 cfg, u8 ctl, size_t count);
static inline void iodev_lock_wlock(struct io_device *iod)
{
if (iod->waketime > 0 && !wake_lock_active(&iod->wakelock))
wake_lock_timeout(&iod->wakelock, iod->waketime);
}
static inline int queue_skb_to_iod(struct sk_buff *skb, struct io_device *iod)
{
struct sk_buff_head *rxq = &iod->sk_rx_q;
skb_queue_tail(rxq, skb);
if (rxq->qlen > MAX_IOD_RXQ_LEN) {
gif_err("%s: %s application may be dead (rxq->qlen %d > %d)\n",
iod->name, iod->app ? iod->app : "corresponding",
rxq->qlen, MAX_IOD_RXQ_LEN);
skb_queue_purge(rxq);
return -ENOSPC;
} else {
gif_debug("%s: rxq->qlen = %d\n", iod->name, rxq->qlen);
wake_up(&iod->wq);
return 0;
}
}
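/*
 * Note (illustrative summary): if the reading application stops draining
 * its misc device, the rx queue grows beyond MAX_IOD_RXQ_LEN and is then
 * purged as a whole with -ENOSPC; otherwise the sleeping reader is woken
 * up via iod->wq.
 */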
static inline int rx_frame_with_link_header(struct sk_buff *skb)
{
struct exynos_link_header *hdr;
/* Remove EXYNOS link header */
hdr = (struct exynos_link_header *)skb->data;
skb_pull(skb, EXYNOS_HEADER_SIZE);
/* Print received data from GNSS */
/*
gnss_log_ipc_pkt(skb, RX);
*/
return queue_skb_to_iod(skb, skbpriv(skb)->iod);
}
static int rx_fmt_ipc(struct sk_buff *skb)
{
return rx_frame_with_link_header(skb);
}
static int rx_demux(struct link_device *ld, struct sk_buff *skb)
{
struct io_device *iod;
iod = ld->iod;
if (unlikely(!iod)) {
gif_err("%s: ERR! no iod!\n", ld->name);
return -ENODEV;
}
skbpriv(skb)->ld = ld;
skbpriv(skb)->iod = iod;
if (atomic_read(&iod->opened) <= 0) {
gif_err_limited("%s: ERR! %s is not opened\n", ld->name, iod->name);
return -ENODEV;
}
return rx_fmt_ipc(skb);
}
static int rx_frame_done(struct io_device *iod, struct link_device *ld,
struct sk_buff *skb)
{
/* Cut off the padding of the current frame */
skb_trim(skb, exynos_get_frame_len(skb->data));
gif_debug("%s->%s: frame length = %d\n", ld->name, iod->name, skb->len);
return rx_demux(ld, skb);
}
static int recv_frame_from_skb(struct io_device *iod, struct link_device *ld,
struct sk_buff *skb)
{
struct sk_buff *clone;
unsigned int rest;
unsigned int rcvd;
unsigned int tot; /* total length including padding */
int err = 0;
/*
** If there is only one EXYNOS frame in @skb, receive the EXYNOS frame and
** return immediately. In this case, the frame verification must already
** have been done at the link device.
*/
if (skbpriv(skb)->single_frame) {
err = rx_frame_done(iod, ld, skb);
if (err < 0)
goto exit;
return 0;
}
/*
** The routine from here is used only if there may be multiple EXYNOS
** frames in @skb.
*/
/* Check the config field of the first frame in @skb */
if (!exynos_start_valid(skb->data)) {
gif_err("%s->%s: ERR! INVALID config 0x%02X\n",
ld->name, iod->name, skb->data[0]);
err = -EINVAL;
goto exit;
}
/* Get the total length of the frame with a padding */
tot = exynos_get_total_len(skb->data);
/* Verify the total length of the first frame */
rest = skb->len;
if (unlikely(tot > rest)) {
gif_err("%s->%s: ERR! tot %d > skb->len %d)\n",
ld->name, iod->name, tot, rest);
err = -EINVAL;
goto exit;
}
/* If there is only one EXYNOS frame in @skb, */
if (likely(tot == rest)) {
/* Receive the EXYNOS frame and return immediately */
err = rx_frame_done(iod, ld, skb);
if (err < 0)
goto exit;
return 0;
}
/*
** This routine is used only if there are multiple EXYNOS frames in @skb.
*/
rcvd = 0;
while (rest > 0) {
clone = skb_clone(skb, GFP_ATOMIC);
if (unlikely(!clone)) {
gif_err("%s->%s: ERR! skb_clone fail\n",
ld->name, iod->name);
err = -ENOMEM;
goto exit;
}
/* Get the start of an EXYNOS frame */
skb_pull(clone, rcvd);
if (!exynos_start_valid(clone->data)) {
gif_err("%s->%s: ERR! INVALID config 0x%02X\n",
ld->name, iod->name, clone->data[0]);
dev_kfree_skb_any(clone);
err = -EINVAL;
goto exit;
}
/* Get the total length of the current frame with a padding */
tot = exynos_get_total_len(clone->data);
if (unlikely(tot > rest)) {
gif_err("%s->%s: ERR! dirty frame (tot %d > rest %d)\n",
ld->name, iod->name, tot, rest);
dev_kfree_skb_any(clone);
err = -EINVAL;
goto exit;
}
/* Cut off the padding of the current frame */
skb_trim(clone, exynos_get_frame_len(clone->data));
/* Demux the frame */
err = rx_demux(ld, clone);
if (err < 0) {
gif_err("%s->%s: ERR! rx_demux fail (err %d)\n",
ld->name, iod->name, err);
dev_kfree_skb_any(clone);
goto exit;
}
/* Calculate the start of the next frame */
rcvd += tot;
/* Calculate the rest size of data in @skb */
rest -= tot;
}
exit:
dev_kfree_skb_any(skb);
return err;
}
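/*
 * Multi-frame walk in short (illustrative summary of the loop above): each
 * iteration clones @skb, skb_pull()s the bytes already consumed ("rcvd"),
 * validates the start byte, trims the clone to the frame length without
 * padding and demuxes it; "rcvd" then advances and "rest" shrinks by the
 * padded total length until the whole skb is consumed.
 */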
/* called from link device when a packet arrives for this io device */
static int io_dev_recv_skb_from_link_dev(struct io_device *iod,
struct link_device *ld, struct sk_buff *skb)
{
int err;
iodev_lock_wlock(iod);
err = recv_frame_from_skb(iod, ld, skb);
if (err < 0) {
gif_err("%s->%s: ERR! recv_frame_from_skb fail(err %d)\n",
ld->name, iod->name, err);
}
return err;
}
/* called from link device when a packet arrives for this io device */
static int io_dev_recv_skb_single_from_link_dev(struct io_device *iod,
struct link_device *ld, struct sk_buff *skb)
{
int err;
iodev_lock_wlock(iod);
if (skbpriv(skb)->lnk_hdr)
skb_trim(skb, exynos_get_frame_len(skb->data));
err = rx_demux(ld, skb);
if (err < 0)
gif_err_limited("%s<-%s: ERR! rx_demux fail (err %d)\n",
iod->name, ld->name, err);
return err;
}
static void io_dev_gnss_state_changed(struct io_device *iod,
enum gnss_state state)
{
struct gnss_ctl *gc = iod->gc;
int old_state = gc->gnss_state;
if (old_state != state) {
gc->gnss_state = state;
gif_err("%s state changed (%s -> %s)\n", gc->name,
get_gnss_state_str(old_state), get_gnss_state_str(state));
}
if (state == STATE_OFFLINE || state == STATE_FAULT)
wake_up(&iod->wq);
}
static int misc_open(struct inode *inode, struct file *filp)
{
struct io_device *iod = to_io_device(filp->private_data);
struct link_device *ld;
int ref_cnt;
filp->private_data = (void *)iod;
ld = iod->ld;
ref_cnt = atomic_inc_return(&iod->opened);
gif_err("%s (opened %d) by %s\n", iod->name, ref_cnt, current->comm);
return 0;
}
static int misc_release(struct inode *inode, struct file *filp)
{
struct io_device *iod = (struct io_device *)filp->private_data;
int ref_cnt;
skb_queue_purge(&iod->sk_rx_q);
ref_cnt = atomic_dec_return(&iod->opened);
gif_err("%s (opened %d) by %s\n", iod->name, ref_cnt, current->comm);
return 0;
}
static unsigned int misc_poll(struct file *filp, struct poll_table_struct *wait)
{
struct io_device *iod = (struct io_device *)filp->private_data;
struct gnss_ctl *gc = iod->gc;
poll_wait(filp, &iod->wq, wait);
if (!skb_queue_empty(&iod->sk_rx_q) && gc->gnss_state != STATE_OFFLINE)
return POLLIN | POLLRDNORM;
if (gc->gnss_state == STATE_OFFLINE || gc->gnss_state == STATE_FAULT) {
gif_err("POLL wakeup in abnormal state!!!\n");
return POLLHUP;
} else {
return 0;
}
}
int valid_cmd_arg(unsigned int cmd, unsigned long arg)
{
switch(cmd) {
case GNSS_IOCTL_RESET:
case GNSS_IOCTL_LOAD_FIRMWARE:
case GNSS_IOCTL_REQ_FAULT_INFO:
case GNSS_IOCTL_REQ_BCMD:
return access_ok(VERIFY_READ, (const void *)arg, sizeof(arg));
case GNSS_IOCTL_READ_FIRMWARE:
return access_ok(VERIFY_WRITE, (const void *)arg, sizeof(arg));
default:
return true;
}
}
static int send_bcmd(struct io_device *iod, unsigned long arg)
{
struct link_device *ld = iod->ld;
struct kepler_bcmd_args bcmd_args;
int err = 0;
memset(&bcmd_args, 0, sizeof(struct kepler_bcmd_args));
err = copy_from_user(&bcmd_args, (const void __user *)arg,
sizeof(struct kepler_bcmd_args));
if (err) {
gif_err("copy_from_user fail(to get structure)\n");
err = -EFAULT;
goto bcmd_exit;
}
if (ld != NULL) {
gif_debug("flags : %d, cmd_id : %d, param1 : %d, param2 : %d(0x%x)\n",
bcmd_args.flags, bcmd_args.cmd_id, bcmd_args.param1,
bcmd_args.param2, bcmd_args.param2);
err = ld->req_bcmd(ld, bcmd_args.cmd_id, bcmd_args.flags,
bcmd_args.param1, bcmd_args.param2);
if (err == -EIO) { /* BCMD timeout */
gif_err("BCMD timeout cmd_id : %d\n", bcmd_args.cmd_id);
} else {
bcmd_args.ret_val = err;
err = copy_to_user((void __user *)arg,
(void *)&bcmd_args, sizeof(bcmd_args));
if (err) {
gif_err("copy_to_user fail(to send bcmd params)\n");
err = -EFAULT;
}
}
}
bcmd_exit:
return err;
}
static int gnss_load_firmware(struct io_device *iod,
struct kepler_firmware_args firmware_arg)
{
int err = 0;
gif_debug("Load Firmware - fw size : %d, fw_offset : %d\n",
firmware_arg.firmware_size, firmware_arg.offset);
if (firmware_arg.offset + firmware_arg.firmware_size > SZ_2M) {
gif_err("Unacceptable arguments!\n");
err = -EFAULT;
goto load_firmware_exit;
}
gif_debug("base addr = 0x%p\n", iod->ld->mdm_data->gnss_base);
err = copy_from_user(
(void *)iod->ld->mdm_data->gnss_base + firmware_arg.offset,
(void __user *)firmware_arg.firmware_bin,
firmware_arg.firmware_size);
if (err) {
gif_err("copy_from_user fail(to get fw binary)\n");
err = -EFAULT;
goto load_firmware_exit;
}
load_firmware_exit:
return err;
}
static int parsing_load_firmware(struct io_device *iod, unsigned long arg)
{
struct kepler_firmware_args firmware_arg;
int err = 0;
memset(&firmware_arg, 0, sizeof(struct kepler_firmware_args));
err = copy_from_user(&firmware_arg, (const void __user *)arg,
sizeof(struct kepler_firmware_args));
if (err) {
gif_err("copy_from_user fail(to get structure)\n");
err = -EFAULT;
return err;
}
return gnss_load_firmware(iod, firmware_arg);
}
static int gnss_read_firmware(struct io_device *iod,
struct kepler_firmware_args firmware_arg)
{
int err = 0;
gif_debug("Read Firmware - fw size : %d, fw_offset : %d\n",
firmware_arg.firmware_size, firmware_arg.offset);
if (firmware_arg.offset + firmware_arg.firmware_size > SZ_2M) {
gif_err("Unacceptable arguments!\n");
err = -EFAULT;
goto read_firmware_exit;
}
err = copy_to_user((void __user *)firmware_arg.firmware_bin,
(void *)iod->ld->mdm_data->gnss_base + firmware_arg.offset,
firmware_arg.firmware_size);
if (err) {
gif_err("copy_to_user fail(to get fw binary)\n");
err = -EFAULT;
}
read_firmware_exit:
return err;
}
static int parsing_read_firmware(struct io_device *iod, unsigned long arg)
{
struct kepler_firmware_args firmware_arg;
int err = 0;
memset(&firmware_arg, 0, sizeof(struct kepler_firmware_args));
err = copy_from_user(&firmware_arg, (const void __user *)arg,
sizeof(struct kepler_firmware_args));
if (err) {
gif_err("copy_from_user fail(to get structure)\n");
err = -EFAULT;
return err;
}
return gnss_read_firmware(iod, firmware_arg);
}
static int change_tcxo_mode(struct gnss_ctl *gc, unsigned long arg)
{
enum gnss_tcxo_mode tcxo_mode;
int ret;
ret = copy_from_user(&tcxo_mode, (const void __user *)arg,
sizeof(enum gnss_tcxo_mode));
if (ret) {
gif_err("copy_from_user fail(to get tcxo mode)\n");
ret = -EFAULT;
goto change_mode_exit;
}
if (gc->pmu_ops.change_tcxo_mode) {
ret = gc->pmu_ops.change_tcxo_mode(gc, tcxo_mode);
}
change_mode_exit:
return ret;
}
static long misc_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
struct io_device *iod = (struct io_device *)filp->private_data;
struct link_device *ld = iod->ld;
struct gnss_ctl *gc = iod->gc;
u32 *fault_info_regs;
int err = 0;
int size;
if (!valid_cmd_arg(cmd, arg))
return -ENOTTY;
switch (cmd) {
case GNSS_IOCTL_RESET:
if (gc->ops.gnss_hold_reset) {
gif_err("%s: GNSS_IOCTL_RESET\n", iod->name);
gc->ops.gnss_hold_reset(gc);
skb_queue_purge(&iod->sk_rx_q);
return 0;
}
gif_err("%s: !gc->ops.gnss_reset\n", iod->name);
return -EINVAL;
case GNSS_IOCTL_REQ_FAULT_INFO:
if (gc->ops.gnss_req_fault_info) {
gif_err("%s: GNSS_IOCTL_REQ_FAULT_INFO\n", iod->name);
size = gc->ops.gnss_req_fault_info(gc, &fault_info_regs);
gif_err("gnss_req_fault_info returned %d\n", size);
if (size < 0) {
gif_err("Can't get fault info from Kepler\n");
return -EFAULT;
}
if (size > 0) {
err = copy_to_user((void __user *)arg,
(void *)fault_info_regs, size);
kfree(fault_info_regs);
if (err) {
gif_err("copy_to_user fail(to copy fault info)\n");
return -EFAULT;
}
}
}
else {
gif_err("%s: !gc->ops.req_fault_info\n", iod->name);
return -EFAULT;
}
return size;
case GNSS_IOCTL_REQ_BCMD:
if (ld->req_bcmd) {
gif_debug("%s: GNSS_IOCTL_REQ_BCMD\n", iod->name);
return send_bcmd(iod, arg);
}
return 0;
case GNSS_IOCTL_LOAD_FIRMWARE:
gif_debug("%s: GNSS_IOCTL_LOAD_FIRMWARE\n", iod->name);
return parsing_load_firmware(iod, arg);
case GNSS_IOCTL_READ_FIRMWARE:
gif_debug("%s: GNSS_IOCTL_READ_FIRMWARE\n", iod->name);
return parsing_read_firmware(iod, arg);
case GNSS_IOCTL_CHANGE_SENSOR_GPIO:
gif_err("%s: GNSS_IOCTL_CHANGE_SENSOR_GPIO\n", iod->name);
if (gc->ops.change_sensor_gpio) {
return gc->ops.change_sensor_gpio(gc);
}
return -EFAULT;
case GNSS_IOCTL_CHANGE_TCXO_MODE:
gif_err("%s: GNSS_IOCTL_CHANGE_TCXO_MODE\n", iod->name);
return change_tcxo_mode(gc, arg);
case GNSS_IOCTL_SET_SENSOR_POWER:
if (gc->ops.set_sensor_power) {
gif_err("%s: GNSS_IOCTL_SENSOR_POWER\n", iod->name);
return gc->ops.set_sensor_power(gc, arg);
}
return -EFAULT;
default:
gif_err("%s: ERR! undefined cmd 0x%X\n", iod->name, cmd);
return -EINVAL;
}
return 0;
}
#ifdef CONFIG_COMPAT
static int parsing_load_firmware32(struct io_device *iod, unsigned long arg)
{
struct kepler_firmware_args firmware_arg;
struct kepler_firmware_args32 arg32;
int err = 0;
memset(&firmware_arg, 0, sizeof(firmware_arg));
err = copy_from_user(&arg32, (const void __user *)arg,
sizeof(struct kepler_firmware_args32));
if (err) {
gif_err("copy_from_user fail(to get structure)\n");
err = -EFAULT;
return err;
}
firmware_arg.firmware_size = arg32.firmware_size;
firmware_arg.offset = arg32.offset;
firmware_arg.firmware_bin = compat_ptr(arg32.firmware_bin);
return gnss_load_firmware(iod, firmware_arg);
}
static int parsing_read_firmware32(struct io_device *iod, unsigned long arg)
{
struct kepler_firmware_args firmware_arg;
struct kepler_firmware_args32 arg32;
int err = 0;
memset(&firmware_arg, 0, sizeof(firmware_arg));
err = copy_from_user(&arg32, (const void __user *)arg,
sizeof(struct kepler_firmware_args32));
if (err) {
gif_err("copy_from_user fail(to get structure)\n");
err = -EFAULT;
return err;
}
firmware_arg.firmware_size = arg32.firmware_size;
firmware_arg.offset = arg32.offset;
firmware_arg.firmware_bin = compat_ptr(arg32.firmware_bin);
return gnss_read_firmware(iod, firmware_arg);
}
static long misc_compat_ioctl(struct file *filp,
unsigned int cmd, unsigned long arg)
{
struct io_device *iod = (struct io_device *)filp->private_data;
unsigned long realarg = (unsigned long)compat_ptr(arg);
if (!valid_cmd_arg(cmd, realarg))
return -ENOTTY;
switch (cmd) {
case GNSS_IOCTL_LOAD_FIRMWARE:
gif_debug("%s: GNSS_IOCTL_LOAD_FIRMWARE (32-bit)\n", iod->name);
return parsing_load_firmware32(iod, realarg);
case GNSS_IOCTL_READ_FIRMWARE:
gif_debug("%s: GNSS_IOCTL_READ_FIRMWARE (32-bit)\n", iod->name);
return parsing_read_firmware32(iod, realarg);
}
return misc_ioctl(filp, cmd, realarg);
}
#endif
static ssize_t misc_write(struct file *filp, const char __user *data,
size_t count, loff_t *fpos)
{
struct io_device *iod = (struct io_device *)filp->private_data;
struct link_device *ld = iod->ld;
struct sk_buff *skb;
u8 *buff;
int ret;
size_t headroom;
size_t tailroom;
size_t tx_bytes;
u16 fr_cfg;
fr_cfg = EXYNOS_SINGLE_MASK << 8;
headroom = EXYNOS_HEADER_SIZE;
tailroom = exynos_calc_padding_size(EXYNOS_HEADER_SIZE + count);
tx_bytes = headroom + count + tailroom;
skb = alloc_skb(tx_bytes, GFP_KERNEL);
if (!skb) {
gif_debug("%s: ERR! alloc_skb fail (tx_bytes:%ld)\n",
iod->name, tx_bytes);
return -ENOMEM;
}
/* Store the IO device, the link device, etc. */
skbpriv(skb)->iod = iod;
skbpriv(skb)->ld = ld;
skbpriv(skb)->lnk_hdr = iod->link_header;
skbpriv(skb)->exynos_ch = 0; /* Single channel should be 0. */
/* Build EXYNOS link header */
if (fr_cfg) {
buff = skb_put(skb, headroom);
exynos_build_header(iod, ld, buff, fr_cfg, 0, count);
}
/* Store IPC message */
buff = skb_put(skb, count);
if (copy_from_user(buff, data, count)) {
gif_err("%s->%s: ERR! copy_from_user fail (count %ld)\n",
iod->name, ld->name, count);
dev_kfree_skb_any(skb);
return -EFAULT;
}
/* Apply padding */
if (tailroom)
skb_put(skb, tailroom);
/* send data with sk_buff, link device will put sk_buff
* into the specific sk_buff_q and run work-q to send data
*/
skbpriv(skb)->iod = iod;
skbpriv(skb)->ld = ld;
ret = ld->send(ld, iod, skb);
if (ret < 0) {
gif_err("%s->%s: ERR! ld->send fail (err %d, tx_bytes %ld)\n",
iod->name, ld->name, ret, tx_bytes);
return ret;
}
if (ret != tx_bytes) {
gif_debug("%s->%s: WARNING! ret %d != tx_bytes %ld (count %ld)\n",
iod->name, ld->name, ret, tx_bytes, count);
}
return count;
}
static ssize_t misc_read(struct file *filp, char *buf, size_t count,
loff_t *fpos)
{
struct io_device *iod = (struct io_device *)filp->private_data;
struct sk_buff_head *rxq = &iod->sk_rx_q;
struct sk_buff *skb;
int copied = 0;
if (skb_queue_empty(rxq)) {
gif_debug("%s: ERR! no data in rxq\n", iod->name);
return 0;
}
skb = skb_dequeue(rxq);
if (unlikely(!skb)) {
gif_debug("%s: No data in RXQ\n", iod->name);
return 0;
}
copied = skb->len > count ? count : skb->len;
if (copy_to_user(buf, skb->data, copied)) {
gif_err("%s: ERR! copy_to_user fail\n", iod->name);
dev_kfree_skb_any(skb);
return -EFAULT;
}
gif_debug("%s: data:%d copied:%d qlen:%d\n",
iod->name, skb->len, copied, rxq->qlen);
if (skb->len > count) {
skb_pull(skb, count);
skb_queue_head(rxq, skb);
} else {
dev_kfree_skb_any(skb);
}
return copied;
}
static const struct file_operations misc_io_fops = {
.owner = THIS_MODULE,
.open = misc_open,
.release = misc_release,
.poll = misc_poll,
.unlocked_ioctl = misc_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = misc_compat_ioctl,
#endif
.write = misc_write,
.read = misc_read,
};
static void exynos_build_header(struct io_device *iod, struct link_device *ld,
u8 *buff, u16 cfg, u8 ctl, size_t count)
{
u16 *exynos_header = (u16 *)(buff + EXYNOS_START_OFFSET);
u16 *frame_seq = (u16 *)(buff + EXYNOS_FRAME_SEQ_OFFSET);
u16 *frag_cfg = (u16 *)(buff + EXYNOS_FRAG_CONFIG_OFFSET);
u16 *size = (u16 *)(buff + EXYNOS_LEN_OFFSET);
struct exynos_seq_num *seq_num = &(iod->seq_num);
*exynos_header = EXYNOS_START_MASK;
*frame_seq = ++seq_num->frame_cnt;
*frag_cfg = cfg;
*size = (u16)(EXYNOS_HEADER_SIZE + count);
buff[EXYNOS_CH_ID_OFFSET] = 0; /* single channel, should be 0. */
if (cfg == EXYNOS_SINGLE_MASK)
*frag_cfg = cfg;
buff[EXYNOS_CH_SEQ_OFFSET] = ++seq_num->ch_cnt[0];
}
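/*
 * Summary sketch (illustrative): exynos_build_header() fills the fixed
 * EXYNOS_HEADER_SIZE bytes in front of the payload with the start mask, a
 * per-device frame sequence number, the fragment config (always
 * EXYNOS_SINGLE_MASK << 8 when called from misc_write()), the total length
 * EXYNOS_HEADER_SIZE + count, channel id 0 and a per-channel sequence
 * counter. misc_write() then copies the payload behind the header and
 * appends exynos_calc_padding_size(EXYNOS_HEADER_SIZE + count) padding
 * bytes.
 */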
int exynos_init_gnss_io_device(struct io_device *iod)
{
int ret = 0;
/* Matt - GNSS uses link headers; placeholder code */
iod->link_header = true;
/* Get gnss state from gnss control device */
iod->gnss_state_changed = io_dev_gnss_state_changed;
/* Get data from link device */
gif_debug("%s: init\n", iod->name);
iod->recv_skb = io_dev_recv_skb_from_link_dev;
iod->recv_skb_single = io_dev_recv_skb_single_from_link_dev;
/* Register misc device */
init_waitqueue_head(&iod->wq);
skb_queue_head_init(&iod->sk_rx_q);
iod->miscdev.minor = MISC_DYNAMIC_MINOR;
iod->miscdev.name = iod->name;
iod->miscdev.fops = &misc_io_fops;
iod->waketime = WAKE_TIME;
wake_lock_init(&iod->wakelock, WAKE_LOCK_SUSPEND, iod->name);
ret = misc_register(&iod->miscdev);
if (ret)
gif_debug("%s: ERR! misc_register failed\n", iod->name);
return ret;
}

View file

@ -0,0 +1,486 @@
/*
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/gpio.h>
#include <linux/delay.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
#include <linux/clk-private.h>
#include <linux/mcu_ipc.h>
#include <asm/cacheflush.h>
#include "gnss_prj.h"
#include "gnss_link_device_shmem.h"
#include "pmu-gnss.h"
static irqreturn_t kepler_active_isr(int irq, void *arg)
{
struct gnss_ctl *gc = (struct gnss_ctl *)arg;
struct io_device *iod = gc->iod;
gif_err("ACTIVE Interrupt occurred!\n");
if (!wake_lock_active(&gc->gc_fault_wake_lock))
wake_lock_timeout(&gc->gc_fault_wake_lock, HZ);
gc->iod->gnss_state_changed(gc->iod, STATE_FAULT);
wake_up(&iod->wq);
gc->pmu_ops.clear_int(gc, GNSS_INT_ACTIVE_CLEAR);
return IRQ_HANDLED;
}
static irqreturn_t kepler_wdt_isr(int irq, void *arg)
{
struct gnss_ctl *gc = (struct gnss_ctl *)arg;
struct io_device *iod = gc->iod;
gif_err("WDT Interrupt occurred!\n");
if (!wake_lock_active(&gc->gc_fault_wake_lock))
wake_lock_timeout(&gc->gc_fault_wake_lock, HZ);
gc->iod->gnss_state_changed(gc->iod, STATE_FAULT);
wake_up(&iod->wq);
gc->pmu_ops.clear_int(gc, GNSS_INT_WDT_RESET_CLEAR);
return IRQ_HANDLED;
}
static irqreturn_t kepler_wakelock_isr(int irq, void *arg)
{
struct gnss_ctl *gc = (struct gnss_ctl *)arg;
struct gnss_mbox *mbx = gc->gnss_data->mbx;
struct link_device *ld = gc->iod->ld;
struct shmem_link_device *shmd = to_shmem_link_device(ld);
/*
u32 rx_tail, rx_head, tx_tail, tx_head, gnss_ipc_msg, ap_ipc_msg;
*/
#ifdef USE_SIMPLE_WAKE_LOCK
gif_err("Unexpected interrupt occurred(%s)!!!!\n", __func__);
return IRQ_HANDLED;
#endif
/* This is for debugging
tx_head = get_txq_head(shmd);
tx_tail = get_txq_tail(shmd);
rx_head = get_rxq_head(shmd);
rx_tail = get_rxq_tail(shmd);
gnss_ipc_msg = mbox_get_value(MCU_GNSS, shmd->irq_gnss2ap_ipc_msg);
ap_ipc_msg = read_int2gnss(shmd);
gif_err("RX_H[0x%x], RX_T[0x%x], TX_H[0x%x], TX_T[0x%x],\
AP_IPC[0x%x], GNSS_IPC[0x%x]\n",
rx_head, rx_tail, tx_head, tx_tail, ap_ipc_msg, gnss_ipc_msg);
*/
/* Clear wake_lock */
if (wake_lock_active(&shmd->wlock))
wake_unlock(&shmd->wlock);
gif_debug("Wake Lock ISR!!!!\n");
gif_err(">>>>DBUS_SW_WAKE_INT\n");
/* 1. Set wake-lock-timeout(). */
if (!wake_lock_active(&gc->gc_wake_lock))
wake_lock_timeout(&gc->gc_wake_lock, HZ); /* 1 sec */
/* 2. Disable DBUS_SW_WAKE_INT interrupts. */
disable_irq_nosync(gc->wake_lock_irq);
/* 3. Write 0x1 to MBOX_reg[6]. */
/* MBOX_req[6] is WAKE_LOCK */
if (gnss_read_reg(shmd->reg[GNSS_REG_WAKE_LOCK]) == 0X1) {
gif_err("@@ reg_wake_lock is already 0x1!!!!!!\n");
return IRQ_HANDLED;
} else {
gnss_write_reg(shmd->reg[GNSS_REG_WAKE_LOCK], 0x1);
}
/* 4. Send interrupt MBOX1[3]. */
/* Interrupt MBOX1[3] is RSP_WAKE_LOCK_SET */
mbox_set_interrupt(MCU_GNSS, mbx->int_ap2gnss_ack_wake_set);
return IRQ_HANDLED;
}
#ifdef USE_SIMPLE_WAKE_LOCK
static void mbox_kepler_simple_lock(void *arg)
{
struct gnss_ctl *gc = (struct gnss_ctl *)arg;
struct gnss_mbox *mbx = gc->gnss_data->mbx;
gif_debug("[GNSS] WAKE interrupt(Mbox15) occurred\n");
mbox_set_interrupt(MCU_GNSS, mbx->int_ap2gnss_ack_wake_set);
gc->pmu_ops.clear_int(gc, GNSS_INT_WAKEUP_CLEAR);
}
#endif
static void mbox_kepler_wake_clr(void *arg)
{
struct gnss_ctl *gc = (struct gnss_ctl *)arg;
struct gnss_mbox *mbx = gc->gnss_data->mbx;
/*
struct link_device *ld = gc->iod->ld;
struct shmem_link_device *shmd = to_shmem_link_device(ld);
u32 rx_tail, rx_head, tx_tail, tx_head, gnss_ipc_msg, ap_ipc_msg;
*/
#ifdef USE_SIMPLE_WAKE_LOCK
gif_err("Unexpected interrupt occurred(%s)!!!!\n", __func__);
return ;
#endif
/*
tx_head = get_txq_head(shmd);
tx_tail = get_txq_tail(shmd);
rx_head = get_rxq_head(shmd);
rx_tail = get_rxq_tail(shmd);
gnss_ipc_msg = mbox_get_value(MCU_GNSS, shmd->irq_gnss2ap_ipc_msg);
ap_ipc_msg = read_int2gnss(shmd);
gif_eff("RX_H[0x%x], RX_T[0x%x], TX_H[0x%x], TX_T[0x%x], AP_IPC[0x%x], GNSS_IPC[0x%x]\n",
rx_head, rx_tail, tx_head, tx_tail, ap_ipc_msg, gnss_ipc_msg);
*/
gc->pmu_ops.clear_int(gc, GNSS_INT_WAKEUP_CLEAR);
gif_debug("Wake Lock Clear!!!!\n");
gif_err(">>>>DBUS_SW_WAKE_INT CLEAR\n");
wake_unlock(&gc->gc_wake_lock);
enable_irq(gc->wake_lock_irq);
if (gnss_read_reg(gc->gnss_data->reg[GNSS_REG_WAKE_LOCK]) == 0X0) {
gif_err("@@ reg_wake_lock is already 0x0!!!!!!\n");
return ;
}
gnss_write_reg(gc->gnss_data->reg[GNSS_REG_WAKE_LOCK], 0x0);
mbox_set_interrupt(MCU_GNSS, mbx->int_ap2gnss_ack_wake_clr);
}
static void mbox_kepler_rsp_fault_info(void *arg)
{
struct gnss_ctl *gc = (struct gnss_ctl *)arg;
complete(&gc->fault_cmpl);
}
static int kepler_hold_reset(struct gnss_ctl *gc)
{
gif_err("%s\n", __func__);
if (gc->gnss_state == STATE_OFFLINE) {
gif_err("Current Kerpler status is OFFLINE, so it will be ignored\n");
return 0;
}
gc->iod->gnss_state_changed(gc->iod, STATE_HOLD_RESET);
if (gc->ccore_qch_lh_gnss) {
clk_disable_unprepare(gc->ccore_qch_lh_gnss);
gif_err("Disabled GNSS Qch\n");
}
gc->pmu_ops.hold_reset(gc);
mbox_sw_reset(MCU_GNSS);
return 0;
}
static int kepler_release_reset(struct gnss_ctl *gc)
{
int ret;
gif_err("%s\n", __func__);
gc->iod->gnss_state_changed(gc->iod, STATE_ONLINE);
gc->pmu_ops.release_reset(gc);
if (gc->ccore_qch_lh_gnss) {
ret = clk_prepare_enable(gc->ccore_qch_lh_gnss);
if (!ret)
gif_err("GNSS Qch enabled\n");
else
gif_err("Could not enable Qch (%d)\n", ret);
}
return 0;
}
static int kepler_power_on(struct gnss_ctl *gc)
{
int ret;
gif_err("%s\n", __func__);
gc->iod->gnss_state_changed(gc->iod, STATE_ONLINE);
gc->pmu_ops.power_on(gc, GNSS_POWER_ON);
if (gc->ccore_qch_lh_gnss) {
ret = clk_prepare_enable(gc->ccore_qch_lh_gnss);
if (!ret)
gif_err("GNSS Qch enabled\n");
else
gif_err("Could not enable Qch (%d)\n", ret);
}
return 0;
}
static int kepler_req_fault_info(struct gnss_ctl *gc, u32 **fault_info_regs)
{
int ret;
struct gnss_data *pdata;
struct gnss_mbox *mbx;
unsigned long timeout = msecs_to_jiffies(1000);
u32 size = 0;
if (!fault_info_regs) {
gif_err("Cannot access fault_info_regs!\n");
return -EINVAL;
}
if (!gc) {
gif_err("No gnss_ctl info!\n");
return -ENODEV;
}
pdata = gc->gnss_data;
mbx = pdata->mbx;
mbox_set_interrupt(MCU_GNSS, mbx->int_ap2gnss_req_fault_info);
ret = wait_for_completion_timeout(&gc->fault_cmpl, timeout);
if (ret == 0) {
gif_err("Req Fault Info TIMEOUT!\n");
return -EIO;
}
switch (pdata->fault_info.device) {
case GNSS_IPC_MBOX:
size = pdata->fault_info.size * sizeof(u32);
if (size == 0) {
gif_err("No fault info to read.\n");
}
else {
*fault_info_regs = kmalloc(size, GFP_KERNEL);
if (*fault_info_regs) {
int i;
for (i = 0; i < pdata->fault_info.size; i++) {
(*fault_info_regs)[i] = mbox_get_value(MCU_GNSS,
pdata->fault_info.value.index + i);
}
}
else {
gif_err("Could not allocate fault info\n");
return -ENOMEM;
}
}
break;
case GNSS_IPC_SHMEM:
size = mbox_get_value(MCU_GNSS, mbx->reg_bcmd_ctrl[CTRL3]);
if (size > pdata->fault_info.size) {
gif_err("Requested %d bytes when a max of %d bytes is allowed.\n",
size, pdata->fault_info.size);
return -EINVAL;
}
if (size == 0) {
gif_err("No fault info to read.\n");
}
else {
(*fault_info_regs) = kmalloc(size, GFP_KERNEL);
if (*fault_info_regs) {
memcpy((*fault_info_regs), pdata->fault_info.value.addr,
size);
}
else {
gif_err("Could not allocate fault info\n");
return -ENOMEM;
}
}
break;
default:
gif_err("Don't know where to dump fault info.\n");
}
wake_unlock(&gc->gc_fault_wake_lock);
return size;
}
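/*
 * Summary of kepler_req_fault_info() above: the AP raises
 * int_ap2gnss_req_fault_info and waits up to 1000 ms for the GNSS to
 * answer via mbox_kepler_rsp_fault_info(), which completes
 * gc->fault_cmpl.  The dump is then read either from mailbox registers
 * (GNSS_IPC_MBOX) or copied out of shared memory (GNSS_IPC_SHMEM).  On
 * success the size of the dump is returned (0 if there was nothing to
 * read); the caller owns the kmalloc'ed *fault_info_regs buffer and is
 * expected to kfree() it.
 */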
static int kepler_suspend(struct gnss_ctl *gc)
{
return 0;
}
static int kepler_resume(struct gnss_ctl *gc)
{
#ifdef USE_SIMPLE_WAKE_LOCK
gc->pmu_ops.clear_int(gc, GNSS_INT_WAKEUP_CLEAR);
#endif
return 0;
}
static int kepler_change_gpio(struct gnss_ctl *gc)
{
int status = 0;
gif_err("Change GPIO for sensor\n");
if (!IS_ERR(gc->gnss_sensor_gpio)) {
status = pinctrl_select_state(gc->gnss_gpio, gc->gnss_sensor_gpio);
if (status) {
gif_err("Can't change sensor GPIO(%d)\n", status);
}
} else {
gif_err("gnss_sensor_gpio is not valid(0x%p)\n", gc->gnss_sensor_gpio);
status = -EIO;
}
return status;
}
static int kepler_set_sensor_power(struct gnss_ctl *gc, unsigned long arg)
{
int ret;
int reg_en = *((enum sensor_power *)arg);
if (reg_en == 0) {
ret = regulator_disable(gc->vdd_sensor_reg);
if (ret != 0)
gif_err("Failed : Disable sensor power.\n");
else
gif_err("Success : Disable sensor power.\n");
} else {
ret = regulator_enable(gc->vdd_sensor_reg);
if (ret != 0)
gif_err("Failed : Enable sensor power.\n");
else
gif_err("Success : Enable sensor power.\n");
}
return ret;
}
static void gnss_get_ops(struct gnss_ctl *gc)
{
gc->ops.gnss_hold_reset = kepler_hold_reset;
gc->ops.gnss_release_reset = kepler_release_reset;
gc->ops.gnss_power_on = kepler_power_on;
gc->ops.gnss_req_fault_info = kepler_req_fault_info;
gc->ops.suspend_gnss_ctrl = kepler_suspend;
gc->ops.resume_gnss_ctrl = kepler_resume;
gc->ops.change_sensor_gpio = kepler_change_gpio;
gc->ops.set_sensor_power = kepler_set_sensor_power;
}
static void gnss_get_pmu_ops(struct gnss_ctl *gc)
{
gc->pmu_ops.hold_reset = gnss_pmu_hold_reset;
gc->pmu_ops.release_reset = gnss_pmu_release_reset;
gc->pmu_ops.power_on = gnss_pmu_power_on;
gc->pmu_ops.clear_int = gnss_pmu_clear_interrupt;
gc->pmu_ops.init_conf = gnss_pmu_init_conf;
gc->pmu_ops.change_tcxo_mode = gnss_change_tcxo_mode;
}
int init_gnssctl_device(struct gnss_ctl *gc, struct gnss_data *pdata)
{
int ret = 0, irq = 0;
struct platform_device *pdev = NULL;
struct gnss_mbox *mbox = gc->gnss_data->mbx;
gif_err("[GNSS IF] Initializing GNSS Control\n");
gnss_get_ops(gc);
gnss_get_pmu_ops(gc);
dev_set_drvdata(gc->dev, gc);
wake_lock_init(&gc->gc_fault_wake_lock,
WAKE_LOCK_SUSPEND, "gnss_fault_wake_lock");
wake_lock_init(&gc->gc_wake_lock,
WAKE_LOCK_SUSPEND, "gnss_wake_lock");
init_completion(&gc->fault_cmpl);
pdev = to_platform_device(gc->dev);
/* GNSS_ACTIVE */
irq = platform_get_irq(pdev, 0);
ret = devm_request_irq(&pdev->dev, irq, kepler_active_isr, 0,
"kepler_active_handler", gc);
if (ret) {
gif_err("Request irq fail - kepler_active_isr(%d)\n", ret);
return ret;
}
enable_irq_wake(irq);
/* GNSS_WATCHDOG */
irq = platform_get_irq(pdev, 1);
ret = devm_request_irq(&pdev->dev, irq, kepler_wdt_isr, 0,
"kepler_wdt_handler", gc);
if (ret) {
gif_err("Request irq fail - kepler_wdt_isr(%d)\n", ret);
return ret;
}
enable_irq_wake(irq);
/* GNSS_WAKEUP */
gc->wake_lock_irq = platform_get_irq(pdev, 2);
ret = devm_request_irq(&pdev->dev, gc->wake_lock_irq, kepler_wakelock_isr,
0, "kepler_wakelock_handler", gc);
if (ret) {
gif_err("Request irq fail - kepler_wakelock_isr(%d)\n", ret);
return ret;
}
enable_irq_wake(gc->wake_lock_irq);
#ifdef USE_SIMPLE_WAKE_LOCK
disable_irq(gc->wake_lock_irq);
gif_err("Using simple lock sequence!!!\n");
mbox_request_irq(MCU_GNSS, 15, mbox_kepler_simple_lock, (void *)gc);
#endif
/* Initializing Shared Memory for GNSS */
gif_err("Initializing shared memory for GNSS.\n");
gc->pmu_ops.init_conf(gc);
gc->gnss_state = STATE_OFFLINE;
gif_info("[GNSS IF] Register mailbox for GNSS2AP fault handling\n");
mbox_request_irq(MCU_GNSS, mbox->irq_gnss2ap_req_wake_clr,
mbox_kepler_wake_clr, (void *)gc);
mbox_request_irq(MCU_GNSS, mbox->irq_gnss2ap_rsp_fault_info,
mbox_kepler_rsp_fault_info, (void *)gc);
gc->gnss_gpio = devm_pinctrl_get(&pdev->dev);
if (IS_ERR(gc->gnss_gpio)) {
gif_err("Can't get gpio for GNSS sensor.\n");
} else {
gc->gnss_sensor_gpio = pinctrl_lookup_state(gc->gnss_gpio,
"gnss_sensor");
}
gc->vdd_sensor_reg = devm_regulator_get(gc->dev, "vdd_sensor_2p85");
if (IS_ERR(gc->vdd_sensor_reg)) {
gif_err("Cannot get the regulator \"vdd_sensor_2p85\"\n");
}
gif_err("---\n");
return ret;
}

View file

@ -0,0 +1,415 @@
/*
* Copyright (C) 2011 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/irq.h>
#include <linux/gpio.h>
#include <linux/time.h>
#include <linux/interrupt.h>
#include <linux/timer.h>
#include <linux/wakelock.h>
#include <linux/delay.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
#include <linux/if_arp.h>
#include <linux/platform_device.h>
#include <linux/kallsyms.h>
#include <linux/suspend.h>
#include "gnss_prj.h"
#include "gnss_link_device_memory.h"
void gnss_msq_reset(struct mem_status_queue *msq)
{
unsigned long flags;
spin_lock_irqsave(&msq->lock, flags);
msq->out = msq->in;
spin_unlock_irqrestore(&msq->lock, flags);
}
/**
* gnss_msq_get_free_slot
* @msq: pointer to an instance of mem_status_queue structure
*
* Always succeeds; if the "msq" is full, the oldest slot is dropped to make room.
*/
struct mem_status *gnss_msq_get_free_slot(struct mem_status_queue *msq)
{
int qsize = MAX_MEM_LOG_CNT;
int in;
int out;
unsigned long flags;
struct mem_status *stat;
spin_lock_irqsave(&msq->lock, flags);
in = msq->in;
out = msq->out;
if (circ_get_space(qsize, in, out) < 1) {
/* Make the oldest slot empty */
out++;
msq->out = (out == qsize) ? 0 : out;
}
/* Get a free slot */
stat = &msq->stat[in];
/* Make it as "data" slot */
in++;
msq->in = (in == qsize) ? 0 : in;
spin_unlock_irqrestore(&msq->lock, flags);
return stat;
}
struct mem_status *gnss_msq_get_data_slot(struct mem_status_queue *msq)
{
int qsize = MAX_MEM_LOG_CNT;
int in;
int out;
unsigned long flags;
struct mem_status *stat;
spin_lock_irqsave(&msq->lock, flags);
in = msq->in;
out = msq->out;
if (in == out) {
stat = NULL;
goto exit;
}
/* Get a data slot */
stat = &msq->stat[out];
/* Make it "free" slot */
out++;
msq->out = (out == qsize) ? 0 : out;
exit:
spin_unlock_irqrestore(&msq->lock, flags);
return stat;
}
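/*
 * Example (illustrative values): with MAX_MEM_LOG_CNT slots the two
 * helpers above implement a spinlock-protected ring of status snapshots.
 * If in == out the ring is empty and gnss_msq_get_data_slot() returns
 * NULL; once the producer has filled qsize - 1 slots,
 * gnss_msq_get_free_slot() silently advances 'out' so the oldest
 * snapshot is overwritten instead of failing.
 */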
/**
* gnss_memcpy16_from_io
* @to: pointer to "real" memory
* @from: pointer to IO memory
* @count: data length in bytes to be copied
*
* Copies data from IO memory space to "real" memory space.
*/
void gnss_memcpy16_from_io(const void *to, const void __iomem *from, u32 count)
{
u16 *d = (u16 *)to;
u16 *s = (u16 *)from;
u32 words = count >> 1;
while (words--)
*d++ = ioread16(s++);
}
/**
* gnss_memcpy16_to_io
* @to: pointer to IO memory
* @from: pointer to "real" memory
* @count: data length in bytes to be copied
*
* Copies data from "real" memory space to IO memory space.
*/
void gnss_memcpy16_to_io(const void __iomem *to, const void *from, u32 count)
{
u16 *d = (u16 *)to;
u16 *s = (u16 *)from;
u32 words = count >> 1;
while (words--)
iowrite16(*s++, d++);
}
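/*
 * Note on the two helpers above: data moves in 16-bit units via
 * ioread16()/iowrite16(), so 'count' is expected to be even; a trailing
 * odd byte would be dropped by the 'count >> 1' word count.  The 'const'
 * on the destination pointers matches the prototypes in
 * gnss_link_device_memory.h and is cast away internally.
 */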
/**
* gnss_memcmp16_to_io
* @to: pointer to IO memory
* @from: pointer to "real" memory
* @count: data length in bytes to be compared
*
* Compares data from "real" memory space to IO memory space.
*/
int gnss_memcmp16_to_io(const void __iomem *to, const void *from, u32 count)
{
u16 *d = (u16 *)to;
u16 *s = (u16 *)from;
int words = count >> 1;
int diff = 0;
int i;
u16 d1;
u16 s1;
for (i = 0; i < words; i++) {
d1 = ioread16(d);
s1 = *s;
if (d1 != s1) {
diff++;
gif_err("ERR! [%d] d:0x%04X != s:0x%04X\n", i, d1, s1);
}
d++;
s++;
}
return diff;
}
/**
* gnss_circ_read16_from_io
* @dst: start address of the destination buffer
* @src: start address of the buffer in a circular queue
* @qsize: size of the circular queue
* @out: offset to read
* @len: length of data to be read
*
* Should be invoked after checking data length
*/
void gnss_circ_read16_from_io(void *dst, void *src, u32 qsize, u32 out, u32 len)
{
if ((out + len) <= qsize) {
/* ----- (out) (in) ----- */
/* ----- 7f 00 00 7e ----- */
gnss_memcpy16_from_io(dst, (src + out), len);
} else {
/* (in) ----------- (out) */
/* 00 7e ----------- 7f 00 */
unsigned len1 = qsize - out;
/* 1) data start (out) ~ buffer end */
gnss_memcpy16_from_io(dst, (src + out), len1);
/* 2) buffer start ~ data end (in - 1) */
gnss_memcpy16_from_io((dst + len1), src, (len - len1));
}
}
/**
* gnss_circ_write16_to_io
* @dst: pointer to the start of the circular queue
* @src: pointer to the source
* @qsize: size of the circular queue
* @in: offset to write
* @len: length of data to be written
*
* Should be invoked after checking free space
*/
void gnss_circ_write16_to_io(void *dst, void *src, u32 qsize, u32 in, u32 len)
{
u32 space;
if ((in + len) < qsize) {
/* (in) ----------- (out) */
/* 00 7e ----------- 7f 00 */
gnss_memcpy16_to_io((dst + in), src, len);
} else {
/* ----- (out) (in) ----- */
/* ----- 7f 00 00 7e ----- */
/* 1) space start (in) ~ buffer end */
space = qsize - in;
gnss_memcpy16_to_io((dst + in), src, ((len > space) ? space : len));
/* 2) buffer start ~ data end */
if (len > space)
gnss_memcpy16_to_io(dst, (src + space), (len - space));
}
}
/**
* gnss_copy_circ_to_user
* @dst: start address of the destination buffer
* @src: start address of the buffer in a circular queue
* @qsize: size of the circular queue
* @out: offset to read
* @len: length of data to be read
*
* Should be invoked after checking data length
*/
int gnss_copy_circ_to_user(void __user *dst, void *src, u32 qsize, u32 out, u32 len)
{
if ((out + len) <= qsize) {
/* ----- (out) (in) ----- */
/* ----- 7f 00 00 7e ----- */
if (copy_to_user(dst, (src + out), len)) {
gif_err("ERR! <called by %pf> copy_to_user fail\n",
CALLER);
return -EFAULT;
}
} else {
/* (in) ----------- (out) */
/* 00 7e ----------- 7f 00 */
unsigned len1 = qsize - out;
/* 1) data start (out) ~ buffer end */
if (copy_to_user(dst, (src + out), len1)) {
gif_err("ERR! <called by %pf> copy_to_user fail\n",
CALLER);
return -EFAULT;
}
/* 2) buffer start ~ data end (in - 1) */
if (copy_to_user((dst + len1), src, (len - len1))) {
gif_err("ERR! <called by %pf> copy_to_user fail\n",
CALLER);
return -EFAULT;
}
}
return 0;
}
/**
* gnss_copy_user_to_circ
* @dst: pointer to the start of the circular queue
* @src: pointer to the source
* @qsize: size of the circular queue
* @in: offset to write
* @len: length of data to be written
*
* Should be invoked after checking free space
*/
int gnss_copy_user_to_circ(void *dst, void __user *src, u32 qsize, u32 in, u32 len)
{
u32 space;
u32 len1;
if ((in + len) < qsize) {
/* (in) ----------- (out) */
/* 00 7e ----------- 7f 00 */
if (copy_from_user((dst + in), src, len)) {
gif_err("ERR! <called by %pf> copy_from_user fail\n",
CALLER);
return -EFAULT;
}
} else {
/* ----- (out) (in) ----- */
/* ----- 7f 00 00 7e ----- */
/* 1) space start (in) ~ buffer end */
space = qsize - in;
len1 = (len > space) ? space : len;
if (copy_from_user((dst + in), src, len1)) {
gif_err("ERR! <called by %pf> copy_from_user fail\n",
CALLER);
return -EFAULT;
}
/* 2) buffer start ~ data end */
if (len > len1) {
if (copy_from_user(dst, (src + space), (len - len1))) {
gif_err("ERR! <called by %pf> copy_from_user fail\n",
CALLER);
return -EFAULT;
}
}
}
return 0;
}
/**
* gnss_capture_mem_dump
* @ld: pointer to an instance of link_device structure
* @base: base virtual address to a memory interface medium
* @size: size of the memory interface medium
*
* Captures a dump for a memory interface medium.
*
* Returns the pointer to a memory dump buffer.
*/
u8 *gnss_capture_mem_dump(struct link_device *ld, u8 *base, u32 size)
{
u8 *buff = kzalloc(size, GFP_ATOMIC);
if (!buff) {
gif_err("%s: ERR! kzalloc(%d) fail\n", ld->name, size);
return NULL;
} else {
gnss_memcpy16_from_io(buff, base, size);
return buff;
}
}
/**
* gnss_trq_get_free_slot
* @trq : pointer to an instance of trace_data_queue structure
*
* Always succeeds; if the "trq" is full, the oldest slot is dropped (and its
* data buffer freed) to make room.
*/
struct trace_data *gnss_trq_get_free_slot(struct trace_data_queue *trq)
{
int qsize = MAX_TRACE_SIZE;
int in;
int out;
unsigned long flags;
struct trace_data *trd;
spin_lock_irqsave(&trq->lock, flags);
in = trq->in;
out = trq->out;
/* The oldest slot can be dropped. */
if (circ_get_space(qsize, in, out) < 1) {
/* Free the data buffer in the oldest slot */
trd = &trq->trd[out];
kfree(trd->data);
/* Make the oldest slot empty */
out++;
trq->out = (out == qsize) ? 0 : out;
}
/* Get a free slot and make it occupied */
trd = &trq->trd[in++];
trq->in = (in == qsize) ? 0 : in;
spin_unlock_irqrestore(&trq->lock, flags);
memset(trd, 0, sizeof(struct trace_data));
return trd;
}
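/*
 * Note on gnss_trq_get_free_slot() above: unlike the mem_status ring,
 * each trace slot owns a kmalloc'ed data buffer, so when the ring is
 * full the oldest slot's buffer is kfree'd before that slot is recycled.
 * The memset() of the returned slot happens after the spinlock is
 * dropped, which assumes a single producer.
 */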
struct trace_data *gnss_trq_get_data_slot(struct trace_data_queue *trq)
{
int qsize = MAX_TRACE_SIZE;
int in;
int out;
unsigned long flags;
struct trace_data *trd;
spin_lock_irqsave(&trq->lock, flags);
in = trq->in;
out = trq->out;
if (circ_get_usage(qsize, in, out) < 1) {
spin_unlock_irqrestore(&trq->lock, flags);
return NULL;
}
/* Get a data slot and make it empty */
trd = &trq->trd[out++];
trq->out = (out == qsize) ? 0 : out;
spin_unlock_irqrestore(&trq->lock, flags);
return trd;
}

View file

@ -0,0 +1,232 @@
/*
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __GNSS_LINK_DEVICE_MEMORY_H__
#define __GNSS_LINK_DEVICE_MEMORY_H__
#include <linux/spinlock.h>
#include <linux/wakelock.h>
#include <linux/workqueue.h>
#include <linux/timer.h>
#include <linux/notifier.h>
#if defined(CONFIG_HAS_EARLYSUSPEND)
#include <linux/earlysuspend.h>
#elif defined(CONFIG_FB)
#include <linux/fb.h>
#endif
#include "gnss_prj.h"
/* special interrupt cmd indicating gnss boot failure. */
#define INT_POWERSAFE_FAIL 0xDEAD
#define DUMP_TIMEOUT (30 * HZ)
#define DUMP_START_TIMEOUT (100 * HZ)
#define DUMP_WAIT_TIMEOUT (HZ >> 10) /* 1/1024 second */
#define REQ_BCMD_TIMEOUT 200 /* 200 ms */
#define MAX_TIMEOUT_CNT 1000
#define MAX_SKB_TXQ_DEPTH 1024
#define MAX_RETRY_CNT 3
enum circ_ptr_type {
HEAD,
TAIL,
};
static inline bool circ_valid(u32 qsize, u32 in, u32 out)
{
if (in >= qsize)
return false;
if (out >= qsize)
return false;
return true;
}
static inline u32 circ_get_space(u32 qsize, u32 in, u32 out)
{
return (in < out) ? (out - in - 1) : (qsize + out - in - 1);
}
static inline u32 circ_get_usage(u32 qsize, u32 in, u32 out)
{
return (in >= out) ? (in - out) : (qsize - out + in);
}
static inline u32 circ_new_pointer(u32 qsize, u32 p, u32 len)
{
p += len;
return (p < qsize) ? p : (p - qsize);
}
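/*
 * Worked example (illustrative values) for the helpers above, with
 * qsize = 1024, in = 100, out = 1000:
 *
 *   circ_get_usage(1024, 100, 1000) = 1024 - 1000 + 100 = 124 bytes queued
 *   circ_get_space(1024, 100, 1000) = 1000 - 100 - 1    = 899 bytes free
 *   circ_new_pointer(1024, 1000, 124) = 1124 - 1024     = 100 (wrap-around)
 *
 * usage + space is always qsize - 1: one slot is kept empty so that
 * in == out unambiguously means "queue empty".
 */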
/**
* circ_read
* @dst: start address of the destination buffer
* @src: start address of the buffer in a circular queue
* @qsize: size of the circular queue
* @out: offset to read
* @len: length of data to be read
*
* Should be invoked after checking data length
*/
static inline void circ_read(void *dst, void *src, u32 qsize, u32 out, u32 len)
{
unsigned len1;
if ((out + len) <= qsize) {
/* ----- (out) (in) ----- */
/* ----- 7f 00 00 7e ----- */
memcpy(dst, (src + out), len);
} else {
/* (in) ----------- (out) */
/* 00 7e ----------- 7f 00 */
/* 1) data start (out) ~ buffer end */
len1 = qsize - out;
memcpy(dst, (src + out), len1);
/* 2) buffer start ~ data end (in - 1) */
memcpy((dst + len1), src, (len - len1));
}
}
/**
* circ_write
* @dst: pointer to the start of the circular queue
* @src: pointer to the source
* @qsize: size of the circular queue
* @in: offset to write
* @len: length of data to be written
*
* Should be invoked after checking free space
*/
static inline void circ_write(void *dst, void *src, u32 qsize, u32 in, u32 len)
{
u32 space;
if ((in + len) <= qsize) {
/* (in) ----------- (out) */
/* 00 7e ----------- 7f 00 */
memcpy((dst + in), src, len);
} else {
/* ----- (out) (in) ----- */
/* ----- 7f 00 00 7e ----- */
/* 1) space start (in) ~ buffer end */
space = qsize - in;
memcpy((dst + in), src, space);
/* 2) buffer start ~ data end */
memcpy(dst, (src + space), (len - space));
}
}
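/*
 * Wrap-around example (illustrative values): circ_write() with qsize = 8,
 * in = 6 and len = 4 first copies 2 bytes to offsets 6..7 (space = 8 - 6)
 * and then the remaining 2 bytes to offsets 0..1; circ_read() undoes the
 * same split on the consumer side.  Both assume the caller has already
 * checked free space / data length as stated in their kernel-doc.
 */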
/**
* circ_dir
* @dir: communication direction (enum direction)
*
* Returns the direction of a circular queue
*
*/
static inline const char *circ_dir(enum direction dir)
{
if (dir == TX)
return "TXQ";
else
return "RXQ";
}
/**
* circ_ptr
* @ptr: circular queue pointer (enum circ_ptr_type)
*
* Returns the name of a circular queue pointer
*
*/
static inline const char *circ_ptr(enum circ_ptr_type ptr)
{
if (ptr == HEAD)
return "head";
else
return "tail";
}
void gnss_memcpy16_from_io(const void *to, const void __iomem *from, u32 count);
void gnss_memcpy16_to_io(const void __iomem *to, const void *from, u32 count);
int gnss_memcmp16_to_io(const void __iomem *to, const void *from, u32 count);
void gnss_circ_read16_from_io(void *dst, void *src, u32 qsize, u32 out, u32 len);
void gnss_circ_write16_to_io(void *dst, void *src, u32 qsize, u32 in, u32 len);
int gnss_copy_circ_to_user(void __user *dst, void *src, u32 qsize, u32 out, u32 len);
int gnss_copy_user_to_circ(void *dst, void __user *src, u32 qsize, u32 in, u32 len);
#define MAX_MEM_LOG_CNT 8192
#define MAX_TRACE_SIZE 1024
struct mem_status {
/* Timestamp */
struct timespec ts;
/* Direction (TX or RX) */
enum direction dir;
/* The status of memory interface at the time */
u32 head[MAX_DIR];
u32 tail[MAX_DIR];
u16 int2ap;
u16 int2gnss;
};
struct mem_status_queue {
spinlock_t lock;
u32 in;
u32 out;
struct mem_status stat[MAX_MEM_LOG_CNT];
};
struct circ_status {
u8 *buff;
u32 qsize; /* the size of a circular buffer */
u32 in;
u32 out;
u32 size; /* the size of free space or received data */
};
struct trace_data {
struct timespec ts;
struct circ_status circ_stat;
u8 *data;
u32 size;
};
struct trace_data_queue {
spinlock_t lock;
u32 in;
u32 out;
struct trace_data trd[MAX_TRACE_SIZE];
};
void gnss_msq_reset(struct mem_status_queue *msq);
struct mem_status *gnss_msq_get_free_slot(struct mem_status_queue *msq);
struct mem_status *gnss_msq_get_data_slot(struct mem_status_queue *msq);
u8 *gnss_capture_mem_dump(struct link_device *ld, u8 *base, u32 size);
struct trace_data *gnss_trq_get_free_slot(struct trace_data_queue *trq);
struct trace_data *gnss_trq_get_data_slot(struct trace_data_queue *trq);
#endif

View file

@ -0,0 +1,946 @@
/*
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/irq.h>
#include <linux/gpio.h>
#include <linux/time.h>
#include <linux/interrupt.h>
#include <linux/timer.h>
#include <linux/wakelock.h>
#include <linux/delay.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
#include <linux/if_arp.h>
#include <linux/platform_device.h>
#include <linux/kallsyms.h>
#include <linux/suspend.h>
#include <linux/notifier.h>
#include <linux/smc.h>
#include <linux/skbuff.h>
#ifdef CONFIG_OF_RESERVED_MEM
#include <linux/of_reserved_mem.h>
#endif
#include "include/gnss.h"
#include "gnss_link_device_shmem.h"
#include "../mcu_ipc/mcu_ipc.h"
#include "gnss_prj.h"
struct shmem_conf shmem_conf;
void gnss_write_reg(struct gnss_shared_reg *gnss_reg, u32 value)
{
if (gnss_reg) {
switch (gnss_reg->device) {
case GNSS_IPC_MBOX:
mbox_set_value(MCU_GNSS, gnss_reg->value.index, value);
break;
case GNSS_IPC_SHMEM:
iowrite32(value, gnss_reg->value.addr);
break;
default:
gif_err("Don't know where to write register! (%d)\n",
gnss_reg->device);
}
} else {
gif_err("Couldn't find the register node.\n");
}
return;
}
u32 gnss_read_reg(struct gnss_shared_reg *gnss_reg)
{
u32 ret = 0;
if (gnss_reg) {
switch (gnss_reg->device) {
case GNSS_IPC_MBOX:
ret = mbox_get_value(MCU_GNSS, gnss_reg->value.index);
break;
case GNSS_IPC_SHMEM:
ret = ioread32(gnss_reg->value.addr);
break;
default:
gif_err("Don't know where to read register from! (%d)\n",
gnss_reg->device);
}
} else {
gif_err("Couldn't find the register node.\n");
}
return ret;
}
/**
* recv_int2ap
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the value of the GNSS-to-AP interrupt register.
*/
static inline u16 recv_int2ap(struct shmem_link_device *shmd)
{
return (u16)mbox_get_value(MCU_GNSS, shmd->irq_gnss2ap_ipc_msg);
}
/**
* send_int2gnss
* @shmd: pointer to an instance of shmem_link_device structure
* @mask: value to be written to the AP-to-GNSS interrupt register
*/
static inline void send_int2gnss(struct shmem_link_device *shmd, u16 mask)
{
gnss_write_reg(shmd->reg[GNSS_REG_TX_IPC_MSG], mask);
mbox_set_interrupt(MCU_GNSS, shmd->int_ap2gnss_ipc_msg);
}
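/*
 * Note on send_int2gnss() above: an AP-to-GNSS IPC "kick" is two steps:
 * the mask is written to the shared GNSS_REG_TX_IPC_MSG register and then
 * the int_ap2gnss_ipc_msg mailbox interrupt is raised.  shmem_send_ipc()
 * below passes 0x82 as the mask after queueing new TX data.
 */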
/**
* get_shmem_status
* @shmd: pointer to an instance of shmem_link_device structure
* @dir: direction of communication (TX or RX)
* @mst: pointer to an instance of mem_status structure
*
* Takes a snapshot of the current status of a SHMEM.
*/
static void get_shmem_status(struct shmem_link_device *shmd,
enum direction dir, struct mem_status *mst)
{
mst->dir = dir;
mst->head[TX] = get_txq_head(shmd);
mst->tail[TX] = get_txq_tail(shmd);
mst->head[RX] = get_rxq_head(shmd);
mst->tail[RX] = get_rxq_tail(shmd);
mst->int2ap = recv_int2ap(shmd);
mst->int2gnss = read_int2gnss(shmd);
gif_debug("----- %s -----\n", __func__);
gif_debug("%s: mst->dir = %d\n", __func__, mst->dir);
gif_debug("%s: mst->head[TX] = %d\n", __func__, mst->head[TX]);
gif_debug("%s: mst->tail[TX] = %d\n", __func__, mst->tail[TX]);
gif_debug("%s: mst->head[RX] = %d\n", __func__, mst->head[RX]);
gif_debug("%s: mst->tail[RX] = %d\n", __func__, mst->tail[RX]);
gif_debug("%s: mst->int2ap = %d\n", __func__, mst->int2ap);
gif_debug("%s: mst->int2gnss = %d\n", __func__, mst->int2gnss);
gif_debug("----- %s -----\n", __func__);
}
static inline void update_rxq_tail_status(struct shmem_link_device *shmd,
struct mem_status *mst)
{
mst->tail[RX] = get_rxq_tail(shmd);
}
/**
* msg_rx_work
* @ws: pointer to an instance of work_struct structure
*
* Invokes the recv method in the io_device instance to perform receiving IPC
* messages from each skb.
*/
static void msg_rx_work(struct work_struct *ws)
{
struct shmem_link_device *shmd;
struct link_device *ld;
struct io_device *iod;
struct sk_buff *skb;
shmd = container_of(ws, struct shmem_link_device, msg_rx_dwork.work);
ld = &shmd->ld;
iod = ld->iod;
while (1) {
skb = skb_dequeue(ld->skb_rxq);
if (!skb)
break;
if (iod->recv_skb_single)
iod->recv_skb_single(iod, ld, skb);
else
gif_err("ERR! iod->recv_skb_single undefined!\n");
}
}
/**
* rx_ipc_frames
* @shmd: pointer to an instance of shmem_link_device structure
* @mst: pointer to an instance of mem_status structure
*
* Returns
* ret < 0 : error
* ret == 0 : ILLEGAL status
* ret > 0 : valid data
*
* Must be invoked only when there is data in the corresponding RXQ.
*
* Requires a recv_skb method in the io_device instance, so this function must
* be used only for EXYNOS.
*/
static int rx_ipc_frames(struct shmem_link_device *shmd,
struct circ_status *circ)
{
struct link_device *ld = &shmd->ld;
struct io_device *iod;
struct sk_buff_head *rxq = ld->skb_rxq;
struct sk_buff *skb;
/**
* variables for the status of the circular queue
*/
u8 *src;
u8 hdr[EXYNOS_HEADER_SIZE];
/**
* variables for RX processing
*/
int qsize; /* size of the queue */
int rcvd; /* size of data in the RXQ or error */
int rest; /* size of the rest data */
int out; /* index to the start of current frame */
int tot; /* total length including padding data */
src = circ->buff;
qsize = circ->qsize;
out = circ->out;
rcvd = circ->size;
rest = circ->size;
tot = 0;
while (rest > 0) {
u8 ch;
/* Copy the header in the frame to the header buffer */
circ_read(hdr, src, qsize, out, EXYNOS_HEADER_SIZE);
/*
gif_err("src : 0x%p, out : 0x%x, recvd : 0x%x, qsize : 0x%x\n",
src, out, rcvd, qsize);
*/
/* Check the config field in the header */
if (unlikely(!exynos_start_valid(hdr))) {
gif_err("%s: ERR! %s INVALID config 0x%02X (rcvd %d, rest %d)\n",
ld->name, "FMT", hdr[0],
rcvd, rest);
goto bad_msg;
}
/* Verify the total length of the frame (data + padding) */
tot = exynos_get_total_len(hdr);
if (unlikely(tot > rest)) {
gif_err("%s: ERR! %s tot %d > rest %d (rcvd %d)\n",
ld->name, "FMT", tot, rest, rcvd);
goto bad_msg;
}
/* Allocate an skb */
skb = dev_alloc_skb(tot);
if (!skb) {
gif_err("%s: ERR! %s dev_alloc_skb(%d) fail\n",
ld->name, "FMT", tot);
goto no_mem;
}
/* Set the attribute of the skb as "single frame" */
skbpriv(skb)->single_frame = true;
/* Read the frame from the RXQ */
circ_read(skb_put(skb, tot), src, qsize, out, tot);
/* Store the skb to the corresponding skb_rxq */
skb_queue_tail(rxq, skb);
ch = exynos_get_ch(skb->data);
iod = ld->iod;
if (!iod) {
gif_err("%s: ERR! no IPC_BOOT iod\n", ld->name);
break;
}
skbpriv(skb)->lnk_hdr = iod->link_header;
skbpriv(skb)->exynos_ch = ch;
/* Calculate new out value */
rest -= tot;
out += tot;
if (unlikely(out >= qsize))
out -= qsize;
}
/* Update tail (out) pointer to empty out the RXQ */
set_rxq_tail(shmd, circ->in);
return rcvd;
no_mem:
/* Update tail (out) pointer to the frame to be read in the future */
set_rxq_tail(shmd, out);
rcvd -= rest;
return rcvd;
bad_msg:
return -EBADMSG;
}
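/*
 * Parsing sketch for rx_ipc_frames() above: each EXYNOS frame is handled
 * roughly as
 *
 *   circ_read(hdr, ...);                    read EXYNOS_HEADER_SIZE bytes
 *   exynos_start_valid(hdr);                sanity-check the config field
 *   tot = exynos_get_total_len(hdr);        frame length incl. padding
 *   skb = dev_alloc_skb(tot);               one skb per "single frame"
 *   circ_read(skb_put(skb, tot), ...);      copy the whole frame out
 *
 * and 'out' advances by 'tot' with wrap-around.  On -ENOMEM the tail is
 * rewound to the unread frame so it can be retried later; on a bad header
 * the function bails out with -EBADMSG without updating the tail.
 */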
/**
* msg_handler: receives IPC messages from every RXQ
* @shmd: pointer to an instance of shmem_link_device structure
* @mst: pointer to an instance of mem_status structure
*
* 1) Receives all IPC message frames currently in every IPC RXQ.
* 2) Sends RES_ACK responses if there are REQ_ACK requests from a GNSS.
* 3) Completes all threads waiting for the corresponding RES_ACK from a GNSS if
* there is any RES_ACK response.
*/
static void msg_handler(struct shmem_link_device *shmd, struct mem_status *mst)
{
struct link_device *ld = &shmd->ld;
struct circ_status circ;
int ret = 0;
/*
if (!ipc_active(shmd)) {
gif_err("%s: ERR! IPC is NOT ACTIVE!!!\n", ld->name);
trigger_forced_cp_crash(shmd);
return;
}
*/
/* Skip RX processing if there is no data in the RXQ */
if (mst->head[RX] == mst->tail[RX]) {
/* Release wakelock */
/* Write 0x0 to mbox register 6 */
/* done_req_ack(shmd); */
return;
}
/* Get the size of data in the RXQ */
ret = get_rxq_rcvd(shmd, mst, &circ);
if (unlikely(ret < 0)) {
gif_err("%s: ERR! get_rxq_rcvd fail (err %d)\n",
ld->name, ret);
return;
}
/* Read data in the RXQ */
ret = rx_ipc_frames(shmd, &circ);
if (unlikely(ret < 0)) {
return;
}
}
/**
* ipc_rx_task: processes a SHMEM command or receives IPC messages
* @shmd: pointer to an instance of shmem_link_device structure
* @mst: pointer to an instance of mem_status structure
*
* Invokes cmd_handler for commands or msg_handler for IPC messages.
*/
static void ipc_rx_task(unsigned long data)
{
struct shmem_link_device *shmd = (struct shmem_link_device *)data;
while (1) {
struct mem_status *mst;
mst = gnss_msq_get_data_slot(&shmd->rx_msq);
if (!mst)
break;
memset(mst, 0, sizeof(struct mem_status));
get_shmem_status(shmd, RX, mst);
/* Update tail variables with the current tail pointers */
//update_rxq_tail_status(shmd, mst);
msg_handler(shmd, mst);
queue_delayed_work(system_wq, &shmd->msg_rx_dwork, 0);
}
}
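/*
 * Note on ipc_rx_task() above: the RX path is split in two.  This tasklet
 * runs in softirq context, drains every mem_status snapshot queued by the
 * mailbox handler and moves complete frames into ld->skb_rxq; delivery to
 * the io_device then happens in process context via msg_rx_dwork and
 * msg_rx_work().
 */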
/**
* shmem_irq_msg_handler: interrupt handler for an MCU_IPC IPC-message interrupt
* @data: pointer to an instance of shmem_link_device structure
*
* 1) Logs a memory status snapshot slot
* 2) Schedules the RX tasklet
*
* Flow for normal interrupt handling:
* shmem_irq_msg_handler -> ipc_rx_task -> msg_handler -> rx_ipc_frames -> ...
*/
static void shmem_irq_msg_handler(void *data)
{
struct shmem_link_device *shmd = (struct shmem_link_device *)data;
//struct mem_status *mst = gnss_msq_get_free_slot(&shmd->rx_msq);
gnss_msq_get_free_slot(&shmd->rx_msq);
/*
intr = recv_int2ap(shmd);
if (unlikely(!INT_VALID(intr))) {
gif_debug("%s: ERR! invalid intr 0x%X\n", ld->name, intr);
return;
}
*/
tasklet_hi_schedule(&shmd->rx_tsk);
}
static void shmem_irq_bcmd_handler(void *data)
{
struct shmem_link_device *shmd = (struct shmem_link_device *)data;
struct link_device *ld = (struct link_device *)&shmd->ld;
u16 intr;
#ifndef USE_SIMPLE_WAKE_LOCK
if (wake_lock_active(&shmd->wlock))
wake_unlock(&shmd->wlock);
#endif
intr = mbox_get_value(MCU_GNSS, shmd->irq_gnss2ap_bcmd);
/* Signal kepler_req_bcmd */
complete(&ld->bcmd_cmpl);
}
/**
* write_ipc_to_txq
* @shmd: pointer to an instance of shmem_link_device structure
* @circ: pointer to an instance of circ_status structure
* @skb: pointer to an instance of sk_buff structure
*
* Must be invoked only when there is enough space in the TXQ.
*/
static void write_ipc_to_txq(struct shmem_link_device *shmd,
struct circ_status *circ, struct sk_buff *skb)
{
u32 qsize = circ->qsize;
u32 in = circ->in;
u8 *buff = circ->buff;
u8 *src = skb->data;
u32 len = skb->len;
/* Print send data to GNSS */
/* gnss_log_ipc_pkt(skb, TX); */
/* Write data to the TXQ */
circ_write(buff, src, qsize, in, len);
/* Update new head (in) pointer */
set_txq_head(shmd, circ_new_pointer(qsize, in, len));
}
/**
* xmit_ipc_msg
* @shmd: pointer to an instance of shmem_link_device structure
*
* Tries to transmit IPC messages in the skb_txq of @dev as many as possible.
*
* Returns the total length of IPC messages transmitted or an error code.
*/
static int xmit_ipc_msg(struct shmem_link_device *shmd)
{
struct link_device *ld = &shmd->ld;
struct sk_buff_head *txq = ld->skb_txq;
struct sk_buff *skb;
unsigned long flags;
struct circ_status circ;
int space;
int copied = 0;
bool chk_nospc = false;
/* Acquire the spin lock for a TXQ */
spin_lock_irqsave(&shmd->tx_lock, flags);
while (1) {
/* Get the size of free space in the TXQ */
space = get_txq_space(shmd, &circ);
if (unlikely(space < 0)) {
/* Empty out the TXQ */
reset_txq_circ(shmd);
copied = -EIO;
break;
}
skb = skb_dequeue(txq);
if (unlikely(!skb))
break;
/* CAUTION: Uplink size is limited to 16 KB, and this limitation
applies ONLY to the North America project.
Check the free space size:
- FMT: compare with skb->len
- RAW: check the used buffer size */
chk_nospc = (space < skb->len) ? true : false;
if (unlikely(chk_nospc)) {
/* Set res_required flag */
atomic_set(&shmd->res_required, 1);
/* Take the skb back to the skb_txq */
skb_queue_head(txq, skb);
gif_err("%s: <by %pf> NOSPC in %s_TXQ {qsize:%u in:%u out:%u} free:%u < len:%u\n",
ld->name, CALLER, "FMT",
circ.qsize, circ.in, circ.out, space, skb->len);
copied = -ENOSPC;
break;
}
/* TX only when there is enough space in the TXQ */
write_ipc_to_txq(shmd, &circ, skb);
copied += skb->len;
dev_kfree_skb_any(skb);
}
/* Release the spin lock */
spin_unlock_irqrestore(&shmd->tx_lock, flags);
return copied;
}
/**
* fmt_tx_work: performs TX for FMT IPC device under SHMEM flow control
* @ws: pointer to an instance of the work_struct structure
*
* 1) Starts waiting for RES_ACK of FMT IPC device.
* 2) Returns immediately if the wait is interrupted.
* 3) Restarts SHMEM flow control if there is a timeout from the wait.
* 4) Otherwise, it performs processing RES_ACK for FMT IPC device.
*/
static void fmt_tx_work(struct work_struct *ws)
{
struct link_device *ld;
ld = container_of(ws, struct link_device, fmt_tx_dwork.work);
queue_delayed_work(ld->tx_wq, ld->tx_dwork, 0);
return;
}
/**
* shmem_send_ipc
* @shmd: pointer to an instance of shmem_link_device structure
*
* 1) Tries to transmit IPC messages in the skb_txq with xmit_ipc_msg().
* 2) Sends an interrupt to GNSS if there is no error from xmit_ipc_msg().
* 3) Starts SHMEM flow control if xmit_ipc_msg() returns -ENOSPC.
*/
static int shmem_send_ipc(struct shmem_link_device *shmd)
{
struct link_device *ld = &shmd->ld;
int ret;
if (atomic_read(&shmd->res_required) > 0) {
gif_err("%s: %s_TXQ is full\n", ld->name, "FMT");
return 0;
}
ret = xmit_ipc_msg(shmd);
if (likely(ret > 0)) {
send_int2gnss(shmd, 0x82);
goto exit;
}
/* If there was no TX, just exit */
if (ret == 0)
goto exit;
/* At this point, ret < 0 */
if (ret == -ENOSPC || ret == -EBUSY) {
/*----------------------------------------------------*/
/* shmd->res_required was set in xmit_ipc_msg(). */
/*----------------------------------------------------*/
queue_delayed_work(ld->tx_wq, ld->tx_dwork,
msecs_to_jiffies(1));
}
exit:
return ret;
}
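/*
 * Flow-control note on shmem_send_ipc() above: when xmit_ipc_msg()
 * returns -ENOSPC it has already set shmd->res_required and put the skb
 * back at the head of the TX queue, so this function only schedules
 * ld->tx_dwork (fmt_tx_work) to retry about 1 ms later instead of
 * dropping data.  A successful transmit ends with the 0x82 interrupt
 * kick to the GNSS.
 */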
/**
* shmem_try_send_ipc
* @shmd: pointer to an instance of shmem_link_device structure
* @iod: pointer to an instance of the io_device structure
* @skb: pointer to an skb that will be transmitted
*
* 1) Enqueues an skb to the skb_txq of the link device instance.
* 2) Tries to transmit IPC messages with shmem_send_ipc().
*/
static void shmem_try_send_ipc(struct shmem_link_device *shmd,
struct io_device *iod, struct sk_buff *skb)
{
struct link_device *ld = &shmd->ld;
struct sk_buff_head *txq = ld->skb_txq;
int ret;
if (unlikely(txq->qlen >= MAX_SKB_TXQ_DEPTH)) {
gif_err("%s: %s txq->qlen %d >= %d\n", ld->name,
"FMT", txq->qlen, MAX_SKB_TXQ_DEPTH);
dev_kfree_skb_any(skb);
return;
}
skb_queue_tail(txq, skb);
ret = shmem_send_ipc(shmd);
if (ret < 0) {
gif_err("%s->%s: ERR! shmem_send_ipc fail (err %d)\n",
iod->name, ld->name, ret);
}
}
/**
* shmem_send
* @ld: pointer to an instance of the link_device structure
* @iod: pointer to an instance of the io_device structure
* @skb: pointer to an skb that will be transmitted
*
* Returns the length of data transmitted or an error code.
*
* Normal call flow for an IPC message:
* shmem_try_send_ipc -> shmem_send_ipc -> xmit_ipc_msg -> write_ipc_to_txq
*
* Call flow on congestion in a IPC TXQ:
* shmem_try_send_ipc -> shmem_send_ipc -> xmit_ipc_msg ,,, queue_delayed_work
* => xxx_tx_work -> wait_for_res_ack
* => msg_handler
* => process_res_ack -> xmit_ipc_msg (,,, queue_delayed_work ...)
*/
static int shmem_send(struct link_device *ld, struct io_device *iod,
struct sk_buff *skb)
{
struct shmem_link_device *shmd = to_shmem_link_device(ld);
int len = skb->len;
#ifndef USE_SIMPLE_WAKE_LOCK
wake_lock_timeout(&shmd->wlock, IPC_WAKELOCK_TIMEOUT);
#endif
shmem_try_send_ipc(shmd, iod, skb);
return len;
}
static void shmem_remap_ipc_region(struct shmem_link_device *shmd)
{
struct shmem_ipc_device *dev;
struct gnss_data *gnss;
u32 tx_size, rx_size, sh_reg_size;
u8 *tmap;
u32 *reg_base;
int i;
tmap = (u8 *)shmd->base;
gnss = shmd->ld.mdm_data;
shmd->ipc_reg_cnt = gnss->ipc_reg_cnt;
shmd->reg = gnss->reg;
/* FMT */
dev = &shmd->ipc_map.dev;
sh_reg_size = shmd->ipc_reg_cnt * sizeof(u32);
rx_size = shmd->size / 2;
tx_size = shmd->size / 2 - sh_reg_size;
dev->rxq.buff = (u8 __iomem *)(tmap);
dev->rxq.size = rx_size;
dev->txq.buff = (u8 __iomem *)(tmap + rx_size);
dev->txq.size = tx_size;
reg_base = (u32 *)(tmap + shmd->size - sh_reg_size);
gif_err("RX region : %x @ %p\n", dev->rxq.size, dev->rxq.buff);
gif_err("TX region : %x @ %p\n", dev->txq.size, dev->txq.buff);
for (i = 0; i < GNSS_REG_COUNT; i++) {
if (shmd->reg[i]) {
if (shmd->reg[i]->device == GNSS_IPC_SHMEM) {
shmd->reg[i]->value.addr = reg_base + shmd->reg[i]->value.index;
gif_err("Reg %s -> %p\n", shmd->reg[i]->name, shmd->reg[i]->value.addr);
}
}
}
}
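/*
 * Layout example for shmem_remap_ipc_region() above (illustrative
 * numbers, not taken from the device tree): with shmd->size = 64 KiB and
 * ipc_reg_cnt = 8, the IPC region is split as
 *
 *   [0          .. 32 KiB)        RXQ buffer   (size / 2)
 *   [32 KiB     .. 64 KiB - 32)   TXQ buffer   (size / 2 - 8 * 4 bytes)
 *   [64 KiB - 32 .. 64 KiB)       shared registers (GNSS_IPC_SHMEM only)
 *
 * i.e. the shared-register block is carved out of the top of the TX half.
 */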
static int shmem_init_ipc_map(struct shmem_link_device *shmd)
{
struct gnss_data *gnss = shmd->ld.mdm_data;
int i;
shmem_remap_ipc_region(shmd);
memset(shmd->base, 0, shmd->size);
shmd->dev = &shmd->ipc_map.dev;
/* Retrieve SHMEM MBOX#, IRQ#, etc. */
shmd->int_ap2gnss_bcmd = gnss->mbx->int_ap2gnss_bcmd;
shmd->int_ap2gnss_ipc_msg = gnss->mbx->int_ap2gnss_ipc_msg;
shmd->irq_gnss2ap_bcmd = gnss->mbx->irq_gnss2ap_bcmd;
shmd->irq_gnss2ap_ipc_msg = gnss->mbx->irq_gnss2ap_ipc_msg;
for (i = 0; i < BCMD_CTRL_COUNT; i++) {
shmd->reg_bcmd_ctrl[i] = gnss->mbx->reg_bcmd_ctrl[i];
}
return 0;
}
void __iomem *gnss_shm_request_region(unsigned int sh_addr,
unsigned int size)
{
int i;
struct page **pages;
void *pv;
pages = kmalloc((size >> PAGE_SHIFT) * sizeof(*pages), GFP_KERNEL);
if (!pages)
return NULL;
for (i = 0; i < (size >> PAGE_SHIFT); i++) {
pages[i] = phys_to_page(sh_addr);
sh_addr += PAGE_SIZE;
}
pv = vmap(pages, size >> PAGE_SHIFT, VM_MAP,
pgprot_writecombine(PAGE_KERNEL));
kfree(pages);
return (void __iomem *)pv;
}
void gnss_release_sh_region(void *rgn)
{
vunmap(rgn);
}
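/*
 * Note on the two helpers above: gnss_shm_request_region() maps the
 * reserved physical window page by page with vmap() and write-combine
 * attributes instead of ioremap(), and gnss_release_sh_region() undoes
 * it with vunmap().  Both assume 'sh_addr' and 'size' are page aligned;
 * a partial trailing page would be dropped by the 'size >> PAGE_SHIFT'
 * page count.
 */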
int kepler_req_bcmd(struct link_device *ld, u16 cmd_id, u16 flags,
u32 param1, u32 param2)
{
struct shmem_link_device *shmd = to_shmem_link_device(ld);
u32 ctrl[BCMD_CTRL_COUNT], ret_val;
unsigned long timeout = msecs_to_jiffies(REQ_BCMD_TIMEOUT);
int ret;
#ifndef USE_SIMPLE_WAKE_LOCK
wake_lock_timeout(&shmd->wlock, BCMD_WAKELOCK_TIMEOUT);
#endif
/* Parse arguments */
/* Flags: Command flags */
/* Param1/2: Parameter 1/2 */
ctrl[CTRL0] = (flags << 16) + cmd_id;
ctrl[CTRL1] = param1;
ctrl[CTRL2] = param2;
gif_debug("%s : set param 0 : 0x%x, 1 : 0x%x, 2 : 0x%x\n",
__func__, ctrl[CTRL0], ctrl[CTRL1], ctrl[CTRL2]);
mbox_set_value(MCU_GNSS, shmd->reg_bcmd_ctrl[CTRL0], ctrl[CTRL0]);
mbox_set_value(MCU_GNSS, shmd->reg_bcmd_ctrl[CTRL1], ctrl[CTRL1]);
mbox_set_value(MCU_GNSS, shmd->reg_bcmd_ctrl[CTRL2], ctrl[CTRL2]);
/*
* 0xff is a MAGIC number used to avoid confusion over whether the
* register was already set by Kepler.
*/
mbox_set_value(MCU_GNSS, shmd->reg_bcmd_ctrl[CTRL3], 0xff);
mbox_set_interrupt(MCU_GNSS, shmd->int_ap2gnss_bcmd);
if (ld->gc->gnss_state == STATE_OFFLINE) {
gif_debug("Set POWER ON!!!!\n");
ld->gc->ops.gnss_power_on(ld->gc);
} else if (ld->gc->gnss_state == STATE_HOLD_RESET) {
purge_txq(ld);
purge_rxq(ld);
clear_shmem_map(shmd);
gif_debug("Set RELEASE RESET!!!!\n");
ld->gc->ops.gnss_release_reset(ld->gc);
}
if (cmd_id == 0x4) /* BLC_Branch does not have return value */
return 0;
ret = wait_for_completion_interruptible_timeout(&ld->bcmd_cmpl,
timeout);
if (ret == 0) {
#ifndef USE_SIMPLE_WAKE_LOCK
wake_unlock(&shmd->wlock);
#endif
gif_err("%s: bcmd TIMEOUT!\n", ld->name);
return -EIO;
}
ret_val = mbox_get_value(MCU_GNSS, shmd->reg_bcmd_ctrl[CTRL3]);
gif_debug("BCMD cmd_id 0x%x returned 0x%x\n", cmd_id, ret_val);
return ret_val;
}
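/*
 * BCMD mailbox protocol as implemented in kepler_req_bcmd() above:
 *
 *   CTRL0 = (flags << 16) | cmd_id    (cmd_id 0x4 is BLC_Branch)
 *   CTRL1 = param1, CTRL2 = param2
 *   CTRL3 = 0xff                      sentinel written by the AP
 *
 * The AP then raises int_ap2gnss_bcmd and, except for BLC_Branch, waits
 * up to REQ_BCMD_TIMEOUT (200 ms) for shmem_irq_bcmd_handler() to
 * complete ld->bcmd_cmpl; the GNSS's return value read back from CTRL3
 * replaces the 0xff sentinel.
 */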
#ifdef CONFIG_OF_RESERVED_MEM
static int __init gnss_if_reserved_mem_setup(struct reserved_mem *remem)
{
pr_debug("%s: memory reserved: paddr=%#lx, t_size=%zd\n",
__func__, (unsigned long)remem->base, (size_t)remem->size);
shmem_conf.shmem_base = remem->base;
shmem_conf.shmem_size = remem->size;
return 0;
}
RESERVEDMEM_OF_DECLARE(gnss_if, "exynos,gnss_if", gnss_if_reserved_mem_setup);
#endif
struct link_device *gnss_shmem_create_link_device(struct platform_device *pdev)
{
struct shmem_link_device *shmd = NULL;
struct link_device *ld = NULL;
struct gnss_data *gnss = NULL;
struct device *dev = &pdev->dev;
int err = 0;
gif_debug("+++\n");
/* Get the gnss (platform) data */
gnss = (struct gnss_data *)dev->platform_data;
if (!gnss) {
gif_err("ERR! gnss == NULL\n");
return NULL;
}
gif_err("%s: %s\n", "SHMEM", gnss->name);
if (!gnss->mbx) {
gif_err("%s: ERR! %s->mbx == NULL\n",
"SHMEM", gnss->name);
return NULL;
}
/* Alloc an instance of shmem_link_device structure */
shmd = devm_kzalloc(dev, sizeof(struct shmem_link_device), GFP_KERNEL);
if (!shmd) {
gif_err("%s: ERR! shmd kzalloc fail\n", "SHMEM");
goto error;
}
ld = &shmd->ld;
/* Retrieve gnss data and SHMEM control data from the gnss data */
ld->mdm_data = gnss;
ld->timeout_cnt = 0;
ld->name = "GNSS_SHDMEM";
/* Set attributes as a link device */
ld->send = shmem_send;
ld->req_bcmd = kepler_req_bcmd;
skb_queue_head_init(&ld->sk_fmt_tx_q);
ld->skb_txq = &ld->sk_fmt_tx_q;
skb_queue_head_init(&ld->sk_fmt_rx_q);
ld->skb_rxq = &ld->sk_fmt_rx_q;
/* Initialize GNSS Reserved mem */
gnss->gnss_base = gnss_shm_request_region(gnss->shmem_base,
gnss->ipcmem_offset);
if (!gnss->gnss_base) {
gif_err("%s: ERR! gnss_reserved_region fail\n", ld->name);
goto error;
}
gif_err("%s: gnss phys_addr:0x%08X virt_addr:0x%p size: %d\n", ld->name,
gnss->shmem_base, gnss->gnss_base, gnss->ipcmem_offset);
/* Create fault info area */
if (gnss->fault_info.device == GNSS_IPC_SHMEM) {
gnss->fault_info.value.addr = gnss_shm_request_region(
gnss->shmem_base + gnss->fault_info.value.index,
gnss->fault_info.size);
gif_err("%s: fault phys_addr:0x%08X virt_addr:0x%p size:%d\n",
ld->name, gnss->shmem_base + gnss->fault_info.value.index,
gnss->fault_info.value.addr, gnss->fault_info.size);
}
shmd->start = gnss->shmem_base + gnss->ipcmem_offset;
shmd->size = gnss->ipc_size;
shmd->base = gnss_shm_request_region(shmd->start, shmd->size);
if (!shmd->base) {
gif_err("%s: ERR! gnss_shm_request_region fail\n", ld->name);
goto error;
}
gif_err("%s: phys_addr:0x%08X virt_addr:0x%8p size:%d\n",
ld->name, shmd->start, shmd->base, shmd->size);
/* Initialize SHMEM maps (physical map -> logical map) */
err = shmem_init_ipc_map(shmd);
if (err < 0) {
gif_err("%s: ERR! shmem_init_ipc_map fail (err %d)\n",
ld->name, err);
goto error;
}
#ifndef USE_SIMPLE_WAKE_LOCK
/* Initialize locks, completions, and bottom halves */
snprintf(shmd->wlock_name, MIF_MAX_NAME_LEN, "%s_wlock", ld->name);
wake_lock_init(&shmd->wlock, WAKE_LOCK_SUSPEND, shmd->wlock_name);
#endif
init_completion(&ld->bcmd_cmpl);
tasklet_init(&shmd->rx_tsk, ipc_rx_task, (unsigned long)shmd);
INIT_DELAYED_WORK(&shmd->msg_rx_dwork, msg_rx_work);
spin_lock_init(&shmd->tx_lock);
ld->tx_wq = create_singlethread_workqueue("shmem_tx_wq");
if (!ld->tx_wq) {
gif_err("%s: ERR! fail to create tx_wq\n", ld->name);
goto error;
}
INIT_DELAYED_WORK(&ld->fmt_tx_dwork, fmt_tx_work);
ld->tx_dwork = &ld->fmt_tx_dwork;
spin_lock_init(&shmd->tx_msq.lock);
spin_lock_init(&shmd->rx_msq.lock);
/* Register interrupt handlers */
err = mbox_request_irq(MCU_GNSS, shmd->irq_gnss2ap_ipc_msg,
shmem_irq_msg_handler, shmd);
if (err) {
gif_err("%s: ERR! mbox_request_irq fail (err %d)\n",
ld->name, err);
goto error;
}
err = mbox_request_irq(MCU_GNSS, shmd->irq_gnss2ap_bcmd,
shmem_irq_bcmd_handler, shmd);
if (err) {
gif_err("%s: ERR! mbox_request_irq fail (err %d)\n",
ld->name, err);
goto error;
}
gif_debug("---\n");
return ld;
error:
gif_err("xxx\n");
devm_kfree(dev, shmd);
return NULL;
}

View file

@ -0,0 +1,499 @@
/*
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __GNSS_LINK_DEVICE_SHMEM_H__
#define __GNSS_LINK_DEVICE_SHMEM_H__
#include <linux/mcu_ipc.h>
#include "gnss_link_device_memory.h"
/* for checking gnss information */
#define SHM_2M_FMT_TX_BUFF_SZ (1024 * 1024)
#define SHM_2M_FMT_RX_BUFF_SZ (1024 * 1024)
#define IPC_WAKELOCK_TIMEOUT (HZ)
#define BCMD_WAKELOCK_TIMEOUT (HZ / 10) /* 100 msec */
struct shmem_circ {
u32 __iomem *head;
u32 __iomem *tail;
u8 __iomem *buff;
u32 size;
};
struct shmem_ipc_device {
struct shmem_circ txq;
struct shmem_circ rxq;
};
struct shmem_ipc_map {
u32 __iomem *magic;
u32 __iomem *access;
struct shmem_ipc_device dev;
};
struct shmem_link_device {
struct link_device ld;
struct gnss_mbox *mbx;
struct gnss_shared_reg **reg;
/* SHMEM (SHARED MEMORY) address, size, IRQ# */
u32 start; /* physical "start" address of SHMEM */
u32 size; /* size of SHMEM */
u32 __iomem *base; /* virtual address to the "IPC" region */
u32 ipc_reg_cnt;
/* IPC device map */
struct shmem_ipc_map ipc_map;
/* Pointers (aliases) to IPC device map */
u32 __iomem *magic;
u32 __iomem *access;
struct shmem_ipc_device *dev;
/* MBOX number & IRQ */
int int_ap2gnss_bcmd;
int int_ap2gnss_ipc_msg;
int irq_gnss2ap_bcmd;
int irq_gnss2ap_ipc_msg;
unsigned reg_bcmd_ctrl[BCMD_CTRL_COUNT];
/* Wakelock for SHMEM device */
struct wake_lock wlock;
char wlock_name[GNSS_MAX_NAME_LEN];
/* for locking TX process */
spinlock_t tx_lock;
/* for retransmission under SHMEM flow control after TXQ full state */
atomic_t res_required;
struct completion req_ack_cmpl;
/* for efficient RX process */
struct tasklet_struct rx_tsk;
struct delayed_work msg_rx_dwork;
struct io_device *iod;
/* for logging SHMEM status */
struct mem_status_queue tx_msq;
struct mem_status_queue rx_msq;
/* for logging SHMEM dump */
struct trace_data_queue trace_list;
/* to hold/release "cp_wakeup" for PM (power-management) */
struct delayed_work cp_sleep_dwork;
atomic_t ref_cnt;
spinlock_t pm_lock;
};
/* converts from struct link_device* to struct xxx_link_device* */
#define to_shmem_link_device(linkdev) \
container_of(linkdev, struct shmem_link_device, ld)
void gnss_write_reg(struct gnss_shared_reg *gnss_reg, u32 value);
u32 gnss_read_reg(struct gnss_shared_reg *gnss_reg);
/**
* get_txq_head
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the value of a head (in) pointer in a TX queue.
*/
static inline u32 get_txq_head(struct shmem_link_device *shmd)
{
return gnss_read_reg(shmd->reg[GNSS_REG_TX_HEAD]);
}
/**
* get_txq_tail
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the value of a tail (out) pointer in a TX queue.
*
* It is pointless for the AP to read the tail pointer of a TX queue twice to
* verify that the value is valid, because the GNSS may already have updated it
* after the first read by the AP.
*/
static inline u32 get_txq_tail(struct shmem_link_device *shmd)
{
return gnss_read_reg(shmd->reg[GNSS_REG_TX_TAIL]);
}
/**
* get_txq_buff
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the start address of the buffer in a TXQ.
*/
static inline u8 *get_txq_buff(struct shmem_link_device *shmd)
{
return shmd->dev->txq.buff;
}
/**
* get_txq_buff_size
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the size of the buffer in a TXQ.
*/
static inline u32 get_txq_buff_size(struct shmem_link_device *shmd)
{
return shmd->dev->txq.size;
}
/**
* get_rxq_head
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the value of a head (in) pointer in an RX queue.
*
* It is pointless for the AP to read the head pointer of an RX queue twice to
* verify that the value is valid, because the GNSS may already have updated it
* after the first read by the AP.
*/
static inline u32 get_rxq_head(struct shmem_link_device *shmd)
{
return gnss_read_reg(shmd->reg[GNSS_REG_RX_HEAD]);
}
/**
* get_rxq_tail
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the value of a tail (out) pointer in an RX queue.
*/
static inline u32 get_rxq_tail(struct shmem_link_device *shmd)
{
return gnss_read_reg(shmd->reg[GNSS_REG_RX_TAIL]);
}
/**
* get_rxq_buff
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the start address of the buffer in an RXQ.
*/
static inline u8 *get_rxq_buff(struct shmem_link_device *shmd)
{
return shmd->dev->rxq.buff;
}
/**
* get_rxq_buff_size
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the size of the buffer in an RXQ.
*/
static inline u32 get_rxq_buff_size(struct shmem_link_device *shmd)
{
return shmd->dev->rxq.size;
}
/**
* set_txq_head
* @shmd: pointer to an instance of shmem_link_device structure
* @in: value to be written to the head pointer in a TXQ
*/
static inline void set_txq_head(struct shmem_link_device *shmd, u32 in)
{
gnss_write_reg(shmd->reg[GNSS_REG_TX_HEAD], in);
}
/**
* set_txq_tail
* @shmd: pointer to an instance of shmem_link_device structure
* @out: value to be written to the tail pointer in a TXQ
*/
static inline void set_txq_tail(struct shmem_link_device *shmd, u32 out)
{
gnss_write_reg(shmd->reg[GNSS_REG_TX_TAIL], out);
}
/**
* set_rxq_head
* @shmd: pointer to an instance of shmem_link_device structure
* @in: value to be written to the head pointer in an RXQ
*/
static inline void set_rxq_head(struct shmem_link_device *shmd, u32 in)
{
gnss_write_reg(shmd->reg[GNSS_REG_RX_HEAD], in);
}
/**
* set_rxq_tail
* @shmd: pointer to an instance of shmem_link_device structure
* @out: value to be written to the tail pointer in an RXQ
*/
static inline void set_rxq_tail(struct shmem_link_device *shmd, u32 out)
{
gnss_write_reg(shmd->reg[GNSS_REG_RX_TAIL], out);
}
/**
* read_int2gnss
* @shmd: pointer to an instance of shmem_link_device structure
*
* Returns the value of the AP-to-GNSS interrupt register.
*/
static inline u16 read_int2gnss(struct shmem_link_device *shmd)
{
return mbox_get_value(MCU_GNSS, shmd->int_ap2gnss_ipc_msg);
}
/**
* reset_txq_circ
* @shmd: pointer to an instance of shmem_link_device structure
* @dev: IPC device (IPC_FMT, IPC_RAW, etc.)
*
* Empties a TXQ by resetting the head (in) pointer with the value in the tail
* (out) pointer.
*/
static inline void reset_txq_circ(struct shmem_link_device *shmd)
{
struct link_device *ld = &shmd->ld;
u32 head = get_txq_head(shmd);
u32 tail = get_txq_tail(shmd);
gif_err("%s: %s_TXQ: HEAD[%u] <== TAIL[%u]\n",
ld->name, "FMT", head, tail);
set_txq_head(shmd, tail);
}
/**
* reset_rxq_circ
* @shmd: pointer to an instance of shmem_link_device structure
* @dev: IPC device (IPC_FMT, IPC_RAW, etc.)
*
* Empties an RXQ by resetting the tail (out) pointer with the value in the head
* (in) pointer.
*/
static inline void reset_rxq_circ(struct shmem_link_device *shmd)
{
struct link_device *ld = &shmd->ld;
u32 head = get_rxq_head(shmd);
u32 tail = get_rxq_tail(shmd);
gif_err("%s: %s_RXQ: TAIL[%u] <== HEAD[%u]\n",
ld->name, "FMT", tail, head);
set_rxq_tail(shmd, head);
}
/**
* get_rxq_rcvd
* @shmd: pointer to an instance of shmem_link_device structure
* @mst: pointer to an instance of mem_status structure
* OUT @circ: pointer to an instance of circ_status structure
*
* Stores {start address of the buffer in a RXQ, size of the buffer, in & out
* pointer values, size of received data} into the 'circ' instance.
*
* Returns an error code.
*/
static inline int get_rxq_rcvd(struct shmem_link_device *shmd,
struct mem_status *mst, struct circ_status *circ)
{
struct link_device *ld = &shmd->ld;
circ->buff = get_rxq_buff(shmd);
circ->qsize = get_rxq_buff_size(shmd);
circ->in = mst->head[RX];
circ->out = mst->tail[RX];
circ->size = circ_get_usage(circ->qsize, circ->in, circ->out);
if (circ_valid(circ->qsize, circ->in, circ->out)) {
gif_debug("%s: %s_RXQ qsize[%u] in[%u] out[%u] rcvd[%u]\n",
ld->name, "FMT", circ->qsize, circ->in,
circ->out, circ->size);
return 0;
} else {
gif_err("%s: ERR! %s_RXQ invalid (qsize[%d] in[%d] out[%d])\n",
ld->name, "FMT", circ->qsize, circ->in,
circ->out);
return -EIO;
}
}
/*
* shmem_purge_rxq
* @ld: pointer to an instance of the link_device structure
*
* Purges pending transfers from the RXQ.
*/
static inline void purge_rxq(struct link_device *ld)
{
skb_queue_purge(ld->skb_rxq);
}
/**
* get_txq_space
* @shmd: pointer to an instance of shmem_link_device structure
* OUT @circ: pointer to an instance of circ_status structure
*
* Stores {start address of the buffer in a TXQ, size of the buffer, in & out
* pointer values, size of free space} into the 'circ' instance.
*
* Returns the size of free space in the buffer or an error code.
*/
static inline int get_txq_space(struct shmem_link_device *shmd,
struct circ_status *circ)
{
struct link_device *ld = &shmd->ld;
int cnt = 0;
u32 qsize;
u32 head;
u32 tail;
int space;
while (1) {
qsize = get_txq_buff_size(shmd);
head = get_txq_head(shmd);
tail = get_txq_tail(shmd);
space = circ_get_space(qsize, head, tail);
gif_debug("%s: %s_TXQ{qsize:%u in:%u out:%u space:%u}\n",
ld->name, "FMT", qsize, head, tail, space);
if (circ_valid(qsize, head, tail))
break;
cnt++;
gif_err("%s: ERR! invalid %s_TXQ{qsize:%d in:%d out:%d space:%d}, count %d\n",
ld->name, "FMT", qsize, head, tail,
space, cnt);
if (cnt >= MAX_RETRY_CNT) {
space = -EIO;
break;
}
udelay(100);
}
circ->buff = get_txq_buff(shmd);
circ->qsize = qsize;
circ->in = head;
circ->out = tail;
circ->size = space;
return space;
}
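/*
 * Note on get_txq_space() above and get_txq_saved() below: the head/tail
 * registers are re-read up to MAX_RETRY_CNT times with a 100 us delay
 * between attempts, presumably because one of the two pointers is owned
 * by the GNSS and a transiently invalid value must not be trusted; once
 * the retries are exhausted the queue is treated as broken and -EIO is
 * returned.
 */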
/**
* get_txq_saved
* @shmd: pointer to an instance of shmem_link_device structure
* @mst: pointer to an instance of mem_status structure
* OUT @circ: pointer to an instance of circ_status structure
*
* Stores {start address of the buffer in a TXQ, size of the buffer, in & out
* pointer values, size of stored data} into the 'circ' instance.
*
* Returns an error code.
*/
static inline int get_txq_saved(struct shmem_link_device *shmd,
struct circ_status *circ)
{
struct link_device *ld = &shmd->ld;
int cnt = 0;
u32 qsize;
u32 head;
u32 tail;
int saved;
while (1) {
qsize = get_txq_buff_size(shmd);
head = get_txq_head(shmd);
tail = get_txq_tail(shmd);
saved = circ_get_usage(qsize, head, tail);
gif_debug("%s: %s_TXQ{qsize:%u in:%u out:%u saved:%u}\n",
ld->name, "FMT", qsize, head, tail, saved);
if (circ_valid(qsize, head, tail))
break;
cnt++;
gif_err("%s: ERR! invalid %s_TXQ{qsize:%d in:%d out:%d saved:%d}, count %d\n",
ld->name, "FMT", qsize, head, tail,
saved, cnt);
if (cnt >= MAX_RETRY_CNT) {
saved = -EIO;
break;
}
udelay(100);
}
circ->buff = get_txq_buff(shmd);
circ->qsize = qsize;
circ->in = head;
circ->out = tail;
circ->size = saved;
return saved;
}
/**
* shmem_purge_txq
* @ld: pointer to an instance of the link_device structure
*
* Purges pending transfers from the TXQ.
*/
static inline void purge_txq(struct link_device *ld)
{
struct shmem_link_device *shmd = to_shmem_link_device(ld);
unsigned long flags;
spin_lock_irqsave(&shmd->tx_lock, flags);
skb_queue_purge(ld->skb_txq);
spin_unlock_irqrestore(&shmd->tx_lock, flags);
}
/**
* clear_shmem_map
* @shmd: pointer to an instance of shmem_link_device structure
*
* Clears all pointers in every circular queue.
*/
static inline void clear_shmem_map(struct shmem_link_device *shmd)
{
set_txq_head(shmd, 0);
set_txq_tail(shmd, 0);
set_rxq_head(shmd, 0);
set_rxq_tail(shmd, 0);
memset(shmd->base, 0x0, shmd->size);
}
/**
* reset_shmem_ipc
* @shmd: pointer to an instance of shmem_link_device structure
*
* Reset SHMEM with IPC map.
*/
static inline void reset_shmem_ipc(struct shmem_link_device *shmd)
{
clear_shmem_map(shmd);
atomic_set(&shmd->res_required, 0);
atomic_set(&shmd->ref_cnt, 0);
}
#endif

View file

@ -0,0 +1,502 @@
/* linux/drivers/misc/gnss/gnss_main.c
*
* Copyright (C) 2010 Google, Inc.
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/miscdevice.h>
#include <linux/if_arp.h>
#include <linux/uaccess.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/irq.h>
#include <linux/gpio.h>
#include <linux/delay.h>
#include <linux/wakelock.h>
#include <linux/mfd/syscon.h>
#include <linux/clk-private.h>
#ifdef CONFIG_OF
#include <linux/of.h>
#include <linux/of_platform.h>
#endif
#include "gnss_prj.h"
extern struct shmem_conf shmem_conf;
static struct gnss_ctl *create_gnssctl_device(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct gnss_data *pdata = pdev->dev.platform_data;
struct gnss_ctl *gnssctl;
struct clk *qch_clk;
int ret;
/* create GNSS control device */
gnssctl = devm_kzalloc(dev, sizeof(struct gnss_ctl), GFP_KERNEL);
if (!gnssctl) {
gif_err("%s: gnssctl devm_kzalloc fail\n", pdata->name);
return NULL;
}
gnssctl->dev = dev;
gnssctl->gnss_state = STATE_OFFLINE;
gnssctl->gnss_data = pdata;
gnssctl->name = pdata->name;
qch_clk = devm_clk_get(dev, "ccore_qch_lh_gnss");
if (!IS_ERR(qch_clk)) {
gif_err("Found Qch clk!\n");
gnssctl->ccore_qch_lh_gnss = qch_clk;
} else {
gnssctl->ccore_qch_lh_gnss = NULL;
}
#ifdef USE_IOREMAP_NOPMU
gnssctl->pmu_reg = devm_ioremap(dev, PMU_ADDR, PMU_SIZE);
if (gnssctl->pmu_reg == NULL) {
gif_err("%s: pmu ioremap failed.\n", pdata->name);
return NULL;
} else
gif_err("pmu_reg : 0x%p\n", gnssctl->pmu_reg);
#endif
/* init gnssctl device for getting gnssctl operations */
ret = init_gnssctl_device(gnssctl, pdata);
if (ret) {
gif_err("%s: init_gnssctl_device fail (err %d)\n",
pdata->name, ret);
devm_kfree(dev, gnssctl);
return NULL;
}
gif_info("%s is created!!!\n", pdata->name);
return gnssctl;
}
static struct io_device *create_io_device(struct platform_device *pdev,
struct gnss_io_t *io_t, struct link_device *ld,
struct gnss_ctl *gnssctl, struct gnss_data *pdata)
{
int ret;
struct device *dev = &pdev->dev;
struct io_device *iod;
iod = devm_kzalloc(dev, sizeof(struct io_device), GFP_KERNEL);
if (!iod) {
gif_err("iod is NULL\n");
return NULL;
}
iod->name = io_t->name;
iod->app = io_t->app;
atomic_set(&iod->opened, 0);
/* link between io device and gnss control */
iod->gc = gnssctl;
gnssctl->iod = iod;
/* link between io device and link device */
iod->ld = ld;
ld->iod = iod;
/* register misc device */
ret = exynos_init_gnss_io_device(iod);
if (ret) {
devm_kfree(dev, iod);
gif_err("exynos_init_gnss_io_device fail (%d)\n", ret);
return NULL;
}
gif_info("%s created\n", iod->name);
return iod;
}
#ifdef CONFIG_OF
static int parse_dt_common_pdata(struct device_node *np,
struct gnss_data *pdata)
{
gif_dt_read_string(np, "shmem,name", pdata->name);
gif_dt_read_string(np, "shmem,device_node_name", pdata->device_node_name);
gif_dt_read_u32(np, "shmem,ipc_offset", pdata->ipcmem_offset);
gif_dt_read_u32(np, "shmem,ipc_size", pdata->ipc_size);
gif_dt_read_u32(np, "shmem,ipc_reg_cnt", pdata->ipc_reg_cnt);
/* Shared Memory Configuration from reserved_mem */
pdata->shmem_base = shmem_conf.shmem_base;
pdata->shmem_size = shmem_conf.shmem_size;
return 0;
}
static int parse_dt_mbox_pdata(struct device *dev, struct device_node *np,
struct gnss_data *pdata)
{
	struct gnss_mbox *mbox;

	mbox = devm_kzalloc(dev, sizeof(struct gnss_mbox), GFP_KERNEL);
if (!mbox) {
gif_err("mbox: failed to alloc memory\n");
return -ENOMEM;
}
pdata->mbx = mbox;
gif_dt_read_u32(np, "mbx,int_ap2gnss_bcmd", mbox->int_ap2gnss_bcmd);
gif_dt_read_u32(np, "mbx,int_ap2gnss_req_fault_info",
mbox->int_ap2gnss_req_fault_info);
gif_dt_read_u32(np, "mbx,int_ap2gnss_ipc_msg", mbox->int_ap2gnss_ipc_msg);
gif_dt_read_u32(np, "mbx,int_ap2gnss_ack_wake_set",
mbox->int_ap2gnss_ack_wake_set);
gif_dt_read_u32(np, "mbx,int_ap2gnss_ack_wake_clr",
mbox->int_ap2gnss_ack_wake_clr);
gif_dt_read_u32(np, "mbx,irq_gnss2ap_bcmd", mbox->irq_gnss2ap_bcmd);
gif_dt_read_u32(np, "mbx,irq_gnss2ap_rsp_fault_info",
mbox->irq_gnss2ap_rsp_fault_info);
gif_dt_read_u32(np, "mbx,irq_gnss2ap_ipc_msg", mbox->irq_gnss2ap_ipc_msg);
gif_dt_read_u32(np, "mbx,irq_gnss2ap_req_wake_clr",
mbox->irq_gnss2ap_req_wake_clr);
gif_dt_read_u32_array(np, "mbx,reg_bcmd_ctrl", mbox->reg_bcmd_ctrl,
BCMD_CTRL_COUNT);
return 0;
}
static int alloc_gnss_reg(struct device *dev, struct gnss_shared_reg **areg,
const char *reg_name, u32 reg_device, u32 reg_value)
{
struct gnss_shared_reg *ret = NULL;
if (!(*areg)) {
ret = devm_kzalloc(dev, sizeof(struct gnss_shared_reg), GFP_KERNEL);
if (ret) {
ret->name = reg_name;
ret->device = reg_device;
ret->value.index = reg_value;
*areg = ret;
}
}
else {
gif_err("Register %s is already allocated!\n", reg_name);
}
return (*areg != NULL);
}
static int parse_single_dt_reg(struct device *dev, const char *propname,
struct gnss_shared_reg **reg)
{
struct device_node *np = dev->of_node;
u32 val[2];
if (!of_property_read_u32_array(np, propname, val, 2)) {
if (!alloc_gnss_reg(dev, reg, propname, val[0], val[1]))
return -EINVAL;
}
return 0;
}
static int parse_dt_reg_mbox_pdata(struct device *dev, struct gnss_data *pdata)
{
int i;
if (parse_single_dt_reg(dev, "reg_rx_ipc_msg",
&pdata->reg[GNSS_REG_RX_IPC_MSG]) != 0) {
goto parse_dt_reg_nomem;
}
if (parse_single_dt_reg(dev, "reg_tx_ipc_msg",
&pdata->reg[GNSS_REG_TX_IPC_MSG]) != 0) {
goto parse_dt_reg_nomem;
}
if (parse_single_dt_reg(dev, "reg_wake_lock",
&pdata->reg[GNSS_REG_WAKE_LOCK]) != 0) {
goto parse_dt_reg_nomem;
}
if (parse_single_dt_reg(dev, "reg_rx_head",
&pdata->reg[GNSS_REG_RX_HEAD]) != 0) {
goto parse_dt_reg_nomem;
}
if (parse_single_dt_reg(dev, "reg_rx_tail",
&pdata->reg[GNSS_REG_RX_TAIL]) != 0) {
goto parse_dt_reg_nomem;
}
if (parse_single_dt_reg(dev, "reg_tx_head",
&pdata->reg[GNSS_REG_TX_HEAD]) != 0) {
goto parse_dt_reg_nomem;
}
if (parse_single_dt_reg(dev, "reg_tx_tail",
&pdata->reg[GNSS_REG_TX_TAIL]) != 0) {
goto parse_dt_reg_nomem;
}
return 0;
parse_dt_reg_nomem:
for (i = 0; i < GNSS_REG_COUNT; i++)
if (pdata->reg[i])
devm_kfree(dev, pdata->reg[i]);
gif_err("reg: could not allocate register memory\n");
return -ENOMEM;
}
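/*
 * For reference, a hypothetical device-tree fragment matching the parsing
 * above: each property is a pair of u32 cells, <device index>, which
 * alloc_gnss_reg() stores as reg->device and reg->value.index. The cell
 * values below are examples only; real values are board-specific.
 *
 *	reg_rx_head = <1 0>;
 *	reg_rx_tail = <1 1>;
 *	reg_tx_head = <1 2>;
 *	reg_tx_tail = <1 3>;
 */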
static int parse_dt_fault_pdata(struct device *dev, struct gnss_data *pdata)
{
struct device_node *np = dev->of_node;
u32 tmp[3];
if (!of_property_read_u32_array(np, "fault_info", tmp, 3)) {
(pdata)->fault_info.name = "gnss_fault_info";
(pdata)->fault_info.device = tmp[0];
(pdata)->fault_info.value.index = tmp[1];
(pdata)->fault_info.size = tmp[2];
}
else {
return -EINVAL;
}
return 0;
}
static struct gnss_data *gnss_if_parse_dt_pdata(struct device *dev)
{
struct gnss_data *pdata;
int i;
	int ret;
pdata = devm_kzalloc(dev, sizeof(struct gnss_data), GFP_KERNEL);
if (!pdata) {
gif_err("gnss_data: alloc fail\n");
return ERR_PTR(-ENOMEM);
}
ret = parse_dt_common_pdata(dev->of_node, pdata);
if (ret != 0) {
gif_err("Failed to parse common pdata.\n");
goto parse_dt_pdata_err;
}
ret = parse_dt_mbox_pdata(dev, dev->of_node, pdata);
if (ret != 0) {
gif_err("Failed to parse mailbox pdata.\n");
goto parse_dt_pdata_err;
}
ret = parse_dt_reg_mbox_pdata(dev, pdata);
if (ret != 0) {
gif_err("Failed to parse mbox register pdata.\n");
goto parse_dt_pdata_err;
}
ret = parse_dt_fault_pdata(dev, pdata);
if (ret != 0) {
gif_err("Failed to parse fault info pdata.\n");
goto parse_dt_pdata_err;
}
for (i = 0; i < GNSS_REG_COUNT; i++) {
if (pdata->reg[i])
gif_err("Found reg: [%d:%d] %s\n",
pdata->reg[i]->device,
pdata->reg[i]->value.index,
pdata->reg[i]->name);
}
gif_err("Fault info: %s [%d:%d:%d]\n",
pdata->fault_info.name,
pdata->fault_info.device,
pdata->fault_info.value.index,
pdata->fault_info.size);
dev->platform_data = pdata;
gif_info("DT parse complete!\n");
return pdata;
parse_dt_pdata_err:
if (pdata)
devm_kfree(dev, pdata);
return ERR_PTR(-EINVAL);
}
static const struct of_device_id sec_gnss_match[] = {
{ .compatible = "samsung,gnss_shdmem_if", },
{},
};
MODULE_DEVICE_TABLE(of, sec_gnss_match);
#else /* !CONFIG_OF */
static struct gnss_data *gnss_if_parse_dt_pdata(struct device *dev)
{
return ERR_PTR(-ENODEV);
}
#endif /* CONFIG_OF */
static int gnss_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct gnss_data *pdata = dev->platform_data;
struct gnss_ctl *gnssctl;
struct io_device *iod;
struct link_device *ld;
unsigned size;
gif_err("%s: +++\n", pdev->name);
if (dev->of_node) {
pdata = gnss_if_parse_dt_pdata(dev);
if (IS_ERR(pdata)) {
gif_err("GIF DT parse error!\n");
return PTR_ERR(pdata);
}
}
/* allocate iodev */
size = sizeof(struct gnss_io_t);
pdata->iodev = devm_kzalloc(dev, size, GFP_KERNEL);
if (!pdata->iodev) {
		gif_err("iodev: failed to alloc memory\n");
		return -ENOMEM;
}
gnssctl = create_gnssctl_device(pdev);
if (!gnssctl) {
gif_err("%s: gnssctl == NULL\n", pdata->name);
return -ENOMEM;
}
/* GNSS uses one IO device and does not need to be parsed from DT. */
pdata->iodev->name = pdata->device_node_name;
pdata->iodev->id = 0; /* Fixed channel 0. */
pdata->iodev->app = "SLL";
/* create link device */
ld = gnss_shmem_create_link_device(pdev);
if (!ld)
goto free_gc;
ld->gc = gnssctl;
gif_err("%s: %s link created\n", pdata->name, ld->name);
/* create io device and connect to gnssctl device */
	iod = create_io_device(pdev, pdata->iodev, ld, gnssctl, pdata);
if (!iod) {
gif_err("%s: iod == NULL\n", pdata->name);
goto free_iod;
}
/* attach device */
gif_debug("set %s->%s\n", iod->name, ld->name);
set_current_link(iod, iod->ld);
platform_set_drvdata(pdev, gnssctl);
gif_err("%s: ---\n", pdata->name);
return 0;
free_iod:
devm_kfree(dev, iod);
free_gc:
devm_kfree(dev, gnssctl);
gif_err("%s: xxx\n", pdata->name);
return -ENOMEM;
}
static void gnss_shutdown(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct gnss_ctl *gc = dev_get_drvdata(dev);
/* Matt - Implement Shutdown */
gc->gnss_state = STATE_OFFLINE;
}
#ifdef CONFIG_PM
static int gnss_suspend(struct device *pdev)
{
struct gnss_ctl *gc = dev_get_drvdata(pdev);
/* Matt - Implement Suspend */
if (gc->ops.suspend_gnss_ctrl != NULL) {
gif_err("%s: pd_active:0\n", gc->name);
gc->ops.suspend_gnss_ctrl(gc);
}
return 0;
}
static int gnss_resume(struct device *pdev)
{
struct gnss_ctl *gc = dev_get_drvdata(pdev);
/* Matt - Implement Resume */
if (gc->ops.resume_gnss_ctrl != NULL) {
gif_err("%s: pd_active:1\n", gc->name);
gc->ops.resume_gnss_ctrl(gc);
}
return 0;
}
#else
#define gnss_suspend NULL
#define gnss_resume NULL
#endif
static const struct dev_pm_ops gnss_pm_ops = {
.suspend = gnss_suspend,
.resume = gnss_resume,
};
static struct platform_driver gnss_driver = {
.probe = gnss_probe,
.shutdown = gnss_shutdown,
.driver = {
.name = "gif_exynos",
.owner = THIS_MODULE,
.pm = &gnss_pm_ops,
#ifdef CONFIG_OF
.of_match_table = of_match_ptr(sec_gnss_match),
#endif
},
};
module_platform_driver(gnss_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Samsung GNSS Interface Driver");

View file

@ -0,0 +1,440 @@
/*
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __GNSS_PRJ_H__
#define __GNSS_PRJ_H__
#include <linux/wait.h>
#include <linux/miscdevice.h>
#include <linux/skbuff.h>
#include <linux/interrupt.h>
#include <linux/completion.h>
#include <linux/wakelock.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/cdev.h>
#include <linux/types.h>
#include "include/gnss.h"
#include "include/exynos_ipc.h"
#include "pmu-gnss.h"
#define CALLER (__builtin_return_address(0))
#define MAX_IOD_RXQ_LEN 2048
#define GNSS_IOC_MAGIC ('K')
#define GNSS_IOCTL_RESET _IO(GNSS_IOC_MAGIC, 0x00)
#define GNSS_IOCTL_LOAD_FIRMWARE _IO(GNSS_IOC_MAGIC, 0x01)
#define GNSS_IOCTL_REQ_FAULT_INFO _IO(GNSS_IOC_MAGIC, 0x02)
#define GNSS_IOCTL_REQ_BCMD _IO(GNSS_IOC_MAGIC, 0x03)
#define GNSS_IOCTL_READ_FIRMWARE _IO(GNSS_IOC_MAGIC, 0x04)
#define GNSS_IOCTL_CHANGE_SENSOR_GPIO _IO(GNSS_IOC_MAGIC, 0x05)
#define GNSS_IOCTL_CHANGE_TCXO_MODE _IO(GNSS_IOC_MAGIC, 0x06)
#define GNSS_IOCTL_SET_SENSOR_POWER _IO(GNSS_IOC_MAGIC, 0x07)
enum sensor_power {
SENSOR_OFF,
SENSOR_ON,
};
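/*
 * Hypothetical user-space sketch (not part of this header): the commands above
 * are issued against the misc device node registered for the GNSS IO device.
 * The node name comes from the "shmem,device_node_name" DT property, so the
 * path used here is only an example.
 *
 *	int fd = open("/dev/gnss_ipc", O_RDWR);
 *	if (fd >= 0) {
 *		ioctl(fd, GNSS_IOCTL_RESET);	(argument-less _IO command)
 *		close(fd);
 *	}
 */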
#ifndef ARCH_EXYNOS
/* Exynos PMU API functions are only available when ARCH_EXYNOS is defined.
* Otherwise, we must hardcode the PMU address for setting the PMU registers.
*/
#define USE_IOREMAP_NOPMU
#endif
#define USE_SIMPLE_WAKE_LOCK
#ifdef USE_IOREMAP_NOPMU
#if defined(CONFIG_SOC_EXYNOS7870)
#define PMU_ADDR (0x10480000)
#define PMU_SIZE (SZ_64K)
#elif defined(CONFIG_SOC_EXYNOS7880)
#define PMU_ADDR (0x106B0000)
#define PMU_SIZE (SZ_64K)
#elif defined(CONFIG_SOC_EXYNOS7570)
#define PMU_ADDR (0x11C80000)
#define PMU_SIZE (SZ_64K)
#endif
#endif /* USE_IOREMAP_NOPMU */
struct kepler_bcmd_args {
u16 flags;
u16 cmd_id;
u32 param1;
u32 param2;
u32 ret_val;
};
struct kepler_firmware_args {
u32 firmware_size;
u32 offset;
char *firmware_bin;
};
struct kepler_fault_args {
u32 dump_size;
char *dumped_data;
};
#ifdef CONFIG_COMPAT
struct kepler_firmware_args32 {
u32 firmware_size;
u32 offset;
compat_uptr_t firmware_bin;
};
struct kepler_fault_args32 {
u32 dump_size;
compat_uptr_t dumped_data;
};
#endif
/* gnss status */
#define HDLC_HEADER_MAX_SIZE 6 /* fmt 3, raw 6, rfs 6 */
#define PSD_DATA_CHID_BEGIN 0x2A
#define PSD_DATA_CHID_END 0x38
#define PS_DATA_CH_LAST 24
#define IP6VERSION 6
#define GNSS_MAX_NAME_LEN 64
#define MAX_HEX_LEN 16
#define MAX_NAME_LEN 64
#define MAX_PREFIX_LEN 128
#define MAX_STR_LEN 256
#define NO_WAKEUP_LOCK
/* Will the gnss_ctl structure use this state, or the status defined below? */
enum gnss_state {
STATE_OFFLINE,
STATE_FIRMWARE_DL, /* no firmware */
STATE_ONLINE,
STATE_HOLD_RESET,
STATE_FAULT, /* ACTIVE/WDT */
};
static const char * const gnss_state_str[] = {
[STATE_OFFLINE] = "OFFLINE",
[STATE_FIRMWARE_DL] = "FIRMWARE_DL",
[STATE_ONLINE] = "ONLINE",
[STATE_HOLD_RESET] = "HOLD_RESET",
[STATE_FAULT] = "FAULT",
};
enum direction {
TX = 0,
AP2GNSS = 0,
RX = 1,
GNSS2AP = 1,
MAX_DIR = 2
};
/**
 * get_gnss_state_str - return the gnss_state string
 * @state: the state of a GNSS
 */
static inline const char *get_gnss_state_str(int state)
{
return gnss_state_str[state];
}
struct header_data {
char hdr[HDLC_HEADER_MAX_SIZE];
u32 len;
u32 frag_len;
char start; /*hdlc start header 0x7F*/
};
struct fmt_hdr {
u16 len;
u8 control;
} __packed;
/* for fragmented data from link devices */
struct fragmented_data {
struct sk_buff *skb_recv;
struct header_data h_data;
struct exynos_frame_data f_data;
/* page alloc fail retry*/
unsigned realloc_offset;
};
#define fragdata(iod, ld) (&(iod)->fragments)
/** struct skbuff_private - private data of struct sk_buff
* this is matched to char cb[48] of struct sk_buff
*/
struct skbuff_private {
struct io_device *iod;
struct link_device *ld;
struct io_device *real_iod; /* for rx multipdp */
/* for time-stamping */
struct timespec ts;
u32 lnk_hdr:1,
reserved:15,
exynos_ch:8,
frm_ctrl:8;
	/* for indicating that there is only one IPC frame in an skb */
bool single_frame;
} __packed;
static inline struct skbuff_private *skbpriv(struct sk_buff *skb)
{
BUILD_BUG_ON(sizeof(struct skbuff_private) > sizeof(skb->cb));
return (struct skbuff_private *)&skb->cb;
}
struct meminfo {
unsigned int base_addr;
unsigned int size;
};
struct io_device {
/* Name of the IO device */
char *name;
/* Link to link device */
struct link_device *ld;
/* Reference count */
atomic_t opened;
/* Wait queue for the IO device */
wait_queue_head_t wq;
/* Misc and net device structures for the IO device */
struct miscdevice miscdev;
/* The name of the application that will use this IO device */
char *app;
bool link_header;
/* Rx queue of sk_buff */
struct sk_buff_head sk_rx_q;
	/*
	 * Work for each io device; when delayed work is needed,
	 * use this for the private io device RX action.
	 */
struct delayed_work rx_work;
struct fragmented_data fragments;
/* called from linkdevice when a packet arrives for this iodevice */
int (*recv_skb)(struct io_device *iod, struct link_device *ld,
struct sk_buff *skb);
int (*recv_skb_single)(struct io_device *iod, struct link_device *ld,
struct sk_buff *skb);
/* inform the IO device that the gnss is now online or offline or
* crashing or whatever...
*/
void (*gnss_state_changed)(struct io_device *iod, enum gnss_state);
struct gnss_ctl *gc;
struct wake_lock wakelock;
long waketime;
struct exynos_seq_num seq_num;
	/* DO NOT use __current_link directly;
	 * you MUST use skbpriv(skb)->ld in mc, link, etc.
	 */
struct link_device *__current_link;
};
#define to_io_device(misc) container_of(misc, struct io_device, miscdev)
/* get_current_link, set_current_link don't need to use locks.
 * On ARM, set_current_link and get_current_link each compile to a single
 * instruction (str, ldr), just like atomic_set and atomic_read, and the
 * ordering of set_current_link and get_current_link does not matter.
*/
#define get_current_link(iod) ((iod)->__current_link)
#define set_current_link(iod, ld) ((iod)->__current_link = (ld))
struct KEP_IOCTL_BCMD
{
u16 bcmd_id;
u16 flags;
u32 param1;
u32 param2;
};
struct link_device {
struct list_head list;
char *name;
/* Modem data */
struct gnss_data *mdm_data;
/* Modem control */
struct gnss_ctl *gc;
/* link to io device */
struct io_device *iod;
/* completion for bcmd messages */
struct completion bcmd_cmpl;
/* completion for waiting for link initialization */
struct completion init_cmpl;
struct io_device *fmt_iod;
/* TX queue of socket buffers */
struct sk_buff_head sk_fmt_tx_q;
struct sk_buff_head *skb_txq;
/* RX queue of socket buffers */
struct sk_buff_head sk_fmt_rx_q;
struct sk_buff_head *skb_rxq;
int timeout_cnt;
struct workqueue_struct *tx_wq;
struct work_struct tx_work;
struct delayed_work tx_delayed_work;
struct delayed_work *tx_dwork;
struct delayed_work fmt_tx_dwork;
struct workqueue_struct *rx_wq;
struct work_struct rx_work;
struct delayed_work rx_delayed_work;
/* called by an io_device when it has a packet to send over link
* - the io device is passed so the link device can look at id and
* format fields to determine how to route/format the packet
*/
int (*send)(struct link_device *ld, struct io_device *iod,
struct sk_buff *skb);
/* method for GNSS BCMD Request */
	int (*req_bcmd)(struct link_device *ld, u16 cmd_id, u16 flags,
u32 param1, u32 param2);
};
/** rx_alloc_skb - allocate an skbuff and set skb's iod, ld
* @length: length to allocate
* @iod: struct io_device *
* @ld: struct link_device *
*
* %NULL is returned if there is no free memory.
*/
static inline struct sk_buff *rx_alloc_skb(unsigned int length,
struct io_device *iod, struct link_device *ld)
{
struct sk_buff *skb;
skb = alloc_skb(length, GFP_ATOMIC);
if (likely(skb)) {
skbpriv(skb)->iod = iod;
skbpriv(skb)->ld = ld;
}
return skb;
}
enum gnss_mode;
enum gnss_int_clear;
enum gnss_tcxo_mode;
struct gnssctl_pmu_ops {
int (*init_conf)(struct gnss_ctl *);
int (*hold_reset)(struct gnss_ctl *);
int (*release_reset)(struct gnss_ctl *);
int (*power_on)(struct gnss_ctl *, enum gnss_mode);
int (*clear_int)(struct gnss_ctl *, enum gnss_int_clear);
int (*change_tcxo_mode)(struct gnss_ctl *, enum gnss_tcxo_mode);
};
struct gnssctl_ops {
int (*gnss_hold_reset)(struct gnss_ctl *);
int (*gnss_release_reset)(struct gnss_ctl *);
int (*gnss_power_on)(struct gnss_ctl *);
int (*gnss_req_fault_info)(struct gnss_ctl *, u32 **);
int (*suspend_gnss_ctrl)(struct gnss_ctl *);
int (*resume_gnss_ctrl)(struct gnss_ctl *);
int (*change_sensor_gpio)(struct gnss_ctl *);
int (*set_sensor_power)(struct gnss_ctl *, unsigned long);
};
struct gnss_ctl {
struct device *dev;
char *name;
struct gnss_data *gnss_data;
enum gnss_state gnss_state;
struct clk *ccore_qch_lh_gnss;
#ifdef USE_IOREMAP_NOPMU
void __iomem *pmu_reg;
#endif
struct delayed_work dwork;
struct work_struct work;
struct gnssctl_ops ops;
struct gnssctl_pmu_ops pmu_ops;
struct io_device *iod;
/* Wakelock for gnss_ctl */
struct wake_lock gc_fault_wake_lock;
struct wake_lock gc_wake_lock;
int wake_lock_irq;
struct completion fault_cmpl;
struct pinctrl *gnss_gpio;
struct pinctrl_state *gnss_sensor_gpio;
struct regulator *vdd_sensor_reg;
};
unsigned long shm_get_phys_base(void);
unsigned long shm_get_phys_size(void);
unsigned long shm_get_ipc_rgn_size(void);
unsigned long shm_get_ipc_rgn_offset(void);
extern int exynos_init_gnss_io_device(struct io_device *iod);
#define STD_UDL_STEP_MASK 0x0000000F
#define STD_UDL_SEND 0x1
#define STD_UDL_CRC 0xC
struct std_dload_info {
u32 size;
u32 mtu;
u32 num_frames;
} __packed;
u32 std_udl_get_cmd(u8 *frm);
bool std_udl_with_payload(u32 cmd);
int init_gnssctl_device(struct gnss_ctl *mc, struct gnss_data *pdata);
struct link_device *gnss_shmem_create_link_device(struct platform_device *pdev);
#endif

View file

@ -0,0 +1,128 @@
/*
* Copyright (C) 2011 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/miscdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <net/ip.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <linux/rtc.h>
#include <linux/time.h>
#include <linux/uaccess.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/wait.h>
#include <linux/time.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/irq.h>
#include <linux/gpio.h>
#include <linux/delay.h>
#include <linux/wakelock.h>
#include "gnss_prj.h"
#include "gnss_utils.h"
static const char *hex = "0123456789abcdef";
/* dump2hex
 * Dump data to hex as fast as possible.
 * @buff must be at least "@len * 3" bytes long,
 * because each data byte takes 3 bytes to print.
 */
static inline int dump2hex(char *buff, const char *data, size_t len)
{
char *dest = buff;
int i;
for (i = 0; i < len; i++) {
*dest++ = hex[(data[i] >> 4) & 0xf];
*dest++ = hex[data[i] & 0xf];
*dest++ = ' ';
}
if (likely(len > 0))
		dest--; /* the last space will be overwritten with the NUL */
*dest = '\0';
return dest - buff;
}
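/*
 * Worked example: for data = { 0xde, 0xad, 0xbe, 0xef } and len = 4, dump2hex()
 * produces "de ad be ef" (11 characters plus the terminating NUL), so a buffer
 * of len * 3 = 12 bytes is exactly enough.
 */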
static inline void pr_ipc_msg(int level, u8 ch, const char *prefix,
const u8 *msg, unsigned int len)
{
size_t offset;
char str[MAX_STR_LEN] = {0, };
if (prefix)
snprintf(str, MAX_STR_LEN, "%s", prefix);
offset = strlen(str);
dump2hex((str + offset), msg, (len > MAX_HEX_LEN ? MAX_HEX_LEN : len));
gif_err("%s\n", str);
}
void gnss_log_ipc_pkt(struct sk_buff *skb, enum direction dir)
{
struct io_device *iod;
struct link_device *ld;
char prefix[MAX_PREFIX_LEN] = {0, };
unsigned int hdr_len;
unsigned int msg_len;
u8 *msg;
u8 *hdr;
u8 ch;
/*
if (!log_info.debug_log)
return;
*/
iod = skbpriv(skb)->iod;
ld = skbpriv(skb)->ld;
ch = skbpriv(skb)->exynos_ch;
/**
* Make a string of the route
*/
snprintf(prefix, MAX_PREFIX_LEN, "%s %s: %s: ",
"LNK", dir_str(dir), ld->name);
hdr = skbpriv(skb)->lnk_hdr ? skb->data : NULL;
hdr_len = hdr ? EXYNOS_HEADER_SIZE : 0;
if (hdr_len > 0) {
char *separation = " | ";
size_t offset = strlen(prefix);
dump2hex((prefix + offset), hdr, hdr_len);
strncat(prefix, separation, strlen(separation));
}
/**
* Print an IPC message with the prefix
*/
msg = skb->data + hdr_len;
msg_len = (skb->len - hdr_len);
pr_ipc_msg(log_info.fmt_msg, ch, prefix, msg, msg_len);
}

View file

@ -0,0 +1,50 @@
/*
* Copyright (C) 2011 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __GNSS_UTILS_H__
#define __GNSS_UTILS_H__
#include "gnss_prj.h"
struct __packed gnss_log {
u8 fmt_msg;
u8 boot_msg;
u8 dump_msg;
u8 rfs_msg;
u8 log_msg;
u8 ps_msg;
u8 router_msg;
u8 debug_log;
};
extern struct gnss_log log_info;
static const char * const direction_string[] = {
[TX] = "TX",
[RX] = "RX"
};
static inline const char *dir_str(enum direction dir)
{
if (unlikely(dir >= MAX_DIR))
return "INVALID";
else
return direction_string[dir];
}
/* print IPC message packet */
void gnss_log_ipc_pkt(struct sk_buff *skb, enum direction dir);
#endif/*__GNSS_UTILS_H__*/

View file

@ -0,0 +1,155 @@
/*
* Copyright (C) 2010 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __EXYNOS_IPC_H__
#define __EXYNOS_IPC_H__
#include <linux/types.h>
#include "gnss.h"
#define EXYNOS_SINGLE_MASK (0b11000000)
#define EXYNOS_MULTI_START_MASK (0b10000000)
#define EXYNOS_MULTI_LAST_MASK (0b01000000)
#define EXYNOS_START_MASK 0xABCD
#define EXYNOS_START_OFFSET 0
#define EXYNOS_START_SIZE 2
#define EXYNOS_FRAME_SEQ_OFFSET 2
#define EXYNOS_FRAME_SIZE 2
#define EXYNOS_FRAG_CONFIG_OFFSET 4
#define EXYNOS_FRAG_CONFIG_SIZE 2
#define EXYNOS_LEN_OFFSET 6
#define EXYNOS_LEN_SIZE 2
#define EXYNOS_CH_ID_OFFSET 8
#define EXYNOS_CH_SIZE 1
#define EXYNOS_CH_SEQ_OFFSET 9
#define EXYNOS_CH_SEQ_SIZE 1
#define EXYNOS_HEADER_SIZE 12
#define EXYNOS_DATA_LOOPBACK_CHANNEL 82
#define EXYNOS_FMT_NUM 1
#define EXYNOS_RFS_NUM 10
struct __packed frag_config {
u8 frame_first:1,
frame_last:1,
packet_index:6;
u8 frame_index;
};
/* EXYNOS link-layer header */
struct __packed exynos_link_header {
u16 seq;
struct frag_config cfg;
u16 len;
u16 reserved_1;
u8 ch_id;
u8 ch_seq;
u16 reserved_2;
};
struct __packed exynos_seq_num {
u16 frame_cnt;
u8 ch_cnt[255];
};
struct exynos_frame_data {
/* Frame length calculated from the length fields */
unsigned int len;
/* The length of link layer header */
unsigned int hdr_len;
/* The length of received header */
unsigned int hdr_rcvd;
/* The length of link layer payload */
unsigned int pay_len;
/* The length of received data */
unsigned int pay_rcvd;
/* The length of link layer padding */
unsigned int pad_len;
/* The length of received padding */
unsigned int pad_rcvd;
/* Header buffer */
u8 hdr[EXYNOS_HEADER_SIZE];
};
static inline bool exynos_start_valid(u8 *frm)
{
u16 cfg = *(u16 *)(frm + EXYNOS_START_OFFSET);
return cfg == EXYNOS_START_MASK ? true : false;
}
static inline bool exynos_multi_start_valid(u8 *frm)
{
u16 cfg = *(u16 *)(frm + EXYNOS_FRAG_CONFIG_OFFSET);
return ((cfg >> 8) & EXYNOS_MULTI_START_MASK) == EXYNOS_MULTI_START_MASK;
}
static inline bool exynos_multi_last_valid(u8 *frm)
{
u16 cfg = *(u16 *)(frm + EXYNOS_FRAG_CONFIG_OFFSET);
return ((cfg >> 8) & EXYNOS_MULTI_LAST_MASK) == EXYNOS_MULTI_LAST_MASK;
}
static inline bool exynos_single_frame(u8 *frm)
{
u16 cfg = *(u16 *)(frm + EXYNOS_FRAG_CONFIG_OFFSET);
return ((cfg >> 8) & EXYNOS_SINGLE_MASK) == EXYNOS_SINGLE_MASK;
}
static inline u8 exynos_get_ch(u8 *frm)
{
return frm[EXYNOS_CH_ID_OFFSET];
}
static inline unsigned int exynos_calc_padding_size(unsigned int len)
{
unsigned int residue = len & 0x3;
return residue ? (4 - residue) : 0;
}
static inline unsigned int exynos_get_frame_len(u8 *frm)
{
return (unsigned int)*(u16 *)(frm + EXYNOS_LEN_OFFSET);
}
static inline unsigned int exynos_get_total_len(u8 *frm)
{
unsigned int len;
unsigned int pad;
len = exynos_get_frame_len(frm);
	pad = exynos_calc_padding_size(len);
return len + pad;
}
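/*
 * Worked example: a frame whose length field is 13 has residue 13 & 0x3 = 1,
 * so exynos_calc_padding_size() returns 4 - 1 = 3 and exynos_get_total_len()
 * reports 13 + 3 = 16 bytes; a 12-byte frame is already 4-byte aligned and
 * gets no padding.
 */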
static inline bool exynos_padding_exist(u8 *frm)
{
return exynos_calc_padding_size(exynos_get_frame_len(frm)) ? true : false;
}
#endif

View file

@ -0,0 +1,215 @@
/*
* Copyright (C) 2014 Samsung Electronics.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __GNSS_IF_H__
#define __GNSS_IF_H__
#include <linux/platform_device.h>
#include <linux/miscdevice.h>
/**
* struct gnss_io_t - declaration for io_device
* @name: device name
* @id: for SIPC4, contains format & channel information
* (id & 11100000b)>>5 = format (eg, 0=FMT, 1=RAW, 2=RFS)
* (id & 00011111b) = channel (valid only if format is RAW)
* for SIPC5, contains only 8-bit channel ID
* @format: device format
* @io_type: type of this io_device
* @links: list of link_devices to use this io_device
* for example, if you want to use DPRAM and USB in an io_device.
* .links = LINKTYPE(LINKDEV_DPRAM) | LINKTYPE(LINKDEV_USB)
* @tx_link: when you use 2+ link_devices, set the link for TX.
 * If you define multiple link_devices in @links,
 * you can receive data from all of them, but you cannot send to all;
 * TX always uses a single link_device.
* @app: the name of the application that will use this IO device
*
*/
struct gnss_io_t {
char *name;
int id;
char *app;
};
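/*
 * Illustrative decoding of @id for the SIPC4 layout described above (these
 * helpers are not part of the driver):
 *
 *	format  = (io->id & 0xE0) >> 5;		0 = FMT, 1 = RAW, 2 = RFS
 *	channel =  io->id & 0x1F;		valid only for RAW
 *
 * For SIPC5, @id is simply the 8-bit channel number.
 */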
#define STR_SHMEM_BASE "shmem_base"
#define SHMEM_SIZE_1MB (1 << 20) /* 1 MB */
#define SHMEM_SIZE_2MB (2 << 20) /* 2 MB */
#define SHMEM_SIZE_4MB (4 << 20) /* 4 MB */
enum gnss_bcmd_ctrl {
CTRL0,
CTRL1,
CTRL2,
CTRL3,
BCMD_CTRL_COUNT,
};
enum gnss_reg_type {
GNSS_REG_RX_IPC_MSG,
GNSS_REG_TX_IPC_MSG,
GNSS_REG_WAKE_LOCK,
GNSS_REG_RX_HEAD,
GNSS_REG_RX_TAIL,
GNSS_REG_TX_HEAD,
GNSS_REG_TX_TAIL,
GNSS_REG_COUNT,
};
enum gnss_ipc_vector {
GNSS_IPC_MBOX,
GNSS_IPC_SHMEM,
GNSS_IPC_COUNT,
};
struct gnss_mbox {
int int_ap2gnss_bcmd;
int int_ap2gnss_req_fault_info;
int int_ap2gnss_ipc_msg;
int int_ap2gnss_ack_wake_set;
int int_ap2gnss_ack_wake_clr;
int irq_gnss2ap_bcmd;
int irq_gnss2ap_rsp_fault_info;
int irq_gnss2ap_ipc_msg;
int irq_gnss2ap_req_wake_clr;
unsigned reg_bcmd_ctrl[BCMD_CTRL_COUNT];
};
struct gnss_shared_reg_value {
int index;
u32 __iomem *addr;
};
struct gnss_shared_reg {
const char *name;
struct gnss_shared_reg_value value;
u32 device;
};
struct gnss_fault_data_area_value {
u32 index;
u8 __iomem *addr;
};
struct gnss_fault_data_area {
const char *name;
struct gnss_fault_data_area_value value;
u32 size;
u32 device;
};
struct gnss_pmu {
int (*power)(int);
int (*init)(void);
int (*get_pwr_status)(void);
int (*stop)(void);
int (*start)(void);
int (*clear_cp_fail)(void);
int (*clear_cp_wdt)(void);
};
/* platform data */
struct gnss_data {
char *name;
char *device_node_name;
int irq_gnss_active;
int irq_gnss_wdt;
int irq_gnss_wakeup;
struct gnss_mbox *mbx;
struct gnss_shared_reg *reg[GNSS_REG_COUNT];
struct gnss_fault_data_area fault_info;
/* Information of IO devices */
struct gnss_io_t *iodev;
/* SHDMEM ADDR */
u32 shmem_base;
u32 shmem_size;
u32 ipcmem_offset;
u32 ipc_size;
u32 ipc_reg_cnt;
u8 __iomem *gnss_base;
u8 __iomem *ipc_base;
};
struct shmem_conf {
u32 shmem_base;
u32 shmem_size;
};
#ifdef CONFIG_OF
#define gif_dt_read_enum(np, prop, dest) \
do { \
u32 val; \
if (of_property_read_u32(np, prop, &val)) \
return -EINVAL; \
dest = (__typeof__(dest))(val); \
} while (0)
#define gif_dt_read_bool(np, prop, dest) \
do { \
u32 val; \
if (of_property_read_u32(np, prop, &val)) \
return -EINVAL; \
dest = val ? true : false; \
} while (0)
#define gif_dt_read_string(np, prop, dest) \
do { \
if (of_property_read_string(np, prop, \
(const char **)&dest)) \
return -EINVAL; \
} while (0)
#define gif_dt_read_u32(np, prop, dest) \
do { \
u32 val; \
if (of_property_read_u32(np, prop, &val)) \
return -EINVAL; \
dest = val; \
} while (0)
#define gif_dt_read_u32_array(np, prop, dest, sz) \
do { \
if (of_property_read_u32_array(np, prop, dest, (sz))) \
return -EINVAL; \
} while (0)
#endif
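/*
 * Note that the gif_dt_read_*() macros above execute "return -EINVAL" in the
 * calling function when a property is missing, so they can only be used in
 * parse helpers that return int. A minimal (hypothetical) example:
 *
 *	static int parse_example(struct device_node *np, struct gnss_data *pdata)
 *	{
 *		gif_dt_read_u32(np, "shmem,ipc_size", pdata->ipc_size);
 *		return 0;
 *	}
 */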
#define LOG_TAG "gif: "
#define CALLEE (__func__)
#define CALLER (__builtin_return_address(0))
#define gif_err_limited(fmt, ...) \
printk_ratelimited(KERN_ERR "%s: " pr_fmt(fmt), __func__, ##__VA_ARGS__)
#define gif_err(fmt, ...) \
pr_err(LOG_TAG "%s: " pr_fmt(fmt), __func__, ##__VA_ARGS__)
#define gif_debug(fmt, ...) \
pr_debug(LOG_TAG "%s: " pr_fmt(fmt), __func__, ##__VA_ARGS__)
#define gif_info(fmt, ...) \
pr_info(LOG_TAG "%s: " pr_fmt(fmt), __func__, ##__VA_ARGS__)
#define gif_trace(fmt, ...) \
printk(KERN_DEBUG "gif: %s: %d: called(%pF): " fmt, \
__func__, __LINE__, __builtin_return_address(0), ##__VA_ARGS__)
#endif

View file

@ -0,0 +1,336 @@
#include <linux/io.h>
#include <linux/cpumask.h>
#include <linux/suspend.h>
#include <linux/notifier.h>
#include <linux/bug.h>
#include <linux/delay.h>
#include <linux/clk.h>
#include <linux/smc.h>
#include <soc/samsung/exynos-pmu.h>
#include "pmu-gnss.h"
static void __set_shdmem_size(struct gnss_ctl *gc, u32 reg_offset, u32 memsz)
{
u32 tmp;
memsz = (memsz >> MEMSIZE_SHIFT);
#ifdef USE_IOREMAP_NOPMU
{
u32 memcfg_val;
memcfg_val = __raw_readl(gc->pmu_reg + reg_offset);
memcfg_val &= ~(MEMSIZE_MASK << MEMSIZE_OFFSET);
memcfg_val |= (memsz << MEMSIZE_OFFSET);
__raw_writel(memcfg_val, gc->pmu_reg + reg_offset);
tmp = __raw_readl(gc->pmu_reg + reg_offset);
}
#else
exynos_pmu_update(reg_offset, MEMSIZE_MASK << MEMSIZE_OFFSET,
memsz << MEMSIZE_OFFSET);
exynos_pmu_read(reg_offset, &tmp);
#endif
}
static void set_shdmem_size(struct gnss_ctl *gc, u32 memsz)
{
gif_err("[GNSS]Set shared mem size: %dB\n", memsz);
#if !defined(CONFIG_SOC_EXYNOS7870) && !defined(CONFIG_SOC_EXYNOS7880)
__set_shdmem_size(gc, EXYNOS_PMU_GNSS2AP_MEM_CONFIG, memsz);
__set_shdmem_size(gc, EXYNOS_PMU_GNSS2AP_MEM_CONFIG3, memsz);
#else
__set_shdmem_size(gc, EXYNOS_PMU_GNSS2AP_MEM_CONFIG, memsz);
#endif
}
static void __set_shdmem_base(struct gnss_ctl *gc, u32 reg_offset, u32 shmem_base)
{
u32 tmp, base_addr;
base_addr = (shmem_base >> MEMBASE_ADDR_SHIFT);
#ifdef USE_IOREMAP_NOPMU
{
u32 memcfg_val;
gif_err("Access Reg : 0x%p\n", gc->pmu_reg + reg_offset);
memcfg_val = __raw_readl(gc->pmu_reg + reg_offset);
memcfg_val &= ~(MEMBASE_ADDR_MASK << MEMBASE_ADDR_OFFSET);
memcfg_val |= (base_addr << MEMBASE_ADDR_OFFSET);
__raw_writel(memcfg_val, gc->pmu_reg + reg_offset);
tmp = __raw_readl(gc->pmu_reg + reg_offset);
}
#else
exynos_pmu_update(reg_offset, MEMBASE_ADDR_MASK << MEMBASE_ADDR_OFFSET,
base_addr << MEMBASE_ADDR_OFFSET);
exynos_pmu_read(reg_offset, &tmp);
#endif
}
static void set_shdmem_base(struct gnss_ctl *gc, u32 shmem_base)
{
gif_err("[GNSS]Set shared mem baseaddr : 0x%x\n", shmem_base);
#if !defined(CONFIG_SOC_EXYNOS7870) && !defined(CONFIG_SOC_EXYNOS7880)
__set_shdmem_base(gc, EXYNOS_PMU_GNSS2AP_MEM_CONFIG1, shmem_base);
__set_shdmem_base(gc, EXYNOS_PMU_GNSS2AP_MEM_CONFIG2, shmem_base);
#else
__set_shdmem_base(gc, EXYNOS_PMU_GNSS2AP_MEM_CONFIG, shmem_base);
#endif
}
static void exynos_sys_powerdown_conf_gnss(struct gnss_ctl *gc)
{
#ifdef USE_IOREMAP_NOPMU
__raw_writel(0, gc->pmu_reg + EXYNOS_PMU_CENTRAL_SEQ_GNSS_CONFIGURATION);
__raw_writel(0, gc->pmu_reg + EXYNOS_PMU_RESET_AHEAD_GNSS_SYS_PWR_REG);
__raw_writel(0, gc->pmu_reg + EXYNOS_PMU_CLEANY_BUS_SYS_PWR_REG);
__raw_writel(0, gc->pmu_reg + EXYNOS_PMU_LOGIC_RESET_GNSS_SYS_PWR_REG);
__raw_writel(0, gc->pmu_reg + EXYNOS_PMU_TCXO_GATE_GNSS_SYS_PWR_REG);
__raw_writel(0, gc->pmu_reg + EXYNOS_PMU_RESET_ASB_GNSS_SYS_PWR_REG);
#else
exynos_pmu_write(EXYNOS_PMU_CENTRAL_SEQ_GNSS_CONFIGURATION, 0);
exynos_pmu_write(EXYNOS_PMU_RESET_AHEAD_GNSS_SYS_PWR_REG, 0);
exynos_pmu_write(EXYNOS_PMU_CLEANY_BUS_SYS_PWR_REG, 0);
exynos_pmu_write(EXYNOS_PMU_LOGIC_RESET_GNSS_SYS_PWR_REG, 0);
exynos_pmu_write(EXYNOS_PMU_TCXO_GATE_GNSS_SYS_PWR_REG, 0);
exynos_pmu_write(EXYNOS_PMU_RESET_ASB_GNSS_SYS_PWR_REG, 0);
#endif
}
int gnss_pmu_clear_interrupt(struct gnss_ctl *gc, enum gnss_int_clear gnss_int)
{
int ret = 0;
gif_debug("%s\n", __func__);
#ifdef USE_IOREMAP_NOPMU
{
u32 reg_val = 0;
reg_val = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
if (gnss_int == GNSS_INT_WAKEUP_CLEAR) {
reg_val |= GNSS_WAKEUP_REQ_CLR;
} else if (gnss_int == GNSS_INT_ACTIVE_CLEAR) {
reg_val |= GNSS_ACTIVE_REQ_CLR;
} else if (gnss_int == GNSS_INT_WDT_RESET_CLEAR) {
			reg_val |= GNSS_RESET_REQ_CLR;
} else {
gif_err("Unexpected interrupt value!\n");
return -EIO;
}
__raw_writel(reg_val, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
}
#else
if (gnss_int == GNSS_INT_WAKEUP_CLEAR) {
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS,
GNSS_WAKEUP_REQ_CLR, GNSS_WAKEUP_REQ_CLR);
} else if (gnss_int == GNSS_INT_ACTIVE_CLEAR) {
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS,
GNSS_ACTIVE_REQ_CLR, GNSS_ACTIVE_REQ_CLR);
} else if (gnss_int == GNSS_INT_WDT_RESET_CLEAR) {
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS,
GNSS_RESET_REQ_CLR, GNSS_RESET_REQ_CLR);
} else {
gif_err("Unexpected interrupt value!\n");
return -EIO;
}
if (ret < 0) {
gif_err("ERR! GNSS Reset Fail: %d\n", ret);
return -EIO;
}
#endif
return ret;
}
int gnss_pmu_release_reset(struct gnss_ctl *gc)
{
u32 gnss_ctrl = 0;
int ret = 0;
#ifdef USE_IOREMAP_NOPMU
gnss_ctrl = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
{
u32 tmp_reg_val;
if (!(gnss_ctrl & GNSS_PWRON)) {
gnss_ctrl |= GNSS_PWRON;
__raw_writel(gnss_ctrl, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
}
tmp_reg_val = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_S);
tmp_reg_val |= GNSS_START;
__raw_writel(tmp_reg_val, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_S);
gif_err("PMU_GNSS_CTRL_S : 0x%x\n",
__raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_S));
}
#else
exynos_pmu_read(EXYNOS_PMU_GNSS_CTRL_NS, &gnss_ctrl);
if (!(gnss_ctrl & GNSS_PWRON)) {
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS, GNSS_PWRON,
GNSS_PWRON);
if (ret < 0) {
gif_err("ERR! write Fail: %d\n", ret);
ret = -EIO;
}
}
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_S, GNSS_START, GNSS_START);
	if (ret < 0) {
		gif_err("ERR! GNSS Release Fail: %d\n", ret);
	} else {
		exynos_pmu_read(EXYNOS_PMU_GNSS_CTRL_NS, &gnss_ctrl);
		gif_info("PMU_GNSS_CTRL_NS[0x%08x]\n", gnss_ctrl);
	}
#endif
return ret;
}
int gnss_pmu_hold_reset(struct gnss_ctl *gc)
{
int ret = 0;
u32 __maybe_unused gnss_ctrl;
/* set sys_pwr_cfg registers */
exynos_sys_powerdown_conf_gnss(gc);
#ifdef USE_IOREMAP_NOPMU
{
u32 reg_val;
reg_val = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
reg_val |= GNSS_RESET_SET;
__raw_writel(reg_val, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
}
#else
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS, GNSS_RESET_SET,
GNSS_RESET_SET);
if (ret < 0) {
gif_err("ERR! GNSS Reset Fail: %d\n", ret);
return -1;
}
#endif
/* some delay */
cpu_relax();
usleep_range(80, 100);
return ret;
}
int gnss_pmu_power_on(struct gnss_ctl *gc, enum gnss_mode mode)
{
u32 gnss_ctrl;
int ret = 0;
gif_err("mode[%d]\n", mode);
#ifdef USE_IOREMAP_NOPMU
gnss_ctrl = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
if (mode == GNSS_POWER_ON) {
u32 tmp_reg_val;
if (!(gnss_ctrl & GNSS_PWRON)) {
gnss_ctrl |= GNSS_PWRON;
__raw_writel(gnss_ctrl, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
}
tmp_reg_val = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_S);
tmp_reg_val |= GNSS_START;
__raw_writel(tmp_reg_val, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_S);
} else {
gif_err("Not supported!!!(%d)\n", mode);
return -1;
}
#else
exynos_pmu_read(EXYNOS_PMU_GNSS_CTRL_NS, &gnss_ctrl);
if (mode == GNSS_POWER_ON) {
if (!(gnss_ctrl & GNSS_PWRON)) {
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS,
GNSS_PWRON, GNSS_PWRON);
if (ret < 0)
gif_err("ERR! write Fail: %d\n", ret);
}
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_S, GNSS_START,
GNSS_START);
if (ret < 0)
gif_err("ERR! write Fail: %d\n", ret);
} else {
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS, GNSS_PWRON, 0);
if (ret < 0) {
gif_err("ERR! write Fail: %d\n", ret);
return ret;
}
/* set sys_pwr_cfg registers */
exynos_sys_powerdown_conf_gnss(gc);
}
#endif
return ret;
}
int gnss_change_tcxo_mode(struct gnss_ctl *gc, enum gnss_tcxo_mode mode)
{
int ret = 0;
#ifdef USE_IOREMAP_NOPMU
{
u32 regval, tmp;
regval = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
if (mode == TCXO_SHARED_MODE) {
gif_err("Change TCXO mode to Shared Mode(%d)\n", mode);
regval &= ~TCXO_26M_40M_SEL;
__raw_writel(regval, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
} else if (mode == TCXO_NON_SHARED_MODE) {
			gif_err("Change TCXO mode to NON-shared Mode(%d)\n", mode);
regval |= TCXO_26M_40M_SEL;
__raw_writel(regval, gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
} else
			gif_err("Unexpected TCXO mode (%d)\n", mode);
tmp = __raw_readl(gc->pmu_reg + EXYNOS_PMU_GNSS_CTRL_NS);
if (tmp != regval) {
gif_err("ERR! GNSS change tcxo: %d\n", ret);
return -1;
}
}
#else
if (mode == TCXO_SHARED_MODE) {
gif_err("Change TCXO mode to Shared Mode(%d)\n", mode);
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS,
TCXO_26M_40M_SEL, 0);
} else if (mode == TCXO_NON_SHARED_MODE) {
		gif_err("Change TCXO mode to NON-shared Mode(%d)\n", mode);
ret = exynos_pmu_update(EXYNOS_PMU_GNSS_CTRL_NS,
TCXO_26M_40M_SEL, TCXO_26M_40M_SEL);
} else
		gif_err("Unexpected TCXO mode (%d)\n", mode);
if (ret < 0) {
gif_err("ERR! GNSS change tcxo: %d\n", ret);
return -1;
}
#endif
return 0;
}
int gnss_pmu_init_conf(struct gnss_ctl *gc)
{
u32 shmem_size = gc->gnss_data->shmem_size;
u32 shmem_base = gc->gnss_data->shmem_base;
set_shdmem_size(gc, shmem_size);
set_shdmem_base(gc, shmem_base);
#ifndef USE_IOREMAP_NOPMU
/* set access window for GNSS */
exynos_pmu_write(EXYNOS_PMU_GNSS2AP_MIF0_PERI_ACCESS_CON, 0x0);
exynos_pmu_write(EXYNOS_PMU_GNSS2AP_MIF1_PERI_ACCESS_CON, 0x0);
#if !defined(CONFIG_SOC_EXYNOS7870)
exynos_pmu_write(EXYNOS_PMU_GNSS2AP_MIF2_PERI_ACCESS_CON, 0x0);
exynos_pmu_write(EXYNOS_PMU_GNSS2AP_MIF3_PERI_ACCESS_CON, 0x0);
#endif
exynos_pmu_write(EXYNOS_PMU_GNSS2AP_PERI_ACCESS_WIN, 0x0);
#endif
return 0;
}
