Fixed MTP to work with TWRP

awab228 2018-06-19 23:16:04 +02:00
commit f6dfaef42e
50820 changed files with 20846062 additions and 0 deletions

326
drivers/base/Kconfig Normal file

@@ -0,0 +1,326 @@
menu "Generic Driver Options"
config UEVENT_HELPER
bool "Support for uevent helper"
default y
help
The uevent helper program is forked by the kernel for
every uevent.
Before the switch to the netlink-based uevent source, this was
used to hook hotplug scripts into kernel device events. It
usually pointed to a shell script at /sbin/hotplug.
This should not be used today, because usual systems create
many events at bootup or device discovery in a very short time
frame. One forked process per event can create so many processes
that it creates a high system load, or on smaller systems
it is known to create out-of-memory situations during bootup.
config UEVENT_HELPER_PATH
string "path to uevent helper"
depends on UEVENT_HELPER
default ""
help
To disable user space helper program execution by default,
specify an empty string here. This setting can still be altered
via /proc/sys/kernel/hotplug or via /sys/kernel/uevent_helper
later at runtime.
config DEVTMPFS
bool "Maintain a devtmpfs filesystem to mount at /dev"
help
This creates a tmpfs/ramfs filesystem instance early at bootup.
In this filesystem, the kernel driver core maintains device
nodes with their default names and permissions for all
registered devices with an assigned major/minor number.
Userspace can modify the filesystem content as needed, add
symlinks, and apply needed permissions.
It provides a fully functional /dev directory, where usually
udev runs on top, managing permissions and adding meaningful
symlinks.
In very limited environments, it may provide a sufficient
functional /dev without any further help. It also allows simple
rescue systems, and reliably handles dynamic major/minor numbers.
Notice: if CONFIG_TMPFS isn't enabled, the simpler ramfs
file system will be used instead.
config DEVTMPFS_MOUNT
bool "Automount devtmpfs at /dev, after the kernel mounted the rootfs"
depends on DEVTMPFS
help
This will instruct the kernel to automatically mount the
devtmpfs filesystem at /dev, directly after the kernel has
mounted the root filesystem. The behavior can be overridden
with the commandline parameter: devtmpfs.mount=0|1.
This option does not affect initramfs based booting, here
the devtmpfs filesystem always needs to be mounted manually
after the rootfs is mounted.
With this option enabled, it is possible to bring up a system in
rescue mode with init=/bin/sh, even when the /dev directory
on the rootfs is completely empty.
config STANDALONE
bool "Select only drivers that don't need compile-time external firmware"
default y
help
Select this option if you don't have magic firmware for drivers that
need it.
If unsure, say Y.
config PREVENT_FIRMWARE_BUILD
bool "Prevent firmware from being built"
default y
help
Say yes to avoid building firmware. Firmware is usually shipped
with the driver and only when updating the firmware should a
rebuild be made.
If unsure, say Y here.
config FW_LOADER
tristate "Userspace firmware loading support" if EXPERT
default y
---help---
This option is provided for the case where none of the in-tree modules
require userspace firmware loading support, but a module built
out-of-tree does.
config FIRMWARE_IN_KERNEL
bool "Include in-kernel firmware blobs in kernel binary"
depends on FW_LOADER
default y
help
The kernel source tree includes a number of firmware 'blobs'
that are used by various drivers. The recommended way to
use these is to run "make firmware_install", which, after
converting ihex files to binary, copies all of the needed
binary files in firmware/ to /lib/firmware/ on your system so
that they can be loaded by userspace helpers on request.
Enabling this option will build each required firmware blob
into the kernel directly, where request_firmware() will find
them without having to call out to userspace. This may be
useful if your root file system requires a device that uses
such firmware and you do not wish to use an initrd.
This single option controls the inclusion of firmware for
every driver that uses request_firmware() and ships its
firmware in the kernel source tree, which avoids a
proliferation of 'Include firmware for xxx device' options.
Say 'N' and let firmware be loaded from userspace.
config EXTRA_FIRMWARE
string "External firmware blobs to build into the kernel binary"
depends on FW_LOADER
help
This option allows firmware to be built into the kernel for the case
where the user either cannot or doesn't want to provide it from
userspace at runtime (for example, when the firmware in question is
required for accessing the boot device, and the user doesn't want to
use an initrd).
This option is a string and takes the (space-separated) names of the
firmware files -- the same names that appear in MODULE_FIRMWARE()
and request_firmware() in the source. These files should exist under
the directory specified by the EXTRA_FIRMWARE_DIR option, which is
by default the firmware subdirectory of the kernel source tree.
For example, you might set CONFIG_EXTRA_FIRMWARE="usb8388.bin", copy
the usb8388.bin file into the firmware directory, and build the kernel.
Then any request_firmware("usb8388.bin") will be satisfied internally
without needing to call out to userspace.
WARNING: If you include additional firmware files into your binary
kernel image that are not available under the terms of the GPL,
then it may be a violation of the GPL to distribute the resulting
image since it combines both GPL and non-GPL work. You should
consult a lawyer of your own before distributing such an image.
config EXTRA_FIRMWARE_DIR
string "Firmware blobs root directory"
depends on EXTRA_FIRMWARE != ""
default "firmware"
help
This option controls the directory in which the kernel build system
looks for the firmware files listed in the EXTRA_FIRMWARE option.
The default is firmware/ in the kernel source tree, but by changing
this option you can point it elsewhere, such as /lib/firmware/ or
some other directory containing the firmware files.
config FW_LOADER_USER_HELPER
bool
config FW_LOADER_USER_HELPER_FALLBACK
bool "Fallback user-helper invocation for firmware loading"
depends on FW_LOADER
select FW_LOADER_USER_HELPER
help
This option enables / disables the invocation of user-helper
(e.g. udev) for loading firmware files as a fallback after the
direct file loading in kernel fails. The user-mode helper is
no longer required unless you have a special firmware file that
resides in a non-standard path. Moreover, the udev support has
been deprecated upstream.
If you are unsure about this, say N here.
config WANT_DEV_COREDUMP
bool
help
Drivers should "select" this option if they desire to use the
device coredump mechanism.
config ALLOW_DEV_COREDUMP
bool "Allow device coredump" if EXPERT
default y
help
This option controls if the device coredump mechanism is available or
not; if disabled, the mechanism will be omitted even if drivers that
can use it are enabled.
Say 'N' for more sensitive systems, or for systems that should
never access this information, so that neither the code nor any
coredump data is kept.
If unsure, say Y.
config DEV_COREDUMP
bool
default y if WANT_DEV_COREDUMP
depends on ALLOW_DEV_COREDUMP
config DEBUG_DRIVER
bool "Driver Core verbose debug messages"
depends on DEBUG_KERNEL
help
Say Y here if you want the Driver core to produce a bunch of
debug messages to the system log. Select this if you are having a
problem with the driver core and want to see more of what is
going on.
If you are unsure about this, say N here.
config DEBUG_DEVRES
bool "Managed device resources verbose debug messages"
depends on DEBUG_KERNEL
help
This option enables the kernel parameter devres.log. If set to
non-zero, devres debug messages are printed. Select this if
you are having a problem with devres or want to debug
resource management for a managed device. devres.log can be
switched on and off from a sysfs node.
If you are unsure about this, say N here.
config SYS_HYPERVISOR
bool
default n
config GENERIC_CPU_DEVICES
bool
default n
config HAVE_CPU_AUTOPROBE
def_bool ARCH_HAS_CPU_AUTOPROBE
config GENERIC_CPU_AUTOPROBE
bool
depends on !ARCH_HAS_CPU_AUTOPROBE
select HAVE_CPU_AUTOPROBE
config SOC_BUS
bool
source "drivers/base/regmap/Kconfig"
config DMA_SHARED_BUFFER
bool
default n
select ANON_INODES
help
This option enables the framework for buffer-sharing between
multiple drivers. A buffer is associated with a file using driver
API extensions; the file's descriptor can then be passed on to
other drivers.
config FENCE_TRACE
bool "Enable verbose FENCE_TRACE messages"
depends on DMA_SHARED_BUFFER
help
Enable the FENCE_TRACE printks. This will add extra
spam to the console log, but will make it easier to diagnose
lockup-related problems for dma-bufs shared across multiple
devices.
config DMA_CMA
bool "DMA Contiguous Memory Allocator"
depends on HAVE_DMA_CONTIGUOUS && CMA
help
This enables the Contiguous Memory Allocator which allows drivers
to allocate big physically-contiguous blocks of memory for use with
hardware components that do not support I/O mapping or scatter-gather.
You can disable CMA by specifying "cma=0" on the kernel's command
line.
For more information see <include/linux/dma-contiguous.h>.
If unsure, say "n".
if DMA_CMA
comment "Default contiguous memory area size:"
config CMA_SIZE_MBYTES
int "Size in Mega Bytes"
depends on !CMA_SIZE_SEL_PERCENTAGE
default 16
help
Defines the size (in MiB) of the default memory area for Contiguous
Memory Allocator.
config CMA_SIZE_PERCENTAGE
int "Percentage of total memory"
depends on !CMA_SIZE_SEL_MBYTES
default 10
help
Defines the size of the default memory area for Contiguous Memory
Allocator as a percentage of the total memory in the system.
choice
prompt "Selected region size"
default CMA_SIZE_SEL_MBYTES
config CMA_SIZE_SEL_MBYTES
bool "Use mega bytes value only"
config CMA_SIZE_SEL_PERCENTAGE
bool "Use percentage value only"
config CMA_SIZE_SEL_MIN
bool "Use lower value (minimum)"
config CMA_SIZE_SEL_MAX
bool "Use higher value (maximum)"
endchoice
config CMA_ALIGNMENT
int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
range 4 12
default 8
help
The DMA mapping framework by default aligns all buffers to the smallest
PAGE_SIZE order which is greater than or equal to the requested buffer
size. This works well for buffers up to a few hundred kilobytes, but
for larger buffers it is simply a waste of memory. With this parameter
you can specify the maximum PAGE_SIZE order for contiguous buffers.
Larger buffers will be aligned only to this specified order. The
alignment is PAGE_SIZE multiplied by two to the power of the order.
For example, on a system with 4KiB pages an order value of 8 caps
the alignment at 4KiB << 8 = 1MiB.
If unsure, leave the default value "8".
endif
endmenu
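
As background for the FW_LOADER / EXTRA_FIRMWARE options above: a driver requests its blob by name through request_firmware(), and the kernel satisfies the request from a built-in blob (EXTRA_FIRMWARE) before falling back to loading from /lib/firmware. A minimal sketch, reusing the "usb8388.bin" name from the help text; the surrounding driver function is hypothetical:

#include <linux/firmware.h>
#include <linux/device.h>

static int example_load_fw(struct device *dev)
{
	const struct firmware *fw;
	int ret;

	/* resolved from a built-in blob first, else from /lib/firmware */
	ret = request_firmware(&fw, "usb8388.bin", dev);
	if (ret)
		return ret;

	/* fw->data and fw->size describe the blob; feed it to the device */

	release_firmware(fw);
	return 0;
}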

27
drivers/base/Makefile Normal file

@@ -0,0 +1,27 @@
# Makefile for the Linux device tree
obj-y := component.o core.o bus.o dd.o syscore.o \
driver.o class.o platform.o \
cpu.o firmware.o init.o map.o devres.o \
attribute_container.o transport_class.o \
topology.o container.o
obj-$(CONFIG_DEVTMPFS) += devtmpfs.o
obj-$(CONFIG_DMA_CMA) += dma-contiguous.o
obj-y += power/
obj-$(CONFIG_HAS_DMA) += dma-mapping.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
obj-$(CONFIG_ISA) += isa.o
obj-$(CONFIG_FW_LOADER) += firmware_class.o
obj-$(CONFIG_NUMA) += node.o
obj-$(CONFIG_MEMORY_HOTPLUG_SPARSE) += memory.o
ifeq ($(CONFIG_SYSFS),y)
obj-$(CONFIG_MODULES) += module.o
endif
obj-$(CONFIG_SYS_HYPERVISOR) += hypervisor.o
obj-$(CONFIG_REGMAP) += regmap/
obj-$(CONFIG_SOC_BUS) += soc.o
obj-$(CONFIG_PINCTRL) += pinctrl.o
obj-$(CONFIG_DEV_COREDUMP) += devcoredump.o
ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG

440
drivers/base/attribute_container.c Normal file

@@ -0,0 +1,440 @@
/*
* attribute_container.c - implementation of a simple container for classes
*
* Copyright (c) 2005 - James Bottomley <James.Bottomley@steeleye.com>
*
* This file is licensed under GPLv2
*
* The basic idea here is to enable a device to be attached to an
* arbitrary number of classes without having to allocate storage for them.
* Instead, the contained classes select the devices they need to attach
* to via a matching function.
*/
#include <linux/attribute_container.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include "base.h"
/* This is a private structure used to tie the classdev and the
* container .. it should never be visible outside this file */
struct internal_container {
struct klist_node node;
struct attribute_container *cont;
struct device classdev;
};
static void internal_container_klist_get(struct klist_node *n)
{
struct internal_container *ic =
container_of(n, struct internal_container, node);
get_device(&ic->classdev);
}
static void internal_container_klist_put(struct klist_node *n)
{
struct internal_container *ic =
container_of(n, struct internal_container, node);
put_device(&ic->classdev);
}
/**
* attribute_container_classdev_to_container - given a classdev, return the container
*
* @classdev: the class device created by attribute_container_add_device.
*
* Returns the container associated with this classdev.
*/
struct attribute_container *
attribute_container_classdev_to_container(struct device *classdev)
{
struct internal_container *ic =
container_of(classdev, struct internal_container, classdev);
return ic->cont;
}
EXPORT_SYMBOL_GPL(attribute_container_classdev_to_container);
static LIST_HEAD(attribute_container_list);
static DEFINE_MUTEX(attribute_container_mutex);
/**
* attribute_container_register - register an attribute container
*
* @cont: The container to register. This must be allocated by the
* caller and should also be zeroed by it.
*/
int
attribute_container_register(struct attribute_container *cont)
{
INIT_LIST_HEAD(&cont->node);
klist_init(&cont->containers, internal_container_klist_get,
internal_container_klist_put);
mutex_lock(&attribute_container_mutex);
list_add_tail(&cont->node, &attribute_container_list);
mutex_unlock(&attribute_container_mutex);
return 0;
}
EXPORT_SYMBOL_GPL(attribute_container_register);
/**
* attribute_container_unregister - remove a container registration
*
* @cont: previously registered container to remove
*/
int
attribute_container_unregister(struct attribute_container *cont)
{
int retval = -EBUSY;
mutex_lock(&attribute_container_mutex);
spin_lock(&cont->containers.k_lock);
if (!list_empty(&cont->containers.k_list))
goto out;
retval = 0;
list_del(&cont->node);
out:
spin_unlock(&cont->containers.k_lock);
mutex_unlock(&attribute_container_mutex);
return retval;
}
EXPORT_SYMBOL_GPL(attribute_container_unregister);
/* private function used as class release */
static void attribute_container_release(struct device *classdev)
{
struct internal_container *ic
= container_of(classdev, struct internal_container, classdev);
struct device *dev = classdev->parent;
kfree(ic);
put_device(dev);
}
/**
* attribute_container_add_device - see if any container is interested in dev
*
* @dev: device to add attributes to
* @fn: function to trigger addition of class device.
*
* This function allocates storage for the class device(s) to be
* attached to dev (one for each matching attribute_container). If no
* fn is provided, the code will simply register the class device via
* device_add. If a function is provided, it is expected to add
* the class device at the appropriate time. One of the things that
* might be necessary is to allocate and initialise the classdev and
* then add it at a later time. To do this, call this routine for
* allocation and initialisation and then use
* attribute_container_device_trigger() to call device_add() on
* it. Note: after this, the class device contains a reference to dev
* which is not relinquished until the release of the classdev.
*/
void
attribute_container_add_device(struct device *dev,
int (*fn)(struct attribute_container *,
struct device *,
struct device *))
{
struct attribute_container *cont;
mutex_lock(&attribute_container_mutex);
list_for_each_entry(cont, &attribute_container_list, node) {
struct internal_container *ic;
if (attribute_container_no_classdevs(cont))
continue;
if (!cont->match(cont, dev))
continue;
ic = kzalloc(sizeof(*ic), GFP_KERNEL);
if (!ic) {
dev_err(dev, "failed to allocate class container\n");
continue;
}
ic->cont = cont;
device_initialize(&ic->classdev);
ic->classdev.parent = get_device(dev);
ic->classdev.class = cont->class;
cont->class->dev_release = attribute_container_release;
dev_set_name(&ic->classdev, "%s", dev_name(dev));
if (fn)
fn(cont, dev, &ic->classdev);
else
attribute_container_add_class_device(&ic->classdev);
klist_add_tail(&ic->node, &cont->containers);
}
mutex_unlock(&attribute_container_mutex);
}
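/*
 * A minimal sketch (hypothetical driver) of the two-phase pattern
 * described in the kerneldoc above: phase one allocates and initialises
 * the classdevs, phase two adds them via device_add().
 */
#if 0
static int example_defer_add(struct attribute_container *cont,
			     struct device *dev, struct device *classdev)
{
	return 0;	/* skip device_add() here; the trigger below does it */
}

static void example_attach(struct device *dev)
{
	/* phase one: allocate and initialise matching classdevs */
	attribute_container_add_device(dev, example_defer_add);
	/* phase two: device_add() each classdev at the appropriate time */
	attribute_container_device_trigger(dev,
			attribute_container_add_class_device_adapter);
}
#endif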
/* FIXME: can't break out of this unless klist_iter_exit is also
* called before doing the break
*/
#define klist_for_each_entry(pos, head, member, iter) \
for (klist_iter_init(head, iter); (pos = ({ \
struct klist_node *n = klist_next(iter); \
n ? container_of(n, typeof(*pos), member) : \
({ klist_iter_exit(iter) ; NULL; }); \
})) != NULL;)
/**
* attribute_container_remove_device - make device eligible for removal.
*
* @dev: The generic device
* @fn: A function to call to remove the device
*
* This routine triggers device removal. If fn is NULL, then it is
* simply done via device_unregister (note that if something
* still has a reference to the classdev, then the memory occupied
* will not be freed until the classdev is released). If you want a
* two phase release: remove from visibility and then delete the
* device, then you should use this routine with a fn that calls
* device_del() and then use attribute_container_device_trigger()
* to do the final put on the classdev.
*/
void
attribute_container_remove_device(struct device *dev,
void (*fn)(struct attribute_container *,
struct device *,
struct device *))
{
struct attribute_container *cont;
mutex_lock(&attribute_container_mutex);
list_for_each_entry(cont, &attribute_container_list, node) {
struct internal_container *ic;
struct klist_iter iter;
if (attribute_container_no_classdevs(cont))
continue;
if (!cont->match(cont, dev))
continue;
klist_for_each_entry(ic, &cont->containers, node, &iter) {
if (dev != ic->classdev.parent)
continue;
klist_del(&ic->node);
if (fn)
fn(cont, dev, &ic->classdev);
else {
attribute_container_remove_attrs(&ic->classdev);
device_unregister(&ic->classdev);
}
}
}
mutex_unlock(&attribute_container_mutex);
}
/**
* attribute_container_device_trigger - execute a trigger for each matching classdev
*
* @dev: The generic device to run the trigger for
* @fn: the function to execute for each classdev.
*
* This function is for executing a trigger when you need to know both
* the container and the classdev. If you only care about the
* container, then use attribute_container_trigger() instead.
*/
void
attribute_container_device_trigger(struct device *dev,
int (*fn)(struct attribute_container *,
struct device *,
struct device *))
{
struct attribute_container *cont;
mutex_lock(&attribute_container_mutex);
list_for_each_entry(cont, &attribute_container_list, node) {
struct internal_container *ic;
struct klist_iter iter;
if (!cont->match(cont, dev))
continue;
if (attribute_container_no_classdevs(cont)) {
fn(cont, dev, NULL);
continue;
}
klist_for_each_entry(ic, &cont->containers, node, &iter) {
if (dev == ic->classdev.parent)
fn(cont, dev, &ic->classdev);
}
}
mutex_unlock(&attribute_container_mutex);
}
/**
* attribute_container_trigger - trigger a function for each matching container
*
* @dev: The generic device to activate the trigger for
* @fn: the function to trigger
*
* This routine triggers a function that only needs to know the
* matching containers (not the classdev) associated with a device.
* It is more lightweight than attribute_container_device_trigger, so
* should be used in preference unless the triggering function
* actually needs to know the classdev.
*/
void
attribute_container_trigger(struct device *dev,
int (*fn)(struct attribute_container *,
struct device *))
{
struct attribute_container *cont;
mutex_lock(&attribute_container_mutex);
list_for_each_entry(cont, &attribute_container_list, node) {
if (cont->match(cont, dev))
fn(cont, dev);
}
mutex_unlock(&attribute_container_mutex);
}
/**
* attribute_container_add_attrs - add attributes
*
* @classdev: The class device
*
* This simply creates all the class device sysfs files from the
* attributes listed in the container
*/
int
attribute_container_add_attrs(struct device *classdev)
{
struct attribute_container *cont =
attribute_container_classdev_to_container(classdev);
struct device_attribute **attrs = cont->attrs;
int i, error;
BUG_ON(attrs && cont->grp);
if (!attrs && !cont->grp)
return 0;
if (cont->grp)
return sysfs_create_group(&classdev->kobj, cont->grp);
for (i = 0; attrs[i]; i++) {
sysfs_attr_init(&attrs[i]->attr);
error = device_create_file(classdev, attrs[i]);
if (error)
return error;
}
return 0;
}
/**
* attribute_container_add_class_device - same function as device_add
*
* @classdev: the class device to add
*
* This performs essentially the same function as device_add except for
* attribute containers, namely add the classdev to the system and then
* create the attribute files
*/
int
attribute_container_add_class_device(struct device *classdev)
{
int error = device_add(classdev);
if (error)
return error;
return attribute_container_add_attrs(classdev);
}
/**
* attribute_container_add_class_device_adapter - simple adapter for triggers
*
* This function is identical to attribute_container_add_class_device except
* that it is designed to be called from the triggers
*/
int
attribute_container_add_class_device_adapter(struct attribute_container *cont,
struct device *dev,
struct device *classdev)
{
return attribute_container_add_class_device(classdev);
}
/**
* attribute_container_remove_attrs - remove any attribute files
*
* @classdev: The class device to remove the files from
*
*/
void
attribute_container_remove_attrs(struct device *classdev)
{
struct attribute_container *cont =
attribute_container_classdev_to_container(classdev);
struct device_attribute **attrs = cont->attrs;
int i;
if (!attrs && !cont->grp)
return;
if (cont->grp) {
sysfs_remove_group(&classdev->kobj, cont->grp);
return;
}
for (i = 0; attrs[i]; i++)
device_remove_file(classdev, attrs[i]);
}
/**
* attribute_container_class_device_del - equivalent of class_device_del
*
* @classdev: the class device
*
* This function simply removes all the attribute files and then calls
* device_del.
*/
void
attribute_container_class_device_del(struct device *classdev)
{
attribute_container_remove_attrs(classdev);
device_del(classdev);
}
/**
* attribute_container_find_class_device - find the corresponding class_device
*
* @cont: the container
* @dev: the generic device
*
* Looks up the device in the container's list of class devices and returns
* the corresponding class_device.
*/
struct device *
attribute_container_find_class_device(struct attribute_container *cont,
struct device *dev)
{
struct device *cdev = NULL;
struct internal_container *ic;
struct klist_iter iter;
klist_for_each_entry(ic, &cont->containers, node, &iter) {
if (ic->classdev.parent == dev) {
cdev = &ic->classdev;
/* FIXME: must exit iterator then break */
klist_iter_exit(&iter);
break;
}
}
return cdev;
}
EXPORT_SYMBOL_GPL(attribute_container_find_class_device);

150
drivers/base/base.h Normal file

@@ -0,0 +1,150 @@
#include <linux/notifier.h>
/**
* struct subsys_private - structure to hold the private to the driver core portions of the bus_type/class structure.
*
* @subsys - the struct kset that defines this subsystem
* @devices_kset - the subsystem's 'devices' directory
* @interfaces - list of subsystem interfaces associated
* @mutex - protect the devices, and interfaces lists.
*
* @drivers_kset - the list of drivers associated
* @klist_devices - the klist to iterate over the @devices_kset
* @klist_drivers - the klist to iterate over the @drivers_kset
* @bus_notifier - the bus notifier list for anything that cares about things
* on this bus.
* @bus - pointer back to the struct bus_type that this structure is associated
* with.
*
* @glue_dirs - "glue" directory to put in-between the parent device to
* avoid namespace conflicts
* @class - pointer back to the struct class that this structure is associated
* with.
*
* This structure is the one that is the actual kobject allowing struct
* bus_type/class to be statically allocated safely. Nothing outside of the
* driver core should ever touch these fields.
*/
struct subsys_private {
struct kset subsys;
struct kset *devices_kset;
struct list_head interfaces;
struct mutex mutex;
struct kset *drivers_kset;
struct klist klist_devices;
struct klist klist_drivers;
struct blocking_notifier_head bus_notifier;
unsigned int drivers_autoprobe:1;
struct bus_type *bus;
struct kset glue_dirs;
struct class *class;
};
#define to_subsys_private(obj) container_of(obj, struct subsys_private, subsys.kobj)
struct driver_private {
struct kobject kobj;
struct klist klist_devices;
struct klist_node knode_bus;
struct module_kobject *mkobj;
struct device_driver *driver;
};
#define to_driver(obj) container_of(obj, struct driver_private, kobj)
/**
* struct device_private - structure to hold the private to the driver core portions of the device structure.
*
* @klist_children - klist containing all children of this device
* @knode_parent - node in sibling list
* @knode_driver - node in driver list
* @knode_bus - node in bus list
* @deferred_probe - entry in deferred_probe_list which is used to retry the
* binding of drivers which were unable to get all the resources needed by
* the device; typically because it depends on another driver getting
* probed first.
* @device - pointer back to the struct device that this structure is
* associated with.
*
* Nothing outside of the driver core should ever touch these fields.
*/
struct device_private {
struct klist klist_children;
struct klist_node knode_parent;
struct klist_node knode_driver;
struct klist_node knode_bus;
struct list_head deferred_probe;
struct device *device;
};
#define to_device_private_parent(obj) \
container_of(obj, struct device_private, knode_parent)
#define to_device_private_driver(obj) \
container_of(obj, struct device_private, knode_driver)
#define to_device_private_bus(obj) \
container_of(obj, struct device_private, knode_bus)
extern int device_private_init(struct device *dev);
/* initialisation functions */
extern int devices_init(void);
extern int buses_init(void);
extern int classes_init(void);
extern int firmware_init(void);
#ifdef CONFIG_SYS_HYPERVISOR
extern int hypervisor_init(void);
#else
static inline int hypervisor_init(void) { return 0; }
#endif
extern int platform_bus_init(void);
extern void cpu_dev_init(void);
extern void container_dev_init(void);
struct kobject *virtual_device_parent(struct device *dev);
extern int bus_add_device(struct device *dev);
extern void bus_probe_device(struct device *dev);
extern void bus_remove_device(struct device *dev);
extern int bus_add_driver(struct device_driver *drv);
extern void bus_remove_driver(struct device_driver *drv);
extern void driver_detach(struct device_driver *drv);
extern int driver_probe_device(struct device_driver *drv, struct device *dev);
extern void driver_deferred_probe_del(struct device *dev);
static inline int driver_match_device(struct device_driver *drv,
struct device *dev)
{
return drv->bus->match ? drv->bus->match(dev, drv) : 1;
}
extern int driver_add_groups(struct device_driver *drv,
const struct attribute_group **groups);
extern void driver_remove_groups(struct device_driver *drv,
const struct attribute_group **groups);
extern int device_add_groups(struct device *dev,
const struct attribute_group **groups);
extern void device_remove_groups(struct device *dev,
const struct attribute_group **groups);
extern char *make_class_name(const char *name, struct kobject *kobj);
extern int devres_release_all(struct device *dev);
/* /sys/devices directory */
extern struct kset *devices_kset;
#if defined(CONFIG_MODULES) && defined(CONFIG_SYSFS)
extern void module_add_driver(struct module *mod, struct device_driver *drv);
extern void module_remove_driver(struct device_driver *drv);
#else
static inline void module_add_driver(struct module *mod,
struct device_driver *drv) { }
static inline void module_remove_driver(struct device_driver *drv) { }
#endif
#ifdef CONFIG_DEVTMPFS
extern int devtmpfs_init(void);
#else
static inline int devtmpfs_init(void) { return 0; }
#endif

1274
drivers/base/bus.c Normal file

File diff suppressed because it is too large

598
drivers/base/class.c Normal file

@@ -0,0 +1,598 @@
/*
* class.c - basic device class management
*
* Copyright (c) 2002-3 Patrick Mochel
* Copyright (c) 2002-3 Open Source Development Labs
* Copyright (c) 2003-2004 Greg Kroah-Hartman
* Copyright (c) 2003-2004 IBM Corp.
*
* This file is released under the GPLv2
*
*/
#include <linux/device.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/kdev_t.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/genhd.h>
#include <linux/mutex.h>
#include "base.h"
#define to_class_attr(_attr) container_of(_attr, struct class_attribute, attr)
static ssize_t class_attr_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct class_attribute *class_attr = to_class_attr(attr);
struct subsys_private *cp = to_subsys_private(kobj);
ssize_t ret = -EIO;
if (class_attr->show)
ret = class_attr->show(cp->class, class_attr, buf);
return ret;
}
static ssize_t class_attr_store(struct kobject *kobj, struct attribute *attr,
const char *buf, size_t count)
{
struct class_attribute *class_attr = to_class_attr(attr);
struct subsys_private *cp = to_subsys_private(kobj);
ssize_t ret = -EIO;
if (class_attr->store)
ret = class_attr->store(cp->class, class_attr, buf, count);
return ret;
}
static void class_release(struct kobject *kobj)
{
struct subsys_private *cp = to_subsys_private(kobj);
struct class *class = cp->class;
pr_debug("class '%s': release.\n", class->name);
if (class->class_release)
class->class_release(class);
else
pr_debug("class '%s' does not have a release() function, "
"be careful\n", class->name);
kfree(cp);
}
static const struct kobj_ns_type_operations *class_child_ns_type(struct kobject *kobj)
{
struct subsys_private *cp = to_subsys_private(kobj);
struct class *class = cp->class;
return class->ns_type;
}
static const struct sysfs_ops class_sysfs_ops = {
.show = class_attr_show,
.store = class_attr_store,
};
static struct kobj_type class_ktype = {
.sysfs_ops = &class_sysfs_ops,
.release = class_release,
.child_ns_type = class_child_ns_type,
};
/* Hotplug events for classes go to the class subsys */
static struct kset *class_kset;
int class_create_file_ns(struct class *cls, const struct class_attribute *attr,
const void *ns)
{
int error;
if (cls)
error = sysfs_create_file_ns(&cls->p->subsys.kobj,
&attr->attr, ns);
else
error = -EINVAL;
return error;
}
void class_remove_file_ns(struct class *cls, const struct class_attribute *attr,
const void *ns)
{
if (cls)
sysfs_remove_file_ns(&cls->p->subsys.kobj, &attr->attr, ns);
}
static struct class *class_get(struct class *cls)
{
if (cls)
kset_get(&cls->p->subsys);
return cls;
}
static void class_put(struct class *cls)
{
if (cls)
kset_put(&cls->p->subsys);
}
static int add_class_attrs(struct class *cls)
{
int i;
int error = 0;
if (cls->class_attrs) {
for (i = 0; cls->class_attrs[i].attr.name; i++) {
error = class_create_file(cls, &cls->class_attrs[i]);
if (error)
goto error;
}
}
done:
return error;
error:
while (--i >= 0)
class_remove_file(cls, &cls->class_attrs[i]);
goto done;
}
static void remove_class_attrs(struct class *cls)
{
int i;
if (cls->class_attrs) {
for (i = 0; cls->class_attrs[i].attr.name; i++)
class_remove_file(cls, &cls->class_attrs[i]);
}
}
static void klist_class_dev_get(struct klist_node *n)
{
struct device *dev = container_of(n, struct device, knode_class);
get_device(dev);
}
static void klist_class_dev_put(struct klist_node *n)
{
struct device *dev = container_of(n, struct device, knode_class);
put_device(dev);
}
int __class_register(struct class *cls, struct lock_class_key *key)
{
struct subsys_private *cp;
int error;
pr_debug("device class '%s': registering\n", cls->name);
cp = kzalloc(sizeof(*cp), GFP_KERNEL);
if (!cp)
return -ENOMEM;
klist_init(&cp->klist_devices, klist_class_dev_get, klist_class_dev_put);
INIT_LIST_HEAD(&cp->interfaces);
kset_init(&cp->glue_dirs);
__mutex_init(&cp->mutex, "subsys mutex", key);
error = kobject_set_name(&cp->subsys.kobj, "%s", cls->name);
if (error) {
kfree(cp);
return error;
}
/* set the default /sys/dev directory for devices of this class */
if (!cls->dev_kobj)
cls->dev_kobj = sysfs_dev_char_kobj;
#if defined(CONFIG_BLOCK)
/* let the block class directory show up in the root of sysfs */
if (!sysfs_deprecated || cls != &block_class)
cp->subsys.kobj.kset = class_kset;
#else
cp->subsys.kobj.kset = class_kset;
#endif
cp->subsys.kobj.ktype = &class_ktype;
cp->class = cls;
cls->p = cp;
error = kset_register(&cp->subsys);
if (error) {
kfree(cp);
return error;
}
error = add_class_attrs(class_get(cls));
class_put(cls);
return error;
}
EXPORT_SYMBOL_GPL(__class_register);
void class_unregister(struct class *cls)
{
pr_debug("device class '%s': unregistering\n", cls->name);
remove_class_attrs(cls);
kset_unregister(&cls->p->subsys);
}
static void class_create_release(struct class *cls)
{
pr_debug("%s called for %s\n", __func__, cls->name);
kfree(cls);
}
/**
* class_create - create a struct class structure
* @owner: pointer to the module that is to "own" this struct class
* @name: pointer to a string for the name of this class.
* @key: the lock_class_key for this class; used by mutex lock debugging
*
* This is used to create a struct class pointer that can then be used
* in calls to device_create().
*
* Returns &struct class pointer on success, or ERR_PTR() on error.
*
* Note, the pointer created here is to be destroyed when finished by
* making a call to class_destroy().
*/
struct class *__class_create(struct module *owner, const char *name,
struct lock_class_key *key)
{
struct class *cls;
int retval;
cls = kzalloc(sizeof(*cls), GFP_KERNEL);
if (!cls) {
retval = -ENOMEM;
goto error;
}
cls->name = name;
cls->owner = owner;
cls->class_release = class_create_release;
retval = __class_register(cls, key);
if (retval)
goto error;
return cls;
error:
kfree(cls);
return ERR_PTR(retval);
}
EXPORT_SYMBOL_GPL(__class_create);
/**
* class_destroy - destroys a struct class structure
* @cls: pointer to the struct class that is to be destroyed
*
* Note, the pointer to be destroyed must have been created with a call
* to class_create().
*/
void class_destroy(struct class *cls)
{
if ((cls == NULL) || (IS_ERR(cls)))
return;
class_unregister(cls);
}
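/*
 * A minimal sketch of the class_create()/class_destroy() pair, assuming a
 * hypothetical module that already owns a char major number "ex_major":
 *
 *	ex_class = class_create(THIS_MODULE, "example");
 *	if (IS_ERR(ex_class))
 *		return PTR_ERR(ex_class);
 *	device_create(ex_class, NULL, MKDEV(ex_major, 0), NULL, "example0");
 *	...
 *	device_destroy(ex_class, MKDEV(ex_major, 0));
 *	class_destroy(ex_class);
 */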
/**
* class_dev_iter_init - initialize class device iterator
* @iter: class iterator to initialize
* @class: the class we want to iterate over
* @start: the device to start iterating from, if any
* @type: device_type of the devices to iterate over, NULL for all
*
* Initialize class iterator @iter such that it iterates over devices
* of @class. If @start is set, the list iteration will start there,
* otherwise if it is NULL, the iteration starts at the beginning of
* the list.
*/
void class_dev_iter_init(struct class_dev_iter *iter, struct class *class,
struct device *start, const struct device_type *type)
{
struct klist_node *start_knode = NULL;
if (start)
start_knode = &start->knode_class;
klist_iter_init_node(&class->p->klist_devices, &iter->ki, start_knode);
iter->type = type;
}
EXPORT_SYMBOL_GPL(class_dev_iter_init);
/**
* class_dev_iter_next - iterate to the next device
* @iter: class iterator to proceed
*
* Proceed @iter to the next device and return it. Returns NULL if
* iteration is complete.
*
* The returned device is referenced and won't be released until the
* iterator advances to the next device or is exited. The caller is
* free to do whatever it wants to do with the device, including
* calling back into class code.
*/
struct device *class_dev_iter_next(struct class_dev_iter *iter)
{
struct klist_node *knode;
struct device *dev;
while (1) {
knode = klist_next(&iter->ki);
if (!knode)
return NULL;
dev = container_of(knode, struct device, knode_class);
if (!iter->type || iter->type == dev->type)
return dev;
}
}
EXPORT_SYMBOL_GPL(class_dev_iter_next);
/**
* class_dev_iter_exit - finish iteration
* @iter: class iterator to finish
*
* Finish an iteration. Always call this function after iteration is
* complete whether the iteration ran till the end or not.
*/
void class_dev_iter_exit(struct class_dev_iter *iter)
{
klist_iter_exit(&iter->ki);
}
EXPORT_SYMBOL_GPL(class_dev_iter_exit);
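/*
 * A minimal sketch of the iterator API above (hypothetical "ex_class"
 * pointer); class_for_each_device() below wraps exactly this pattern.
 */
#if 0
static void example_walk(struct class *ex_class)
{
	struct class_dev_iter iter;
	struct device *dev;

	class_dev_iter_init(&iter, ex_class, NULL, NULL);
	while ((dev = class_dev_iter_next(&iter)))
		dev_info(dev, "visited\n");
	class_dev_iter_exit(&iter);
}
#endif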
/**
* class_for_each_device - device iterator
* @class: the class we're iterating
* @start: the device to start with in the list, if any.
* @data: data for the callback
* @fn: function to be called for each device
*
* Iterate over @class's list of devices, and call @fn for each,
* passing it @data. If @start is set, the list iteration will start
* there, otherwise if it is NULL, the iteration starts at the
* beginning of the list.
*
* We check the return of @fn each time. If it returns anything
* other than 0, we break out and return that value.
*
* @fn is allowed to do anything including calling back into class
* code. There's no locking restriction.
*/
int class_for_each_device(struct class *class, struct device *start,
void *data, int (*fn)(struct device *, void *))
{
struct class_dev_iter iter;
struct device *dev;
int error = 0;
if (!class)
return -EINVAL;
if (!class->p) {
WARN(1, "%s called for class '%s' before it was initialized",
__func__, class->name);
return -EINVAL;
}
class_dev_iter_init(&iter, class, start, NULL);
while ((dev = class_dev_iter_next(&iter))) {
error = fn(dev, data);
if (error)
break;
}
class_dev_iter_exit(&iter);
return error;
}
EXPORT_SYMBOL_GPL(class_for_each_device);
/**
* class_find_device - device iterator for locating a particular device
* @class: the class we're iterating
* @start: Device to begin with
* @data: data for the match function
* @match: function to check device
*
* This is similar to the class_for_each_device() function above, but it
* returns a reference to a device that is 'found' for later use, as
* determined by the @match callback.
*
* The callback should return 0 if the device doesn't match and non-zero
* if it does. If the callback returns non-zero, this function will
* return to the caller and not iterate over any more devices.
*
* Note, you will need to drop the reference with put_device() after use.
*
* @match is allowed to do anything including calling back into class
* code. There's no locking restriction.
*/
struct device *class_find_device(struct class *class, struct device *start,
const void *data,
int (*match)(struct device *, const void *))
{
struct class_dev_iter iter;
struct device *dev;
if (!class)
return NULL;
if (!class->p) {
WARN(1, "%s called for class '%s' before it was initialized",
__func__, class->name);
return NULL;
}
class_dev_iter_init(&iter, class, start, NULL);
while ((dev = class_dev_iter_next(&iter))) {
if (match(dev, data)) {
get_device(dev);
break;
}
}
class_dev_iter_exit(&iter);
return dev;
}
EXPORT_SYMBOL_GPL(class_find_device);
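/*
 * A minimal sketch of a @match callback for class_find_device() above
 * (hypothetical name lookup); note the caller owns the returned reference.
 */
#if 0
static int example_match_name(struct device *dev, const void *data)
{
	return sysfs_streq(dev_name(dev), data);
}

static void example_lookup(struct class *ex_class)
{
	struct device *dev;

	dev = class_find_device(ex_class, NULL, "example0",
				example_match_name);
	if (dev)
		put_device(dev);	/* drop the reference taken for us */
}
#endif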
int class_interface_register(struct class_interface *class_intf)
{
struct class *parent;
struct class_dev_iter iter;
struct device *dev;
if (!class_intf || !class_intf->class)
return -ENODEV;
parent = class_get(class_intf->class);
if (!parent)
return -EINVAL;
mutex_lock(&parent->p->mutex);
list_add_tail(&class_intf->node, &parent->p->interfaces);
if (class_intf->add_dev) {
class_dev_iter_init(&iter, parent, NULL, NULL);
while ((dev = class_dev_iter_next(&iter)))
class_intf->add_dev(dev, class_intf);
class_dev_iter_exit(&iter);
}
mutex_unlock(&parent->p->mutex);
return 0;
}
void class_interface_unregister(struct class_interface *class_intf)
{
struct class *parent = class_intf->class;
struct class_dev_iter iter;
struct device *dev;
if (!parent)
return;
mutex_lock(&parent->p->mutex);
list_del_init(&class_intf->node);
if (class_intf->remove_dev) {
class_dev_iter_init(&iter, parent, NULL, NULL);
while ((dev = class_dev_iter_next(&iter)))
class_intf->remove_dev(dev, class_intf);
class_dev_iter_exit(&iter);
}
mutex_unlock(&parent->p->mutex);
class_put(parent);
}
ssize_t show_class_attr_string(struct class *class,
struct class_attribute *attr, char *buf)
{
struct class_attribute_string *cs;
cs = container_of(attr, struct class_attribute_string, attr);
return snprintf(buf, PAGE_SIZE, "%s\n", cs->str);
}
EXPORT_SYMBOL_GPL(show_class_attr_string);
struct class_compat {
struct kobject *kobj;
};
/**
* class_compat_register - register a compatibility class
* @name: the name of the class
*
* Compatibility classes are meant as a temporary user-space compatibility
* workaround when converting a family of class devices to bus devices.
*/
struct class_compat *class_compat_register(const char *name)
{
struct class_compat *cls;
cls = kmalloc(sizeof(struct class_compat), GFP_KERNEL);
if (!cls)
return NULL;
cls->kobj = kobject_create_and_add(name, &class_kset->kobj);
if (!cls->kobj) {
kfree(cls);
return NULL;
}
return cls;
}
EXPORT_SYMBOL_GPL(class_compat_register);
/**
* class_compat_unregister - unregister a compatibility class
* @cls: the class to unregister
*/
void class_compat_unregister(struct class_compat *cls)
{
kobject_put(cls->kobj);
kfree(cls);
}
EXPORT_SYMBOL_GPL(class_compat_unregister);
/**
* class_compat_create_link - create a compatibility class device link to
* a bus device
* @cls: the compatibility class
* @dev: the target bus device
* @device_link: an optional device to which a "device" link should be created
*/
int class_compat_create_link(struct class_compat *cls, struct device *dev,
struct device *device_link)
{
int error;
error = sysfs_create_link(cls->kobj, &dev->kobj, dev_name(dev));
if (error)
return error;
/*
* Optionally add a "device" link (typically to the parent), as a
* class device would have one and we want to provide as much
* backwards compatibility as possible.
*/
if (device_link) {
error = sysfs_create_link(&dev->kobj, &device_link->kobj,
"device");
if (error)
sysfs_remove_link(cls->kobj, dev_name(dev));
}
return error;
}
EXPORT_SYMBOL_GPL(class_compat_create_link);
/**
* class_compat_remove_link - remove a compatibility class device link to
* a bus device
* @cls: the compatibility class
* @dev: the target bus device
* @device_link: an optional device to which a "device" link was previously
* created
*/
void class_compat_remove_link(struct class_compat *cls, struct device *dev,
struct device *device_link)
{
if (device_link)
sysfs_remove_link(&dev->kobj, "device");
sysfs_remove_link(cls->kobj, dev_name(dev));
}
EXPORT_SYMBOL_GPL(class_compat_remove_link);
int __init classes_init(void)
{
class_kset = kset_create_and_add("class", NULL, NULL);
if (!class_kset)
return -ENOMEM;
return 0;
}
EXPORT_SYMBOL_GPL(class_create_file_ns);
EXPORT_SYMBOL_GPL(class_remove_file_ns);
EXPORT_SYMBOL_GPL(class_unregister);
EXPORT_SYMBOL_GPL(class_destroy);
EXPORT_SYMBOL_GPL(class_interface_register);
EXPORT_SYMBOL_GPL(class_interface_unregister);

512
drivers/base/component.c Normal file

@@ -0,0 +1,512 @@
/*
* Componentized device handling.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This is work in progress. We gather up the component devices into a list,
* and bind them when instructed. At the moment, we're specific to the DRM
* subsystem, and only handle one master device, but this doesn't have to be
* the case.
*/
#include <linux/component.h>
#include <linux/device.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
struct component_match {
size_t alloc;
size_t num;
struct {
void *data;
int (*fn)(struct device *, void *);
} compare[0];
};
struct master {
struct list_head node;
struct list_head components;
bool bound;
const struct component_master_ops *ops;
struct device *dev;
struct component_match *match;
};
struct component {
struct list_head node;
struct list_head master_node;
struct master *master;
bool bound;
const struct component_ops *ops;
struct device *dev;
};
static DEFINE_MUTEX(component_mutex);
static LIST_HEAD(component_list);
static LIST_HEAD(masters);
static struct master *__master_find(struct device *dev,
const struct component_master_ops *ops)
{
struct master *m;
list_for_each_entry(m, &masters, node)
if (m->dev == dev && (!ops || m->ops == ops))
return m;
return NULL;
}
/* Attach an unattached component to a master. */
static void component_attach_master(struct master *master, struct component *c)
{
c->master = master;
list_add_tail(&c->master_node, &master->components);
}
/* Detach a component from a master. */
static void component_detach_master(struct master *master, struct component *c)
{
list_del(&c->master_node);
c->master = NULL;
}
/*
* Add a component to a master, finding the component via the compare
* function and compare data. This is safe to call for duplicate matches
* and will not result in the same component being added multiple times.
*/
int component_master_add_child(struct master *master,
int (*compare)(struct device *, void *), void *compare_data)
{
struct component *c;
int ret = -ENXIO;
list_for_each_entry(c, &component_list, node) {
if (c->master && c->master != master)
continue;
if (compare(c->dev, compare_data)) {
if (!c->master)
component_attach_master(master, c);
ret = 0;
break;
}
}
return ret;
}
EXPORT_SYMBOL_GPL(component_master_add_child);
static int find_components(struct master *master)
{
struct component_match *match = master->match;
size_t i;
int ret = 0;
if (!match) {
/*
* Search the list of components, looking for components that
* belong to this master, and attach them to the master.
*/
return master->ops->add_components(master->dev, master);
}
/*
* Scan the array of match functions and attach
* any components which are found to this master.
*/
for (i = 0; i < match->num; i++) {
ret = component_master_add_child(master,
match->compare[i].fn,
match->compare[i].data);
if (ret)
break;
}
return ret;
}
/* Detach all attached components from this master */
static void master_remove_components(struct master *master)
{
while (!list_empty(&master->components)) {
struct component *c = list_first_entry(&master->components,
struct component, master_node);
WARN_ON(c->master != master);
component_detach_master(master, c);
}
}
/*
* Try to bring up a master. If component is NULL, we're interested in
* this master, otherwise it's a component which must be present to try
* and bring up the master.
*
* Returns 1 for successful bringup, 0 if not ready, or a negative errno.
*/
static int try_to_bring_up_master(struct master *master,
struct component *component)
{
int ret;
if (master->bound)
return 0;
/*
* Search the list of components, looking for components that
* belong to this master, and attach them to the master.
*/
if (find_components(master)) {
/* Failed to find all components */
ret = 0;
goto out;
}
if (component && component->master != master) {
ret = 0;
goto out;
}
if (!devres_open_group(master->dev, NULL, GFP_KERNEL)) {
ret = -ENOMEM;
goto out;
}
/* Found all components */
ret = master->ops->bind(master->dev);
if (ret < 0) {
devres_release_group(master->dev, NULL);
dev_info(master->dev, "master bind failed: %d\n", ret);
goto out;
}
master->bound = true;
return 1;
out:
master_remove_components(master);
return ret;
}
static int try_to_bring_up_masters(struct component *component)
{
struct master *m;
int ret = 0;
list_for_each_entry(m, &masters, node) {
ret = try_to_bring_up_master(m, component);
if (ret != 0)
break;
}
return ret;
}
static void take_down_master(struct master *master)
{
if (master->bound) {
master->ops->unbind(master->dev);
devres_release_group(master->dev, NULL);
master->bound = false;
}
master_remove_components(master);
}
static size_t component_match_size(size_t num)
{
return offsetof(struct component_match, compare[num]);
}
static struct component_match *component_match_realloc(struct device *dev,
struct component_match *match, size_t num)
{
struct component_match *new;
if (match && match->alloc == num)
return match;
new = devm_kmalloc(dev, component_match_size(num), GFP_KERNEL);
if (!new)
return ERR_PTR(-ENOMEM);
if (match) {
memcpy(new, match, component_match_size(min(match->num, num)));
devm_kfree(dev, match);
} else {
new->num = 0;
}
new->alloc = num;
return new;
}
/*
* Add a component to be matched.
*
* The match array is first created or extended if necessary.
*/
void component_match_add(struct device *dev, struct component_match **matchptr,
int (*compare)(struct device *, void *), void *compare_data)
{
struct component_match *match = *matchptr;
if (IS_ERR(match))
return;
if (!match || match->num == match->alloc) {
size_t new_size = match ? match->alloc + 16 : 15;
match = component_match_realloc(dev, match, new_size);
*matchptr = match;
if (IS_ERR(match))
return;
}
match->compare[match->num].fn = compare;
match->compare[match->num].data = compare_data;
match->num++;
}
EXPORT_SYMBOL(component_match_add);
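/*
 * A minimal sketch of the matching flow (all example_* symbols are
 * hypothetical): the master builds a match table and registers it, each
 * child registers itself as a component, and the master's ->bind()
 * callback typically calls component_bind_all().
 */
#if 0
static int example_compare(struct device *dev, void *data)
{
	return dev->of_node == data;	/* match by DT node, for instance */
}

static int example_master_probe(struct device *dev)
{
	struct component_match *match = NULL;

	component_match_add(dev, &match, example_compare, example_child_node);
	if (IS_ERR(match))
		return PTR_ERR(match);
	return component_master_add_with_match(dev, &example_master_ops,
					       match);
}

static int example_child_probe(struct device *dev)
{
	return component_add(dev, &example_component_ops);
}
#endif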
int component_master_add_with_match(struct device *dev,
const struct component_master_ops *ops,
struct component_match *match)
{
struct master *master;
int ret;
if (ops->add_components && match)
return -EINVAL;
if (match) {
/* Reallocate the match array for its true size */
match = component_match_realloc(dev, match, match->num);
if (IS_ERR(match))
return PTR_ERR(match);
}
master = kzalloc(sizeof(*master), GFP_KERNEL);
if (!master)
return -ENOMEM;
master->dev = dev;
master->ops = ops;
master->match = match;
INIT_LIST_HEAD(&master->components);
/* Add to the list of available masters. */
mutex_lock(&component_mutex);
list_add(&master->node, &masters);
ret = try_to_bring_up_master(master, NULL);
if (ret < 0) {
/* Delete off the list if we weren't successful */
list_del(&master->node);
kfree(master);
}
mutex_unlock(&component_mutex);
return ret < 0 ? ret : 0;
}
EXPORT_SYMBOL_GPL(component_master_add_with_match);
int component_master_add(struct device *dev,
const struct component_master_ops *ops)
{
return component_master_add_with_match(dev, ops, NULL);
}
EXPORT_SYMBOL_GPL(component_master_add);
void component_master_del(struct device *dev,
const struct component_master_ops *ops)
{
struct master *master;
mutex_lock(&component_mutex);
master = __master_find(dev, ops);
if (master) {
take_down_master(master);
list_del(&master->node);
kfree(master);
}
mutex_unlock(&component_mutex);
}
EXPORT_SYMBOL_GPL(component_master_del);
static void component_unbind(struct component *component,
struct master *master, void *data)
{
WARN_ON(!component->bound);
component->ops->unbind(component->dev, master->dev, data);
component->bound = false;
/* Release all resources claimed in the binding of this component */
devres_release_group(component->dev, component);
}
void component_unbind_all(struct device *master_dev, void *data)
{
struct master *master;
struct component *c;
WARN_ON(!mutex_is_locked(&component_mutex));
master = __master_find(master_dev, NULL);
if (!master)
return;
list_for_each_entry_reverse(c, &master->components, master_node)
component_unbind(c, master, data);
}
EXPORT_SYMBOL_GPL(component_unbind_all);
static int component_bind(struct component *component, struct master *master,
void *data)
{
int ret;
/*
* Each component initialises inside its own devres group.
* This allows us to roll-back a failed component without
* affecting anything else.
*/
if (!devres_open_group(master->dev, NULL, GFP_KERNEL))
return -ENOMEM;
/*
* Also open a group for the device itself: this allows us
* to release the resources claimed against the sub-device
* at the appropriate moment.
*/
if (!devres_open_group(component->dev, component, GFP_KERNEL)) {
devres_release_group(master->dev, NULL);
return -ENOMEM;
}
dev_dbg(master->dev, "binding %s (ops %ps)\n",
dev_name(component->dev), component->ops);
ret = component->ops->bind(component->dev, master->dev, data);
if (!ret) {
component->bound = true;
/*
* Close the component device's group so that resources
* allocated in the binding are encapsulated for removal
* at unbind. Remove the group on the DRM device as we
* can clean those resources up independently.
*/
devres_close_group(component->dev, NULL);
devres_remove_group(master->dev, NULL);
dev_info(master->dev, "bound %s (ops %ps)\n",
dev_name(component->dev), component->ops);
} else {
devres_release_group(component->dev, NULL);
devres_release_group(master->dev, NULL);
dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
dev_name(component->dev), component->ops, ret);
}
return ret;
}
int component_bind_all(struct device *master_dev, void *data)
{
struct master *master;
struct component *c;
int ret = 0;
WARN_ON(!mutex_is_locked(&component_mutex));
master = __master_find(master_dev, NULL);
if (!master)
return -EINVAL;
list_for_each_entry(c, &master->components, master_node) {
ret = component_bind(c, master, data);
if (ret)
break;
}
if (ret != 0) {
list_for_each_entry_continue_reverse(c, &master->components,
master_node)
component_unbind(c, master, data);
}
return ret;
}
EXPORT_SYMBOL_GPL(component_bind_all);
int component_add(struct device *dev, const struct component_ops *ops)
{
struct component *component;
int ret;
component = kzalloc(sizeof(*component), GFP_KERNEL);
if (!component)
return -ENOMEM;
component->ops = ops;
component->dev = dev;
dev_dbg(dev, "adding component (ops %ps)\n", ops);
mutex_lock(&component_mutex);
list_add_tail(&component->node, &component_list);
ret = try_to_bring_up_masters(component);
if (ret < 0) {
list_del(&component->node);
kfree(component);
}
mutex_unlock(&component_mutex);
return ret < 0 ? ret : 0;
}
EXPORT_SYMBOL_GPL(component_add);
void component_del(struct device *dev, const struct component_ops *ops)
{
struct component *c, *component = NULL;
mutex_lock(&component_mutex);
list_for_each_entry(c, &component_list, node)
if (c->dev == dev && c->ops == ops) {
list_del(&c->node);
component = c;
break;
}
if (component && component->master)
take_down_master(component->master);
mutex_unlock(&component_mutex);
WARN_ON(!component);
kfree(component);
}
EXPORT_SYMBOL_GPL(component_del);
MODULE_LICENSE("GPL v2");

44
drivers/base/container.c Normal file

@@ -0,0 +1,44 @@
/*
* System bus type for containers.
*
* Copyright (C) 2013, Intel Corporation
* Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/container.h>
#include "base.h"
#define CONTAINER_BUS_NAME "container"
static int trivial_online(struct device *dev)
{
return 0;
}
static int container_offline(struct device *dev)
{
struct container_dev *cdev = to_container_dev(dev);
return cdev->offline ? cdev->offline(cdev) : 0;
}
struct bus_type container_subsys = {
.name = CONTAINER_BUS_NAME,
.dev_name = CONTAINER_BUS_NAME,
.online = trivial_online,
.offline = container_offline,
};
void __init container_dev_init(void)
{
int ret;
ret = subsys_system_register(&container_subsys, NULL);
if (ret)
pr_err("%s() failed: %d\n", __func__, ret);
}

2156
drivers/base/core.c Normal file

File diff suppressed because it is too large

432
drivers/base/cpu.c Normal file

@@ -0,0 +1,432 @@
/*
* CPU subsystem support
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/cpu.h>
#include <linux/topology.h>
#include <linux/device.h>
#include <linux/node.h>
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/percpu.h>
#include <linux/acpi.h>
#include <linux/of.h>
#include <linux/cpufeature.h>
#include "base.h"
static DEFINE_PER_CPU(struct device *, cpu_sys_devices);
static int cpu_subsys_match(struct device *dev, struct device_driver *drv)
{
/* ACPI style match is the only one that may succeed. */
if (acpi_driver_match_device(dev, drv))
return 1;
return 0;
}
#ifdef CONFIG_HOTPLUG_CPU
static void change_cpu_under_node(struct cpu *cpu,
unsigned int from_nid, unsigned int to_nid)
{
int cpuid = cpu->dev.id;
unregister_cpu_under_node(cpuid, from_nid);
register_cpu_under_node(cpuid, to_nid);
cpu->node_id = to_nid;
}
static int __ref cpu_subsys_online(struct device *dev)
{
struct cpu *cpu = container_of(dev, struct cpu, dev);
int cpuid = dev->id;
int from_nid, to_nid;
int ret;
from_nid = cpu_to_node(cpuid);
if (from_nid == NUMA_NO_NODE)
return -ENODEV;
ret = cpu_up(cpuid);
/*
* When hot-adding memory to a memoryless node and onlining a CPU on
* that node, the node number of the CPU may change internally.
*/
to_nid = cpu_to_node(cpuid);
if (from_nid != to_nid)
change_cpu_under_node(cpu, from_nid, to_nid);
return ret;
}
static int cpu_subsys_offline(struct device *dev)
{
return cpu_down(dev->id);
}
void unregister_cpu(struct cpu *cpu)
{
int logical_cpu = cpu->dev.id;
unregister_cpu_under_node(logical_cpu, cpu_to_node(logical_cpu));
device_unregister(&cpu->dev);
per_cpu(cpu_sys_devices, logical_cpu) = NULL;
return;
}
#ifdef CONFIG_ARCH_CPU_PROBE_RELEASE
static ssize_t cpu_probe_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
ssize_t cnt;
int ret;
ret = lock_device_hotplug_sysfs();
if (ret)
return ret;
cnt = arch_cpu_probe(buf, count);
unlock_device_hotplug();
return cnt;
}
static ssize_t cpu_release_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
ssize_t cnt;
int ret;
ret = lock_device_hotplug_sysfs();
if (ret)
return ret;
cnt = arch_cpu_release(buf, count);
unlock_device_hotplug();
return cnt;
}
static DEVICE_ATTR(probe, S_IWUSR, NULL, cpu_probe_store);
static DEVICE_ATTR(release, S_IWUSR, NULL, cpu_release_store);
#endif /* CONFIG_ARCH_CPU_PROBE_RELEASE */
#endif /* CONFIG_HOTPLUG_CPU */
struct bus_type cpu_subsys = {
.name = "cpu",
.dev_name = "cpu",
.match = cpu_subsys_match,
#ifdef CONFIG_HOTPLUG_CPU
.online = cpu_subsys_online,
.offline = cpu_subsys_offline,
#endif
};
EXPORT_SYMBOL_GPL(cpu_subsys);
#ifdef CONFIG_KEXEC
#include <linux/kexec.h>
static ssize_t show_crash_notes(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct cpu *cpu = container_of(dev, struct cpu, dev);
ssize_t rc;
unsigned long long addr;
int cpunum;
cpunum = cpu->dev.id;
/*
* Might be reading another cpu's data based on which cpu the read
* thread has been scheduled on. But cpu data (memory) is allocated
* once during bootup and does not change thereafter, hence this
* operation should be safe. No locking required.
*/
addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpunum));
rc = sprintf(buf, "%Lx\n", addr);
return rc;
}
static DEVICE_ATTR(crash_notes, 0400, show_crash_notes, NULL);
static ssize_t show_crash_notes_size(struct device *dev,
struct device_attribute *attr,
char *buf)
{
ssize_t rc;
rc = sprintf(buf, "%zu\n", sizeof(note_buf_t));
return rc;
}
static DEVICE_ATTR(crash_notes_size, 0400, show_crash_notes_size, NULL);
static struct attribute *crash_note_cpu_attrs[] = {
&dev_attr_crash_notes.attr,
&dev_attr_crash_notes_size.attr,
NULL
};
static struct attribute_group crash_note_cpu_attr_group = {
.attrs = crash_note_cpu_attrs,
};
#endif
static const struct attribute_group *common_cpu_attr_groups[] = {
#ifdef CONFIG_KEXEC
&crash_note_cpu_attr_group,
#endif
NULL
};
static const struct attribute_group *hotplugable_cpu_attr_groups[] = {
#ifdef CONFIG_KEXEC
&crash_note_cpu_attr_group,
#endif
NULL
};
/*
* Print cpu online, possible, present, and system maps
*/
struct cpu_attr {
struct device_attribute attr;
const struct cpumask *const * const map;
};
static ssize_t show_cpus_attr(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct cpu_attr *ca = container_of(attr, struct cpu_attr, attr);
int n = cpulist_scnprintf(buf, PAGE_SIZE-2, *(ca->map));
buf[n++] = '\n';
buf[n] = '\0';
return n;
}
#define _CPU_ATTR(name, map) \
{ __ATTR(name, 0444, show_cpus_attr, NULL), map }
/* Keep in sync with cpu_subsys_attrs */
static struct cpu_attr cpu_attrs[] = {
_CPU_ATTR(online, &cpu_online_mask),
_CPU_ATTR(possible, &cpu_possible_mask),
_CPU_ATTR(present, &cpu_present_mask),
};
/*
* Print values for NR_CPUS and offlined cpus
*/
static ssize_t print_cpus_kernel_max(struct device *dev,
struct device_attribute *attr, char *buf)
{
int n = snprintf(buf, PAGE_SIZE-2, "%d\n", NR_CPUS - 1);
return n;
}
static DEVICE_ATTR(kernel_max, 0444, print_cpus_kernel_max, NULL);
/* arch-optional setting to enable display of offline cpus >= nr_cpu_ids */
unsigned int total_cpus;
static ssize_t print_cpus_offline(struct device *dev,
struct device_attribute *attr, char *buf)
{
int n = 0, len = PAGE_SIZE-2;
cpumask_var_t offline;
/* display offline cpus < nr_cpu_ids */
if (!alloc_cpumask_var(&offline, GFP_KERNEL))
return -ENOMEM;
cpumask_andnot(offline, cpu_possible_mask, cpu_online_mask);
n = cpulist_scnprintf(buf, len, offline);
free_cpumask_var(offline);
/* display offline cpus >= nr_cpu_ids */
if (total_cpus && nr_cpu_ids < total_cpus) {
if (n && n < len)
buf[n++] = ',';
if (nr_cpu_ids == total_cpus-1)
n += snprintf(&buf[n], len - n, "%d", nr_cpu_ids);
else
n += snprintf(&buf[n], len - n, "%d-%d",
nr_cpu_ids, total_cpus-1);
}
n += snprintf(&buf[n], len - n, "\n");
return n;
}
static DEVICE_ATTR(offline, 0444, print_cpus_offline, NULL);
static void cpu_device_release(struct device *dev)
{
/*
* This is an empty function to prevent the driver core from spitting a
* warning at us. Yes, I know this is the direct opposite of what the
* documentation for the driver core and kobjects says, and the author
* of this code has already been publicly ridiculed for doing
* something as foolish as this. However, at this point in time, it is
* the only way to handle the issue of statically allocated cpu
* devices. The different architectures will have their cpu device
* code reworked to properly handle this in the near future, so this
* function will then be changed to correctly free up the memory held
* by the cpu device.
*
* Never copy this way of doing things, or you too will be made fun of
* on the linux-kernel list, you have been warned.
*/
}
#ifdef CONFIG_HAVE_CPU_AUTOPROBE
#ifdef CONFIG_GENERIC_CPU_AUTOPROBE
static ssize_t print_cpu_modalias(struct device *dev,
struct device_attribute *attr,
char *buf)
{
ssize_t n;
u32 i;
n = sprintf(buf, "cpu:type:" CPU_FEATURE_TYPEFMT ":feature:",
CPU_FEATURE_TYPEVAL);
for (i = 0; i < MAX_CPU_FEATURES; i++)
if (cpu_have_feature(i)) {
if (PAGE_SIZE < n + sizeof(",XXXX\n")) {
WARN(1, "CPU features overflow page\n");
break;
}
n += sprintf(&buf[n], ",%04X", i);
}
buf[n++] = '\n';
return n;
}
#else
#define print_cpu_modalias arch_print_cpu_modalias
#endif
#ifdef CONFIG_ARCH_HAS_CPU_AUTOPROBE
static int cpu_uevent(struct device *dev, struct kobj_uevent_env *env)
{
char *buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
if (buf) {
print_cpu_modalias(NULL, NULL, buf);
add_uevent_var(env, "MODALIAS=%s", buf);
kfree(buf);
}
return 0;
}
#endif
#endif
/*
* register_cpu - Setup a sysfs device for a CPU.
* @cpu: CPU structure; if cpu->hotpluggable is set to 1, a control file
* for this CPU is generated in sysfs.
* @num: CPU number to use when creating the device.
*
* Initialize and register the CPU device.
*/
int register_cpu(struct cpu *cpu, int num)
{
int error;
cpu->node_id = cpu_to_node(num);
memset(&cpu->dev, 0x00, sizeof(struct device));
cpu->dev.id = num;
cpu->dev.bus = &cpu_subsys;
cpu->dev.release = cpu_device_release;
cpu->dev.offline_disabled = !cpu->hotpluggable;
cpu->dev.of_node = of_get_cpu_node(num, NULL);
#ifdef CONFIG_ARCH_HAS_CPU_AUTOPROBE
cpu->dev.bus->uevent = cpu_uevent;
#endif
cpu->dev.groups = common_cpu_attr_groups;
if (cpu->hotpluggable)
cpu->dev.groups = hotplugable_cpu_attr_groups;
error = device_register(&cpu->dev);
if (!error)
per_cpu(cpu_sys_devices, num) = &cpu->dev;
if (!error)
register_cpu_under_node(num, cpu_to_node(num));
return error;
}
struct device *get_cpu_device(unsigned cpu)
{
if (cpu < nr_cpu_ids && cpu_possible(cpu))
return per_cpu(cpu_sys_devices, cpu);
else
return NULL;
}
EXPORT_SYMBOL_GPL(get_cpu_device);
#ifdef CONFIG_HAVE_CPU_AUTOPROBE
static DEVICE_ATTR(modalias, 0444, print_cpu_modalias, NULL);
#endif
static struct attribute *cpu_root_attrs[] = {
#ifdef CONFIG_ARCH_CPU_PROBE_RELEASE
&dev_attr_probe.attr,
&dev_attr_release.attr,
#endif
&cpu_attrs[0].attr.attr,
&cpu_attrs[1].attr.attr,
&cpu_attrs[2].attr.attr,
&dev_attr_kernel_max.attr,
&dev_attr_offline.attr,
#ifdef CONFIG_HAVE_CPU_AUTOPROBE
&dev_attr_modalias.attr,
#endif
NULL
};
static struct attribute_group cpu_root_attr_group = {
.attrs = cpu_root_attrs,
};
static const struct attribute_group *cpu_root_attr_groups[] = {
&cpu_root_attr_group,
NULL,
};
bool cpu_is_hotpluggable(unsigned cpu)
{
struct device *dev = get_cpu_device(cpu);
return dev && container_of(dev, struct cpu, dev)->hotpluggable;
}
EXPORT_SYMBOL_GPL(cpu_is_hotpluggable);
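/*
 * Illustrative sketch, not part of this file: looking up the sysfs
 * device of a CPU before acting on it. The function name is
 * hypothetical.
 */
#if 0
static void example_report_cpu(unsigned int cpu)
{
	struct device *dev = get_cpu_device(cpu);

	if (dev && cpu_is_hotpluggable(cpu))
		dev_info(dev, "cpu%u may be brought offline\n", cpu);
}
#endif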
#ifdef CONFIG_GENERIC_CPU_DEVICES
static DEFINE_PER_CPU(struct cpu, cpu_devices);
#endif
static void __init cpu_dev_register_generic(void)
{
#ifdef CONFIG_GENERIC_CPU_DEVICES
int i;
for_each_possible_cpu(i) {
if (register_cpu(&per_cpu(cpu_devices, i), i))
panic("Failed to register CPU device");
}
#endif
}
void __init cpu_dev_init(void)
{
if (subsys_system_register(&cpu_subsys, cpu_root_attr_groups))
panic("Failed to register CPU subsystem");
cpu_dev_register_generic();
}

589
drivers/base/dd.c Normal file
View file

@ -0,0 +1,589 @@
/*
* drivers/base/dd.c - The core device/driver interactions.
*
* This file contains the (sometimes tricky) code that controls the
* interactions between devices and drivers, which primarily includes
* driver binding and unbinding.
*
* All of this code used to exist in drivers/base/bus.c, but was
* relocated to here in the name of compartmentalization (since it wasn't
* strictly code just for the 'struct bus_type').
*
* Copyright (c) 2002-5 Patrick Mochel
* Copyright (c) 2002-3 Open Source Development Labs
* Copyright (c) 2007-2009 Greg Kroah-Hartman <gregkh@suse.de>
* Copyright (c) 2007-2009 Novell Inc.
*
* This file is released under the GPLv2
*/
#include <linux/device.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/async.h>
#include <linux/pm_runtime.h>
#include <linux/pinctrl/devinfo.h>
#include "base.h"
#include "power/power.h"
/*
* Deferred Probe infrastructure.
*
* Sometimes driver probe order matters, but the kernel doesn't always have
* dependency information, which means some drivers will get probed before a
* resource they depend on is available. For example, an SDHCI driver may
* first need a GPIO line from an i2c GPIO controller before it can be
* initialized. If a required resource is not available yet, a driver can
* request probing to be deferred by returning -EPROBE_DEFER from its probe hook.
*
* Deferred probe maintains two lists of devices, a pending list and an active
* list. A driver returning -EPROBE_DEFER causes the device to be added to the
* pending list. A successful driver probe will trigger moving all devices
* from the pending to the active list so that the workqueue will eventually
* retry them.
*
* The deferred_probe_mutex must be held any time the deferred_probe_*_list
* or the (struct device *)->p->deferred_probe pointers are manipulated.
*/
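/*
 * Illustrative sketch, not part of this file: a probe hook deferring
 * until a resource provider has bound. Assumes <linux/clk.h> and
 * <linux/platform_device.h>; the clock name is hypothetical.
 */
#if 0
static int example_probe(struct platform_device *pdev)
{
	struct clk *clk = devm_clk_get(&pdev->dev, "bus");

	if (IS_ERR(clk))
		/* -EPROBE_DEFER lands this device on the pending list */
		return PTR_ERR(clk);
	return 0;
}
#endif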
static DEFINE_MUTEX(deferred_probe_mutex);
static LIST_HEAD(deferred_probe_pending_list);
static LIST_HEAD(deferred_probe_active_list);
static struct workqueue_struct *deferred_wq;
static atomic_t deferred_trigger_count = ATOMIC_INIT(0);
/*
* deferred_probe_work_func() - Retry probing devices in the active list.
*/
static void deferred_probe_work_func(struct work_struct *work)
{
struct device *dev;
struct device_private *private;
/*
* This block processes every device in the deferred 'active' list.
* Each device is removed from the active list and passed to
* bus_probe_device() to re-attempt the probe. The loop continues
* until every device in the active list is removed and retried.
*
* Note: Once the device is removed from the list and the mutex is
* released, it is possible for the device to get freed by another thread
* and cause an illegal pointer dereference. This code uses
* get/put_device() to ensure the device structure cannot disappear
* from under our feet.
*/
mutex_lock(&deferred_probe_mutex);
while (!list_empty(&deferred_probe_active_list)) {
private = list_first_entry(&deferred_probe_active_list,
typeof(*dev->p), deferred_probe);
dev = private->device;
list_del_init(&private->deferred_probe);
get_device(dev);
/*
* Drop the mutex while probing each device; the probe path may
* manipulate the deferred list.
*/
mutex_unlock(&deferred_probe_mutex);
/*
* Force the device to the end of the dpm_list since
* the PM code assumes that the order we add things to
* the list is a good order for suspend but deferred
* probe makes that very unsafe.
*/
device_pm_lock();
device_pm_move_last(dev);
device_pm_unlock();
dev_dbg(dev, "Retrying from deferred list\n");
bus_probe_device(dev);
mutex_lock(&deferred_probe_mutex);
put_device(dev);
}
mutex_unlock(&deferred_probe_mutex);
}
static DECLARE_WORK(deferred_probe_work, deferred_probe_work_func);
static void driver_deferred_probe_add(struct device *dev)
{
mutex_lock(&deferred_probe_mutex);
if (list_empty(&dev->p->deferred_probe)) {
dev_dbg(dev, "Added to deferred list\n");
list_add_tail(&dev->p->deferred_probe, &deferred_probe_pending_list);
}
mutex_unlock(&deferred_probe_mutex);
}
void driver_deferred_probe_del(struct device *dev)
{
mutex_lock(&deferred_probe_mutex);
if (!list_empty(&dev->p->deferred_probe)) {
dev_dbg(dev, "Removed from deferred list\n");
list_del_init(&dev->p->deferred_probe);
}
mutex_unlock(&deferred_probe_mutex);
}
static bool driver_deferred_probe_enable = false;
/**
* driver_deferred_probe_trigger() - Kick off re-probing deferred devices
*
* This function moves all devices from the pending list to the active
* list and schedules the deferred probe workqueue to process them. It
* should be called anytime a driver is successfully bound to a device.
*
* Note, there is a race condition in multi-threaded probe. In the case where
* more than one device is probing at the same time, it is possible for one
* probe to complete successfully while another is about to defer. If the second
* depends on the first, then it will get put on the pending list after the
* trigger event has already occurred and will be stuck there.
*
* The atomic 'deferred_trigger_count' is used to determine if a successful
* trigger has occurred in the midst of probing a driver. If the trigger count
* changes in the midst of a probe, then deferred processing should be triggered
* again.
*/
static void driver_deferred_probe_trigger(void)
{
if (!driver_deferred_probe_enable)
return;
/*
* A successful probe means that all the devices in the pending list
* should be triggered to be reprobed. Move all the deferred devices
* into the active list so they can be retried by the workqueue.
*/
mutex_lock(&deferred_probe_mutex);
atomic_inc(&deferred_trigger_count);
list_splice_tail_init(&deferred_probe_pending_list,
&deferred_probe_active_list);
mutex_unlock(&deferred_probe_mutex);
/*
* Kick the re-probe thread. It may already be scheduled, but it is
* safe to kick it again.
*/
queue_work(deferred_wq, &deferred_probe_work);
}
/**
* deferred_probe_initcall() - Enable probing of deferred devices
*
* We don't want to get in the way when the bulk of drivers are getting probed.
* Instead, this initcall makes sure that deferred probing is delayed until
* late_initcall time.
*/
static int deferred_probe_initcall(void)
{
deferred_wq = create_singlethread_workqueue("deferwq");
if (WARN_ON(!deferred_wq))
return -ENOMEM;
driver_deferred_probe_enable = true;
driver_deferred_probe_trigger();
/* Sort as many dependencies as possible before exiting initcalls */
flush_workqueue(deferred_wq);
return 0;
}
late_initcall(deferred_probe_initcall);
static void driver_bound(struct device *dev)
{
if (klist_node_attached(&dev->p->knode_driver)) {
printk(KERN_WARNING "%s: device %s already bound\n",
__func__, kobject_name(&dev->kobj));
return;
}
pr_debug("driver: '%s': %s: bound to device '%s'\n", dev->driver->name,
__func__, dev_name(dev));
klist_add_tail(&dev->p->knode_driver, &dev->driver->p->klist_devices);
/*
* Make sure the device is no longer in one of the deferred lists and
* kick off retrying all pending devices.
*/
driver_deferred_probe_del(dev);
driver_deferred_probe_trigger();
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_BOUND_DRIVER, dev);
}
static int driver_sysfs_add(struct device *dev)
{
int ret;
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_BIND_DRIVER, dev);
ret = sysfs_create_link(&dev->driver->p->kobj, &dev->kobj,
kobject_name(&dev->kobj));
if (ret == 0) {
ret = sysfs_create_link(&dev->kobj, &dev->driver->p->kobj,
"driver");
if (ret)
sysfs_remove_link(&dev->driver->p->kobj,
kobject_name(&dev->kobj));
}
return ret;
}
static void driver_sysfs_remove(struct device *dev)
{
struct device_driver *drv = dev->driver;
if (drv) {
sysfs_remove_link(&drv->p->kobj, kobject_name(&dev->kobj));
sysfs_remove_link(&dev->kobj, "driver");
}
}
/**
* device_bind_driver - bind a driver to one device.
* @dev: device.
*
* Allow manual attachment of a driver to a device.
* Caller must have already set @dev->driver.
*
* Note that this does not modify the bus reference count
* nor take the bus's rwsem. Please verify those are accounted
* for before calling this. (It is ok to call with no other effort
* from a driver's probe() method.)
*
* This function must be called with the device lock held.
*/
int device_bind_driver(struct device *dev)
{
int ret;
ret = driver_sysfs_add(dev);
if (!ret)
driver_bound(dev);
return ret;
}
EXPORT_SYMBOL_GPL(device_bind_driver);
static atomic_t probe_count = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
static int really_probe(struct device *dev, struct device_driver *drv)
{
int ret = 0;
int local_trigger_count = atomic_read(&deferred_trigger_count);
atomic_inc(&probe_count);
pr_debug("bus: '%s': %s: probing driver %s with device %s\n",
drv->bus->name, __func__, drv->name, dev_name(dev));
WARN_ON(!list_empty(&dev->devres_head));
dev->driver = drv;
/* If using pinctrl, bind pins now before probing */
ret = pinctrl_bind_pins(dev);
if (ret)
goto probe_failed;
if (driver_sysfs_add(dev)) {
printk(KERN_ERR "%s: driver_sysfs_add(%s) failed\n",
__func__, dev_name(dev));
goto probe_failed;
}
if (dev->bus->probe) {
ret = dev->bus->probe(dev);
if (ret)
goto probe_failed;
} else if (drv->probe) {
ret = drv->probe(dev);
if (ret)
goto probe_failed;
}
driver_bound(dev);
ret = 1;
pr_debug("bus: '%s': %s: bound device %s to driver %s\n",
drv->bus->name, __func__, dev_name(dev), drv->name);
goto done;
probe_failed:
devres_release_all(dev);
driver_sysfs_remove(dev);
dev->driver = NULL;
dev_set_drvdata(dev, NULL);
if (ret == -EPROBE_DEFER) {
/* Driver requested deferred probing */
dev_info(dev, "Driver %s requests probe deferral\n", drv->name);
driver_deferred_probe_add(dev);
/* Did a trigger occur while probing? Need to re-trigger if yes */
if (local_trigger_count != atomic_read(&deferred_trigger_count))
driver_deferred_probe_trigger();
} else if (ret != -ENODEV && ret != -ENXIO) {
/* driver matched but the probe failed */
printk(KERN_WARNING
"%s: probe of %s failed with error %d\n",
drv->name, dev_name(dev), ret);
} else {
pr_debug("%s: probe of %s rejects match %d\n",
drv->name, dev_name(dev), ret);
}
/*
* Ignore errors returned by ->probe so that the next driver can try
* its luck.
*/
ret = 0;
done:
atomic_dec(&probe_count);
wake_up(&probe_waitqueue);
return ret;
}
/**
* driver_probe_done
* Determine if the probe sequence is finished or not.
*
* Should somehow figure out how to use a semaphore, not an atomic variable...
*/
int driver_probe_done(void)
{
pr_debug("%s: probe_count = %d\n", __func__,
atomic_read(&probe_count));
if (atomic_read(&probe_count))
return -EBUSY;
return 0;
}
/**
* wait_for_device_probe
* Wait for device probing to be completed.
*/
void wait_for_device_probe(void)
{
/* wait for the known devices to complete their probing */
wait_event(probe_waitqueue, atomic_read(&probe_count) == 0);
async_synchronize_full();
}
EXPORT_SYMBOL_GPL(wait_for_device_probe);
/**
* driver_probe_device - attempt to bind device & driver together
* @drv: driver to bind a device to
* @dev: device to try to bind to the driver
*
* This function returns -ENODEV if the device is not registered,
* 1 if the device is bound successfully and 0 otherwise.
*
* This function must be called with @dev lock held. When called for a
* USB interface, @dev->parent lock must be held as well.
*/
int driver_probe_device(struct device_driver *drv, struct device *dev)
{
int ret = 0;
if (!device_is_registered(dev))
return -ENODEV;
pr_debug("bus: '%s': %s: matched device %s with driver %s\n",
drv->bus->name, __func__, dev_name(dev), drv->name);
pm_runtime_barrier(dev);
ret = really_probe(dev, drv);
pm_request_idle(dev);
return ret;
}
static int __device_attach(struct device_driver *drv, void *data)
{
struct device *dev = data;
if (!driver_match_device(drv, dev))
return 0;
return driver_probe_device(drv, dev);
}
/**
* device_attach - try to attach device to a driver.
* @dev: device.
*
* Walk the list of drivers that the bus has and call
* driver_probe_device() for each pair. If a compatible
* pair is found, break out and return.
*
* Returns 1 if the device was bound to a driver;
* 0 if no matching driver was found;
* -ENODEV if the device is not registered.
*
* When called for a USB interface, @dev->parent lock must be held.
*/
int device_attach(struct device *dev)
{
int ret = 0;
device_lock(dev);
if (dev->driver) {
if (klist_node_attached(&dev->p->knode_driver)) {
ret = 1;
goto out_unlock;
}
ret = device_bind_driver(dev);
if (ret == 0)
ret = 1;
else {
dev->driver = NULL;
ret = 0;
}
} else {
ret = bus_for_each_drv(dev->bus, NULL, dev, __device_attach);
pm_request_idle(dev);
}
out_unlock:
device_unlock(dev);
return ret;
}
EXPORT_SYMBOL_GPL(device_attach);
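/*
 * Illustrative sketch, not part of this file: forcing a device to be
 * re-matched against its bus's drivers, e.g. after a manual unbind.
 * The function name is hypothetical.
 */
#if 0
static int example_rebind(struct device *dev)
{
	device_release_driver(dev);	/* detach the current driver, if any */
	return device_attach(dev);	/* 1: bound, 0: no match, <0: error */
}
#endif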
static int __driver_attach(struct device *dev, void *data)
{
struct device_driver *drv = data;
/*
* Lock device and try to bind to it. We drop the error
* here and always return 0, because we need to keep trying
* to bind to devices and some drivers will return an error
* simply if they don't support the device.
*
* driver_probe_device() will spit a warning if there
* is an error.
*/
if (!driver_match_device(drv, dev))
return 0;
if (dev->parent) /* Needed for USB */
device_lock(dev->parent);
device_lock(dev);
if (!dev->driver)
driver_probe_device(drv, dev);
device_unlock(dev);
if (dev->parent)
device_unlock(dev->parent);
return 0;
}
/**
* driver_attach - try to bind driver to devices.
* @drv: driver.
*
* Walk the list of devices that the bus has on it and try to
* match the driver with each one. If driver_probe_device()
* returns 0 and the @dev->driver is set, we've found a
* compatible pair.
*/
int driver_attach(struct device_driver *drv)
{
return bus_for_each_dev(drv->bus, NULL, drv, __driver_attach);
}
EXPORT_SYMBOL_GPL(driver_attach);
/*
* __device_release_driver() must be called with @dev lock held.
* When called for a USB interface, @dev->parent lock must be held as well.
*/
static void __device_release_driver(struct device *dev)
{
struct device_driver *drv;
drv = dev->driver;
if (drv) {
pm_runtime_get_sync(dev);
driver_sysfs_remove(dev);
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_UNBIND_DRIVER,
dev);
pm_runtime_put_sync(dev);
if (dev->bus && dev->bus->remove)
dev->bus->remove(dev);
else if (drv->remove)
drv->remove(dev);
devres_release_all(dev);
dev->driver = NULL;
dev_set_drvdata(dev, NULL);
klist_remove(&dev->p->knode_driver);
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_UNBOUND_DRIVER,
dev);
}
}
/**
* device_release_driver - manually detach device from driver.
* @dev: device.
*
* Manually detach device from driver.
* When called for a USB interface, @dev->parent lock must be held.
*/
void device_release_driver(struct device *dev)
{
/*
* If anyone calls device_release_driver() recursively from
* within their ->remove callback for the same device, they
* will deadlock right here.
*/
device_lock(dev);
__device_release_driver(dev);
device_unlock(dev);
}
EXPORT_SYMBOL_GPL(device_release_driver);
/**
* driver_detach - detach driver from all devices it controls.
* @drv: driver.
*/
void driver_detach(struct device_driver *drv)
{
struct device_private *dev_prv;
struct device *dev;
for (;;) {
spin_lock(&drv->p->klist_devices.k_lock);
if (list_empty(&drv->p->klist_devices.k_list)) {
spin_unlock(&drv->p->klist_devices.k_lock);
break;
}
dev_prv = list_entry(drv->p->klist_devices.k_list.prev,
struct device_private,
knode_driver.n_node);
dev = dev_prv->device;
get_device(dev);
spin_unlock(&drv->p->klist_devices.k_lock);
if (dev->parent) /* Needed for USB */
device_lock(dev->parent);
device_lock(dev);
if (dev->driver == drv)
__device_release_driver(dev);
device_unlock(dev);
if (dev->parent)
device_unlock(dev->parent);
put_device(dev);
}
}

265
drivers/base/devcoredump.c Normal file
View file

@ -0,0 +1,265 @@
/*
* This file is provided under the GPLv2 license.
*
* GPL LICENSE SUMMARY
*
* Copyright(c) 2014 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* The full GNU General Public License is included in this distribution
* in the file called COPYING.
*
* Contact Information:
* Intel Linux Wireless <ilw@linux.intel.com>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
* Author: Johannes Berg <johannes@sipsolutions.net>
*/
#include <linux/module.h>
#include <linux/device.h>
#include <linux/devcoredump.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/workqueue.h>
/* if the data isn't read by userspace within 5 minutes, delete it */
#define DEVCD_TIMEOUT (HZ * 60 * 5)
struct devcd_entry {
struct device devcd_dev;
const void *data;
size_t datalen;
struct module *owner;
ssize_t (*read)(char *buffer, loff_t offset, size_t count,
const void *data, size_t datalen);
void (*free)(const void *data);
struct delayed_work del_wk;
struct device *failing_dev;
};
static struct devcd_entry *dev_to_devcd(struct device *dev)
{
return container_of(dev, struct devcd_entry, devcd_dev);
}
static void devcd_dev_release(struct device *dev)
{
struct devcd_entry *devcd = dev_to_devcd(dev);
devcd->free(devcd->data);
module_put(devcd->owner);
/*
* this seems racy, but I don't see a notifier or such on
* a struct device to know when it goes away?
*/
if (devcd->failing_dev->kobj.sd)
sysfs_delete_link(&devcd->failing_dev->kobj, &dev->kobj,
"devcoredump");
put_device(devcd->failing_dev);
kfree(devcd);
}
static void devcd_del(struct work_struct *wk)
{
struct devcd_entry *devcd;
devcd = container_of(wk, struct devcd_entry, del_wk.work);
device_del(&devcd->devcd_dev);
put_device(&devcd->devcd_dev);
}
static ssize_t devcd_data_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buffer, loff_t offset, size_t count)
{
struct device *dev = kobj_to_dev(kobj);
struct devcd_entry *devcd = dev_to_devcd(dev);
return devcd->read(buffer, offset, count, devcd->data, devcd->datalen);
}
static ssize_t devcd_data_write(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buffer, loff_t offset, size_t count)
{
struct device *dev = kobj_to_dev(kobj);
struct devcd_entry *devcd = dev_to_devcd(dev);
mod_delayed_work(system_wq, &devcd->del_wk, 0);
return count;
}
static struct bin_attribute devcd_attr_data = {
.attr = { .name = "data", .mode = S_IRUSR | S_IWUSR, },
.size = 0,
.read = devcd_data_read,
.write = devcd_data_write,
};
static struct bin_attribute *devcd_dev_bin_attrs[] = {
&devcd_attr_data, NULL,
};
static const struct attribute_group devcd_dev_group = {
.bin_attrs = devcd_dev_bin_attrs,
};
static const struct attribute_group *devcd_dev_groups[] = {
&devcd_dev_group, NULL,
};
static struct class devcd_class = {
.name = "devcoredump",
.owner = THIS_MODULE,
.dev_release = devcd_dev_release,
.dev_groups = devcd_dev_groups,
};
static ssize_t devcd_readv(char *buffer, loff_t offset, size_t count,
const void *data, size_t datalen)
{
if (offset > datalen)
return -EINVAL;
if (offset + count > datalen)
count = datalen - offset;
if (count)
memcpy(buffer, ((u8 *)data) + offset, count);
return count;
}
/**
* dev_coredumpv - create device coredump with vmalloc data
* @dev: the struct device for the crashed device
* @data: vmalloc data containing the device coredump
* @datalen: length of the data
* @gfp: allocation flags
*
* This function takes ownership of the vmalloc'ed data and will free
* it when it is no longer used. See dev_coredumpm() for more information.
*/
void dev_coredumpv(struct device *dev, const void *data, size_t datalen,
gfp_t gfp)
{
dev_coredumpm(dev, NULL, data, datalen, gfp, devcd_readv, vfree);
}
EXPORT_SYMBOL_GPL(dev_coredumpv);
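/*
 * Illustrative sketch, not part of this file: handing a vmalloc'ed
 * crash dump to the framework, which takes ownership and vfree()s it
 * once it has been read or the timeout expires. Assumes
 * <linux/vmalloc.h>; the function name is hypothetical.
 */
#if 0
static void example_report_crash(struct device *dev, size_t len)
{
	void *dump = vmalloc(len);

	if (!dump)
		return;
	/* ... fill 'dump' with device state ... */
	dev_coredumpv(dev, dump, len, GFP_KERNEL);	/* ownership passes here */
}
#endif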
static int devcd_match_failing(struct device *dev, const void *failing)
{
struct devcd_entry *devcd = dev_to_devcd(dev);
return devcd->failing_dev == failing;
}
/**
* dev_coredumpm - create device coredump with read/free methods
* @dev: the struct device for the crashed device
* @owner: the module that contains the read/free functions, use %THIS_MODULE
* @data: data cookie for the @read/@free functions
* @datalen: length of the data
* @gfp: allocation flags
* @read: function to read from the given buffer
* @free: function to free the given buffer
*
* Creates a new device coredump for the given device. If a previous one hasn't
* been read yet, the new coredump is discarded. The data lifetime is determined
* by the device coredump framework and, when it is no longer needed, the
* @free function will be called to free the data.
*/
void dev_coredumpm(struct device *dev, struct module *owner,
const void *data, size_t datalen, gfp_t gfp,
ssize_t (*read)(char *buffer, loff_t offset, size_t count,
const void *data, size_t datalen),
void (*free)(const void *data))
{
static atomic_t devcd_count = ATOMIC_INIT(0);
struct devcd_entry *devcd;
struct device *existing;
existing = class_find_device(&devcd_class, NULL, dev,
devcd_match_failing);
if (existing) {
put_device(existing);
goto free;
}
if (!try_module_get(owner))
goto free;
devcd = kzalloc(sizeof(*devcd), gfp);
if (!devcd)
goto put_module;
devcd->owner = owner;
devcd->data = data;
devcd->datalen = datalen;
devcd->read = read;
devcd->free = free;
devcd->failing_dev = get_device(dev);
device_initialize(&devcd->devcd_dev);
dev_set_name(&devcd->devcd_dev, "devcd%d",
atomic_inc_return(&devcd_count));
devcd->devcd_dev.class = &devcd_class;
if (device_add(&devcd->devcd_dev))
goto put_device;
if (sysfs_create_link(&devcd->devcd_dev.kobj, &dev->kobj,
"failing_device"))
/* nothing - symlink will be missing */;
if (sysfs_create_link(&dev->kobj, &devcd->devcd_dev.kobj,
"devcoredump"))
/* nothing - symlink will be missing */;
INIT_DELAYED_WORK(&devcd->del_wk, devcd_del);
schedule_delayed_work(&devcd->del_wk, DEVCD_TIMEOUT);
return;
put_device:
put_device(&devcd->devcd_dev);
put_module:
module_put(owner);
free:
free(data);
}
EXPORT_SYMBOL_GPL(dev_coredumpm);
static int __init devcoredump_init(void)
{
return class_register(&devcd_class);
}
__initcall(devcoredump_init);
static int devcd_free(struct device *dev, void *data)
{
struct devcd_entry *devcd = dev_to_devcd(dev);
flush_delayed_work(&devcd->del_wk);
return 0;
}
static void __exit devcoredump_exit(void)
{
class_for_each_device(&devcd_class, NULL, NULL, devcd_free);
class_unregister(&devcd_class);
}
__exitcall(devcoredump_exit);

986
drivers/base/devres.c Normal file
View file

@ -0,0 +1,986 @@
/*
* drivers/base/devres.c - device resource management
*
* Copyright (c) 2006 SUSE Linux Products GmbH
* Copyright (c) 2006 Tejun Heo <teheo@suse.de>
*
* This file is released under the GPLv2.
*/
#include <linux/device.h>
#include <linux/module.h>
#include <linux/slab.h>
#include "base.h"
struct devres_node {
struct list_head entry;
dr_release_t release;
#ifdef CONFIG_DEBUG_DEVRES
const char *name;
size_t size;
#endif
};
struct devres {
struct devres_node node;
/* -- 3 pointers */
unsigned long long data[]; /* guarantee ull alignment */
};
struct devres_group {
struct devres_node node[2];
void *id;
int color;
/* -- 8 pointers */
};
#ifdef CONFIG_DEBUG_DEVRES
static int log_devres = 0;
module_param_named(log, log_devres, int, S_IRUGO | S_IWUSR);
static void set_node_dbginfo(struct devres_node *node, const char *name,
size_t size)
{
node->name = name;
node->size = size;
}
static void devres_log(struct device *dev, struct devres_node *node,
const char *op)
{
if (unlikely(log_devres))
dev_err(dev, "DEVRES %3s %p %s (%lu bytes)\n",
op, node, node->name, (unsigned long)node->size);
}
#else /* CONFIG_DEBUG_DEVRES */
#define set_node_dbginfo(node, n, s) do {} while (0)
#define devres_log(dev, node, op) do {} while (0)
#endif /* CONFIG_DEBUG_DEVRES */
/*
* Release functions for devres group. These callbacks are used only
* for identification.
*/
static void group_open_release(struct device *dev, void *res)
{
/* noop */
}
static void group_close_release(struct device *dev, void *res)
{
/* noop */
}
static struct devres_group * node_to_group(struct devres_node *node)
{
if (node->release == &group_open_release)
return container_of(node, struct devres_group, node[0]);
if (node->release == &group_close_release)
return container_of(node, struct devres_group, node[1]);
return NULL;
}
static __always_inline struct devres * alloc_dr(dr_release_t release,
size_t size, gfp_t gfp)
{
size_t tot_size = sizeof(struct devres) + size;
struct devres *dr;
dr = kmalloc_track_caller(tot_size, gfp);
if (unlikely(!dr))
return NULL;
memset(dr, 0, offsetof(struct devres, data));
INIT_LIST_HEAD(&dr->node.entry);
dr->node.release = release;
return dr;
}
static void add_dr(struct device *dev, struct devres_node *node)
{
devres_log(dev, node, "ADD");
BUG_ON(!list_empty(&node->entry));
list_add_tail(&node->entry, &dev->devres_head);
}
#ifdef CONFIG_DEBUG_DEVRES
void * __devres_alloc(dr_release_t release, size_t size, gfp_t gfp,
const char *name)
{
struct devres *dr;
dr = alloc_dr(release, size, gfp | __GFP_ZERO);
if (unlikely(!dr))
return NULL;
set_node_dbginfo(&dr->node, name, size);
return dr->data;
}
EXPORT_SYMBOL_GPL(__devres_alloc);
#else
/**
* devres_alloc - Allocate device resource data
* @release: Release function devres will be associated with
* @size: Allocation size
* @gfp: Allocation flags
*
* Allocate devres of @size bytes. The allocated area is zeroed, then
* associated with @release. The returned pointer can be passed to
* other devres_*() functions.
*
* RETURNS:
* Pointer to allocated devres on success, NULL on failure.
*/
void * devres_alloc(dr_release_t release, size_t size, gfp_t gfp)
{
struct devres *dr;
dr = alloc_dr(release, size, gfp | __GFP_ZERO);
if (unlikely(!dr))
return NULL;
return dr->data;
}
EXPORT_SYMBOL_GPL(devres_alloc);
#endif
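/*
 * Illustrative sketch, not part of this file: the devres_alloc() /
 * devres_add() pattern that underlies the devm_* helpers. All names
 * are hypothetical.
 */
#if 0
struct example_res {
	void *cookie;
};

static void example_res_release(struct device *dev, void *res)
{
	/* tear down what the resource wraps; devres frees 'res' itself */
}

static int example_claim(struct device *dev, void *cookie)
{
	struct example_res *res;

	res = devres_alloc(example_res_release, sizeof(*res), GFP_KERNEL);
	if (!res)
		return -ENOMEM;
	res->cookie = cookie;
	devres_add(dev, res);	/* released automatically on driver detach */
	return 0;
}
#endif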
/**
* devres_for_each_res - Resource iterator
* @dev: Device to iterate resource from
* @release: Look for resources associated with this release function
* @match: Match function (optional)
* @match_data: Data for the match function
* @fn: Function to be called for each matched resource.
* @data: Data for @fn, the 3rd parameter of @fn
*
* Call @fn for each devres of @dev which is associated with @release
* and for which @match returns 1.
*
* RETURNS:
* void
*/
void devres_for_each_res(struct device *dev, dr_release_t release,
dr_match_t match, void *match_data,
void (*fn)(struct device *, void *, void *),
void *data)
{
struct devres_node *node;
struct devres_node *tmp;
unsigned long flags;
if (!fn)
return;
spin_lock_irqsave(&dev->devres_lock, flags);
list_for_each_entry_safe_reverse(node, tmp,
&dev->devres_head, entry) {
struct devres *dr = container_of(node, struct devres, node);
if (node->release != release)
continue;
if (match && !match(dev, dr->data, match_data))
continue;
fn(dev, dr->data, data);
}
spin_unlock_irqrestore(&dev->devres_lock, flags);
}
EXPORT_SYMBOL_GPL(devres_for_each_res);
/**
* devres_free - Free device resource data
* @res: Pointer to devres data to free
*
* Free devres created with devres_alloc().
*/
void devres_free(void *res)
{
if (res) {
struct devres *dr = container_of(res, struct devres, data);
BUG_ON(!list_empty(&dr->node.entry));
kfree(dr);
}
}
EXPORT_SYMBOL_GPL(devres_free);
/**
* devres_add - Register device resource
* @dev: Device to add resource to
* @res: Resource to register
*
* Register devres @res to @dev. @res should have been allocated
* using devres_alloc(). On driver detach, the associated release
* function will be invoked and devres will be freed automatically.
*/
void devres_add(struct device *dev, void *res)
{
struct devres *dr = container_of(res, struct devres, data);
unsigned long flags;
spin_lock_irqsave(&dev->devres_lock, flags);
add_dr(dev, &dr->node);
spin_unlock_irqrestore(&dev->devres_lock, flags);
}
EXPORT_SYMBOL_GPL(devres_add);
static struct devres *find_dr(struct device *dev, dr_release_t release,
dr_match_t match, void *match_data)
{
struct devres_node *node;
list_for_each_entry_reverse(node, &dev->devres_head, entry) {
struct devres *dr = container_of(node, struct devres, node);
if (node->release != release)
continue;
if (match && !match(dev, dr->data, match_data))
continue;
return dr;
}
return NULL;
}
/**
* devres_find - Find device resource
* @dev: Device to lookup resource from
* @release: Look for resources associated with this release function
* @match: Match function (optional)
* @match_data: Data for the match function
*
* Find the latest devres of @dev which is associated with @release
* and for which @match returns 1. If @match is NULL, it's considered
* to match all.
*
* RETURNS:
* Pointer to found devres, NULL if not found.
*/
void * devres_find(struct device *dev, dr_release_t release,
dr_match_t match, void *match_data)
{
struct devres *dr;
unsigned long flags;
spin_lock_irqsave(&dev->devres_lock, flags);
dr = find_dr(dev, release, match, match_data);
spin_unlock_irqrestore(&dev->devres_lock, flags);
if (dr)
return dr->data;
return NULL;
}
EXPORT_SYMBOL_GPL(devres_find);
/**
* devres_get - Find devres, if non-existent, add one atomically
* @dev: Device to lookup or add devres for
* @new_res: Pointer to new initialized devres to add if not found
* @match: Match function (optional)
* @match_data: Data for the match function
*
* Find the latest devres of @dev which has the same release function
* as @new_res and for which @match return 1. If found, @new_res is
* freed; otherwise, @new_res is added atomically.
*
* RETURNS:
* Pointer to found or added devres.
*/
void * devres_get(struct device *dev, void *new_res,
dr_match_t match, void *match_data)
{
struct devres *new_dr = container_of(new_res, struct devres, data);
struct devres *dr;
unsigned long flags;
spin_lock_irqsave(&dev->devres_lock, flags);
dr = find_dr(dev, new_dr->node.release, match, match_data);
if (!dr) {
add_dr(dev, &new_dr->node);
dr = new_dr;
new_dr = NULL;
}
spin_unlock_irqrestore(&dev->devres_lock, flags);
devres_free(new_dr);
return dr->data;
}
EXPORT_SYMBOL_GPL(devres_get);
/**
* devres_remove - Find a device resource and remove it
* @dev: Device to find resource from
* @release: Look for resources associated with this release function
* @match: Match function (optional)
* @match_data: Data for the match function
*
* Find the latest devres of @dev associated with @release and for
* which @match returns 1. If @match is NULL, it's considered to
* match all. If found, the resource is removed atomically and
* returned.
*
* RETURNS:
* Pointer to removed devres on success, NULL if not found.
*/
void * devres_remove(struct device *dev, dr_release_t release,
dr_match_t match, void *match_data)
{
struct devres *dr;
unsigned long flags;
spin_lock_irqsave(&dev->devres_lock, flags);
dr = find_dr(dev, release, match, match_data);
if (dr) {
list_del_init(&dr->node.entry);
devres_log(dev, &dr->node, "REM");
}
spin_unlock_irqrestore(&dev->devres_lock, flags);
if (dr)
return dr->data;
return NULL;
}
EXPORT_SYMBOL_GPL(devres_remove);
/**
* devres_destroy - Find a device resource and destroy it
* @dev: Device to find resource from
* @release: Look for resources associated with this release function
* @match: Match function (optional)
* @match_data: Data for the match function
*
* Find the latest devres of @dev associated with @release and for
* which @match returns 1. If @match is NULL, it's considered to
* match all. If found, the resource is removed atomically and freed.
*
* Note that the release function for the resource will not be called,
* only the devres-allocated data will be freed. The caller becomes
* responsible for freeing any other data.
*
* RETURNS:
* 0 if devres is found and freed, -ENOENT if not found.
*/
int devres_destroy(struct device *dev, dr_release_t release,
dr_match_t match, void *match_data)
{
void *res;
res = devres_remove(dev, release, match, match_data);
if (unlikely(!res))
return -ENOENT;
devres_free(res);
return 0;
}
EXPORT_SYMBOL_GPL(devres_destroy);
/**
* devres_release - Find a device resource and destroy it, calling release
* @dev: Device to find resource from
* @release: Look for resources associated with this release function
* @match: Match function (optional)
* @match_data: Data for the match function
*
* Find the latest devres of @dev associated with @release and for
* which @match returns 1. If @match is NULL, it's considered to
* match all. If found, the resource is removed atomically, the
* release function called and the resource freed.
*
* RETURNS:
* 0 if devres is found and freed, -ENOENT if not found.
*/
int devres_release(struct device *dev, dr_release_t release,
dr_match_t match, void *match_data)
{
void *res;
res = devres_remove(dev, release, match, match_data);
if (unlikely(!res))
return -ENOENT;
(*release)(dev, res);
devres_free(res);
return 0;
}
EXPORT_SYMBOL_GPL(devres_release);
static int remove_nodes(struct device *dev,
struct list_head *first, struct list_head *end,
struct list_head *todo)
{
int cnt = 0, nr_groups = 0;
struct list_head *cur;
/* First pass - move normal devres entries to @todo and clear
* devres_group colors.
*/
cur = first;
while (cur != end) {
struct devres_node *node;
struct devres_group *grp;
node = list_entry(cur, struct devres_node, entry);
cur = cur->next;
grp = node_to_group(node);
if (grp) {
/* clear color of group markers in the first pass */
grp->color = 0;
nr_groups++;
} else {
/* regular devres entry */
if (&node->entry == first)
first = first->next;
list_move_tail(&node->entry, todo);
cnt++;
}
}
if (!nr_groups)
return cnt;
/* Second pass - Scan groups and color them. A group gets
* color value of two iff the group is wholly contained in
* [cur, end). That is, for a closed group, both opening and
* closing markers should be in the range, while just the
* opening marker is enough for an open group.
*/
cur = first;
while (cur != end) {
struct devres_node *node;
struct devres_group *grp;
node = list_entry(cur, struct devres_node, entry);
cur = cur->next;
grp = node_to_group(node);
BUG_ON(!grp || list_empty(&grp->node[0].entry));
grp->color++;
if (list_empty(&grp->node[1].entry))
grp->color++;
BUG_ON(grp->color <= 0 || grp->color > 2);
if (grp->color == 2) {
/* No need to update cur or end. The removed
* nodes are always before both.
*/
list_move_tail(&grp->node[0].entry, todo);
list_del_init(&grp->node[1].entry);
}
}
return cnt;
}
static int release_nodes(struct device *dev, struct list_head *first,
struct list_head *end, unsigned long flags)
__releases(&dev->devres_lock)
{
LIST_HEAD(todo);
int cnt;
struct devres *dr, *tmp;
cnt = remove_nodes(dev, first, end, &todo);
spin_unlock_irqrestore(&dev->devres_lock, flags);
/* Release. Note that both devres and devres_group are
* handled as devres in the following loop. This is safe.
*/
list_for_each_entry_safe_reverse(dr, tmp, &todo, node.entry) {
devres_log(dev, &dr->node, "REL");
dr->node.release(dev, dr->data);
kfree(dr);
}
return cnt;
}
/**
* devres_release_all - Release all managed resources
* @dev: Device to release resources for
*
* Release all resources associated with @dev. This function is
* called on driver detach.
*/
int devres_release_all(struct device *dev)
{
unsigned long flags;
/* Looks like an uninitialized device structure */
if (WARN_ON(dev->devres_head.next == NULL))
return -ENODEV;
spin_lock_irqsave(&dev->devres_lock, flags);
return release_nodes(dev, dev->devres_head.next, &dev->devres_head,
flags);
}
/**
* devres_open_group - Open a new devres group
* @dev: Device to open devres group for
* @id: Separator ID
* @gfp: Allocation flags
*
* Open a new devres group for @dev with @id. For @id, using a
* pointer to an object which won't be used for another group is
* recommended. If @id is NULL, an address-wise unique ID is created.
*
* RETURNS:
* ID of the new group, NULL on failure.
*/
void * devres_open_group(struct device *dev, void *id, gfp_t gfp)
{
struct devres_group *grp;
unsigned long flags;
grp = kmalloc(sizeof(*grp), gfp);
if (unlikely(!grp))
return NULL;
grp->node[0].release = &group_open_release;
grp->node[1].release = &group_close_release;
INIT_LIST_HEAD(&grp->node[0].entry);
INIT_LIST_HEAD(&grp->node[1].entry);
set_node_dbginfo(&grp->node[0], "grp<", 0);
set_node_dbginfo(&grp->node[1], "grp>", 0);
grp->id = grp;
if (id)
grp->id = id;
spin_lock_irqsave(&dev->devres_lock, flags);
add_dr(dev, &grp->node[0]);
spin_unlock_irqrestore(&dev->devres_lock, flags);
return grp->id;
}
EXPORT_SYMBOL_GPL(devres_open_group);
/* Find devres group with ID @id. If @id is NULL, look for the latest. */
static struct devres_group * find_group(struct device *dev, void *id)
{
struct devres_node *node;
list_for_each_entry_reverse(node, &dev->devres_head, entry) {
struct devres_group *grp;
if (node->release != &group_open_release)
continue;
grp = container_of(node, struct devres_group, node[0]);
if (id) {
if (grp->id == id)
return grp;
} else if (list_empty(&grp->node[1].entry))
return grp;
}
return NULL;
}
/**
* devres_close_group - Close a devres group
* @dev: Device to close devres group for
* @id: ID of target group, can be NULL
*
* Close the group identified by @id. If @id is NULL, the latest open
* group is selected.
*/
void devres_close_group(struct device *dev, void *id)
{
struct devres_group *grp;
unsigned long flags;
spin_lock_irqsave(&dev->devres_lock, flags);
grp = find_group(dev, id);
if (grp)
add_dr(dev, &grp->node[1]);
else
WARN_ON(1);
spin_unlock_irqrestore(&dev->devres_lock, flags);
}
EXPORT_SYMBOL_GPL(devres_close_group);
/**
* devres_remove_group - Remove a devres group
* @dev: Device to remove group for
* @id: ID of target group, can be NULL
*
* Remove the group identified by @id. If @id is NULL, the latest
* open group is selected. Note that removing a group doesn't affect
* any other resources.
*/
void devres_remove_group(struct device *dev, void *id)
{
struct devres_group *grp;
unsigned long flags;
spin_lock_irqsave(&dev->devres_lock, flags);
grp = find_group(dev, id);
if (grp) {
list_del_init(&grp->node[0].entry);
list_del_init(&grp->node[1].entry);
devres_log(dev, &grp->node[0], "REM");
} else
WARN_ON(1);
spin_unlock_irqrestore(&dev->devres_lock, flags);
kfree(grp);
}
EXPORT_SYMBOL_GPL(devres_remove_group);
/**
* devres_release_group - Release resources in a devres group
* @dev: Device to release group for
* @id: ID of target group, can be NULL
*
* Release all resources in the group identified by @id. If @id is
* NULL, the latest open group is selected. The selected group and
* groups properly nested inside the selected group are removed.
*
* RETURNS:
* The number of released non-group resources.
*/
int devres_release_group(struct device *dev, void *id)
{
struct devres_group *grp;
unsigned long flags;
int cnt = 0;
spin_lock_irqsave(&dev->devres_lock, flags);
grp = find_group(dev, id);
if (grp) {
struct list_head *first = &grp->node[0].entry;
struct list_head *end = &dev->devres_head;
if (!list_empty(&grp->node[1].entry))
end = grp->node[1].entry.next;
cnt = release_nodes(dev, first, end, flags);
} else {
WARN_ON(1);
spin_unlock_irqrestore(&dev->devres_lock, flags);
}
return cnt;
}
EXPORT_SYMBOL_GPL(devres_release_group);
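/*
 * Illustrative sketch, not part of this file: grouping managed
 * allocations so a mid-probe failure unwinds only that group. The
 * function name is hypothetical.
 */
#if 0
static int example_probe_stage(struct device *dev)
{
	void *grp = devres_open_group(dev, NULL, GFP_KERNEL);

	if (!grp)
		return -ENOMEM;

	if (!devm_kmalloc(dev, 64, GFP_KERNEL)) {
		/* frees only resources added since the group was opened */
		devres_release_group(dev, grp);
		return -ENOMEM;
	}
	/* ... more devm_* allocations ... */

	devres_close_group(dev, grp);
	return 0;
}
#endif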
/*
* Custom devres actions allow inserting a simple function call
* into the teardown sequence.
*/
struct action_devres {
void *data;
void (*action)(void *);
};
static int devm_action_match(struct device *dev, void *res, void *p)
{
struct action_devres *devres = res;
struct action_devres *target = p;
return devres->action == target->action &&
devres->data == target->data;
}
static void devm_action_release(struct device *dev, void *res)
{
struct action_devres *devres = res;
devres->action(devres->data);
}
/**
* devm_add_action() - add a custom action to list of managed resources
* @dev: Device that owns the action
* @action: Function that should be called
* @data: Pointer to data passed to @action implementation
*
* This adds a custom action to the list of managed resources so that
* it gets executed as part of standard resource unwinding.
*/
int devm_add_action(struct device *dev, void (*action)(void *), void *data)
{
struct action_devres *devres;
devres = devres_alloc(devm_action_release,
sizeof(struct action_devres), GFP_KERNEL);
if (!devres)
return -ENOMEM;
devres->data = data;
devres->action = action;
devres_add(dev, devres);
return 0;
}
EXPORT_SYMBOL_GPL(devm_add_action);
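/*
 * Illustrative sketch, not part of this file: registering a one-shot
 * cleanup without writing a dedicated devres release function. Names
 * are hypothetical.
 */
#if 0
static void example_disable_hw(void *data)
{
	/* e.g. write a shutdown register reached through 'data' */
}

static int example_arm(struct device *dev, void *hw)
{
	int ret = devm_add_action(dev, example_disable_hw, hw);

	if (ret)
		example_disable_hw(hw);	/* no unwind was registered; act now */
	return ret;
}
#endif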
/**
* devm_remove_action() - removes previously added custom action
* @dev: Device that owns the action
* @action: Function implementing the action
* @data: Pointer to data passed to @action implementation
*
* Removes instance of @action previously added by devm_add_action().
* Both action and data should match one of the existing entries.
*/
void devm_remove_action(struct device *dev, void (*action)(void *), void *data)
{
struct action_devres devres = {
.data = data,
.action = action,
};
WARN_ON(devres_destroy(dev, devm_action_release, devm_action_match,
&devres));
}
EXPORT_SYMBOL_GPL(devm_remove_action);
/*
* Managed kmalloc/kfree
*/
static void devm_kmalloc_release(struct device *dev, void *res)
{
/* noop */
}
static int devm_kmalloc_match(struct device *dev, void *res, void *data)
{
return res == data;
}
/**
* devm_kmalloc - Resource-managed kmalloc
* @dev: Device to allocate memory for
* @size: Allocation size
* @gfp: Allocation gfp flags
*
* Managed kmalloc. Memory allocated with this function is
* automatically freed on driver detach. Like all other devres
* resources, guaranteed alignment is unsigned long long.
*
* RETURNS:
* Pointer to allocated memory on success, NULL on failure.
*/
void * devm_kmalloc(struct device *dev, size_t size, gfp_t gfp)
{
struct devres *dr;
/* use raw alloc_dr for kmalloc caller tracing */
dr = alloc_dr(devm_kmalloc_release, size, gfp);
if (unlikely(!dr))
return NULL;
/*
* This is named devm_kzalloc_release for historical reasons;
* the initial implementation did not support kmalloc, only kzalloc.
*/
set_node_dbginfo(&dr->node, "devm_kzalloc_release", size);
devres_add(dev, dr->data);
return dr->data;
}
EXPORT_SYMBOL_GPL(devm_kmalloc);
/**
* devm_kstrdup - Allocate resource-managed space and
* copy an existing string into it.
* @dev: Device to allocate memory for
* @s: the string to duplicate
* @gfp: the GFP mask used in the devm_kmalloc() call when
* allocating memory
* RETURNS:
* Pointer to allocated string on success, NULL on failure.
*/
char *devm_kstrdup(struct device *dev, const char *s, gfp_t gfp)
{
size_t size;
char *buf;
if (!s)
return NULL;
size = strlen(s) + 1;
buf = devm_kmalloc(dev, size, gfp);
if (buf)
memcpy(buf, s, size);
return buf;
}
EXPORT_SYMBOL_GPL(devm_kstrdup);
/**
* devm_kvasprintf - Allocate resource-managed space and format a string
* into it.
* @dev: Device to allocate memory for
* @gfp: the GFP mask used in the devm_kmalloc() call when
* allocating memory
* @fmt: The printf()-style format string
* @ap: Arguments for the format string
* RETURNS:
* Pointer to allocated string on success, NULL on failure.
*/
char *devm_kvasprintf(struct device *dev, gfp_t gfp, const char *fmt,
va_list ap)
{
unsigned int len;
char *p;
va_list aq;
va_copy(aq, ap);
len = vsnprintf(NULL, 0, fmt, aq);
va_end(aq);
p = devm_kmalloc(dev, len+1, gfp);
if (!p)
return NULL;
vsnprintf(p, len+1, fmt, ap);
return p;
}
EXPORT_SYMBOL(devm_kvasprintf);
/**
* devm_kasprintf - Allocate resource-managed space and format a string
* into it.
* @dev: Device to allocate memory for
* @gfp: the GFP mask used in the devm_kmalloc() call when
* allocating memory
* @fmt: The printf()-style format string
* @...: Arguments for the format string
* RETURNS:
* Pointer to allocated string on success, NULL on failure.
*/
char *devm_kasprintf(struct device *dev, gfp_t gfp, const char *fmt, ...)
{
va_list ap;
char *p;
va_start(ap, fmt);
p = devm_kvasprintf(dev, gfp, fmt, ap);
va_end(ap);
return p;
}
EXPORT_SYMBOL_GPL(devm_kasprintf);
/**
* devm_kfree - Resource-managed kfree
* @dev: Device this memory belongs to
* @p: Memory to free
*
* Free memory allocated with devm_kmalloc().
*/
void devm_kfree(struct device *dev, void *p)
{
int rc;
rc = devres_destroy(dev, devm_kmalloc_release, devm_kmalloc_match, p);
WARN_ON(rc);
}
EXPORT_SYMBOL_GPL(devm_kfree);
/**
* devm_kmemdup - Resource-managed kmemdup
* @dev: Device this memory belongs to
* @src: Memory region to duplicate
* @len: Memory region length
* @gfp: GFP mask to use
*
* Duplicate a region of memory using resource-managed kmalloc.
*/
void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp)
{
void *p;
p = devm_kmalloc(dev, len, gfp);
if (p)
memcpy(p, src, len);
return p;
}
EXPORT_SYMBOL_GPL(devm_kmemdup);
struct pages_devres {
unsigned long addr;
unsigned int order;
};
static int devm_pages_match(struct device *dev, void *res, void *p)
{
struct pages_devres *devres = res;
struct pages_devres *target = p;
return devres->addr == target->addr;
}
static void devm_pages_release(struct device *dev, void *res)
{
struct pages_devres *devres = res;
free_pages(devres->addr, devres->order);
}
/**
* devm_get_free_pages - Resource-managed __get_free_pages
* @dev: Device to allocate memory for
* @gfp_mask: Allocation gfp flags
* @order: Allocation size is (1 << order) pages
*
* Managed get_free_pages. Memory allocated with this function is
* automatically freed on driver detach.
*
* RETURNS:
* Address of allocated memory on success, 0 on failure.
*/
unsigned long devm_get_free_pages(struct device *dev,
gfp_t gfp_mask, unsigned int order)
{
struct pages_devres *devres;
unsigned long addr;
addr = __get_free_pages(gfp_mask, order);
if (unlikely(!addr))
return 0;
devres = devres_alloc(devm_pages_release,
sizeof(struct pages_devres), GFP_KERNEL);
if (unlikely(!devres)) {
free_pages(addr, order);
return 0;
}
devres->addr = addr;
devres->order = order;
devres_add(dev, devres);
return addr;
}
EXPORT_SYMBOL_GPL(devm_get_free_pages);
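/*
 * Illustrative sketch, not part of this file: an order-1 (two page)
 * buffer freed automatically on detach. The function name is
 * hypothetical.
 */
#if 0
static int example_alloc_buffer(struct device *dev)
{
	unsigned long buf = devm_get_free_pages(dev, GFP_KERNEL, 1);

	return buf ? 0 : -ENOMEM;
}
#endif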
/**
* devm_free_pages - Resource-managed free_pages
* @dev: Device this memory belongs to
* @addr: Memory to free
*
* Free memory allocated with devm_get_free_pages(). Unlike free_pages,
* there is no need to supply the @order.
*/
void devm_free_pages(struct device *dev, unsigned long addr)
{
struct pages_devres devres = { .addr = addr };
WARN_ON(devres_release(dev, devm_pages_release, devm_pages_match,
&devres));
}
EXPORT_SYMBOL_GPL(devm_free_pages);

443
drivers/base/devtmpfs.c Normal file
View file

@ -0,0 +1,443 @@
/*
* devtmpfs - kernel-maintained tmpfs-based /dev
*
* Copyright (C) 2009, Kay Sievers <kay.sievers@vrfy.org>
*
* During bootup, before any driver core device is registered,
* devtmpfs, a tmpfs-based filesystem, is created. Every driver-core
* device which requests a device node will add a node in this
* filesystem.
* By default, all nodes are named after the device, owned by root,
* and have a default mode of 0600. Subsystems can overwrite these
* defaults if needed.
*/
#include <linux/kernel.h>
#include <linux/syscalls.h>
#include <linux/mount.h>
#include <linux/device.h>
#include <linux/genhd.h>
#include <linux/namei.h>
#include <linux/fs.h>
#include <linux/shmem_fs.h>
#include <linux/ramfs.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/kthread.h>
#include "base.h"
static struct task_struct *thread;
#if defined CONFIG_DEVTMPFS_MOUNT
static int mount_dev = 1;
#else
static int mount_dev;
#endif
static DEFINE_SPINLOCK(req_lock);
static struct req {
struct req *next;
struct completion done;
int err;
const char *name;
umode_t mode; /* 0 => delete */
kuid_t uid;
kgid_t gid;
struct device *dev;
} *requests;
static int __init mount_param(char *str)
{
mount_dev = simple_strtoul(str, NULL, 0);
return 1;
}
__setup("devtmpfs.mount=", mount_param);
static struct dentry *dev_mount(struct file_system_type *fs_type, int flags,
const char *dev_name, void *data)
{
#ifdef CONFIG_TMPFS
return mount_single(fs_type, flags, data, shmem_fill_super);
#else
return mount_single(fs_type, flags, data, ramfs_fill_super);
#endif
}
static struct file_system_type dev_fs_type = {
.name = "devtmpfs",
.mount = dev_mount,
.kill_sb = kill_litter_super,
};
#ifdef CONFIG_BLOCK
static inline int is_blockdev(struct device *dev)
{
return dev->class == &block_class;
}
#else
static inline int is_blockdev(struct device *dev) { return 0; }
#endif
int devtmpfs_create_node(struct device *dev)
{
const char *tmp = NULL;
struct req req;
if (!thread)
return 0;
req.mode = 0;
req.uid = GLOBAL_ROOT_UID;
req.gid = GLOBAL_ROOT_GID;
req.name = device_get_devnode(dev, &req.mode, &req.uid, &req.gid, &tmp);
if (!req.name)
return -ENOMEM;
if (req.mode == 0)
req.mode = 0600;
if (is_blockdev(dev))
req.mode |= S_IFBLK;
else
req.mode |= S_IFCHR;
req.dev = dev;
init_completion(&req.done);
spin_lock(&req_lock);
req.next = requests;
requests = &req;
spin_unlock(&req_lock);
wake_up_process(thread);
wait_for_completion(&req.done);
kfree(tmp);
return req.err;
}
int devtmpfs_delete_node(struct device *dev)
{
const char *tmp = NULL;
struct req req;
if (!thread)
return 0;
req.name = device_get_devnode(dev, NULL, NULL, NULL, &tmp);
if (!req.name)
return -ENOMEM;
req.mode = 0;
req.dev = dev;
init_completion(&req.done);
spin_lock(&req_lock);
req.next = requests;
requests = &req;
spin_unlock(&req_lock);
wake_up_process(thread);
wait_for_completion(&req.done);
kfree(tmp);
return req.err;
}
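/*
 * Caller-path sketch (illustrative; foo_class and FOO_MAJOR are
 * assumptions): drivers normally do not call devtmpfs_create_node()
 * directly. device_create() in the driver core does, once the device
 * has a dev_t, and device_destroy() triggers devtmpfs_delete_node().
 */
static struct class *foo_class;	/* from class_create() */

static int foo_add_node(void)
{
	struct device *dev;

	dev = device_create(foo_class, NULL, MKDEV(FOO_MAJOR, 0),
			    NULL, "foo0");
	if (IS_ERR(dev))
		return PTR_ERR(dev);
	/* devtmpfs has now been asked to create /dev/foo0 */
	return 0;
}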
static int dev_mkdir(const char *name, umode_t mode)
{
struct dentry *dentry;
struct path path;
int err;
dentry = kern_path_create(AT_FDCWD, name, &path, LOOKUP_DIRECTORY);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
err = vfs_mkdir(path.dentry->d_inode, dentry, mode);
if (!err)
/* mark as kernel-created inode */
dentry->d_inode->i_private = &thread;
done_path_create(&path, dentry);
return err;
}
static int create_path(const char *nodepath)
{
char *path;
char *s;
int err = 0;
/* parent directories do not exist, create them */
path = kstrdup(nodepath, GFP_KERNEL);
if (!path)
return -ENOMEM;
s = path;
for (;;) {
s = strchr(s, '/');
if (!s)
break;
s[0] = '\0';
err = dev_mkdir(path, 0755);
if (err && err != -EEXIST)
break;
s[0] = '/';
s++;
}
kfree(path);
return err;
}
static int handle_create(const char *nodename, umode_t mode, kuid_t uid,
kgid_t gid, struct device *dev)
{
struct dentry *dentry;
struct path path;
int err;
dentry = kern_path_create(AT_FDCWD, nodename, &path, 0);
if (dentry == ERR_PTR(-ENOENT)) {
create_path(nodename);
dentry = kern_path_create(AT_FDCWD, nodename, &path, 0);
}
if (IS_ERR(dentry))
return PTR_ERR(dentry);
err = vfs_mknod(path.dentry->d_inode, dentry, mode, dev->devt);
if (!err) {
struct iattr newattrs;
newattrs.ia_mode = mode;
newattrs.ia_uid = uid;
newattrs.ia_gid = gid;
newattrs.ia_valid = ATTR_MODE|ATTR_UID|ATTR_GID;
mutex_lock(&dentry->d_inode->i_mutex);
notify_change(dentry, &newattrs, NULL);
mutex_unlock(&dentry->d_inode->i_mutex);
/* mark as kernel-created inode */
dentry->d_inode->i_private = &thread;
}
done_path_create(&path, dentry);
return err;
}
static int dev_rmdir(const char *name)
{
struct path parent;
struct dentry *dentry;
int err;
dentry = kern_path_locked(name, &parent);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
if (dentry->d_inode) {
if (dentry->d_inode->i_private == &thread)
err = vfs_rmdir(parent.dentry->d_inode, dentry);
else
err = -EPERM;
} else {
err = -ENOENT;
}
dput(dentry);
mutex_unlock(&parent.dentry->d_inode->i_mutex);
path_put(&parent);
return err;
}
static int delete_path(const char *nodepath)
{
const char *path;
int err = 0;
path = kstrdup(nodepath, GFP_KERNEL);
if (!path)
return -ENOMEM;
for (;;) {
char *base;
base = strrchr(path, '/');
if (!base)
break;
base[0] = '\0';
err = dev_rmdir(path);
if (err)
break;
}
kfree(path);
return err;
}
static int dev_mynode(struct device *dev, struct inode *inode, struct kstat *stat)
{
/* did we create it */
if (inode->i_private != &thread)
return 0;
/* does the dev_t match */
if (is_blockdev(dev)) {
if (!S_ISBLK(stat->mode))
return 0;
} else {
if (!S_ISCHR(stat->mode))
return 0;
}
if (stat->rdev != dev->devt)
return 0;
/* ours */
return 1;
}
static int handle_remove(const char *nodename, struct device *dev)
{
struct path parent;
struct dentry *dentry;
int deleted = 0;
int err;
dentry = kern_path_locked(nodename, &parent);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
if (dentry->d_inode) {
struct kstat stat;
struct path p = {.mnt = parent.mnt, .dentry = dentry};
err = vfs_getattr(&p, &stat);
if (!err && dev_mynode(dev, dentry->d_inode, &stat)) {
struct iattr newattrs;
/*
* before unlinking this node, reset permissions
* of possible references like hardlinks
*/
newattrs.ia_uid = GLOBAL_ROOT_UID;
newattrs.ia_gid = GLOBAL_ROOT_GID;
newattrs.ia_mode = stat.mode & ~0777;
newattrs.ia_valid =
ATTR_UID|ATTR_GID|ATTR_MODE;
mutex_lock(&dentry->d_inode->i_mutex);
notify_change(dentry, &newattrs, NULL);
mutex_unlock(&dentry->d_inode->i_mutex);
err = vfs_unlink(parent.dentry->d_inode, dentry, NULL);
if (!err || err == -ENOENT)
deleted = 1;
}
} else {
err = -ENOENT;
}
dput(dentry);
mutex_unlock(&parent.dentry->d_inode->i_mutex);
path_put(&parent);
if (deleted && strchr(nodename, '/'))
delete_path(nodename);
return err;
}
/*
* If configured, or requested by the commandline, devtmpfs will be
 * auto-mounted after the kernel has mounted the root filesystem.
*/
int devtmpfs_mount(const char *mntdir)
{
int err;
if (!mount_dev)
return 0;
if (!thread)
return 0;
err = sys_mount("devtmpfs", (char *)mntdir, "devtmpfs", MS_SILENT, NULL);
if (err)
printk(KERN_INFO "devtmpfs: error mounting %i\n", err);
else
printk(KERN_INFO "devtmpfs: mounted\n");
return err;
}
static DECLARE_COMPLETION(setup_done);
static int handle(const char *name, umode_t mode, kuid_t uid, kgid_t gid,
struct device *dev)
{
if (mode)
return handle_create(name, mode, uid, gid, dev);
else
return handle_remove(name, dev);
}
static int devtmpfsd(void *p)
{
char options[] = "mode=0755";
int *err = p;
*err = sys_unshare(CLONE_NEWNS);
if (*err)
goto out;
*err = sys_mount("devtmpfs", "/", "devtmpfs", MS_SILENT, options);
if (*err)
goto out;
sys_chdir("/.."); /* will traverse into overmounted root */
sys_chroot(".");
complete(&setup_done);
while (1) {
spin_lock(&req_lock);
while (requests) {
struct req *req = requests;
requests = NULL;
spin_unlock(&req_lock);
while (req) {
struct req *next = req->next;
req->err = handle(req->name, req->mode,
req->uid, req->gid, req->dev);
complete(&req->done);
req = next;
}
spin_lock(&req_lock);
}
__set_current_state(TASK_INTERRUPTIBLE);
spin_unlock(&req_lock);
schedule();
}
return 0;
out:
complete(&setup_done);
return *err;
}
/*
* Create devtmpfs instance, driver-core devices will add their device
* nodes here.
*/
int __init devtmpfs_init(void)
{
int err = register_filesystem(&dev_fs_type);
if (err) {
printk(KERN_ERR "devtmpfs: unable to register devtmpfs "
"type %i\n", err);
return err;
}
thread = kthread_run(devtmpfsd, &err, "kdevtmpfs");
if (!IS_ERR(thread)) {
wait_for_completion(&setup_done);
} else {
err = PTR_ERR(thread);
thread = NULL;
}
if (err) {
printk(KERN_ERR "devtmpfs: unable to create devtmpfs %i\n", err);
unregister_filesystem(&dev_fs_type);
return err;
}
printk(KERN_INFO "devtmpfs: initialized\n");
return 0;
}

327
drivers/base/dma-coherent.c Normal file
View file

@ -0,0 +1,327 @@
/*
* Coherent per-device memory handling.
* Borrowed from i386
*/
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/dma-mapping.h>
struct dma_coherent_mem {
void *virt_base;
dma_addr_t device_base;
unsigned long pfn_base;
int size;
int flags;
unsigned long *bitmap;
spinlock_t spinlock;
};
static int dma_init_coherent_memory(phys_addr_t phys_addr, dma_addr_t device_addr,
size_t size, int flags,
struct dma_coherent_mem **mem)
{
struct dma_coherent_mem *dma_mem = NULL;
void __iomem *mem_base = NULL;
int pages = size >> PAGE_SHIFT;
int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
if ((flags & (DMA_MEMORY_MAP | DMA_MEMORY_IO)) == 0)
goto out;
if (!size)
goto out;
mem_base = ioremap(phys_addr, size);
if (!mem_base)
goto out;
dma_mem = kzalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
if (!dma_mem)
goto out;
dma_mem->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
if (!dma_mem->bitmap)
goto out;
dma_mem->virt_base = mem_base;
dma_mem->device_base = device_addr;
dma_mem->pfn_base = PFN_DOWN(phys_addr);
dma_mem->size = pages;
dma_mem->flags = flags;
spin_lock_init(&dma_mem->spinlock);
*mem = dma_mem;
if (flags & DMA_MEMORY_MAP)
return DMA_MEMORY_MAP;
return DMA_MEMORY_IO;
out:
kfree(dma_mem);
if (mem_base)
iounmap(mem_base);
return 0;
}
static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
{
if (!mem)
return;
iounmap(mem->virt_base);
kfree(mem->bitmap);
kfree(mem);
}
static int dma_assign_coherent_memory(struct device *dev,
struct dma_coherent_mem *mem)
{
if (dev->dma_mem)
return -EBUSY;
dev->dma_mem = mem;
/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
return 0;
}
int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
dma_addr_t device_addr, size_t size, int flags)
{
struct dma_coherent_mem *mem;
int ret;
ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags,
&mem);
if (ret == 0)
return 0;
if (dma_assign_coherent_memory(dev, mem) == 0)
return ret;
dma_release_coherent_memory(mem);
return 0;
}
EXPORT_SYMBOL(dma_declare_coherent_memory);
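/*
 * Usage sketch (illustrative; the SRAM address and size are
 * assumptions): dedicating a 1 MiB on-chip SRAM window to one device,
 * so its dma_alloc_coherent() calls are served from that pool. Note
 * the function returns 0 on failure and a DMA_MEMORY_* bit on success.
 */
static int baz_probe(struct device *dev)
{
	int rc;

	rc = dma_declare_coherent_memory(dev, 0x40000000, 0x40000000,
					 SZ_1M,
					 DMA_MEMORY_MAP |
					 DMA_MEMORY_EXCLUSIVE);
	if (rc == 0)
		return -ENOMEM;
	return 0;
}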
void dma_release_declared_memory(struct device *dev)
{
struct dma_coherent_mem *mem = dev->dma_mem;
if (!mem)
return;
dma_release_coherent_memory(mem);
dev->dma_mem = NULL;
}
EXPORT_SYMBOL(dma_release_declared_memory);
void *dma_mark_declared_memory_occupied(struct device *dev,
dma_addr_t device_addr, size_t size)
{
struct dma_coherent_mem *mem = dev->dma_mem;
unsigned long flags;
int pos, err;
size += device_addr & ~PAGE_MASK;
if (!mem)
return ERR_PTR(-EINVAL);
spin_lock_irqsave(&mem->spinlock, flags);
pos = (device_addr - mem->device_base) >> PAGE_SHIFT;
err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
spin_unlock_irqrestore(&mem->spinlock, flags);
if (err != 0)
return ERR_PTR(err);
return mem->virt_base + (pos << PAGE_SHIFT);
}
EXPORT_SYMBOL(dma_mark_declared_memory_occupied);
/**
* dma_alloc_from_coherent() - try to allocate memory from the per-device coherent area
*
* @dev: device from which we allocate memory
* @size: size of requested memory area
* @dma_handle: This will be filled with the correct dma handle
* @ret: This pointer will be filled with the virtual address
* to allocated area.
*
* This function should be only called from per-arch dma_alloc_coherent()
* to support allocation from per-device coherent memory pools.
*
* Returns 0 if dma_alloc_coherent should continue with allocating from
* generic memory areas, or !0 if dma_alloc_coherent should return @ret.
*/
int dma_alloc_from_coherent(struct device *dev, ssize_t size,
dma_addr_t *dma_handle, void **ret)
{
struct dma_coherent_mem *mem;
int order = get_order(size);
unsigned long flags;
int pageno;
if (!dev)
return 0;
mem = dev->dma_mem;
if (!mem)
return 0;
*ret = NULL;
spin_lock_irqsave(&mem->spinlock, flags);
if (unlikely(size > (mem->size << PAGE_SHIFT)))
goto err;
pageno = bitmap_find_free_region(mem->bitmap, mem->size, order);
if (unlikely(pageno < 0))
goto err;
/*
* Memory was found in the per-device area.
*/
*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
*ret = mem->virt_base + (pageno << PAGE_SHIFT);
memset(*ret, 0, size);
spin_unlock_irqrestore(&mem->spinlock, flags);
return 1;
err:
spin_unlock_irqrestore(&mem->spinlock, flags);
/*
* In the case where the allocation can not be satisfied from the
* per-device area, try to fall back to generic memory if the
* constraints allow it.
*/
return mem->flags & DMA_MEMORY_EXCLUSIVE;
}
EXPORT_SYMBOL(dma_alloc_from_coherent);
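/*
 * Call-site sketch (illustrative, not part of this file): an
 * arch-level dma_alloc_coherent() tries the per-device pool first and
 * falls back to its generic path only when this helper returns 0.
 */
static void *arch_dma_alloc_sketch(struct device *dev, size_t size,
				   dma_addr_t *handle, gfp_t gfp)
{
	void *ret;

	if (dma_alloc_from_coherent(dev, size, handle, &ret))
		return ret;	/* served (or refused) by the device pool */

	/*
	 * The generic path would go here; a real implementation must
	 * also fill in *handle for the buffer it returns.
	 */
	return NULL;
}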
/**
* dma_release_from_coherent() - try to free the memory allocated from per-device coherent memory pool
* @dev: device from which the memory was allocated
* @order: the order of pages allocated
* @vaddr: virtual address of allocated pages
*
* This checks whether the memory was allocated from the per-device
* coherent memory pool and if so, releases that memory.
*
* Returns 1 if we correctly released the memory, or 0 if
* dma_release_coherent() should proceed with releasing memory from
* generic pools.
*/
int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
{
struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
if (mem && vaddr >= mem->virt_base && vaddr <
(mem->virt_base + (mem->size << PAGE_SHIFT))) {
int page = (vaddr - mem->virt_base) >> PAGE_SHIFT;
unsigned long flags;
spin_lock_irqsave(&mem->spinlock, flags);
bitmap_release_region(mem->bitmap, page, order);
spin_unlock_irqrestore(&mem->spinlock, flags);
return 1;
}
return 0;
}
EXPORT_SYMBOL(dma_release_from_coherent);
/**
* dma_mmap_from_coherent() - try to mmap the memory allocated from
* per-device coherent memory pool to userspace
* @dev: device from which the memory was allocated
* @vma: vm_area for the userspace memory
* @vaddr: cpu address returned by dma_alloc_from_coherent
* @size: size of the memory buffer allocated by dma_alloc_from_coherent
* @ret: result from remap_pfn_range()
*
* This checks whether the memory was allocated from the per-device
* coherent memory pool and if so, maps that memory to the provided vma.
*
* Returns 1 if we correctly mapped the memory, or 0 if the caller should
* proceed with mapping memory from generic pools.
*/
int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
void *vaddr, size_t size, int *ret)
{
struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
if (mem && vaddr >= mem->virt_base && vaddr + size <=
(mem->virt_base + (mem->size << PAGE_SHIFT))) {
unsigned long off = vma->vm_pgoff;
int start = (vaddr - mem->virt_base) >> PAGE_SHIFT;
int user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
int count = size >> PAGE_SHIFT;
*ret = -ENXIO;
if (off < count && user_count <= count - off) {
unsigned long pfn = mem->pfn_base + start + off;
*ret = remap_pfn_range(vma, vma->vm_start, pfn,
user_count << PAGE_SHIFT,
vma->vm_page_prot);
}
return 1;
}
return 0;
}
EXPORT_SYMBOL(dma_mmap_from_coherent);
/*
* Support for reserved memory regions defined in device tree
*/
#ifdef CONFIG_OF_RESERVED_MEM
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/of_reserved_mem.h>
static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
{
struct dma_coherent_mem *mem = rmem->priv;
if (!mem &&
dma_init_coherent_memory(rmem->base, rmem->base, rmem->size,
DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE,
&mem) != DMA_MEMORY_MAP) {
pr_err("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);
return -ENODEV;
}
rmem->priv = mem;
dma_assign_coherent_memory(dev, mem);
return 0;
}
static void rmem_dma_device_release(struct reserved_mem *rmem,
struct device *dev)
{
dev->dma_mem = NULL;
}
static const struct reserved_mem_ops rmem_dma_ops = {
.device_init = rmem_dma_device_init,
.device_release = rmem_dma_device_release,
};
static int __init rmem_dma_setup(struct reserved_mem *rmem)
{
unsigned long node = rmem->fdt_node;
if (of_get_flat_dt_prop(node, "reusable", NULL))
return -EINVAL;
#ifdef CONFIG_ARM
if (!of_get_flat_dt_prop(node, "no-map", NULL)) {
pr_err("Reserved memory: regions without no-map are not yet supported\n");
return -EINVAL;
}
#endif
rmem->ops = &rmem_dma_ops;
pr_info("Reserved memory: created DMA memory pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);
return 0;
}
RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
#endif

557
drivers/base/dma-contiguous.c Normal file
View file

@ -0,0 +1,557 @@
/*
* Contiguous Memory Allocator for DMA mapping framework
* Copyright (c) 2010-2011 by Samsung Electronics.
* Written by:
* Marek Szyprowski <m.szyprowski@samsung.com>
* Michal Nazarewicz <mina86@mina86.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
 * License, or (at your option) any later version of the License.
*/
#define pr_fmt(fmt) "cma: " fmt
#ifdef CONFIG_CMA_DEBUG
#ifndef DEBUG
# define DEBUG
#endif
#endif
#include <asm/compat.h>
#include <asm/page.h>
#include <asm/dma-contiguous.h>
#include <linux/memblock.h>
#include <linux/err.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/page-isolation.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/swap.h>
#include <linux/mm_types.h>
#include <linux/dma-contiguous.h>
struct cma {
unsigned long base_pfn;
unsigned long count;
unsigned long free_count;
unsigned long *bitmap;
unsigned long carved_out_count;
bool isolated;
};
struct cma *dma_contiguous_default_area;
#ifdef CONFIG_CMA_SIZE_MBYTES
#define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
#else
#define CMA_SIZE_MBYTES 0
#endif
/*
* Default global CMA area size can be defined in kernel's .config.
 * This is useful mainly for distro maintainers to create a kernel
* that works correctly for most supported systems.
* The size can be set in bytes or as a percentage of the total memory
* in the system.
*
 * Users who want to set the size of the global CMA area for their
 * system should use the cma= kernel parameter.
*/
static const phys_addr_t size_bytes = CMA_SIZE_MBYTES * SZ_1M;
static phys_addr_t size_cmdline = -1;
static int __init early_cma(char *p)
{
pr_debug("%s(%s)\n", __func__, p);
size_cmdline = memparse(p, &p);
return 0;
}
early_param("cma", early_cma);
#ifdef CONFIG_CMA_SIZE_PERCENTAGE
static phys_addr_t __init __maybe_unused cma_early_percent_memory(void)
{
struct memblock_region *reg;
unsigned long total_pages = 0;
/*
* We cannot use memblock_phys_mem_size() here, because
* memblock_analyze() has not been called yet.
*/
for_each_memblock(memory, reg)
total_pages += memblock_region_memory_end_pfn(reg) -
memblock_region_memory_base_pfn(reg);
return (total_pages * CONFIG_CMA_SIZE_PERCENTAGE / 100) << PAGE_SHIFT;
}
#else
static inline __maybe_unused phys_addr_t cma_early_percent_memory(void)
{
return 0;
}
#endif
static struct cma cma_areas[MAX_CMA_AREAS];
static unsigned cma_area_count;
static int __init __dma_contiguous_reserve_area(
phys_addr_t size, phys_addr_t base,
phys_addr_t limit, struct cma **res_cma)
{
struct cma *cma = &cma_areas[cma_area_count];
phys_addr_t alignment;
int ret = 0;
pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
(unsigned long)size, (unsigned long)base,
(unsigned long)limit);
/* Sanity checks */
if (cma_area_count == ARRAY_SIZE(cma_areas)) {
pr_err("Not enough slots for CMA reserved regions!\n");
return -ENOSPC;
}
if (!size)
return -EINVAL;
/* Sanitise input arguments */
#ifndef CMA_NO_MIGRATION
alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
#else
/* constraints for memory protection */
	alignment = (size < SZ_1M) ? (SZ_4K << get_order(size)) : SZ_1M;
#endif
if (base & (alignment - 1)) {
pr_err("Invalid alignment of base address %pa\n", &base);
return -EINVAL;
}
base = ALIGN(base, alignment);
size = ALIGN(size, alignment);
limit &= ~(alignment - 1);
/* Reserve memory */
if (base) {
if (memblock_is_region_reserved(base, size) ||
memblock_reserve(base, size) < 0) {
ret = -EBUSY;
goto err;
}
} else {
/*
* Use __memblock_alloc_base() since
* memblock_alloc_base() panic()s.
*/
phys_addr_t addr = __memblock_alloc_base(size, alignment, limit);
if (!addr) {
ret = -ENOMEM;
goto err;
} else {
base = addr;
}
}
/*
* Each reserved area must be initialised later, when more kernel
* subsystems (like slab allocator) are available.
*/
cma->base_pfn = PFN_DOWN(base);
cma->count = size >> PAGE_SHIFT;
cma->free_count = cma->count;
*res_cma = cma;
cma_area_count++;
pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
(unsigned long)base);
/* Architecture specific contiguous memory fixup. */
dma_contiguous_early_fixup(base, size);
return 0;
err:
pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
return ret;
}
/**
* dma_contiguous_reserve() - reserve area(s) for contiguous memory handling
* @limit: End address of the reserved memory (optional, 0 for any).
*
* This function reserves memory from early allocator. It should be
* called by arch specific code once the early allocator (memblock or bootmem)
* has been activated and all other subsystems have already allocated/reserved
* memory.
*/
void __init dma_contiguous_reserve(phys_addr_t limit)
{
phys_addr_t selected_size = 0;
pr_debug("%s(limit %08lx)\n", __func__, (unsigned long)limit);
if (size_cmdline != -1) {
selected_size = size_cmdline;
} else {
#ifdef CONFIG_CMA_SIZE_SEL_MBYTES
selected_size = size_bytes;
#elif defined(CONFIG_CMA_SIZE_SEL_PERCENTAGE)
selected_size = cma_early_percent_memory();
#elif defined(CONFIG_CMA_SIZE_SEL_MIN)
selected_size = min(size_bytes, cma_early_percent_memory());
#elif defined(CONFIG_CMA_SIZE_SEL_MAX)
selected_size = max(size_bytes, cma_early_percent_memory());
#endif
}
if (selected_size && !dma_contiguous_default_area) {
pr_debug("%s: reserving %ld MiB for global area\n", __func__,
(unsigned long)selected_size / SZ_1M);
__dma_contiguous_reserve_area(selected_size, 0, limit,
&dma_contiguous_default_area);
}
}
static DEFINE_MUTEX(cma_mutex);
#ifndef CMA_NO_MIGRATION
static int __init cma_activate_area(struct cma *cma)
{
int bitmap_size = BITS_TO_LONGS(cma->count) * sizeof(long);
unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
unsigned i = cma->count >> pageblock_order;
struct zone *zone;
cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
if (!cma->bitmap)
return -ENOMEM;
WARN_ON_ONCE(!pfn_valid(pfn));
zone = page_zone(pfn_to_page(pfn));
do {
unsigned j;
base_pfn = pfn;
for (j = pageblock_nr_pages; j; --j, pfn++) {
WARN_ON_ONCE(!pfn_valid(pfn));
if (page_zone(pfn_to_page(pfn)) != zone)
return -EINVAL;
}
init_cma_reserved_pageblock(pfn_to_page(base_pfn));
} while (--i);
return 0;
}
#else
static int __init cma_activate_area(struct cma *cma)
{
return 0;
}
#endif
static int __init cma_init_reserved_areas(void)
{
int i;
for (i = 0; i < cma_area_count; i++) {
int ret = cma_activate_area(&cma_areas[i]);
if (ret)
return ret;
}
return 0;
}
core_initcall(cma_init_reserved_areas);
/**
* dma_contiguous_reserve_area() - reserve custom contiguous area
 * @size: Size of the reserved area (in bytes),
 * @base: Base address of the reserved area (optional, use 0 for any),
 * @limit: End address of the reserved memory (optional, 0 for any).
* @res_cma: Pointer to store the created cma region.
*
* This function reserves memory from early allocator. It should be
* called by arch specific code once the early allocator (memblock or bootmem)
* has been activated and all other subsystems have already allocated/reserved
 * memory. This function allows the creation of custom reserved areas
 * for specific devices.
*/
#ifdef CONFIG_OF_RESERVED_MEM
int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
phys_addr_t limit, struct cma **res_cma)
{
struct cma *cma = &cma_areas[cma_area_count];
pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
(unsigned long)size, (unsigned long)base,
(unsigned long)limit);
/* Sanity checks */
if (cma_area_count == ARRAY_SIZE(cma_areas)) {
pr_err("Not enough slots for CMA reserved regions!\n");
return -ENOSPC;
}
if (!size)
return -EINVAL;
cma->base_pfn = PFN_DOWN(base);
cma->count = size >> PAGE_SHIFT;
cma->free_count = cma->count;
*res_cma = cma;
cma_area_count++;
pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
(unsigned long)base);
/* Architecture specific contiguous memory fixup. */
dma_contiguous_early_fixup(base, size);
return 0;
}
#else
int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
phys_addr_t limit, struct cma **res_cma)
{
return __dma_contiguous_reserve_area(size, base, limit, res_cma);
}
#endif
/**
* dma_alloc_from_contiguous() - allocate pages from contiguous area
* @dev: Pointer to device for which the allocation is performed.
* @count: Requested number of pages.
* @align: Requested alignment of pages (in PAGE_SIZE order).
*
 * This function allocates a memory buffer for the specified device. It
 * uses the device-specific contiguous memory area if available, or the
 * default global one. Requires the architecture specific
 * dev_get_cma_area() helper function.
*/
struct page *dma_alloc_from_contiguous(struct device *dev, int count,
unsigned int align)
{
unsigned long mask, pfn, pageno, start = 0;
struct cma *cma = dev_get_cma_area(dev);
struct page *page = NULL;
int ret;
if (!cma || !cma->count)
return NULL;
if (align > CONFIG_CMA_ALIGNMENT)
align = CONFIG_CMA_ALIGNMENT;
pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
count, align);
if (!count)
return NULL;
mask = (1 << align) - 1;
mutex_lock(&cma_mutex);
for (;;) {
pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
start, count, mask);
if (pageno >= cma->count)
break;
pfn = cma->base_pfn + pageno;
ret = cma->isolated ?
0 : alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
if (ret == 0) {
bitmap_set(cma->bitmap, pageno, count);
page = pfn_to_page(pfn);
cma->free_count -= count;
break;
} else if (ret != -EBUSY) {
break;
}
pr_debug("%s(): memory range at %p is busy, retrying\n",
__func__, pfn_to_page(pfn));
/* try again with a bit different memory target */
start = pageno + mask + 1;
}
mutex_unlock(&cma_mutex);
pr_debug("%s(): returned %p\n", __func__, page);
return page;
}
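/*
 * Usage sketch (illustrative; the 4 MiB size and 1 MiB alignment are
 * assumptions): allocating a physically contiguous buffer from the
 * device's CMA area and handing it back when done.
 */
static int grab_contig_buffer(struct device *dev)
{
	int count = SZ_4M >> PAGE_SHIFT;
	struct page *pages;

	pages = dma_alloc_from_contiguous(dev, count, get_order(SZ_1M));
	if (!pages)
		return -ENOMEM;
	/* buffer spans page_to_phys(pages) .. + SZ_4M */
	dma_release_from_contiguous(dev, pages, count);
	return 0;
}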
/**
* dma_release_from_contiguous() - release allocated pages
* @dev: Pointer to device for which the pages were allocated.
* @pages: Allocated pages.
* @count: Number of allocated pages.
*
* This function releases memory allocated by dma_alloc_from_contiguous().
* It returns false when provided pages do not belong to contiguous area and
* true otherwise.
*/
bool dma_release_from_contiguous(struct device *dev, struct page *pages,
int count)
{
struct cma *cma = dev_get_cma_area(dev);
unsigned long pfn;
if (!cma || !pages)
return false;
pr_debug("%s(page %p)\n", __func__, (void *)pages);
pfn = page_to_pfn(pages);
if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
return false;
VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
mutex_lock(&cma_mutex);
bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
if (!cma->isolated)
free_contig_range(pfn, count);
cma->free_count += count;
mutex_unlock(&cma_mutex);
return true;
}
/**
 * dma_contiguous_info() - retrieve contiguous memory information
* @dev: Pointer to device to get the information.
* @info: [OUT] pointer to a structure to store the information
*
 * This fills @info with the status of the device's contiguous memory
 * area. Returns -ENODEV if the given device has no contiguous memory area.
*/
int dma_contiguous_info(struct device *dev, struct cma_info *info)
{
struct cma *cma = dev_get_cma_area(dev);
if (!info)
return 0;
if (!cma)
return -ENODEV;
info->base = cma->base_pfn << PAGE_SHIFT;
info->size = cma->count << PAGE_SHIFT;
info->free = cma->free_count << PAGE_SHIFT;
info->isolated = cma->isolated;
return 0;
}
#ifndef CMA_NO_MIGRATION
static void dma_contiguous_deisolate_until(struct device *dev, int idx_until)
{
struct cma *cma = dev_get_cma_area(dev);
int idx;
if (!cma || !idx_until)
return;
mutex_lock(&cma_mutex);
if (!cma->isolated) {
mutex_unlock(&cma_mutex);
dev_err(dev, "Not isolated!\n");
return;
}
idx = find_first_zero_bit(cma->bitmap, idx_until);
while (idx < idx_until) {
int idx_set;
idx_set = find_next_bit(cma->bitmap, idx_until, idx);
free_contig_range(cma->base_pfn + idx, idx_set - idx);
idx = find_next_zero_bit(cma->bitmap, idx_until, idx_set);
}
cma->isolated = false;
mutex_unlock(&cma_mutex);
}
/**
* dma_contiguous_deisolate() - return contiguous memory to the page allocator
* @dev: Pointer to device which owns the contiguous memory
*
 * This function returns the contiguous memory that is not currently
 * allocated through CMA back to the page allocator, so that the kernel
 * can use it for ordinary allocations again.
*/
void dma_contiguous_deisolate(struct device *dev)
{
	struct cma *cma = dev_get_cma_area(dev);

	if (cma)
		dma_contiguous_deisolate_until(dev, cma->count);
}
/**
* dma_contiguous_isolate() - isolate contiguous memory from the page allocator
* @dev: Pointer to device which owns the contiguous memory
*
* This function isolates contiguous memory from the page allocator. If some of
* the contiguous memory is allocated, it is reclaimed.
*/
int dma_contiguous_isolate(struct device *dev)
{
struct cma *cma = dev_get_cma_area(dev);
int ret;
int idx;
if (!cma)
return -ENODEV;
if (cma->count == 0)
return 0;
mutex_lock(&cma_mutex);
if (cma->isolated) {
mutex_unlock(&cma_mutex);
dev_err(dev, "Alread isolated!\n");
return 0;
}
idx = find_first_zero_bit(cma->bitmap, cma->count);
while (idx < cma->count) {
int idx_set;
idx_set = find_next_bit(cma->bitmap, cma->count, idx);
do {
ret = alloc_contig_range(cma->base_pfn + idx,
cma->base_pfn + idx_set,
MIGRATE_CMA);
} while (ret == -EBUSY);
if (ret < 0) {
mutex_unlock(&cma_mutex);
dma_contiguous_deisolate_until(dev, idx_set);
dev_err(dev, "Failed to isolate %#lx@%#010llx (%d).\n",
(idx_set - idx) * PAGE_SIZE,
PFN_PHYS(cma->base_pfn + idx), ret);
return ret;
}
idx = find_next_zero_bit(cma->bitmap, cma->count, idx_set);
}
cma->isolated = true;
mutex_unlock(&cma_mutex);
return 0;
}
#endif /* CMA_NO_MIGRATION */
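/*
 * Usage sketch (illustrative): pre-isolating the whole CMA area before
 * a burst of allocations, so each dma_alloc_from_contiguous() call can
 * skip the per-range migration step, then handing the memory back to
 * the page allocator afterwards.
 */
static int warm_up_cma(struct device *dev)
{
	int ret = dma_contiguous_isolate(dev);

	if (ret)
		return ret;
	/* ... dma_alloc_from_contiguous()/dma_release_from_contiguous() ... */
	dma_contiguous_deisolate(dev);
	return 0;
}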

341
drivers/base/dma-mapping.c Normal file
View file

@ -0,0 +1,341 @@
/*
* drivers/base/dma-mapping.c - arch-independent dma-mapping routines
*
* Copyright (c) 2006 SUSE Linux Products GmbH
* Copyright (c) 2006 Tejun Heo <teheo@suse.de>
*
* This file is released under the GPLv2.
*/
#include <linux/dma-mapping.h>
#include <linux/export.h>
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <asm-generic/dma-coherent.h>
/*
* Managed DMA API
*/
struct dma_devres {
size_t size;
void *vaddr;
dma_addr_t dma_handle;
};
static void dmam_coherent_release(struct device *dev, void *res)
{
struct dma_devres *this = res;
dma_free_coherent(dev, this->size, this->vaddr, this->dma_handle);
}
static void dmam_noncoherent_release(struct device *dev, void *res)
{
struct dma_devres *this = res;
dma_free_noncoherent(dev, this->size, this->vaddr, this->dma_handle);
}
static int dmam_match(struct device *dev, void *res, void *match_data)
{
struct dma_devres *this = res, *match = match_data;
if (this->vaddr == match->vaddr) {
WARN_ON(this->size != match->size ||
this->dma_handle != match->dma_handle);
return 1;
}
return 0;
}
/**
* dmam_alloc_coherent - Managed dma_alloc_coherent()
* @dev: Device to allocate coherent memory for
* @size: Size of allocation
* @dma_handle: Out argument for allocated DMA handle
* @gfp: Allocation flags
*
* Managed dma_alloc_coherent(). Memory allocated using this function
* will be automatically released on driver detach.
*
* RETURNS:
* Pointer to allocated memory on success, NULL on failure.
*/
void *dmam_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
{
struct dma_devres *dr;
void *vaddr;
dr = devres_alloc(dmam_coherent_release, sizeof(*dr), gfp);
if (!dr)
return NULL;
vaddr = dma_alloc_coherent(dev, size, dma_handle, gfp);
if (!vaddr) {
devres_free(dr);
return NULL;
}
dr->vaddr = vaddr;
dr->dma_handle = *dma_handle;
dr->size = size;
devres_add(dev, dr);
return vaddr;
}
EXPORT_SYMBOL(dmam_alloc_coherent);
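/*
 * Usage sketch (illustrative; qux_probe and the ring size are
 * assumptions): with the managed variant, neither the error paths nor
 * remove() need an explicit free; devres releases the ring on detach.
 */
static int qux_probe(struct device *dev)
{
	dma_addr_t ring_dma;
	void *ring;

	ring = dmam_alloc_coherent(dev, PAGE_SIZE, &ring_dma, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;
	/* program ring_dma into the device; use ring from the CPU side */
	return 0;
}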
/**
* dmam_free_coherent - Managed dma_free_coherent()
* @dev: Device to free coherent memory for
* @size: Size of allocation
* @vaddr: Virtual address of the memory to free
* @dma_handle: DMA handle of the memory to free
*
* Managed dma_free_coherent().
*/
void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle)
{
struct dma_devres match_data = { size, vaddr, dma_handle };
dma_free_coherent(dev, size, vaddr, dma_handle);
WARN_ON(devres_destroy(dev, dmam_coherent_release, dmam_match,
&match_data));
}
EXPORT_SYMBOL(dmam_free_coherent);
/**
 * dmam_alloc_noncoherent - Managed dma_alloc_noncoherent()
 * @dev: Device to allocate noncoherent memory for
* @size: Size of allocation
* @dma_handle: Out argument for allocated DMA handle
* @gfp: Allocation flags
*
 * Managed dma_alloc_noncoherent(). Memory allocated using this
* function will be automatically released on driver detach.
*
* RETURNS:
* Pointer to allocated memory on success, NULL on failure.
*/
void *dmam_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
{
struct dma_devres *dr;
void *vaddr;
dr = devres_alloc(dmam_noncoherent_release, sizeof(*dr), gfp);
if (!dr)
return NULL;
vaddr = dma_alloc_noncoherent(dev, size, dma_handle, gfp);
if (!vaddr) {
devres_free(dr);
return NULL;
}
dr->vaddr = vaddr;
dr->dma_handle = *dma_handle;
dr->size = size;
devres_add(dev, dr);
return vaddr;
}
EXPORT_SYMBOL(dmam_alloc_noncoherent);
/**
 * dmam_free_noncoherent - Managed dma_free_noncoherent()
* @dev: Device to free noncoherent memory for
* @size: Size of allocation
* @vaddr: Virtual address of the memory to free
* @dma_handle: DMA handle of the memory to free
*
* Managed dma_free_noncoherent().
*/
void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle)
{
struct dma_devres match_data = { size, vaddr, dma_handle };
dma_free_noncoherent(dev, size, vaddr, dma_handle);
	WARN_ON(devres_destroy(dev, dmam_noncoherent_release, dmam_match,
			       &match_data));
}
EXPORT_SYMBOL(dmam_free_noncoherent);
#ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
static void dmam_coherent_decl_release(struct device *dev, void *res)
{
dma_release_declared_memory(dev);
}
/**
* dmam_declare_coherent_memory - Managed dma_declare_coherent_memory()
* @dev: Device to declare coherent memory for
* @phys_addr: Physical address of coherent memory to be declared
* @device_addr: Device address of coherent memory to be declared
* @size: Size of coherent memory to be declared
* @flags: Flags
*
* Managed dma_declare_coherent_memory().
*
* RETURNS:
* 0 on success, -errno on failure.
*/
int dmam_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
dma_addr_t device_addr, size_t size, int flags)
{
void *res;
int rc;
res = devres_alloc(dmam_coherent_decl_release, 0, GFP_KERNEL);
if (!res)
return -ENOMEM;
rc = dma_declare_coherent_memory(dev, phys_addr, device_addr, size,
flags);
if (rc == 0)
devres_add(dev, res);
else
devres_free(res);
return rc;
}
EXPORT_SYMBOL(dmam_declare_coherent_memory);
/**
* dmam_release_declared_memory - Managed dma_release_declared_memory().
* @dev: Device to release declared coherent memory for
*
 * Managed dma_release_declared_memory().
*/
void dmam_release_declared_memory(struct device *dev)
{
WARN_ON(devres_destroy(dev, dmam_coherent_decl_release, NULL, NULL));
}
EXPORT_SYMBOL(dmam_release_declared_memory);
#endif
/*
* Create scatter-list for the already allocated DMA buffer.
*/
int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size)
{
struct page *page = virt_to_page(cpu_addr);
int ret;
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
if (unlikely(ret))
return ret;
sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
return 0;
}
EXPORT_SYMBOL(dma_common_get_sgtable);
/*
* Create userspace mapping for the DMA-coherent memory.
*/
int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
int ret = -ENXIO;
#ifdef CONFIG_MMU
unsigned long user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
unsigned long off = vma->vm_pgoff;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
return ret;
if (off < count && user_count <= (count - off)) {
ret = remap_pfn_range(vma, vma->vm_start,
pfn + off,
user_count << PAGE_SHIFT,
vma->vm_page_prot);
}
#endif /* CONFIG_MMU */
return ret;
}
EXPORT_SYMBOL(dma_common_mmap);
#ifdef CONFIG_MMU
/*
* remaps an array of PAGE_SIZE pages into another vm_area
* Cannot be used in non-sleeping contexts
*/
void *dma_common_pages_remap(struct page **pages, size_t size,
unsigned long vm_flags, pgprot_t prot,
const void *caller)
{
struct vm_struct *area;
area = get_vm_area_caller(size, vm_flags, caller);
if (!area)
return NULL;
area->pages = pages;
if (map_vm_area(area, prot, pages)) {
vunmap(area->addr);
return NULL;
}
return area->addr;
}
/*
* remaps an allocated contiguous region into another vm_area.
* Cannot be used in non-sleeping contexts
*/
void *dma_common_contiguous_remap(struct page *page, size_t size,
unsigned long vm_flags,
pgprot_t prot, const void *caller)
{
int i;
struct page **pages;
void *ptr;
unsigned long pfn;
pages = kmalloc(sizeof(struct page *) << get_order(size), GFP_KERNEL);
if (!pages)
return NULL;
for (i = 0, pfn = page_to_pfn(page); i < (size >> PAGE_SHIFT); i++)
pages[i] = pfn_to_page(pfn + i);
ptr = dma_common_pages_remap(pages, size, vm_flags, prot, caller);
kfree(pages);
return ptr;
}
/*
* unmaps a range previously mapped by dma_common_*_remap
*/
void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags)
{
struct vm_struct *area = find_vm_area(cpu_addr);
if (!area || (area->flags & vm_flags) != vm_flags) {
WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr);
return;
}
unmap_kernel_range((unsigned long)cpu_addr, size);
vunmap(cpu_addr);
}
#endif

223
drivers/base/driver.c Normal file
View file

@ -0,0 +1,223 @@
/*
* driver.c - centralized device driver management
*
* Copyright (c) 2002-3 Patrick Mochel
* Copyright (c) 2002-3 Open Source Development Labs
* Copyright (c) 2007 Greg Kroah-Hartman <gregkh@suse.de>
* Copyright (c) 2007 Novell Inc.
*
* This file is released under the GPLv2
*
*/
#include <linux/device.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/sysfs.h>
#include "base.h"
static struct device *next_device(struct klist_iter *i)
{
struct klist_node *n = klist_next(i);
struct device *dev = NULL;
struct device_private *dev_prv;
if (n) {
dev_prv = to_device_private_driver(n);
dev = dev_prv->device;
}
return dev;
}
/**
* driver_for_each_device - Iterator for devices bound to a driver.
* @drv: Driver we're iterating.
* @start: Device to begin with
* @data: Data to pass to the callback.
* @fn: Function to call for each device.
*
* Iterate over the @drv's list of devices calling @fn for each one.
*/
int driver_for_each_device(struct device_driver *drv, struct device *start,
void *data, int (*fn)(struct device *, void *))
{
struct klist_iter i;
struct device *dev;
int error = 0;
if (!drv)
return -EINVAL;
klist_iter_init_node(&drv->p->klist_devices, &i,
start ? &start->p->knode_driver : NULL);
while ((dev = next_device(&i)) && !error)
error = fn(dev, data);
klist_iter_exit(&i);
return error;
}
EXPORT_SYMBOL_GPL(driver_for_each_device);
/**
* driver_find_device - device iterator for locating a particular device.
* @drv: The device's driver
* @start: Device to begin with
* @data: Data to pass to match function
* @match: Callback function to check device
*
* This is similar to the driver_for_each_device() function above, but
* it returns a reference to a device that is 'found' for later use, as
* determined by the @match callback.
*
* The callback should return 0 if the device doesn't match and non-zero
* if it does. If the callback returns non-zero, this function will
* return to the caller and not iterate over any more devices.
*/
struct device *driver_find_device(struct device_driver *drv,
struct device *start, void *data,
int (*match)(struct device *dev, void *data))
{
struct klist_iter i;
struct device *dev;
if (!drv || !drv->p)
return NULL;
klist_iter_init_node(&drv->p->klist_devices, &i,
(start ? &start->p->knode_driver : NULL));
while ((dev = next_device(&i)))
if (match(dev, data) && get_device(dev))
break;
klist_iter_exit(&i);
return dev;
}
EXPORT_SYMBOL_GPL(driver_find_device);
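/*
 * Usage sketch (illustrative; match_by_name and foo_is_bound are
 * assumptions): locating a bound device by name. The iterator takes a
 * reference on the matched device, which the caller must drop.
 */
static int match_by_name(struct device *dev, void *data)
{
	return sysfs_streq(dev_name(dev), data);
}

static bool foo_is_bound(struct device_driver *drv, const char *name)
{
	struct device *dev;

	dev = driver_find_device(drv, NULL, (void *)name, match_by_name);
	if (!dev)
		return false;
	put_device(dev);	/* drop the reference taken for us */
	return true;
}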
/**
* driver_create_file - create sysfs file for driver.
* @drv: driver.
* @attr: driver attribute descriptor.
*/
int driver_create_file(struct device_driver *drv,
const struct driver_attribute *attr)
{
int error;
if (drv)
error = sysfs_create_file(&drv->p->kobj, &attr->attr);
else
error = -EINVAL;
return error;
}
EXPORT_SYMBOL_GPL(driver_create_file);
/**
* driver_remove_file - remove sysfs file for driver.
* @drv: driver.
* @attr: driver attribute descriptor.
*/
void driver_remove_file(struct device_driver *drv,
const struct driver_attribute *attr)
{
if (drv)
sysfs_remove_file(&drv->p->kobj, &attr->attr);
}
EXPORT_SYMBOL_GPL(driver_remove_file);
int driver_add_groups(struct device_driver *drv,
const struct attribute_group **groups)
{
return sysfs_create_groups(&drv->p->kobj, groups);
}
void driver_remove_groups(struct device_driver *drv,
const struct attribute_group **groups)
{
sysfs_remove_groups(&drv->p->kobj, groups);
}
/**
* driver_register - register driver with bus
* @drv: driver to register
*
* We pass off most of the work to the bus_add_driver() call,
* since most of the things we have to do deal with the bus
* structures.
*/
int driver_register(struct device_driver *drv)
{
int ret;
struct device_driver *other;
BUG_ON(!drv->bus->p);
if ((drv->bus->probe && drv->probe) ||
(drv->bus->remove && drv->remove) ||
(drv->bus->shutdown && drv->shutdown))
printk(KERN_WARNING "Driver '%s' needs updating - please use "
"bus_type methods\n", drv->name);
other = driver_find(drv->name, drv->bus);
if (other) {
printk(KERN_ERR "Error: Driver '%s' is already registered, "
"aborting...\n", drv->name);
return -EBUSY;
}
ret = bus_add_driver(drv);
if (ret)
return ret;
ret = driver_add_groups(drv, drv->groups);
if (ret) {
bus_remove_driver(drv);
return ret;
}
kobject_uevent(&drv->p->kobj, KOBJ_ADD);
return ret;
}
EXPORT_SYMBOL_GPL(driver_register);
/**
* driver_unregister - remove driver from system.
* @drv: driver.
*
* Again, we pass off most of the work to the bus-level call.
*/
void driver_unregister(struct device_driver *drv)
{
if (!drv || !drv->p) {
WARN(1, "Unexpected driver unregister!\n");
return;
}
driver_remove_groups(drv, drv->groups);
bus_remove_driver(drv);
}
EXPORT_SYMBOL_GPL(driver_unregister);
/**
* driver_find - locate driver on a bus by its name.
* @name: name of the driver.
* @bus: bus to scan for the driver.
*
* Call kset_find_obj() to iterate over list of drivers on
* a bus to find driver by name. Return driver if found.
*
* This routine provides no locking to prevent the driver it returns
* from being unregistered or unloaded while the caller is using it.
* The caller is responsible for preventing this.
*/
struct device_driver *driver_find(const char *name, struct bus_type *bus)
{
struct kobject *k = kset_find_obj(bus->p->drivers_kset, name);
struct driver_private *priv;
if (k) {
/* Drop reference added by kset_find_obj() */
kobject_put(k);
priv = to_driver(k);
return priv->driver;
}
return NULL;
}
EXPORT_SYMBOL_GPL(driver_find);

27
drivers/base/firmware.c Normal file
View file

@ -0,0 +1,27 @@
/*
* firmware.c - firmware subsystem hoohaw.
*
* Copyright (c) 2002-3 Patrick Mochel
* Copyright (c) 2002-3 Open Source Development Labs
* Copyright (c) 2007 Greg Kroah-Hartman <gregkh@suse.de>
* Copyright (c) 2007 Novell Inc.
*
* This file is released under the GPLv2
*/
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/device.h>
#include "base.h"
struct kobject *firmware_kobj;
EXPORT_SYMBOL_GPL(firmware_kobj);
int __init firmware_init(void)
{
firmware_kobj = kobject_create_and_add("firmware", NULL);
if (!firmware_kobj)
return -ENOMEM;
return 0;
}

File diff suppressed because it is too large

25
drivers/base/hypervisor.c Normal file
View file

@ -0,0 +1,25 @@
/*
* hypervisor.c - /sys/hypervisor subsystem.
*
* Copyright (C) IBM Corp. 2006
* Copyright (C) 2007 Greg Kroah-Hartman <gregkh@suse.de>
* Copyright (C) 2007 Novell Inc.
*
* This file is released under the GPLv2
*/
#include <linux/kobject.h>
#include <linux/device.h>
#include <linux/export.h>
#include "base.h"
struct kobject *hypervisor_kobj;
EXPORT_SYMBOL_GPL(hypervisor_kobj);
int __init hypervisor_init(void)
{
hypervisor_kobj = kobject_create_and_add("hypervisor", NULL);
if (!hypervisor_kobj)
return -ENOMEM;
return 0;
}

37
drivers/base/init.c Normal file
View file

@ -0,0 +1,37 @@
/*
* Copyright (c) 2002-3 Patrick Mochel
* Copyright (c) 2002-3 Open Source Development Labs
*
* This file is released under the GPLv2
*/
#include <linux/device.h>
#include <linux/init.h>
#include <linux/memory.h>
#include "base.h"
/**
* driver_init - initialize driver model.
*
* Call the driver model init functions to initialize their
* subsystems. Called early from init/main.c.
*/
void __init driver_init(void)
{
/* These are the core pieces */
devtmpfs_init();
devices_init();
buses_init();
classes_init();
firmware_init();
hypervisor_init();
/* These are also core pieces, but must come after the
* core core pieces.
*/
platform_bus_init();
cpu_dev_init();
memory_dev_init();
container_dev_init();
}

183
drivers/base/isa.c Normal file
View file

@ -0,0 +1,183 @@
/*
* ISA bus.
*/
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/dma-mapping.h>
#include <linux/isa.h>
static struct device isa_bus = {
.init_name = "isa"
};
struct isa_dev {
struct device dev;
struct device *next;
unsigned int id;
};
#define to_isa_dev(x) container_of((x), struct isa_dev, dev)
static int isa_bus_match(struct device *dev, struct device_driver *driver)
{
struct isa_driver *isa_driver = to_isa_driver(driver);
if (dev->platform_data == isa_driver) {
if (!isa_driver->match ||
isa_driver->match(dev, to_isa_dev(dev)->id))
return 1;
dev->platform_data = NULL;
}
return 0;
}
static int isa_bus_probe(struct device *dev)
{
struct isa_driver *isa_driver = dev->platform_data;
if (isa_driver->probe)
return isa_driver->probe(dev, to_isa_dev(dev)->id);
return 0;
}
static int isa_bus_remove(struct device *dev)
{
struct isa_driver *isa_driver = dev->platform_data;
if (isa_driver->remove)
return isa_driver->remove(dev, to_isa_dev(dev)->id);
return 0;
}
static void isa_bus_shutdown(struct device *dev)
{
struct isa_driver *isa_driver = dev->platform_data;
if (isa_driver->shutdown)
isa_driver->shutdown(dev, to_isa_dev(dev)->id);
}
static int isa_bus_suspend(struct device *dev, pm_message_t state)
{
struct isa_driver *isa_driver = dev->platform_data;
if (isa_driver->suspend)
return isa_driver->suspend(dev, to_isa_dev(dev)->id, state);
return 0;
}
static int isa_bus_resume(struct device *dev)
{
struct isa_driver *isa_driver = dev->platform_data;
if (isa_driver->resume)
return isa_driver->resume(dev, to_isa_dev(dev)->id);
return 0;
}
static struct bus_type isa_bus_type = {
.name = "isa",
.match = isa_bus_match,
.probe = isa_bus_probe,
.remove = isa_bus_remove,
.shutdown = isa_bus_shutdown,
.suspend = isa_bus_suspend,
.resume = isa_bus_resume
};
static void isa_dev_release(struct device *dev)
{
kfree(to_isa_dev(dev));
}
void isa_unregister_driver(struct isa_driver *isa_driver)
{
struct device *dev = isa_driver->devices;
while (dev) {
struct device *tmp = to_isa_dev(dev)->next;
device_unregister(dev);
dev = tmp;
}
driver_unregister(&isa_driver->driver);
}
EXPORT_SYMBOL_GPL(isa_unregister_driver);
int isa_register_driver(struct isa_driver *isa_driver, unsigned int ndev)
{
int error;
unsigned int id;
isa_driver->driver.bus = &isa_bus_type;
isa_driver->devices = NULL;
error = driver_register(&isa_driver->driver);
if (error)
return error;
for (id = 0; id < ndev; id++) {
struct isa_dev *isa_dev;
isa_dev = kzalloc(sizeof *isa_dev, GFP_KERNEL);
if (!isa_dev) {
error = -ENOMEM;
break;
}
isa_dev->dev.parent = &isa_bus;
isa_dev->dev.bus = &isa_bus_type;
dev_set_name(&isa_dev->dev, "%s.%u",
isa_driver->driver.name, id);
isa_dev->dev.platform_data = isa_driver;
isa_dev->dev.release = isa_dev_release;
isa_dev->id = id;
isa_dev->dev.coherent_dma_mask = DMA_BIT_MASK(24);
isa_dev->dev.dma_mask = &isa_dev->dev.coherent_dma_mask;
error = device_register(&isa_dev->dev);
if (error) {
put_device(&isa_dev->dev);
break;
}
if (isa_dev->dev.platform_data) {
isa_dev->next = isa_driver->devices;
isa_driver->devices = &isa_dev->dev;
} else
device_unregister(&isa_dev->dev);
}
if (!error && !isa_driver->devices)
error = -ENODEV;
if (error)
isa_unregister_driver(isa_driver);
return error;
}
EXPORT_SYMBOL_GPL(isa_register_driver);
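/*
 * Usage sketch (illustrative; every name here is an assumption, and
 * <linux/io.h> is assumed for inb()): a minimal ISA driver probing up
 * to three slots. isa_register_driver() creates one device per index
 * and keeps only those the match hook accepts.
 */
static int foo_isa_match(struct device *dev, unsigned int id)
{
	return inb(0x300 + id * 0x10) != 0xff;	/* poke a legacy port */
}

static int foo_isa_probe(struct device *dev, unsigned int id)
{
	dev_info(dev, "found foo card %u\n", id);
	return 0;
}

static struct isa_driver foo_isa_driver = {
	.match	= foo_isa_match,
	.probe	= foo_isa_probe,
	.driver	= {
		.name	= "foo_isa",
	},
};

/* in module init: isa_register_driver(&foo_isa_driver, 3); */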
static int __init isa_bus_init(void)
{
int error;
error = bus_register(&isa_bus_type);
if (!error) {
error = device_register(&isa_bus);
if (error)
bus_unregister(&isa_bus_type);
}
return error;
}
device_initcall(isa_bus_init);

155
drivers/base/map.c Normal file
View file

@ -0,0 +1,155 @@
/*
* linux/drivers/base/map.c
*
* (C) Copyright Al Viro 2002,2003
* Released under GPL v2.
*
* NOTE: data structure needs to be changed. It works, but for large dev_t
* it will be too slow. It is isolated, though, so these changes will be
* local to that file.
*/
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/kdev_t.h>
#include <linux/kobject.h>
#include <linux/kobj_map.h>
struct kobj_map {
struct probe {
struct probe *next;
dev_t dev;
unsigned long range;
struct module *owner;
kobj_probe_t *get;
int (*lock)(dev_t, void *);
void *data;
} *probes[255];
struct mutex *lock;
};
int kobj_map(struct kobj_map *domain, dev_t dev, unsigned long range,
struct module *module, kobj_probe_t *probe,
int (*lock)(dev_t, void *), void *data)
{
unsigned n = MAJOR(dev + range - 1) - MAJOR(dev) + 1;
unsigned index = MAJOR(dev);
unsigned i;
struct probe *p;
if (n > 255)
n = 255;
p = kmalloc(sizeof(struct probe) * n, GFP_KERNEL);
if (p == NULL)
return -ENOMEM;
for (i = 0; i < n; i++, p++) {
p->owner = module;
p->get = probe;
p->lock = lock;
p->dev = dev;
p->range = range;
p->data = data;
}
mutex_lock(domain->lock);
for (i = 0, p -= n; i < n; i++, p++, index++) {
struct probe **s = &domain->probes[index % 255];
while (*s && (*s)->range < range)
s = &(*s)->next;
p->next = *s;
*s = p;
}
mutex_unlock(domain->lock);
return 0;
}
void kobj_unmap(struct kobj_map *domain, dev_t dev, unsigned long range)
{
unsigned n = MAJOR(dev + range - 1) - MAJOR(dev) + 1;
unsigned index = MAJOR(dev);
unsigned i;
struct probe *found = NULL;
if (n > 255)
n = 255;
mutex_lock(domain->lock);
for (i = 0; i < n; i++, index++) {
struct probe **s;
for (s = &domain->probes[index % 255]; *s; s = &(*s)->next) {
struct probe *p = *s;
if (p->dev == dev && p->range == range) {
*s = p->next;
if (!found)
found = p;
break;
}
}
}
mutex_unlock(domain->lock);
kfree(found);
}
struct kobject *kobj_lookup(struct kobj_map *domain, dev_t dev, int *index)
{
struct kobject *kobj;
struct probe *p;
unsigned long best = ~0UL;
retry:
mutex_lock(domain->lock);
for (p = domain->probes[MAJOR(dev) % 255]; p; p = p->next) {
struct kobject *(*probe)(dev_t, int *, void *);
struct module *owner;
void *data;
if (p->dev > dev || p->dev + p->range - 1 < dev)
continue;
if (p->range - 1 >= best)
break;
if (!try_module_get(p->owner))
continue;
owner = p->owner;
data = p->data;
probe = p->get;
best = p->range - 1;
*index = dev - p->dev;
if (p->lock && p->lock(dev, data) < 0) {
module_put(owner);
continue;
}
mutex_unlock(domain->lock);
kobj = probe(dev, index, data);
/* Currently ->owner protects _only_ ->probe() itself. */
module_put(owner);
if (kobj)
return kobj;
goto retry;
}
mutex_unlock(domain->lock);
return NULL;
}
struct kobj_map *kobj_map_init(kobj_probe_t *base_probe, struct mutex *lock)
{
struct kobj_map *p = kmalloc(sizeof(struct kobj_map), GFP_KERNEL);
struct probe *base = kzalloc(sizeof(*base), GFP_KERNEL);
int i;
if ((p == NULL) || (base == NULL)) {
kfree(p);
kfree(base);
return NULL;
}
base->dev = 1;
base->range = ~0;
base->get = base_probe;
for (i = 0; i < 255; i++)
p->probes[i] = base;
p->lock = lock;
return p;
}

776
drivers/base/memory.c Normal file
View file

@ -0,0 +1,776 @@
/*
* Memory subsystem support
*
* Written by Matt Tolentino <matthew.e.tolentino@intel.com>
* Dave Hansen <haveblue@us.ibm.com>
*
* This file provides the necessary infrastructure to represent
* a SPARSEMEM-memory-model system's physical memory in /sysfs.
* All arch-independent code that assumes MEMORY_HOTPLUG requires
* SPARSEMEM should be contained here, or in mm/memory_hotplug.c.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/topology.h>
#include <linux/capability.h>
#include <linux/device.h>
#include <linux/memory.h>
#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/stat.h>
#include <linux/slab.h>
#include <linux/atomic.h>
#include <asm/uaccess.h>
static DEFINE_MUTEX(mem_sysfs_mutex);
#define MEMORY_CLASS_NAME "memory"
#define to_memory_block(dev) container_of(dev, struct memory_block, dev)
static int sections_per_block;
static inline int base_memory_block_id(int section_nr)
{
return section_nr / sections_per_block;
}
static int memory_subsys_online(struct device *dev);
static int memory_subsys_offline(struct device *dev);
static struct bus_type memory_subsys = {
.name = MEMORY_CLASS_NAME,
.dev_name = MEMORY_CLASS_NAME,
.online = memory_subsys_online,
.offline = memory_subsys_offline,
};
static BLOCKING_NOTIFIER_HEAD(memory_chain);
int register_memory_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&memory_chain, nb);
}
EXPORT_SYMBOL(register_memory_notifier);
void unregister_memory_notifier(struct notifier_block *nb)
{
blocking_notifier_chain_unregister(&memory_chain, nb);
}
EXPORT_SYMBOL(unregister_memory_notifier);
static ATOMIC_NOTIFIER_HEAD(memory_isolate_chain);
int register_memory_isolate_notifier(struct notifier_block *nb)
{
return atomic_notifier_chain_register(&memory_isolate_chain, nb);
}
EXPORT_SYMBOL(register_memory_isolate_notifier);
void unregister_memory_isolate_notifier(struct notifier_block *nb)
{
atomic_notifier_chain_unregister(&memory_isolate_chain, nb);
}
EXPORT_SYMBOL(unregister_memory_isolate_notifier);
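/*
 * Usage sketch (illustrative; foo_mem_notify is an assumption): a
 * subsystem watching hotplug transitions. The callback runs on the
 * blocking chain, so it is allowed to sleep.
 */
static int foo_mem_notify(struct notifier_block *nb,
			  unsigned long action, void *arg)
{
	struct memory_notify *mn = arg;

	if (action == MEM_ONLINE)
		pr_info("foo: %lu pages coming online at pfn %lu\n",
			mn->nr_pages, mn->start_pfn);
	return NOTIFY_OK;
}

static struct notifier_block foo_mem_nb = {
	.notifier_call = foo_mem_notify,
};

/* at init time: register_memory_notifier(&foo_mem_nb); */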
static void memory_block_release(struct device *dev)
{
struct memory_block *mem = to_memory_block(dev);
kfree(mem);
}
unsigned long __weak memory_block_size_bytes(void)
{
return MIN_MEMORY_BLOCK_SIZE;
}
static unsigned long get_memory_block_size(void)
{
unsigned long block_sz;
block_sz = memory_block_size_bytes();
	/* Validate block_sz is a power of 2 and not less than the section size */
if ((block_sz & (block_sz - 1)) || (block_sz < MIN_MEMORY_BLOCK_SIZE)) {
WARN_ON(1);
block_sz = MIN_MEMORY_BLOCK_SIZE;
}
return block_sz;
}
/*
 * Show the first physical section index of this memory block.
*/
static ssize_t show_mem_start_phys_index(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct memory_block *mem = to_memory_block(dev);
unsigned long phys_index;
phys_index = mem->start_section_nr / sections_per_block;
return sprintf(buf, "%08lx\n", phys_index);
}
/*
* Show whether the section of memory is likely to be hot-removable
*/
static ssize_t show_mem_removable(struct device *dev,
struct device_attribute *attr, char *buf)
{
unsigned long i, pfn;
int ret = 1;
struct memory_block *mem = to_memory_block(dev);
for (i = 0; i < sections_per_block; i++) {
if (!present_section_nr(mem->start_section_nr + i))
continue;
pfn = section_nr_to_pfn(mem->start_section_nr + i);
ret &= is_mem_section_removable(pfn, PAGES_PER_SECTION);
}
return sprintf(buf, "%d\n", ret);
}
/*
* online, offline, going offline, etc.
*/
static ssize_t show_mem_state(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct memory_block *mem = to_memory_block(dev);
ssize_t len = 0;
/*
* We can probably put these states in a nice little array
* so that they're not open-coded
*/
switch (mem->state) {
case MEM_ONLINE:
len = sprintf(buf, "online\n");
break;
case MEM_OFFLINE:
len = sprintf(buf, "offline\n");
break;
case MEM_GOING_OFFLINE:
len = sprintf(buf, "going-offline\n");
break;
default:
len = sprintf(buf, "ERROR-UNKNOWN-%ld\n",
mem->state);
WARN_ON(1);
break;
}
return len;
}
int memory_notify(unsigned long val, void *v)
{
return blocking_notifier_call_chain(&memory_chain, val, v);
}
int memory_isolate_notify(unsigned long val, void *v)
{
return atomic_notifier_call_chain(&memory_isolate_chain, val, v);
}
/*
* The probe routines leave the pages reserved, just as the bootmem code does.
* Make sure they're still that way.
*/
static bool pages_correctly_reserved(unsigned long start_pfn)
{
int i, j;
struct page *page;
unsigned long pfn = start_pfn;
/*
* memmap between sections is not contiguous except with
* SPARSEMEM_VMEMMAP. We lookup the page once per section
* and assume memmap is contiguous within each section
*/
for (i = 0; i < sections_per_block; i++, pfn += PAGES_PER_SECTION) {
if (WARN_ON_ONCE(!pfn_valid(pfn)))
return false;
page = pfn_to_page(pfn);
for (j = 0; j < PAGES_PER_SECTION; j++) {
if (PageReserved(page + j))
continue;
printk(KERN_WARNING "section number %ld page number %d "
"not reserved, was it already online?\n",
pfn_to_section_nr(pfn), j);
return false;
}
}
return true;
}
/*
* MEMORY_HOTPLUG depends on SPARSEMEM in mm/Kconfig, so it is
* OK to have direct references to sparsemem variables in here.
*/
static int
memory_block_action(unsigned long phys_index, unsigned long action, int online_type)
{
unsigned long start_pfn;
unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
struct page *first_page;
int ret;
first_page = pfn_to_page(phys_index << PFN_SECTION_SHIFT);
start_pfn = page_to_pfn(first_page);
switch (action) {
case MEM_ONLINE:
if (!pages_correctly_reserved(start_pfn))
return -EBUSY;
ret = online_pages(start_pfn, nr_pages, online_type);
break;
case MEM_OFFLINE:
ret = offline_pages(start_pfn, nr_pages);
break;
default:
WARN(1, KERN_WARNING "%s(%ld, %ld) unknown action: "
"%ld\n", __func__, phys_index, action, action);
ret = -EINVAL;
}
return ret;
}
static int memory_block_change_state(struct memory_block *mem,
unsigned long to_state, unsigned long from_state_req)
{
int ret = 0;
if (mem->state != from_state_req)
return -EINVAL;
if (to_state == MEM_OFFLINE)
mem->state = MEM_GOING_OFFLINE;
ret = memory_block_action(mem->start_section_nr, to_state,
mem->online_type);
mem->state = ret ? from_state_req : to_state;
return ret;
}
/* The device lock serializes operations on memory_subsys_[online|offline] */
static int memory_subsys_online(struct device *dev)
{
struct memory_block *mem = to_memory_block(dev);
int ret;
if (mem->state == MEM_ONLINE)
return 0;
/*
* If we are called from store_mem_state(), online_type will be
* set >= 0. Otherwise we were called from the device online
* attribute and need to set the online_type.
*/
if (mem->online_type < 0)
mem->online_type = MMOP_ONLINE_KEEP;
ret = memory_block_change_state(mem, MEM_ONLINE, MEM_OFFLINE);
/* clear online_type */
mem->online_type = -1;
return ret;
}
static int memory_subsys_offline(struct device *dev)
{
struct memory_block *mem = to_memory_block(dev);
if (mem->state == MEM_OFFLINE)
return 0;
return memory_block_change_state(mem, MEM_OFFLINE, MEM_ONLINE);
}
static ssize_t
store_mem_state(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct memory_block *mem = to_memory_block(dev);
int ret, online_type;
ret = lock_device_hotplug_sysfs();
if (ret)
return ret;
if (sysfs_streq(buf, "online_kernel"))
online_type = MMOP_ONLINE_KERNEL;
else if (sysfs_streq(buf, "online_movable"))
online_type = MMOP_ONLINE_MOVABLE;
else if (sysfs_streq(buf, "online"))
online_type = MMOP_ONLINE_KEEP;
else if (sysfs_streq(buf, "offline"))
online_type = MMOP_OFFLINE;
else {
ret = -EINVAL;
goto err;
}
switch (online_type) {
case MMOP_ONLINE_KERNEL:
case MMOP_ONLINE_MOVABLE:
case MMOP_ONLINE_KEEP:
/*
* mem->online_type is not protected so there can be a
* race here. However, when racing online, the first
* will succeed and the second will just return as the
* block will already be online. The online type
* could be either one, but that is expected.
*/
mem->online_type = online_type;
ret = device_online(&mem->dev);
break;
case MMOP_OFFLINE:
ret = device_offline(&mem->dev);
break;
default:
ret = -EINVAL; /* should never happen */
}
err:
unlock_device_hotplug();
if (ret)
return ret;
return count;
}
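Aside (not part of this file): the strings parsed above are what userspace writes to each block's state attribute. A hedged sketch of offlining a block; the block number is an assumption, real programs enumerate /sys/devices/system/memory:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* memory32 is illustrative; pick a real block on the running system */
	int fd = open("/sys/devices/system/memory/memory32/state", O_WRONLY);

	if (fd < 0)
		return 1;
	/* accepted: "online", "online_kernel", "online_movable", "offline" */
	if (write(fd, "offline", 7) != 7)
		perror("write state");
	close(fd);
	return 0;
}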
/*
 * phys_device is a bad name for this. What is really wanted
 * is a way to differentiate between memory ranges that are
 * part of physical devices constituting a complete removable
 * unit or FRU (field-replaceable unit): i.e. do these ranges
 * belong to the same physical device, such that offlining all
 * of these sections allows the physical device to be removed?
 */
static ssize_t show_phys_device(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct memory_block *mem = to_memory_block(dev);
return sprintf(buf, "%d\n", mem->phys_device);
}
#ifdef CONFIG_MEMORY_HOTREMOVE
static ssize_t show_valid_zones(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct memory_block *mem = to_memory_block(dev);
unsigned long start_pfn, end_pfn;
unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
struct page *first_page;
struct zone *zone;
start_pfn = section_nr_to_pfn(mem->start_section_nr);
end_pfn = start_pfn + nr_pages;
first_page = pfn_to_page(start_pfn);
/* A block that contains more than one zone cannot be offlined. */
if (!test_pages_in_a_zone(start_pfn, end_pfn))
return sprintf(buf, "none\n");
zone = page_zone(first_page);
if (zone_idx(zone) == ZONE_MOVABLE - 1) {
/* The mem block is the last memory block of this zone. */
if (end_pfn == zone_end_pfn(zone))
return sprintf(buf, "%s %s\n",
zone->name, (zone + 1)->name);
}
if (zone_idx(zone) == ZONE_MOVABLE) {
/* The mem block is the first memory block of ZONE_MOVABLE. */
if (start_pfn == zone->zone_start_pfn)
return sprintf(buf, "%s %s\n",
zone->name, (zone - 1)->name);
}
return sprintf(buf, "%s\n", zone->name);
}
static DEVICE_ATTR(valid_zones, 0444, show_valid_zones, NULL);
#endif
static DEVICE_ATTR(phys_index, 0444, show_mem_start_phys_index, NULL);
static DEVICE_ATTR(state, 0644, show_mem_state, store_mem_state);
static DEVICE_ATTR(phys_device, 0444, show_phys_device, NULL);
static DEVICE_ATTR(removable, 0444, show_mem_removable, NULL);
/*
* Block size attribute stuff
*/
static ssize_t
print_block_size(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%lx\n", get_memory_block_size());
}
static DEVICE_ATTR(block_size_bytes, 0444, print_block_size, NULL);
/*
* Some architectures will have custom drivers to do this, and
* will not need to do it from userspace. The fake hot-add code
* as well as ppc64 will do all of their discovery in userspace
* and will require this interface.
*/
#ifdef CONFIG_ARCH_MEMORY_PROBE
static ssize_t
memory_probe_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
u64 phys_addr;
int nid;
int i, ret;
unsigned long pages_per_block = PAGES_PER_SECTION * sections_per_block;
ret = kstrtoull(buf, 0, &phys_addr);
if (ret)
return ret;
if (phys_addr & ((pages_per_block << PAGE_SHIFT) - 1))
return -EINVAL;
for (i = 0; i < sections_per_block; i++) {
nid = memory_add_physaddr_to_nid(phys_addr);
ret = add_memory(nid, phys_addr,
PAGES_PER_SECTION << PAGE_SHIFT);
if (ret)
goto out;
phys_addr += MIN_MEMORY_BLOCK_SIZE;
}
ret = count;
out:
return ret;
}
static DEVICE_ATTR(probe, S_IWUSR, NULL, memory_probe_store);
#endif
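Aside (not part of this file): on architectures with CONFIG_ARCH_MEMORY_PROBE, userspace announces hot-added memory by writing its physical start address, aligned to the block size as checked above, to the probe attribute. A hedged sketch with a made-up address:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* 0x40000000 is an illustrative, block-aligned physical address */
	static const char addr[] = "0x40000000";
	int fd = open("/sys/devices/system/memory/probe", O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, addr, sizeof(addr) - 1) < 0)
		perror("probe");
	close(fd);
	return 0;
}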
#ifdef CONFIG_MEMORY_FAILURE
/*
* Support for offlining pages of memory
*/
/* Soft offline a page */
static ssize_t
store_soft_offline_page(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int ret;
u64 pfn;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (kstrtoull(buf, 0, &pfn) < 0)
return -EINVAL;
pfn >>= PAGE_SHIFT;
if (!pfn_valid(pfn))
return -ENXIO;
ret = soft_offline_page(pfn_to_page(pfn), 0);
return ret == 0 ? count : ret;
}
/* Forcibly offline a page, including killing processes. */
static ssize_t
store_hard_offline_page(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int ret;
u64 pfn;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (kstrtoull(buf, 0, &pfn) < 0)
return -EINVAL;
pfn >>= PAGE_SHIFT;
ret = memory_failure(pfn, 0, 0);
return ret ? ret : count;
}
static DEVICE_ATTR(soft_offline_page, S_IWUSR, NULL, store_soft_offline_page);
static DEVICE_ATTR(hard_offline_page, S_IWUSR, NULL, store_hard_offline_page);
#endif
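Aside (not part of this file): both attributes take a physical address, which the handlers shift right by PAGE_SHIFT to obtain a pfn; hard_offline_page then feeds it to memory_failure() as if a real uncorrected error had been reported. A hedged sketch for the soft variant, with an illustrative address:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* a physical address (not a pfn); must refer to a valid page */
	static const char addr[] = "0x200000";
	int fd = open("/sys/devices/system/memory/soft_offline_page",
		      O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, addr, sizeof(addr) - 1) < 0)
		perror("soft_offline_page");
	close(fd);
	return 0;
}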
/*
* Note that phys_device is optional. It is here to allow for
* differentiation between which *physical* devices each
* section belongs to...
*/
int __weak arch_get_memory_phys_device(unsigned long start_pfn)
{
return 0;
}
/*
* A reference for the returned object is held and the reference for the
* hinted object is released.
*/
struct memory_block *find_memory_block_hinted(struct mem_section *section,
struct memory_block *hint)
{
int block_id = base_memory_block_id(__section_nr(section));
struct device *hintdev = hint ? &hint->dev : NULL;
struct device *dev;
dev = subsys_find_device_by_id(&memory_subsys, block_id, hintdev);
if (hint)
put_device(&hint->dev);
if (!dev)
return NULL;
return to_memory_block(dev);
}
/*
* For now, we have a linear search to go find the appropriate
* memory_block corresponding to a particular phys_index. If
* this gets to be a real problem, we can always use a radix
* tree or something here.
*
* This could be made generic for all device subsystems.
*/
struct memory_block *find_memory_block(struct mem_section *section)
{
return find_memory_block_hinted(section, NULL);
}
static struct attribute *memory_memblk_attrs[] = {
&dev_attr_phys_index.attr,
&dev_attr_state.attr,
&dev_attr_phys_device.attr,
&dev_attr_removable.attr,
#ifdef CONFIG_MEMORY_HOTREMOVE
&dev_attr_valid_zones.attr,
#endif
NULL
};
static struct attribute_group memory_memblk_attr_group = {
.attrs = memory_memblk_attrs,
};
static const struct attribute_group *memory_memblk_attr_groups[] = {
&memory_memblk_attr_group,
NULL,
};
/*
* register_memory - Setup a sysfs device for a memory block
*/
static
int register_memory(struct memory_block *memory)
{
memory->dev.bus = &memory_subsys;
memory->dev.id = memory->start_section_nr / sections_per_block;
memory->dev.release = memory_block_release;
memory->dev.groups = memory_memblk_attr_groups;
memory->dev.offline = memory->state == MEM_OFFLINE;
return device_register(&memory->dev);
}
static int init_memory_block(struct memory_block **memory,
struct mem_section *section, unsigned long state)
{
struct memory_block *mem;
unsigned long start_pfn;
int scn_nr;
int ret = 0;
mem = kzalloc(sizeof(*mem), GFP_KERNEL);
if (!mem)
return -ENOMEM;
scn_nr = __section_nr(section);
mem->start_section_nr =
base_memory_block_id(scn_nr) * sections_per_block;
mem->end_section_nr = mem->start_section_nr + sections_per_block - 1;
mem->state = state;
mem->section_count++;
start_pfn = section_nr_to_pfn(mem->start_section_nr);
mem->phys_device = arch_get_memory_phys_device(start_pfn);
ret = register_memory(mem);
*memory = mem;
return ret;
}
static int add_memory_block(int base_section_nr)
{
struct memory_block *mem;
int i, ret, section_count = 0, section_nr;
for (i = base_section_nr;
(i < base_section_nr + sections_per_block) && i < NR_MEM_SECTIONS;
i++) {
if (!present_section_nr(i))
continue;
if (section_count == 0)
section_nr = i;
section_count++;
}
if (section_count == 0)
return 0;
ret = init_memory_block(&mem, __nr_to_section(section_nr), MEM_ONLINE);
if (ret)
return ret;
mem->section_count = section_count;
return 0;
}
/*
* need an interface for the VM to add new memory regions,
* but without onlining them.
*/
int register_new_memory(int nid, struct mem_section *section)
{
int ret = 0;
struct memory_block *mem;
mutex_lock(&mem_sysfs_mutex);
mem = find_memory_block(section);
if (mem) {
mem->section_count++;
put_device(&mem->dev);
} else {
ret = init_memory_block(&mem, section, MEM_OFFLINE);
if (ret)
goto out;
}
if (mem->section_count == sections_per_block)
ret = register_mem_sect_under_node(mem, nid);
out:
mutex_unlock(&mem_sysfs_mutex);
return ret;
}
#ifdef CONFIG_MEMORY_HOTREMOVE
static void
unregister_memory(struct memory_block *memory)
{
BUG_ON(memory->dev.bus != &memory_subsys);
/* drop the ref. we got in remove_memory_block() */
put_device(&memory->dev);
device_unregister(&memory->dev);
}
static int remove_memory_block(unsigned long node_id,
struct mem_section *section, int phys_device)
{
struct memory_block *mem;
mutex_lock(&mem_sysfs_mutex);
mem = find_memory_block(section);
unregister_mem_sect_under_nodes(mem, __section_nr(section));
mem->section_count--;
if (mem->section_count == 0)
unregister_memory(mem);
else
put_device(&mem->dev);
mutex_unlock(&mem_sysfs_mutex);
return 0;
}
int unregister_memory_section(struct mem_section *section)
{
if (!present_section(section))
return -EINVAL;
return remove_memory_block(0, section, 0);
}
#endif /* CONFIG_MEMORY_HOTREMOVE */
/* return true if the memory block is offlined, otherwise, return false */
bool is_memblock_offlined(struct memory_block *mem)
{
return mem->state == MEM_OFFLINE;
}
static struct attribute *memory_root_attrs[] = {
#ifdef CONFIG_ARCH_MEMORY_PROBE
&dev_attr_probe.attr,
#endif
#ifdef CONFIG_MEMORY_FAILURE
&dev_attr_soft_offline_page.attr,
&dev_attr_hard_offline_page.attr,
#endif
&dev_attr_block_size_bytes.attr,
NULL
};
static struct attribute_group memory_root_attr_group = {
.attrs = memory_root_attrs,
};
static const struct attribute_group *memory_root_attr_groups[] = {
&memory_root_attr_group,
NULL,
};
/*
* Initialize the sysfs support for memory devices...
*/
int __init memory_dev_init(void)
{
unsigned int i;
int ret;
int err;
unsigned long block_sz;
ret = subsys_system_register(&memory_subsys, memory_root_attr_groups);
if (ret)
goto out;
block_sz = get_memory_block_size();
sections_per_block = block_sz / MIN_MEMORY_BLOCK_SIZE;
/*
* Create entries for memory sections that were found
* during boot and have been initialized
*/
mutex_lock(&mem_sysfs_mutex);
for (i = 0; i < NR_MEM_SECTIONS; i += sections_per_block) {
err = add_memory_block(i);
if (!ret)
ret = err;
}
mutex_unlock(&mem_sysfs_mutex);
out:
if (ret)
printk(KERN_ERR "%s() failed: %d\n", __func__, ret);
return ret;
}
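Aside: the block device id used throughout this file is simply the physical start address divided by the block size, and print_block_size() above exports that size in hex. A hedged userspace sketch mapping an illustrative address to its memoryN directory name:

#include <stdio.h>

int main(void)
{
	unsigned long block_sz;
	unsigned long long addr = 0x40000000ULL; /* illustrative address */
	FILE *f = fopen("/sys/devices/system/memory/block_size_bytes", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%lx", &block_sz) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);
	/* the device id is the block index: address / block size */
	printf("memory%llu\n", addr / block_sz);
	return 0;
}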

93
drivers/base/module.c Normal file
View file

@ -0,0 +1,93 @@
/*
* module.c - module sysfs fun for drivers
*
* This file is released under the GPLv2
*
*/
#include <linux/device.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
#include "base.h"
static char *make_driver_name(struct device_driver *drv)
{
char *driver_name;
driver_name = kasprintf(GFP_KERNEL, "%s:%s", drv->bus->name, drv->name);
if (!driver_name)
return NULL;
return driver_name;
}
static void module_create_drivers_dir(struct module_kobject *mk)
{
if (!mk || mk->drivers_dir)
return;
mk->drivers_dir = kobject_create_and_add("drivers", &mk->kobj);
}
void module_add_driver(struct module *mod, struct device_driver *drv)
{
char *driver_name;
int no_warn;
struct module_kobject *mk = NULL;
if (!drv)
return;
if (mod)
mk = &mod->mkobj;
else if (drv->mod_name) {
struct kobject *mkobj;
/* Look up built-in module entry in /sys/module */
mkobj = kset_find_obj(module_kset, drv->mod_name);
if (mkobj) {
mk = container_of(mkobj, struct module_kobject, kobj);
/* remember our module structure */
drv->p->mkobj = mk;
/* kset_find_obj took a reference */
kobject_put(mkobj);
}
}
if (!mk)
return;
/* Don't check return codes; these calls are idempotent */
no_warn = sysfs_create_link(&drv->p->kobj, &mk->kobj, "module");
driver_name = make_driver_name(drv);
if (driver_name) {
module_create_drivers_dir(mk);
no_warn = sysfs_create_link(mk->drivers_dir, &drv->p->kobj,
driver_name);
kfree(driver_name);
}
}
void module_remove_driver(struct device_driver *drv)
{
struct module_kobject *mk = NULL;
char *driver_name;
if (!drv)
return;
sysfs_remove_link(&drv->p->kobj, "module");
if (drv->owner)
mk = &drv->owner->mkobj;
else if (drv->p->mkobj)
mk = drv->p->mkobj;
if (mk && mk->drivers_dir) {
driver_name = make_driver_name(drv);
if (driver_name) {
sysfs_remove_link(mk->drivers_dir, driver_name);
kfree(driver_name);
}
}
}
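Aside: the net effect of module_add_driver() is a pair of symlinks, a "module" link in the driver's directory pointing at the owning module, and /sys/module/<mod>/drivers/<bus>:<driver> pointing back at the driver. A hedged sketch resolving one such link; the usbcore/usb:hub pair is illustrative:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char target[256];
	ssize_t n = readlink("/sys/module/usbcore/drivers/usb:hub",
			     target, sizeof(target) - 1);

	if (n < 0)
		return 1;
	target[n] = '\0';
	printf("%s\n", target); /* the driver's directory on its bus */
	return 0;
}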

694
drivers/base/node.c Normal file
View file

@ -0,0 +1,694 @@
/*
* Basic Node interface support
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/memory.h>
#include <linux/vmstat.h>
#include <linux/notifier.h>
#include <linux/node.h>
#include <linux/hugetlb.h>
#include <linux/compaction.h>
#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/nodemask.h>
#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/swap.h>
#include <linux/slab.h>
static struct bus_type node_subsys = {
.name = "node",
.dev_name = "node",
};
static ssize_t node_read_cpumap(struct device *dev, int type, char *buf)
{
struct node *node_dev = to_node(dev);
const struct cpumask *mask = cpumask_of_node(node_dev->dev.id);
int len;
/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
len = type ?
cpulist_scnprintf(buf, PAGE_SIZE-2, mask) :
cpumask_scnprintf(buf, PAGE_SIZE-2, mask);
buf[len++] = '\n';
buf[len] = '\0';
return len;
}
static inline ssize_t node_read_cpumask(struct device *dev,
struct device_attribute *attr, char *buf)
{
return node_read_cpumap(dev, 0, buf);
}
static inline ssize_t node_read_cpulist(struct device *dev,
struct device_attribute *attr, char *buf)
{
return node_read_cpumap(dev, 1, buf);
}
static DEVICE_ATTR(cpumap, S_IRUGO, node_read_cpumask, NULL);
static DEVICE_ATTR(cpulist, S_IRUGO, node_read_cpulist, NULL);
#define K(x) ((x) << (PAGE_SHIFT - 10))
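Aside: K() converts a page count to kibibytes. With 4 KiB pages (PAGE_SHIFT == 12, an assumption for this sketch) it reduces to x << 2, i.e. pages * 4:

#define PAGE_SHIFT 12 /* assumed 4 KiB pages */
#define K(x) ((x) << (PAGE_SHIFT - 10))

int main(void)
{
	return K(25UL) == 100 ? 0 : 1; /* 25 pages * 4 KiB = 100 kB */
}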
static ssize_t node_read_meminfo(struct device *dev,
struct device_attribute *attr, char *buf)
{
int n;
int nid = dev->id;
struct sysinfo i;
si_meminfo_node(&i, nid);
n = sprintf(buf,
"Node %d MemTotal: %8lu kB\n"
"Node %d MemFree: %8lu kB\n"
"Node %d MemUsed: %8lu kB\n"
"Node %d Active: %8lu kB\n"
"Node %d Inactive: %8lu kB\n"
"Node %d Active(anon): %8lu kB\n"
"Node %d Inactive(anon): %8lu kB\n"
"Node %d Active(file): %8lu kB\n"
"Node %d Inactive(file): %8lu kB\n"
"Node %d Unevictable: %8lu kB\n"
"Node %d Mlocked: %8lu kB\n",
nid, K(i.totalram),
nid, K(i.freeram),
nid, K(i.totalram - i.freeram),
nid, K(node_page_state(nid, NR_ACTIVE_ANON) +
node_page_state(nid, NR_ACTIVE_FILE)),
nid, K(node_page_state(nid, NR_INACTIVE_ANON) +
node_page_state(nid, NR_INACTIVE_FILE)),
nid, K(node_page_state(nid, NR_ACTIVE_ANON)),
nid, K(node_page_state(nid, NR_INACTIVE_ANON)),
nid, K(node_page_state(nid, NR_ACTIVE_FILE)),
nid, K(node_page_state(nid, NR_INACTIVE_FILE)),
nid, K(node_page_state(nid, NR_UNEVICTABLE)),
nid, K(node_page_state(nid, NR_MLOCK)));
#ifdef CONFIG_HIGHMEM
n += sprintf(buf + n,
"Node %d HighTotal: %8lu kB\n"
"Node %d HighFree: %8lu kB\n"
"Node %d LowTotal: %8lu kB\n"
"Node %d LowFree: %8lu kB\n",
nid, K(i.totalhigh),
nid, K(i.freehigh),
nid, K(i.totalram - i.totalhigh),
nid, K(i.freeram - i.freehigh));
#endif
n += sprintf(buf + n,
"Node %d Dirty: %8lu kB\n"
"Node %d Writeback: %8lu kB\n"
"Node %d FilePages: %8lu kB\n"
"Node %d Mapped: %8lu kB\n"
"Node %d AnonPages: %8lu kB\n"
"Node %d Shmem: %8lu kB\n"
"Node %d KernelStack: %8lu kB\n"
"Node %d PageTables: %8lu kB\n"
"Node %d NFS_Unstable: %8lu kB\n"
"Node %d Bounce: %8lu kB\n"
"Node %d WritebackTmp: %8lu kB\n"
"Node %d Slab: %8lu kB\n"
"Node %d SReclaimable: %8lu kB\n"
"Node %d SUnreclaim: %8lu kB\n"
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
"Node %d AnonHugePages: %8lu kB\n"
#endif
,
nid, K(node_page_state(nid, NR_FILE_DIRTY)),
nid, K(node_page_state(nid, NR_WRITEBACK)),
nid, K(node_page_state(nid, NR_FILE_PAGES)),
nid, K(node_page_state(nid, NR_FILE_MAPPED)),
nid, K(node_page_state(nid, NR_ANON_PAGES)),
nid, K(i.sharedram),
nid, node_page_state(nid, NR_KERNEL_STACK) *
THREAD_SIZE / 1024,
nid, K(node_page_state(nid, NR_PAGETABLE)),
nid, K(node_page_state(nid, NR_UNSTABLE_NFS)),
nid, K(node_page_state(nid, NR_BOUNCE)),
nid, K(node_page_state(nid, NR_WRITEBACK_TEMP)),
nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE) +
node_page_state(nid, NR_SLAB_UNRECLAIMABLE)),
nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE)),
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE))
, nid,
K(node_page_state(nid, NR_ANON_TRANSPARENT_HUGEPAGES) *
HPAGE_PMD_NR));
#else
nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE)));
#endif
n += hugetlb_report_node_meminfo(nid, buf + n);
return n;
}
#undef K
static DEVICE_ATTR(meminfo, S_IRUGO, node_read_meminfo, NULL);
static ssize_t node_read_numastat(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf,
"numa_hit %lu\n"
"numa_miss %lu\n"
"numa_foreign %lu\n"
"interleave_hit %lu\n"
"local_node %lu\n"
"other_node %lu\n",
node_page_state(dev->id, NUMA_HIT),
node_page_state(dev->id, NUMA_MISS),
node_page_state(dev->id, NUMA_FOREIGN),
node_page_state(dev->id, NUMA_INTERLEAVE_HIT),
node_page_state(dev->id, NUMA_LOCAL),
node_page_state(dev->id, NUMA_OTHER));
}
static DEVICE_ATTR(numastat, S_IRUGO, node_read_numastat, NULL);
static ssize_t node_read_vmstat(struct device *dev,
struct device_attribute *attr, char *buf)
{
int nid = dev->id;
int i;
int n = 0;
for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
n += sprintf(buf+n, "%s %lu\n", vmstat_text[i],
node_page_state(nid, i));
return n;
}
static DEVICE_ATTR(vmstat, S_IRUGO, node_read_vmstat, NULL);
static ssize_t node_read_distance(struct device *dev,
struct device_attribute *attr, char * buf)
{
int nid = dev->id;
int len = 0;
int i;
/*
* buf is currently PAGE_SIZE in length and each node needs 4 chars
* at the most (distance + space or newline).
*/
BUILD_BUG_ON(MAX_NUMNODES * 4 > PAGE_SIZE);
for_each_online_node(i)
len += sprintf(buf + len, "%s%d", i ? " " : "", node_distance(nid, i));
len += sprintf(buf + len, "\n");
return len;
}
static DEVICE_ATTR(distance, S_IRUGO, node_read_distance, NULL);
#ifdef CONFIG_HUGETLBFS
/*
* hugetlbfs per node attributes registration interface:
* When/if hugetlb[fs] subsystem initializes [sometime after this module],
* it will register its per node attributes for all online nodes with
* memory. It will also call register_hugetlbfs_with_node(), below, to
* register its attribute registration functions with this node driver.
* Once these hooks have been initialized, the node driver will call into
* the hugetlb module to [un]register attributes for hot-plugged nodes.
*/
static node_registration_func_t __hugetlb_register_node;
static node_registration_func_t __hugetlb_unregister_node;
static inline bool hugetlb_register_node(struct node *node)
{
if (__hugetlb_register_node &&
node_state(node->dev.id, N_MEMORY)) {
__hugetlb_register_node(node);
return true;
}
return false;
}
static inline void hugetlb_unregister_node(struct node *node)
{
if (__hugetlb_unregister_node)
__hugetlb_unregister_node(node);
}
void register_hugetlbfs_with_node(node_registration_func_t doregister,
node_registration_func_t unregister)
{
__hugetlb_register_node = doregister;
__hugetlb_unregister_node = unregister;
}
#else
static inline void hugetlb_register_node(struct node *node) {}
static inline void hugetlb_unregister_node(struct node *node) {}
#endif
static void node_device_release(struct device *dev)
{
struct node *node = to_node(dev);
#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HUGETLBFS)
/*
* We schedule the work only when a memory section is
* onlined/offlined on this node. When we come here,
* all the memory on this node has been offlined,
* so no new work will be queued for this node.
*
* The work uses node->node_work, so we should
* flush the work before freeing the memory.
*/
flush_work(&node->node_work);
#endif
kfree(node);
}
/*
* register_node - Setup a sysfs device for a node.
* @num - Node number to use when creating the device.
*
* Initialize and register the node device.
*/
static int register_node(struct node *node, int num, struct node *parent)
{
int error;
node->dev.id = num;
node->dev.bus = &node_subsys;
node->dev.release = node_device_release;
error = device_register(&node->dev);
if (!error) {
device_create_file(&node->dev, &dev_attr_cpumap);
device_create_file(&node->dev, &dev_attr_cpulist);
device_create_file(&node->dev, &dev_attr_meminfo);
device_create_file(&node->dev, &dev_attr_numastat);
device_create_file(&node->dev, &dev_attr_distance);
device_create_file(&node->dev, &dev_attr_vmstat);
hugetlb_register_node(node);
compaction_register_node(node);
}
return error;
}
/**
* unregister_node - unregister a node device
* @node: node going away
*
* Unregisters a node device @node. All the devices on the node must be
* unregistered before calling this function.
*/
void unregister_node(struct node *node)
{
device_remove_file(&node->dev, &dev_attr_cpumap);
device_remove_file(&node->dev, &dev_attr_cpulist);
device_remove_file(&node->dev, &dev_attr_meminfo);
device_remove_file(&node->dev, &dev_attr_numastat);
device_remove_file(&node->dev, &dev_attr_distance);
device_remove_file(&node->dev, &dev_attr_vmstat);
hugetlb_unregister_node(node); /* no-op, if memoryless node */
device_unregister(&node->dev);
}
struct node *node_devices[MAX_NUMNODES];
/*
* register cpu under node
*/
int register_cpu_under_node(unsigned int cpu, unsigned int nid)
{
int ret;
struct device *obj;
if (!node_online(nid))
return 0;
obj = get_cpu_device(cpu);
if (!obj)
return 0;
ret = sysfs_create_link(&node_devices[nid]->dev.kobj,
&obj->kobj,
kobject_name(&obj->kobj));
if (ret)
return ret;
return sysfs_create_link(&obj->kobj,
&node_devices[nid]->dev.kobj,
kobject_name(&node_devices[nid]->dev.kobj));
}
int unregister_cpu_under_node(unsigned int cpu, unsigned int nid)
{
struct device *obj;
if (!node_online(nid))
return 0;
obj = get_cpu_device(cpu);
if (!obj)
return 0;
sysfs_remove_link(&node_devices[nid]->dev.kobj,
kobject_name(&obj->kobj));
sysfs_remove_link(&obj->kobj,
kobject_name(&node_devices[nid]->dev.kobj));
return 0;
}
#ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
#define page_initialized(page) (page->lru.next)
static int get_nid_for_pfn(unsigned long pfn)
{
struct page *page;
if (!pfn_valid_within(pfn))
return -1;
page = pfn_to_page(pfn);
if (!page_initialized(page))
return -1;
return pfn_to_nid(pfn);
}
/* register memory section under specified node if it spans that node */
int register_mem_sect_under_node(struct memory_block *mem_blk, int nid)
{
int ret;
unsigned long pfn, sect_start_pfn, sect_end_pfn;
if (!mem_blk)
return -EFAULT;
if (!node_online(nid))
return 0;
sect_start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
sect_end_pfn = section_nr_to_pfn(mem_blk->end_section_nr);
sect_end_pfn += PAGES_PER_SECTION - 1;
for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
int page_nid;
page_nid = get_nid_for_pfn(pfn);
if (page_nid < 0)
continue;
if (page_nid != nid)
continue;
ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
&mem_blk->dev.kobj,
kobject_name(&mem_blk->dev.kobj));
if (ret)
return ret;
return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
&node_devices[nid]->dev.kobj,
kobject_name(&node_devices[nid]->dev.kobj));
}
/* mem section does not span the specified node */
return 0;
}
/* unregister memory section under all nodes that it spans */
int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
unsigned long phys_index)
{
NODEMASK_ALLOC(nodemask_t, unlinked_nodes, GFP_KERNEL);
unsigned long pfn, sect_start_pfn, sect_end_pfn;
if (!mem_blk) {
NODEMASK_FREE(unlinked_nodes);
return -EFAULT;
}
if (!unlinked_nodes)
return -ENOMEM;
nodes_clear(*unlinked_nodes);
sect_start_pfn = section_nr_to_pfn(phys_index);
sect_end_pfn = sect_start_pfn + PAGES_PER_SECTION - 1;
for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
int nid;
nid = get_nid_for_pfn(pfn);
if (nid < 0)
continue;
if (!node_online(nid))
continue;
if (node_test_and_set(nid, *unlinked_nodes))
continue;
sysfs_remove_link(&node_devices[nid]->dev.kobj,
kobject_name(&mem_blk->dev.kobj));
sysfs_remove_link(&mem_blk->dev.kobj,
kobject_name(&node_devices[nid]->dev.kobj));
}
NODEMASK_FREE(unlinked_nodes);
return 0;
}
static int link_mem_sections(int nid)
{
unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn;
unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_spanned_pages;
unsigned long pfn;
struct memory_block *mem_blk = NULL;
int err = 0;
for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
unsigned long section_nr = pfn_to_section_nr(pfn);
struct mem_section *mem_sect;
int ret;
if (!present_section_nr(section_nr))
continue;
mem_sect = __nr_to_section(section_nr);
/* same memblock ? */
if (mem_blk)
if ((section_nr >= mem_blk->start_section_nr) &&
(section_nr <= mem_blk->end_section_nr))
continue;
mem_blk = find_memory_block_hinted(mem_sect, mem_blk);
ret = register_mem_sect_under_node(mem_blk, nid);
if (!err)
err = ret;
/* discard ref obtained in find_memory_block() */
}
if (mem_blk)
kobject_put(&mem_blk->dev.kobj);
return err;
}
#ifdef CONFIG_HUGETLBFS
/*
* Handle per node hstate attribute [un]registration on transitions
* to/from memoryless state.
*/
static void node_hugetlb_work(struct work_struct *work)
{
struct node *node = container_of(work, struct node, node_work);
/*
* We only get here when a node transitions to/from memoryless state.
* We can detect which transition occurred by examining whether the
* node has memory now. hugetlb_register_node() already checks this,
* so we try to register the attributes. If that fails, then the
* node has transitioned to memoryless, so we try to unregister the
* attributes.
*/
if (!hugetlb_register_node(node))
hugetlb_unregister_node(node);
}
static void init_node_hugetlb_work(int nid)
{
INIT_WORK(&node_devices[nid]->node_work, node_hugetlb_work);
}
static int node_memory_callback(struct notifier_block *self,
unsigned long action, void *arg)
{
struct memory_notify *mnb = arg;
int nid = mnb->status_change_nid;
switch (action) {
case MEM_ONLINE:
case MEM_OFFLINE:
/*
* offload per node hstate [un]registration to a work thread
* when transitioning to/from memoryless state.
*/
if (nid != NUMA_NO_NODE)
schedule_work(&node_devices[nid]->node_work);
break;
case MEM_GOING_ONLINE:
case MEM_GOING_OFFLINE:
case MEM_CANCEL_ONLINE:
case MEM_CANCEL_OFFLINE:
default:
break;
}
return NOTIFY_OK;
}
#endif /* CONFIG_HUGETLBFS */
#else /* !CONFIG_MEMORY_HOTPLUG_SPARSE */
static int link_mem_sections(int nid) { return 0; }
#endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
#if !defined(CONFIG_MEMORY_HOTPLUG_SPARSE) || \
!defined(CONFIG_HUGETLBFS)
static inline int node_memory_callback(struct notifier_block *self,
unsigned long action, void *arg)
{
return NOTIFY_OK;
}
static void init_node_hugetlb_work(int nid) { }
#endif
int register_one_node(int nid)
{
int error = 0;
int cpu;
if (node_online(nid)) {
int p_node = parent_node(nid);
struct node *parent = NULL;
if (p_node != nid)
parent = node_devices[p_node];
node_devices[nid] = kzalloc(sizeof(struct node), GFP_KERNEL);
if (!node_devices[nid])
return -ENOMEM;
error = register_node(node_devices[nid], nid, parent);
/* link cpu under this node */
for_each_present_cpu(cpu) {
if (cpu_to_node(cpu) == nid)
register_cpu_under_node(cpu, nid);
}
/* link memory sections under this node */
error = link_mem_sections(nid);
/* initialize work queue for memory hot plug */
init_node_hugetlb_work(nid);
}
return error;
}
void unregister_one_node(int nid)
{
if (!node_devices[nid])
return;
unregister_node(node_devices[nid]);
node_devices[nid] = NULL;
}
/*
* node states attributes
*/
static ssize_t print_nodes_state(enum node_states state, char *buf)
{
int n;
n = nodelist_scnprintf(buf, PAGE_SIZE-2, node_states[state]);
buf[n++] = '\n';
buf[n] = '\0';
return n;
}
struct node_attr {
struct device_attribute attr;
enum node_states state;
};
static ssize_t show_node_state(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct node_attr *na = container_of(attr, struct node_attr, attr);
return print_nodes_state(na->state, buf);
}
#define _NODE_ATTR(name, state) \
{ __ATTR(name, 0444, show_node_state, NULL), state }
static struct node_attr node_state_attr[] = {
[N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
[N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
[N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
#ifdef CONFIG_HIGHMEM
[N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
#endif
#ifdef CONFIG_MOVABLE_NODE
[N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
#endif
[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
};
static struct attribute *node_state_attrs[] = {
&node_state_attr[N_POSSIBLE].attr.attr,
&node_state_attr[N_ONLINE].attr.attr,
&node_state_attr[N_NORMAL_MEMORY].attr.attr,
#ifdef CONFIG_HIGHMEM
&node_state_attr[N_HIGH_MEMORY].attr.attr,
#endif
#ifdef CONFIG_MOVABLE_NODE
&node_state_attr[N_MEMORY].attr.attr,
#endif
&node_state_attr[N_CPU].attr.attr,
NULL
};
static struct attribute_group memory_root_attr_group = {
.attrs = node_state_attrs,
};
static const struct attribute_group *cpu_root_attr_groups[] = {
&memory_root_attr_group,
NULL,
};
#define NODE_CALLBACK_PRI 2 /* lower than SLAB */
static int __init register_node_type(void)
{
int ret;
BUILD_BUG_ON(ARRAY_SIZE(node_state_attr) != NR_NODE_STATES);
BUILD_BUG_ON(ARRAY_SIZE(node_state_attrs)-1 != NR_NODE_STATES);
ret = subsys_system_register(&node_subsys, cpu_root_attr_groups);
if (!ret) {
static struct notifier_block node_memory_callback_nb = {
.notifier_call = node_memory_callback,
.priority = NODE_CALLBACK_PRI,
};
register_hotmemory_notifier(&node_memory_callback_nb);
}
/*
* Note: we're not going to unregister the node class if we fail
* to register the node state class attribute files.
*/
return ret;
}
postcore_initcall(register_node_type);

88
drivers/base/pinctrl.c Normal file
View file

@ -0,0 +1,88 @@
/*
* Driver core interface to the pinctrl subsystem.
*
* Copyright (C) 2012 ST-Ericsson SA
* Written on behalf of Linaro for ST-Ericsson
* Based on bits of regulator core, gpio core and clk core
*
* Author: Linus Walleij <linus.walleij@linaro.org>
*
* License terms: GNU General Public License (GPL) version 2
*/
#include <linux/device.h>
#include <linux/pinctrl/devinfo.h>
#include <linux/pinctrl/consumer.h>
#include <linux/slab.h>
/**
* pinctrl_bind_pins() - called by the device core before probe
* @dev: the device that is just about to probe
*/
int pinctrl_bind_pins(struct device *dev)
{
int ret;
dev->pins = devm_kzalloc(dev, sizeof(*(dev->pins)), GFP_KERNEL);
if (!dev->pins)
return -ENOMEM;
dev->pins->p = devm_pinctrl_get(dev);
if (IS_ERR(dev->pins->p)) {
dev_dbg(dev, "no pinctrl handle\n");
ret = PTR_ERR(dev->pins->p);
goto cleanup_alloc;
}
dev->pins->default_state = pinctrl_lookup_state(dev->pins->p,
PINCTRL_STATE_DEFAULT);
if (IS_ERR(dev->pins->default_state)) {
dev_dbg(dev, "no default pinctrl state\n");
ret = 0;
goto cleanup_get;
}
ret = pinctrl_select_state(dev->pins->p, dev->pins->default_state);
if (ret) {
dev_dbg(dev, "failed to activate default pinctrl state\n");
goto cleanup_get;
}
#ifdef CONFIG_PM
/*
* If power management is enabled, we also look for the optional
* sleep and idle pin states, with semantics as defined in
* <linux/pinctrl/pinctrl-state.h>
*/
dev->pins->sleep_state = pinctrl_lookup_state(dev->pins->p,
PINCTRL_STATE_SLEEP);
if (IS_ERR(dev->pins->sleep_state))
/* Not supplying this state is perfectly legal */
dev_dbg(dev, "no sleep pinctrl state\n");
dev->pins->idle_state = pinctrl_lookup_state(dev->pins->p,
PINCTRL_STATE_IDLE);
if (IS_ERR(dev->pins->idle_state))
/* Not supplying this state is perfectly legal */
dev_dbg(dev, "no idle pinctrl state\n");
#endif
return 0;
/*
* If no pinctrl handle or default state was found for this device,
* let's explicitly free the pin container in the device; there is
* no point in keeping it around.
*/
cleanup_get:
devm_pinctrl_put(dev->pins->p);
cleanup_alloc:
devm_kfree(dev, dev->pins);
dev->pins = NULL;
/* Only return deferrals */
if (ret != -EPROBE_DEFER)
ret = 0;
return ret;
}
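Because the device core runs pinctrl_bind_pins() before probe, a driver normally needs no code at all for the default state; it only touches dev->pins for the optional sleep/idle states. A hedged sketch of a suspend callback (foo_suspend is a hypothetical name), built only from the fields populated above; note sleep_state is looked up only when CONFIG_PM is set:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/pinctrl/consumer.h>
#include <linux/pinctrl/devinfo.h>

/* hypothetical driver callback: move pins to their sleep state */
static int foo_suspend(struct device *dev)
{
	if (dev->pins && !IS_ERR(dev->pins->sleep_state))
		return pinctrl_select_state(dev->pins->p,
					    dev->pins->sleep_state);
	return 0;
}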

1327
drivers/base/platform.c Normal file

File diff suppressed because it is too large

8
drivers/base/power/Makefile Normal file
View file

@ -0,0 +1,8 @@
obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o
obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o
obj-$(CONFIG_PM_TRACE_RTC) += trace.o
obj-$(CONFIG_PM_OPP) += opp.o
obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o
obj-$(CONFIG_HAVE_CLK) += clock_ops.o
ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG

516
drivers/base/power/clock_ops.c Normal file
View file

@ -0,0 +1,516 @@
/*
* drivers/base/power/clock_ops.c - Generic clock manipulation PM callbacks
*
* Copyright (c) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Renesas Electronics Corp.
*
* This file is released under the GPLv2.
*/
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/pm.h>
#include <linux/pm_clock.h>
#include <linux/clk.h>
#include <linux/slab.h>
#include <linux/err.h>
#ifdef CONFIG_PM
enum pce_status {
PCE_STATUS_NONE = 0,
PCE_STATUS_ACQUIRED,
PCE_STATUS_ENABLED,
PCE_STATUS_ERROR,
};
struct pm_clock_entry {
struct list_head node;
char *con_id;
struct clk *clk;
enum pce_status status;
};
/**
* __pm_clk_enable - Enable a clock, reporting any errors
* @dev: The device for the given clock
* @clk: The clock being enabled.
*/
static inline int __pm_clk_enable(struct device *dev, struct clk *clk)
{
int ret = clk_enable(clk);
if (ret)
dev_err(dev, "%s: failed to enable clk %p, error %d\n",
__func__, clk, ret);
return ret;
}
/**
* pm_clk_acquire - Acquire a device clock.
* @dev: Device whose clock is to be acquired.
* @ce: PM clock entry corresponding to the clock.
*/
static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
{
ce->clk = clk_get(dev, ce->con_id);
if (IS_ERR(ce->clk)) {
ce->status = PCE_STATUS_ERROR;
} else {
clk_prepare(ce->clk);
ce->status = PCE_STATUS_ACQUIRED;
dev_dbg(dev, "Clock %s managed by runtime PM.\n", ce->con_id);
}
}
/**
* pm_clk_add - Start using a device clock for power management.
* @dev: Device whose clock is going to be used for power management.
* @con_id: Connection ID of the clock.
*
* Add the clock represented by @con_id to the list of clocks used for
* the power management of @dev.
*/
int pm_clk_add(struct device *dev, const char *con_id)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
if (!psd)
return -EINVAL;
ce = kzalloc(sizeof(*ce), GFP_KERNEL);
if (!ce) {
dev_err(dev, "Not enough memory for clock entry.\n");
return -ENOMEM;
}
if (con_id) {
ce->con_id = kstrdup(con_id, GFP_KERNEL);
if (!ce->con_id) {
dev_err(dev,
"Not enough memory for clock connection ID.\n");
kfree(ce);
return -ENOMEM;
}
}
pm_clk_acquire(dev, ce);
spin_lock_irq(&psd->lock);
list_add_tail(&ce->node, &psd->clock_list);
spin_unlock_irq(&psd->lock);
return 0;
}
/**
* __pm_clk_remove - Destroy PM clock entry.
* @ce: PM clock entry to destroy.
*/
static void __pm_clk_remove(struct pm_clock_entry *ce)
{
if (!ce)
return;
if (ce->status < PCE_STATUS_ERROR) {
if (ce->status == PCE_STATUS_ENABLED)
clk_disable(ce->clk);
if (ce->status >= PCE_STATUS_ACQUIRED) {
clk_unprepare(ce->clk);
clk_put(ce->clk);
}
}
kfree(ce->con_id);
kfree(ce);
}
/**
* pm_clk_remove - Stop using a device clock for power management.
* @dev: Device whose clock should not be used for PM any more.
* @con_id: Connection ID of the clock.
*
* Remove the clock represented by @con_id from the list of clocks used for
* the power management of @dev.
*/
void pm_clk_remove(struct device *dev, const char *con_id)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
if (!psd)
return;
spin_lock_irq(&psd->lock);
list_for_each_entry(ce, &psd->clock_list, node) {
if (!con_id && !ce->con_id)
goto remove;
else if (!con_id || !ce->con_id)
continue;
else if (!strcmp(con_id, ce->con_id))
goto remove;
}
spin_unlock_irq(&psd->lock);
return;
remove:
list_del(&ce->node);
spin_unlock_irq(&psd->lock);
__pm_clk_remove(ce);
}
/**
* pm_clk_init - Initialize a device's list of power management clocks.
* @dev: Device to initialize the list of PM clocks for.
*
* Initialize the lock and clock_list members of the device's pm_subsys_data
* object.
*/
void pm_clk_init(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
if (psd)
INIT_LIST_HEAD(&psd->clock_list);
}
/**
* pm_clk_create - Create and initialize a device's list of PM clocks.
* @dev: Device to create and initialize the list of PM clocks for.
*
* Allocate a struct pm_subsys_data object, initialize its lock and clock_list
* members and make the @dev's power.subsys_data field point to it.
*/
int pm_clk_create(struct device *dev)
{
return dev_pm_get_subsys_data(dev);
}
/**
* pm_clk_destroy - Destroy a device's list of power management clocks.
* @dev: Device to destroy the list of PM clocks for.
*
* Clear the @dev's power.subsys_data field, remove the list of clock entries
* from the struct pm_subsys_data object pointed to by it before and free
* that object.
*/
void pm_clk_destroy(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce, *c;
struct list_head list;
if (!psd)
return;
INIT_LIST_HEAD(&list);
spin_lock_irq(&psd->lock);
list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
list_move(&ce->node, &list);
spin_unlock_irq(&psd->lock);
dev_pm_put_subsys_data(dev);
list_for_each_entry_safe_reverse(ce, c, &list, node) {
list_del(&ce->node);
__pm_clk_remove(ce);
}
}
#endif /* CONFIG_PM */
#ifdef CONFIG_PM_RUNTIME
/**
* pm_clk_suspend - Disable clocks in a device's PM clock list.
* @dev: Device to disable the clocks for.
*/
int pm_clk_suspend(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
unsigned long flags;
dev_dbg(dev, "%s()\n", __func__);
if (!psd)
return 0;
spin_lock_irqsave(&psd->lock, flags);
list_for_each_entry_reverse(ce, &psd->clock_list, node) {
if (ce->status < PCE_STATUS_ERROR) {
if (ce->status == PCE_STATUS_ENABLED)
clk_disable(ce->clk);
ce->status = PCE_STATUS_ACQUIRED;
}
}
spin_unlock_irqrestore(&psd->lock, flags);
return 0;
}
/**
* pm_clk_resume - Enable clocks in a device's PM clock list.
* @dev: Device to enable the clocks for.
*/
int pm_clk_resume(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
unsigned long flags;
int ret;
dev_dbg(dev, "%s()\n", __func__);
if (!psd)
return 0;
spin_lock_irqsave(&psd->lock, flags);
list_for_each_entry(ce, &psd->clock_list, node) {
if (ce->status < PCE_STATUS_ERROR) {
ret = __pm_clk_enable(dev, ce->clk);
if (!ret)
ce->status = PCE_STATUS_ENABLED;
}
}
spin_unlock_irqrestore(&psd->lock, flags);
return 0;
}
/**
* pm_clk_notify - Notify routine for device addition and removal.
* @nb: Notifier block object this function is a member of.
* @action: Operation being carried out by the caller.
* @data: Device the routine is being run for.
*
* For this function to work, @nb must be a member of an object of type
* struct pm_clk_notifier_block containing all of the requisite data.
* Specifically, the pm_domain member of that object is copied to the device's
* pm_domain field and its con_ids member is used to populate the device's list
* of PM clocks, depending on @action.
*
* If the device's pm_domain field is already populated with a value different
* from the one stored in the struct pm_clk_notifier_block object, the function
* does nothing.
*/
static int pm_clk_notify(struct notifier_block *nb,
unsigned long action, void *data)
{
struct pm_clk_notifier_block *clknb;
struct device *dev = data;
char **con_id;
int error;
dev_dbg(dev, "%s() %ld\n", __func__, action);
clknb = container_of(nb, struct pm_clk_notifier_block, nb);
switch (action) {
case BUS_NOTIFY_ADD_DEVICE:
if (dev->pm_domain)
break;
error = pm_clk_create(dev);
if (error)
break;
dev->pm_domain = clknb->pm_domain;
if (clknb->con_ids[0]) {
for (con_id = clknb->con_ids; *con_id; con_id++)
pm_clk_add(dev, *con_id);
} else {
pm_clk_add(dev, NULL);
}
break;
case BUS_NOTIFY_DEL_DEVICE:
if (dev->pm_domain != clknb->pm_domain)
break;
dev->pm_domain = NULL;
pm_clk_destroy(dev);
break;
}
return 0;
}
#else /* !CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM
/**
* pm_clk_suspend - Disable clocks in a device's PM clock list.
* @dev: Device to disable the clocks for.
*/
int pm_clk_suspend(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
unsigned long flags;
dev_dbg(dev, "%s()\n", __func__);
/* If there is no driver, the clocks are already disabled. */
if (!psd || !dev->driver)
return 0;
spin_lock_irqsave(&psd->lock, flags);
list_for_each_entry_reverse(ce, &psd->clock_list, node) {
if (ce->status < PCE_STATUS_ERROR) {
if (ce->status == PCE_STATUS_ENABLED)
clk_disable(ce->clk);
ce->status = PCE_STATUS_ACQUIRED;
}
}
spin_unlock_irqrestore(&psd->lock, flags);
return 0;
}
/**
* pm_clk_resume - Enable clocks in a device's PM clock list.
* @dev: Device to enable the clocks for.
*/
int pm_clk_resume(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
unsigned long flags;
int ret;
dev_dbg(dev, "%s()\n", __func__);
/* If there is no driver, the clocks should remain disabled. */
if (!psd || !dev->driver)
return 0;
spin_lock_irqsave(&psd->lock, flags);
list_for_each_entry(ce, &psd->clock_list, node) {
if (ce->status < PCE_STATUS_ERROR) {
ret = __pm_clk_enable(dev, ce->clk);
if (!ret)
ce->status = PCE_STATUS_ENABLED;
}
}
spin_unlock_irqrestore(&psd->lock, flags);
return 0;
}
#endif /* CONFIG_PM */
/**
* enable_clock - Enable a device clock.
* @dev: Device whose clock is to be enabled.
* @con_id: Connection ID of the clock.
*/
static void enable_clock(struct device *dev, const char *con_id)
{
struct clk *clk;
clk = clk_get(dev, con_id);
if (!IS_ERR(clk)) {
clk_prepare_enable(clk);
clk_put(clk);
dev_info(dev, "Runtime PM disabled, clock forced on.\n");
}
}
/**
* disable_clock - Disable a device clock.
* @dev: Device whose clock is to be disabled.
* @con_id: Connection ID of the clock.
*/
static void disable_clock(struct device *dev, const char *con_id)
{
struct clk *clk;
clk = clk_get(dev, con_id);
if (!IS_ERR(clk)) {
clk_disable_unprepare(clk);
clk_put(clk);
dev_info(dev, "Runtime PM disabled, clock forced off.\n");
}
}
/**
* pm_clk_notify - Notify routine for device addition and removal.
* @nb: Notifier block object this function is a member of.
* @action: Operation being carried out by the caller.
* @data: Device the routine is being run for.
*
* For this function to work, @nb must be a member of an object of type
* struct pm_clk_notifier_block containing all of the requisite data.
* Specifically, the con_ids member of that object is used to enable or disable
* the device's clocks, depending on @action.
*/
static int pm_clk_notify(struct notifier_block *nb,
unsigned long action, void *data)
{
struct pm_clk_notifier_block *clknb;
struct device *dev = data;
char **con_id;
dev_dbg(dev, "%s() %ld\n", __func__, action);
clknb = container_of(nb, struct pm_clk_notifier_block, nb);
switch (action) {
case BUS_NOTIFY_BIND_DRIVER:
if (clknb->con_ids[0]) {
for (con_id = clknb->con_ids; *con_id; con_id++)
enable_clock(dev, *con_id);
} else {
enable_clock(dev, NULL);
}
break;
case BUS_NOTIFY_UNBOUND_DRIVER:
if (clknb->con_ids[0]) {
for (con_id = clknb->con_ids; *con_id; con_id++)
disable_clock(dev, *con_id);
} else {
disable_clock(dev, NULL);
}
break;
}
return 0;
}
#endif /* !CONFIG_PM_RUNTIME */
/**
* pm_clk_add_notifier - Add bus type notifier for power management clocks.
* @bus: Bus type to add the notifier to.
* @clknb: Notifier to be added to the given bus type.
*
* The nb member of @clknb is not expected to be initialized and its
* notifier_call member will be replaced with pm_clk_notify(). However,
* the remaining members of @clknb should be populated prior to calling this
* routine.
*/
void pm_clk_add_notifier(struct bus_type *bus,
struct pm_clk_notifier_block *clknb)
{
if (!bus || !clknb)
return;
clknb->nb.notifier_call = pm_clk_notify;
bus_register_notifier(bus, &clknb->nb);
}
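A typical caller is bus or SoC setup code: it declares a pm_clk_notifier_block naming the clock connection IDs to manage and registers it once at init. A hedged sketch; the "fck" connection ID and the foo_ names are assumptions, not from this file:

#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/pm_clock.h>

/* hypothetical: manage each platform device's "fck" clock via PM clocks */
static struct pm_clk_notifier_block foo_clk_nb = {
	.con_ids = { "fck", NULL },
};

static int __init foo_pm_init(void)
{
	pm_clk_add_notifier(&platform_bus_type, &foo_clk_nb);
	return 0;
}
core_initcall(foo_pm_init);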

136
drivers/base/power/common.c Normal file
View file

@ -0,0 +1,136 @@
/*
* drivers/base/power/common.c - Common device power management code.
*
* Copyright (C) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Renesas Electronics Corp.
*
* This file is released under the GPLv2.
*/
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/export.h>
#include <linux/slab.h>
#include <linux/pm_clock.h>
#include <linux/acpi.h>
#include <linux/pm_domain.h>
/**
* dev_pm_get_subsys_data - Create or refcount power.subsys_data for device.
* @dev: Device to handle.
*
* If power.subsys_data is NULL, point it to a new object, otherwise increment
* its reference counter. Return 0 on success or an error code on failure.
*/
int dev_pm_get_subsys_data(struct device *dev)
{
struct pm_subsys_data *psd;
psd = kzalloc(sizeof(*psd), GFP_KERNEL);
if (!psd)
return -ENOMEM;
spin_lock_irq(&dev->power.lock);
if (dev->power.subsys_data) {
dev->power.subsys_data->refcount++;
} else {
spin_lock_init(&psd->lock);
psd->refcount = 1;
dev->power.subsys_data = psd;
pm_clk_init(dev);
psd = NULL;
}
spin_unlock_irq(&dev->power.lock);
/* kfree() is a no-op when psd is NULL. */
kfree(psd);
return 0;
}
EXPORT_SYMBOL_GPL(dev_pm_get_subsys_data);
/**
* dev_pm_put_subsys_data - Drop reference to power.subsys_data.
* @dev: Device to handle.
*
* If the reference counter of power.subsys_data is zero after dropping the
* reference, power.subsys_data is removed. Return 1 if that happens or 0
* otherwise.
*/
int dev_pm_put_subsys_data(struct device *dev)
{
struct pm_subsys_data *psd;
int ret = 1;
spin_lock_irq(&dev->power.lock);
psd = dev_to_psd(dev);
if (!psd)
goto out;
if (--psd->refcount == 0) {
dev->power.subsys_data = NULL;
} else {
psd = NULL;
ret = 0;
}
out:
spin_unlock_irq(&dev->power.lock);
kfree(psd);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_put_subsys_data);
/**
* dev_pm_domain_attach - Attach a device to its PM domain.
* @dev: Device to attach.
* @power_on: Used to indicate whether we should power on the device.
*
* The @dev may only be attached to a single PM domain. By iterating through
* the available alternatives we try to find a valid PM domain for the device.
* As attachment succeeds, the ->detach() callback in the struct dev_pm_domain
* should be assigned by the corresponding attach function.
*
* This function should typically be invoked from subsystem level code during
* the probe phase, especially for subsystems that hold devices requiring
* power management through PM domains.
*
* Callers must ensure proper synchronization of this function with power
* management callbacks.
*
* Returns 0 on successfully attached PM domain or negative error code.
*/
int dev_pm_domain_attach(struct device *dev, bool power_on)
{
int ret;
ret = acpi_dev_pm_attach(dev, power_on);
if (ret)
ret = genpd_dev_pm_attach(dev);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_domain_attach);
/**
* dev_pm_domain_detach - Detach a device from its PM domain.
* @dev: Device to detach.
* @power_off: Used to indicate whether we should power off the device.
*
* This function reverses the actions from dev_pm_domain_attach() and thus
* tries to detach the @dev from its PM domain. Typically it should be invoked
* from subsystem level code during the remove phase.
*
* Callers must ensure proper synchronization of this function with power
* management callbacks.
*/
void dev_pm_domain_detach(struct device *dev, bool power_off)
{
if (dev->pm_domain && dev->pm_domain->detach)
dev->pm_domain->detach(dev, power_off);
}
EXPORT_SYMBOL_GPL(dev_pm_domain_detach);
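A hedged sketch of the intended call sites, with hypothetical foo_bus_* names: attach (optionally powering on) from a subsystem's probe path, and detach (optionally powering off) from its remove path. Per the pattern elsewhere in the core, only -EPROBE_DEFER is propagated:

#include <linux/device.h>
#include <linux/pm_domain.h>

static int foo_bus_probe(struct device *dev)
{
	int ret = dev_pm_domain_attach(dev, true);

	if (ret == -EPROBE_DEFER)
		return ret;
	/* ... bind the driver ... */
	return 0;
}

static int foo_bus_remove(struct device *dev)
{
	/* ... unbind the driver ... */
	dev_pm_domain_detach(dev, true);
	return 0;
}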

2380
drivers/base/power/domain.c Normal file

File diff suppressed because it is too large

250
drivers/base/power/domain_governor.c Normal file
View file

@ -0,0 +1,250 @@
/*
* drivers/base/power/domain_governor.c - Governors for device PM domains.
*
* Copyright (C) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Renesas Electronics Corp.
*
* This file is released under the GPLv2.
*/
#include <linux/kernel.h>
#include <linux/pm_domain.h>
#include <linux/pm_qos.h>
#include <linux/hrtimer.h>
#ifdef CONFIG_PM_RUNTIME
static int dev_update_qos_constraint(struct device *dev, void *data)
{
s64 *constraint_ns_p = data;
s32 constraint_ns = -1;
if (dev->power.subsys_data && dev->power.subsys_data->domain_data)
constraint_ns = dev_gpd_data(dev)->td.effective_constraint_ns;
if (constraint_ns < 0) {
constraint_ns = dev_pm_qos_read_value(dev);
constraint_ns *= NSEC_PER_USEC;
}
if (constraint_ns == 0)
return 0;
/*
* constraint_ns cannot be negative here, because the device has been
* suspended.
*/
if (constraint_ns < *constraint_ns_p || *constraint_ns_p == 0)
*constraint_ns_p = constraint_ns;
return 0;
}
/**
* default_stop_ok - Default PM domain governor routine for stopping devices.
* @dev: Device to check.
*/
static bool default_stop_ok(struct device *dev)
{
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
unsigned long flags;
s64 constraint_ns;
dev_dbg(dev, "%s()\n", __func__);
spin_lock_irqsave(&dev->power.lock, flags);
if (!td->constraint_changed) {
bool ret = td->cached_stop_ok;
spin_unlock_irqrestore(&dev->power.lock, flags);
return ret;
}
td->constraint_changed = false;
td->cached_stop_ok = false;
td->effective_constraint_ns = -1;
constraint_ns = __dev_pm_qos_read_value(dev);
spin_unlock_irqrestore(&dev->power.lock, flags);
if (constraint_ns < 0)
return false;
constraint_ns *= NSEC_PER_USEC;
/*
* We can walk the children without any additional locking, because
* they all have been suspended at this point and their
* effective_constraint_ns fields won't be modified in parallel with us.
*/
if (!dev->power.ignore_children)
device_for_each_child(dev, &constraint_ns,
dev_update_qos_constraint);
if (constraint_ns > 0) {
constraint_ns -= td->start_latency_ns;
if (constraint_ns == 0)
return false;
}
td->effective_constraint_ns = constraint_ns;
td->cached_stop_ok = constraint_ns > td->stop_latency_ns ||
constraint_ns == 0;
/*
* The children have been suspended already, so we don't need to take
* their stop latencies into account here.
*/
return td->cached_stop_ok;
}
/**
* default_power_down_ok - Default generic PM domain power off governor routine.
* @pd: PM domain to check.
*
* This routine must be executed under the PM domain's lock.
*/
static bool default_power_down_ok(struct dev_pm_domain *pd)
{
struct generic_pm_domain *genpd = pd_to_genpd(pd);
struct gpd_link *link;
struct pm_domain_data *pdd;
s64 min_off_time_ns;
s64 off_on_time_ns;
if (genpd->max_off_time_changed) {
struct gpd_link *link;
/*
* We have to invalidate the cached results for the masters, so
* use the observation that default_power_down_ok() is not
* going to be called for any master until this instance
* returns.
*/
list_for_each_entry(link, &genpd->slave_links, slave_node)
link->master->max_off_time_changed = true;
genpd->max_off_time_changed = false;
genpd->cached_power_down_ok = false;
genpd->max_off_time_ns = -1;
} else {
return genpd->cached_power_down_ok;
}
off_on_time_ns = genpd->power_off_latency_ns +
genpd->power_on_latency_ns;
/*
* It doesn't make sense to remove power from the domain if saving
* the state of all devices in it and the power off/power on operations
* take too much time.
*
* All devices in this domain have been stopped already at this point.
*/
list_for_each_entry(pdd, &genpd->dev_list, list_node) {
if (pdd->dev->driver)
off_on_time_ns +=
to_gpd_data(pdd)->td.save_state_latency_ns;
}
min_off_time_ns = -1;
/*
* Check if subdomains can be off for enough time.
*
* All subdomains have been powered off already at this point.
*/
list_for_each_entry(link, &genpd->master_links, master_node) {
struct generic_pm_domain *sd = link->slave;
s64 sd_max_off_ns = sd->max_off_time_ns;
if (sd_max_off_ns < 0)
continue;
/*
* Check if the subdomain is allowed to be off long enough for
* the current domain to turn off and on (that's how much time
* it will have to wait worst case).
*/
if (sd_max_off_ns <= off_on_time_ns)
return false;
if (min_off_time_ns > sd_max_off_ns || min_off_time_ns < 0)
min_off_time_ns = sd_max_off_ns;
}
/*
* Check if the devices in the domain can be off for enough time.
*/
list_for_each_entry(pdd, &genpd->dev_list, list_node) {
struct gpd_timing_data *td;
s64 constraint_ns;
if (!pdd->dev->driver)
continue;
/*
* Check if the device is allowed to be off long enough for the
* domain to turn off and on (that's how much time it will
* have to wait worst case).
*/
td = &to_gpd_data(pdd)->td;
constraint_ns = td->effective_constraint_ns;
/* default_stop_ok() need not be called before us. */
if (constraint_ns < 0) {
constraint_ns = dev_pm_qos_read_value(pdd->dev);
constraint_ns *= NSEC_PER_USEC;
}
if (constraint_ns == 0)
continue;
/*
* constraint_ns cannot be negative here, because the device has
* been suspended.
*/
constraint_ns -= td->restore_state_latency_ns;
if (constraint_ns <= off_on_time_ns)
return false;
if (min_off_time_ns > constraint_ns || min_off_time_ns < 0)
min_off_time_ns = constraint_ns;
}
genpd->cached_power_down_ok = true;
/*
* If the computed minimum device off time is negative, there are no
* latency constraints, so the domain can spend arbitrary time in the
* "off" state.
*/
if (min_off_time_ns < 0)
return true;
/*
* The difference between the computed minimum subdomain or device off
* time and the time needed to turn the domain on is the maximum
* theoretical time this domain can spend in the "off" state.
*/
genpd->max_off_time_ns = min_off_time_ns - genpd->power_on_latency_ns;
return true;
}
static bool always_on_power_down_ok(struct dev_pm_domain *domain)
{
return false;
}
#else /* !CONFIG_PM_RUNTIME */
static inline bool default_stop_ok(struct device *dev) { return false; }
#define default_power_down_ok NULL
#define always_on_power_down_ok NULL
#endif /* !CONFIG_PM_RUNTIME */
struct dev_power_governor simple_qos_governor = {
.stop_ok = default_stop_ok,
.power_down_ok = default_power_down_ok,
};
/**
* pm_domain_always_on_gov - A governor implementing an always-on policy
*/
struct dev_power_governor pm_domain_always_on_gov = {
.power_down_ok = always_on_power_down_ok,
.stop_ok = default_stop_ok,
};
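A hedged sketch of how a platform hands one of these governors to the generic PM domain core; the foo_pd name is hypothetical, and the domain here starts in the powered-on state:

#include <linux/pm_domain.h>

static struct generic_pm_domain foo_pd = {
	.name = "foo",
};

static void foo_pd_setup(void)
{
	/* false: the domain is initially powered on */
	pm_genpd_init(&foo_pd, &simple_qos_governor, false);
}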

306
drivers/base/power/generic_ops.c Normal file
View file

@ -0,0 +1,306 @@
/*
* drivers/base/power/generic_ops.c - Generic PM callbacks for subsystems
*
* Copyright (c) 2010 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
*
* This file is released under the GPLv2.
*/
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/export.h>
#ifdef CONFIG_PM
/**
* pm_generic_runtime_suspend - Generic runtime suspend callback for subsystems.
* @dev: Device to suspend.
*
* If PM operations are defined for the @dev's driver and they include
* ->runtime_suspend(), execute it and return its error code. Otherwise,
* return 0.
*/
int pm_generic_runtime_suspend(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int ret;
ret = pm && pm->runtime_suspend ? pm->runtime_suspend(dev) : 0;
return ret;
}
EXPORT_SYMBOL_GPL(pm_generic_runtime_suspend);
/**
* pm_generic_runtime_resume - Generic runtime resume callback for subsystems.
* @dev: Device to resume.
*
* If PM operations are defined for the @dev's driver and they include
* ->runtime_resume(), execute it and return its error code. Otherwise,
* return 0.
*/
int pm_generic_runtime_resume(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int ret;
ret = pm && pm->runtime_resume ? pm->runtime_resume(dev) : 0;
return ret;
}
EXPORT_SYMBOL_GPL(pm_generic_runtime_resume);
#endif /* CONFIG_PM */
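/*
 * Example (hypothetical bus code, not part of this file): the callbacks
 * above exist so that a subsystem can forward runtime PM requests straight
 * to the device driver; "my_bus" is a made-up name.
 *
 *	static const struct dev_pm_ops my_bus_pm_ops = {
 *		.runtime_suspend = pm_generic_runtime_suspend,
 *		.runtime_resume = pm_generic_runtime_resume,
 *	};
 *
 *	struct bus_type my_bus_type = {
 *		.name = "my_bus",
 *		.pm = &my_bus_pm_ops,
 *	};
 */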
#ifdef CONFIG_PM_SLEEP
/**
* pm_generic_prepare - Generic routine preparing a device for power transition.
* @dev: Device to prepare.
*
* Prepare a device for a system-wide power transition.
*/
int pm_generic_prepare(struct device *dev)
{
struct device_driver *drv = dev->driver;
int ret = 0;
if (drv && drv->pm && drv->pm->prepare)
ret = drv->pm->prepare(dev);
return ret;
}
/**
* pm_generic_suspend_noirq - Generic suspend_noirq callback for subsystems.
* @dev: Device to suspend.
*/
int pm_generic_suspend_noirq(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->suspend_noirq ? pm->suspend_noirq(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_suspend_noirq);
/**
* pm_generic_suspend_late - Generic suspend_late callback for subsystems.
* @dev: Device to suspend.
*/
int pm_generic_suspend_late(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->suspend_late ? pm->suspend_late(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_suspend_late);
/**
* pm_generic_suspend - Generic suspend callback for subsystems.
* @dev: Device to suspend.
*/
int pm_generic_suspend(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->suspend ? pm->suspend(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_suspend);
/**
* pm_generic_freeze_noirq - Generic freeze_noirq callback for subsystems.
* @dev: Device to freeze.
*/
int pm_generic_freeze_noirq(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->freeze_noirq ? pm->freeze_noirq(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_freeze_noirq);
/**
* pm_generic_freeze_late - Generic freeze_late callback for subsystems.
* @dev: Device to freeze.
*/
int pm_generic_freeze_late(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->freeze_late ? pm->freeze_late(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_freeze_late);
/**
* pm_generic_freeze - Generic freeze callback for subsystems.
* @dev: Device to freeze.
*/
int pm_generic_freeze(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->freeze ? pm->freeze(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_freeze);
/**
* pm_generic_poweroff_noirq - Generic poweroff_noirq callback for subsystems.
* @dev: Device to handle.
*/
int pm_generic_poweroff_noirq(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->poweroff_noirq ? pm->poweroff_noirq(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_poweroff_noirq);
/**
* pm_generic_poweroff_late - Generic poweroff_late callback for subsystems.
* @dev: Device to handle.
*/
int pm_generic_poweroff_late(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->poweroff_late ? pm->poweroff_late(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_poweroff_late);
/**
* pm_generic_poweroff - Generic poweroff callback for subsystems.
* @dev: Device to handle.
*/
int pm_generic_poweroff(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->poweroff ? pm->poweroff(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_poweroff);
/**
* pm_generic_thaw_noirq - Generic thaw_noirq callback for subsystems.
* @dev: Device to thaw.
*/
int pm_generic_thaw_noirq(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->thaw_noirq ? pm->thaw_noirq(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_thaw_noirq);
/**
* pm_generic_thaw_early - Generic thaw_early callback for subsystems.
* @dev: Device to thaw.
*/
int pm_generic_thaw_early(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->thaw_early ? pm->thaw_early(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_thaw_early);
/**
* pm_generic_thaw - Generic thaw callback for subsystems.
* @dev: Device to thaw.
*/
int pm_generic_thaw(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->thaw ? pm->thaw(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_thaw);
/**
* pm_generic_resume_noirq - Generic resume_noirq callback for subsystems.
* @dev: Device to resume.
*/
int pm_generic_resume_noirq(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->resume_noirq ? pm->resume_noirq(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_resume_noirq);
/**
* pm_generic_resume_early - Generic resume_early callback for subsystems.
* @dev: Device to resume.
*/
int pm_generic_resume_early(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->resume_early ? pm->resume_early(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_resume_early);
/**
* pm_generic_resume - Generic resume callback for subsystems.
* @dev: Device to resume.
*/
int pm_generic_resume(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->resume ? pm->resume(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_resume);
/**
* pm_generic_restore_noirq - Generic restore_noirq callback for subsystems.
* @dev: Device to restore.
*/
int pm_generic_restore_noirq(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->restore_noirq ? pm->restore_noirq(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_restore_noirq);
/**
* pm_generic_restore_early - Generic restore_early callback for subsystems.
* @dev: Device to restore.
*/
int pm_generic_restore_early(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->restore_early ? pm->restore_early(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_restore_early);
/**
* pm_generic_restore - Generic restore callback for subsystems.
* @dev: Device to restore.
*/
int pm_generic_restore(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return pm && pm->restore ? pm->restore(dev) : 0;
}
EXPORT_SYMBOL_GPL(pm_generic_restore);
/**
* pm_generic_complete - Generic routine completing a device power transition.
* @dev: Device to handle.
*
* Complete a device power transition during a system-wide power transition.
*/
void pm_generic_complete(struct device *dev)
{
struct device_driver *drv = dev->driver;
if (drv && drv->pm && drv->pm->complete)
drv->pm->complete(dev);
/*
* Let runtime PM try to suspend devices that haven't been in use before
* going into the system-wide sleep state we're resuming from.
*/
pm_request_idle(dev);
}
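/*
 * Example (hypothetical subsystem code, not part of this file): a subsystem
 * with no special system-sleep requirements can reuse the generic callbacks
 * wholesale in its dev_pm_ops:
 *
 *	static const struct dev_pm_ops my_subsys_pm_ops = {
 *		.prepare = pm_generic_prepare,
 *		.suspend = pm_generic_suspend,
 *		.resume = pm_generic_resume,
 *		.freeze = pm_generic_freeze,
 *		.thaw = pm_generic_thaw,
 *		.poweroff = pm_generic_poweroff,
 *		.restore = pm_generic_restore,
 *		.complete = pm_generic_complete,
 *	};
 */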
#endif /* CONFIG_PM_SLEEP */

1768
drivers/base/power/main.c Normal file

File diff suppressed because it is too large

678
drivers/base/power/opp.c Normal file
View file

@@ -0,0 +1,678 @@
/*
* Generic OPP Interface
*
* Copyright (C) 2009-2010 Texas Instruments Incorporated.
* Nishanth Menon
* Romit Dasgupta
* Kevin Hilman
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/pm_opp.h>
#include <linux/of.h>
#include <linux/export.h>
/*
* Internal data structure organization with the OPP layer library is as
* follows:
* dev_opp_list (root)
* |- device 1 (represents voltage domain 1)
* | |- opp 1 (availability, freq, voltage)
* | |- opp 2 ..
* ... ...
* | `- opp n ..
* |- device 2 (represents the next voltage domain)
* ...
* `- device m (represents mth voltage domain)
* device 1, 2.. are represented by dev_opp structure while each opp
* is represented by the opp structure.
*/
/**
* struct dev_pm_opp - Generic OPP description structure
* @node: opp list node. The nodes are maintained throughout the lifetime
* of boot. It is expected that only an optimal set of OPPs is
* added to the library by the SoC framework.
* RCU usage: opp list is traversed with RCU locks. node
* modification is possible realtime, hence the modifications
* are protected by the dev_opp_list_lock for integrity.
* IMPORTANT: the opp nodes should be maintained in increasing
* order.
* @available: true/false - marks whether this OPP is available or not
* @rate: Frequency in hertz
* @u_volt: Nominal voltage in microvolts corresponding to this OPP
* @dev_opp: points back to the device_opp struct this opp belongs to
* @head: RCU callback head used for deferred freeing
*
* This structure stores the OPP information for a given device.
*/
struct dev_pm_opp {
struct list_head node;
bool available;
unsigned long rate;
unsigned long u_volt;
struct device_opp *dev_opp;
struct rcu_head head;
};
/**
* struct device_opp - Device opp structure
* @node: list node - contains the devices with OPPs that
* have been registered. Nodes once added are not modified in this
* list.
* RCU usage: nodes are not modified in the list of device_opp,
* however addition is possible and is secured by dev_opp_list_lock
* @dev: device pointer
* @head: notifier head to notify the OPP availability changes.
* @opp_list: list of opps
*
* This is an internal data structure maintaining the link to opps attached to
* a device. This structure is not meant to be shared to users as it is
* meant for book keeping and private to OPP library
*/
struct device_opp {
struct list_head node;
struct device *dev;
struct srcu_notifier_head head;
struct list_head opp_list;
};
/*
* The root of the list of all devices. All device_opp structures branch off
* from here, with each device_opp containing the list of opp it supports in
* various states of availability.
*/
static LIST_HEAD(dev_opp_list);
/* Lock to allow exclusive modification to the device and opp lists */
static DEFINE_MUTEX(dev_opp_list_lock);
/**
* find_device_opp() - find device_opp struct using device pointer
* @dev: device pointer used to lookup device OPPs
*
* Search list of device OPPs for one containing matching device. Does a RCU
* reader operation to grab the pointer needed.
*
* Returns pointer to 'struct device_opp' if found, otherwise -ENODEV or
* -EINVAL based on type of error.
*
* Locking: This function must be called under rcu_read_lock(). device_opp
* is a RCU protected pointer. This means that device_opp is valid as long
* as we are under RCU lock.
*/
static struct device_opp *find_device_opp(struct device *dev)
{
struct device_opp *tmp_dev_opp, *dev_opp = ERR_PTR(-ENODEV);
if (unlikely(IS_ERR_OR_NULL(dev))) {
pr_err("%s: Invalid parameters\n", __func__);
return ERR_PTR(-EINVAL);
}
list_for_each_entry_rcu(tmp_dev_opp, &dev_opp_list, node) {
if (tmp_dev_opp->dev == dev) {
dev_opp = tmp_dev_opp;
break;
}
}
return dev_opp;
}
/**
* dev_pm_opp_get_voltage() - Gets the voltage corresponding to an available opp
* @opp: opp for which voltage has to be returned for
*
* Return voltage in microvolts corresponding to the opp, else
* return 0
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. This means that opp which could have been fetched by
* opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
* under RCU lock. The pointer returned by the opp_find_freq family must be
* used in the same section as the usage of this function with the pointer
* prior to unlocking with rcu_read_unlock() to maintain the integrity of the
* pointer.
*/
unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
{
struct dev_pm_opp *tmp_opp;
unsigned long v = 0;
tmp_opp = rcu_dereference(opp);
if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available)
pr_err("%s: Invalid parameters\n", __func__);
else
v = tmp_opp->u_volt;
return v;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage);
/**
* dev_pm_opp_get_freq() - Gets the frequency corresponding to an available opp
* @opp: opp for which frequency has to be returned for
*
* Return frequency in hertz corresponding to the opp, else
* return 0
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. This means that opp which could have been fetched by
* opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
* under RCU lock. The pointer returned by the opp_find_freq family must be
* used in the same section as the usage of this function with the pointer
* prior to unlocking with rcu_read_unlock() to maintain the integrity of the
* pointer.
*/
unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
{
struct dev_pm_opp *tmp_opp;
unsigned long f = 0;
tmp_opp = rcu_dereference(opp);
if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available)
pr_err("%s: Invalid parameters\n", __func__);
else
f = tmp_opp->rate;
return f;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
/**
* dev_pm_opp_get_opp_count() - Get number of opps available in the opp list
* @dev: device for which we do this operation
*
* This function returns the number of available opps if there are any,
* else returns 0 if none or the corresponding error value.
*
* Locking: This function must be called under rcu_read_lock(). This function
* internally references two RCU protected structures: device_opp and opp which
* are safe as long as we are under a common RCU locked section.
*/
int dev_pm_opp_get_opp_count(struct device *dev)
{
struct device_opp *dev_opp;
struct dev_pm_opp *temp_opp;
int count = 0;
dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp)) {
int r = PTR_ERR(dev_opp);
dev_err(dev, "%s: device OPP not found (%d)\n", __func__, r);
return r;
}
list_for_each_entry_rcu(temp_opp, &dev_opp->opp_list, node) {
if (temp_opp->available)
count++;
}
return count;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
/**
* dev_pm_opp_find_freq_exact() - search for an exact frequency
* @dev: device for which we do this operation
* @freq: frequency to search for
* @available: true/false - match for available opp
*
* Searches for exact match in the opp list and returns pointer to the matching
* opp if found, else returns ERR_PTR in case of error and should be handled
* using IS_ERR. Error return values can be:
* EINVAL: for bad pointer
* ERANGE: no match found for search
* ENODEV: if device not found in list of registered devices
*
* Note: available is a modifier for the search. If available=true, then the
* match is for an exact matching frequency which is available in the stored
* OPP table. If false, the match is for an exact frequency which is not
* available.
*
* This provides a mechanism to enable an opp which is not available currently
* or the opposite as well.
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. The reason for the same is that the opp pointer which is
* returned will remain valid for use with opp_get_{voltage, freq} only while
* under the locked area. The pointer returned must be used prior to unlocking
* with rcu_read_unlock() to maintain the integrity of the pointer.
*/
struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
unsigned long freq,
bool available)
{
struct device_opp *dev_opp;
struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp)) {
int r = PTR_ERR(dev_opp);
dev_err(dev, "%s: device OPP not found (%d)\n", __func__, r);
return ERR_PTR(r);
}
list_for_each_entry_rcu(temp_opp, &dev_opp->opp_list, node) {
if (temp_opp->available == available &&
temp_opp->rate == freq) {
opp = temp_opp;
break;
}
}
return opp;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
/**
* dev_pm_opp_find_freq_ceil() - Search for a rounded ceil freq
* @dev: device for which we do this operation
* @freq: Start frequency
*
* Search for the matching ceil *available* OPP from a starting freq
* for a device.
*
* Returns matching *opp and refreshes *freq accordingly, else returns
* ERR_PTR in case of error and should be handled using IS_ERR. Error return
* values can be:
* EINVAL: for bad pointer
* ERANGE: no match found for search
* ENODEV: if device not found in list of registered devices
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. The reason for the same is that the opp pointer which is
* returned will remain valid for use with opp_get_{voltage, freq} only while
* under the locked area. The pointer returned must be used prior to unlocking
* with rcu_read_unlock() to maintain the integrity of the pointer.
*/
struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
unsigned long *freq)
{
struct device_opp *dev_opp;
struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
if (!dev || !freq) {
dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
return ERR_PTR(-EINVAL);
}
dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp))
return ERR_CAST(dev_opp);
list_for_each_entry_rcu(temp_opp, &dev_opp->opp_list, node) {
if (temp_opp->available && temp_opp->rate >= *freq) {
opp = temp_opp;
*freq = opp->rate;
break;
}
}
return opp;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
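/*
 * Example (hypothetical driver code, not part of this file): the find/get
 * helpers must all run inside one RCU read-side section, e.g. when picking
 * the supply voltage for a made-up 800 MHz target:
 *
 *	unsigned long freq = 800000000;
 *	unsigned long volt = 0;
 *	struct dev_pm_opp *opp;
 *
 *	rcu_read_lock();
 *	opp = dev_pm_opp_find_freq_ceil(dev, &freq);
 *	if (!IS_ERR(opp))
 *		volt = dev_pm_opp_get_voltage(opp);
 *	rcu_read_unlock();
 *
 * After rcu_read_unlock(), 'opp' must not be dereferenced again; 'freq'
 * now holds the actual rate of the chosen OPP.
 */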
/**
* dev_pm_opp_find_freq_floor() - Search for a rounded floor freq
* @dev: device for which we do this operation
* @freq: Start frequency
*
* Search for the matching floor *available* OPP from a starting freq
* for a device.
*
* Returns matching *opp and refreshes *freq accordingly, else returns
* ERR_PTR in case of error and should be handled using IS_ERR. Error return
* values can be:
* EINVAL: for bad pointer
* ERANGE: no match found for search
* ENODEV: if device not found in list of registered devices
*
* Locking: This function must be called under rcu_read_lock(). opp is a rcu
* protected pointer. The reason for the same is that the opp pointer which is
* returned will remain valid for use with opp_get_{voltage, freq} only while
* under the locked area. The pointer returned must be used prior to unlocking
* with rcu_read_unlock() to maintain the integrity of the pointer.
*/
struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
unsigned long *freq)
{
struct device_opp *dev_opp;
struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
if (!dev || !freq) {
dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
return ERR_PTR(-EINVAL);
}
dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp))
return ERR_CAST(dev_opp);
list_for_each_entry_rcu(temp_opp, &dev_opp->opp_list, node) {
if (temp_opp->available) {
/* go to the next node, before choosing prev */
if (temp_opp->rate > *freq)
break;
else
opp = temp_opp;
}
}
if (!IS_ERR(opp))
*freq = opp->rate;
return opp;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor);
/**
* dev_pm_opp_add() - Add an OPP table entry from a table definition
* @dev: device for which we do this operation
* @freq: Frequency in Hz for this OPP
* @u_volt: Voltage in uVolts for this OPP
*
* This function adds an opp definition to the opp list and returns status.
* The opp is made available by default and it can be controlled using
* dev_pm_opp_enable/disable functions.
*
* Locking: The internal device_opp and opp structures are RCU protected.
* Hence this function internally uses RCU updater strategy with mutex locks
* to keep the integrity of the internal data structures. Callers should ensure
* that this function is *NOT* called under RCU protection or in contexts where
* mutex cannot be locked.
*
* Return:
* 0:       On success, or when the new OPP is a duplicate (same freq and
*          volt) of an existing OPP that is marked available
* -EEXIST: Freqs are the same but volts differ, or the new OPP duplicates
*          an existing OPP that is marked unavailable
* -ENOMEM: Memory allocation failure
*/
int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
{
struct device_opp *dev_opp = NULL;
struct dev_pm_opp *opp, *new_opp;
struct list_head *head;
/* allocate new OPP node */
new_opp = kzalloc(sizeof(*new_opp), GFP_KERNEL);
if (!new_opp) {
dev_warn(dev, "%s: Unable to create new OPP node\n", __func__);
return -ENOMEM;
}
/* Hold our list modification lock here */
mutex_lock(&dev_opp_list_lock);
/* Check for existing list for 'dev' */
dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp)) {
/*
* Allocate a new device OPP table. In the infrequent case
* where a new device is needed to be added, we pay this
* penalty.
*/
dev_opp = kzalloc(sizeof(struct device_opp), GFP_KERNEL);
if (!dev_opp) {
mutex_unlock(&dev_opp_list_lock);
kfree(new_opp);
dev_warn(dev,
"%s: Unable to create device OPP structure\n",
__func__);
return -ENOMEM;
}
dev_opp->dev = dev;
srcu_init_notifier_head(&dev_opp->head);
INIT_LIST_HEAD(&dev_opp->opp_list);
/* Secure the device list modification */
list_add_rcu(&dev_opp->node, &dev_opp_list);
}
/* populate the opp table */
new_opp->dev_opp = dev_opp;
new_opp->rate = freq;
new_opp->u_volt = u_volt;
new_opp->available = true;
/*
* Insert new OPP in order of increasing frequency
* and discard if already present
*/
head = &dev_opp->opp_list;
list_for_each_entry_rcu(opp, &dev_opp->opp_list, node) {
if (new_opp->rate <= opp->rate)
break;
else
head = &opp->node;
}
/*
* Duplicate OPPs? Only check when the walk above stopped at a real
* entry; if it ran off the end of the list, 'opp' points at the head.
*/
if (&opp->node != &dev_opp->opp_list && new_opp->rate == opp->rate) {
int ret = opp->available && new_opp->u_volt == opp->u_volt ?
0 : -EEXIST;
dev_warn(dev, "%s: duplicate OPPs detected. Existing: freq: %lu, volt: %lu, enabled: %d. New: freq: %lu, volt: %lu, enabled: %d\n",
__func__, opp->rate, opp->u_volt, opp->available,
new_opp->rate, new_opp->u_volt, new_opp->available);
mutex_unlock(&dev_opp_list_lock);
kfree(new_opp);
return ret;
}
list_add_rcu(&new_opp->node, head);
mutex_unlock(&dev_opp_list_lock);
/*
* Notify the changes in the availability of the operable
* frequency/voltage list.
*/
srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_ADD, new_opp);
return 0;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_add);
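/*
 * Example (hypothetical SoC init code, not part of this file): registering
 * a two-entry table with made-up frequencies and voltages:
 *
 *	dev_pm_opp_add(dev, 400000000, 1000000);	(400 MHz at 1.00 V)
 *	dev_pm_opp_add(dev, 800000000, 1100000);	(800 MHz at 1.10 V)
 *
 * The list is kept sorted by increasing frequency internally, so the calls
 * may be made in any order.
 */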
/**
* opp_set_availability() - helper to set the availability of an opp
* @dev: device for which we do this operation
* @freq: OPP frequency to modify availability
* @availability_req: availability status requested for this opp
*
* Set the availability of an OPP with an RCU operation, opp_{enable,disable}
* share a common logic which is isolated here.
*
* Returns -EINVAL for bad pointers, -ENOMEM if no memory available for the
* copy operation, returns 0 if no modification was done OR modification was
* successful.
*
* Locking: The internal device_opp and opp structures are RCU protected.
* Hence this function internally uses RCU updater strategy with mutex locks to
* keep the integrity of the internal data structures. Callers should ensure
* that this function is *NOT* called under RCU protection or in contexts where
* mutex locking or synchronize_rcu() blocking calls cannot be used.
*/
static int opp_set_availability(struct device *dev, unsigned long freq,
bool availability_req)
{
struct device_opp *tmp_dev_opp, *dev_opp = ERR_PTR(-ENODEV);
struct dev_pm_opp *new_opp, *tmp_opp, *opp = ERR_PTR(-ENODEV);
int r = 0;
/* keep the node allocated */
new_opp = kmalloc(sizeof(*new_opp), GFP_KERNEL);
if (!new_opp) {
dev_warn(dev, "%s: Unable to create OPP\n", __func__);
return -ENOMEM;
}
mutex_lock(&dev_opp_list_lock);
/* Find the device_opp */
list_for_each_entry(tmp_dev_opp, &dev_opp_list, node) {
if (dev == tmp_dev_opp->dev) {
dev_opp = tmp_dev_opp;
break;
}
}
if (IS_ERR(dev_opp)) {
r = PTR_ERR(dev_opp);
dev_warn(dev, "%s: Device OPP not found (%d)\n", __func__, r);
goto unlock;
}
/* Do we have the frequency? */
list_for_each_entry(tmp_opp, &dev_opp->opp_list, node) {
if (tmp_opp->rate == freq) {
opp = tmp_opp;
break;
}
}
if (IS_ERR(opp)) {
r = PTR_ERR(opp);
goto unlock;
}
/* Is update really needed? */
if (opp->available == availability_req)
goto unlock;
/* copy the old data over */
*new_opp = *opp;
/* plug in new node */
new_opp->available = availability_req;
list_replace_rcu(&opp->node, &new_opp->node);
mutex_unlock(&dev_opp_list_lock);
kfree_rcu(opp, head);
/* Notify the change of the OPP availability */
if (availability_req)
srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_ENABLE,
new_opp);
else
srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_DISABLE,
new_opp);
return 0;
unlock:
mutex_unlock(&dev_opp_list_lock);
kfree(new_opp);
return r;
}
/**
* dev_pm_opp_enable() - Enable a specific OPP
* @dev: device for which we do this operation
* @freq: OPP frequency to enable
*
* Enables a provided opp. If the operation is valid, this returns 0, else the
* corresponding error value. It is meant to be used to make an OPP available
* again after it was temporarily made unavailable with dev_pm_opp_disable.
*
* Locking: The internal device_opp and opp structures are RCU protected.
* Hence this function indirectly uses RCU and mutex locks to keep the
* integrity of the internal data structures. Callers should ensure that
* this function is *NOT* called under RCU protection or in contexts where
* mutex locking or synchronize_rcu() blocking calls cannot be used.
*/
int dev_pm_opp_enable(struct device *dev, unsigned long freq)
{
return opp_set_availability(dev, freq, true);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_enable);
/**
* dev_pm_opp_disable() - Disable a specific OPP
* @dev: device for which we do this operation
* @freq: OPP frequency to disable
*
* Disables a provided opp. If the operation is valid, this returns
* 0, else the corresponding error value. It is meant to be a temporary
* control by users to make this OPP not available until the circumstances are
* right to make it available again (with a call to dev_pm_opp_enable).
*
* Locking: The internal device_opp and opp structures are RCU protected.
* Hence this function indirectly uses RCU and mutex locks to keep the
* integrity of the internal data structures. Callers should ensure that
* this function is *NOT* called under RCU protection or in contexts where
* mutex locking or synchronize_rcu() blocking calls cannot be used.
*/
int dev_pm_opp_disable(struct device *dev, unsigned long freq)
{
return opp_set_availability(dev, freq, false);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_disable);
/**
* dev_pm_opp_get_notifier() - find notifier_head of the device with opp
* @dev: device pointer used to lookup device OPPs.
*/
struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
{
struct device_opp *dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp))
return ERR_CAST(dev_opp); /* matching type */
return &dev_opp->head;
}
#ifdef CONFIG_OF
/**
* of_init_opp_table() - Initialize opp table from device tree
* @dev: device pointer used to lookup device OPPs.
*
* Register the initial OPP table with the OPP library for given device.
*/
int of_init_opp_table(struct device *dev)
{
const struct property *prop;
const __be32 *val;
int nr;
prop = of_find_property(dev->of_node, "operating-points", NULL);
if (!prop)
return -ENODEV;
if (!prop->value)
return -ENODATA;
/*
* Each OPP is a set of tuples consisting of frequency and
* voltage like <freq-kHz vol-uV>.
*/
nr = prop->length / sizeof(u32);
if (nr % 2) {
dev_err(dev, "%s: Invalid OPP list\n", __func__);
return -EINVAL;
}
val = prop->value;
while (nr) {
unsigned long freq = be32_to_cpup(val++) * 1000;
unsigned long volt = be32_to_cpup(val++);
if (dev_pm_opp_add(dev, freq, volt))
dev_warn(dev, "%s: Failed to add OPP %ld\n",
__func__, freq);
nr -= 2;
}
return 0;
}
EXPORT_SYMBOL_GPL(of_init_opp_table);
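/*
 * Example (hypothetical device tree fragment, not part of this file): the
 * property parsed above is a flat list of <frequency-kHz voltage-uV> pairs:
 *
 *	cpu@0 {
 *		operating-points = <
 *			792000 1050000
 *			998400 1100000
 *		>;
 *	};
 *
 * A driver then calls of_init_opp_table(dev) once to populate its table.
 */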
#endif

109
drivers/base/power/power.h Normal file
View file

@@ -0,0 +1,109 @@
#include <linux/pm_qos.h>
static inline void device_pm_init_common(struct device *dev)
{
if (!dev->power.early_init) {
spin_lock_init(&dev->power.lock);
dev->power.qos = NULL;
dev->power.early_init = true;
}
}
#ifdef CONFIG_PM_RUNTIME
static inline void pm_runtime_early_init(struct device *dev)
{
dev->power.disable_depth = 1;
device_pm_init_common(dev);
}
extern void pm_runtime_init(struct device *dev);
extern void pm_runtime_remove(struct device *dev);
#else /* !CONFIG_PM_RUNTIME */
static inline void pm_runtime_early_init(struct device *dev)
{
device_pm_init_common(dev);
}
static inline void pm_runtime_init(struct device *dev) {}
static inline void pm_runtime_remove(struct device *dev) {}
#endif /* !CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM_SLEEP
/* kernel/power/main.c */
extern int pm_async_enabled;
/* drivers/base/power/main.c */
extern struct list_head dpm_list; /* The active device list */
static inline struct device *to_device(struct list_head *entry)
{
return container_of(entry, struct device, power.entry);
}
extern void device_pm_sleep_init(struct device *dev);
extern void device_pm_add(struct device *);
extern void device_pm_remove(struct device *);
extern void device_pm_move_before(struct device *, struct device *);
extern void device_pm_move_after(struct device *, struct device *);
extern void device_pm_move_last(struct device *);
extern void device_pm_move_first(struct device *);
#else /* !CONFIG_PM_SLEEP */
static inline void device_pm_sleep_init(struct device *dev) {}
static inline void device_pm_add(struct device *dev) {}
static inline void device_pm_remove(struct device *dev)
{
pm_runtime_remove(dev);
}
static inline void device_pm_move_before(struct device *deva,
struct device *devb) {}
static inline void device_pm_move_after(struct device *deva,
struct device *devb) {}
static inline void device_pm_move_last(struct device *dev) {}
static inline void device_pm_move_first(struct device *dev) {}
#endif /* !CONFIG_PM_SLEEP */
static inline void device_pm_init(struct device *dev)
{
device_pm_init_common(dev);
device_pm_sleep_init(dev);
pm_runtime_init(dev);
}
#ifdef CONFIG_PM
/*
* sysfs.c
*/
extern int dpm_sysfs_add(struct device *dev);
extern void dpm_sysfs_remove(struct device *dev);
extern void rpm_sysfs_remove(struct device *dev);
extern int wakeup_sysfs_add(struct device *dev);
extern void wakeup_sysfs_remove(struct device *dev);
extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
extern int pm_qos_sysfs_add_flags(struct device *dev);
extern void pm_qos_sysfs_remove_flags(struct device *dev);
#else /* CONFIG_PM */
static inline int dpm_sysfs_add(struct device *dev) { return 0; }
static inline void dpm_sysfs_remove(struct device *dev) {}
static inline void rpm_sysfs_remove(struct device *dev) {}
static inline int wakeup_sysfs_add(struct device *dev) { return 0; }
static inline void wakeup_sysfs_remove(struct device *dev) {}
static inline int pm_qos_sysfs_add_resume_latency(struct device *dev) { return 0; }
static inline void pm_qos_sysfs_remove_resume_latency(struct device *dev) {}
static inline int pm_qos_sysfs_add_flags(struct device *dev) { return 0; }
static inline void pm_qos_sysfs_remove_flags(struct device *dev) {}
#endif

886
drivers/base/power/qos.c Normal file
View file

@@ -0,0 +1,886 @@
/*
* Devices PM QoS constraints management
*
* Copyright (C) 2011 Texas Instruments, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*
* This module exposes the interface to kernel space for specifying
* per-device PM QoS dependencies. It provides infrastructure for registration
* of:
*
* Dependents on a QoS value : register requests
* Watchers of QoS value : get notified when target QoS value changes
*
* This QoS design is best effort based. Dependents register their QoS needs.
* Watchers register to keep track of the current QoS needs of the system.
* Watchers can register different types of notification callbacks:
* . a per-device notification callback using the dev_pm_qos_*_notifier API.
* The notification chain data is stored in the per-device constraint
* data struct.
* . a system-wide notification callback using the dev_pm_qos_*_global_notifier
* API. The notification chain data is stored in a static variable.
*
* Note about the per-device constraint data struct allocation:
* . The per-device constraints data struct ptr is stored into the device
* dev_pm_info.
* . To minimize the data usage by the per-device constraints, the data struct
* is only allocated at the first call to dev_pm_qos_add_request.
* . The data is later free'd when the device is removed from the system.
* . A global mutex protects the constraints users from the data being
* allocated and free'd.
*/
#include <linux/pm_qos.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/export.h>
#include <linux/pm_runtime.h>
#include <linux/err.h>
#include <trace/events/power.h>
#include "power.h"
static DEFINE_MUTEX(dev_pm_qos_mtx);
static DEFINE_MUTEX(dev_pm_qos_sysfs_mtx);
static BLOCKING_NOTIFIER_HEAD(dev_pm_notifiers);
/**
* __dev_pm_qos_flags - Check PM QoS flags for a given device.
* @dev: Device to check the PM QoS flags for.
* @mask: Flags to check against.
*
* This routine must be called with dev->power.lock held.
*/
enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask)
{
struct dev_pm_qos *qos = dev->power.qos;
struct pm_qos_flags *pqf;
s32 val;
if (IS_ERR_OR_NULL(qos))
return PM_QOS_FLAGS_UNDEFINED;
pqf = &qos->flags;
if (list_empty(&pqf->list))
return PM_QOS_FLAGS_UNDEFINED;
val = pqf->effective_flags & mask;
if (val)
return (val == mask) ? PM_QOS_FLAGS_ALL : PM_QOS_FLAGS_SOME;
return PM_QOS_FLAGS_NONE;
}
/**
* dev_pm_qos_flags - Check PM QoS flags for a given device (locked).
* @dev: Device to check the PM QoS flags for.
* @mask: Flags to check against.
*/
enum pm_qos_flags_status dev_pm_qos_flags(struct device *dev, s32 mask)
{
unsigned long irqflags;
enum pm_qos_flags_status ret;
spin_lock_irqsave(&dev->power.lock, irqflags);
ret = __dev_pm_qos_flags(dev, mask);
spin_unlock_irqrestore(&dev->power.lock, irqflags);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_flags);
/**
* __dev_pm_qos_read_value - Get PM QoS constraint for a given device.
* @dev: Device to get the PM QoS constraint value for.
*
* This routine must be called with dev->power.lock held.
*/
s32 __dev_pm_qos_read_value(struct device *dev)
{
return IS_ERR_OR_NULL(dev->power.qos) ?
0 : pm_qos_read_value(&dev->power.qos->resume_latency);
}
/**
* dev_pm_qos_read_value - Get PM QoS constraint for a given device (locked).
* @dev: Device to get the PM QoS constraint value for.
*/
s32 dev_pm_qos_read_value(struct device *dev)
{
unsigned long flags;
s32 ret;
spin_lock_irqsave(&dev->power.lock, flags);
ret = __dev_pm_qos_read_value(dev);
spin_unlock_irqrestore(&dev->power.lock, flags);
return ret;
}
/**
* apply_constraint - Add/modify/remove device PM QoS request.
* @req: Constraint request to apply
* @action: Action to perform (add/update/remove).
* @value: Value to assign to the QoS request.
*
* Internal function to update the constraints list using the PM QoS core
* code and if needed call the per-device and the global notification
* callbacks
*/
static int apply_constraint(struct dev_pm_qos_request *req,
enum pm_qos_req_action action, s32 value)
{
struct dev_pm_qos *qos = req->dev->power.qos;
int ret;
switch (req->type) {
case DEV_PM_QOS_RESUME_LATENCY:
ret = pm_qos_update_target(&qos->resume_latency,
&req->data.pnode, action, value);
if (ret) {
value = pm_qos_read_value(&qos->resume_latency);
blocking_notifier_call_chain(&dev_pm_notifiers,
(unsigned long)value,
req);
}
break;
case DEV_PM_QOS_LATENCY_TOLERANCE:
ret = pm_qos_update_target(&qos->latency_tolerance,
&req->data.pnode, action, value);
if (ret) {
value = pm_qos_read_value(&qos->latency_tolerance);
req->dev->power.set_latency_tolerance(req->dev, value);
}
break;
case DEV_PM_QOS_FLAGS:
ret = pm_qos_update_flags(&qos->flags, &req->data.flr,
action, value);
break;
default:
ret = -EINVAL;
}
return ret;
}
/*
* dev_pm_qos_constraints_allocate
* @dev: device to allocate data for
*
* Called at the first call to add_request, for constraint data allocation
* Must be called with the dev_pm_qos_mtx mutex held
*/
static int dev_pm_qos_constraints_allocate(struct device *dev)
{
struct dev_pm_qos *qos;
struct pm_qos_constraints *c;
struct blocking_notifier_head *n;
qos = kzalloc(sizeof(*qos), GFP_KERNEL);
if (!qos)
return -ENOMEM;
n = kzalloc(sizeof(*n), GFP_KERNEL);
if (!n) {
kfree(qos);
return -ENOMEM;
}
BLOCKING_INIT_NOTIFIER_HEAD(n);
c = &qos->resume_latency;
plist_head_init(&c->list);
c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
c->type = PM_QOS_MIN;
c->notifiers = n;
c = &qos->latency_tolerance;
plist_head_init(&c->list);
c->target_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
c->default_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
c->no_constraint_value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
c->type = PM_QOS_MIN;
INIT_LIST_HEAD(&qos->flags.list);
spin_lock_irq(&dev->power.lock);
dev->power.qos = qos;
spin_unlock_irq(&dev->power.lock);
return 0;
}
static void __dev_pm_qos_hide_latency_limit(struct device *dev);
static void __dev_pm_qos_hide_flags(struct device *dev);
/**
* dev_pm_qos_constraints_destroy
* @dev: target device
*
* Called from the device PM subsystem on device removal under device_pm_lock().
*/
void dev_pm_qos_constraints_destroy(struct device *dev)
{
struct dev_pm_qos *qos;
struct dev_pm_qos_request *req, *tmp;
struct pm_qos_constraints *c;
struct pm_qos_flags *f;
mutex_lock(&dev_pm_qos_sysfs_mtx);
/*
* If the device's PM QoS resume latency limit or PM QoS flags have been
* exposed to user space, they have to be hidden at this point.
*/
pm_qos_sysfs_remove_resume_latency(dev);
pm_qos_sysfs_remove_flags(dev);
mutex_lock(&dev_pm_qos_mtx);
__dev_pm_qos_hide_latency_limit(dev);
__dev_pm_qos_hide_flags(dev);
qos = dev->power.qos;
if (!qos)
goto out;
/* Flush the constraints lists for the device. */
c = &qos->resume_latency;
plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
/*
* Update constraints list and call the notification
* callbacks if needed
*/
apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
memset(req, 0, sizeof(*req));
}
c = &qos->latency_tolerance;
plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
memset(req, 0, sizeof(*req));
}
f = &qos->flags;
list_for_each_entry_safe(req, tmp, &f->list, data.flr.node) {
apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
memset(req, 0, sizeof(*req));
}
spin_lock_irq(&dev->power.lock);
dev->power.qos = ERR_PTR(-ENODEV);
spin_unlock_irq(&dev->power.lock);
/* Only the resume latency constraint set has a notifier head. */
kfree(qos->resume_latency.notifiers);
kfree(qos);
out:
mutex_unlock(&dev_pm_qos_mtx);
mutex_unlock(&dev_pm_qos_sysfs_mtx);
}
static bool dev_pm_qos_invalid_request(struct device *dev,
struct dev_pm_qos_request *req)
{
return !req || (req->type == DEV_PM_QOS_LATENCY_TOLERANCE
&& !dev->power.set_latency_tolerance);
}
static int __dev_pm_qos_add_request(struct device *dev,
struct dev_pm_qos_request *req,
enum dev_pm_qos_req_type type, s32 value)
{
int ret = 0;
if (!dev || dev_pm_qos_invalid_request(dev, req))
return -EINVAL;
if (WARN(dev_pm_qos_request_active(req),
"%s() called for already added request\n", __func__))
return -EINVAL;
if (IS_ERR(dev->power.qos))
ret = -ENODEV;
else if (!dev->power.qos)
ret = dev_pm_qos_constraints_allocate(dev);
trace_dev_pm_qos_add_request(dev_name(dev), type, value);
if (!ret) {
req->dev = dev;
req->type = type;
ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
}
return ret;
}
/**
* dev_pm_qos_add_request - inserts new qos request into the list
* @dev: target device for the constraint
* @req: pointer to a preallocated handle
* @type: type of the request
* @value: defines the qos request
*
* This function inserts a new entry in the device constraints list of
* requested qos performance characteristics. It recomputes the aggregate
* QoS expectations of parameters and initializes the dev_pm_qos_request
* handle. Caller needs to save this handle for later use in updates and
* removal.
*
* Returns 1 if the aggregated constraint value has changed,
* 0 if the aggregated constraint value has not changed,
* -EINVAL in case of wrong parameters, -ENOMEM if there's not enough memory
* to allocate for data structures, -ENODEV if the device has just been removed
* from the system.
*
* Callers should ensure that the target device is not RPM_SUSPENDED before
* using this function for requests of type DEV_PM_QOS_FLAGS.
*/
int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
enum dev_pm_qos_req_type type, s32 value)
{
int ret;
mutex_lock(&dev_pm_qos_mtx);
ret = __dev_pm_qos_add_request(dev, req, type, value);
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_request);
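/*
 * Example (hypothetical driver code, not part of this file): requesting
 * that runtime resume of a device take no more than 100 us:
 *
 *	static struct dev_pm_qos_request my_req;
 *
 *	ret = dev_pm_qos_add_request(dev, &my_req,
 *				     DEV_PM_QOS_RESUME_LATENCY, 100);
 *
 * The constraint can later be relaxed with
 * dev_pm_qos_update_request(&my_req, 500) or dropped entirely with
 * dev_pm_qos_remove_request(&my_req).
 */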
/**
* __dev_pm_qos_update_request - Modify an existing device PM QoS request.
* @req : PM QoS request to modify.
* @new_value: New value to request.
*/
static int __dev_pm_qos_update_request(struct dev_pm_qos_request *req,
s32 new_value)
{
s32 curr_value;
int ret = 0;
if (!req) /* guard against callers passing in NULL */
return -EINVAL;
if (WARN(!dev_pm_qos_request_active(req),
"%s() called for unknown object\n", __func__))
return -EINVAL;
if (IS_ERR_OR_NULL(req->dev->power.qos))
return -ENODEV;
switch (req->type) {
case DEV_PM_QOS_RESUME_LATENCY:
case DEV_PM_QOS_LATENCY_TOLERANCE:
curr_value = req->data.pnode.prio;
break;
case DEV_PM_QOS_FLAGS:
curr_value = req->data.flr.flags;
break;
default:
return -EINVAL;
}
trace_dev_pm_qos_update_request(dev_name(req->dev), req->type,
new_value);
if (curr_value != new_value)
ret = apply_constraint(req, PM_QOS_UPDATE_REQ, new_value);
return ret;
}
/**
* dev_pm_qos_update_request - modifies an existing qos request
* @req : handle to list element holding a dev_pm_qos request to use
* @new_value: defines the qos request
*
* Updates an existing dev PM qos request along with updating the
* target value.
*
* Attempts are made to make this code callable on hot code paths.
*
* Returns 1 if the aggregated constraint value has changed,
* 0 if the aggregated constraint value has not changed,
* -EINVAL in case of wrong parameters, -ENODEV if the device has been
* removed from the system
*
* Callers should ensure that the target device is not RPM_SUSPENDED before
* using this function for requests of type DEV_PM_QOS_FLAGS.
*/
int dev_pm_qos_update_request(struct dev_pm_qos_request *req, s32 new_value)
{
int ret;
mutex_lock(&dev_pm_qos_mtx);
ret = __dev_pm_qos_update_request(req, new_value);
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_update_request);
static int __dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
{
int ret;
if (!req) /* guard against callers passing in NULL */
return -EINVAL;
if (WARN(!dev_pm_qos_request_active(req),
"%s() called for unknown object\n", __func__))
return -EINVAL;
if (IS_ERR_OR_NULL(req->dev->power.qos))
return -ENODEV;
trace_dev_pm_qos_remove_request(dev_name(req->dev), req->type,
PM_QOS_DEFAULT_VALUE);
ret = apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
memset(req, 0, sizeof(*req));
return ret;
}
/**
* dev_pm_qos_remove_request - modifies an existing qos request
* @req: handle to request list element
*
* Will remove pm qos request from the list of constraints and
* recompute the current target value. Call this on slow code paths.
*
* Returns 1 if the aggregated constraint value has changed,
* 0 if the aggregated constraint value has not changed,
* -EINVAL in case of wrong parameters, -ENODEV if the device has been
* removed from the system
*
* Callers should ensure that the target device is not RPM_SUSPENDED before
* using this function for requests of type DEV_PM_QOS_FLAGS.
*/
int dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
{
int ret;
mutex_lock(&dev_pm_qos_mtx);
ret = __dev_pm_qos_remove_request(req);
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_remove_request);
/**
* dev_pm_qos_add_notifier - sets notification entry for changes to target value
* of per-device PM QoS constraints
*
* @dev: target device for the constraint
* @notifier: notifier block managed by caller.
*
* Will register the notifier into a notification chain that gets called
* upon changes to the target value for the device.
*
* If the device's constraints object doesn't exist when this routine is called,
* it will be created (or error code will be returned if that fails).
*/
int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier)
{
int ret = 0;
mutex_lock(&dev_pm_qos_mtx);
if (IS_ERR(dev->power.qos))
ret = -ENODEV;
else if (!dev->power.qos)
ret = dev_pm_qos_constraints_allocate(dev);
if (!ret)
ret = blocking_notifier_chain_register(dev->power.qos->resume_latency.notifiers,
notifier);
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_notifier);
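/*
 * Example (hypothetical driver code, not part of this file): watching the
 * aggregated resume latency target of a device:
 *
 *	static int my_qos_notify(struct notifier_block *nb,
 *				 unsigned long value, void *data)
 *	{
 *		pr_info("new resume latency target: %lu us\n", value);
 *		return NOTIFY_OK;
 *	}
 *
 *	static struct notifier_block my_nb = {
 *		.notifier_call = my_qos_notify,
 *	};
 *
 *	dev_pm_qos_add_notifier(dev, &my_nb);
 */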
/**
* dev_pm_qos_remove_notifier - deletes notification for changes to target value
* of per-device PM QoS constraints
*
* @dev: target device for the constraint
* @notifier: notifier block to be removed.
*
* Will remove the notifier from the notification chain that gets called
* upon changes to the target value.
*/
int dev_pm_qos_remove_notifier(struct device *dev,
struct notifier_block *notifier)
{
int retval = 0;
mutex_lock(&dev_pm_qos_mtx);
/* Silently return if the constraints object is not present. */
if (!IS_ERR_OR_NULL(dev->power.qos))
retval = blocking_notifier_chain_unregister(dev->power.qos->resume_latency.notifiers,
notifier);
mutex_unlock(&dev_pm_qos_mtx);
return retval;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_remove_notifier);
/**
* dev_pm_qos_add_global_notifier - sets notification entry for changes to
* target value of the PM QoS constraints for any device
*
* @notifier: notifier block managed by caller.
*
* Will register the notifier into a notification chain that gets called
* upon changes to the target value for any device.
*/
int dev_pm_qos_add_global_notifier(struct notifier_block *notifier)
{
return blocking_notifier_chain_register(&dev_pm_notifiers, notifier);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_global_notifier);
/**
* dev_pm_qos_remove_global_notifier - deletes notification for changes to
* target value of PM QoS constraints for any device
*
* @notifier: notifier block to be removed.
*
* Will remove the notifier from the notification chain that gets called
* upon changes to the target value for any device.
*/
int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier)
{
return blocking_notifier_chain_unregister(&dev_pm_notifiers, notifier);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_remove_global_notifier);
/**
* dev_pm_qos_add_ancestor_request - Add PM QoS request for device's ancestor.
* @dev: Device whose ancestor to add the request for.
* @req: Pointer to the preallocated handle.
* @type: Type of the request.
* @value: Constraint latency value.
*/
int dev_pm_qos_add_ancestor_request(struct device *dev,
struct dev_pm_qos_request *req,
enum dev_pm_qos_req_type type, s32 value)
{
struct device *ancestor = dev->parent;
int ret = -ENODEV;
switch (type) {
case DEV_PM_QOS_RESUME_LATENCY:
while (ancestor && !ancestor->power.ignore_children)
ancestor = ancestor->parent;
break;
case DEV_PM_QOS_LATENCY_TOLERANCE:
while (ancestor && !ancestor->power.set_latency_tolerance)
ancestor = ancestor->parent;
break;
default:
ancestor = NULL;
}
if (ancestor)
ret = dev_pm_qos_add_request(ancestor, req, type, value);
if (ret < 0)
req->dev = NULL;
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_ancestor_request);
#ifdef CONFIG_PM_RUNTIME
static void __dev_pm_qos_drop_user_request(struct device *dev,
enum dev_pm_qos_req_type type)
{
struct dev_pm_qos_request *req = NULL;
switch (type) {
case DEV_PM_QOS_RESUME_LATENCY:
req = dev->power.qos->resume_latency_req;
dev->power.qos->resume_latency_req = NULL;
break;
case DEV_PM_QOS_LATENCY_TOLERANCE:
req = dev->power.qos->latency_tolerance_req;
dev->power.qos->latency_tolerance_req = NULL;
break;
case DEV_PM_QOS_FLAGS:
req = dev->power.qos->flags_req;
dev->power.qos->flags_req = NULL;
break;
}
__dev_pm_qos_remove_request(req);
kfree(req);
}
static void dev_pm_qos_drop_user_request(struct device *dev,
enum dev_pm_qos_req_type type)
{
mutex_lock(&dev_pm_qos_mtx);
__dev_pm_qos_drop_user_request(dev, type);
mutex_unlock(&dev_pm_qos_mtx);
}
/**
* dev_pm_qos_expose_latency_limit - Expose PM QoS latency limit to user space.
* @dev: Device whose PM QoS latency limit is to be exposed to user space.
* @value: Initial value of the latency limit.
*/
int dev_pm_qos_expose_latency_limit(struct device *dev, s32 value)
{
struct dev_pm_qos_request *req;
int ret;
if (!device_is_registered(dev) || value < 0)
return -EINVAL;
req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req)
return -ENOMEM;
ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_RESUME_LATENCY, value);
if (ret < 0) {
kfree(req);
return ret;
}
mutex_lock(&dev_pm_qos_sysfs_mtx);
mutex_lock(&dev_pm_qos_mtx);
if (IS_ERR_OR_NULL(dev->power.qos))
ret = -ENODEV;
else if (dev->power.qos->resume_latency_req)
ret = -EEXIST;
if (ret < 0) {
__dev_pm_qos_remove_request(req);
kfree(req);
mutex_unlock(&dev_pm_qos_mtx);
goto out;
}
dev->power.qos->resume_latency_req = req;
mutex_unlock(&dev_pm_qos_mtx);
ret = pm_qos_sysfs_add_resume_latency(dev);
if (ret)
dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);
out:
mutex_unlock(&dev_pm_qos_sysfs_mtx);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_expose_latency_limit);
static void __dev_pm_qos_hide_latency_limit(struct device *dev)
{
if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->resume_latency_req)
__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);
}
/**
* dev_pm_qos_hide_latency_limit - Hide PM QoS latency limit from user space.
* @dev: Device whose PM QoS latency limit is to be hidden from user space.
*/
void dev_pm_qos_hide_latency_limit(struct device *dev)
{
mutex_lock(&dev_pm_qos_sysfs_mtx);
pm_qos_sysfs_remove_resume_latency(dev);
mutex_lock(&dev_pm_qos_mtx);
__dev_pm_qos_hide_latency_limit(dev);
mutex_unlock(&dev_pm_qos_mtx);
mutex_unlock(&dev_pm_qos_sysfs_mtx);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_limit);
/**
* dev_pm_qos_expose_flags - Expose PM QoS flags of a device to user space.
* @dev: Device whose PM QoS flags are to be exposed to user space.
* @val: Initial values of the flags.
*/
int dev_pm_qos_expose_flags(struct device *dev, s32 val)
{
struct dev_pm_qos_request *req;
int ret;
if (!device_is_registered(dev))
return -EINVAL;
req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req)
return -ENOMEM;
ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_FLAGS, val);
if (ret < 0) {
kfree(req);
return ret;
}
pm_runtime_get_sync(dev);
mutex_lock(&dev_pm_qos_sysfs_mtx);
mutex_lock(&dev_pm_qos_mtx);
if (IS_ERR_OR_NULL(dev->power.qos))
ret = -ENODEV;
else if (dev->power.qos->flags_req)
ret = -EEXIST;
if (ret < 0) {
__dev_pm_qos_remove_request(req);
kfree(req);
mutex_unlock(&dev_pm_qos_mtx);
goto out;
}
dev->power.qos->flags_req = req;
mutex_unlock(&dev_pm_qos_mtx);
ret = pm_qos_sysfs_add_flags(dev);
if (ret)
dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS);
out:
mutex_unlock(&dev_pm_qos_sysfs_mtx);
pm_runtime_put(dev);
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_expose_flags);
static void __dev_pm_qos_hide_flags(struct device *dev)
{
if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->flags_req)
__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS);
}
/**
* dev_pm_qos_hide_flags - Hide PM QoS flags of a device from user space.
* @dev: Device whose PM QoS flags are to be hidden from user space.
*/
void dev_pm_qos_hide_flags(struct device *dev)
{
pm_runtime_get_sync(dev);
mutex_lock(&dev_pm_qos_sysfs_mtx);
pm_qos_sysfs_remove_flags(dev);
mutex_lock(&dev_pm_qos_mtx);
__dev_pm_qos_hide_flags(dev);
mutex_unlock(&dev_pm_qos_mtx);
mutex_unlock(&dev_pm_qos_sysfs_mtx);
pm_runtime_put(dev);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_hide_flags);
/**
* dev_pm_qos_update_flags - Update PM QoS flags request owned by user space.
* @dev: Device to update the PM QoS flags request for.
* @mask: Flags to set/clear.
* @set: Whether to set or clear the flags (true means set).
*/
int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set)
{
s32 value;
int ret;
pm_runtime_get_sync(dev);
mutex_lock(&dev_pm_qos_mtx);
if (IS_ERR_OR_NULL(dev->power.qos) || !dev->power.qos->flags_req) {
ret = -EINVAL;
goto out;
}
value = dev_pm_qos_requested_flags(dev);
if (set)
value |= mask;
else
value &= ~mask;
ret = __dev_pm_qos_update_request(dev->power.qos->flags_req, value);
out:
mutex_unlock(&dev_pm_qos_mtx);
pm_runtime_put(dev);
return ret;
}
/**
* dev_pm_qos_get_user_latency_tolerance - Get user space latency tolerance.
* @dev: Device to obtain the user space latency tolerance for.
*/
s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
{
s32 ret;
mutex_lock(&dev_pm_qos_mtx);
ret = IS_ERR_OR_NULL(dev->power.qos)
|| !dev->power.qos->latency_tolerance_req ?
PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT :
dev->power.qos->latency_tolerance_req->data.pnode.prio;
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
/**
* dev_pm_qos_update_user_latency_tolerance - Update user space latency tolerance.
* @dev: Device to update the user space latency tolerance for.
* @val: New user space latency tolerance for @dev (negative values disable).
*/
int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
{
int ret;
mutex_lock(&dev_pm_qos_mtx);
if (IS_ERR_OR_NULL(dev->power.qos)
|| !dev->power.qos->latency_tolerance_req) {
struct dev_pm_qos_request *req;
if (val < 0) {
ret = -EINVAL;
goto out;
}
req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req) {
ret = -ENOMEM;
goto out;
}
ret = __dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY_TOLERANCE, val);
if (ret < 0) {
kfree(req);
goto out;
}
dev->power.qos->latency_tolerance_req = req;
} else {
if (val < 0) {
__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE);
ret = 0;
} else {
ret = __dev_pm_qos_update_request(dev->power.qos->latency_tolerance_req, val);
}
}
out:
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
#else /* !CONFIG_PM_RUNTIME */
static void __dev_pm_qos_hide_latency_limit(struct device *dev) {}
static void __dev_pm_qos_hide_flags(struct device *dev) {}
#endif /* CONFIG_PM_RUNTIME */

1487
drivers/base/power/runtime.c Normal file

File diff suppressed because it is too large

770
drivers/base/power/sysfs.c Normal file
View file

@@ -0,0 +1,770 @@
/*
* drivers/base/power/sysfs.c - sysfs entries for device PM
*/
#include <linux/device.h>
#include <linux/string.h>
#include <linux/export.h>
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
#include <linux/atomic.h>
#include <linux/jiffies.h>
#include "power.h"
/*
* control - Report/change current runtime PM setting of the device
*
* Runtime power management of a device can be blocked with the help of
* this attribute. All devices have one of the following two values for
* the power/control file:
*
* + "auto\n" to allow the device to be power managed at run time;
* + "on\n" to prevent the device from being power managed at run time;
*
* The default for all devices is "auto", which means that devices may be
* subject to automatic power management, depending on their drivers.
* Changing this attribute to "on" prevents the driver from power managing
* the device at run time. Doing that while the device is suspended causes
* it to be woken up.
*
* wakeup - Report/change current wakeup option for device
*
* Some devices support "wakeup" events, which are hardware signals
* used to activate devices from suspended or low power states. Such
* devices have one of three values for the sysfs power/wakeup file:
*
* + "enabled\n" to issue the events;
* + "disabled\n" not to do so; or
* + "\n" for temporary or permanent inability to issue wakeup.
*
* (For example, unconfigured USB devices can't issue wakeups.)
*
* Familiar examples of devices that can issue wakeup events include
* keyboards and mice (both PS2 and USB styles), power buttons, modems,
* "Wake-On-LAN" Ethernet links, GPIO lines, and more. Some events
* will wake the entire system from a suspend state; others may just
* wake up the device (if the system as a whole is already active).
* Some wakeup events use normal IRQ lines; others use special out
* of band signaling.
*
* It is the responsibility of device drivers to enable (or disable)
* wakeup signaling as part of changing device power states, respecting
* the policy choices provided through the driver model.
*
* Devices may not be able to generate wakeup events from all power
* states. Also, the events may be ignored in some configurations;
* for example, they might need help from other devices that aren't
* active, or which may have wakeup disabled. Some drivers rely on
* wakeup events internally (unless they are disabled), keeping
* their hardware in low power modes whenever they're unused. This
* saves runtime power, without requiring system-wide sleep states.
*
* async - Report/change current async suspend setting for the device
*
* Asynchronous suspend and resume of the device during system-wide power
* state transitions can be enabled by writing "enabled" to this file.
* Analogously, if "disabled" is written to this file, the device will be
* suspended and resumed synchronously.
*
* All devices have one of the following two values for power/async:
*
* + "enabled\n" to permit the asynchronous suspend/resume of the device;
* + "disabled\n" to forbid it;
*
* NOTE: It generally is unsafe to permit the asynchronous suspend/resume
* of a device unless it is certain that all of the PM dependencies of the
* device are known to the PM core. However, for some devices this
* attribute is set to "enabled" by bus type code or device drivers and in
* those cases it should be safe to leave the default value.
*
* autosuspend_delay_ms - Report/change a device's autosuspend_delay value
*
* Some drivers don't want to carry out a runtime suspend as soon as a
* device becomes idle; they want it always to remain idle for some period
* of time before suspending it. This period is the autosuspend_delay
* value (expressed in milliseconds) and it can be controlled by the user.
* If the value is negative then the device will never be runtime
* suspended.
*
* NOTE: The autosuspend_delay_ms attribute and the autosuspend_delay
* value are used only if the driver calls pm_runtime_use_autosuspend().
*
* wakeup_count - Report the number of wakeup events related to the device
*/
const char power_group_name[] = "power";
EXPORT_SYMBOL_GPL(power_group_name);
#ifdef CONFIG_PM_RUNTIME
static const char ctrl_auto[] = "auto";
static const char ctrl_on[] = "on";
static ssize_t control_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%s\n",
dev->power.runtime_auto ? ctrl_auto : ctrl_on);
}
static ssize_t control_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t n)
{
char *cp;
int len = n;
cp = memchr(buf, '\n', n);
if (cp)
len = cp - buf;
device_lock(dev);
if (len == sizeof ctrl_auto - 1 && strncmp(buf, ctrl_auto, len) == 0)
pm_runtime_allow(dev);
else if (len == sizeof ctrl_on - 1 && strncmp(buf, ctrl_on, len) == 0)
pm_runtime_forbid(dev);
else
n = -EINVAL;
device_unlock(dev);
return n;
}
static DEVICE_ATTR(control, 0644, control_show, control_store);
static ssize_t rtpm_active_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int ret;
spin_lock_irq(&dev->power.lock);
update_pm_runtime_accounting(dev);
ret = sprintf(buf, "%i\n", jiffies_to_msecs(dev->power.active_jiffies));
spin_unlock_irq(&dev->power.lock);
return ret;
}
static DEVICE_ATTR(runtime_active_time, 0444, rtpm_active_time_show, NULL);
static ssize_t rtpm_suspended_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int ret;
spin_lock_irq(&dev->power.lock);
update_pm_runtime_accounting(dev);
ret = sprintf(buf, "%i\n",
jiffies_to_msecs(dev->power.suspended_jiffies));
spin_unlock_irq(&dev->power.lock);
return ret;
}
static DEVICE_ATTR(runtime_suspended_time, 0444, rtpm_suspended_time_show, NULL);
static ssize_t rtpm_status_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
const char *p;
if (dev->power.runtime_error) {
p = "error\n";
} else if (dev->power.disable_depth) {
p = "unsupported\n";
} else {
switch (dev->power.runtime_status) {
case RPM_SUSPENDED:
p = "suspended\n";
break;
case RPM_SUSPENDING:
p = "suspending\n";
break;
case RPM_RESUMING:
p = "resuming\n";
break;
case RPM_ACTIVE:
p = "active\n";
break;
default:
return -EIO;
}
}
return sprintf(buf, "%s", p);
}
static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL);
static ssize_t autosuspend_delay_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
if (!dev->power.use_autosuspend)
return -EIO;
return sprintf(buf, "%d\n", dev->power.autosuspend_delay);
}
static ssize_t autosuspend_delay_ms_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t n)
{
long delay;
if (!dev->power.use_autosuspend)
return -EIO;
if (kstrtol(buf, 10, &delay) != 0 || delay != (int) delay)
return -EINVAL;
device_lock(dev);
pm_runtime_set_autosuspend_delay(dev, delay);
device_unlock(dev);
return n;
}
static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show,
autosuspend_delay_ms_store);
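/*
 * Example (illustrative): autosuspend_delay_ms only takes effect once a
 * driver opts in to autosuspend, typically from its probe() path:
 *
 *	pm_runtime_set_autosuspend_delay(dev, 2000);	// 2 s idle window
 *	pm_runtime_use_autosuspend(dev);
 *	pm_runtime_enable(dev);
 */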
static ssize_t pm_qos_resume_latency_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev));
}
static ssize_t pm_qos_resume_latency_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t n)
{
s32 value;
int ret;
if (kstrtos32(buf, 0, &value))
return -EINVAL;
if (value < 0)
return -EINVAL;
ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req,
value);
return ret < 0 ? ret : n;
}
static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
pm_qos_resume_latency_show, pm_qos_resume_latency_store);
static ssize_t pm_qos_latency_tolerance_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
s32 value = dev_pm_qos_get_user_latency_tolerance(dev);
if (value < 0)
return sprintf(buf, "auto\n");
else if (value == PM_QOS_LATENCY_ANY)
return sprintf(buf, "any\n");
return sprintf(buf, "%d\n", value);
}
static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t n)
{
s32 value;
int ret;
if (kstrtos32(buf, 0, &value)) {
if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
value = PM_QOS_LATENCY_ANY;
else
return -EINVAL;
}
ret = dev_pm_qos_update_user_latency_tolerance(dev, value);
return ret < 0 ? ret : n;
}
static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644,
pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);
static ssize_t pm_qos_no_power_off_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
& PM_QOS_FLAG_NO_POWER_OFF));
}
static ssize_t pm_qos_no_power_off_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t n)
{
int ret;
if (kstrtoint(buf, 0, &ret))
return -EINVAL;
if (ret != 0 && ret != 1)
return -EINVAL;
ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_NO_POWER_OFF, ret);
return ret < 0 ? ret : n;
}
static DEVICE_ATTR(pm_qos_no_power_off, 0644,
pm_qos_no_power_off_show, pm_qos_no_power_off_store);
static ssize_t pm_qos_remote_wakeup_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
& PM_QOS_FLAG_REMOTE_WAKEUP));
}
static ssize_t pm_qos_remote_wakeup_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t n)
{
int ret;
if (kstrtoint(buf, 0, &ret))
return -EINVAL;
if (ret != 0 && ret != 1)
return -EINVAL;
ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_REMOTE_WAKEUP, ret);
return ret < 0 ? ret : n;
}
static DEVICE_ATTR(pm_qos_remote_wakeup, 0644,
pm_qos_remote_wakeup_show, pm_qos_remote_wakeup_store);
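/*
 * Example (illustrative): kernel code can request the same flags directly,
 * bypassing sysfs; foo_req is a hypothetical request object owned by the
 * caller:
 *
 *	static struct dev_pm_qos_request foo_req;
 *
 *	dev_pm_qos_add_request(dev, &foo_req, DEV_PM_QOS_FLAGS,
 *			       PM_QOS_FLAG_NO_POWER_OFF);
 */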
#endif /* CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM_SLEEP
static const char _enabled[] = "enabled";
static const char _disabled[] = "disabled";
static ssize_t
wake_show(struct device *dev, struct device_attribute *attr, char *buf)
{
return sprintf(buf, "%s\n", device_can_wakeup(dev)
? (device_may_wakeup(dev) ? _enabled : _disabled)
: "");
}
static ssize_t
wake_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t n)
{
char *cp;
int len = n;
if (!device_can_wakeup(dev))
return -EINVAL;
cp = memchr(buf, '\n', n);
if (cp)
len = cp - buf;
if (len == sizeof _enabled - 1
&& strncmp(buf, _enabled, sizeof _enabled - 1) == 0)
device_set_wakeup_enable(dev, 1);
else if (len == sizeof _disabled - 1
&& strncmp(buf, _disabled, sizeof _disabled - 1) == 0)
device_set_wakeup_enable(dev, 0);
else
return -EINVAL;
return n;
}
static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store);
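/*
 * Example (illustrative): the wakeup attribute is only merged into sysfs
 * for devices that declared the capability; a driver usually does so at
 * probe time:
 *
 *	device_init_wakeup(dev, true);	// capable and enabled by default
 */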
static ssize_t wakeup_count_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
unsigned long count = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
count = dev->power.wakeup->event_count;
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_count, 0444, wakeup_count_show, NULL);
static ssize_t wakeup_active_count_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
unsigned long count = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
count = dev->power.wakeup->active_count;
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_active_count, 0444, wakeup_active_count_show, NULL);
static ssize_t wakeup_abort_count_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
unsigned long count = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
count = dev->power.wakeup->wakeup_count;
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_abort_count, 0444, wakeup_abort_count_show, NULL);
static ssize_t wakeup_expire_count_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
unsigned long count = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
count = dev->power.wakeup->expire_count;
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_expire_count, 0444, wakeup_expire_count_show, NULL);
static ssize_t wakeup_active_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
unsigned int active = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
active = dev->power.wakeup->active;
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%u\n", active) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_active, 0444, wakeup_active_show, NULL);
static ssize_t wakeup_total_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
s64 msec = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
msec = ktime_to_ms(dev->power.wakeup->total_time);
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_total_time_ms, 0444, wakeup_total_time_show, NULL);
static ssize_t wakeup_max_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
s64 msec = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
msec = ktime_to_ms(dev->power.wakeup->max_time);
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_max_time_ms, 0444, wakeup_max_time_show, NULL);
static ssize_t wakeup_last_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
s64 msec = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
msec = ktime_to_ms(dev->power.wakeup->last_time);
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_last_time_ms, 0444, wakeup_last_time_show, NULL);
#ifdef CONFIG_PM_AUTOSLEEP
static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
s64 msec = 0;
bool enabled = false;
spin_lock_irq(&dev->power.lock);
if (dev->power.wakeup) {
msec = ktime_to_ms(dev->power.wakeup->prevent_sleep_time);
enabled = true;
}
spin_unlock_irq(&dev->power.lock);
return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
}
static DEVICE_ATTR(wakeup_prevent_sleep_time_ms, 0444,
wakeup_prevent_sleep_time_show, NULL);
#endif /* CONFIG_PM_AUTOSLEEP */
#endif /* CONFIG_PM_SLEEP */
#ifdef CONFIG_PM_ADVANCED_DEBUG
#ifdef CONFIG_PM_RUNTIME
static ssize_t rtpm_usagecount_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
}
static ssize_t rtpm_children_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf, "%d\n", dev->power.ignore_children ?
0 : atomic_read(&dev->power.child_count));
}
static ssize_t rtpm_enabled_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
if (dev->power.disable_depth && !dev->power.runtime_auto)
return sprintf(buf, "disabled & forbidden\n");
else if (dev->power.disable_depth)
return sprintf(buf, "disabled\n");
else if (!dev->power.runtime_auto)
return sprintf(buf, "forbidden\n");
return sprintf(buf, "enabled\n");
}
static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
#endif
#ifdef CONFIG_PM_SLEEP
static ssize_t async_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%s\n",
device_async_suspend_enabled(dev) ?
_enabled : _disabled);
}
static ssize_t async_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t n)
{
char *cp;
int len = n;
cp = memchr(buf, '\n', n);
if (cp)
len = cp - buf;
if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0)
device_enable_async_suspend(dev);
else if (len == sizeof _disabled - 1 &&
strncmp(buf, _disabled, len) == 0)
device_disable_async_suspend(dev);
else
return -EINVAL;
return n;
}
static DEVICE_ATTR(async, 0644, async_show, async_store);
#endif
#endif /* CONFIG_PM_ADVANCED_DEBUG */
static struct attribute *power_attrs[] = {
#ifdef CONFIG_PM_ADVANCED_DEBUG
#ifdef CONFIG_PM_SLEEP
&dev_attr_async.attr,
#endif
#ifdef CONFIG_PM_RUNTIME
&dev_attr_runtime_status.attr,
&dev_attr_runtime_usage.attr,
&dev_attr_runtime_active_kids.attr,
&dev_attr_runtime_enabled.attr,
#endif
#endif /* CONFIG_PM_ADVANCED_DEBUG */
NULL,
};
static struct attribute_group pm_attr_group = {
.name = power_group_name,
.attrs = power_attrs,
};
static struct attribute *wakeup_attrs[] = {
#ifdef CONFIG_PM_SLEEP
&dev_attr_wakeup.attr,
&dev_attr_wakeup_count.attr,
&dev_attr_wakeup_active_count.attr,
&dev_attr_wakeup_abort_count.attr,
&dev_attr_wakeup_expire_count.attr,
&dev_attr_wakeup_active.attr,
&dev_attr_wakeup_total_time_ms.attr,
&dev_attr_wakeup_max_time_ms.attr,
&dev_attr_wakeup_last_time_ms.attr,
#ifdef CONFIG_PM_AUTOSLEEP
&dev_attr_wakeup_prevent_sleep_time_ms.attr,
#endif
#endif
NULL,
};
static struct attribute_group pm_wakeup_attr_group = {
.name = power_group_name,
.attrs = wakeup_attrs,
};
static struct attribute *runtime_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
#ifndef CONFIG_PM_ADVANCED_DEBUG
&dev_attr_runtime_status.attr,
#endif
&dev_attr_control.attr,
&dev_attr_runtime_suspended_time.attr,
&dev_attr_runtime_active_time.attr,
&dev_attr_autosuspend_delay_ms.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_runtime_attr_group = {
.name = power_group_name,
.attrs = runtime_attrs,
};
static struct attribute *pm_qos_resume_latency_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
&dev_attr_pm_qos_resume_latency_us.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_qos_resume_latency_attr_group = {
.name = power_group_name,
.attrs = pm_qos_resume_latency_attrs,
};
static struct attribute *pm_qos_latency_tolerance_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
&dev_attr_pm_qos_latency_tolerance_us.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_qos_latency_tolerance_attr_group = {
.name = power_group_name,
.attrs = pm_qos_latency_tolerance_attrs,
};
static struct attribute *pm_qos_flags_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
&dev_attr_pm_qos_no_power_off.attr,
&dev_attr_pm_qos_remote_wakeup.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_qos_flags_attr_group = {
.name = power_group_name,
.attrs = pm_qos_flags_attrs,
};
int dpm_sysfs_add(struct device *dev)
{
int rc;
rc = sysfs_create_group(&dev->kobj, &pm_attr_group);
if (rc)
return rc;
if (pm_runtime_callbacks_present(dev)) {
rc = sysfs_merge_group(&dev->kobj, &pm_runtime_attr_group);
if (rc)
goto err_out;
}
if (device_can_wakeup(dev)) {
rc = sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
if (rc)
goto err_runtime;
}
if (dev->power.set_latency_tolerance) {
rc = sysfs_merge_group(&dev->kobj,
&pm_qos_latency_tolerance_attr_group);
if (rc)
goto err_wakeup;
}
return 0;
err_wakeup:
sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
err_runtime:
sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
err_out:
sysfs_remove_group(&dev->kobj, &pm_attr_group);
return rc;
}
int wakeup_sysfs_add(struct device *dev)
{
return sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
}
void wakeup_sysfs_remove(struct device *dev)
{
sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
}
int pm_qos_sysfs_add_resume_latency(struct device *dev)
{
return sysfs_merge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
}
void pm_qos_sysfs_remove_resume_latency(struct device *dev)
{
sysfs_unmerge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
}
int pm_qos_sysfs_add_flags(struct device *dev)
{
return sysfs_merge_group(&dev->kobj, &pm_qos_flags_attr_group);
}
void pm_qos_sysfs_remove_flags(struct device *dev)
{
sysfs_unmerge_group(&dev->kobj, &pm_qos_flags_attr_group);
}
void rpm_sysfs_remove(struct device *dev)
{
sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
}
void dpm_sysfs_remove(struct device *dev)
{
sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
dev_pm_qos_constraints_destroy(dev);
rpm_sysfs_remove(dev);
sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
sysfs_remove_group(&dev->kobj, &pm_attr_group);
}

266
drivers/base/power/trace.c Normal file
View file

@ -0,0 +1,266 @@
/*
* drivers/base/power/trace.c
*
* Copyright (C) 2006 Linus Torvalds
*
* Trace facility for suspend/resume problems, when none of the
* devices may be working.
*/
#include <linux/resume-trace.h>
#include <linux/export.h>
#include <linux/rtc.h>
#include <asm/rtc.h>
#include "power.h"
/*
* Horrid, horrid, horrid.
*
* It turns out that the _only_ piece of hardware that actually
* keeps its value across a hard boot (and, more importantly, the
* POST init sequence) is literally the realtime clock.
*
* Never mind that an RTC chip has 114 bytes (and often a whole
* other bank of an additional 128 bytes) of nice SRAM that is
* _designed_ to keep data - the POST will clear it. So we literally
* can just use the few bytes of actual time data, which means that
* we're really limited.
*
* It means, for example, that we can't use the seconds at all
* (since the time between the hang and the boot might be more
* than a minute), and we'd better not depend on the low bits of
* the minutes either.
*
* There are the wday fields etc, but I wouldn't guarantee those
* are dependable either. And if the date isn't valid, either the
* hw or POST will do strange things.
*
* So we're left with:
* - year: 0-99
* - month: 0-11
* - day-of-month: 1-28
* - hour: 0-23
* - min: (0-19)*3 (twenty 3-minute steps, matching the code below)
*
* Giving us a total range of 0-16128000 (0xf61800), ie less
* than 24 bits of actual data we can save across reboots.
*
* And if your box can't boot in less than three minutes,
* you're screwed.
*
* Now, almost 24 bits of data is pitifully small, so we need
* to be pretty dense if we want to use it for anything nice.
* What we do is that instead of saving off nice readable info,
* we save off _hashes_ of information that we can hopefully
* regenerate after the reboot.
*
* In particular, this means that we might be unlucky, and hit
* a case where we have a hash collision, and we end up not
* being able to tell for certain exactly which case happened.
* But that's hopefully unlikely.
*
* What we do is to take the bits we can fit, and split them
* into three parts (16*997*1009 = 16095568), and use the values
* for:
* - 0-15: user-settable
* - 0-996: file + line number
* - 0-1008: device
*/
#define USERHASH (16)
#define FILEHASH (997)
#define DEVHASH (1009)
#define DEVSEED (7919)
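/*
 * Worked example (illustrative): user=3, file=512, device=700 packs to
 * 3 + 16*(512 + 997*700) = 11174595, comfortably below the maximum of
 * 16*997*1009 - 1 = 16095567, so it survives the RTC round trip intact.
 */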
static unsigned int dev_hash_value;
static int set_magic_time(unsigned int user, unsigned int file, unsigned int device)
{
unsigned int n = user + USERHASH*(file + FILEHASH*device);
// June 7th, 2006
static struct rtc_time time = {
.tm_sec = 0,
.tm_min = 0,
.tm_hour = 0,
.tm_mday = 7,
.tm_mon = 5, // June - counting from zero
.tm_year = 106,
.tm_wday = 3,
.tm_yday = 160,
.tm_isdst = 1
};
time.tm_year = (n % 100);
n /= 100;
time.tm_mon = (n % 12);
n /= 12;
time.tm_mday = (n % 28) + 1;
n /= 28;
time.tm_hour = (n % 24);
n /= 24;
time.tm_min = (n % 20) * 3;
n /= 20;
set_rtc_time(&time);
return n ? -1 : 0;
}
static unsigned int read_magic_time(void)
{
struct rtc_time time;
unsigned int val;
get_rtc_time(&time);
pr_info("RTC time: %2d:%02d:%02d, date: %02d/%02d/%02d\n",
time.tm_hour, time.tm_min, time.tm_sec,
time.tm_mon + 1, time.tm_mday, time.tm_year % 100);
val = time.tm_year; /* 100 years */
if (val > 100)
val -= 100;
val += time.tm_mon * 100; /* 12 months */
val += (time.tm_mday-1) * 100 * 12; /* 28 month-days */
val += time.tm_hour * 100 * 12 * 28; /* 24 hours */
val += (time.tm_min / 3) * 100 * 12 * 28 * 24; /* 20 3-minute intervals */
return val;
}
/*
* This is just the sdbm hash function with a user-supplied
* seed and final size parameter.
*/
static unsigned int hash_string(unsigned int seed, const char *data, unsigned int mod)
{
unsigned char c;
while ((c = *data++) != 0) {
seed = (seed << 16) + (seed << 6) - seed + c;
}
return seed % mod;
}
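/*
 * The update step above is the classic sdbm recurrence written without a
 * multiply: (seed << 16) + (seed << 6) - seed == seed * 65599.
 */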
void set_trace_device(struct device *dev)
{
dev_hash_value = hash_string(DEVSEED, dev_name(dev), DEVHASH);
}
EXPORT_SYMBOL(set_trace_device);
/*
* We could just take the "tracedata" index into the .tracedata
* section instead. Generating a hash of the data gives us a
* chance to work across kernel versions, and perhaps more
* importantly it also gives us valid/invalid check (ie we will
* likely not give totally bogus reports - if the hash matches,
* it's not any guarantee, but it's a high _likelihood_ that
* the match is valid).
*/
void generate_resume_trace(const void *tracedata, unsigned int user)
{
unsigned short lineno = *(unsigned short *)tracedata;
const char *file = *(const char **)(tracedata + 2);
unsigned int user_hash_value, file_hash_value;
user_hash_value = user % USERHASH;
file_hash_value = hash_string(lineno, file, FILEHASH);
set_magic_time(user_hash_value, file_hash_value, dev_hash_value);
}
EXPORT_SYMBOL(generate_resume_trace);
extern char __tracedata_start, __tracedata_end;
static int show_file_hash(unsigned int value)
{
int match;
char *tracedata;
match = 0;
for (tracedata = &__tracedata_start ; tracedata < &__tracedata_end ;
tracedata += 2 + sizeof(unsigned long)) {
unsigned short lineno = *(unsigned short *)tracedata;
const char *file = *(const char **)(tracedata + 2);
unsigned int hash = hash_string(lineno, file, FILEHASH);
if (hash != value)
continue;
pr_info(" hash matches %s:%u\n", file, lineno);
match++;
}
return match;
}
static int show_dev_hash(unsigned int value)
{
int match = 0;
struct list_head *entry;
device_pm_lock();
entry = dpm_list.prev;
while (entry != &dpm_list) {
struct device *dev = to_device(entry);
unsigned int hash = hash_string(DEVSEED, dev_name(dev), DEVHASH);
if (hash == value) {
dev_info(dev, "hash matches\n");
match++;
}
entry = entry->prev;
}
device_pm_unlock();
return match;
}
static unsigned int hash_value_early_read;
int show_trace_dev_match(char *buf, size_t size)
{
unsigned int value = hash_value_early_read / (USERHASH * FILEHASH);
int ret = 0;
struct list_head *entry;
/*
* It's possible that multiple devices will match the hash and we can't
* tell which is the culprit, so it's best to output them all.
*/
device_pm_lock();
entry = dpm_list.prev;
while (size && entry != &dpm_list) {
struct device *dev = to_device(entry);
unsigned int hash = hash_string(DEVSEED, dev_name(dev),
DEVHASH);
if (hash == value) {
int len = snprintf(buf, size, "%s\n",
dev_driver_string(dev));
if (len > size)
len = size;
buf += len;
ret += len;
size -= len;
}
entry = entry->prev;
}
device_pm_unlock();
return ret;
}
static int early_resume_init(void)
{
hash_value_early_read = read_magic_time();
return 0;
}
static int late_resume_init(void)
{
unsigned int val = hash_value_early_read;
unsigned int user, file, dev;
user = val % USERHASH;
val = val / USERHASH;
file = val % FILEHASH;
val = val / FILEHASH;
dev = val /* % DEVHASH */;
pr_info(" Magic number: %d:%d:%d\n", user, file, dev);
show_file_hash(file);
show_dev_hash(dev);
return 0;
}
core_initcall(early_resume_init);
late_initcall(late_resume_init);

1064
drivers/base/power/wakeup.c Normal file

File diff suppressed because it is too large

28
drivers/base/regmap/Kconfig Normal file
View file

@ -0,0 +1,28 @@
# Generic register map support. There are no user serviceable options here,
# this is an API intended to be used by other kernel subsystems. These
# subsystems should select the appropriate symbols.
config REGMAP
default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_MMIO || REGMAP_IRQ)
select LZO_COMPRESS
select LZO_DECOMPRESS
select IRQ_DOMAIN if REGMAP_IRQ
bool
config REGMAP_I2C
tristate
depends on I2C
config REGMAP_SPI
tristate
depends on SPI
config REGMAP_SPMI
tristate
depends on SPMI
config REGMAP_MMIO
tristate
config REGMAP_IRQ
bool

8
drivers/base/regmap/Makefile Normal file
View file

@ -0,0 +1,8 @@
obj-$(CONFIG_REGMAP) += regmap.o regcache.o
obj-$(CONFIG_REGMAP) += regcache-rbtree.o regcache-lzo.o regcache-flat.o
obj-$(CONFIG_DEBUG_FS) += regmap-debugfs.o
obj-$(CONFIG_REGMAP_I2C) += regmap-i2c.o
obj-$(CONFIG_REGMAP_SPI) += regmap-spi.o
obj-$(CONFIG_REGMAP_SPMI) += regmap-spmi.o
obj-$(CONFIG_REGMAP_MMIO) += regmap-mmio.o
obj-$(CONFIG_REGMAP_IRQ) += regmap-irq.o

248
drivers/base/regmap/internal.h Normal file
View file

@ -0,0 +1,248 @@
/*
* Register map access API internal header
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Mark Brown <broonie@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _REGMAP_INTERNAL_H
#define _REGMAP_INTERNAL_H
#include <linux/regmap.h>
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/wait.h>
struct regmap;
struct regcache_ops;
struct regmap_debugfs_off_cache {
struct list_head list;
off_t min;
off_t max;
unsigned int base_reg;
unsigned int max_reg;
};
struct regmap_format {
size_t buf_size;
size_t reg_bytes;
size_t pad_bytes;
size_t val_bytes;
void (*format_write)(struct regmap *map,
unsigned int reg, unsigned int val);
void (*format_reg)(void *buf, unsigned int reg, unsigned int shift);
void (*format_val)(void *buf, unsigned int val, unsigned int shift);
unsigned int (*parse_val)(const void *buf);
void (*parse_inplace)(void *buf);
};
struct regmap_async {
struct list_head list;
struct regmap *map;
void *work_buf;
};
struct regmap {
union {
struct mutex mutex;
spinlock_t spinlock;
};
unsigned long spinlock_flags;
regmap_lock lock;
regmap_unlock unlock;
void *lock_arg; /* This is passed to lock/unlock functions */
struct device *dev; /* Device we do I/O on */
void *work_buf; /* Scratch buffer used to format I/O */
struct regmap_format format; /* Buffer format */
const struct regmap_bus *bus;
void *bus_context;
const char *name;
bool async;
spinlock_t async_lock;
wait_queue_head_t async_waitq;
struct list_head async_list;
struct list_head async_free;
int async_ret;
#ifdef CONFIG_DEBUG_FS
struct dentry *debugfs;
const char *debugfs_name;
unsigned int debugfs_reg_len;
unsigned int debugfs_val_len;
unsigned int debugfs_tot_len;
struct list_head debugfs_off_cache;
struct mutex cache_lock;
#endif
unsigned int max_register;
bool (*writeable_reg)(struct device *dev, unsigned int reg);
bool (*readable_reg)(struct device *dev, unsigned int reg);
bool (*volatile_reg)(struct device *dev, unsigned int reg);
bool (*precious_reg)(struct device *dev, unsigned int reg);
const struct regmap_access_table *wr_table;
const struct regmap_access_table *rd_table;
const struct regmap_access_table *volatile_table;
const struct regmap_access_table *precious_table;
int (*reg_read)(void *context, unsigned int reg, unsigned int *val);
int (*reg_write)(void *context, unsigned int reg, unsigned int val);
bool defer_caching;
u8 read_flag_mask;
u8 write_flag_mask;
/* number of bits to (left) shift the reg value when formatting */
int reg_shift;
int reg_stride;
/* regcache specific members */
const struct regcache_ops *cache_ops;
enum regcache_type cache_type;
/* number of bytes in reg_defaults_raw */
unsigned int cache_size_raw;
/* number of bytes per word in reg_defaults_raw */
unsigned int cache_word_size;
/* number of entries in reg_defaults */
unsigned int num_reg_defaults;
/* number of entries in reg_defaults_raw */
unsigned int num_reg_defaults_raw;
/* if set, only the cache is modified not the HW */
u32 cache_only;
/* if set, only the HW is modified not the cache */
u32 cache_bypass;
/* if set, remember to free reg_defaults_raw */
bool cache_free;
struct reg_default *reg_defaults;
const void *reg_defaults_raw;
void *cache;
u32 cache_dirty;
struct reg_default *patch;
int patch_regs;
/* if set, converts bulk rw to single rw */
bool use_single_rw;
/* if set, the device supports multi write mode */
bool can_multi_write;
struct rb_root range_tree;
void *selector_work_buf; /* Scratch buffer used for selector */
};
struct regcache_ops {
const char *name;
enum regcache_type type;
int (*init)(struct regmap *map);
int (*exit)(struct regmap *map);
#ifdef CONFIG_DEBUG_FS
void (*debugfs_init)(struct regmap *map);
#endif
int (*read)(struct regmap *map, unsigned int reg, unsigned int *value);
int (*write)(struct regmap *map, unsigned int reg, unsigned int value);
int (*sync)(struct regmap *map, unsigned int min, unsigned int max);
int (*drop)(struct regmap *map, unsigned int min, unsigned int max);
};
bool regmap_writeable(struct regmap *map, unsigned int reg);
bool regmap_readable(struct regmap *map, unsigned int reg);
bool regmap_volatile(struct regmap *map, unsigned int reg);
bool regmap_precious(struct regmap *map, unsigned int reg);
int _regmap_write(struct regmap *map, unsigned int reg,
unsigned int val);
struct regmap_range_node {
struct rb_node node;
const char *name;
struct regmap *map;
unsigned int range_min;
unsigned int range_max;
unsigned int selector_reg;
unsigned int selector_mask;
int selector_shift;
unsigned int window_start;
unsigned int window_len;
};
struct regmap_field {
struct regmap *regmap;
unsigned int mask;
/* lsb */
unsigned int shift;
unsigned int reg;
unsigned int id_size;
unsigned int id_offset;
};
#ifdef CONFIG_DEBUG_FS
extern void regmap_debugfs_initcall(void);
extern void regmap_debugfs_init(struct regmap *map, const char *name);
extern void regmap_debugfs_exit(struct regmap *map);
#else
static inline void regmap_debugfs_initcall(void) { }
static inline void regmap_debugfs_init(struct regmap *map, const char *name) { }
static inline void regmap_debugfs_exit(struct regmap *map) { }
#endif
/* regcache core declarations */
int regcache_init(struct regmap *map, const struct regmap_config *config);
void regcache_exit(struct regmap *map);
int regcache_read(struct regmap *map,
unsigned int reg, unsigned int *value);
int regcache_write(struct regmap *map,
unsigned int reg, unsigned int value);
int regcache_sync(struct regmap *map);
int regcache_sync_block(struct regmap *map, void *block,
unsigned long *cache_present,
unsigned int block_base, unsigned int start,
unsigned int end);
static inline const void *regcache_get_val_addr(struct regmap *map,
const void *base,
unsigned int idx)
{
return base + (map->cache_word_size * idx);
}
unsigned int regcache_get_val(struct regmap *map, const void *base,
unsigned int idx);
bool regcache_set_val(struct regmap *map, void *base, unsigned int idx,
unsigned int val);
int regcache_lookup_reg(struct regmap *map, unsigned int reg);
int _regmap_raw_write(struct regmap *map, unsigned int reg,
const void *val, size_t val_len);
void regmap_async_complete_cb(struct regmap_async *async, int ret);
extern struct regcache_ops regcache_rbtree_ops;
extern struct regcache_ops regcache_lzo_ops;
extern struct regcache_ops regcache_flat_ops;
static inline const char *regmap_name(const struct regmap *map)
{
if (map->dev)
return dev_name(map->dev);
return map->name;
}
#endif

72
drivers/base/regmap/regcache-flat.c Normal file
View file

@ -0,0 +1,72 @@
/*
* Register cache access API - flat caching support
*
* Copyright 2012 Wolfson Microelectronics plc
*
* Author: Mark Brown <broonie@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/seq_file.h>
#include "internal.h"
static int regcache_flat_init(struct regmap *map)
{
int i;
unsigned int *cache;
map->cache = kzalloc(sizeof(unsigned int) * (map->max_register + 1),
GFP_KERNEL);
if (!map->cache)
return -ENOMEM;
cache = map->cache;
for (i = 0; i < map->num_reg_defaults; i++)
cache[map->reg_defaults[i].reg] = map->reg_defaults[i].def;
return 0;
}
static int regcache_flat_exit(struct regmap *map)
{
kfree(map->cache);
map->cache = NULL;
return 0;
}
static int regcache_flat_read(struct regmap *map,
unsigned int reg, unsigned int *value)
{
unsigned int *cache = map->cache;
*value = cache[reg];
return 0;
}
static int regcache_flat_write(struct regmap *map, unsigned int reg,
unsigned int value)
{
unsigned int *cache = map->cache;
cache[reg] = value;
return 0;
}
struct regcache_ops regcache_flat_ops = {
.type = REGCACHE_FLAT,
.name = "flat",
.init = regcache_flat_init,
.exit = regcache_flat_exit,
.read = regcache_flat_read,
.write = regcache_flat_write,
};

378
drivers/base/regmap/regcache-lzo.c Normal file
View file

@ -0,0 +1,378 @@
/*
* Register cache access API - LZO caching support
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/lzo.h>
#include "internal.h"
static int regcache_lzo_exit(struct regmap *map);
struct regcache_lzo_ctx {
void *wmem;
void *dst;
const void *src;
size_t src_len;
size_t dst_len;
size_t decompressed_size;
unsigned long *sync_bmp;
int sync_bmp_nbits;
};
#define LZO_BLOCK_NUM 8
static int regcache_lzo_block_count(struct regmap *map)
{
return LZO_BLOCK_NUM;
}
static int regcache_lzo_prepare(struct regcache_lzo_ctx *lzo_ctx)
{
lzo_ctx->wmem = kmalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL);
if (!lzo_ctx->wmem)
return -ENOMEM;
return 0;
}
static int regcache_lzo_compress(struct regcache_lzo_ctx *lzo_ctx)
{
size_t compress_size;
int ret;
ret = lzo1x_1_compress(lzo_ctx->src, lzo_ctx->src_len,
lzo_ctx->dst, &compress_size, lzo_ctx->wmem);
if (ret != LZO_E_OK || compress_size > lzo_ctx->dst_len)
return -EINVAL;
lzo_ctx->dst_len = compress_size;
return 0;
}
static int regcache_lzo_decompress(struct regcache_lzo_ctx *lzo_ctx)
{
size_t dst_len;
int ret;
dst_len = lzo_ctx->dst_len;
ret = lzo1x_decompress_safe(lzo_ctx->src, lzo_ctx->src_len,
lzo_ctx->dst, &dst_len);
if (ret != LZO_E_OK || dst_len != lzo_ctx->dst_len)
return -EINVAL;
return 0;
}
static int regcache_lzo_compress_cache_block(struct regmap *map,
struct regcache_lzo_ctx *lzo_ctx)
{
int ret;
lzo_ctx->dst_len = lzo1x_worst_compress(PAGE_SIZE);
lzo_ctx->dst = kmalloc(lzo_ctx->dst_len, GFP_KERNEL);
if (!lzo_ctx->dst) {
lzo_ctx->dst_len = 0;
return -ENOMEM;
}
ret = regcache_lzo_compress(lzo_ctx);
if (ret < 0)
return ret;
return 0;
}
static int regcache_lzo_decompress_cache_block(struct regmap *map,
struct regcache_lzo_ctx *lzo_ctx)
{
int ret;
lzo_ctx->dst_len = lzo_ctx->decompressed_size;
lzo_ctx->dst = kmalloc(lzo_ctx->dst_len, GFP_KERNEL);
if (!lzo_ctx->dst) {
lzo_ctx->dst_len = 0;
return -ENOMEM;
}
ret = regcache_lzo_decompress(lzo_ctx);
if (ret < 0)
return ret;
return 0;
}
static inline int regcache_lzo_get_blkindex(struct regmap *map,
unsigned int reg)
{
return ((reg / map->reg_stride) * map->cache_word_size) /
DIV_ROUND_UP(map->cache_size_raw,
regcache_lzo_block_count(map));
}
static inline int regcache_lzo_get_blkpos(struct regmap *map,
unsigned int reg)
{
return (reg / map->reg_stride) %
(DIV_ROUND_UP(map->cache_size_raw,
regcache_lzo_block_count(map)) /
map->cache_word_size);
}
static inline int regcache_lzo_get_blksize(struct regmap *map)
{
return DIV_ROUND_UP(map->cache_size_raw,
regcache_lzo_block_count(map));
}
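/*
 * Worked example (illustrative): with cache_size_raw = 1024, eight blocks
 * and cache_word_size = 2, each block covers DIV_ROUND_UP(1024, 8) = 128
 * bytes, i.e. 64 registers. For reg = 70 and reg_stride = 1 this gives
 * blkindex = (70 * 2) / 128 = 1 and blkpos = 70 % 64 = 6.
 */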
static int regcache_lzo_init(struct regmap *map)
{
struct regcache_lzo_ctx **lzo_blocks;
size_t bmp_size;
int ret, i, blksize, blkcount;
const char *p, *end;
unsigned long *sync_bmp;
ret = 0;
blkcount = regcache_lzo_block_count(map);
map->cache = kzalloc(blkcount * sizeof *lzo_blocks,
GFP_KERNEL);
if (!map->cache)
return -ENOMEM;
lzo_blocks = map->cache;
/*
* allocate a bitmap to be used when syncing the cache with
* the hardware. Each time a register is modified, the corresponding
* bit is set in the bitmap, so we know that we have to sync
* that register.
*/
bmp_size = map->num_reg_defaults_raw;
sync_bmp = kmalloc(BITS_TO_LONGS(bmp_size) * sizeof(long),
GFP_KERNEL);
if (!sync_bmp) {
ret = -ENOMEM;
goto err;
}
bitmap_zero(sync_bmp, bmp_size);
/* allocate the lzo blocks and initialize them */
for (i = 0; i < blkcount; i++) {
lzo_blocks[i] = kzalloc(sizeof **lzo_blocks,
GFP_KERNEL);
if (!lzo_blocks[i]) {
kfree(sync_bmp);
ret = -ENOMEM;
goto err;
}
lzo_blocks[i]->sync_bmp = sync_bmp;
lzo_blocks[i]->sync_bmp_nbits = bmp_size;
/* alloc the working space for the compressed block */
ret = regcache_lzo_prepare(lzo_blocks[i]);
if (ret < 0)
goto err;
}
blksize = regcache_lzo_get_blksize(map);
p = map->reg_defaults_raw;
end = map->reg_defaults_raw + map->cache_size_raw;
/* compress the register map and fill the lzo blocks */
for (i = 0; i < blkcount; i++, p += blksize) {
lzo_blocks[i]->src = p;
if (p + blksize > end)
lzo_blocks[i]->src_len = end - p;
else
lzo_blocks[i]->src_len = blksize;
ret = regcache_lzo_compress_cache_block(map,
lzo_blocks[i]);
if (ret < 0)
goto err;
lzo_blocks[i]->decompressed_size =
lzo_blocks[i]->src_len;
}
return 0;
err:
regcache_lzo_exit(map);
return ret;
}
static int regcache_lzo_exit(struct regmap *map)
{
struct regcache_lzo_ctx **lzo_blocks;
int i, blkcount;
lzo_blocks = map->cache;
if (!lzo_blocks)
return 0;
blkcount = regcache_lzo_block_count(map);
/*
* the pointer to the bitmap used for syncing the cache
* is shared amongst all lzo_blocks. Ensure it is freed
* only once.
*/
if (lzo_blocks[0])
kfree(lzo_blocks[0]->sync_bmp);
for (i = 0; i < blkcount; i++) {
if (lzo_blocks[i]) {
kfree(lzo_blocks[i]->wmem);
kfree(lzo_blocks[i]->dst);
}
/* each lzo_block is a pointer returned by kmalloc or NULL */
kfree(lzo_blocks[i]);
}
kfree(lzo_blocks);
map->cache = NULL;
return 0;
}
static int regcache_lzo_read(struct regmap *map,
unsigned int reg, unsigned int *value)
{
struct regcache_lzo_ctx *lzo_block, **lzo_blocks;
int ret, blkindex, blkpos;
size_t blksize, tmp_dst_len;
void *tmp_dst;
/* index of the compressed lzo block */
blkindex = regcache_lzo_get_blkindex(map, reg);
/* register index within the decompressed block */
blkpos = regcache_lzo_get_blkpos(map, reg);
/* size of the compressed block */
blksize = regcache_lzo_get_blksize(map);
lzo_blocks = map->cache;
lzo_block = lzo_blocks[blkindex];
/* save the pointer and length of the compressed block */
tmp_dst = lzo_block->dst;
tmp_dst_len = lzo_block->dst_len;
/* prepare the source to be the compressed block */
lzo_block->src = lzo_block->dst;
lzo_block->src_len = lzo_block->dst_len;
/* decompress the block */
ret = regcache_lzo_decompress_cache_block(map, lzo_block);
if (ret >= 0)
/* fetch the value from the cache */
*value = regcache_get_val(map, lzo_block->dst, blkpos);
kfree(lzo_block->dst);
/* restore the pointer and length of the compressed block */
lzo_block->dst = tmp_dst;
lzo_block->dst_len = tmp_dst_len;
return ret;
}
static int regcache_lzo_write(struct regmap *map,
unsigned int reg, unsigned int value)
{
struct regcache_lzo_ctx *lzo_block, **lzo_blocks;
int ret, blkindex, blkpos;
size_t blksize, tmp_dst_len;
void *tmp_dst;
/* index of the compressed lzo block */
blkindex = regcache_lzo_get_blkindex(map, reg);
/* register index within the decompressed block */
blkpos = regcache_lzo_get_blkpos(map, reg);
/* size of the compressed block */
blksize = regcache_lzo_get_blksize(map);
lzo_blocks = map->cache;
lzo_block = lzo_blocks[blkindex];
/* save the pointer and length of the compressed block */
tmp_dst = lzo_block->dst;
tmp_dst_len = lzo_block->dst_len;
/* prepare the source to be the compressed block */
lzo_block->src = lzo_block->dst;
lzo_block->src_len = lzo_block->dst_len;
/* decompress the block */
ret = regcache_lzo_decompress_cache_block(map, lzo_block);
if (ret < 0) {
kfree(lzo_block->dst);
goto out;
}
/* write the new value to the cache */
if (regcache_set_val(map, lzo_block->dst, blkpos, value)) {
kfree(lzo_block->dst);
goto out;
}
/* prepare the source to be the decompressed block */
lzo_block->src = lzo_block->dst;
lzo_block->src_len = lzo_block->dst_len;
/* compress the block */
ret = regcache_lzo_compress_cache_block(map, lzo_block);
if (ret < 0) {
kfree(lzo_block->dst);
kfree(lzo_block->src);
goto out;
}
/* set the bit so we know we have to sync this register */
set_bit(reg / map->reg_stride, lzo_block->sync_bmp);
kfree(tmp_dst);
kfree(lzo_block->src);
return 0;
out:
lzo_block->dst = tmp_dst;
lzo_block->dst_len = tmp_dst_len;
return ret;
}
static int regcache_lzo_sync(struct regmap *map, unsigned int min,
unsigned int max)
{
struct regcache_lzo_ctx **lzo_blocks;
unsigned int val;
int i;
int ret;
lzo_blocks = map->cache;
i = min;
for_each_set_bit_from(i, lzo_blocks[0]->sync_bmp,
lzo_blocks[0]->sync_bmp_nbits) {
if (i > max)
continue;
ret = regcache_read(map, i, &val);
if (ret)
return ret;
/* Is this the hardware default? If so skip. */
ret = regcache_lookup_reg(map, i);
if (ret > 0 && val == map->reg_defaults[ret].def)
continue;
map->cache_bypass = 1;
ret = _regmap_write(map, i, val);
map->cache_bypass = 0;
if (ret)
return ret;
dev_dbg(map->dev, "Synced register %#x, value %#x\n",
i, val);
}
return 0;
}
struct regcache_ops regcache_lzo_ops = {
.type = REGCACHE_COMPRESSED,
.name = "lzo",
.init = regcache_lzo_init,
.exit = regcache_lzo_exit,
.read = regcache_lzo_read,
.write = regcache_lzo_write,
.sync = regcache_lzo_sync
};

536
drivers/base/regmap/regcache-rbtree.c Normal file
View file

@ -0,0 +1,536 @@
/*
* Register cache access API - rbtree caching support
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/debugfs.h>
#include <linux/rbtree.h>
#include <linux/seq_file.h>
#include "internal.h"
static int regcache_rbtree_write(struct regmap *map, unsigned int reg,
unsigned int value);
static int regcache_rbtree_exit(struct regmap *map);
struct regcache_rbtree_node {
/* block of adjacent registers */
void *block;
/* Which registers are present */
long *cache_present;
/* base register handled by this block */
unsigned int base_reg;
/* number of registers available in the block */
unsigned int blklen;
/* the actual rbtree node holding this block */
struct rb_node node;
} __attribute__ ((packed));
struct regcache_rbtree_ctx {
struct rb_root root;
struct regcache_rbtree_node *cached_rbnode;
};
static inline void regcache_rbtree_get_base_top_reg(
struct regmap *map,
struct regcache_rbtree_node *rbnode,
unsigned int *base, unsigned int *top)
{
*base = rbnode->base_reg;
*top = rbnode->base_reg + ((rbnode->blklen - 1) * map->reg_stride);
}
static unsigned int regcache_rbtree_get_register(struct regmap *map,
struct regcache_rbtree_node *rbnode, unsigned int idx)
{
return regcache_get_val(map, rbnode->block, idx);
}
static void regcache_rbtree_set_register(struct regmap *map,
struct regcache_rbtree_node *rbnode,
unsigned int idx, unsigned int val)
{
set_bit(idx, rbnode->cache_present);
regcache_set_val(map, rbnode->block, idx, val);
}
static struct regcache_rbtree_node *regcache_rbtree_lookup(struct regmap *map,
unsigned int reg)
{
struct regcache_rbtree_ctx *rbtree_ctx = map->cache;
struct rb_node *node;
struct regcache_rbtree_node *rbnode;
unsigned int base_reg, top_reg;
rbnode = rbtree_ctx->cached_rbnode;
if (rbnode) {
regcache_rbtree_get_base_top_reg(map, rbnode, &base_reg,
&top_reg);
if (reg >= base_reg && reg <= top_reg)
return rbnode;
}
node = rbtree_ctx->root.rb_node;
while (node) {
rbnode = container_of(node, struct regcache_rbtree_node, node);
regcache_rbtree_get_base_top_reg(map, rbnode, &base_reg,
&top_reg);
if (reg >= base_reg && reg <= top_reg) {
rbtree_ctx->cached_rbnode = rbnode;
return rbnode;
} else if (reg > top_reg) {
node = node->rb_right;
} else if (reg < base_reg) {
node = node->rb_left;
}
}
return NULL;
}
static int regcache_rbtree_insert(struct regmap *map, struct rb_root *root,
struct regcache_rbtree_node *rbnode)
{
struct rb_node **new, *parent;
struct regcache_rbtree_node *rbnode_tmp;
unsigned int base_reg_tmp, top_reg_tmp;
unsigned int base_reg;
parent = NULL;
new = &root->rb_node;
while (*new) {
rbnode_tmp = container_of(*new, struct regcache_rbtree_node,
node);
/* base and top registers of the current rbnode */
regcache_rbtree_get_base_top_reg(map, rbnode_tmp, &base_reg_tmp,
&top_reg_tmp);
/* base register of the rbnode to be added */
base_reg = rbnode->base_reg;
parent = *new;
/* if this register has already been inserted, just return */
if (base_reg >= base_reg_tmp &&
base_reg <= top_reg_tmp)
return 0;
else if (base_reg > top_reg_tmp)
new = &((*new)->rb_right);
else if (base_reg < base_reg_tmp)
new = &((*new)->rb_left);
}
/* insert the node into the rbtree */
rb_link_node(&rbnode->node, parent, new);
rb_insert_color(&rbnode->node, root);
return 1;
}
#ifdef CONFIG_DEBUG_FS
static int rbtree_show(struct seq_file *s, void *ignored)
{
struct regmap *map = s->private;
struct regcache_rbtree_ctx *rbtree_ctx = map->cache;
struct regcache_rbtree_node *n;
struct rb_node *node;
unsigned int base, top;
size_t mem_size;
int nodes = 0;
int registers = 0;
int this_registers, average;
map->lock(map->lock_arg);
mem_size = sizeof(*rbtree_ctx);
for (node = rb_first(&rbtree_ctx->root); node != NULL;
node = rb_next(node)) {
n = container_of(node, struct regcache_rbtree_node, node);
mem_size += sizeof(*n);
mem_size += (n->blklen * map->cache_word_size);
mem_size += BITS_TO_LONGS(n->blklen) * sizeof(long);
regcache_rbtree_get_base_top_reg(map, n, &base, &top);
this_registers = ((top - base) / map->reg_stride) + 1;
seq_printf(s, "%x-%x (%d)\n", base, top, this_registers);
nodes++;
registers += this_registers;
}
if (nodes)
average = registers / nodes;
else
average = 0;
seq_printf(s, "%d nodes, %d registers, average %d registers, used %zu bytes\n",
nodes, registers, average, mem_size);
map->unlock(map->lock_arg);
return 0;
}
static int rbtree_open(struct inode *inode, struct file *file)
{
return single_open(file, rbtree_show, inode->i_private);
}
static const struct file_operations rbtree_fops = {
.open = rbtree_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static void rbtree_debugfs_init(struct regmap *map)
{
debugfs_create_file("rbtree", 0400, map->debugfs, map, &rbtree_fops);
}
#endif
static int regcache_rbtree_init(struct regmap *map)
{
struct regcache_rbtree_ctx *rbtree_ctx;
int i;
int ret;
map->cache = kmalloc(sizeof *rbtree_ctx, GFP_KERNEL);
if (!map->cache)
return -ENOMEM;
rbtree_ctx = map->cache;
rbtree_ctx->root = RB_ROOT;
rbtree_ctx->cached_rbnode = NULL;
for (i = 0; i < map->num_reg_defaults; i++) {
ret = regcache_rbtree_write(map,
map->reg_defaults[i].reg,
map->reg_defaults[i].def);
if (ret)
goto err;
}
return 0;
err:
regcache_rbtree_exit(map);
return ret;
}
static int regcache_rbtree_exit(struct regmap *map)
{
struct rb_node *next;
struct regcache_rbtree_ctx *rbtree_ctx;
struct regcache_rbtree_node *rbtree_node;
/* if we've already been called then just return */
rbtree_ctx = map->cache;
if (!rbtree_ctx)
return 0;
/* free up the rbtree */
next = rb_first(&rbtree_ctx->root);
while (next) {
rbtree_node = rb_entry(next, struct regcache_rbtree_node, node);
next = rb_next(&rbtree_node->node);
rb_erase(&rbtree_node->node, &rbtree_ctx->root);
kfree(rbtree_node->cache_present);
kfree(rbtree_node->block);
kfree(rbtree_node);
}
/* release the resources */
kfree(map->cache);
map->cache = NULL;
return 0;
}
static int regcache_rbtree_read(struct regmap *map,
unsigned int reg, unsigned int *value)
{
struct regcache_rbtree_node *rbnode;
unsigned int reg_tmp;
rbnode = regcache_rbtree_lookup(map, reg);
if (rbnode) {
reg_tmp = (reg - rbnode->base_reg) / map->reg_stride;
if (!test_bit(reg_tmp, rbnode->cache_present))
return -ENOENT;
*value = regcache_rbtree_get_register(map, rbnode, reg_tmp);
} else {
return -ENOENT;
}
return 0;
}
static int regcache_rbtree_insert_to_block(struct regmap *map,
struct regcache_rbtree_node *rbnode,
unsigned int base_reg,
unsigned int top_reg,
unsigned int reg,
unsigned int value)
{
unsigned int blklen;
unsigned int pos, offset;
unsigned long *present;
u8 *blk;
blklen = (top_reg - base_reg) / map->reg_stride + 1;
pos = (reg - base_reg) / map->reg_stride;
offset = (rbnode->base_reg - base_reg) / map->reg_stride;
blk = krealloc(rbnode->block,
blklen * map->cache_word_size,
GFP_KERNEL);
if (!blk)
return -ENOMEM;
present = krealloc(rbnode->cache_present,
BITS_TO_LONGS(blklen) * sizeof(*present), GFP_KERNEL);
if (!present) {
kfree(blk);
return -ENOMEM;
}
/* insert the register value in the correct place in the rbnode block */
if (pos == 0) {
memmove(blk + offset * map->cache_word_size,
blk, rbnode->blklen * map->cache_word_size);
bitmap_shift_left(present, present, offset, blklen);
}
/* update the rbnode block, its size and the base register */
rbnode->block = blk;
rbnode->blklen = blklen;
rbnode->base_reg = base_reg;
rbnode->cache_present = present;
regcache_rbtree_set_register(map, rbnode, pos, value);
return 0;
}
static struct regcache_rbtree_node *
regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg)
{
struct regcache_rbtree_node *rbnode;
const struct regmap_range *range;
int i;
rbnode = kzalloc(sizeof(*rbnode), GFP_KERNEL);
if (!rbnode)
return NULL;
/* If there is a read table then use it to guess at an allocation */
if (map->rd_table) {
for (i = 0; i < map->rd_table->n_yes_ranges; i++) {
if (regmap_reg_in_range(reg,
&map->rd_table->yes_ranges[i]))
break;
}
if (i != map->rd_table->n_yes_ranges) {
range = &map->rd_table->yes_ranges[i];
rbnode->blklen = (range->range_max - range->range_min) /
map->reg_stride + 1;
rbnode->base_reg = range->range_min;
}
}
if (!rbnode->blklen) {
rbnode->blklen = 1;
rbnode->base_reg = reg;
}
rbnode->block = kmalloc(rbnode->blklen * map->cache_word_size,
GFP_KERNEL);
if (!rbnode->block)
goto err_free;
rbnode->cache_present = kzalloc(BITS_TO_LONGS(rbnode->blklen) *
sizeof(*rbnode->cache_present), GFP_KERNEL);
if (!rbnode->cache_present)
goto err_free_block;
return rbnode;
err_free_block:
kfree(rbnode->block);
err_free:
kfree(rbnode);
return NULL;
}
static int regcache_rbtree_write(struct regmap *map, unsigned int reg,
unsigned int value)
{
struct regcache_rbtree_ctx *rbtree_ctx;
struct regcache_rbtree_node *rbnode, *rbnode_tmp;
struct rb_node *node;
unsigned int reg_tmp;
int ret;
rbtree_ctx = map->cache;
/* if we can't locate it in the cached rbnode we'll have
* to traverse the rbtree looking for it.
*/
rbnode = regcache_rbtree_lookup(map, reg);
if (rbnode) {
reg_tmp = (reg - rbnode->base_reg) / map->reg_stride;
regcache_rbtree_set_register(map, rbnode, reg_tmp, value);
} else {
unsigned int base_reg, top_reg;
unsigned int new_base_reg, new_top_reg;
unsigned int min, max;
unsigned int max_dist;
max_dist = map->reg_stride * sizeof(*rbnode_tmp) /
map->cache_word_size;
if (reg < max_dist)
min = 0;
else
min = reg - max_dist;
max = reg + max_dist;
/* look for an adjacent register to the one we are about to add */
for (node = rb_first(&rbtree_ctx->root); node;
node = rb_next(node)) {
rbnode_tmp = rb_entry(node, struct regcache_rbtree_node,
node);
regcache_rbtree_get_base_top_reg(map, rbnode_tmp,
&base_reg, &top_reg);
if (base_reg <= max && top_reg >= min) {
new_base_reg = min(reg, base_reg);
new_top_reg = max(reg, top_reg);
} else {
continue;
}
ret = regcache_rbtree_insert_to_block(map, rbnode_tmp,
new_base_reg,
new_top_reg, reg,
value);
if (ret)
return ret;
rbtree_ctx->cached_rbnode = rbnode_tmp;
return 0;
}
/* We did not manage to find a place to insert it in
* an existing block so create a new rbnode.
*/
rbnode = regcache_rbtree_node_alloc(map, reg);
if (!rbnode)
return -ENOMEM;
regcache_rbtree_set_register(map, rbnode,
reg - rbnode->base_reg, value);
regcache_rbtree_insert(map, &rbtree_ctx->root, rbnode);
rbtree_ctx->cached_rbnode = rbnode;
}
return 0;
}
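/*
 * Example (illustrative): with reg_stride = 1, a node already covering
 * 0x10..0x1f and a write to 0x20 within max_dist, the branch above grows
 * the existing block to 0x10..0x20 instead of allocating a new
 * single-register node, keeping the tree compact.
 */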
static int regcache_rbtree_sync(struct regmap *map, unsigned int min,
unsigned int max)
{
struct regcache_rbtree_ctx *rbtree_ctx;
struct rb_node *node;
struct regcache_rbtree_node *rbnode;
unsigned int base_reg, top_reg;
unsigned int start, end;
int ret;
rbtree_ctx = map->cache;
for (node = rb_first(&rbtree_ctx->root); node; node = rb_next(node)) {
rbnode = rb_entry(node, struct regcache_rbtree_node, node);
regcache_rbtree_get_base_top_reg(map, rbnode, &base_reg,
&top_reg);
if (base_reg > max)
break;
if (top_reg < min)
continue;
if (min > base_reg)
start = (min - base_reg) / map->reg_stride;
else
start = 0;
if (max < top_reg)
end = (max - base_reg) / map->reg_stride + 1;
else
end = rbnode->blklen;
ret = regcache_sync_block(map, rbnode->block,
rbnode->cache_present,
rbnode->base_reg, start, end);
if (ret != 0)
return ret;
}
return regmap_async_complete(map);
}
static int regcache_rbtree_drop(struct regmap *map, unsigned int min,
unsigned int max)
{
struct regcache_rbtree_ctx *rbtree_ctx;
struct regcache_rbtree_node *rbnode;
struct rb_node *node;
unsigned int base_reg, top_reg;
unsigned int start, end;
rbtree_ctx = map->cache;
for (node = rb_first(&rbtree_ctx->root); node; node = rb_next(node)) {
rbnode = rb_entry(node, struct regcache_rbtree_node, node);
regcache_rbtree_get_base_top_reg(map, rbnode, &base_reg,
&top_reg);
if (base_reg > max)
break;
if (top_reg < min)
continue;
if (min > base_reg)
start = (min - base_reg) / map->reg_stride;
else
start = 0;
if (max < top_reg)
end = (max - base_reg) / map->reg_stride + 1;
else
end = rbnode->blklen;
bitmap_clear(rbnode->cache_present, start, end - start);
}
return 0;
}
struct regcache_ops regcache_rbtree_ops = {
.type = REGCACHE_RBTREE,
.name = "rbtree",
.init = regcache_rbtree_init,
.exit = regcache_rbtree_exit,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = rbtree_debugfs_init,
#endif
.read = regcache_rbtree_read,
.write = regcache_rbtree_write,
.sync = regcache_rbtree_sync,
.drop = regcache_rbtree_drop,
};

716
drivers/base/regmap/regcache.c Normal file
View file

@ -0,0 +1,716 @@
/*
* Register cache access API
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/device.h>
#include <trace/events/regmap.h>
#include <linux/bsearch.h>
#include <linux/sort.h>
#include "internal.h"
static const struct regcache_ops *cache_types[] = {
&regcache_rbtree_ops,
&regcache_lzo_ops,
&regcache_flat_ops,
};
static int regcache_hw_init(struct regmap *map)
{
int i, j;
int ret;
int count;
unsigned int val;
void *tmp_buf;
if (!map->num_reg_defaults_raw)
return -EINVAL;
if (!map->reg_defaults_raw) {
u32 cache_bypass = map->cache_bypass;
dev_warn(map->dev, "No cache defaults, reading back from HW\n");
/* Bypass cache accesses until the data has been read back from the HW */
map->cache_bypass = 1;
tmp_buf = kmalloc(map->cache_size_raw, GFP_KERNEL);
if (!tmp_buf)
return -ENOMEM;
ret = regmap_raw_read(map, 0, tmp_buf,
map->num_reg_defaults_raw);
map->cache_bypass = cache_bypass;
if (ret < 0) {
kfree(tmp_buf);
return ret;
}
map->reg_defaults_raw = tmp_buf;
map->cache_free = 1;
}
/* calculate the size of reg_defaults */
for (count = 0, i = 0; i < map->num_reg_defaults_raw; i++) {
val = regcache_get_val(map, map->reg_defaults_raw, i);
if (regmap_volatile(map, i * map->reg_stride))
continue;
count++;
}
map->reg_defaults = kmalloc(count * sizeof(struct reg_default),
GFP_KERNEL);
if (!map->reg_defaults) {
ret = -ENOMEM;
goto err_free;
}
/* fill the reg_defaults */
map->num_reg_defaults = count;
for (i = 0, j = 0; i < map->num_reg_defaults_raw; i++) {
val = regcache_get_val(map, map->reg_defaults_raw, i);
if (regmap_volatile(map, i * map->reg_stride))
continue;
map->reg_defaults[j].reg = i * map->reg_stride;
map->reg_defaults[j].def = val;
j++;
}
return 0;
err_free:
if (map->cache_free)
kfree(map->reg_defaults_raw);
return ret;
}
int regcache_init(struct regmap *map, const struct regmap_config *config)
{
int ret;
int i;
void *tmp_buf;
for (i = 0; i < config->num_reg_defaults; i++)
if (config->reg_defaults[i].reg % map->reg_stride)
return -EINVAL;
if (map->cache_type == REGCACHE_NONE) {
map->cache_bypass = true;
return 0;
}
for (i = 0; i < ARRAY_SIZE(cache_types); i++)
if (cache_types[i]->type == map->cache_type)
break;
if (i == ARRAY_SIZE(cache_types)) {
dev_err(map->dev, "Could not match cache type: %d\n",
map->cache_type);
return -EINVAL;
}
map->num_reg_defaults = config->num_reg_defaults;
map->num_reg_defaults_raw = config->num_reg_defaults_raw;
map->reg_defaults_raw = config->reg_defaults_raw;
map->cache_word_size = DIV_ROUND_UP(config->val_bits, 8);
map->cache_size_raw = map->cache_word_size * config->num_reg_defaults_raw;
map->cache = NULL;
map->cache_ops = cache_types[i];
if (!map->cache_ops->read ||
!map->cache_ops->write ||
!map->cache_ops->name)
return -EINVAL;
/* We still need to ensure that the reg_defaults
* won't vanish from under us. We'll need to make
* a copy of it.
*/
if (config->reg_defaults) {
if (!map->num_reg_defaults)
return -EINVAL;
tmp_buf = kmemdup(config->reg_defaults, map->num_reg_defaults *
sizeof(struct reg_default), GFP_KERNEL);
if (!tmp_buf)
return -ENOMEM;
map->reg_defaults = tmp_buf;
} else if (map->num_reg_defaults_raw) {
/* Some devices such as PMICs don't have cache defaults,
* we cope with this by reading back the HW registers and
* crafting the cache defaults by hand.
*/
ret = regcache_hw_init(map);
if (ret < 0)
return ret;
}
if (!map->max_register)
map->max_register = map->num_reg_defaults_raw;
if (map->cache_ops->init) {
dev_dbg(map->dev, "Initializing %s cache\n",
map->cache_ops->name);
ret = map->cache_ops->init(map);
if (ret)
goto err_free;
}
return 0;
err_free:
kfree(map->reg_defaults);
if (map->cache_free)
kfree(map->reg_defaults_raw);
return ret;
}
void regcache_exit(struct regmap *map)
{
if (map->cache_type == REGCACHE_NONE)
return;
BUG_ON(!map->cache_ops);
kfree(map->reg_defaults);
if (map->cache_free)
kfree(map->reg_defaults_raw);
if (map->cache_ops->exit) {
dev_dbg(map->dev, "Destroying %s cache\n",
map->cache_ops->name);
map->cache_ops->exit(map);
}
}
/**
* regcache_read: Fetch the value of a given register from the cache.
*
* @map: map to configure.
* @reg: The register index.
* @value: The value to be returned.
*
* Return a negative value on failure, 0 on success.
*/
int regcache_read(struct regmap *map,
unsigned int reg, unsigned int *value)
{
int ret;
if (map->cache_type == REGCACHE_NONE)
return -ENOSYS;
BUG_ON(!map->cache_ops);
if (!regmap_volatile(map, reg)) {
ret = map->cache_ops->read(map, reg, value);
if (ret == 0)
trace_regmap_reg_read_cache(map->dev, reg, *value);
return ret;
}
return -EINVAL;
}
/**
* regcache_write: Set the value of a given register in the cache.
*
* @map: map to configure.
* @reg: The register index.
* @value: The new register value.
*
* Return a negative value on failure, 0 on success.
*/
int regcache_write(struct regmap *map,
unsigned int reg, unsigned int value)
{
if (map->cache_type == REGCACHE_NONE)
return 0;
BUG_ON(!map->cache_ops);
if (!regmap_volatile(map, reg))
return map->cache_ops->write(map, reg, value);
return 0;
}
static int regcache_default_sync(struct regmap *map, unsigned int min,
unsigned int max)
{
unsigned int reg;
for (reg = min; reg <= max; reg += map->reg_stride) {
unsigned int val;
int ret;
if (regmap_volatile(map, reg) ||
!regmap_writeable(map, reg))
continue;
ret = regcache_read(map, reg, &val);
if (ret)
return ret;
/* Is this the hardware default? If so skip. */
ret = regcache_lookup_reg(map, reg);
if (ret >= 0 && val == map->reg_defaults[ret].def)
continue;
map->cache_bypass = 1;
ret = _regmap_write(map, reg, val);
map->cache_bypass = 0;
if (ret) {
dev_err(map->dev, "Unable to sync register %#x. %d\n",
reg, ret);
return ret;
}
dev_dbg(map->dev, "Synced register %#x, value %#x\n", reg, val);
}
return 0;
}
/**
* regcache_sync: Sync the register cache with the hardware.
*
* @map: map to configure.
*
* Any registers that should not be synced should be marked as
* volatile. In general drivers can choose not to use the provided
* syncing functionality if they so require.
*
* Return a negative value on failure, 0 on success.
*/
int regcache_sync(struct regmap *map)
{
int ret = 0;
unsigned int i;
const char *name;
unsigned int bypass;
BUG_ON(!map->cache_ops);
map->lock(map->lock_arg);
/* Remember the initial bypass state */
bypass = map->cache_bypass;
dev_dbg(map->dev, "Syncing %s cache\n",
map->cache_ops->name);
name = map->cache_ops->name;
trace_regcache_sync(map->dev, name, "start");
if (!map->cache_dirty)
goto out;
map->async = true;
/* Apply any patch first */
map->cache_bypass = 1;
for (i = 0; i < map->patch_regs; i++) {
ret = _regmap_write(map, map->patch[i].reg, map->patch[i].def);
if (ret != 0) {
dev_err(map->dev, "Failed to write %x = %x: %d\n",
map->patch[i].reg, map->patch[i].def, ret);
goto out;
}
}
map->cache_bypass = 0;
if (map->cache_ops->sync)
ret = map->cache_ops->sync(map, 0, map->max_register);
else
ret = regcache_default_sync(map, 0, map->max_register);
if (ret == 0)
map->cache_dirty = false;
out:
/* Restore the bypass state */
map->async = false;
map->cache_bypass = bypass;
map->unlock(map->lock_arg);
regmap_async_complete(map);
trace_regcache_sync(map->dev, name, "stop");
return ret;
}
EXPORT_SYMBOL_GPL(regcache_sync);
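/*
* Illustrative sketch, not part of the original file: the usual
* suspend/resume pattern built on this API. The foo driver and its
* fields are hypothetical.
*
*	static int foo_suspend(struct device *dev)
*	{
*		struct foo *foo = dev_get_drvdata(dev);
*
*		// Power is going away: route writes to the cache only
*		regcache_cache_only(foo->regmap, true);
*		regcache_mark_dirty(foo->regmap);
*		return 0;
*	}
*
*	static int foo_resume(struct device *dev)
*	{
*		struct foo *foo = dev_get_drvdata(dev);
*
*		// Power is back: replay the cached state to the device
*		regcache_cache_only(foo->regmap, false);
*		return regcache_sync(foo->regmap);
*	}
*/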
/**
* regcache_sync_region: Sync part of the register cache with the hardware.
*
* @map: map to sync.
* @min: first register to sync
* @max: last register to sync
*
* Write all non-default register values in the specified region to
* the hardware.
*
* Return a negative value on failure, 0 on success.
*/
int regcache_sync_region(struct regmap *map, unsigned int min,
unsigned int max)
{
int ret = 0;
const char *name;
unsigned int bypass;
BUG_ON(!map->cache_ops);
map->lock(map->lock_arg);
/* Remember the initial bypass state */
bypass = map->cache_bypass;
name = map->cache_ops->name;
dev_dbg(map->dev, "Syncing %s cache from %d-%d\n", name, min, max);
trace_regcache_sync(map->dev, name, "start region");
if (!map->cache_dirty)
goto out;
map->async = true;
if (map->cache_ops->sync)
ret = map->cache_ops->sync(map, min, max);
else
ret = regcache_default_sync(map, min, max);
out:
/* Restore the bypass state */
map->cache_bypass = bypass;
map->async = false;
map->unlock(map->lock_arg);
regmap_async_complete(map);
trace_regcache_sync(map->dev, name, "stop region");
return ret;
}
EXPORT_SYMBOL_GPL(regcache_sync_region);
/**
* regcache_drop_region: Discard part of the register cache
*
* @map: map to operate on
* @min: first register to discard
* @max: last register to discard
*
* Discard part of the register cache.
*
* Return a negative value on failure, 0 on success.
*/
int regcache_drop_region(struct regmap *map, unsigned int min,
unsigned int max)
{
int ret = 0;
if (!map->cache_ops || !map->cache_ops->drop)
return -EINVAL;
map->lock(map->lock_arg);
trace_regcache_drop_region(map->dev, min, max);
ret = map->cache_ops->drop(map, min, max);
map->unlock(map->lock_arg);
return ret;
}
EXPORT_SYMBOL_GPL(regcache_drop_region);
/**
* regcache_cache_only: Put a register map into cache only mode
*
* @map: map to configure
* @enable: flag if writes should only update the cache
*
* When a register map is marked as cache only, writes through the
* register map API will only update the register cache; they will not
* cause any hardware changes. This is useful for allowing portions of
* drivers to act as though the device were functioning as normal when
* it is disabled for power saving reasons.
*/
void regcache_cache_only(struct regmap *map, bool enable)
{
map->lock(map->lock_arg);
WARN_ON(map->cache_bypass && enable);
map->cache_only = enable;
trace_regmap_cache_only(map->dev, enable);
map->unlock(map->lock_arg);
}
EXPORT_SYMBOL_GPL(regcache_cache_only);
/**
* regcache_mark_dirty: Mark the register cache as dirty
*
* @map: map to mark
*
* Mark the register cache as dirty, for example due to the device
* having been powered down for suspend. If the cache is not marked
* as dirty then the cache sync will be suppressed.
*/
void regcache_mark_dirty(struct regmap *map)
{
map->lock(map->lock_arg);
map->cache_dirty = true;
map->unlock(map->lock_arg);
}
EXPORT_SYMBOL_GPL(regcache_mark_dirty);
/**
* regcache_cache_bypass: Put a register map into cache bypass mode
*
* @map: map to configure
* @enable: flag if writes should bypass the cache
*
* When a register map is marked with the cache bypass option, writes
* to the register map API will only update the hardware and not the
* cache. This is useful when syncing the cache back to the
* hardware.
*/
void regcache_cache_bypass(struct regmap *map, bool enable)
{
map->lock(map->lock_arg);
WARN_ON(map->cache_only && enable);
map->cache_bypass = enable;
trace_regmap_cache_bypass(map->dev, enable);
map->unlock(map->lock_arg);
}
EXPORT_SYMBOL_GPL(regcache_cache_bypass);
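/*
* Illustrative sketch, not part of the original file: cache bypass is
* typically wrapped around a write that must hit the hardware without
* disturbing the cached value, such as kicking a self-clearing reset
* bit. FOO_RESET is a hypothetical register.
*
*	regcache_cache_bypass(map, true);
*	regmap_write(map, FOO_RESET, 0x1);	// not recorded in the cache
*	regcache_cache_bypass(map, false);
*/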
bool regcache_set_val(struct regmap *map, void *base, unsigned int idx,
unsigned int val)
{
if (regcache_get_val(map, base, idx) == val)
return true;
/* Use device native format if possible */
if (map->format.format_val) {
map->format.format_val(base + (map->cache_word_size * idx),
val, 0);
return false;
}
switch (map->cache_word_size) {
case 1: {
u8 *cache = base;
cache[idx] = val;
break;
}
case 2: {
u16 *cache = base;
cache[idx] = val;
break;
}
case 4: {
u32 *cache = base;
cache[idx] = val;
break;
}
default:
BUG();
}
return false;
}
unsigned int regcache_get_val(struct regmap *map, const void *base,
unsigned int idx)
{
if (!base)
return -EINVAL;
/* Use device native format if possible */
if (map->format.parse_val)
return map->format.parse_val(regcache_get_val_addr(map, base,
idx));
switch (map->cache_word_size) {
case 1: {
const u8 *cache = base;
return cache[idx];
}
case 2: {
const u16 *cache = base;
return cache[idx];
}
case 4: {
const u32 *cache = base;
return cache[idx];
}
default:
BUG();
}
/* unreachable */
return -1;
}
static int regcache_default_cmp(const void *a, const void *b)
{
const struct reg_default *_a = a;
const struct reg_default *_b = b;
return _a->reg - _b->reg;
}
int regcache_lookup_reg(struct regmap *map, unsigned int reg)
{
struct reg_default key;
struct reg_default *r;
key.reg = reg;
key.def = 0;
r = bsearch(&key, map->reg_defaults, map->num_reg_defaults,
sizeof(struct reg_default), regcache_default_cmp);
if (r)
return r - map->reg_defaults;
else
return -ENOENT;
}
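/*
* Note (added): the bsearch() above relies on map->reg_defaults being
* sorted by ascending register address. regcache_hw_init() builds the
* table in that order, and defaults supplied via regmap_config are
* expected to already be sorted.
*/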
static bool regcache_reg_present(unsigned long *cache_present, unsigned int idx)
{
if (!cache_present)
return true;
return test_bit(idx, cache_present);
}
static int regcache_sync_block_single(struct regmap *map, void *block,
unsigned long *cache_present,
unsigned int block_base,
unsigned int start, unsigned int end)
{
unsigned int i, regtmp, val;
int ret;
for (i = start; i < end; i++) {
regtmp = block_base + (i * map->reg_stride);
if (!regcache_reg_present(cache_present, i))
continue;
val = regcache_get_val(map, block, i);
/* Is this the hardware default? If so skip. */
ret = regcache_lookup_reg(map, regtmp);
if (ret >= 0 && val == map->reg_defaults[ret].def)
continue;
map->cache_bypass = 1;
ret = _regmap_write(map, regtmp, val);
map->cache_bypass = 0;
if (ret != 0) {
dev_err(map->dev, "Unable to sync register %#x. %d\n",
regtmp, ret);
return ret;
}
dev_dbg(map->dev, "Synced register %#x, value %#x\n",
regtmp, val);
}
return 0;
}
static int regcache_sync_block_raw_flush(struct regmap *map, const void **data,
unsigned int base, unsigned int cur)
{
size_t val_bytes = map->format.val_bytes;
int ret, count;
if (*data == NULL)
return 0;
count = (cur - base) / map->reg_stride;
dev_dbg(map->dev, "Writing %zu bytes for %d registers from 0x%x-0x%x\n",
count * val_bytes, count, base, cur - map->reg_stride);
map->cache_bypass = 1;
ret = _regmap_raw_write(map, base, *data, count * val_bytes);
if (ret)
dev_err(map->dev, "Unable to sync registers %#x-%#x. %d\n",
base, cur - map->reg_stride, ret);
map->cache_bypass = 0;
*data = NULL;
return ret;
}
static int regcache_sync_block_raw(struct regmap *map, void *block,
unsigned long *cache_present,
unsigned int block_base, unsigned int start,
unsigned int end)
{
unsigned int i, val;
unsigned int regtmp = 0;
unsigned int base = 0;
const void *data = NULL;
int ret;
for (i = start; i < end; i++) {
regtmp = block_base + (i * map->reg_stride);
if (!regcache_reg_present(cache_present, i)) {
ret = regcache_sync_block_raw_flush(map, &data,
base, regtmp);
if (ret != 0)
return ret;
continue;
}
val = regcache_get_val(map, block, i);
/* Is this the hardware default? If so skip. */
ret = regcache_lookup_reg(map, regtmp);
if (ret >= 0 && val == map->reg_defaults[ret].def) {
ret = regcache_sync_block_raw_flush(map, &data,
base, regtmp);
if (ret != 0)
return ret;
continue;
}
if (!data) {
data = regcache_get_val_addr(map, block, i);
base = regtmp;
}
}
return regcache_sync_block_raw_flush(map, &data, base, regtmp +
map->reg_stride);
}
int regcache_sync_block(struct regmap *map, void *block,
unsigned long *cache_present,
unsigned int block_base, unsigned int start,
unsigned int end)
{
if (regmap_can_raw_write(map) && !map->use_single_rw)
return regcache_sync_block_raw(map, block, cache_present,
block_base, start, end);
else
return regcache_sync_block_single(map, block, cache_present,
block_base, start, end);
}

597
drivers/base/regmap/regmap-debugfs.c Normal file
View file
@ -0,0 +1,597 @@
/*
* Register map access API - debugfs
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Mark Brown <broonie@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/device.h>
#include <linux/list.h>
#include "internal.h"
struct regmap_debugfs_node {
struct regmap *map;
const char *name;
struct list_head link;
};
static struct dentry *regmap_debugfs_root;
static LIST_HEAD(regmap_debugfs_early_list);
static DEFINE_MUTEX(regmap_debugfs_early_lock);
/* Calculate the length of a fixed format */
static size_t regmap_calc_reg_len(int max_val, char *buf, size_t buf_size)
{
snprintf(buf, buf_size, "%x", max_val);
return strlen(buf);
}
static ssize_t regmap_name_read_file(struct file *file,
char __user *user_buf, size_t count,
loff_t *ppos)
{
struct regmap *map = file->private_data;
int ret;
char *buf;
buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!buf)
return -ENOMEM;
ret = snprintf(buf, PAGE_SIZE, "%s\n", map->dev->driver->name);
if (ret < 0) {
kfree(buf);
return ret;
}
ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret);
kfree(buf);
return ret;
}
static const struct file_operations regmap_name_fops = {
.open = simple_open,
.read = regmap_name_read_file,
.llseek = default_llseek,
};
static void regmap_debugfs_free_dump_cache(struct regmap *map)
{
struct regmap_debugfs_off_cache *c;
while (!list_empty(&map->debugfs_off_cache)) {
c = list_first_entry(&map->debugfs_off_cache,
struct regmap_debugfs_off_cache,
list);
list_del(&c->list);
kfree(c);
}
}
/*
* Work out where the start offset maps into register numbers, bearing
* in mind that we suppress hidden registers.
*/
static unsigned int regmap_debugfs_get_dump_start(struct regmap *map,
unsigned int base,
loff_t from,
loff_t *pos)
{
struct regmap_debugfs_off_cache *c = NULL;
loff_t p = 0;
unsigned int i, ret;
unsigned int fpos_offset;
unsigned int reg_offset;
/* Suppress the cache if we're using a subrange */
if (base)
return base;
/*
* If we don't have a cache, build one so we don't have to do a
* linear scan each time.
*/
mutex_lock(&map->cache_lock);
i = base;
if (list_empty(&map->debugfs_off_cache)) {
for (; i <= map->max_register; i += map->reg_stride) {
/* Skip unprinted registers, closing off cache entry */
if (!regmap_readable(map, i) ||
regmap_precious(map, i)) {
if (c) {
c->max = p - 1;
c->max_reg = i - map->reg_stride;
list_add_tail(&c->list,
&map->debugfs_off_cache);
c = NULL;
}
continue;
}
/* No cache entry? Start a new one */
if (!c) {
c = kzalloc(sizeof(*c), GFP_KERNEL);
if (!c) {
regmap_debugfs_free_dump_cache(map);
mutex_unlock(&map->cache_lock);
return base;
}
c->min = p;
c->base_reg = i;
}
p += map->debugfs_tot_len;
}
}
/* Close the last entry off if we didn't scan beyond it */
if (c) {
c->max = p - 1;
c->max_reg = i - map->reg_stride;
list_add_tail(&c->list,
&map->debugfs_off_cache);
}
/*
* This should never happen; we return above if we fail to
* allocate and we should never be in this code if there are
* no registers at all.
*/
WARN_ON(list_empty(&map->debugfs_off_cache));
ret = base;
/* Find the relevant block:offset */
list_for_each_entry(c, &map->debugfs_off_cache, list) {
if (from >= c->min && from <= c->max) {
fpos_offset = from - c->min;
reg_offset = fpos_offset / map->debugfs_tot_len;
*pos = c->min + (reg_offset * map->debugfs_tot_len);
mutex_unlock(&map->cache_lock);
return c->base_reg + (reg_offset * map->reg_stride);
}
*pos = c->max;
ret = c->max_reg;
}
mutex_unlock(&map->cache_lock);
return ret;
}
static inline void regmap_calc_tot_len(struct regmap *map,
void *buf, size_t count)
{
/* Calculate the length of a fixed format */
if (!map->debugfs_tot_len) {
map->debugfs_reg_len = regmap_calc_reg_len(map->max_register,
buf, count);
map->debugfs_val_len = 2 * map->format.val_bytes;
map->debugfs_tot_len = map->debugfs_reg_len +
map->debugfs_val_len + 3; /* : \n */
}
}
static ssize_t regmap_read_debugfs(struct regmap *map, unsigned int from,
unsigned int to, char __user *user_buf,
size_t count, loff_t *ppos)
{
size_t buf_pos = 0;
loff_t p = *ppos;
ssize_t ret;
int i;
char *buf;
unsigned int val, start_reg;
if (*ppos < 0 || !count)
return -EINVAL;
buf = kmalloc(count, GFP_KERNEL);
if (!buf)
return -ENOMEM;
regmap_calc_tot_len(map, buf, count);
/* Work out which register we're starting at */
start_reg = regmap_debugfs_get_dump_start(map, from, *ppos, &p);
for (i = start_reg; i <= to; i += map->reg_stride) {
if (!regmap_readable(map, i))
continue;
if (regmap_precious(map, i))
continue;
/* If we're in the region the user is trying to read */
if (p >= *ppos) {
/* ...but not beyond it */
if (buf_pos + map->debugfs_tot_len > count)
break;
/* Format the register */
snprintf(buf + buf_pos, count - buf_pos, "%.*x: ",
map->debugfs_reg_len, i - from);
buf_pos += map->debugfs_reg_len + 2;
/* Format the value, write all X if we can't read */
ret = regmap_read(map, i, &val);
if (ret == 0)
snprintf(buf + buf_pos, count - buf_pos,
"%.*x", map->debugfs_val_len, val);
else
memset(buf + buf_pos, 'X',
map->debugfs_val_len);
buf_pos += 2 * map->format.val_bytes;
buf[buf_pos++] = '\n';
}
p += map->debugfs_tot_len;
}
ret = buf_pos;
if (copy_to_user(user_buf, buf, buf_pos)) {
ret = -EFAULT;
goto out;
}
*ppos += buf_pos;
out:
kfree(buf);
return ret;
}
static ssize_t regmap_map_read_file(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
{
struct regmap *map = file->private_data;
return regmap_read_debugfs(map, 0, map->max_register, user_buf,
count, ppos);
}
#undef REGMAP_ALLOW_WRITE_DEBUGFS
#ifdef REGMAP_ALLOW_WRITE_DEBUGFS
/*
* This can be dangerous, especially with clients such as PMICs, so no
* real compile-time configuration option is provided for this feature;
* people who want to use it will need to modify the source directly.
*/
static ssize_t regmap_map_write_file(struct file *file,
const char __user *user_buf,
size_t count, loff_t *ppos)
{
char buf[32];
size_t buf_size;
char *start = buf;
unsigned long reg, value;
struct regmap *map = file->private_data;
int ret;
buf_size = min(count, (sizeof(buf)-1));
if (copy_from_user(buf, user_buf, buf_size))
return -EFAULT;
buf[buf_size] = 0;
while (*start == ' ')
start++;
reg = simple_strtoul(start, &start, 16);
while (*start == ' ')
start++;
if (kstrtoul(start, 16, &value))
return -EINVAL;
/* Userspace has been fiddling around behind the kernel's back */
add_taint(TAINT_USER, LOCKDEP_STILL_OK);
ret = regmap_write(map, reg, value);
if (ret < 0)
return ret;
return buf_size;
}
#else
#define regmap_map_write_file NULL
#endif
static const struct file_operations regmap_map_fops = {
.open = simple_open,
.read = regmap_map_read_file,
.write = regmap_map_write_file,
.llseek = default_llseek,
};
static ssize_t regmap_range_read_file(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
{
struct regmap_range_node *range = file->private_data;
struct regmap *map = range->map;
return regmap_read_debugfs(map, range->range_min, range->range_max,
user_buf, count, ppos);
}
static const struct file_operations regmap_range_fops = {
.open = simple_open,
.read = regmap_range_read_file,
.llseek = default_llseek,
};
static ssize_t regmap_reg_ranges_read_file(struct file *file,
char __user *user_buf, size_t count,
loff_t *ppos)
{
struct regmap *map = file->private_data;
struct regmap_debugfs_off_cache *c;
loff_t p = 0;
size_t buf_pos = 0;
char *buf;
char *entry;
int ret;
if (*ppos < 0 || !count)
return -EINVAL;
buf = kmalloc(count, GFP_KERNEL);
if (!buf)
return -ENOMEM;
entry = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!entry) {
kfree(buf);
return -ENOMEM;
}
/* While we are at it, build the register dump cache
* now so the read() operation on the `registers' file
* can benefit from using the cache. We do not care
* about the file position information that is contained
* in the cache, just about the actual register blocks. */
regmap_calc_tot_len(map, buf, count);
regmap_debugfs_get_dump_start(map, 0, *ppos, &p);
/* Reset file pointer as the fixed-format of the `registers'
* file is not compatible with the `range' file */
p = 0;
mutex_lock(&map->cache_lock);
list_for_each_entry(c, &map->debugfs_off_cache, list) {
snprintf(entry, PAGE_SIZE, "%x-%x",
c->base_reg, c->max_reg);
if (p >= *ppos) {
if (buf_pos + 1 + strlen(entry) > count)
break;
snprintf(buf + buf_pos, count - buf_pos,
"%s", entry);
buf_pos += strlen(entry);
buf[buf_pos] = '\n';
buf_pos++;
}
p += strlen(entry) + 1;
}
mutex_unlock(&map->cache_lock);
kfree(entry);
ret = buf_pos;
if (copy_to_user(user_buf, buf, buf_pos)) {
ret = -EFAULT;
goto out_buf;
}
*ppos += buf_pos;
out_buf:
kfree(buf);
return ret;
}
static const struct file_operations regmap_reg_ranges_fops = {
.open = simple_open,
.read = regmap_reg_ranges_read_file,
.llseek = default_llseek,
};
static ssize_t regmap_access_read_file(struct file *file,
char __user *user_buf, size_t count,
loff_t *ppos)
{
int reg_len, tot_len;
size_t buf_pos = 0;
loff_t p = 0;
ssize_t ret;
int i;
struct regmap *map = file->private_data;
char *buf;
if (*ppos < 0 || !count)
return -EINVAL;
buf = kmalloc(count, GFP_KERNEL);
if (!buf)
return -ENOMEM;
/* Calculate the length of a fixed format */
reg_len = regmap_calc_reg_len(map->max_register, buf, count);
tot_len = reg_len + 10; /* ': R W V P\n' */
for (i = 0; i <= map->max_register; i += map->reg_stride) {
/* Ignore registers which are neither readable nor writable */
if (!regmap_readable(map, i) && !regmap_writeable(map, i))
continue;
/* If we're in the region the user is trying to read */
if (p >= *ppos) {
/* ...but not beyond it */
if (buf_pos >= count - 1 - tot_len)
break;
/* Format the register */
snprintf(buf + buf_pos, count - buf_pos,
"%.*x: %c %c %c %c\n",
reg_len, i,
regmap_readable(map, i) ? 'y' : 'n',
regmap_writeable(map, i) ? 'y' : 'n',
regmap_volatile(map, i) ? 'y' : 'n',
regmap_precious(map, i) ? 'y' : 'n');
buf_pos += tot_len;
}
p += tot_len;
}
ret = buf_pos;
if (copy_to_user(user_buf, buf, buf_pos)) {
ret = -EFAULT;
goto out;
}
*ppos += buf_pos;
out:
kfree(buf);
return ret;
}
static const struct file_operations regmap_access_fops = {
.open = simple_open,
.read = regmap_access_read_file,
.llseek = default_llseek,
};
void regmap_debugfs_init(struct regmap *map, const char *name)
{
struct rb_node *next;
struct regmap_range_node *range_node;
const char *devname = "dummy";
/* If we don't have the debugfs root yet, postpone init */
if (!regmap_debugfs_root) {
struct regmap_debugfs_node *node;
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (!node)
return;
node->map = map;
node->name = name;
mutex_lock(&regmap_debugfs_early_lock);
list_add(&node->link, &regmap_debugfs_early_list);
mutex_unlock(&regmap_debugfs_early_lock);
return;
}
INIT_LIST_HEAD(&map->debugfs_off_cache);
mutex_init(&map->cache_lock);
if (map->dev)
devname = dev_name(map->dev);
if (name) {
map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s",
devname, name);
name = map->debugfs_name;
} else {
name = devname;
}
map->debugfs = debugfs_create_dir(name, regmap_debugfs_root);
if (!map->debugfs) {
dev_warn(map->dev, "Failed to create debugfs directory\n");
return;
}
debugfs_create_file("name", 0400, map->debugfs,
map, &regmap_name_fops);
debugfs_create_file("range", 0400, map->debugfs,
map, &regmap_reg_ranges_fops);
if (map->max_register || regmap_readable(map, 0)) {
umode_t registers_mode;
if (IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS))
registers_mode = 0600;
else
registers_mode = 0400;
debugfs_create_file("registers", registers_mode, map->debugfs,
map, &regmap_map_fops);
debugfs_create_file("access", 0400, map->debugfs,
map, &regmap_access_fops);
}
if (map->cache_type) {
debugfs_create_bool("cache_only", 0400, map->debugfs,
&map->cache_only);
debugfs_create_bool("cache_dirty", 0400, map->debugfs,
&map->cache_dirty);
debugfs_create_bool("cache_bypass", 0400, map->debugfs,
&map->cache_bypass);
}
next = rb_first(&map->range_tree);
while (next) {
range_node = rb_entry(next, struct regmap_range_node, node);
if (range_node->name)
debugfs_create_file(range_node->name, 0400,
map->debugfs, range_node,
&regmap_range_fops);
next = rb_next(&range_node->node);
}
if (map->cache_ops && map->cache_ops->debugfs_init)
map->cache_ops->debugfs_init(map);
}
void regmap_debugfs_exit(struct regmap *map)
{
if (map->debugfs) {
debugfs_remove_recursive(map->debugfs);
mutex_lock(&map->cache_lock);
regmap_debugfs_free_dump_cache(map);
mutex_unlock(&map->cache_lock);
kfree(map->debugfs_name);
} else {
struct regmap_debugfs_node *node, *tmp;
mutex_lock(&regmap_debugfs_early_lock);
list_for_each_entry_safe(node, tmp, &regmap_debugfs_early_list,
link) {
if (node->map == map) {
list_del(&node->link);
kfree(node);
}
}
mutex_unlock(&regmap_debugfs_early_lock);
}
}
void regmap_debugfs_initcall(void)
{
struct regmap_debugfs_node *node, *tmp;
regmap_debugfs_root = debugfs_create_dir("regmap", NULL);
if (!regmap_debugfs_root) {
pr_warn("regmap: Failed to create debugfs root\n");
return;
}
mutex_lock(&regmap_debugfs_early_lock);
list_for_each_entry_safe(node, tmp, &regmap_debugfs_early_list, link) {
regmap_debugfs_init(node->map, node->name);
list_del(&node->link);
kfree(node);
}
mutex_unlock(&regmap_debugfs_early_lock);
}
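/*
* Illustrative sketch, not part of the original file: once debugfs is
* mounted, each map registered above appears under
* /sys/kernel/debug/regmap/<dev>[-<name>]/ with the `name', `range',
* `registers' and `access' files created in regmap_debugfs_init().
* For example (device directory name and values are hypothetical):
*
*	# cat /sys/kernel/debug/regmap/0-001a/registers
*	00: 0097
*	01: 0097
*	02: 00ff
*/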

272
drivers/base/regmap/regmap-i2c.c Normal file
View file
@ -0,0 +1,272 @@
/*
* Register map access API - I2C support
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Mark Brown <broonie@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/regmap.h>
#include <linux/i2c.h>
#include <linux/module.h>
static int regmap_smbus_byte_reg_read(void *context, unsigned int reg,
unsigned int *val)
{
struct device *dev = context;
struct i2c_client *i2c = to_i2c_client(dev);
int ret;
if (reg > 0xff)
return -EINVAL;
ret = i2c_smbus_read_byte_data(i2c, reg);
if (ret < 0)
return ret;
*val = ret;
return 0;
}
static int regmap_smbus_byte_reg_write(void *context, unsigned int reg,
unsigned int val)
{
struct device *dev = context;
struct i2c_client *i2c = to_i2c_client(dev);
if (val > 0xff || reg > 0xff)
return -EINVAL;
return i2c_smbus_write_byte_data(i2c, reg, val);
}
static struct regmap_bus regmap_smbus_byte = {
.reg_write = regmap_smbus_byte_reg_write,
.reg_read = regmap_smbus_byte_reg_read,
};
static int regmap_smbus_word_reg_read(void *context, unsigned int reg,
unsigned int *val)
{
struct device *dev = context;
struct i2c_client *i2c = to_i2c_client(dev);
int ret;
if (reg > 0xff)
return -EINVAL;
ret = i2c_smbus_read_word_data(i2c, reg);
if (ret < 0)
return ret;
*val = ret;
return 0;
}
static int regmap_smbus_word_reg_write(void *context, unsigned int reg,
unsigned int val)
{
struct device *dev = context;
struct i2c_client *i2c = to_i2c_client(dev);
if (val > 0xffff || reg > 0xff)
return -EINVAL;
return i2c_smbus_write_word_data(i2c, reg, val);
}
static struct regmap_bus regmap_smbus_word = {
.reg_write = regmap_smbus_word_reg_write,
.reg_read = regmap_smbus_word_reg_read,
};
static int regmap_i2c_write(void *context, const void *data, size_t count)
{
struct device *dev = context;
struct i2c_client *i2c = to_i2c_client(dev);
int ret;
ret = i2c_master_send(i2c, data, count);
if (ret == count)
return 0;
else if (ret < 0)
return ret;
else
return -EIO;
}
static int regmap_i2c_gather_write(void *context,
const void *reg, size_t reg_size,
const void *val, size_t val_size)
{
struct device *dev = context;
struct i2c_client *i2c = to_i2c_client(dev);
struct i2c_msg xfer[2];
int ret;
/* If the I2C controller can't do a gather, tell the core; it
* will substitute a linear write for us.
*/
if (!i2c_check_functionality(i2c->adapter, I2C_FUNC_NOSTART))
return -ENOTSUPP;
xfer[0].addr = i2c->addr;
xfer[0].flags = 0;
xfer[0].len = reg_size;
xfer[0].buf = (void *)reg;
xfer[1].addr = i2c->addr;
xfer[1].flags = I2C_M_NOSTART;
xfer[1].len = val_size;
xfer[1].buf = (void *)val;
ret = i2c_transfer(i2c->adapter, xfer, 2);
if (ret == 2)
return 0;
if (ret < 0)
return ret;
else
return -EIO;
}
static int regmap_i2c_read(void *context,
const void *reg, size_t reg_size,
void *val, size_t val_size)
{
struct device *dev = context;
struct i2c_client *i2c = to_i2c_client(dev);
struct i2c_msg xfer[2];
int ret;
if (i2c->flags & I2C_CLIENT_TEN) {
xfer[0].flags = I2C_M_RD|I2C_CLIENT_TEN;
xfer[0].len = val_size;
xfer[0].buf = (void *)reg;
xfer[0].addr = xfer[0].buf[0] +
(((i2c->addr & 0x7f) << 8) & 0xff00);
xfer[0].buf = val;
ret = i2c_transfer(i2c->adapter, xfer, 1);
if (ret == 1)
return 0;
else if (ret < 0)
return ret;
else
return -EIO;
} else if (i2c->flags & I2C_CLIENT_SPEEDY) {
xfer[0].flags = I2C_M_RD|I2C_CLIENT_SPEEDY;
xfer[0].len = val_size;
xfer[0].buf = (void *)reg;
/*
* The upper 4 bits of the 12-bit speedy slave address are the
* device ID; the lower 8 bits are the device register offset.
*/
xfer[0].addr = xfer[0].buf[0] + ((i2c->addr & 0xf) << 8);
xfer[0].buf = val;
ret = i2c_transfer(i2c->adapter, xfer, 1);
if (ret == 1)
return 0;
else if (ret < 0)
return ret;
else
return -EIO;
} else {
xfer[0].addr = i2c->addr;
xfer[0].flags = 0;
xfer[0].len = reg_size;
xfer[0].buf = (void *)reg;
xfer[1].addr = i2c->addr;
xfer[1].flags = I2C_M_RD;
xfer[1].len = val_size;
xfer[1].buf = val;
ret = i2c_transfer(i2c->adapter, xfer, 2);
if (ret == 2)
return 0;
else if (ret < 0)
return ret;
else
return -EIO;
}
}
static struct regmap_bus regmap_i2c = {
.write = regmap_i2c_write,
.gather_write = regmap_i2c_gather_write,
.read = regmap_i2c_read,
.reg_format_endian_default = REGMAP_ENDIAN_BIG,
.val_format_endian_default = REGMAP_ENDIAN_BIG,
};
static const struct regmap_bus *regmap_get_i2c_bus(struct i2c_client *i2c,
const struct regmap_config *config)
{
if (i2c_check_functionality(i2c->adapter, I2C_FUNC_I2C))
return &regmap_i2c;
else if (config->val_bits == 16 && config->reg_bits == 8 &&
i2c_check_functionality(i2c->adapter,
I2C_FUNC_SMBUS_WORD_DATA))
return &regmap_smbus_word;
else if (config->val_bits == 8 && config->reg_bits == 8 &&
i2c_check_functionality(i2c->adapter,
I2C_FUNC_SMBUS_BYTE_DATA))
return &regmap_smbus_byte;
return ERR_PTR(-ENOTSUPP);
}
/**
* regmap_init_i2c(): Initialise register map
*
* @i2c: Device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer to
* a struct regmap.
*/
struct regmap *regmap_init_i2c(struct i2c_client *i2c,
const struct regmap_config *config)
{
const struct regmap_bus *bus = regmap_get_i2c_bus(i2c, config);
if (IS_ERR(bus))
return ERR_CAST(bus);
return regmap_init(&i2c->dev, bus, &i2c->dev, config);
}
EXPORT_SYMBOL_GPL(regmap_init_i2c);
/**
* devm_regmap_init_i2c(): Initialise managed register map
*
* @i2c: Device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer
* to a struct regmap. The regmap will be automatically freed by the
* device management code.
*/
struct regmap *devm_regmap_init_i2c(struct i2c_client *i2c,
const struct regmap_config *config)
{
const struct regmap_bus *bus = regmap_get_i2c_bus(i2c, config);
if (IS_ERR(bus))
return ERR_CAST(bus);
return devm_regmap_init(&i2c->dev, bus, &i2c->dev, config);
}
EXPORT_SYMBOL_GPL(devm_regmap_init_i2c);
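/*
* Illustrative sketch, not part of the original file: a minimal client
* driver creating a map on top of this bus. The config values and the
* foo driver are hypothetical.
*
*	static const struct regmap_config foo_regmap_config = {
*		.reg_bits = 8,
*		.val_bits = 16,
*		.max_register = 0x7f,
*		.cache_type = REGCACHE_RBTREE,
*	};
*
*	static int foo_probe(struct i2c_client *i2c,
*			     const struct i2c_device_id *id)
*	{
*		struct regmap *map;
*
*		map = devm_regmap_init_i2c(i2c, &foo_regmap_config);
*		if (IS_ERR(map))
*			return PTR_ERR(map);
*		return regmap_write(map, 0x00, 0x1234);
*	}
*/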
MODULE_LICENSE("GPL");

598
drivers/base/regmap/regmap-irq.c Normal file
View file
@ -0,0 +1,598 @@
/*
* regmap based irq_chip
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Mark Brown <broonie@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/device.h>
#include <linux/export.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#include "internal.h"
struct regmap_irq_chip_data {
struct mutex lock;
struct irq_chip irq_chip;
struct regmap *map;
const struct regmap_irq_chip *chip;
int irq_base;
struct irq_domain *domain;
int irq;
int wake_count;
void *status_reg_buf;
unsigned int *status_buf;
unsigned int *mask_buf;
unsigned int *mask_buf_def;
unsigned int *wake_buf;
unsigned int irq_reg_stride;
};
static inline const
struct regmap_irq *irq_to_regmap_irq(struct regmap_irq_chip_data *data,
int irq)
{
return &data->chip->irqs[irq];
}
static void regmap_irq_lock(struct irq_data *data)
{
struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
mutex_lock(&d->lock);
}
static void regmap_irq_sync_unlock(struct irq_data *data)
{
struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
struct regmap *map = d->map;
int i, ret;
u32 reg;
if (d->chip->runtime_pm) {
ret = pm_runtime_get_sync(map->dev);
if (ret < 0)
dev_err(map->dev, "IRQ sync failed to resume: %d\n",
ret);
}
/*
* If there's been a change in the mask write it back to the
* hardware. We rely on the use of the regmap core cache to
* suppress pointless writes.
*/
for (i = 0; i < d->chip->num_regs; i++) {
reg = d->chip->mask_base +
(i * map->reg_stride * d->irq_reg_stride);
if (d->chip->mask_invert)
ret = regmap_update_bits(d->map, reg,
d->mask_buf_def[i], ~d->mask_buf[i]);
else
ret = regmap_update_bits(d->map, reg,
d->mask_buf_def[i], d->mask_buf[i]);
if (ret != 0)
dev_err(d->map->dev, "Failed to sync masks in %x\n",
reg);
reg = d->chip->wake_base +
(i * map->reg_stride * d->irq_reg_stride);
if (d->wake_buf) {
if (d->chip->wake_invert)
ret = regmap_update_bits(d->map, reg,
d->mask_buf_def[i],
~d->wake_buf[i]);
else
ret = regmap_update_bits(d->map, reg,
d->mask_buf_def[i],
d->wake_buf[i]);
if (ret != 0)
dev_err(d->map->dev,
"Failed to sync wakes in %x: %d\n",
reg, ret);
}
if (!d->chip->init_ack_masked)
continue;
/*
* Ack all the masked interrupts unconditionally; a masked
* interrupt that is never acked would otherwise be ignored by
* the IRQ handler and could trigger an interrupt storm.
*/
if (d->mask_buf[i] && (d->chip->ack_base || d->chip->use_ack)) {
reg = d->chip->ack_base +
(i * map->reg_stride * d->irq_reg_stride);
ret = regmap_write(map, reg, d->mask_buf[i]);
if (ret != 0)
dev_err(d->map->dev, "Failed to ack 0x%x: %d\n",
reg, ret);
}
}
if (d->chip->runtime_pm)
pm_runtime_put(map->dev);
/* If we've changed our wakeup count propagate it to the parent */
if (d->wake_count < 0)
for (i = d->wake_count; i < 0; i++)
irq_set_irq_wake(d->irq, 0);
else if (d->wake_count > 0)
for (i = 0; i < d->wake_count; i++)
irq_set_irq_wake(d->irq, 1);
d->wake_count = 0;
mutex_unlock(&d->lock);
}
static void regmap_irq_enable(struct irq_data *data)
{
struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
struct regmap *map = d->map;
const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq);
d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~irq_data->mask;
}
static void regmap_irq_disable(struct irq_data *data)
{
struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
struct regmap *map = d->map;
const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq);
d->mask_buf[irq_data->reg_offset / map->reg_stride] |= irq_data->mask;
}
static int regmap_irq_set_wake(struct irq_data *data, unsigned int on)
{
struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);
struct regmap *map = d->map;
const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq);
if (on) {
if (d->wake_buf)
d->wake_buf[irq_data->reg_offset / map->reg_stride]
&= ~irq_data->mask;
d->wake_count++;
} else {
if (d->wake_buf)
d->wake_buf[irq_data->reg_offset / map->reg_stride]
|= irq_data->mask;
d->wake_count--;
}
return 0;
}
static const struct irq_chip regmap_irq_chip = {
.irq_bus_lock = regmap_irq_lock,
.irq_bus_sync_unlock = regmap_irq_sync_unlock,
.irq_disable = regmap_irq_disable,
.irq_enable = regmap_irq_enable,
.irq_set_wake = regmap_irq_set_wake,
};
static irqreturn_t regmap_irq_thread(int irq, void *d)
{
struct regmap_irq_chip_data *data = d;
const struct regmap_irq_chip *chip = data->chip;
struct regmap *map = data->map;
int ret, i;
bool handled = false;
u32 reg;
if (chip->runtime_pm) {
ret = pm_runtime_get_sync(map->dev);
if (ret < 0) {
dev_err(map->dev, "IRQ thread failed to resume: %d\n",
ret);
pm_runtime_put(map->dev);
return IRQ_NONE;
}
}
/*
* Read in the statuses, using a single bulk read if possible
* in order to reduce the I/O overheads.
*/
if (!map->use_single_rw && map->reg_stride == 1 &&
data->irq_reg_stride == 1) {
u8 *buf8 = data->status_reg_buf;
u16 *buf16 = data->status_reg_buf;
u32 *buf32 = data->status_reg_buf;
BUG_ON(!data->status_reg_buf);
ret = regmap_bulk_read(map, chip->status_base,
data->status_reg_buf,
chip->num_regs);
if (ret != 0) {
dev_err(map->dev, "Failed to read IRQ status: %d\n",
ret);
return IRQ_NONE;
}
for (i = 0; i < data->chip->num_regs; i++) {
switch (map->format.val_bytes) {
case 1:
data->status_buf[i] = buf8[i];
break;
case 2:
data->status_buf[i] = buf16[i];
break;
case 4:
data->status_buf[i] = buf32[i];
break;
default:
BUG();
return IRQ_NONE;
}
}
} else {
for (i = 0; i < data->chip->num_regs; i++) {
ret = regmap_read(map, chip->status_base +
(i * map->reg_stride
* data->irq_reg_stride),
&data->status_buf[i]);
if (ret != 0) {
dev_err(map->dev,
"Failed to read IRQ status: %d\n",
ret);
if (chip->runtime_pm)
pm_runtime_put(map->dev);
return IRQ_NONE;
}
}
}
/*
* Ignore masked IRQs and ack if we need to; we ack early so
* there is no race between handling and acknowledging the
* interrupt. We assume that typically few of the interrupts
* will fire simultaneously so don't worry about overhead from
* doing a write per register.
*/
for (i = 0; i < data->chip->num_regs; i++) {
data->status_buf[i] &= ~data->mask_buf[i];
if (data->status_buf[i] && (chip->ack_base || chip->use_ack)) {
reg = chip->ack_base +
(i * map->reg_stride * data->irq_reg_stride);
ret = regmap_write(map, reg, data->status_buf[i]);
if (ret != 0)
dev_err(map->dev, "Failed to ack 0x%x: %d\n",
reg, ret);
}
}
for (i = 0; i < chip->num_irqs; i++) {
if (data->status_buf[chip->irqs[i].reg_offset /
map->reg_stride] & chip->irqs[i].mask) {
handle_nested_irq(irq_find_mapping(data->domain, i));
handled = true;
}
}
if (chip->runtime_pm)
pm_runtime_put(map->dev);
if (handled)
return IRQ_HANDLED;
else
return IRQ_NONE;
}
static int regmap_irq_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
struct regmap_irq_chip_data *data = h->host_data;
irq_set_chip_data(virq, data);
irq_set_chip(virq, &data->irq_chip);
irq_set_nested_thread(virq, 1);
/* ARM needs us to explicitly flag the IRQ as valid
* and will set it noprobe when we do so. */
#ifdef CONFIG_ARM
set_irq_flags(virq, IRQF_VALID);
#else
irq_set_noprobe(virq);
#endif
return 0;
}
static struct irq_domain_ops regmap_domain_ops = {
.map = regmap_irq_map,
.xlate = irq_domain_xlate_twocell,
};
/**
* regmap_add_irq_chip(): Use standard regmap IRQ controller handling
*
* @map: The regmap for the device.
* @irq: The IRQ the device uses to signal interrupts.
* @irq_flags: The IRQF_ flags to use for the primary interrupt.
* @irq_base: Allocate at specific IRQ number if irq_base > 0.
* @chip: Configuration for the interrupt controller.
* @data: Runtime data structure for the controller, allocated on success.
*
* Returns 0 on success or an errno on failure.
*
* In order for this to be efficient the chip really should use a
* register cache. The chip driver is responsible for restoring the
* register values used by the IRQ controller over suspend and resume.
*/
int regmap_add_irq_chip(struct regmap *map, int irq, int irq_flags,
int irq_base, const struct regmap_irq_chip *chip,
struct regmap_irq_chip_data **data)
{
struct regmap_irq_chip_data *d;
int i;
int ret = -ENOMEM;
u32 reg;
if (chip->num_regs <= 0)
return -EINVAL;
for (i = 0; i < chip->num_irqs; i++) {
if (chip->irqs[i].reg_offset % map->reg_stride)
return -EINVAL;
if (chip->irqs[i].reg_offset / map->reg_stride >=
chip->num_regs)
return -EINVAL;
}
if (irq_base) {
irq_base = irq_alloc_descs(irq_base, 0, chip->num_irqs, 0);
if (irq_base < 0) {
dev_warn(map->dev, "Failed to allocate IRQs: %d\n",
irq_base);
return irq_base;
}
}
d = kzalloc(sizeof(*d), GFP_KERNEL);
if (!d)
return -ENOMEM;
d->status_buf = kzalloc(sizeof(unsigned int) * chip->num_regs,
GFP_KERNEL);
if (!d->status_buf)
goto err_alloc;
d->mask_buf = kzalloc(sizeof(unsigned int) * chip->num_regs,
GFP_KERNEL);
if (!d->mask_buf)
goto err_alloc;
d->mask_buf_def = kzalloc(sizeof(unsigned int) * chip->num_regs,
GFP_KERNEL);
if (!d->mask_buf_def)
goto err_alloc;
if (chip->wake_base) {
d->wake_buf = kzalloc(sizeof(unsigned int) * chip->num_regs,
GFP_KERNEL);
if (!d->wake_buf)
goto err_alloc;
}
d->irq_chip = regmap_irq_chip;
d->irq_chip.name = chip->name;
d->irq = irq;
d->map = map;
d->chip = chip;
d->irq_base = irq_base;
if (chip->irq_reg_stride)
d->irq_reg_stride = chip->irq_reg_stride;
else
d->irq_reg_stride = 1;
if (!map->use_single_rw && map->reg_stride == 1 &&
d->irq_reg_stride == 1) {
d->status_reg_buf = kmalloc(map->format.val_bytes *
chip->num_regs, GFP_KERNEL);
if (!d->status_reg_buf)
goto err_alloc;
}
mutex_init(&d->lock);
for (i = 0; i < chip->num_irqs; i++)
d->mask_buf_def[chip->irqs[i].reg_offset / map->reg_stride]
|= chip->irqs[i].mask;
/* Mask all the interrupts by default */
for (i = 0; i < chip->num_regs; i++) {
d->mask_buf[i] = d->mask_buf_def[i];
reg = chip->mask_base +
(i * map->reg_stride * d->irq_reg_stride);
if (chip->mask_invert)
ret = regmap_update_bits(map, reg,
d->mask_buf[i], ~d->mask_buf[i]);
else
ret = regmap_update_bits(map, reg,
d->mask_buf[i], d->mask_buf[i]);
if (ret != 0) {
dev_err(map->dev, "Failed to set masks in 0x%x: %d\n",
reg, ret);
goto err_alloc;
}
if (!chip->init_ack_masked)
continue;
/* Ack masked but set interrupts */
reg = chip->status_base +
(i * map->reg_stride * d->irq_reg_stride);
ret = regmap_read(map, reg, &d->status_buf[i]);
if (ret != 0) {
dev_err(map->dev, "Failed to read IRQ status: %d\n",
ret);
goto err_alloc;
}
if (d->status_buf[i] && (chip->ack_base || chip->use_ack)) {
reg = chip->ack_base +
(i * map->reg_stride * d->irq_reg_stride);
ret = regmap_write(map, reg,
d->status_buf[i] & d->mask_buf[i]);
if (ret != 0) {
dev_err(map->dev, "Failed to ack 0x%x: %d\n",
reg, ret);
goto err_alloc;
}
}
}
/* Wake is disabled by default */
if (d->wake_buf) {
for (i = 0; i < chip->num_regs; i++) {
d->wake_buf[i] = d->mask_buf_def[i];
reg = chip->wake_base +
(i * map->reg_stride * d->irq_reg_stride);
if (chip->wake_invert)
ret = regmap_update_bits(map, reg,
d->mask_buf_def[i],
0);
else
ret = regmap_update_bits(map, reg,
d->mask_buf_def[i],
d->wake_buf[i]);
if (ret != 0) {
dev_err(map->dev, "Failed to set masks in 0x%x: %d\n",
reg, ret);
goto err_alloc;
}
}
}
if (irq_base)
d->domain = irq_domain_add_legacy(map->dev->of_node,
chip->num_irqs, irq_base, 0,
&regmap_domain_ops, d);
else
d->domain = irq_domain_add_linear(map->dev->of_node,
chip->num_irqs,
&regmap_domain_ops, d);
if (!d->domain) {
dev_err(map->dev, "Failed to create IRQ domain\n");
ret = -ENOMEM;
goto err_alloc;
}
ret = request_threaded_irq(irq, NULL, regmap_irq_thread, irq_flags,
chip->name, d);
if (ret != 0) {
dev_err(map->dev, "Failed to request IRQ %d for %s: %d\n",
irq, chip->name, ret);
goto err_domain;
}
*data = d;
return 0;
err_domain:
/* Should really dispose of the domain but... */
err_alloc:
kfree(d->wake_buf);
kfree(d->mask_buf_def);
kfree(d->mask_buf);
kfree(d->status_buf);
kfree(d->status_reg_buf);
kfree(d);
return ret;
}
EXPORT_SYMBOL_GPL(regmap_add_irq_chip);
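/*
* Illustrative sketch, not part of the original file: how a chip driver
* might describe its interrupt registers to this code. All register
* addresses, masks and names are hypothetical.
*
*	static const struct regmap_irq foo_irqs[] = {
*		{ .reg_offset = 0, .mask = BIT(0) },	// "ready"
*		{ .reg_offset = 0, .mask = BIT(1) },	// "error"
*	};
*
*	static const struct regmap_irq_chip foo_irq_chip = {
*		.name = "foo",
*		.status_base = 0x10,
*		.mask_base = 0x11,
*		.ack_base = 0x12,
*		.num_regs = 1,
*		.irqs = foo_irqs,
*		.num_irqs = ARRAY_SIZE(foo_irqs),
*	};
*
*	ret = regmap_add_irq_chip(map, irq,
*				  IRQF_TRIGGER_LOW | IRQF_ONESHOT,
*				  0, &foo_irq_chip, &irq_data);
*/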
/**
* regmap_del_irq_chip(): Stop interrupt handling for a regmap IRQ chip
*
* @irq: Primary IRQ for the device
* @d: regmap_irq_chip_data allocated by regmap_add_irq_chip()
*/
void regmap_del_irq_chip(int irq, struct regmap_irq_chip_data *d)
{
if (!d)
return;
free_irq(irq, d);
irq_domain_remove(d->domain);
kfree(d->wake_buf);
kfree(d->mask_buf_def);
kfree(d->mask_buf);
kfree(d->status_reg_buf);
kfree(d->status_buf);
kfree(d);
}
EXPORT_SYMBOL_GPL(regmap_del_irq_chip);
/**
* regmap_irq_chip_get_base(): Retrieve interrupt base for a regmap IRQ chip
*
* Useful for drivers to request their own IRQs.
*
* @data: regmap_irq controller to operate on.
*/
int regmap_irq_chip_get_base(struct regmap_irq_chip_data *data)
{
WARN_ON(!data->irq_base);
return data->irq_base;
}
EXPORT_SYMBOL_GPL(regmap_irq_chip_get_base);
/**
* regmap_irq_get_virq(): Map an interrupt on a chip to a virtual IRQ
*
* Useful for drivers to request their own IRQs.
*
* @data: regmap_irq controller to operate on.
* @irq: index of the interrupt requested in the chip IRQs
*/
int regmap_irq_get_virq(struct regmap_irq_chip_data *data, int irq)
{
/* Handle holes in the IRQ list */
if (!data->chip->irqs[irq].mask)
return -EINVAL;
return irq_create_mapping(data->domain, irq);
}
EXPORT_SYMBOL_GPL(regmap_irq_get_virq);
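/*
* Illustrative sketch, not part of the original file: a function driver
* maps one of the chip's interrupt indexes to a virtual IRQ and
* requests it as usual. FOO_IRQ_READY and foo_ready_handler() are
* hypothetical.
*
*	int virq = regmap_irq_get_virq(irq_data, FOO_IRQ_READY);
*
*	if (virq < 0)
*		return virq;
*	ret = request_threaded_irq(virq, NULL, foo_ready_handler,
*				   IRQF_ONESHOT, "foo-ready", foo);
*/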
/**
* regmap_irq_get_domain(): Retrieve the irq_domain for the chip
*
* Useful for drivers to request their own IRQs and for integration
* with subsystems. For ease of integration NULL is accepted as a
* domain, allowing devices to just call this even if no domain is
* allocated.
*
* @data: regmap_irq controller to operate on.
*/
struct irq_domain *regmap_irq_get_domain(struct regmap_irq_chip_data *data)
{
if (data)
return data->domain;
else
return NULL;
}
EXPORT_SYMBOL_GPL(regmap_irq_get_domain);

350
drivers/base/regmap/regmap-mmio.c Normal file
View file
@ -0,0 +1,350 @@
/*
* Register map access API - MMIO support
*
* Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/regmap.h>
#include <linux/slab.h>
struct regmap_mmio_context {
void __iomem *regs;
unsigned reg_bytes;
unsigned val_bytes;
unsigned pad_bytes;
struct clk *clk;
};
static inline void regmap_mmio_regsize_check(size_t reg_size)
{
switch (reg_size) {
case 1:
case 2:
case 4:
#ifdef CONFIG_64BIT
case 8:
#endif
break;
default:
BUG();
}
}
static int regmap_mmio_regbits_check(size_t reg_bits)
{
switch (reg_bits) {
case 8:
case 16:
case 32:
#ifdef CONFIG_64BIT
case 64:
#endif
return 0;
default:
return -EINVAL;
}
}
static inline void regmap_mmio_count_check(size_t count, u32 offset)
{
BUG_ON(count <= offset);
}
static inline unsigned int
regmap_mmio_get_offset(const void *reg, size_t reg_size)
{
switch (reg_size) {
case 1:
return *(u8 *)reg;
case 2:
return *(u16 *)reg;
case 4:
return *(u32 *)reg;
#ifdef CONFIG_64BIT
case 8:
return *(u64 *)reg;
#endif
default:
BUG();
}
}
static int regmap_mmio_gather_write(void *context,
const void *reg, size_t reg_size,
const void *val, size_t val_size)
{
struct regmap_mmio_context *ctx = context;
unsigned int offset;
int ret;
regmap_mmio_regsize_check(reg_size);
if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
}
offset = regmap_mmio_get_offset(reg, reg_size);
while (val_size) {
switch (ctx->val_bytes) {
case 1:
writeb(*(u8 *)val, ctx->regs + offset);
break;
case 2:
writew(*(u16 *)val, ctx->regs + offset);
break;
case 4:
writel(*(u32 *)val, ctx->regs + offset);
break;
#ifdef CONFIG_64BIT
case 8:
writeq(*(u64 *)val, ctx->regs + offset);
break;
#endif
default:
/* Should be caught by regmap_mmio_check_config */
BUG();
}
val_size -= ctx->val_bytes;
val += ctx->val_bytes;
offset += ctx->val_bytes;
}
if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
}
static int regmap_mmio_write(void *context, const void *data, size_t count)
{
struct regmap_mmio_context *ctx = context;
unsigned int offset = ctx->reg_bytes + ctx->pad_bytes;
regmap_mmio_count_check(count, offset);
return regmap_mmio_gather_write(context, data, ctx->reg_bytes,
data + offset, count - offset);
}
static int regmap_mmio_read(void *context,
const void *reg, size_t reg_size,
void *val, size_t val_size)
{
struct regmap_mmio_context *ctx = context;
unsigned int offset;
int ret;
regmap_mmio_regsize_check(reg_size);
if (!IS_ERR(ctx->clk)) {
ret = clk_enable(ctx->clk);
if (ret < 0)
return ret;
}
offset = regmap_mmio_get_offset(reg, reg_size);
while (val_size) {
switch (ctx->val_bytes) {
case 1:
*(u8 *)val = readb(ctx->regs + offset);
break;
case 2:
*(u16 *)val = readw(ctx->regs + offset);
break;
case 4:
*(u32 *)val = readl(ctx->regs + offset);
break;
#ifdef CONFIG_64BIT
case 8:
*(u64 *)val = readq(ctx->regs + offset);
break;
#endif
default:
/* Should be caught by regmap_mmio_check_config */
BUG();
}
val_size -= ctx->val_bytes;
val += ctx->val_bytes;
offset += ctx->val_bytes;
}
if (!IS_ERR(ctx->clk))
clk_disable(ctx->clk);
return 0;
}
static void regmap_mmio_free_context(void *context)
{
struct regmap_mmio_context *ctx = context;
if (!IS_ERR(ctx->clk)) {
clk_unprepare(ctx->clk);
clk_put(ctx->clk);
}
kfree(context);
}
static struct regmap_bus regmap_mmio = {
.fast_io = true,
.write = regmap_mmio_write,
.gather_write = regmap_mmio_gather_write,
.read = regmap_mmio_read,
.free_context = regmap_mmio_free_context,
.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
};
static struct regmap_mmio_context *regmap_mmio_gen_context(struct device *dev,
const char *clk_id,
void __iomem *regs,
const struct regmap_config *config)
{
struct regmap_mmio_context *ctx;
int min_stride;
int ret;
ret = regmap_mmio_regbits_check(config->reg_bits);
if (ret)
return ERR_PTR(ret);
if (config->pad_bits)
return ERR_PTR(-EINVAL);
switch (config->val_bits) {
case 8:
/* The core treats 0 as 1 */
min_stride = 0;
break;
case 16:
min_stride = 2;
break;
case 32:
min_stride = 4;
break;
#ifdef CONFIG_64BIT
case 64:
min_stride = 8;
break;
#endif
default:
return ERR_PTR(-EINVAL);
}
if (config->reg_stride < min_stride)
return ERR_PTR(-EINVAL);
switch (config->reg_format_endian) {
case REGMAP_ENDIAN_DEFAULT:
case REGMAP_ENDIAN_NATIVE:
break;
default:
return ERR_PTR(-EINVAL);
}
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return ERR_PTR(-ENOMEM);
ctx->regs = regs;
ctx->val_bytes = config->val_bits / 8;
ctx->reg_bytes = config->reg_bits / 8;
ctx->pad_bytes = config->pad_bits / 8;
ctx->clk = ERR_PTR(-ENODEV);
if (clk_id == NULL)
return ctx;
ctx->clk = clk_get(dev, clk_id);
if (IS_ERR(ctx->clk)) {
ret = PTR_ERR(ctx->clk);
goto err_free;
}
ret = clk_prepare(ctx->clk);
if (ret < 0) {
clk_put(ctx->clk);
goto err_free;
}
return ctx;
err_free:
kfree(ctx);
return ERR_PTR(ret);
}
/**
* regmap_init_mmio_clk(): Initialise register map with register clock
*
* @dev: Device that will be interacted with
* @clk_id: register clock consumer ID
* @regs: Pointer to memory-mapped IO region
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer to
* a struct regmap.
*/
struct regmap *regmap_init_mmio_clk(struct device *dev, const char *clk_id,
void __iomem *regs,
const struct regmap_config *config)
{
struct regmap_mmio_context *ctx;
ctx = regmap_mmio_gen_context(dev, clk_id, regs, config);
if (IS_ERR(ctx))
return ERR_CAST(ctx);
return regmap_init(dev, &regmap_mmio, ctx, config);
}
EXPORT_SYMBOL_GPL(regmap_init_mmio_clk);
/**
* devm_regmap_init_mmio_clk(): Initialise managed register map with clock
*
* @dev: Device that will be interacted with
* @clk_id: register clock consumer ID
* @regs: Pointer to memory-mapped IO region
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer
* to a struct regmap. The regmap will be automatically freed by the
* device management code.
*/
struct regmap *devm_regmap_init_mmio_clk(struct device *dev, const char *clk_id,
void __iomem *regs,
const struct regmap_config *config)
{
struct regmap_mmio_context *ctx;
ctx = regmap_mmio_gen_context(dev, clk_id, regs, config);
if (IS_ERR(ctx))
return ERR_CAST(ctx);
return devm_regmap_init(dev, &regmap_mmio, ctx, config);
}
EXPORT_SYMBOL_GPL(devm_regmap_init_mmio_clk);
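/*
* Illustrative sketch, not part of the original file: a platform driver
* would normally ioremap its register window and hand it to this code.
* The resource handling and config below are hypothetical.
*
*	static const struct regmap_config foo_mmio_config = {
*		.reg_bits = 32,
*		.val_bits = 32,
*		.reg_stride = 4,
*		.max_register = 0xfc,
*	};
*
*	base = devm_ioremap_resource(&pdev->dev, res);
*	if (IS_ERR(base))
*		return PTR_ERR(base);
*	map = devm_regmap_init_mmio_clk(&pdev->dev, "mclk", base,
*					&foo_mmio_config);
*/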
MODULE_LICENSE("GPL v2");

149
drivers/base/regmap/regmap-spi.c Normal file
View file
@ -0,0 +1,149 @@
/*
* Register map access API - SPI support
*
* Copyright 2011 Wolfson Microelectronics plc
*
* Author: Mark Brown <broonie@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/regmap.h>
#include <linux/spi/spi.h>
#include <linux/module.h>
#include "internal.h"
struct regmap_async_spi {
struct regmap_async core;
struct spi_message m;
struct spi_transfer t[2];
};
static void regmap_spi_complete(void *data)
{
struct regmap_async_spi *async = data;
regmap_async_complete_cb(&async->core, async->m.status);
}
static int regmap_spi_write(void *context, const void *data, size_t count)
{
struct device *dev = context;
struct spi_device *spi = to_spi_device(dev);
return spi_write(spi, data, count);
}
static int regmap_spi_gather_write(void *context,
const void *reg, size_t reg_len,
const void *val, size_t val_len)
{
struct device *dev = context;
struct spi_device *spi = to_spi_device(dev);
struct spi_message m;
struct spi_transfer t[2] = { { .tx_buf = reg, .len = reg_len, },
{ .tx_buf = val, .len = val_len, }, };
spi_message_init(&m);
spi_message_add_tail(&t[0], &m);
spi_message_add_tail(&t[1], &m);
return spi_sync(spi, &m);
}
static int regmap_spi_async_write(void *context,
const void *reg, size_t reg_len,
const void *val, size_t val_len,
struct regmap_async *a)
{
struct regmap_async_spi *async = container_of(a,
struct regmap_async_spi,
core);
struct device *dev = context;
struct spi_device *spi = to_spi_device(dev);
async->t[0].tx_buf = reg;
async->t[0].len = reg_len;
async->t[1].tx_buf = val;
async->t[1].len = val_len;
spi_message_init(&async->m);
spi_message_add_tail(&async->t[0], &async->m);
if (val)
spi_message_add_tail(&async->t[1], &async->m);
async->m.complete = regmap_spi_complete;
async->m.context = async;
return spi_async(spi, &async->m);
}
static struct regmap_async *regmap_spi_async_alloc(void)
{
struct regmap_async_spi *async_spi;
async_spi = kzalloc(sizeof(*async_spi), GFP_KERNEL);
if (!async_spi)
return NULL;
return &async_spi->core;
}
static int regmap_spi_read(void *context,
const void *reg, size_t reg_size,
void *val, size_t val_size)
{
struct device *dev = context;
struct spi_device *spi = to_spi_device(dev);
return spi_write_then_read(spi, reg, reg_size, val, val_size);
}
static struct regmap_bus regmap_spi = {
.write = regmap_spi_write,
.gather_write = regmap_spi_gather_write,
.async_write = regmap_spi_async_write,
.async_alloc = regmap_spi_async_alloc,
.read = regmap_spi_read,
.read_flag_mask = 0x80,
.reg_format_endian_default = REGMAP_ENDIAN_BIG,
.val_format_endian_default = REGMAP_ENDIAN_BIG,
};
/**
* regmap_init_spi(): Initialise register map
*
* @spi: Device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer to
* a struct regmap.
*/
struct regmap *regmap_init_spi(struct spi_device *spi,
const struct regmap_config *config)
{
return regmap_init(&spi->dev, &regmap_spi, &spi->dev, config);
}
EXPORT_SYMBOL_GPL(regmap_init_spi);
/**
* devm_regmap_init_spi(): Initialise register map
*
* @spi: Device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer
* to a struct regmap. The map will be automatically freed by the
* device management code.
*/
struct regmap *devm_regmap_init_spi(struct spi_device *spi,
const struct regmap_config *config)
{
return devm_regmap_init(&spi->dev, &regmap_spi, &spi->dev, config);
}
EXPORT_SYMBOL_GPL(devm_regmap_init_spi);
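/*
 * Illustrative usage sketch (not part of the original file): a
 * hypothetical SPI codec driver creating a regmap in probe. The
 * 8-bit register/value sizes are assumptions, guarded by #if 0.
 */
#if 0
static const struct regmap_config foo_spi_regmap_config = {
	.reg_bits = 8,
	.val_bits = 8,
	.max_register = 0x7f,
};

static int foo_spi_probe(struct spi_device *spi)
{
	struct regmap *map;

	map = devm_regmap_init_spi(spi, &foo_spi_regmap_config);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/*
	 * read_flag_mask is 0x80 for this bus, so the core sets bit 7
	 * of the register address on reads automatically.
	 */
	return regmap_update_bits(map, 0x01, 0x0f, 0x03);
}
#endif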
MODULE_LICENSE("GPL");

256
drivers/base/regmap/regmap-spmi.c Normal file
View file

@ -0,0 +1,256 @@
/*
* Register map access API - SPMI support
*
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* Based on regmap-i2c.c:
* Copyright 2011 Wolfson Microelectronics plc
* Author: Mark Brown <broonie@opensource.wolfsonmicro.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/regmap.h>
#include <linux/spmi.h>
#include <linux/module.h>
#include <linux/init.h>
static int regmap_spmi_base_read(void *context,
const void *reg, size_t reg_size,
void *val, size_t val_size)
{
u8 addr = *(u8 *)reg;
int err = 0;
BUG_ON(reg_size != 1);
while (val_size-- && !err)
err = spmi_register_read(context, addr++, val++);
return err;
}
static int regmap_spmi_base_gather_write(void *context,
const void *reg, size_t reg_size,
const void *val, size_t val_size)
{
const u8 *data = val;
u8 addr = *(u8 *)reg;
int err = 0;
BUG_ON(reg_size != 1);
/*
* SPMI defines a more bandwidth-efficient 'Register 0 Write' sequence;
* use it when possible.
*/
if (addr == 0 && val_size) {
err = spmi_register_zero_write(context, *data);
if (err)
goto err_out;
data++;
addr++;
val_size--;
}
while (val_size) {
err = spmi_register_write(context, addr, *data);
if (err)
goto err_out;
data++;
addr++;
val_size--;
}
err_out:
return err;
}
static int regmap_spmi_base_write(void *context, const void *data,
size_t count)
{
BUG_ON(count < 1);
return regmap_spmi_base_gather_write(context, data, 1, data + 1,
count - 1);
}
static struct regmap_bus regmap_spmi_base = {
.read = regmap_spmi_base_read,
.write = regmap_spmi_base_write,
.gather_write = regmap_spmi_base_gather_write,
.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
};
/**
* regmap_init_spmi_base(): Create regmap for the Base register space
* @sdev: SPMI device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer to
* a struct regmap.
*/
struct regmap *regmap_init_spmi_base(struct spmi_device *sdev,
const struct regmap_config *config)
{
return regmap_init(&sdev->dev, &regmap_spmi_base, sdev, config);
}
EXPORT_SYMBOL_GPL(regmap_init_spmi_base);
/**
* devm_regmap_init_spmi_base(): Create managed regmap for Base register space
* @sdev: SPMI device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer
* to a struct regmap. The regmap will be automatically freed by the
* device management code.
*/
struct regmap *devm_regmap_init_spmi_base(struct spmi_device *sdev,
const struct regmap_config *config)
{
return devm_regmap_init(&sdev->dev, &regmap_spmi_base, sdev, config);
}
EXPORT_SYMBOL_GPL(devm_regmap_init_spmi_base);
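/*
 * Illustrative usage sketch (not part of the original file): a
 * hypothetical PMIC function driver mapping the 8-bit-addressed
 * Base register space. Config values are assumptions (#if 0).
 */
#if 0
static const struct regmap_config foo_spmi_base_config = {
	.reg_bits = 8,
	.val_bits = 8,
	.max_register = 0xff,
};

static int foo_pmic_probe(struct spmi_device *sdev)
{
	struct regmap *map;

	map = devm_regmap_init_spmi_base(sdev, &foo_spmi_base_config);
	if (IS_ERR(map))
		return PTR_ERR(map);

	return regmap_write(map, 0x40, 0x01);
}
#endif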
static int regmap_spmi_ext_read(void *context,
const void *reg, size_t reg_size,
void *val, size_t val_size)
{
int err = 0;
size_t len;
u16 addr;
BUG_ON(reg_size != 2);
addr = *(u16 *)reg;
/*
* Split accesses into two to take advantage of the more
* bandwidth-efficient 'Extended Register Read' command when possible
*/
while (addr <= 0xFF && val_size) {
len = min_t(size_t, val_size, 16);
err = spmi_ext_register_read(context, addr, val, len);
if (err)
goto err_out;
addr += len;
val += len;
val_size -= len;
}
while (val_size) {
len = min_t(size_t, val_size, 8);
err = spmi_ext_register_readl(context, addr, val, len);
if (err)
goto err_out;
addr += len;
val += len;
val_size -= len;
}
err_out:
return err;
}
static int regmap_spmi_ext_gather_write(void *context,
const void *reg, size_t reg_size,
const void *val, size_t val_size)
{
int err = 0;
size_t len;
u16 addr;
BUG_ON(reg_size != 2);
addr = *(u16 *)reg;
while (addr <= 0xFF && val_size) {
len = min_t(size_t, val_size, 16);
err = spmi_ext_register_write(context, addr, val, len);
if (err)
goto err_out;
addr += len;
val += len;
val_size -= len;
}
while (val_size) {
len = min_t(size_t, val_size, 8);
err = spmi_ext_register_writel(context, addr, val, len);
if (err)
goto err_out;
addr += len;
val += len;
val_size -= len;
}
err_out:
return err;
}
static int regmap_spmi_ext_write(void *context, const void *data,
size_t count)
{
BUG_ON(count < 2);
return regmap_spmi_ext_gather_write(context, data, 2, data + 2,
count - 2);
}
static struct regmap_bus regmap_spmi_ext = {
.read = regmap_spmi_ext_read,
.write = regmap_spmi_ext_write,
.gather_write = regmap_spmi_ext_gather_write,
.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
};
/**
* regmap_init_spmi_ext(): Create regmap for Ext register space
* @sdev: Device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer to
* a struct regmap.
*/
struct regmap *regmap_init_spmi_ext(struct spmi_device *sdev,
const struct regmap_config *config)
{
return regmap_init(&sdev->dev, &regmap_spmi_ext, sdev, config);
}
EXPORT_SYMBOL_GPL(regmap_init_spmi_ext);
/**
* devm_regmap_init_spmi_ext(): Create managed regmap for Ext register space
* @sdev: SPMI device that will be interacted with
* @config: Configuration for register map
*
* The return value will be an ERR_PTR() on error or a valid pointer
* to a struct regmap. The regmap will be automatically freed by the
* device management code.
*/
struct regmap *devm_regmap_init_spmi_ext(struct spmi_device *sdev,
const struct regmap_config *config)
{
return devm_regmap_init(&sdev->dev, &regmap_spmi_ext, sdev, config);
}
EXPORT_SYMBOL_GPL(devm_regmap_init_spmi_ext);
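/*
 * Illustrative usage sketch (not part of the original file): the
 * Extended space uses 16-bit addresses, so reg_bits differs from
 * the Base-space example above. Values are assumptions (#if 0).
 */
#if 0
static const struct regmap_config foo_spmi_ext_config = {
	.reg_bits = 16,
	.val_bits = 8,
	.max_register = 0xffff,
};

static int foo_spmi_ext_probe(struct spmi_device *sdev)
{
	struct regmap *map;
	unsigned int val;

	map = devm_regmap_init_spmi_ext(sdev, &foo_spmi_ext_config);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/* Reads at or below 0xFF use the shorter 'Extended' command. */
	return regmap_read(map, 0x0105, &val);
}
#endif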
MODULE_LICENSE("GPL");

2645
drivers/base/regmap/regmap.c Normal file

File diff suppressed because it is too large

181
drivers/base/soc.c Normal file
View file

@ -0,0 +1,181 @@
/*
* Copyright (C) ST-Ericsson SA 2011
*
* Author: Lee Jones <lee.jones@linaro.org> for ST-Ericsson.
* License terms: GNU General Public License (GPL), version 2
*/
#include <linux/sysfs.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/stat.h>
#include <linux/slab.h>
#include <linux/idr.h>
#include <linux/spinlock.h>
#include <linux/sys_soc.h>
#include <linux/err.h>
static DEFINE_IDA(soc_ida);
static DEFINE_SPINLOCK(soc_lock);
static ssize_t soc_info_get(struct device *dev,
struct device_attribute *attr,
char *buf);
struct soc_device {
struct device dev;
struct soc_device_attribute *attr;
int soc_dev_num;
};
static struct bus_type soc_bus_type = {
.name = "soc",
};
static DEVICE_ATTR(machine, S_IRUGO, soc_info_get, NULL);
static DEVICE_ATTR(family, S_IRUGO, soc_info_get, NULL);
static DEVICE_ATTR(soc_id, S_IRUGO, soc_info_get, NULL);
static DEVICE_ATTR(revision, S_IRUGO, soc_info_get, NULL);
struct device *soc_device_to_device(struct soc_device *soc_dev)
{
return &soc_dev->dev;
}
static umode_t soc_attribute_mode(struct kobject *kobj,
struct attribute *attr,
int index)
{
struct device *dev = container_of(kobj, struct device, kobj);
struct soc_device *soc_dev = container_of(dev, struct soc_device, dev);
if ((attr == &dev_attr_machine.attr)
&& (soc_dev->attr->machine != NULL))
return attr->mode;
if ((attr == &dev_attr_family.attr)
&& (soc_dev->attr->family != NULL))
return attr->mode;
if ((attr == &dev_attr_revision.attr)
&& (soc_dev->attr->revision != NULL))
return attr->mode;
if ((attr == &dev_attr_soc_id.attr)
&& (soc_dev->attr->soc_id != NULL))
return attr->mode;
/* Unknown or unfilled attribute. */
return 0;
}
static ssize_t soc_info_get(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct soc_device *soc_dev = container_of(dev, struct soc_device, dev);
if (attr == &dev_attr_machine)
return sprintf(buf, "%s\n", soc_dev->attr->machine);
if (attr == &dev_attr_family)
return sprintf(buf, "%s\n", soc_dev->attr->family);
if (attr == &dev_attr_revision)
return sprintf(buf, "%s\n", soc_dev->attr->revision);
if (attr == &dev_attr_soc_id)
return sprintf(buf, "%s\n", soc_dev->attr->soc_id);
return -EINVAL;
}
static struct attribute *soc_attr[] = {
&dev_attr_machine.attr,
&dev_attr_family.attr,
&dev_attr_soc_id.attr,
&dev_attr_revision.attr,
NULL,
};
static const struct attribute_group soc_attr_group = {
.attrs = soc_attr,
.is_visible = soc_attribute_mode,
};
static const struct attribute_group *soc_attr_groups[] = {
&soc_attr_group,
NULL,
};
static void soc_release(struct device *dev)
{
struct soc_device *soc_dev = container_of(dev, struct soc_device, dev);
kfree(soc_dev);
}
struct soc_device *soc_device_register(struct soc_device_attribute *soc_dev_attr)
{
struct soc_device *soc_dev;
int ret;
soc_dev = kzalloc(sizeof(*soc_dev), GFP_KERNEL);
if (!soc_dev) {
ret = -ENOMEM;
goto out1;
}
/* Fetch a unique (reclaimable) SOC ID. */
do {
if (!ida_pre_get(&soc_ida, GFP_KERNEL)) {
ret = -ENOMEM;
goto out2;
}
spin_lock(&soc_lock);
ret = ida_get_new(&soc_ida, &soc_dev->soc_dev_num);
spin_unlock(&soc_lock);
} while (ret == -EAGAIN);
if (ret)
goto out2;
soc_dev->attr = soc_dev_attr;
soc_dev->dev.bus = &soc_bus_type;
soc_dev->dev.groups = soc_attr_groups;
soc_dev->dev.release = soc_release;
dev_set_name(&soc_dev->dev, "soc%d", soc_dev->soc_dev_num);
ret = device_register(&soc_dev->dev);
if (ret)
goto out3;
return soc_dev;
out3:
ida_remove(&soc_ida, soc_dev->soc_dev_num);
out2:
kfree(soc_dev);
out1:
return ERR_PTR(ret);
}
/* Ensure soc_dev->attr is freed prior to calling soc_device_unregister. */
void soc_device_unregister(struct soc_device *soc_dev)
{
ida_remove(&soc_ida, soc_dev->soc_dev_num);
device_unregister(&soc_dev->dev);
}
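/*
 * Illustrative usage sketch (not part of the original file): how a
 * hypothetical platform might register its SoC. All strings are
 * invented for demonstration; note the attr lifetime rule above
 * (#if 0 guard).
 */
#if 0
static int __init foo_soc_init(void)
{
	struct soc_device_attribute *attr;
	struct soc_device *soc_dev;

	attr = kzalloc(sizeof(*attr), GFP_KERNEL);
	if (!attr)
		return -ENOMEM;

	attr->machine = "Foo Reference Board";
	attr->family = "Foo SoC";
	attr->soc_id = "foo1234";
	attr->revision = "1.0";

	soc_dev = soc_device_register(attr);
	if (IS_ERR(soc_dev)) {
		kfree(attr);
		return PTR_ERR(soc_dev);
	}

	/* Attributes now appear under /sys/devices/soc0/. */
	return 0;
}
#endif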
static int __init soc_bus_register(void)
{
return bus_register(&soc_bus_type);
}
core_initcall(soc_bus_register);
static void __exit soc_bus_unregister(void)
{
ida_destroy(&soc_ida);
bus_unregister(&soc_bus_type);
}
module_exit(soc_bus_unregister);

139
drivers/base/syscore.c Normal file
View file

@ -0,0 +1,139 @@
/*
* syscore.c - Execution of system core operations.
*
* Copyright (C) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
*
* This file is released under the GPLv2.
*/
#include <linux/syscore_ops.h>
#include <linux/mutex.h>
#include <linux/module.h>
#include <linux/suspend.h>
#include <trace/events/power.h>
#include <linux/wakeup_reason.h>
#include <linux/exynos-ss.h>
static LIST_HEAD(syscore_ops_list);
static DEFINE_MUTEX(syscore_ops_lock);
/**
* register_syscore_ops - Register a set of system core operations.
* @ops: System core operations to register.
*/
void register_syscore_ops(struct syscore_ops *ops)
{
mutex_lock(&syscore_ops_lock);
list_add_tail(&ops->node, &syscore_ops_list);
mutex_unlock(&syscore_ops_lock);
}
EXPORT_SYMBOL_GPL(register_syscore_ops);
/**
* unregister_syscore_ops - Unregister a set of system core operations.
* @ops: System core operations to unregister.
*/
void unregister_syscore_ops(struct syscore_ops *ops)
{
mutex_lock(&syscore_ops_lock);
list_del(&ops->node);
mutex_unlock(&syscore_ops_lock);
}
EXPORT_SYMBOL_GPL(unregister_syscore_ops);
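/*
 * Illustrative usage sketch (not part of the original file): a
 * hypothetical driver preserving one register across suspend. The
 * FOO_CTRL offset and foo_base mapping are assumptions, and the
 * sketch also needs <linux/io.h> (#if 0 guard).
 */
#if 0
#define FOO_CTRL 0x00

static void __iomem *foo_base;	/* assumed to be ioremap()ed elsewhere */
static u32 foo_saved_ctrl;

static int foo_syscore_suspend(void)
{
	/* Runs late, with one CPU online and interrupts disabled. */
	foo_saved_ctrl = readl(foo_base + FOO_CTRL);
	return 0;
}

static void foo_syscore_resume(void)
{
	writel(foo_saved_ctrl, foo_base + FOO_CTRL);
}

static struct syscore_ops foo_syscore_ops = {
	.suspend = foo_syscore_suspend,
	.resume = foo_syscore_resume,
};

static int __init foo_init(void)
{
	register_syscore_ops(&foo_syscore_ops);
	return 0;
}
#endif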
#ifdef CONFIG_PM_SLEEP
/**
* syscore_suspend - Execute all the registered system core suspend callbacks.
*
* This function is executed with only one CPU online and interrupts disabled.
*/
int syscore_suspend(void)
{
struct syscore_ops *ops;
int ret = 0;
trace_suspend_resume(TPS("syscore_suspend"), 0, true);
pr_debug("Checking wakeup interrupts\n");
/* Return error code if there are any wakeup interrupts pending. */
if (pm_wakeup_pending())
return -EBUSY;
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled before system core suspend.\n");
list_for_each_entry_reverse(ops, &syscore_ops_list, node)
if (ops->suspend) {
if (initcall_debug)
pr_info("PM: Calling %pF\n", ops->suspend);
exynos_ss_suspend(ops->suspend, NULL, ESS_FLAG_IN);
ret = ops->suspend();
exynos_ss_suspend(ops->suspend, NULL, ESS_FLAG_OUT);
if (ret)
goto err_out;
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled after %pF\n", ops->suspend);
}
trace_suspend_resume(TPS("syscore_suspend"), 0, false);
return 0;
err_out:
log_suspend_abort_reason("System core suspend callback %pF failed",
ops->suspend);
pr_err("PM: System core suspend callback %pF failed.\n", ops->suspend);
list_for_each_entry_continue(ops, &syscore_ops_list, node)
if (ops->resume)
ops->resume();
return ret;
}
EXPORT_SYMBOL_GPL(syscore_suspend);
/**
* syscore_resume - Execute all the registered system core resume callbacks.
*
* This function is executed with only one CPU online and interrupts disabled.
*/
void syscore_resume(void)
{
struct syscore_ops *ops;
trace_suspend_resume(TPS("syscore_resume"), 0, true);
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled before system core resume.\n");
list_for_each_entry(ops, &syscore_ops_list, node)
if (ops->resume) {
if (initcall_debug)
pr_info("PM: Calling %pF\n", ops->resume);
exynos_ss_suspend(ops->resume, NULL, ESS_FLAG_IN);
ops->resume();
exynos_ss_suspend(ops->resume, NULL, ESS_FLAG_OUT);
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled after %pF\n", ops->resume);
}
trace_suspend_resume(TPS("syscore_resume"), 0, false);
}
EXPORT_SYMBOL_GPL(syscore_resume);
#endif /* CONFIG_PM_SLEEP */
/**
* syscore_shutdown - Execute all the registered system core shutdown callbacks.
*/
void syscore_shutdown(void)
{
struct syscore_ops *ops;
mutex_lock(&syscore_ops_lock);
list_for_each_entry_reverse(ops, &syscore_ops_list, node)
if (ops->shutdown) {
if (initcall_debug)
pr_info("PM: Calling %pF\n", ops->shutdown);
ops->shutdown();
}
mutex_unlock(&syscore_ops_lock);
}

178
drivers/base/topology.c Normal file
View file

@ -0,0 +1,178 @@
/*
* drivers/base/topology.c - Populate sysfs with cpu topology information
*
* Written by: Zhang Yanmin, Intel Corporation
*
* Copyright (C) 2006, Intel Corp.
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
*/
#include <linux/mm.h>
#include <linux/cpu.h>
#include <linux/module.h>
#include <linux/hardirq.h>
#include <linux/topology.h>
#define define_one_ro_named(_name, _func) \
static DEVICE_ATTR(_name, 0444, _func, NULL)
#define define_one_ro(_name) \
static DEVICE_ATTR(_name, 0444, show_##_name, NULL)
#define define_id_show_func(name) \
static ssize_t show_##name(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
return sprintf(buf, "%d\n", topology_##name(dev->id)); \
}
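/*
 * For reference, define_id_show_func(core_id) expands to roughly:
 *
 *	static ssize_t show_core_id(struct device *dev,
 *				    struct device_attribute *attr, char *buf)
 *	{
 *		return sprintf(buf, "%d\n", topology_core_id(dev->id));
 *	}
 */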
#if defined(topology_thread_cpumask) || defined(topology_core_cpumask) || \
defined(topology_book_cpumask)
static ssize_t show_cpumap(int type, const struct cpumask *mask, char *buf)
{
ptrdiff_t len = PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf;
int n = 0;
if (len > 1) {
n = type ?
cpulist_scnprintf(buf, len - 2, mask) :
cpumask_scnprintf(buf, len - 2, mask);
buf[n++] = '\n';
buf[n] = '\0';
}
return n;
}
#endif
#define define_siblings_show_map(name) \
static ssize_t show_##name(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
return show_cpumap(0, topology_##name(dev->id), buf); \
}
#define define_siblings_show_list(name) \
static ssize_t show_##name##_list(struct device *dev, \
struct device_attribute *attr, \
char *buf) \
{ \
return show_cpumap(1, topology_##name(dev->id), buf); \
}
#define define_siblings_show_func(name) \
define_siblings_show_map(name); define_siblings_show_list(name)
define_id_show_func(physical_package_id);
define_one_ro(physical_package_id);
define_id_show_func(core_id);
define_one_ro(core_id);
define_siblings_show_func(thread_cpumask);
define_one_ro_named(thread_siblings, show_thread_cpumask);
define_one_ro_named(thread_siblings_list, show_thread_cpumask_list);
define_siblings_show_func(core_cpumask);
define_one_ro_named(core_siblings, show_core_cpumask);
define_one_ro_named(core_siblings_list, show_core_cpumask_list);
#ifdef CONFIG_SCHED_BOOK
define_id_show_func(book_id);
define_one_ro(book_id);
define_siblings_show_func(book_cpumask);
define_one_ro_named(book_siblings, show_book_cpumask);
define_one_ro_named(book_siblings_list, show_book_cpumask_list);
#endif
static struct attribute *default_attrs[] = {
&dev_attr_physical_package_id.attr,
&dev_attr_core_id.attr,
&dev_attr_thread_siblings.attr,
&dev_attr_thread_siblings_list.attr,
&dev_attr_core_siblings.attr,
&dev_attr_core_siblings_list.attr,
#ifdef CONFIG_SCHED_BOOK
&dev_attr_book_id.attr,
&dev_attr_book_siblings.attr,
&dev_attr_book_siblings_list.attr,
#endif
NULL
};
static struct attribute_group topology_attr_group = {
.attrs = default_attrs,
.name = "topology"
};
/* Add/Remove cpu_topology interface for CPU device */
static int topology_add_dev(unsigned int cpu)
{
struct device *dev = get_cpu_device(cpu);
return sysfs_create_group(&dev->kobj, &topology_attr_group);
}
static void topology_remove_dev(unsigned int cpu)
{
struct device *dev = get_cpu_device(cpu);
sysfs_remove_group(&dev->kobj, &topology_attr_group);
}
static int topology_cpu_callback(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
int rc = 0;
switch (action) {
case CPU_UP_PREPARE:
case CPU_UP_PREPARE_FROZEN:
rc = topology_add_dev(cpu);
break;
case CPU_UP_CANCELED:
case CPU_UP_CANCELED_FROZEN:
case CPU_DEAD:
case CPU_DEAD_FROZEN:
topology_remove_dev(cpu);
break;
}
return notifier_from_errno(rc);
}
static int topology_sysfs_init(void)
{
int cpu;
int rc = 0;
cpu_notifier_register_begin();
for_each_online_cpu(cpu) {
rc = topology_add_dev(cpu);
if (rc)
goto out;
}
__hotcpu_notifier(topology_cpu_callback, 0);
out:
cpu_notifier_register_done();
return rc;
}
device_initcall(topology_sysfs_init);

280
drivers/base/transport_class.c Normal file
View file

@ -0,0 +1,280 @@
/*
* transport_class.c - implementation of generic transport classes
* using attribute_containers
*
* Copyright (c) 2005 - James Bottomley <James.Bottomley@steeleye.com>
*
* This file is licensed under GPLv2
*
* The basic idea here is to allow any "device controller" (which
* would most often be a Host Bus Adapter) to use the services of one
* or more transport classes for performing transport specific
* services. Transport specific services are things that the generic
* command layer doesn't want to know about (speed settings, line
* conditioning, etc), but which the user might be interested in.
* Thus, the HBA's use the routines exported by the transport classes
* to perform these functions. The transport classes export certain
* values to the user via sysfs using attribute containers.
*
* Note: because not every HBA will care about every transport
* attribute, there's a many to one relationship that goes like this:
*
* transport class<-----attribute container<----class device
*
* Usually the attribute container is per-HBA, but the design doesn't
* mandate that. Although most of the services will be specific to
* the actual external storage connection used by the HBA, the generic
* transport class is framed entirely in terms of generic devices to
* allow it to be used by any physical HBA in the system.
*/
#include <linux/export.h>
#include <linux/attribute_container.h>
#include <linux/transport_class.h>
/**
* transport_class_register - register an initial transport class
*
* @tclass: a pointer to the transport class structure to be initialised
*
* The transport class contains an embedded class which is used to
* identify it. The caller should initialise this structure with
* zeros, and the embedded class must have been initialised with the
* transport class's unique name. The DECLARE_TRANSPORT_CLASS() macro
* does this (declared classes must still be registered).
*
* Returns 0 on success or error on failure.
*/
int transport_class_register(struct transport_class *tclass)
{
return class_register(&tclass->class);
}
EXPORT_SYMBOL_GPL(transport_class_register);
/**
* transport_class_unregister - unregister a previously registered class
*
* @tclass: The transport class to unregister
*
* Must be called prior to deallocating the memory for the transport
* class.
*/
void transport_class_unregister(struct transport_class *tclass)
{
class_unregister(&tclass->class);
}
EXPORT_SYMBOL_GPL(transport_class_unregister);
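/*
 * Illustrative usage sketch (not part of the original file):
 * declaring and registering a minimal transport class for a
 * hypothetical "foo" transport. The callbacks are placeholders;
 * real users also set up transport containers (#if 0 guard).
 */
#if 0
static int foo_transport_setup(struct transport_container *tc,
			       struct device *dev, struct device *cdev)
{
	return 0;
}

static DECLARE_TRANSPORT_CLASS(foo_transport_class, "foo_transport",
			       foo_transport_setup, NULL, NULL);

static int __init foo_transport_init(void)
{
	return transport_class_register(&foo_transport_class);
}
#endif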
static int anon_transport_dummy_function(struct transport_container *tc,
struct device *dev,
struct device *cdev)
{
/* do nothing */
return 0;
}
/**
* anon_transport_class_register - register an anonymous class
*
* @atc: The anon transport class to register
*
* The anonymous transport class contains both a transport class and a
* container. The idea of an anonymous class is that it never
* actually has any device attributes associated with it (and thus
* saves on container storage). So it can only be used for triggering
* events. Zero the structure first, then use
* DECLARE_ANON_TRANSPORT_CLASS() to initialise the anon transport
* class storage.
*/
int anon_transport_class_register(struct anon_transport_class *atc)
{
int error;
atc->container.class = &atc->tclass.class;
attribute_container_set_no_classdevs(&atc->container);
error = attribute_container_register(&atc->container);
if (error)
return error;
atc->tclass.setup = anon_transport_dummy_function;
atc->tclass.remove = anon_transport_dummy_function;
return 0;
}
EXPORT_SYMBOL_GPL(anon_transport_class_register);
/**
* anon_transport_class_unregister - unregister an anon class
*
* @atc: Pointer to the anon transport class to unregister
*
* Must be called prior to deallocating the memory for the anon
* transport class.
*/
void anon_transport_class_unregister(struct anon_transport_class *atc)
{
if (unlikely(attribute_container_unregister(&atc->container)))
BUG();
}
EXPORT_SYMBOL_GPL(anon_transport_class_unregister);
static int transport_setup_classdev(struct attribute_container *cont,
struct device *dev,
struct device *classdev)
{
struct transport_class *tclass = class_to_transport_class(cont->class);
struct transport_container *tcont = attribute_container_to_transport_container(cont);
if (tclass->setup)
tclass->setup(tcont, dev, classdev);
return 0;
}
/**
* transport_setup_device - declare a new dev for transport class association but don't make it visible yet.
* @dev: the generic device representing the entity being added
*
* Usually, dev represents some component in the HBA system (either
* the HBA itself or a device remote across the HBA bus). This
* routine is simply a trigger point to see if any set of transport
* classes wishes to associate with the added device. This allocates
* storage for the class device and initialises it, but does not yet
* add it to the system or add attributes to it (you do this with
* transport_add_device). If you have no need for a separate setup
* and add operations, use transport_register_device (see
* transport_class.h).
*/
void transport_setup_device(struct device *dev)
{
attribute_container_add_device(dev, transport_setup_classdev);
}
EXPORT_SYMBOL_GPL(transport_setup_device);
static int transport_add_class_device(struct attribute_container *cont,
struct device *dev,
struct device *classdev)
{
int error = attribute_container_add_class_device(classdev);
struct transport_container *tcont =
attribute_container_to_transport_container(cont);
if (!error && tcont->statistics)
error = sysfs_create_group(&classdev->kobj, tcont->statistics);
return error;
}
/**
* transport_add_device - declare a new dev for transport class association
*
* @dev: the generic device representing the entity being added
*
* Usually, dev represents some component in the HBA system (either
* the HBA itself or a device remote across the HBA bus). This
* routine is simply a trigger point used to add the device to the
* system and register attributes for it.
*/
void transport_add_device(struct device *dev)
{
attribute_container_device_trigger(dev, transport_add_class_device);
}
EXPORT_SYMBOL_GPL(transport_add_device);
static int transport_configure(struct attribute_container *cont,
struct device *dev,
struct device *cdev)
{
struct transport_class *tclass = class_to_transport_class(cont->class);
struct transport_container *tcont = attribute_container_to_transport_container(cont);
if (tclass->configure)
tclass->configure(tcont, dev, cdev);
return 0;
}
/**
* transport_configure_device - configure an already set up device
*
* @dev: generic device representing device to be configured
*
* The idea of configure is simply to provide a point within the setup
* process to allow the transport class to extract information from a
* device after it has been set up. This is used in SCSI because we
* have to have a set-up device to begin using the HBA, but after we
* send the initial inquiry, we use configure to extract the device
* parameters. The device need not have been added to be configured.
*/
void transport_configure_device(struct device *dev)
{
attribute_container_device_trigger(dev, transport_configure);
}
EXPORT_SYMBOL_GPL(transport_configure_device);
static int transport_remove_classdev(struct attribute_container *cont,
struct device *dev,
struct device *classdev)
{
struct transport_container *tcont =
attribute_container_to_transport_container(cont);
struct transport_class *tclass = class_to_transport_class(cont->class);
if (tclass->remove)
tclass->remove(tcont, dev, classdev);
if (tclass->remove != anon_transport_dummy_function) {
if (tcont->statistics)
sysfs_remove_group(&classdev->kobj, tcont->statistics);
attribute_container_class_device_del(classdev);
}
return 0;
}
/**
* transport_remove_device - remove the visibility of a device
*
* @dev: generic device to remove
*
* This call removes the visibility of the device (to the user from
* sysfs), but does not destroy it. To eliminate a device entirely
* you must also call transport_destroy_device. If you don't need to
* do remove and destroy as separate operations, use
* transport_unregister_device() (see transport_class.h) which will
* perform both calls for you.
*/
void transport_remove_device(struct device *dev)
{
attribute_container_device_trigger(dev, transport_remove_classdev);
}
EXPORT_SYMBOL_GPL(transport_remove_device);
static void transport_destroy_classdev(struct attribute_container *cont,
struct device *dev,
struct device *classdev)
{
struct transport_class *tclass = class_to_transport_class(cont->class);
if (tclass->remove != anon_transport_dummy_function)
put_device(classdev);
}
/**
* transport_destroy_device - destroy a removed device
*
* @dev: device to eliminate from the transport class.
*
* This call triggers the elimination of storage associated with the
* transport classdev. Note: all it really does is relinquish a
* reference to the classdev. The memory will not be freed until the
* last reference goes to zero. Note also that the classdev retains a
* reference count on dev, so dev too will remain for as long as the
* transport class device remains around.
*/
void transport_destroy_device(struct device *dev)
{
attribute_container_remove_device(dev, transport_destroy_classdev);
}
EXPORT_SYMBOL_GPL(transport_destroy_device);
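/*
 * Illustrative usage sketch (not part of the original file): the
 * typical call sequence for a device participating in transport
 * classes, e.g. from a hypothetical HBA driver. The helpers
 * transport_register_device()/transport_unregister_device() in
 * transport_class.h wrap these pairs (#if 0 guard).
 */
#if 0
static void foo_add_target(struct device *dev)
{
	transport_setup_device(dev);	 /* allocate classdev storage */
	transport_add_device(dev);	 /* make attributes visible in sysfs */
	transport_configure_device(dev); /* extract post-probe parameters */
}

static void foo_remove_target(struct device *dev)
{
	transport_remove_device(dev);	 /* hide from sysfs */
	transport_destroy_device(dev);	 /* drop the classdev reference */
}
#endif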