Fixed MTP to work with TWRP

This commit is contained in:
awab228 2018-06-19 23:16:04 +02:00
commit f6dfaef42e
50820 changed files with 20846062 additions and 0 deletions

188
fs/jffs2/Kconfig Normal file

@@ -0,0 +1,188 @@
config JFFS2_FS
tristate "Journalling Flash File System v2 (JFFS2) support"
select CRC32
depends on MTD
help
JFFS2 is the second generation of the Journalling Flash File System
for use on diskless embedded devices. It provides improved wear
levelling, compression and support for hard links. You cannot use
this on normal block devices, only on 'MTD' devices.
Further information on the design and implementation of JFFS2 is
available at <http://sources.redhat.com/jffs2/>.
config JFFS2_FS_DEBUG
int "JFFS2 debugging verbosity (0 = quiet, 2 = noisy)"
depends on JFFS2_FS
default "0"
help
This controls the amount of debugging messages produced by the JFFS2
code. Set it to zero for use in production systems. For evaluation,
testing and debugging, it's advisable to set it to one. This will
enable a few assertions and will print debugging messages at the
KERN_DEBUG loglevel, where they won't normally be visible. Level 2
is unlikely to be useful - it enables extra debugging in certain
areas which at one point needed debugging, but when the bugs were
located and fixed, the detailed messages were relegated to level 2.
If reporting bugs, please try to have available a full dump of the
messages at debug level 1 while the misbehaviour was occurring.
config JFFS2_FS_WRITEBUFFER
bool "JFFS2 write-buffering support"
depends on JFFS2_FS
default y
help
This enables the write-buffering support in JFFS2.
This functionality is required to support JFFS2 on the following
types of flash devices:
- NAND flash
- NOR flash with transparent ECC
- DataFlash
config JFFS2_FS_WBUF_VERIFY
bool "Verify JFFS2 write-buffer reads"
depends on JFFS2_FS_WRITEBUFFER
default n
help
This causes JFFS2 to read back every page written through the
write-buffer, and check for errors.
config JFFS2_SUMMARY
bool "JFFS2 summary support"
depends on JFFS2_FS
default n
help
This feature makes it possible to use summary information
for faster filesystem mount.
The summary information can be inserted into a filesystem image
by the utility 'sumtool'.
If unsure, say 'N'.
config JFFS2_FS_XATTR
bool "JFFS2 XATTR support"
depends on JFFS2_FS
default n
help
Extended attributes are name:value pairs associated with inodes by
the kernel or by users (see the attr(5) manual page, or visit
<http://acl.bestbits.at/> for details).
If unsure, say N.
config JFFS2_FS_POSIX_ACL
bool "JFFS2 POSIX Access Control Lists"
depends on JFFS2_FS_XATTR
default y
select FS_POSIX_ACL
help
Posix Access Control Lists (ACLs) support permissions for users and
groups beyond the owner/group/world scheme.
To learn more about Access Control Lists, visit the Posix ACLs for
Linux website <http://acl.bestbits.at/>.
If you don't know what Access Control Lists are, say N
config JFFS2_FS_SECURITY
bool "JFFS2 Security Labels"
depends on JFFS2_FS_XATTR
default y
help
Security labels support alternative access control models
implemented by security modules like SELinux. This option
enables an extended attribute handler for file security
labels in the jffs2 filesystem.
If you are not using a security module that requires using
extended attributes for file security labels, say N.
config JFFS2_COMPRESSION_OPTIONS
bool "Advanced compression options for JFFS2"
depends on JFFS2_FS
default n
help
Enabling this option allows you to explicitly choose which
compression modules, if any, are enabled in JFFS2. Removing
compressors can mean you cannot read existing file systems,
and enabling experimental compressors can mean that you
write a file system which cannot be read by a standard kernel.
If unsure, you should _definitely_ say 'N'.
config JFFS2_ZLIB
bool "JFFS2 ZLIB compression support" if JFFS2_COMPRESSION_OPTIONS
select ZLIB_INFLATE
select ZLIB_DEFLATE
depends on JFFS2_FS
default y
help
Zlib is designed to be a free, general-purpose, legally unencumbered,
lossless data-compression library for use on virtually any computer
hardware and operating system. See <http://www.gzip.org/zlib/> for
further information.
Say 'Y' if unsure.
config JFFS2_LZO
bool "JFFS2 LZO compression support" if JFFS2_COMPRESSION_OPTIONS
select LZO_COMPRESS
select LZO_DECOMPRESS
depends on JFFS2_FS
default n
help
minilzo-based compression. Generally works better than Zlib.
This feature was added in July, 2007. Say 'N' if you need
compatibility with older bootloaders or kernels.
config JFFS2_RTIME
bool "JFFS2 RTIME compression support" if JFFS2_COMPRESSION_OPTIONS
depends on JFFS2_FS
default y
help
Rtime does manage to recompress already-compressed data. Say 'Y' if unsure.
config JFFS2_RUBIN
bool "JFFS2 RUBIN compression support" if JFFS2_COMPRESSION_OPTIONS
depends on JFFS2_FS
default n
help
RUBINMIPS and DYNRUBIN compressors. Say 'N' if unsure.
choice
prompt "JFFS2 default compression mode" if JFFS2_COMPRESSION_OPTIONS
default JFFS2_CMODE_PRIORITY
depends on JFFS2_FS
help
You can set here the default compression mode of JFFS2 from
the available compression modes. Don't touch if unsure.
config JFFS2_CMODE_NONE
bool "no compression"
help
Uses no compression.
config JFFS2_CMODE_PRIORITY
bool "priority"
help
Tries the compressors in a predefined order and chooses the first
successful one.
config JFFS2_CMODE_SIZE
bool "size"
help
Tries all compressors and chooses the one which has the smallest
result.
config JFFS2_CMODE_FAVOURLZO
bool "Favour LZO"
help
Tries all compressors and chooses the one which has the smallest
result but gives some preference to LZO (which has faster
decompression) at the expense of size.
endchoice

30
fs/jffs2/LICENCE Normal file

@@ -0,0 +1,30 @@
The files in this directory and elsewhere which refer to this LICENCE
file are part of JFFS2, the Journalling Flash File System v2.
Copyright © 2001-2007 Red Hat, Inc. and others
JFFS2 is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 or (at your option) any later
version.
JFFS2 is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License along
with JFFS2; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
As a special exception, if other files instantiate templates or use
macros or inline functions from these files, or you compile these
files and link them with other works to produce a work based on these
files, these files do not by themselves cause the resulting work to be
covered by the GNU General Public License. However the source code for
these files must still be made available in accordance with section (3)
of the GNU General Public License.
This exception does not invalidate any other reasons why a work based on
this file might be covered by the GNU General Public License.

21
fs/jffs2/Makefile Normal file

@@ -0,0 +1,21 @@
#
# Makefile for the Linux Journalling Flash File System v2 (JFFS2)
#
#
obj-$(CONFIG_JFFS2_FS) += jffs2.o
jffs2-y := compr.o dir.o file.o ioctl.o nodelist.o malloc.o
jffs2-y += read.o nodemgmt.o readinode.o write.o scan.o gc.o
jffs2-y += symlink.o build.o erase.o background.o fs.o writev.o
jffs2-y += super.o debug.o
jffs2-$(CONFIG_JFFS2_FS_WRITEBUFFER) += wbuf.o
jffs2-$(CONFIG_JFFS2_FS_XATTR) += xattr.o xattr_trusted.o xattr_user.o
jffs2-$(CONFIG_JFFS2_FS_SECURITY) += security.o
jffs2-$(CONFIG_JFFS2_FS_POSIX_ACL) += acl.o
jffs2-$(CONFIG_JFFS2_RUBIN) += compr_rubin.o
jffs2-$(CONFIG_JFFS2_RTIME) += compr_rtime.o
jffs2-$(CONFIG_JFFS2_ZLIB) += compr_zlib.o
jffs2-$(CONFIG_JFFS2_LZO) += compr_lzo.o
jffs2-$(CONFIG_JFFS2_SUMMARY) += summary.o

172
fs/jffs2/README.Locking Normal file

@@ -0,0 +1,172 @@
JFFS2 LOCKING DOCUMENTATION
---------------------------
At least theoretically, JFFS2 does not require the Big Kernel Lock
(BKL), which was always helpfully obtained for it by Linux 2.4 VFS
code. It has its own locking, as described below.
This document attempts to describe the existing locking rules for
JFFS2. It is not expected to remain perfectly up to date, but ought to
be fairly close.
alloc_sem
---------
The alloc_sem is a per-filesystem mutex, used primarily to ensure
contiguous allocation of space on the medium. It is automatically
obtained during space allocations (jffs2_reserve_space()) and freed
upon write completion (jffs2_complete_reservation()). Note that
the garbage collector will obtain this right at the beginning of
jffs2_garbage_collect_pass() and release it at the end, thereby
preventing any other write activity on the file system during a
garbage collect pass.
When writing new nodes, the alloc_sem must be held until the new nodes
have been properly linked into the data structures for the inode to
which they belong. This is for the benefit of NAND flash - adding new
nodes to an inode may obsolete old ones, and by holding the alloc_sem
until this happens we ensure that any data in the write-buffer at the
time this happens are part of the new node, not just something that
was written afterwards. Hence, we can ensure the newly-obsoleted nodes
don't actually get erased until the write-buffer has been flushed to
the medium.
With the introduction of NAND flash support and the write-buffer,
the alloc_sem is also used to protect the wbuf-related members of the
jffs2_sb_info structure. Atomically reading the wbuf_len member to see
if the wbuf is currently holding any data is permitted, though.
Ordering constraints: See f->sem.
File Mutex f->sem
---------------------
This is the JFFS2-internal equivalent of the inode mutex i->i_sem.
It protects the contents of the jffs2_inode_info private inode data,
including the linked list of node fragments (but see the notes below on
erase_completion_lock), etc.
The reason that the i_sem itself isn't used for this purpose is to
avoid deadlocks with garbage collection -- the VFS will lock the i_sem
before calling a function which may need to allocate space. The
allocation may trigger garbage-collection, which may need to move a
node belonging to the inode which was locked in the first place by the
VFS. If the garbage collection code were to attempt to lock the i_sem
of the inode from which it's garbage-collecting a physical node, this
would lead to deadlock, unless we played games with unlocking the i_sem
before calling the space allocation functions.
Instead of playing such games, we just have an extra internal
mutex, which is obtained by the garbage collection code and also
by the normal file system code _after_ allocation of space.
Ordering constraints:
1. Never attempt to allocate space or lock alloc_sem with
any f->sem held.
2. Never attempt to lock two file mutexes in one thread.
No ordering rules have been made for doing so.
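A minimal sketch of this ordering, assuming the reserve/complete API named
above (the argument list and the locals est_size/alloc_len are illustrative,
not taken from this commit):

	ret = jffs2_reserve_space(c, est_size, &alloc_len, ALLOC_NORMAL,
				  JFFS2_SUMMARY_INODE_SIZE);	/* takes c->alloc_sem */
	if (ret)
		return ret;
	mutex_lock(&f->sem);		/* per-inode lock only after allocation */
	/* ... write the new node and link it into the inode's lists ... */
	mutex_unlock(&f->sem);
	jffs2_complete_reservation(c);	/* releases c->alloc_sem */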
erase_completion_lock spinlock
------------------------------
This is used to serialise access to the eraseblock lists, to the
per-eraseblock lists of physical jffs2_raw_node_ref structures, and
(NB) the per-inode list of physical nodes. The latter is a special
case - see below.
As the MTD API no longer permits erase-completion callback functions
to be called from bottom-half (timer) context (on the basis that nobody
ever actually implemented such a thing), it's now sufficient to use
a simple spin_lock() rather than spin_lock_bh().
Note that the per-inode list of physical nodes (f->nodes) is a special
case. Any changes to _valid_ nodes (i.e. ->flash_offset & 1 == 0) in
the list are protected by the file mutex f->sem. But the erase code
may remove _obsolete_ nodes from the list while holding only the
erase_completion_lock. So you can walk the list only while holding the
erase_completion_lock, and can drop the lock temporarily mid-walk as
long as the pointer you're holding is to a _valid_ node, not an
obsolete one.
The erase_completion_lock is also used to protect the c->gc_task
pointer when the garbage collection thread exits. The code to kill the
GC thread locks it, sends the signal, then unlocks it - while the GC
thread itself locks it, zeroes c->gc_task, then unlocks on the exit path.
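A hedged sketch of the walk described above, taking ic to be the inode's
jffs2_inode_cache (the list is terminated by a pointer back to ic, as the
build code in this commit shows):

	struct jffs2_raw_node_ref *raw;

	spin_lock(&c->erase_completion_lock);
	for (raw = ic->nodes; raw != (void *)ic; raw = raw->next_in_ino) {
		/* the lock may be dropped mid-walk only while 'raw' points to
		   a _valid_ node, i.e. (raw->flash_offset & 1) == 0 */
	}
	spin_unlock(&c->erase_completion_lock);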
inocache_lock spinlock
----------------------
This spinlock protects the hashed list (c->inocache_list) of the
in-core jffs2_inode_cache objects (each inode in JFFS2 has the
corresponding jffs2_inode_cache object). So, the inocache_lock
has to be locked while walking the c->inocache_list hash buckets.
This spinlock also covers allocation of new inode numbers, which is
currently just '++c->highest_ino', but might one day get more complicated
if we need to deal with wrapping after 4 billion inode numbers are used.
Note, the f->sem guarantees that the corresponding jffs2_inode_cache
will not be removed. So, it is allowed to access it without locking
the inocache_lock spinlock.
Ordering constraints:
If both erase_completion_lock and inocache_lock are needed, the
erase_completion_lock has to be acquired first.
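For example, allocating a new inode number under this rule is simply
(hedged sketch):

	spin_lock(&c->inocache_lock);
	ino = ++c->highest_ino;
	spin_unlock(&c->inocache_lock);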
erase_free_sem
--------------
This mutex is only used by the erase code which frees obsolete node
references and the jffs2_garbage_collect_deletion_dirent() function.
The latter function on NAND flash must read _obsolete_ nodes to
determine whether the 'deletion dirent' under consideration can be
discarded or whether it is still required to show that an inode has
been unlinked. Because reading from the flash may sleep, the
erase_completion_lock cannot be held, so an alternative, more
heavyweight lock was required to prevent the erase code from freeing
the jffs2_raw_node_ref structures in question while the garbage
collection code is looking at them.
Suggestions for alternative solutions to this problem would be welcomed.
wbuf_sem
--------
This read/write semaphore protects against concurrent access to the
write-behind buffer ('wbuf') used for flash chips where we must write
in blocks. It protects both the contents of the wbuf and the metadata
which indicates which flash region (if any) is currently covered by
the buffer.
Ordering constraints:
Lock wbuf_sem last, after the alloc_sem and/or f->sem.
c->xattr_sem
------------
This read/write semaphore protects against concurrent access to the
xattr-related objects, which include structures in the superblock and ic->xref.
On read-only paths the read semaphore gives sufficient exclusion; the
write semaphore must be held when creating, updating or deleting any
xattr-related object.
Once xattr_sem is released, there is no guarantee that those objects
still exist. A process that discovers, while holding only the read
semaphore, that an update is needed must therefore retry the operation.
For example, do_jffs2_getxattr() first scans the xref and xdatum lists
under the read semaphore; if the name/value pair has to be loaded from
the medium, it releases the read semaphore and retries the whole process
under the write semaphore.
Ordering constraints:
Lock xattr_sem last, after the alloc_sem.
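A hedged sketch of the retry pattern described for do_jffs2_getxattr():

	down_read(&c->xattr_sem);
	/* scan xref/xdatum; if the name/value pair is already in memory, use it */
	up_read(&c->xattr_sem);

	/* value must be loaded from the medium -- retry under the write semaphore */
	down_write(&c->xattr_sem);
	/* re-scan (the objects may have vanished meanwhile), then load from flash */
	up_write(&c->xattr_sem);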

37
fs/jffs2/TODO Normal file

@@ -0,0 +1,37 @@
- support asynchronous operation -- add a per-fs 'reserved_space' count,
let each outstanding write reserve the _maximum_ amount of physical
space it could take. Let GC flush the outstanding writes because the
reservations will necessarily be pessimistic. With this we could even
do shared writable mmap, if we can have a fs hook for do_wp_page() to
make the reservation.
- disable compression in commit_write()?
- fine-tune the allocation / GC thresholds
- chattr support - turning on/off and tuning compression per-inode
- checkpointing (do we need this? scan is quite fast)
- make the scan code populate real inodes so read_inode just after
mount doesn't have to read the flash twice for large files.
Make this a per-inode option, changeable with chattr, so you can
decide which inodes should be in-core immediately after mount.
- test, test, test
- NAND flash support:
- almost done :)
- use bad block check instead of the hardwired byte check
- Optimisations:
- Split writes so they go to two separate blocks rather than just c->nextblock.
By writing _new_ nodes to one block, and garbage-collected REF_PRISTINE
nodes to a different one, we can separate clean nodes from those which
are likely to become dirty, and end up with blocks which are each far
closer to 100% or 0% clean, hence speeding up later GC progress dramatically.
- Stop keeping name in-core with struct jffs2_full_dirent. If we keep the hash in
the full dirent, we only need to go to the flash in lookup() when we think we've
got a match, and in readdir().
- Doubly-linked next_in_ino list to allow us to free obsoleted raw_node_refs immediately?
- Remove size from jffs2_raw_node_frag.
dedekind:
1. __jffs2_flush_wbuf() has a strange 'pad' parameter. Eliminate.
2. get_sb()->build_fs()->scan() path... Why does get_sb() remove scan()'s crap in
case of failure? scan() does not clean everything. Fix.

309
fs/jffs2/acl.c Normal file

@@ -0,0 +1,309 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2006 NEC Corporation
*
* Created by KaiGai Kohei <kaigai@ak.jp.nec.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/time.h>
#include <linux/crc32.h>
#include <linux/jffs2.h>
#include <linux/xattr.h>
#include <linux/posix_acl_xattr.h>
#include <linux/mtd/mtd.h>
#include "nodelist.h"
static size_t jffs2_acl_size(int count)
{
if (count <= 4) {
return sizeof(struct jffs2_acl_header)
+ count * sizeof(struct jffs2_acl_entry_short);
} else {
return sizeof(struct jffs2_acl_header)
+ 4 * sizeof(struct jffs2_acl_entry_short)
+ (count - 4) * sizeof(struct jffs2_acl_entry);
}
}
static int jffs2_acl_count(size_t size)
{
size_t s;
size -= sizeof(struct jffs2_acl_header);
if (size < 4 * sizeof(struct jffs2_acl_entry_short)) {
if (size % sizeof(struct jffs2_acl_entry_short))
return -1;
return size / sizeof(struct jffs2_acl_entry_short);
} else {
s = size - 4 * sizeof(struct jffs2_acl_entry_short);
if (s % sizeof(struct jffs2_acl_entry))
return -1;
return s / sizeof(struct jffs2_acl_entry) + 4;
}
}
static struct posix_acl *jffs2_acl_from_medium(void *value, size_t size)
{
void *end = value + size;
struct jffs2_acl_header *header = value;
struct jffs2_acl_entry *entry;
struct posix_acl *acl;
uint32_t ver;
int i, count;
if (!value)
return NULL;
if (size < sizeof(struct jffs2_acl_header))
return ERR_PTR(-EINVAL);
ver = je32_to_cpu(header->a_version);
if (ver != JFFS2_ACL_VERSION) {
JFFS2_WARNING("Invalid ACL version. (=%u)\n", ver);
return ERR_PTR(-EINVAL);
}
value += sizeof(struct jffs2_acl_header);
count = jffs2_acl_count(size);
if (count < 0)
return ERR_PTR(-EINVAL);
if (count == 0)
return NULL;
acl = posix_acl_alloc(count, GFP_KERNEL);
if (!acl)
return ERR_PTR(-ENOMEM);
for (i=0; i < count; i++) {
entry = value;
if (value + sizeof(struct jffs2_acl_entry_short) > end)
goto fail;
acl->a_entries[i].e_tag = je16_to_cpu(entry->e_tag);
acl->a_entries[i].e_perm = je16_to_cpu(entry->e_perm);
switch (acl->a_entries[i].e_tag) {
case ACL_USER_OBJ:
case ACL_GROUP_OBJ:
case ACL_MASK:
case ACL_OTHER:
value += sizeof(struct jffs2_acl_entry_short);
break;
case ACL_USER:
value += sizeof(struct jffs2_acl_entry);
if (value > end)
goto fail;
acl->a_entries[i].e_uid =
make_kuid(&init_user_ns,
je32_to_cpu(entry->e_id));
break;
case ACL_GROUP:
value += sizeof(struct jffs2_acl_entry);
if (value > end)
goto fail;
acl->a_entries[i].e_gid =
make_kgid(&init_user_ns,
je32_to_cpu(entry->e_id));
break;
default:
goto fail;
}
}
if (value != end)
goto fail;
return acl;
fail:
posix_acl_release(acl);
return ERR_PTR(-EINVAL);
}
static void *jffs2_acl_to_medium(const struct posix_acl *acl, size_t *size)
{
struct jffs2_acl_header *header;
struct jffs2_acl_entry *entry;
void *e;
size_t i;
*size = jffs2_acl_size(acl->a_count);
header = kmalloc(sizeof(*header) + acl->a_count * sizeof(*entry), GFP_KERNEL);
if (!header)
return ERR_PTR(-ENOMEM);
header->a_version = cpu_to_je32(JFFS2_ACL_VERSION);
e = header + 1;
for (i=0; i < acl->a_count; i++) {
const struct posix_acl_entry *acl_e = &acl->a_entries[i];
entry = e;
entry->e_tag = cpu_to_je16(acl_e->e_tag);
entry->e_perm = cpu_to_je16(acl_e->e_perm);
switch(acl_e->e_tag) {
case ACL_USER:
entry->e_id = cpu_to_je32(
from_kuid(&init_user_ns, acl_e->e_uid));
e += sizeof(struct jffs2_acl_entry);
break;
case ACL_GROUP:
entry->e_id = cpu_to_je32(
from_kgid(&init_user_ns, acl_e->e_gid));
e += sizeof(struct jffs2_acl_entry);
break;
case ACL_USER_OBJ:
case ACL_GROUP_OBJ:
case ACL_MASK:
case ACL_OTHER:
e += sizeof(struct jffs2_acl_entry_short);
break;
default:
goto fail;
}
}
return header;
fail:
kfree(header);
return ERR_PTR(-EINVAL);
}
struct posix_acl *jffs2_get_acl(struct inode *inode, int type)
{
struct posix_acl *acl;
char *value = NULL;
int rc, xprefix;
switch (type) {
case ACL_TYPE_ACCESS:
xprefix = JFFS2_XPREFIX_ACL_ACCESS;
break;
case ACL_TYPE_DEFAULT:
xprefix = JFFS2_XPREFIX_ACL_DEFAULT;
break;
default:
BUG();
}
rc = do_jffs2_getxattr(inode, xprefix, "", NULL, 0);
if (rc > 0) {
value = kmalloc(rc, GFP_KERNEL);
if (!value)
return ERR_PTR(-ENOMEM);
rc = do_jffs2_getxattr(inode, xprefix, "", value, rc);
}
if (rc > 0) {
acl = jffs2_acl_from_medium(value, rc);
} else if (rc == -ENODATA || rc == -ENOSYS) {
acl = NULL;
} else {
acl = ERR_PTR(rc);
}
kfree(value);
if (!IS_ERR(acl))
set_cached_acl(inode, type, acl);
return acl;
}
static int __jffs2_set_acl(struct inode *inode, int xprefix, struct posix_acl *acl)
{
char *value = NULL;
size_t size = 0;
int rc;
if (acl) {
value = jffs2_acl_to_medium(acl, &size);
if (IS_ERR(value))
return PTR_ERR(value);
}
rc = do_jffs2_setxattr(inode, xprefix, "", value, size, 0);
if (!value && rc == -ENODATA)
rc = 0;
kfree(value);
return rc;
}
int jffs2_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
int rc, xprefix;
switch (type) {
case ACL_TYPE_ACCESS:
xprefix = JFFS2_XPREFIX_ACL_ACCESS;
if (acl) {
umode_t mode = inode->i_mode;
rc = posix_acl_equiv_mode(acl, &mode);
if (rc < 0)
return rc;
if (inode->i_mode != mode) {
struct iattr attr;
attr.ia_valid = ATTR_MODE | ATTR_CTIME;
attr.ia_mode = mode;
attr.ia_ctime = CURRENT_TIME_SEC;
rc = jffs2_do_setattr(inode, &attr);
if (rc < 0)
return rc;
}
if (rc == 0)
acl = NULL;
}
break;
case ACL_TYPE_DEFAULT:
xprefix = JFFS2_XPREFIX_ACL_DEFAULT;
if (!S_ISDIR(inode->i_mode))
return acl ? -EACCES : 0;
break;
default:
return -EINVAL;
}
rc = __jffs2_set_acl(inode, xprefix, acl);
if (!rc)
set_cached_acl(inode, type, acl);
return rc;
}
int jffs2_init_acl_pre(struct inode *dir_i, struct inode *inode, umode_t *i_mode)
{
struct posix_acl *default_acl, *acl;
int rc;
cache_no_acl(inode);
rc = posix_acl_create(dir_i, i_mode, &default_acl, &acl);
if (rc)
return rc;
if (default_acl) {
set_cached_acl(inode, ACL_TYPE_DEFAULT, default_acl);
posix_acl_release(default_acl);
}
if (acl) {
set_cached_acl(inode, ACL_TYPE_ACCESS, acl);
posix_acl_release(acl);
}
return 0;
}
int jffs2_init_acl_post(struct inode *inode)
{
int rc;
if (inode->i_default_acl) {
rc = __jffs2_set_acl(inode, JFFS2_XPREFIX_ACL_DEFAULT, inode->i_default_acl);
if (rc)
return rc;
}
if (inode->i_acl) {
rc = __jffs2_set_acl(inode, JFFS2_XPREFIX_ACL_ACCESS, inode->i_acl);
if (rc)
return rc;
}
return 0;
}

41
fs/jffs2/acl.h Normal file

@@ -0,0 +1,41 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2006 NEC Corporation
*
* Created by KaiGai Kohei <kaigai@ak.jp.nec.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
struct jffs2_acl_entry {
jint16_t e_tag;
jint16_t e_perm;
jint32_t e_id;
};
struct jffs2_acl_entry_short {
jint16_t e_tag;
jint16_t e_perm;
};
struct jffs2_acl_header {
jint32_t a_version;
};
#ifdef CONFIG_JFFS2_FS_POSIX_ACL
struct posix_acl *jffs2_get_acl(struct inode *inode, int type);
int jffs2_set_acl(struct inode *inode, struct posix_acl *acl, int type);
extern int jffs2_init_acl_pre(struct inode *, struct inode *, umode_t *);
extern int jffs2_init_acl_post(struct inode *);
#else
#define jffs2_get_acl (NULL)
#define jffs2_set_acl (NULL)
#define jffs2_init_acl_pre(dir_i,inode,mode) (0)
#define jffs2_init_acl_post(inode) (0)
#endif /* CONFIG_JFFS2_FS_POSIX_ACL */

168
fs/jffs2/background.c Normal file

@@ -0,0 +1,168 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/jffs2.h>
#include <linux/mtd/mtd.h>
#include <linux/completion.h>
#include <linux/sched.h>
#include <linux/freezer.h>
#include <linux/kthread.h>
#include "nodelist.h"
static int jffs2_garbage_collect_thread(void *);
void jffs2_garbage_collect_trigger(struct jffs2_sb_info *c)
{
assert_spin_locked(&c->erase_completion_lock);
if (c->gc_task && jffs2_thread_should_wake(c))
send_sig(SIGHUP, c->gc_task, 1);
}
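/*
 * Editorial note: as the assert above documents, callers are expected to hold
 * c->erase_completion_lock around the trigger, e.g.
 *
 *	spin_lock(&c->erase_completion_lock);
 *	jffs2_garbage_collect_trigger(c);
 *	spin_unlock(&c->erase_completion_lock);
 */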
/* This must only ever be called when no GC thread is currently running */
int jffs2_start_garbage_collect_thread(struct jffs2_sb_info *c)
{
struct task_struct *tsk;
int ret = 0;
BUG_ON(c->gc_task);
init_completion(&c->gc_thread_start);
init_completion(&c->gc_thread_exit);
tsk = kthread_run(jffs2_garbage_collect_thread, c, "jffs2_gcd_mtd%d", c->mtd->index);
if (IS_ERR(tsk)) {
pr_warn("fork failed for JFFS2 garbage collect thread: %ld\n",
-PTR_ERR(tsk));
complete(&c->gc_thread_exit);
ret = PTR_ERR(tsk);
} else {
/* Wait for it... */
jffs2_dbg(1, "Garbage collect thread is pid %d\n", tsk->pid);
wait_for_completion(&c->gc_thread_start);
ret = tsk->pid;
}
return ret;
}
void jffs2_stop_garbage_collect_thread(struct jffs2_sb_info *c)
{
int wait = 0;
spin_lock(&c->erase_completion_lock);
if (c->gc_task) {
jffs2_dbg(1, "Killing GC task %d\n", c->gc_task->pid);
send_sig(SIGKILL, c->gc_task, 1);
wait = 1;
}
spin_unlock(&c->erase_completion_lock);
if (wait)
wait_for_completion(&c->gc_thread_exit);
}
static int jffs2_garbage_collect_thread(void *_c)
{
struct jffs2_sb_info *c = _c;
sigset_t hupmask;
siginitset(&hupmask, sigmask(SIGHUP));
allow_signal(SIGKILL);
allow_signal(SIGSTOP);
allow_signal(SIGCONT);
allow_signal(SIGHUP);
c->gc_task = current;
complete(&c->gc_thread_start);
set_user_nice(current, 10);
set_freezable();
for (;;) {
sigprocmask(SIG_UNBLOCK, &hupmask, NULL);
again:
spin_lock(&c->erase_completion_lock);
if (!jffs2_thread_should_wake(c)) {
set_current_state (TASK_INTERRUPTIBLE);
spin_unlock(&c->erase_completion_lock);
jffs2_dbg(1, "%s(): sleeping...\n", __func__);
schedule();
} else {
spin_unlock(&c->erase_completion_lock);
}
/* Problem - immediately after bootup, the GCD spends a lot
* of time in places like jffs2_kill_fragtree(); so much so
* that userspace processes (like gdm and X) are starved
* despite plenty of cond_resched()s and renicing. Yield()
* doesn't help, either (presumably because userspace and GCD
* are generally competing for a higher latency resource -
* disk).
* This forces the GCD to slow the hell down. Pulling an
* inode in with read_inode() is much preferable to having
* the GC thread get there first. */
schedule_timeout_interruptible(msecs_to_jiffies(50));
if (kthread_should_stop()) {
jffs2_dbg(1, "%s(): kthread_stop() called\n", __func__);
goto die;
}
/* Put_super will send a SIGKILL and then wait on the sem.
*/
while (signal_pending(current) || freezing(current)) {
siginfo_t info;
unsigned long signr;
if (try_to_freeze())
goto again;
signr = dequeue_signal_lock(current, &current->blocked, &info);
switch(signr) {
case SIGSTOP:
jffs2_dbg(1, "%s(): SIGSTOP received\n",
__func__);
set_current_state(TASK_STOPPED);
schedule();
break;
case SIGKILL:
jffs2_dbg(1, "%s(): SIGKILL received\n",
__func__);
goto die;
case SIGHUP:
jffs2_dbg(1, "%s(): SIGHUP received\n",
__func__);
break;
default:
jffs2_dbg(1, "%s(): signal %ld received\n",
__func__, signr);
}
}
/* We don't want SIGHUP to interrupt us. STOP and KILL are OK though. */
sigprocmask(SIG_BLOCK, &hupmask, NULL);
jffs2_dbg(1, "%s(): pass\n", __func__);
if (jffs2_garbage_collect_pass(c) == -ENOSPC) {
pr_notice("No space for garbage collection. Aborting GC thread\n");
goto die;
}
}
die:
spin_lock(&c->erase_completion_lock);
c->gc_task = NULL;
spin_unlock(&c->erase_completion_lock);
complete_and_exit(&c->gc_thread_exit, 0);
}

394
fs/jffs2/build.c Normal file

@@ -0,0 +1,394 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mtd/mtd.h>
#include "nodelist.h"
static void jffs2_build_remove_unlinked_inode(struct jffs2_sb_info *,
struct jffs2_inode_cache *, struct jffs2_full_dirent **);
static inline struct jffs2_inode_cache *
first_inode_chain(int *i, struct jffs2_sb_info *c)
{
for (; *i < c->inocache_hashsize; (*i)++) {
if (c->inocache_list[*i])
return c->inocache_list[*i];
}
return NULL;
}
static inline struct jffs2_inode_cache *
next_inode(int *i, struct jffs2_inode_cache *ic, struct jffs2_sb_info *c)
{
/* More in this chain? */
if (ic->next)
return ic->next;
(*i)++;
return first_inode_chain(i, c);
}
#define for_each_inode(i, c, ic) \
for (i = 0, ic = first_inode_chain(&i, (c)); \
ic; \
ic = next_inode(&i, ic, (c)))
static void jffs2_build_inode_pass1(struct jffs2_sb_info *c,
struct jffs2_inode_cache *ic)
{
struct jffs2_full_dirent *fd;
dbg_fsbuild("building directory inode #%u\n", ic->ino);
/* For each child, increase nlink */
for(fd = ic->scan_dents; fd; fd = fd->next) {
struct jffs2_inode_cache *child_ic;
if (!fd->ino)
continue;
/* we can get high latency here with huge directories */
child_ic = jffs2_get_ino_cache(c, fd->ino);
if (!child_ic) {
dbg_fsbuild("child \"%s\" (ino #%u) of dir ino #%u doesn't exist!\n",
fd->name, fd->ino, ic->ino);
jffs2_mark_node_obsolete(c, fd->raw);
continue;
}
if (fd->type == DT_DIR) {
if (child_ic->pino_nlink) {
JFFS2_ERROR("child dir \"%s\" (ino #%u) of dir ino #%u appears to be a hard link\n",
fd->name, fd->ino, ic->ino);
/* TODO: What do we do about it? */
} else {
child_ic->pino_nlink = ic->ino;
}
} else
child_ic->pino_nlink++;
dbg_fsbuild("increased nlink for child \"%s\" (ino #%u)\n", fd->name, fd->ino);
/* Can't free scan_dents so far. We might need them in pass 2 */
}
}
/* Scan plan:
- Scan physical nodes. Build map of inodes/dirents. Allocate inocaches as we go
- Scan directory tree from top down, setting nlink in inocaches
- Scan inocaches for inodes with nlink==0
*/
static int jffs2_build_filesystem(struct jffs2_sb_info *c)
{
int ret;
int i;
struct jffs2_inode_cache *ic;
struct jffs2_full_dirent *fd;
struct jffs2_full_dirent *dead_fds = NULL;
dbg_fsbuild("build FS data structures\n");
/* First, scan the medium and build all the inode caches with
lists of physical nodes */
c->flags |= JFFS2_SB_FLAG_SCANNING;
ret = jffs2_scan_medium(c);
c->flags &= ~JFFS2_SB_FLAG_SCANNING;
if (ret)
goto exit;
dbg_fsbuild("scanned flash completely\n");
jffs2_dbg_dump_block_lists_nolock(c);
dbg_fsbuild("pass 1 starting\n");
c->flags |= JFFS2_SB_FLAG_BUILDING;
/* Now scan the directory tree, increasing nlink according to every dirent found. */
for_each_inode(i, c, ic) {
if (ic->scan_dents) {
jffs2_build_inode_pass1(c, ic);
cond_resched();
}
}
dbg_fsbuild("pass 1 complete\n");
/* Next, scan for inodes with nlink == 0 and remove them. If
they were directories, then decrement the nlink of their
children too, and repeat the scan. As that's going to be
a fairly uncommon occurrence, it's not so evil to do it this
way. Recursion bad. */
dbg_fsbuild("pass 2 starting\n");
for_each_inode(i, c, ic) {
if (ic->pino_nlink)
continue;
jffs2_build_remove_unlinked_inode(c, ic, &dead_fds);
cond_resched();
}
dbg_fsbuild("pass 2a starting\n");
while (dead_fds) {
fd = dead_fds;
dead_fds = fd->next;
ic = jffs2_get_ino_cache(c, fd->ino);
if (ic)
jffs2_build_remove_unlinked_inode(c, ic, &dead_fds);
jffs2_free_full_dirent(fd);
}
dbg_fsbuild("pass 2a complete\n");
dbg_fsbuild("freeing temporary data structures\n");
/* Finally, we can scan again and free the dirent structs */
for_each_inode(i, c, ic) {
while(ic->scan_dents) {
fd = ic->scan_dents;
ic->scan_dents = fd->next;
jffs2_free_full_dirent(fd);
}
ic->scan_dents = NULL;
cond_resched();
}
jffs2_build_xattr_subsystem(c);
c->flags &= ~JFFS2_SB_FLAG_BUILDING;
dbg_fsbuild("FS build complete\n");
/* Rotate the lists by some number to ensure wear levelling */
jffs2_rotate_lists(c);
ret = 0;
exit:
if (ret) {
for_each_inode(i, c, ic) {
while(ic->scan_dents) {
fd = ic->scan_dents;
ic->scan_dents = fd->next;
jffs2_free_full_dirent(fd);
}
}
jffs2_clear_xattr_subsystem(c);
}
return ret;
}
static void jffs2_build_remove_unlinked_inode(struct jffs2_sb_info *c,
struct jffs2_inode_cache *ic,
struct jffs2_full_dirent **dead_fds)
{
struct jffs2_raw_node_ref *raw;
struct jffs2_full_dirent *fd;
dbg_fsbuild("removing ino #%u with nlink == zero.\n", ic->ino);
raw = ic->nodes;
while (raw != (void *)ic) {
struct jffs2_raw_node_ref *next = raw->next_in_ino;
dbg_fsbuild("obsoleting node at 0x%08x\n", ref_offset(raw));
jffs2_mark_node_obsolete(c, raw);
raw = next;
}
if (ic->scan_dents) {
int whinged = 0;
dbg_fsbuild("inode #%u was a directory which may have children...\n", ic->ino);
while(ic->scan_dents) {
struct jffs2_inode_cache *child_ic;
fd = ic->scan_dents;
ic->scan_dents = fd->next;
if (!fd->ino) {
/* It's a deletion dirent. Ignore it */
dbg_fsbuild("child \"%s\" is a deletion dirent, skipping...\n", fd->name);
jffs2_free_full_dirent(fd);
continue;
}
if (!whinged)
whinged = 1;
dbg_fsbuild("removing child \"%s\", ino #%u\n", fd->name, fd->ino);
child_ic = jffs2_get_ino_cache(c, fd->ino);
if (!child_ic) {
dbg_fsbuild("cannot remove child \"%s\", ino #%u, because it doesn't exist\n",
fd->name, fd->ino);
jffs2_free_full_dirent(fd);
continue;
}
/* Reduce nlink of the child. If it's now zero, stick it on the
dead_fds list to be cleaned up later. Else just free the fd */
if (fd->type == DT_DIR)
child_ic->pino_nlink = 0;
else
child_ic->pino_nlink--;
if (!child_ic->pino_nlink) {
dbg_fsbuild("inode #%u (\"%s\") now has no links; adding to dead_fds list.\n",
fd->ino, fd->name);
fd->next = *dead_fds;
*dead_fds = fd;
} else {
dbg_fsbuild("inode #%u (\"%s\") has now got nlink %d. Ignoring.\n",
fd->ino, fd->name, child_ic->pino_nlink);
jffs2_free_full_dirent(fd);
}
}
}
/*
We don't delete the inocache from the hash list and free it yet.
The erase code will do that, when all the nodes are completely gone.
*/
}
static void jffs2_calc_trigger_levels(struct jffs2_sb_info *c)
{
uint32_t size;
/* Deletion should almost _always_ be allowed. We're fairly
buggered once we stop allowing people to delete stuff
because there's not enough free space... */
c->resv_blocks_deletion = 2;
/* Be conservative about how much space we need before we allow writes.
On top of that which is required for deletia, require an extra 2%
of the medium to be available, for overhead caused by nodes being
split across blocks, etc. */
size = c->flash_size / 50; /* 2% of flash size */
size += c->nr_blocks * 100; /* And 100 bytes per eraseblock */
size += c->sector_size - 1; /* ... and round up */
c->resv_blocks_write = c->resv_blocks_deletion + (size / c->sector_size);
/* When do we let the GC thread run in the background */
c->resv_blocks_gctrigger = c->resv_blocks_write + 1;
/* When do we allow garbage collection to merge nodes to make
long-term progress at the expense of short-term space exhaustion? */
c->resv_blocks_gcmerge = c->resv_blocks_deletion + 1;
/* When do we allow garbage collection to eat from bad blocks rather
than actually making progress? */
c->resv_blocks_gcbad = 0;//c->resv_blocks_deletion + 2;
/* What number of 'very dirty' eraseblocks do we allow before we
trigger the GC thread even if we don't _need_ the space. When we
can't mark nodes obsolete on the medium, the old dirty nodes cause
performance problems because we have to inspect and discard them. */
c->vdirty_blocks_gctrigger = c->resv_blocks_gctrigger;
if (jffs2_can_mark_obsolete(c))
c->vdirty_blocks_gctrigger *= 10;
/* If there's less than this amount of dirty space, don't bother
trying to GC to make more space. It'll be a fruitless task */
c->nospc_dirty_size = c->sector_size + (c->flash_size / 100);
dbg_fsbuild("trigger levels (size %d KiB, block size %d KiB, %d blocks)\n",
c->flash_size / 1024, c->sector_size / 1024, c->nr_blocks);
dbg_fsbuild("Blocks required to allow deletion: %d (%d KiB)\n",
c->resv_blocks_deletion, c->resv_blocks_deletion*c->sector_size/1024);
dbg_fsbuild("Blocks required to allow writes: %d (%d KiB)\n",
c->resv_blocks_write, c->resv_blocks_write*c->sector_size/1024);
dbg_fsbuild("Blocks required to quiesce GC thread: %d (%d KiB)\n",
c->resv_blocks_gctrigger, c->resv_blocks_gctrigger*c->sector_size/1024);
dbg_fsbuild("Blocks required to allow GC merges: %d (%d KiB)\n",
c->resv_blocks_gcmerge, c->resv_blocks_gcmerge*c->sector_size/1024);
dbg_fsbuild("Blocks required to GC bad blocks: %d (%d KiB)\n",
c->resv_blocks_gcbad, c->resv_blocks_gcbad*c->sector_size/1024);
dbg_fsbuild("Amount of dirty space required to GC: %d bytes\n",
c->nospc_dirty_size);
dbg_fsbuild("Very dirty blocks before GC triggered: %d\n",
c->vdirty_blocks_gctrigger);
}
int jffs2_do_mount_fs(struct jffs2_sb_info *c)
{
int ret;
int i;
int size;
c->free_size = c->flash_size;
c->nr_blocks = c->flash_size / c->sector_size;
size = sizeof(struct jffs2_eraseblock) * c->nr_blocks;
#ifndef __ECOS
if (jffs2_blocks_use_vmalloc(c))
c->blocks = vzalloc(size);
else
#endif
c->blocks = kzalloc(size, GFP_KERNEL);
if (!c->blocks)
return -ENOMEM;
for (i=0; i<c->nr_blocks; i++) {
INIT_LIST_HEAD(&c->blocks[i].list);
c->blocks[i].offset = i * c->sector_size;
c->blocks[i].free_size = c->sector_size;
}
INIT_LIST_HEAD(&c->clean_list);
INIT_LIST_HEAD(&c->very_dirty_list);
INIT_LIST_HEAD(&c->dirty_list);
INIT_LIST_HEAD(&c->erasable_list);
INIT_LIST_HEAD(&c->erasing_list);
INIT_LIST_HEAD(&c->erase_checking_list);
INIT_LIST_HEAD(&c->erase_pending_list);
INIT_LIST_HEAD(&c->erasable_pending_wbuf_list);
INIT_LIST_HEAD(&c->erase_complete_list);
INIT_LIST_HEAD(&c->free_list);
INIT_LIST_HEAD(&c->bad_list);
INIT_LIST_HEAD(&c->bad_used_list);
c->highest_ino = 1;
c->summary = NULL;
ret = jffs2_sum_init(c);
if (ret)
goto out_free;
if (jffs2_build_filesystem(c)) {
dbg_fsbuild("build_fs failed\n");
jffs2_free_ino_caches(c);
jffs2_free_raw_node_refs(c);
ret = -EIO;
goto out_free;
}
jffs2_calc_trigger_levels(c);
return 0;
out_free:
#ifndef __ECOS
if (jffs2_blocks_use_vmalloc(c))
vfree(c->blocks);
else
#endif
kfree(c->blocks);
return ret;
}
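As a worked example of jffs2_calc_trigger_levels() above (hypothetical
geometry, not taken from the source): for a 64 MiB flash with 64 KiB
eraseblocks (1024 blocks),

	size = 67108864/50 + 1024*100 + 65535 = 1510112 bytes
	resv_blocks_write     = 2 + 1510112/65536 = 2 + 23 = 25 blocks
	resv_blocks_gctrigger = 26, resv_blocks_gcmerge = 3, resv_blocks_gcbad = 0
	nospc_dirty_size      = 65536 + 67108864/100 = 736624 bytes (about 719 KiB)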

418
fs/jffs2/compr.c Normal file

@@ -0,0 +1,418 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
* Copyright © 2004 Ferenc Havasi <havasi@inf.u-szeged.hu>,
* University of Szeged, Hungary
*
* Created by Arjan van de Ven <arjan@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "compr.h"
static DEFINE_SPINLOCK(jffs2_compressor_list_lock);
/* Available compressors are on this list */
static LIST_HEAD(jffs2_compressor_list);
/* Actual compression mode */
static int jffs2_compression_mode = JFFS2_COMPR_MODE_PRIORITY;
/* Statistics for blocks stored without compression */
static uint32_t none_stat_compr_blocks=0,none_stat_decompr_blocks=0,none_stat_compr_size=0;
/*
* Return 1 to use this compression
*/
static int jffs2_is_best_compression(struct jffs2_compressor *this,
struct jffs2_compressor *best, uint32_t size, uint32_t bestsize)
{
switch (jffs2_compression_mode) {
case JFFS2_COMPR_MODE_SIZE:
if (bestsize > size)
return 1;
return 0;
case JFFS2_COMPR_MODE_FAVOURLZO:
if ((this->compr == JFFS2_COMPR_LZO) && (bestsize > size))
return 1;
if ((best->compr != JFFS2_COMPR_LZO) && (bestsize > size))
return 1;
if ((this->compr == JFFS2_COMPR_LZO) && (bestsize > (size * FAVOUR_LZO_PERCENT / 100)))
return 1;
if ((bestsize * FAVOUR_LZO_PERCENT / 100) > size)
return 1;
return 0;
}
/* Shouldn't happen */
return 0;
}
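/*
 * Editorial example of the FAVOURLZO rule above, with hypothetical sizes and
 * FAVOUR_LZO_PERCENT = 80: if the current best is LZO at 1000 bytes, a zlib
 * result of 850 bytes loses (1000*80/100 = 800 is not > 850) while a zlib
 * result of 790 bytes wins.  Conversely, an LZO candidate displaces the best
 * whenever bestsize > size*80/100, i.e. as long as it is less than roughly
 * 25% larger than the best non-LZO result.
 */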
/*
* jffs2_selected_compress:
* @compr: Explicit compression type to use (ie, JFFS2_COMPR_ZLIB).
* If 0, just take the first available compression mode.
* @data_in: Pointer to uncompressed data
* @cpage_out: Pointer to returned pointer to buffer for compressed data
* @datalen: On entry, holds the amount of data available for compression.
* On exit, expected to hold the amount of data actually compressed.
* @cdatalen: On entry, holds the amount of space available for compressed
* data. On exit, expected to hold the actual size of the compressed
* data.
*
* Returns: the compression type used. Zero is used to show that the data
* could not be compressed; probably because we couldn't find the requested
* compression mode.
*/
static int jffs2_selected_compress(u8 compr, unsigned char *data_in,
unsigned char **cpage_out, u32 *datalen, u32 *cdatalen)
{
struct jffs2_compressor *this;
int err, ret = JFFS2_COMPR_NONE;
uint32_t orig_slen, orig_dlen;
char *output_buf;
output_buf = kmalloc(*cdatalen, GFP_KERNEL);
if (!output_buf) {
pr_warn("No memory for compressor allocation. Compression failed.\n");
return ret;
}
orig_slen = *datalen;
orig_dlen = *cdatalen;
spin_lock(&jffs2_compressor_list_lock);
list_for_each_entry(this, &jffs2_compressor_list, list) {
/* Skip decompress-only and disabled modules */
if (!this->compress || this->disabled)
continue;
/* Skip if not the desired compression type */
if (compr && (compr != this->compr))
continue;
/*
* Either compression type was unspecified, or we found our
* compressor; either way, we're good to go.
*/
this->usecount++;
spin_unlock(&jffs2_compressor_list_lock);
*datalen = orig_slen;
*cdatalen = orig_dlen;
err = this->compress(data_in, output_buf, datalen, cdatalen);
spin_lock(&jffs2_compressor_list_lock);
this->usecount--;
if (!err) {
/* Success */
ret = this->compr;
this->stat_compr_blocks++;
this->stat_compr_orig_size += *datalen;
this->stat_compr_new_size += *cdatalen;
break;
}
}
spin_unlock(&jffs2_compressor_list_lock);
if (ret == JFFS2_COMPR_NONE)
kfree(output_buf);
else
*cpage_out = output_buf;
return ret;
}
/* jffs2_compress:
* @data_in: Pointer to uncompressed data
* @cpage_out: Pointer to returned pointer to buffer for compressed data
* @datalen: On entry, holds the amount of data available for compression.
* On exit, expected to hold the amount of data actually compressed.
* @cdatalen: On entry, holds the amount of space available for compressed
* data. On exit, expected to hold the actual size of the compressed
* data.
*
* Returns: Lower byte to be stored with data indicating compression type used.
* Zero is used to show that the data could not be compressed - the
* compressed version was actually larger than the original.
* Upper byte will be used later. (soon)
*
* If the cdata buffer isn't large enough to hold all the uncompressed data,
* jffs2_compress should compress as much as will fit, and should set
* *datalen accordingly to show the amount of data which were compressed.
*/
uint16_t jffs2_compress(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
unsigned char *data_in, unsigned char **cpage_out,
uint32_t *datalen, uint32_t *cdatalen)
{
int ret = JFFS2_COMPR_NONE;
int mode, compr_ret;
struct jffs2_compressor *this, *best=NULL;
unsigned char *output_buf = NULL, *tmp_buf;
uint32_t orig_slen, orig_dlen;
uint32_t best_slen=0, best_dlen=0;
if (c->mount_opts.override_compr)
mode = c->mount_opts.compr;
else
mode = jffs2_compression_mode;
switch (mode) {
case JFFS2_COMPR_MODE_NONE:
break;
case JFFS2_COMPR_MODE_PRIORITY:
ret = jffs2_selected_compress(0, data_in, cpage_out, datalen,
cdatalen);
break;
case JFFS2_COMPR_MODE_SIZE:
case JFFS2_COMPR_MODE_FAVOURLZO:
orig_slen = *datalen;
orig_dlen = *cdatalen;
spin_lock(&jffs2_compressor_list_lock);
list_for_each_entry(this, &jffs2_compressor_list, list) {
/* Skip decompress-only backwards-compatibility and disabled modules */
if ((!this->compress)||(this->disabled))
continue;
/* Allocating memory for output buffer if necessary */
if ((this->compr_buf_size < orig_slen) && (this->compr_buf)) {
spin_unlock(&jffs2_compressor_list_lock);
kfree(this->compr_buf);
spin_lock(&jffs2_compressor_list_lock);
this->compr_buf_size=0;
this->compr_buf=NULL;
}
if (!this->compr_buf) {
spin_unlock(&jffs2_compressor_list_lock);
tmp_buf = kmalloc(orig_slen, GFP_KERNEL);
spin_lock(&jffs2_compressor_list_lock);
if (!tmp_buf) {
pr_warn("No memory for compressor allocation. (%d bytes)\n",
orig_slen);
continue;
}
else {
this->compr_buf = tmp_buf;
this->compr_buf_size = orig_slen;
}
}
this->usecount++;
spin_unlock(&jffs2_compressor_list_lock);
*datalen = orig_slen;
*cdatalen = orig_dlen;
compr_ret = this->compress(data_in, this->compr_buf, datalen, cdatalen);
spin_lock(&jffs2_compressor_list_lock);
this->usecount--;
if (!compr_ret) {
if (((!best_dlen) || jffs2_is_best_compression(this, best, *cdatalen, best_dlen))
&& (*cdatalen < *datalen)) {
best_dlen = *cdatalen;
best_slen = *datalen;
best = this;
}
}
}
if (best_dlen) {
*cdatalen = best_dlen;
*datalen = best_slen;
output_buf = best->compr_buf;
best->compr_buf = NULL;
best->compr_buf_size = 0;
best->stat_compr_blocks++;
best->stat_compr_orig_size += best_slen;
best->stat_compr_new_size += best_dlen;
ret = best->compr;
*cpage_out = output_buf;
}
spin_unlock(&jffs2_compressor_list_lock);
break;
case JFFS2_COMPR_MODE_FORCELZO:
ret = jffs2_selected_compress(JFFS2_COMPR_LZO, data_in,
cpage_out, datalen, cdatalen);
break;
case JFFS2_COMPR_MODE_FORCEZLIB:
ret = jffs2_selected_compress(JFFS2_COMPR_ZLIB, data_in,
cpage_out, datalen, cdatalen);
break;
default:
pr_err("unknown compression mode\n");
}
if (ret == JFFS2_COMPR_NONE) {
*cpage_out = data_in;
*datalen = *cdatalen;
none_stat_compr_blocks++;
none_stat_compr_size += *datalen;
}
return ret;
}
int jffs2_decompress(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
uint16_t comprtype, unsigned char *cdata_in,
unsigned char *data_out, uint32_t cdatalen, uint32_t datalen)
{
struct jffs2_compressor *this;
int ret;
/* Older code had a bug where it would write non-zero 'usercompr'
fields. Deal with it. */
if ((comprtype & 0xff) <= JFFS2_COMPR_ZLIB)
comprtype &= 0xff;
switch (comprtype & 0xff) {
case JFFS2_COMPR_NONE:
/* This should be special-cased elsewhere, but we might as well deal with it */
memcpy(data_out, cdata_in, datalen);
none_stat_decompr_blocks++;
break;
case JFFS2_COMPR_ZERO:
memset(data_out, 0, datalen);
break;
default:
spin_lock(&jffs2_compressor_list_lock);
list_for_each_entry(this, &jffs2_compressor_list, list) {
if (comprtype == this->compr) {
this->usecount++;
spin_unlock(&jffs2_compressor_list_lock);
ret = this->decompress(cdata_in, data_out, cdatalen, datalen);
spin_lock(&jffs2_compressor_list_lock);
if (ret) {
pr_warn("Decompressor \"%s\" returned %d\n",
this->name, ret);
}
else {
this->stat_decompr_blocks++;
}
this->usecount--;
spin_unlock(&jffs2_compressor_list_lock);
return ret;
}
}
pr_warn("compression type 0x%02x not available\n", comprtype);
spin_unlock(&jffs2_compressor_list_lock);
return -EIO;
}
return 0;
}
int jffs2_register_compressor(struct jffs2_compressor *comp)
{
struct jffs2_compressor *this;
if (!comp->name) {
pr_warn("NULL compressor name at registering JFFS2 compressor. Failed.\n");
return -1;
}
comp->compr_buf_size=0;
comp->compr_buf=NULL;
comp->usecount=0;
comp->stat_compr_orig_size=0;
comp->stat_compr_new_size=0;
comp->stat_compr_blocks=0;
comp->stat_decompr_blocks=0;
jffs2_dbg(1, "Registering JFFS2 compressor \"%s\"\n", comp->name);
spin_lock(&jffs2_compressor_list_lock);
list_for_each_entry(this, &jffs2_compressor_list, list) {
if (this->priority < comp->priority) {
list_add(&comp->list, this->list.prev);
goto out;
}
}
list_add_tail(&comp->list, &jffs2_compressor_list);
out:
D2(list_for_each_entry(this, &jffs2_compressor_list, list) {
printk(KERN_DEBUG "Compressor \"%s\", prio %d\n", this->name, this->priority);
})
spin_unlock(&jffs2_compressor_list_lock);
return 0;
}
int jffs2_unregister_compressor(struct jffs2_compressor *comp)
{
D2(struct jffs2_compressor *this);
jffs2_dbg(1, "Unregistering JFFS2 compressor \"%s\"\n", comp->name);
spin_lock(&jffs2_compressor_list_lock);
if (comp->usecount) {
spin_unlock(&jffs2_compressor_list_lock);
pr_warn("Compressor module is in use. Unregister failed.\n");
return -1;
}
list_del(&comp->list);
D2(list_for_each_entry(this, &jffs2_compressor_list, list) {
printk(KERN_DEBUG "Compressor \"%s\", prio %d\n", this->name, this->priority);
})
spin_unlock(&jffs2_compressor_list_lock);
return 0;
}
void jffs2_free_comprbuf(unsigned char *comprbuf, unsigned char *orig)
{
if (orig != comprbuf)
kfree(comprbuf);
}
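/*
 * Editorial sketch of the expected caller pairing (not part of this file):
 *
 *	comprtype = jffs2_compress(c, f, buf, &comprbuf, &datalen, &cdatalen);
 *	... write the node, storing comprtype in its low byte ...
 *	jffs2_free_comprbuf(comprbuf, buf);
 *
 * When no compressor succeeds, jffs2_compress() returns JFFS2_COMPR_NONE and
 * points comprbuf back at buf, so jffs2_free_comprbuf() then frees nothing.
 */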
int __init jffs2_compressors_init(void)
{
/* Registering compressors */
#ifdef CONFIG_JFFS2_ZLIB
jffs2_zlib_init();
#endif
#ifdef CONFIG_JFFS2_RTIME
jffs2_rtime_init();
#endif
#ifdef CONFIG_JFFS2_RUBIN
jffs2_rubinmips_init();
jffs2_dynrubin_init();
#endif
#ifdef CONFIG_JFFS2_LZO
jffs2_lzo_init();
#endif
/* Setting default compression mode */
#ifdef CONFIG_JFFS2_CMODE_NONE
jffs2_compression_mode = JFFS2_COMPR_MODE_NONE;
jffs2_dbg(1, "default compression mode: none\n");
#else
#ifdef CONFIG_JFFS2_CMODE_SIZE
jffs2_compression_mode = JFFS2_COMPR_MODE_SIZE;
jffs2_dbg(1, "default compression mode: size\n");
#else
#ifdef CONFIG_JFFS2_CMODE_FAVOURLZO
jffs2_compression_mode = JFFS2_COMPR_MODE_FAVOURLZO;
jffs2_dbg(1, "default compression mode: favourlzo\n");
#else
jffs2_dbg(1, "default compression mode: priority\n");
#endif
#endif
#endif
return 0;
}
int jffs2_compressors_exit(void)
{
/* Unregistering compressors */
#ifdef CONFIG_JFFS2_LZO
jffs2_lzo_exit();
#endif
#ifdef CONFIG_JFFS2_RUBIN
jffs2_dynrubin_exit();
jffs2_rubinmips_exit();
#endif
#ifdef CONFIG_JFFS2_RTIME
jffs2_rtime_exit();
#endif
#ifdef CONFIG_JFFS2_ZLIB
jffs2_zlib_exit();
#endif
return 0;
}

105
fs/jffs2/compr.h Normal file

@@ -0,0 +1,105 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2004 Ferenc Havasi <havasi@inf.u-szeged.hu>,
* University of Szeged, Hungary
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef __JFFS2_COMPR_H__
#define __JFFS2_COMPR_H__
#include <linux/kernel.h>
#include <linux/vmalloc.h>
#include <linux/list.h>
#include <linux/types.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/jffs2.h>
#include "jffs2_fs_i.h"
#include "jffs2_fs_sb.h"
#include "nodelist.h"
#define JFFS2_RUBINMIPS_PRIORITY 10
#define JFFS2_DYNRUBIN_PRIORITY 20
#define JFFS2_LZARI_PRIORITY 30
#define JFFS2_RTIME_PRIORITY 50
#define JFFS2_ZLIB_PRIORITY 60
#define JFFS2_LZO_PRIORITY 80
#define JFFS2_RUBINMIPS_DISABLED /* RUBINs will be used only */
#define JFFS2_DYNRUBIN_DISABLED /* for decompression */
#define JFFS2_COMPR_MODE_NONE 0
#define JFFS2_COMPR_MODE_PRIORITY 1
#define JFFS2_COMPR_MODE_SIZE 2
#define JFFS2_COMPR_MODE_FAVOURLZO 3
#define JFFS2_COMPR_MODE_FORCELZO 4
#define JFFS2_COMPR_MODE_FORCEZLIB 5
#define FAVOUR_LZO_PERCENT 80
struct jffs2_compressor {
struct list_head list;
int priority; /* used by priority compr. mode */
char *name;
char compr; /* JFFS2_COMPR_XXX */
int (*compress)(unsigned char *data_in, unsigned char *cpage_out,
uint32_t *srclen, uint32_t *destlen);
int (*decompress)(unsigned char *cdata_in, unsigned char *data_out,
uint32_t cdatalen, uint32_t datalen);
int usecount;
int disabled; /* if set the compressor won't compress */
unsigned char *compr_buf; /* used by size compr. mode */
uint32_t compr_buf_size; /* used by size compr. mode */
uint32_t stat_compr_orig_size;
uint32_t stat_compr_new_size;
uint32_t stat_compr_blocks;
uint32_t stat_decompr_blocks;
};
int jffs2_register_compressor(struct jffs2_compressor *comp);
int jffs2_unregister_compressor(struct jffs2_compressor *comp);
int jffs2_compressors_init(void);
int jffs2_compressors_exit(void);
uint16_t jffs2_compress(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
unsigned char *data_in, unsigned char **cpage_out,
uint32_t *datalen, uint32_t *cdatalen);
int jffs2_decompress(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
uint16_t comprtype, unsigned char *cdata_in,
unsigned char *data_out, uint32_t cdatalen, uint32_t datalen);
void jffs2_free_comprbuf(unsigned char *comprbuf, unsigned char *orig);
/* Compressor modules */
/* These functions will be called by jffs2_compressors_init/exit */
#ifdef CONFIG_JFFS2_RUBIN
int jffs2_rubinmips_init(void);
void jffs2_rubinmips_exit(void);
int jffs2_dynrubin_init(void);
void jffs2_dynrubin_exit(void);
#endif
#ifdef CONFIG_JFFS2_RTIME
int jffs2_rtime_init(void);
void jffs2_rtime_exit(void);
#endif
#ifdef CONFIG_JFFS2_ZLIB
int jffs2_zlib_init(void);
void jffs2_zlib_exit(void);
#endif
#ifdef CONFIG_JFFS2_LZO
int jffs2_lzo_init(void);
void jffs2_lzo_exit(void);
#endif
#endif /* __JFFS2_COMPR_H__ */

110
fs/jffs2/compr_lzo.c Normal file

@@ -0,0 +1,110 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2007 Nokia Corporation. All rights reserved.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by Richard Purdie <rpurdie@openedhand.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
#include <linux/init.h>
#include <linux/lzo.h>
#include "compr.h"
static void *lzo_mem;
static void *lzo_compress_buf;
static DEFINE_MUTEX(deflate_mutex); /* for lzo_mem and lzo_compress_buf */
static void free_workspace(void)
{
vfree(lzo_mem);
vfree(lzo_compress_buf);
}
static int __init alloc_workspace(void)
{
lzo_mem = vmalloc(LZO1X_MEM_COMPRESS);
lzo_compress_buf = vmalloc(lzo1x_worst_compress(PAGE_SIZE));
if (!lzo_mem || !lzo_compress_buf) {
free_workspace();
return -ENOMEM;
}
return 0;
}
static int jffs2_lzo_compress(unsigned char *data_in, unsigned char *cpage_out,
uint32_t *sourcelen, uint32_t *dstlen)
{
size_t compress_size;
int ret;
mutex_lock(&deflate_mutex);
ret = lzo1x_1_compress(data_in, *sourcelen, lzo_compress_buf, &compress_size, lzo_mem);
if (ret != LZO_E_OK)
goto fail;
if (compress_size > *dstlen)
goto fail;
memcpy(cpage_out, lzo_compress_buf, compress_size);
mutex_unlock(&deflate_mutex);
*dstlen = compress_size;
return 0;
fail:
mutex_unlock(&deflate_mutex);
return -1;
}
static int jffs2_lzo_decompress(unsigned char *data_in, unsigned char *cpage_out,
uint32_t srclen, uint32_t destlen)
{
size_t dl = destlen;
int ret;
ret = lzo1x_decompress_safe(data_in, srclen, cpage_out, &dl);
if (ret != LZO_E_OK || dl != destlen)
return -1;
return 0;
}
static struct jffs2_compressor jffs2_lzo_comp = {
.priority = JFFS2_LZO_PRIORITY,
.name = "lzo",
.compr = JFFS2_COMPR_LZO,
.compress = &jffs2_lzo_compress,
.decompress = &jffs2_lzo_decompress,
.disabled = 0,
};
int __init jffs2_lzo_init(void)
{
int ret;
ret = alloc_workspace();
if (ret < 0)
return ret;
ret = jffs2_register_compressor(&jffs2_lzo_comp);
if (ret)
free_workspace();
return ret;
}
void jffs2_lzo_exit(void)
{
jffs2_unregister_compressor(&jffs2_lzo_comp);
free_workspace();
}

130
fs/jffs2/compr_rtime.c Normal file

@@ -0,0 +1,130 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by Arjan van de Ven <arjanv@redhat.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*
*
* Very simple lz77-ish encoder.
*
* Theory of operation: Both encoder and decoder have a list of "last
* occurrences" for every possible source-value; after sending the
* first source-byte, the second byte indicates the "run" length of
* matches
*
* The algorithm is intended to only send "whole bytes", no bit-messing.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/jffs2.h>
#include "compr.h"
/* _compress returns the compressed size, -1 if bigger */
static int jffs2_rtime_compress(unsigned char *data_in,
unsigned char *cpage_out,
uint32_t *sourcelen, uint32_t *dstlen)
{
unsigned short positions[256];
int outpos = 0;
int pos=0;
memset(positions,0,sizeof(positions));
while (pos < (*sourcelen) && outpos <= (*dstlen)-2) {
int backpos, runlen=0;
unsigned char value;
value = data_in[pos];
cpage_out[outpos++] = data_in[pos++];
backpos = positions[value];
positions[value]=pos;
while ((backpos < pos) && (pos < (*sourcelen)) &&
(data_in[pos]==data_in[backpos++]) && (runlen<255)) {
pos++;
runlen++;
}
cpage_out[outpos++] = runlen;
}
if (outpos >= pos) {
/* We failed */
return -1;
}
/* Tell the caller how much we managed to compress, and how much space it took */
*sourcelen = pos;
*dstlen = outpos;
return 0;
}
static int jffs2_rtime_decompress(unsigned char *data_in,
unsigned char *cpage_out,
uint32_t srclen, uint32_t destlen)
{
unsigned short positions[256];
int outpos = 0;
int pos=0;
memset(positions,0,sizeof(positions));
while (outpos<destlen) {
unsigned char value;
int backoffs;
int repeat;
value = data_in[pos++];
cpage_out[outpos++] = value; /* first the verbatim copied byte */
repeat = data_in[pos++];
backoffs = positions[value];
positions[value]=outpos;
if (repeat) {
if (backoffs + repeat >= outpos) {
while(repeat) {
cpage_out[outpos++] = cpage_out[backoffs++];
repeat--;
}
} else {
memcpy(&cpage_out[outpos],&cpage_out[backoffs],repeat);
outpos+=repeat;
}
}
}
return 0;
}
static struct jffs2_compressor jffs2_rtime_comp = {
.priority = JFFS2_RTIME_PRIORITY,
.name = "rtime",
.compr = JFFS2_COMPR_RTIME,
.compress = &jffs2_rtime_compress,
.decompress = &jffs2_rtime_decompress,
#ifdef JFFS2_RTIME_DISABLED
.disabled = 1,
#else
.disabled = 0,
#endif
};
int jffs2_rtime_init(void)
{
return jffs2_register_compressor(&jffs2_rtime_comp);
}
void jffs2_rtime_exit(void)
{
jffs2_unregister_compressor(&jffs2_rtime_comp);
}
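Because the rtime format is just alternating (literal byte, run length) pairs, the code above can be traced by hand. A four-byte run of 'a' compresses to two bytes, and decompression replays the run from the output already produced (a sketch only; both functions are static to this file):

unsigned char in[4] = { 'a', 'a', 'a', 'a' };
unsigned char comp[8], out[4];
uint32_t srclen = 4, dstlen = sizeof(comp);

jffs2_rtime_compress(in, comp, &srclen, &dstlen);
/* now srclen == 4, dstlen == 2, comp[0] == 'a', comp[1] == 3 */

jffs2_rtime_decompress(comp, out, 2, 4);
/* out[] is "aaaa" again: the literal 'a' is copied verbatim and the
 * run of three is replayed byte by byte from the decoded output */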

457
fs/jffs2/compr_rubin.c Normal file
View file

@ -0,0 +1,457 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by Arjan van de Ven <arjanv@redhat.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/string.h>
#include <linux/types.h>
#include <linux/jffs2.h>
#include <linux/errno.h>
#include "compr.h"
#define RUBIN_REG_SIZE 16
#define UPPER_BIT_RUBIN (((long) 1)<<(RUBIN_REG_SIZE-1))
#define LOWER_BITS_RUBIN ((((long) 1)<<(RUBIN_REG_SIZE-1))-1)
#define BIT_DIVIDER_MIPS 1043
static int bits_mips[8] = { 277, 249, 290, 267, 229, 341, 212, 241};
struct pushpull {
unsigned char *buf;
unsigned int buflen;
unsigned int ofs;
unsigned int reserve;
};
struct rubin_state {
unsigned long p;
unsigned long q;
unsigned long rec_q;
long bit_number;
struct pushpull pp;
int bit_divider;
int bits[8];
};
static inline void init_pushpull(struct pushpull *pp, char *buf,
unsigned buflen, unsigned ofs,
unsigned reserve)
{
pp->buf = buf;
pp->buflen = buflen;
pp->ofs = ofs;
pp->reserve = reserve;
}
static inline int pushbit(struct pushpull *pp, int bit, int use_reserved)
{
if (pp->ofs >= pp->buflen - (use_reserved?0:pp->reserve))
return -ENOSPC;
if (bit)
pp->buf[pp->ofs >> 3] |= (1<<(7-(pp->ofs & 7)));
else
pp->buf[pp->ofs >> 3] &= ~(1<<(7-(pp->ofs & 7)));
pp->ofs++;
return 0;
}
static inline int pushedbits(struct pushpull *pp)
{
return pp->ofs;
}
static inline int pullbit(struct pushpull *pp)
{
int bit;
bit = (pp->buf[pp->ofs >> 3] >> (7-(pp->ofs & 7))) & 1;
pp->ofs++;
return bit;
}
static inline int pulledbits(struct pushpull *pp)
{
return pp->ofs;
}
static void init_rubin(struct rubin_state *rs, int div, int *bits)
{
int c;
rs->q = 0;
rs->p = (long) (2 * UPPER_BIT_RUBIN);
rs->bit_number = (long) 0;
rs->bit_divider = div;
for (c=0; c<8; c++)
rs->bits[c] = bits[c];
}
static int encode(struct rubin_state *rs, long A, long B, int symbol)
{
long i0, i1;
int ret;
while ((rs->q >= UPPER_BIT_RUBIN) ||
((rs->p + rs->q) <= UPPER_BIT_RUBIN)) {
rs->bit_number++;
ret = pushbit(&rs->pp, (rs->q & UPPER_BIT_RUBIN) ? 1 : 0, 0);
if (ret)
return ret;
rs->q &= LOWER_BITS_RUBIN;
rs->q <<= 1;
rs->p <<= 1;
}
i0 = A * rs->p / (A + B);
if (i0 <= 0)
i0 = 1;
if (i0 >= rs->p)
i0 = rs->p - 1;
i1 = rs->p - i0;
if (symbol == 0)
rs->p = i0;
else {
rs->p = i1;
rs->q += i0;
}
return 0;
}
static void end_rubin(struct rubin_state *rs)
{
int i;
for (i = 0; i < RUBIN_REG_SIZE; i++) {
pushbit(&rs->pp, (UPPER_BIT_RUBIN & rs->q) ? 1 : 0, 1);
rs->q &= LOWER_BITS_RUBIN;
rs->q <<= 1;
}
}
static void init_decode(struct rubin_state *rs, int div, int *bits)
{
init_rubin(rs, div, bits);
/* except lower */
rs->rec_q = 0;
for (rs->bit_number = 0; rs->bit_number++ < RUBIN_REG_SIZE;
rs->rec_q = rs->rec_q * 2 + (long) (pullbit(&rs->pp)))
;
}
static void __do_decode(struct rubin_state *rs, unsigned long p,
unsigned long q)
{
register unsigned long lower_bits_rubin = LOWER_BITS_RUBIN;
unsigned long rec_q;
int c, bits = 0;
/*
* First, work out how many bits we need from the input stream.
* Note that we have already done the initial check on this
* loop prior to calling this function.
*/
do {
bits++;
q &= lower_bits_rubin;
q <<= 1;
p <<= 1;
} while ((q >= UPPER_BIT_RUBIN) || ((p + q) <= UPPER_BIT_RUBIN));
rs->p = p;
rs->q = q;
rs->bit_number += bits;
/*
* Now get the bits. We really want this to be "get n bits".
*/
rec_q = rs->rec_q;
do {
c = pullbit(&rs->pp);
rec_q &= lower_bits_rubin;
rec_q <<= 1;
rec_q += c;
} while (--bits);
rs->rec_q = rec_q;
}
static int decode(struct rubin_state *rs, long A, long B)
{
unsigned long p = rs->p, q = rs->q;
long i0, threshold;
int symbol;
if (q >= UPPER_BIT_RUBIN || ((p + q) <= UPPER_BIT_RUBIN))
__do_decode(rs, p, q);
i0 = A * rs->p / (A + B);
if (i0 <= 0)
i0 = 1;
if (i0 >= rs->p)
i0 = rs->p - 1;
threshold = rs->q + i0;
symbol = rs->rec_q >= threshold;
if (rs->rec_q >= threshold) {
rs->q += i0;
i0 = rs->p - i0;
}
rs->p = i0;
return symbol;
}
static int out_byte(struct rubin_state *rs, unsigned char byte)
{
int i, ret;
struct rubin_state rs_copy;
rs_copy = *rs;
for (i=0; i<8; i++) {
ret = encode(rs, rs->bit_divider-rs->bits[i],
rs->bits[i], byte & 1);
if (ret) {
/* Failed. Restore old state */
*rs = rs_copy;
return ret;
}
byte >>= 1 ;
}
return 0;
}
static int in_byte(struct rubin_state *rs)
{
int i, result = 0, bit_divider = rs->bit_divider;
for (i = 0; i < 8; i++)
result |= decode(rs, bit_divider - rs->bits[i],
rs->bits[i]) << i;
return result;
}
static int rubin_do_compress(int bit_divider, int *bits, unsigned char *data_in,
unsigned char *cpage_out, uint32_t *sourcelen,
uint32_t *dstlen)
{
int outpos = 0;
int pos=0;
struct rubin_state rs;
init_pushpull(&rs.pp, cpage_out, *dstlen * 8, 0, 32);
init_rubin(&rs, bit_divider, bits);
while (pos < (*sourcelen) && !out_byte(&rs, data_in[pos]))
pos++;
end_rubin(&rs);
if (outpos > pos) {
/* We failed */
return -1;
}
/* Tell the caller how much we managed to compress,
* and how much space it took */
outpos = (pushedbits(&rs.pp)+7)/8;
if (outpos >= pos)
return -1; /* We didn't actually compress */
*sourcelen = pos;
*dstlen = outpos;
return 0;
}
#if 0
/* _compress returns the compressed size, -1 if bigger */
int jffs2_rubinmips_compress(unsigned char *data_in, unsigned char *cpage_out,
uint32_t *sourcelen, uint32_t *dstlen)
{
return rubin_do_compress(BIT_DIVIDER_MIPS, bits_mips, data_in,
cpage_out, sourcelen, dstlen);
}
#endif
static int jffs2_dynrubin_compress(unsigned char *data_in,
unsigned char *cpage_out,
uint32_t *sourcelen, uint32_t *dstlen)
{
int bits[8];
unsigned char histo[256];
int i;
int ret;
uint32_t mysrclen, mydstlen;
mysrclen = *sourcelen;
mydstlen = *dstlen - 8;
if (*dstlen <= 12)
return -1;
memset(histo, 0, 256);
for (i=0; i<mysrclen; i++)
histo[data_in[i]]++;
memset(bits, 0, sizeof(int)*8);
for (i=0; i<256; i++) {
if (i&128)
bits[7] += histo[i];
if (i&64)
bits[6] += histo[i];
if (i&32)
bits[5] += histo[i];
if (i&16)
bits[4] += histo[i];
if (i&8)
bits[3] += histo[i];
if (i&4)
bits[2] += histo[i];
if (i&2)
bits[1] += histo[i];
if (i&1)
bits[0] += histo[i];
}
for (i=0; i<8; i++) {
bits[i] = (bits[i] * 256) / mysrclen;
if (!bits[i]) bits[i] = 1;
if (bits[i] > 255) bits[i] = 255;
cpage_out[i] = bits[i];
}
ret = rubin_do_compress(256, bits, data_in, cpage_out+8, &mysrclen,
&mydstlen);
if (ret)
return ret;
/* Add back the 8 bytes we took for the probabilities */
mydstlen += 8;
if (mysrclen <= mydstlen) {
/* We didn't gain anything; the output is no smaller than the input */
return -1;
}
*sourcelen = mysrclen;
*dstlen = mydstlen;
return 0;
}
static void rubin_do_decompress(int bit_divider, int *bits,
unsigned char *cdata_in,
unsigned char *page_out, uint32_t srclen,
uint32_t destlen)
{
int outpos = 0;
struct rubin_state rs;
init_pushpull(&rs.pp, cdata_in, srclen, 0, 0);
init_decode(&rs, bit_divider, bits);
while (outpos < destlen)
page_out[outpos++] = in_byte(&rs);
}
static int jffs2_rubinmips_decompress(unsigned char *data_in,
unsigned char *cpage_out,
uint32_t sourcelen, uint32_t dstlen)
{
rubin_do_decompress(BIT_DIVIDER_MIPS, bits_mips, data_in,
cpage_out, sourcelen, dstlen);
return 0;
}
static int jffs2_dynrubin_decompress(unsigned char *data_in,
unsigned char *cpage_out,
uint32_t sourcelen, uint32_t dstlen)
{
int bits[8];
int c;
for (c=0; c<8; c++)
bits[c] = data_in[c];
rubin_do_decompress(256, bits, data_in+8, cpage_out, sourcelen-8,
dstlen);
return 0;
}
static struct jffs2_compressor jffs2_rubinmips_comp = {
.priority = JFFS2_RUBINMIPS_PRIORITY,
.name = "rubinmips",
.compr = JFFS2_COMPR_DYNRUBIN,
.compress = NULL, /*&jffs2_rubinmips_compress,*/
.decompress = &jffs2_rubinmips_decompress,
#ifdef JFFS2_RUBINMIPS_DISABLED
.disabled = 1,
#else
.disabled = 0,
#endif
};
int jffs2_rubinmips_init(void)
{
return jffs2_register_compressor(&jffs2_rubinmips_comp);
}
void jffs2_rubinmips_exit(void)
{
jffs2_unregister_compressor(&jffs2_rubinmips_comp);
}
static struct jffs2_compressor jffs2_dynrubin_comp = {
.priority = JFFS2_DYNRUBIN_PRIORITY,
.name = "dynrubin",
.compr = JFFS2_COMPR_RUBINMIPS,
.compress = jffs2_dynrubin_compress,
.decompress = &jffs2_dynrubin_decompress,
#ifdef JFFS2_DYNRUBIN_DISABLED
.disabled = 1,
#else
.disabled = 0,
#endif
};
int jffs2_dynrubin_init(void)
{
return jffs2_register_compressor(&jffs2_dynrubin_comp);
}
void jffs2_dynrubin_exit(void)
{
jffs2_unregister_compressor(&jffs2_dynrubin_comp);
}
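The "dynrubin" flavour differs from the fixed-table MIPS flavour only in where its bit model comes from: the first eight output bytes record, per bit position, how many input bytes had that bit set, rescaled into the 1..255 range, and jffs2_dynrubin_decompress() reads the same eight bytes back to rebuild the model before decoding. A quick worked example, assuming an input consisting entirely of 0x0f bytes:

histo[0x0f] == mysrclen, every other histogram entry is 0
bits[0..3]  = (mysrclen * 256) / mysrclen = 256  ->  clamped to 255
bits[4..7]  = 0                                  ->  clamped to 1
header written to cpage_out[0..7]:  ff ff ff ff 01 01 01 01

The payload that follows is then coded with bit_divider fixed at 256, so each header byte is effectively the weight, out of 256, given to that bit being set.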

222
fs/jffs2/compr_zlib.c Normal file
View file

@ -0,0 +1,222 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#if !defined(__KERNEL__) && !defined(__ECOS)
#error "The userspace support got too messy and was removed. Update your mkfs.jffs2"
#endif
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/zlib.h>
#include <linux/zutil.h>
#include "nodelist.h"
#include "compr.h"
/* Plan: call deflate() with avail_in == *sourcelen,
avail_out = *dstlen - 12 and flush == Z_FINISH.
If it doesn't manage to finish, call it again with
avail_in == 0 and avail_out set to the remaining 12
bytes for it to clean up.
Q: Is 12 bytes sufficient?
*/
#define STREAM_END_SPACE 12
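/*
 * Illustration of the plan above with hypothetical numbers (not extra
 * driver code): for *dstlen == 4096 the loop below keeps calling
 * zlib_deflate(Z_PARTIAL_FLUSH) with avail_out capped so that 12 bytes of
 * the output stay in reserve; once the input is exhausted those 12 bytes
 * are handed back (avail_out += STREAM_END_SPACE) and one final
 * zlib_deflate(Z_FINISH) writes the stream trailer into them.
 */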
static DEFINE_MUTEX(deflate_mutex);
static DEFINE_MUTEX(inflate_mutex);
static z_stream inf_strm, def_strm;
#ifdef __KERNEL__ /* Linux-only */
#include <linux/vmalloc.h>
#include <linux/init.h>
#include <linux/mutex.h>
static int __init alloc_workspaces(void)
{
def_strm.workspace = vmalloc(zlib_deflate_workspacesize(MAX_WBITS,
MAX_MEM_LEVEL));
if (!def_strm.workspace)
return -ENOMEM;
jffs2_dbg(1, "Allocated %d bytes for deflate workspace\n",
zlib_deflate_workspacesize(MAX_WBITS, MAX_MEM_LEVEL));
inf_strm.workspace = vmalloc(zlib_inflate_workspacesize());
if (!inf_strm.workspace) {
vfree(def_strm.workspace);
return -ENOMEM;
}
jffs2_dbg(1, "Allocated %d bytes for inflate workspace\n",
zlib_inflate_workspacesize());
return 0;
}
static void free_workspaces(void)
{
vfree(def_strm.workspace);
vfree(inf_strm.workspace);
}
#else
#define alloc_workspaces() (0)
#define free_workspaces() do { } while(0)
#endif /* __KERNEL__ */
static int jffs2_zlib_compress(unsigned char *data_in,
unsigned char *cpage_out,
uint32_t *sourcelen, uint32_t *dstlen)
{
int ret;
if (*dstlen <= STREAM_END_SPACE)
return -1;
mutex_lock(&deflate_mutex);
if (Z_OK != zlib_deflateInit(&def_strm, 3)) {
pr_warn("deflateInit failed\n");
mutex_unlock(&deflate_mutex);
return -1;
}
def_strm.next_in = data_in;
def_strm.total_in = 0;
def_strm.next_out = cpage_out;
def_strm.total_out = 0;
while (def_strm.total_out < *dstlen - STREAM_END_SPACE && def_strm.total_in < *sourcelen) {
def_strm.avail_out = *dstlen - (def_strm.total_out + STREAM_END_SPACE);
def_strm.avail_in = min_t(unsigned long,
(*sourcelen-def_strm.total_in), def_strm.avail_out);
jffs2_dbg(1, "calling deflate with avail_in %ld, avail_out %ld\n",
def_strm.avail_in, def_strm.avail_out);
ret = zlib_deflate(&def_strm, Z_PARTIAL_FLUSH);
jffs2_dbg(1, "deflate returned with avail_in %ld, avail_out %ld, total_in %ld, total_out %ld\n",
def_strm.avail_in, def_strm.avail_out,
def_strm.total_in, def_strm.total_out);
if (ret != Z_OK) {
jffs2_dbg(1, "deflate in loop returned %d\n", ret);
zlib_deflateEnd(&def_strm);
mutex_unlock(&deflate_mutex);
return -1;
}
}
def_strm.avail_out += STREAM_END_SPACE;
def_strm.avail_in = 0;
ret = zlib_deflate(&def_strm, Z_FINISH);
zlib_deflateEnd(&def_strm);
if (ret != Z_STREAM_END) {
jffs2_dbg(1, "final deflate returned %d\n", ret);
ret = -1;
goto out;
}
if (def_strm.total_out >= def_strm.total_in) {
jffs2_dbg(1, "zlib compressed %ld bytes into %ld; failing\n",
def_strm.total_in, def_strm.total_out);
ret = -1;
goto out;
}
jffs2_dbg(1, "zlib compressed %ld bytes into %ld\n",
def_strm.total_in, def_strm.total_out);
*dstlen = def_strm.total_out;
*sourcelen = def_strm.total_in;
ret = 0;
out:
mutex_unlock(&deflate_mutex);
return ret;
}
static int jffs2_zlib_decompress(unsigned char *data_in,
unsigned char *cpage_out,
uint32_t srclen, uint32_t destlen)
{
int ret;
int wbits = MAX_WBITS;
mutex_lock(&inflate_mutex);
inf_strm.next_in = data_in;
inf_strm.avail_in = srclen;
inf_strm.total_in = 0;
inf_strm.next_out = cpage_out;
inf_strm.avail_out = destlen;
inf_strm.total_out = 0;
/* If it's deflate, and it's got no preset dictionary, then
we can tell zlib to skip the adler32 check. */
if (srclen > 2 && !(data_in[1] & PRESET_DICT) &&
((data_in[0] & 0x0f) == Z_DEFLATED) &&
!(((data_in[0]<<8) + data_in[1]) % 31)) {
jffs2_dbg(2, "inflate skipping adler32\n");
wbits = -((data_in[0] >> 4) + 8);
inf_strm.next_in += 2;
inf_strm.avail_in -= 2;
} else {
/* Let this remain D1 for now -- it should never happen */
jffs2_dbg(1, "inflate not skipping adler32\n");
}
if (Z_OK != zlib_inflateInit2(&inf_strm, wbits)) {
pr_warn("inflateInit failed\n");
mutex_unlock(&inflate_mutex);
return 1;
}
while((ret = zlib_inflate(&inf_strm, Z_FINISH)) == Z_OK)
;
if (ret != Z_STREAM_END) {
pr_notice("inflate returned %d\n", ret);
}
zlib_inflateEnd(&inf_strm);
mutex_unlock(&inflate_mutex);
return 0;
}
static struct jffs2_compressor jffs2_zlib_comp = {
.priority = JFFS2_ZLIB_PRIORITY,
.name = "zlib",
.compr = JFFS2_COMPR_ZLIB,
.compress = &jffs2_zlib_compress,
.decompress = &jffs2_zlib_decompress,
#ifdef JFFS2_ZLIB_DISABLED
.disabled = 1,
#else
.disabled = 0,
#endif
};
int __init jffs2_zlib_init(void)
{
int ret;
ret = alloc_workspaces();
if (ret)
return ret;
ret = jffs2_register_compressor(&jffs2_zlib_comp);
if (ret)
free_workspaces();
return ret;
}
void jffs2_zlib_exit(void)
{
jffs2_unregister_compressor(&jffs2_zlib_comp);
free_workspaces();
}
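The header test in jffs2_zlib_decompress() is the standard zlib stream-header check done by hand. Worked through for the most common header, 0x78 0x9c (arithmetic only, not additional code):

data_in[0] = 0x78:  low nibble 8 == Z_DEFLATED, window size 2^(7+8) = 32 KiB
data_in[1] = 0x9c:  the PRESET_DICT bit (0x20) is clear
(0x78 << 8) + 0x9c = 30876, and 30876 % 31 == 0   ->  well-formed zlib header
wbits = -((0x78 >> 4) + 8) = -15                   ->  raw inflate

A negative windowBits value tells zlib_inflateInit2() to expect raw deflate data, which is what allows the adler32 trailer check to be skipped.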

866
fs/jffs2/debug.c Normal file
View file

@ -0,0 +1,866 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pagemap.h>
#include <linux/crc32.h>
#include <linux/jffs2.h>
#include <linux/mtd/mtd.h>
#include <linux/slab.h>
#include "nodelist.h"
#include "debug.h"
#ifdef JFFS2_DBG_SANITY_CHECKS
void
__jffs2_dbg_acct_sanity_check_nolock(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb)
{
if (unlikely(jeb && jeb->used_size + jeb->dirty_size +
jeb->free_size + jeb->wasted_size +
jeb->unchecked_size != c->sector_size)) {
JFFS2_ERROR("eeep, space accounting for block at 0x%08x is screwed.\n", jeb->offset);
JFFS2_ERROR("free %#08x + dirty %#08x + used %#08x + wasted %#08x + unchecked %#08x != total %#08x.\n",
jeb->free_size, jeb->dirty_size, jeb->used_size,
jeb->wasted_size, jeb->unchecked_size, c->sector_size);
BUG();
}
if (unlikely(c->used_size + c->dirty_size + c->free_size + c->erasing_size + c->bad_size
+ c->wasted_size + c->unchecked_size != c->flash_size)) {
JFFS2_ERROR("eeep, space accounting superblock info is screwed.\n");
JFFS2_ERROR("free %#08x + dirty %#08x + used %#08x + erasing %#08x + bad %#08x + wasted %#08x + unchecked %#08x != total %#08x.\n",
c->free_size, c->dirty_size, c->used_size, c->erasing_size, c->bad_size,
c->wasted_size, c->unchecked_size, c->flash_size);
BUG();
}
}
void
__jffs2_dbg_acct_sanity_check(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb)
{
spin_lock(&c->erase_completion_lock);
jffs2_dbg_acct_sanity_check_nolock(c, jeb);
spin_unlock(&c->erase_completion_lock);
}
#endif /* JFFS2_DBG_SANITY_CHECKS */
#ifdef JFFS2_DBG_PARANOIA_CHECKS
/*
* Check the fragtree.
*/
void
__jffs2_dbg_fragtree_paranoia_check(struct jffs2_inode_info *f)
{
mutex_lock(&f->sem);
__jffs2_dbg_fragtree_paranoia_check_nolock(f);
mutex_unlock(&f->sem);
}
void
__jffs2_dbg_fragtree_paranoia_check_nolock(struct jffs2_inode_info *f)
{
struct jffs2_node_frag *frag;
int bitched = 0;
for (frag = frag_first(&f->fragtree); frag; frag = frag_next(frag)) {
struct jffs2_full_dnode *fn = frag->node;
if (!fn || !fn->raw)
continue;
if (ref_flags(fn->raw) == REF_PRISTINE) {
if (fn->frags > 1) {
JFFS2_ERROR("REF_PRISTINE node at 0x%08x had %d frags. Tell dwmw2.\n",
ref_offset(fn->raw), fn->frags);
bitched = 1;
}
/* A hole node which isn't multi-page should be garbage-collected
and merged anyway, so we just check for the frag size here,
rather than mucking around with actually reading the node
and checking the compression type, which is the real way
to tell a hole node. */
if (frag->ofs & (PAGE_CACHE_SIZE-1) && frag_prev(frag)
&& frag_prev(frag)->size < PAGE_CACHE_SIZE && frag_prev(frag)->node) {
JFFS2_ERROR("REF_PRISTINE node at 0x%08x had a previous non-hole frag in the same page. Tell dwmw2.\n",
ref_offset(fn->raw));
bitched = 1;
}
if ((frag->ofs+frag->size) & (PAGE_CACHE_SIZE-1) && frag_next(frag)
&& frag_next(frag)->size < PAGE_CACHE_SIZE && frag_next(frag)->node) {
JFFS2_ERROR("REF_PRISTINE node at 0x%08x (%08x-%08x) had a following non-hole frag in the same page. Tell dwmw2.\n",
ref_offset(fn->raw), frag->ofs, frag->ofs+frag->size);
bitched = 1;
}
}
}
if (bitched) {
JFFS2_ERROR("fragtree is corrupted.\n");
__jffs2_dbg_dump_fragtree_nolock(f);
BUG();
}
}
/*
* Check if the flash contains all 0xFF before we start writing.
*/
void
__jffs2_dbg_prewrite_paranoia_check(struct jffs2_sb_info *c,
uint32_t ofs, int len)
{
size_t retlen;
int ret, i;
unsigned char *buf;
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
return;
ret = jffs2_flash_read(c, ofs, len, &retlen, buf);
if (ret || (retlen != len)) {
JFFS2_WARNING("read %d bytes failed or short. ret %d, retlen %zd.\n",
len, ret, retlen);
kfree(buf);
return;
}
ret = 0;
for (i = 0; i < len; i++)
if (buf[i] != 0xff)
ret = 1;
if (ret) {
JFFS2_ERROR("argh, about to write node to %#08x on flash, but there are data already there. The first corrupted byte is at %#08x offset.\n",
ofs, ofs + i);
__jffs2_dbg_dump_buffer(buf, len, ofs);
kfree(buf);
BUG();
}
kfree(buf);
}
void __jffs2_dbg_superblock_counts(struct jffs2_sb_info *c)
{
struct jffs2_eraseblock *jeb;
uint32_t free = 0, dirty = 0, used = 0, wasted = 0,
erasing = 0, bad = 0, unchecked = 0;
int nr_counted = 0;
int dump = 0;
if (c->gcblock) {
nr_counted++;
free += c->gcblock->free_size;
dirty += c->gcblock->dirty_size;
used += c->gcblock->used_size;
wasted += c->gcblock->wasted_size;
unchecked += c->gcblock->unchecked_size;
}
if (c->nextblock) {
nr_counted++;
free += c->nextblock->free_size;
dirty += c->nextblock->dirty_size;
used += c->nextblock->used_size;
wasted += c->nextblock->wasted_size;
unchecked += c->nextblock->unchecked_size;
}
list_for_each_entry(jeb, &c->clean_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->very_dirty_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->dirty_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->erasable_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->erasable_pending_wbuf_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->erase_pending_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->free_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->bad_used_list, list) {
nr_counted++;
free += jeb->free_size;
dirty += jeb->dirty_size;
used += jeb->used_size;
wasted += jeb->wasted_size;
unchecked += jeb->unchecked_size;
}
list_for_each_entry(jeb, &c->erasing_list, list) {
nr_counted++;
erasing += c->sector_size;
}
list_for_each_entry(jeb, &c->erase_checking_list, list) {
nr_counted++;
erasing += c->sector_size;
}
list_for_each_entry(jeb, &c->erase_complete_list, list) {
nr_counted++;
erasing += c->sector_size;
}
list_for_each_entry(jeb, &c->bad_list, list) {
nr_counted++;
bad += c->sector_size;
}
#define check(sz) \
do { \
if (sz != c->sz##_size) { \
pr_warn("%s_size mismatch counted 0x%x, c->%s_size 0x%x\n", \
#sz, sz, #sz, c->sz##_size); \
dump = 1; \
} \
} while (0)
check(free);
check(dirty);
check(used);
check(wasted);
check(unchecked);
check(bad);
check(erasing);
#undef check
if (nr_counted != c->nr_blocks) {
pr_warn("%s counted only 0x%x blocks of 0x%x. Where are the others?\n",
__func__, nr_counted, c->nr_blocks);
dump = 1;
}
if (dump) {
__jffs2_dbg_dump_block_lists_nolock(c);
BUG();
}
}
/*
* Check the space accounting and node_ref list correctness for the JFFS2 eraseblock 'jeb'.
*/
void
__jffs2_dbg_acct_paranoia_check(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb)
{
spin_lock(&c->erase_completion_lock);
__jffs2_dbg_acct_paranoia_check_nolock(c, jeb);
spin_unlock(&c->erase_completion_lock);
}
void
__jffs2_dbg_acct_paranoia_check_nolock(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb)
{
uint32_t my_used_size = 0;
uint32_t my_unchecked_size = 0;
uint32_t my_dirty_size = 0;
struct jffs2_raw_node_ref *ref2 = jeb->first_node;
while (ref2) {
uint32_t totlen = ref_totlen(c, jeb, ref2);
if (ref_offset(ref2) < jeb->offset ||
ref_offset(ref2) > jeb->offset + c->sector_size) {
JFFS2_ERROR("node_ref %#08x shouldn't be in block at %#08x.\n",
ref_offset(ref2), jeb->offset);
goto error;
}
if (ref_flags(ref2) == REF_UNCHECKED)
my_unchecked_size += totlen;
else if (!ref_obsolete(ref2))
my_used_size += totlen;
else
my_dirty_size += totlen;
if ((!ref_next(ref2)) != (ref2 == jeb->last_node)) {
JFFS2_ERROR("node_ref for node at %#08x (mem %p) has next at %#08x (mem %p), last_node is at %#08x (mem %p).\n",
ref_offset(ref2), ref2, ref_offset(ref_next(ref2)), ref_next(ref2),
ref_offset(jeb->last_node), jeb->last_node);
goto error;
}
ref2 = ref_next(ref2);
}
if (my_used_size != jeb->used_size) {
JFFS2_ERROR("Calculated used size %#08x != stored used size %#08x.\n",
my_used_size, jeb->used_size);
goto error;
}
if (my_unchecked_size != jeb->unchecked_size) {
JFFS2_ERROR("Calculated unchecked size %#08x != stored unchecked size %#08x.\n",
my_unchecked_size, jeb->unchecked_size);
goto error;
}
#if 0
/* This should work when we implement ref->__totlen elimination */
if (my_dirty_size != jeb->dirty_size + jeb->wasted_size) {
JFFS2_ERROR("Calculated dirty+wasted size %#08x != stored dirty + wasted size %#08x\n",
my_dirty_size, jeb->dirty_size + jeb->wasted_size);
goto error;
}
if (jeb->free_size == 0
&& my_used_size + my_unchecked_size + my_dirty_size != c->sector_size) {
JFFS2_ERROR("The sum of all nodes in block (%#x) != size of block (%#x)\n",
my_used_size + my_unchecked_size + my_dirty_size,
c->sector_size);
goto error;
}
#endif
if (!(c->flags & (JFFS2_SB_FLAG_BUILDING|JFFS2_SB_FLAG_SCANNING)))
__jffs2_dbg_superblock_counts(c);
return;
error:
__jffs2_dbg_dump_node_refs_nolock(c, jeb);
__jffs2_dbg_dump_jeb_nolock(jeb);
__jffs2_dbg_dump_block_lists_nolock(c);
BUG();
}
#endif /* JFFS2_DBG_PARANOIA_CHECKS */
#if defined(JFFS2_DBG_DUMPS) || defined(JFFS2_DBG_PARANOIA_CHECKS)
/*
* Dump the node_refs of the 'jeb' JFFS2 eraseblock.
*/
void
__jffs2_dbg_dump_node_refs(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb)
{
spin_lock(&c->erase_completion_lock);
__jffs2_dbg_dump_node_refs_nolock(c, jeb);
spin_unlock(&c->erase_completion_lock);
}
void
__jffs2_dbg_dump_node_refs_nolock(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb)
{
struct jffs2_raw_node_ref *ref;
int i = 0;
printk(JFFS2_DBG_MSG_PREFIX " Dump node_refs of the eraseblock %#08x\n", jeb->offset);
if (!jeb->first_node) {
printk(JFFS2_DBG_MSG_PREFIX " no nodes in the eraseblock %#08x\n", jeb->offset);
return;
}
printk(JFFS2_DBG);
for (ref = jeb->first_node; ; ref = ref_next(ref)) {
printk("%#08x", ref_offset(ref));
#ifdef TEST_TOTLEN
printk("(%x)", ref->__totlen);
#endif
if (ref_next(ref))
printk("->");
else
break;
if (++i == 4) {
i = 0;
printk("\n" JFFS2_DBG);
}
}
printk("\n");
}
/*
* Dump an eraseblock's space accounting.
*/
void
__jffs2_dbg_dump_jeb(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
{
spin_lock(&c->erase_completion_lock);
__jffs2_dbg_dump_jeb_nolock(jeb);
spin_unlock(&c->erase_completion_lock);
}
void
__jffs2_dbg_dump_jeb_nolock(struct jffs2_eraseblock *jeb)
{
if (!jeb)
return;
printk(JFFS2_DBG_MSG_PREFIX " dump space accounting for the eraseblock at %#08x:\n",
jeb->offset);
printk(JFFS2_DBG "used_size: %#08x\n", jeb->used_size);
printk(JFFS2_DBG "dirty_size: %#08x\n", jeb->dirty_size);
printk(JFFS2_DBG "wasted_size: %#08x\n", jeb->wasted_size);
printk(JFFS2_DBG "unchecked_size: %#08x\n", jeb->unchecked_size);
printk(JFFS2_DBG "free_size: %#08x\n", jeb->free_size);
}
void
__jffs2_dbg_dump_block_lists(struct jffs2_sb_info *c)
{
spin_lock(&c->erase_completion_lock);
__jffs2_dbg_dump_block_lists_nolock(c);
spin_unlock(&c->erase_completion_lock);
}
void
__jffs2_dbg_dump_block_lists_nolock(struct jffs2_sb_info *c)
{
printk(JFFS2_DBG_MSG_PREFIX " dump JFFS2 blocks lists:\n");
printk(JFFS2_DBG "flash_size: %#08x\n", c->flash_size);
printk(JFFS2_DBG "used_size: %#08x\n", c->used_size);
printk(JFFS2_DBG "dirty_size: %#08x\n", c->dirty_size);
printk(JFFS2_DBG "wasted_size: %#08x\n", c->wasted_size);
printk(JFFS2_DBG "unchecked_size: %#08x\n", c->unchecked_size);
printk(JFFS2_DBG "free_size: %#08x\n", c->free_size);
printk(JFFS2_DBG "erasing_size: %#08x\n", c->erasing_size);
printk(JFFS2_DBG "bad_size: %#08x\n", c->bad_size);
printk(JFFS2_DBG "sector_size: %#08x\n", c->sector_size);
printk(JFFS2_DBG "jffs2_reserved_blocks size: %#08x\n",
c->sector_size * c->resv_blocks_write);
if (c->nextblock)
printk(JFFS2_DBG "nextblock: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
c->nextblock->offset, c->nextblock->used_size,
c->nextblock->dirty_size, c->nextblock->wasted_size,
c->nextblock->unchecked_size, c->nextblock->free_size);
else
printk(JFFS2_DBG "nextblock: NULL\n");
if (c->gcblock)
printk(JFFS2_DBG "gcblock: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
c->gcblock->offset, c->gcblock->used_size, c->gcblock->dirty_size,
c->gcblock->wasted_size, c->gcblock->unchecked_size, c->gcblock->free_size);
else
printk(JFFS2_DBG "gcblock: NULL\n");
if (list_empty(&c->clean_list)) {
printk(JFFS2_DBG "clean_list: empty\n");
} else {
struct list_head *this;
int numblocks = 0;
uint32_t dirty = 0;
list_for_each(this, &c->clean_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
numblocks ++;
dirty += jeb->wasted_size;
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "clean_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
printk (JFFS2_DBG "Contains %d blocks with total wasted size %u, average wasted size: %u\n",
numblocks, dirty, dirty / numblocks);
}
if (list_empty(&c->very_dirty_list)) {
printk(JFFS2_DBG "very_dirty_list: empty\n");
} else {
struct list_head *this;
int numblocks = 0;
uint32_t dirty = 0;
list_for_each(this, &c->very_dirty_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
numblocks ++;
dirty += jeb->dirty_size;
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "very_dirty_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
printk (JFFS2_DBG "Contains %d blocks with total dirty size %u, average dirty size: %u\n",
numblocks, dirty, dirty / numblocks);
}
if (list_empty(&c->dirty_list)) {
printk(JFFS2_DBG "dirty_list: empty\n");
} else {
struct list_head *this;
int numblocks = 0;
uint32_t dirty = 0;
list_for_each(this, &c->dirty_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
numblocks ++;
dirty += jeb->dirty_size;
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "dirty_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
printk (JFFS2_DBG "contains %d blocks with total dirty size %u, average dirty size: %u\n",
numblocks, dirty, dirty / numblocks);
}
if (list_empty(&c->erasable_list)) {
printk(JFFS2_DBG "erasable_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->erasable_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "erasable_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
if (list_empty(&c->erasing_list)) {
printk(JFFS2_DBG "erasing_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->erasing_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "erasing_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
if (list_empty(&c->erase_checking_list)) {
printk(JFFS2_DBG "erase_checking_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->erase_checking_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "erase_checking_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
if (list_empty(&c->erase_pending_list)) {
printk(JFFS2_DBG "erase_pending_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->erase_pending_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "erase_pending_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
if (list_empty(&c->erasable_pending_wbuf_list)) {
printk(JFFS2_DBG "erasable_pending_wbuf_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->erasable_pending_wbuf_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "erasable_pending_wbuf_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
if (list_empty(&c->free_list)) {
printk(JFFS2_DBG "free_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->free_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "free_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
if (list_empty(&c->bad_list)) {
printk(JFFS2_DBG "bad_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->bad_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "bad_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
if (list_empty(&c->bad_used_list)) {
printk(JFFS2_DBG "bad_used_list: empty\n");
} else {
struct list_head *this;
list_for_each(this, &c->bad_used_list) {
struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list);
if (!(jeb->used_size == 0 && jeb->dirty_size == 0 && jeb->wasted_size == 0)) {
printk(JFFS2_DBG "bad_used_list: %#08x (used %#08x, dirty %#08x, wasted %#08x, unchecked %#08x, free %#08x)\n",
jeb->offset, jeb->used_size, jeb->dirty_size, jeb->wasted_size,
jeb->unchecked_size, jeb->free_size);
}
}
}
}
void
__jffs2_dbg_dump_fragtree(struct jffs2_inode_info *f)
{
mutex_lock(&f->sem);
jffs2_dbg_dump_fragtree_nolock(f);
mutex_unlock(&f->sem);
}
void
__jffs2_dbg_dump_fragtree_nolock(struct jffs2_inode_info *f)
{
struct jffs2_node_frag *this = frag_first(&f->fragtree);
uint32_t lastofs = 0;
int buggy = 0;
printk(JFFS2_DBG_MSG_PREFIX " dump fragtree of ino #%u\n", f->inocache->ino);
while(this) {
if (this->node)
printk(JFFS2_DBG "frag %#04x-%#04x: %#08x(%d) on flash (*%p), left (%p), right (%p), parent (%p)\n",
this->ofs, this->ofs+this->size, ref_offset(this->node->raw),
ref_flags(this->node->raw), this, frag_left(this), frag_right(this),
frag_parent(this));
else
printk(JFFS2_DBG "frag %#04x-%#04x: hole (*%p). left (%p), right (%p), parent (%p)\n",
this->ofs, this->ofs+this->size, this, frag_left(this),
frag_right(this), frag_parent(this));
if (this->ofs != lastofs)
buggy = 1;
lastofs = this->ofs + this->size;
this = frag_next(this);
}
if (f->metadata)
printk(JFFS2_DBG "metadata at 0x%08x\n", ref_offset(f->metadata->raw));
if (buggy) {
JFFS2_ERROR("frag tree got a hole in it.\n");
BUG();
}
}
#define JFFS2_BUFDUMP_BYTES_PER_LINE 32
void
__jffs2_dbg_dump_buffer(unsigned char *buf, int len, uint32_t offs)
{
int skip;
int i;
printk(JFFS2_DBG_MSG_PREFIX " dump from offset %#08x to offset %#08x (%x bytes).\n",
offs, offs + len, len);
i = skip = offs % JFFS2_BUFDUMP_BYTES_PER_LINE;
offs = offs & ~(JFFS2_BUFDUMP_BYTES_PER_LINE - 1);
if (skip != 0)
printk(JFFS2_DBG "%#08x: ", offs);
while (skip--)
printk(" ");
while (i < len) {
if ((i % JFFS2_BUFDUMP_BYTES_PER_LINE) == 0 && i != len -1) {
if (i != 0)
printk("\n");
offs += JFFS2_BUFDUMP_BYTES_PER_LINE;
printk(JFFS2_DBG "%0#8x: ", offs);
}
printk("%02x ", buf[i]);
i += 1;
}
printk("\n");
}
/*
* Dump a JFFS2 node.
*/
void
__jffs2_dbg_dump_node(struct jffs2_sb_info *c, uint32_t ofs)
{
union jffs2_node_union node;
int len = sizeof(union jffs2_node_union);
size_t retlen;
uint32_t crc;
int ret;
printk(JFFS2_DBG_MSG_PREFIX " dump node at offset %#08x.\n", ofs);
ret = jffs2_flash_read(c, ofs, len, &retlen, (unsigned char *)&node);
if (ret || (retlen != len)) {
JFFS2_ERROR("read %d bytes failed or short. ret %d, retlen %zd.\n",
len, ret, retlen);
return;
}
printk(JFFS2_DBG "magic:\t%#04x\n", je16_to_cpu(node.u.magic));
printk(JFFS2_DBG "nodetype:\t%#04x\n", je16_to_cpu(node.u.nodetype));
printk(JFFS2_DBG "totlen:\t%#08x\n", je32_to_cpu(node.u.totlen));
printk(JFFS2_DBG "hdr_crc:\t%#08x\n", je32_to_cpu(node.u.hdr_crc));
crc = crc32(0, &node.u, sizeof(node.u) - 4);
if (crc != je32_to_cpu(node.u.hdr_crc)) {
JFFS2_ERROR("wrong common header CRC.\n");
return;
}
if (je16_to_cpu(node.u.magic) != JFFS2_MAGIC_BITMASK &&
je16_to_cpu(node.u.magic) != JFFS2_OLD_MAGIC_BITMASK)
{
JFFS2_ERROR("wrong node magic: %#04x instead of %#04x.\n",
je16_to_cpu(node.u.magic), JFFS2_MAGIC_BITMASK);
return;
}
switch(je16_to_cpu(node.u.nodetype)) {
case JFFS2_NODETYPE_INODE:
printk(JFFS2_DBG "the node is inode node\n");
printk(JFFS2_DBG "ino:\t%#08x\n", je32_to_cpu(node.i.ino));
printk(JFFS2_DBG "version:\t%#08x\n", je32_to_cpu(node.i.version));
printk(JFFS2_DBG "mode:\t%#08x\n", node.i.mode.m);
printk(JFFS2_DBG "uid:\t%#04x\n", je16_to_cpu(node.i.uid));
printk(JFFS2_DBG "gid:\t%#04x\n", je16_to_cpu(node.i.gid));
printk(JFFS2_DBG "isize:\t%#08x\n", je32_to_cpu(node.i.isize));
printk(JFFS2_DBG "atime:\t%#08x\n", je32_to_cpu(node.i.atime));
printk(JFFS2_DBG "mtime:\t%#08x\n", je32_to_cpu(node.i.mtime));
printk(JFFS2_DBG "ctime:\t%#08x\n", je32_to_cpu(node.i.ctime));
printk(JFFS2_DBG "offset:\t%#08x\n", je32_to_cpu(node.i.offset));
printk(JFFS2_DBG "csize:\t%#08x\n", je32_to_cpu(node.i.csize));
printk(JFFS2_DBG "dsize:\t%#08x\n", je32_to_cpu(node.i.dsize));
printk(JFFS2_DBG "compr:\t%#02x\n", node.i.compr);
printk(JFFS2_DBG "usercompr:\t%#02x\n", node.i.usercompr);
printk(JFFS2_DBG "flags:\t%#04x\n", je16_to_cpu(node.i.flags));
printk(JFFS2_DBG "data_crc:\t%#08x\n", je32_to_cpu(node.i.data_crc));
printk(JFFS2_DBG "node_crc:\t%#08x\n", je32_to_cpu(node.i.node_crc));
crc = crc32(0, &node.i, sizeof(node.i) - 8);
if (crc != je32_to_cpu(node.i.node_crc)) {
JFFS2_ERROR("wrong node header CRC.\n");
return;
}
break;
case JFFS2_NODETYPE_DIRENT:
printk(JFFS2_DBG "the node is dirent node\n");
printk(JFFS2_DBG "pino:\t%#08x\n", je32_to_cpu(node.d.pino));
printk(JFFS2_DBG "version:\t%#08x\n", je32_to_cpu(node.d.version));
printk(JFFS2_DBG "ino:\t%#08x\n", je32_to_cpu(node.d.ino));
printk(JFFS2_DBG "mctime:\t%#08x\n", je32_to_cpu(node.d.mctime));
printk(JFFS2_DBG "nsize:\t%#02x\n", node.d.nsize);
printk(JFFS2_DBG "type:\t%#02x\n", node.d.type);
printk(JFFS2_DBG "node_crc:\t%#08x\n", je32_to_cpu(node.d.node_crc));
printk(JFFS2_DBG "name_crc:\t%#08x\n", je32_to_cpu(node.d.name_crc));
node.d.name[node.d.nsize] = '\0';
printk(JFFS2_DBG "name:\t\"%s\"\n", node.d.name);
crc = crc32(0, &node.d, sizeof(node.d) - 8);
if (crc != je32_to_cpu(node.d.node_crc)) {
JFFS2_ERROR("wrong node header CRC.\n");
return;
}
break;
default:
printk(JFFS2_DBG "node type is unknown\n");
break;
}
}
#endif /* JFFS2_DBG_DUMPS || JFFS2_DBG_PARANOIA_CHECKS */
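The check(sz) macro in __jffs2_dbg_superblock_counts() relies on token pasting and stringification to compare each locally summed counter against the matching superblock field; check(free), for instance, expands to roughly the following:

do {
	if (free != c->free_size) {
		pr_warn("%s_size mismatch counted 0x%x, c->%s_size 0x%x\n",
			"free", free, "free", c->free_size);
		dump = 1;
	}
} while (0);

If any counter disagrees, or fewer blocks were visited than c->nr_blocks, the block lists are dumped and the kernel BUG()s, since the accounting invariant (free + dirty + used + wasted + unchecked + erasing + bad == flash_size) no longer holds.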

275
fs/jffs2/debug.h Normal file
View file

@ -0,0 +1,275 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef _JFFS2_DEBUG_H_
#define _JFFS2_DEBUG_H_
#include <linux/sched.h>
#ifndef CONFIG_JFFS2_FS_DEBUG
#define CONFIG_JFFS2_FS_DEBUG 0
#endif
#if CONFIG_JFFS2_FS_DEBUG > 0
/* Enable "paranoia" checks and dumps */
#define JFFS2_DBG_PARANOIA_CHECKS
#define JFFS2_DBG_DUMPS
/*
* By defining/undefining the below macros one may select debugging messages
* for specific JFFS2 subsystems.
*/
#define JFFS2_DBG_READINODE_MESSAGES
#define JFFS2_DBG_FRAGTREE_MESSAGES
#define JFFS2_DBG_DENTLIST_MESSAGES
#define JFFS2_DBG_NODEREF_MESSAGES
#define JFFS2_DBG_INOCACHE_MESSAGES
#define JFFS2_DBG_SUMMARY_MESSAGES
#define JFFS2_DBG_FSBUILD_MESSAGES
#endif
#if CONFIG_JFFS2_FS_DEBUG > 1
#define JFFS2_DBG_FRAGTREE2_MESSAGES
#define JFFS2_DBG_READINODE2_MESSAGES
#define JFFS2_DBG_MEMALLOC_MESSAGES
#endif
/* Sanity checks are supposed to be light-weight and enabled by default */
#define JFFS2_DBG_SANITY_CHECKS
/*
* Dx() are mainly used for debugging messages, they must go away and be
* superseded by nicer dbg_xxx() macros...
*/
#if CONFIG_JFFS2_FS_DEBUG > 0
#define DEBUG
#define D1(x) x
#else
#define D1(x)
#endif
#if CONFIG_JFFS2_FS_DEBUG > 1
#define D2(x) x
#else
#define D2(x)
#endif
#define jffs2_dbg(level, fmt, ...) \
do { \
if (CONFIG_JFFS2_FS_DEBUG >= level) \
pr_debug(fmt, ##__VA_ARGS__); \
} while (0)
/* The prefixes of JFFS2 messages */
#define JFFS2_DBG KERN_DEBUG
#define JFFS2_DBG_PREFIX "[JFFS2 DBG]"
#define JFFS2_DBG_MSG_PREFIX JFFS2_DBG JFFS2_DBG_PREFIX
/* JFFS2 message macros */
#define JFFS2_ERROR(fmt, ...) \
pr_err("error: (%d) %s: " fmt, \
task_pid_nr(current), __func__, ##__VA_ARGS__)
#define JFFS2_WARNING(fmt, ...) \
pr_warn("warning: (%d) %s: " fmt, \
task_pid_nr(current), __func__, ##__VA_ARGS__)
#define JFFS2_NOTICE(fmt, ...) \
pr_notice("notice: (%d) %s: " fmt, \
task_pid_nr(current), __func__, ##__VA_ARGS__)
#define JFFS2_DEBUG(fmt, ...) \
printk(KERN_DEBUG "[JFFS2 DBG] (%d) %s: " fmt, \
task_pid_nr(current), __func__, ##__VA_ARGS__)
/*
* We split our debugging messages on several parts, depending on the JFFS2
* subsystem the message belongs to.
*/
/* Read inode debugging messages */
#ifdef JFFS2_DBG_READINODE_MESSAGES
#define dbg_readinode(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_readinode(fmt, ...)
#endif
#ifdef JFFS2_DBG_READINODE2_MESSAGES
#define dbg_readinode2(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_readinode2(fmt, ...)
#endif
/* Fragtree build debugging messages */
#ifdef JFFS2_DBG_FRAGTREE_MESSAGES
#define dbg_fragtree(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_fragtree(fmt, ...)
#endif
#ifdef JFFS2_DBG_FRAGTREE2_MESSAGES
#define dbg_fragtree2(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_fragtree2(fmt, ...)
#endif
/* Directory entry list manipulation debugging messages */
#ifdef JFFS2_DBG_DENTLIST_MESSAGES
#define dbg_dentlist(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_dentlist(fmt, ...)
#endif
/* Print the messages about manipulating node_refs */
#ifdef JFFS2_DBG_NODEREF_MESSAGES
#define dbg_noderef(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_noderef(fmt, ...)
#endif
/* Manipulations with the list of inodes (JFFS2 inocache) */
#ifdef JFFS2_DBG_INOCACHE_MESSAGES
#define dbg_inocache(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_inocache(fmt, ...)
#endif
/* Summary debugging messages */
#ifdef JFFS2_DBG_SUMMARY_MESSAGES
#define dbg_summary(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_summary(fmt, ...)
#endif
/* File system build messages */
#ifdef JFFS2_DBG_FSBUILD_MESSAGES
#define dbg_fsbuild(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_fsbuild(fmt, ...)
#endif
/* Watch the object allocations */
#ifdef JFFS2_DBG_MEMALLOC_MESSAGES
#define dbg_memalloc(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_memalloc(fmt, ...)
#endif
/* Watch the XATTR subsystem */
#ifdef JFFS2_DBG_XATTR_MESSAGES
#define dbg_xattr(fmt, ...) JFFS2_DEBUG(fmt, ##__VA_ARGS__)
#else
#define dbg_xattr(fmt, ...)
#endif
/* "Sanity" checks */
void
__jffs2_dbg_acct_sanity_check_nolock(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb);
void
__jffs2_dbg_acct_sanity_check(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb);
/* "Paranoia" checks */
void
__jffs2_dbg_fragtree_paranoia_check(struct jffs2_inode_info *f);
void
__jffs2_dbg_fragtree_paranoia_check_nolock(struct jffs2_inode_info *f);
void
__jffs2_dbg_acct_paranoia_check(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb);
void
__jffs2_dbg_acct_paranoia_check_nolock(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb);
void
__jffs2_dbg_prewrite_paranoia_check(struct jffs2_sb_info *c,
uint32_t ofs, int len);
/* "Dump" functions */
void
__jffs2_dbg_dump_jeb(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
void
__jffs2_dbg_dump_jeb_nolock(struct jffs2_eraseblock *jeb);
void
__jffs2_dbg_dump_block_lists(struct jffs2_sb_info *c);
void
__jffs2_dbg_dump_block_lists_nolock(struct jffs2_sb_info *c);
void
__jffs2_dbg_dump_node_refs(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb);
void
__jffs2_dbg_dump_node_refs_nolock(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb);
void
__jffs2_dbg_dump_fragtree(struct jffs2_inode_info *f);
void
__jffs2_dbg_dump_fragtree_nolock(struct jffs2_inode_info *f);
void
__jffs2_dbg_dump_buffer(unsigned char *buf, int len, uint32_t offs);
void
__jffs2_dbg_dump_node(struct jffs2_sb_info *c, uint32_t ofs);
#ifdef JFFS2_DBG_PARANOIA_CHECKS
#define jffs2_dbg_fragtree_paranoia_check(f) \
__jffs2_dbg_fragtree_paranoia_check(f)
#define jffs2_dbg_fragtree_paranoia_check_nolock(f) \
__jffs2_dbg_fragtree_paranoia_check_nolock(f)
#define jffs2_dbg_acct_paranoia_check(c, jeb) \
__jffs2_dbg_acct_paranoia_check(c,jeb)
#define jffs2_dbg_acct_paranoia_check_nolock(c, jeb) \
__jffs2_dbg_acct_paranoia_check_nolock(c,jeb)
#define jffs2_dbg_prewrite_paranoia_check(c, ofs, len) \
__jffs2_dbg_prewrite_paranoia_check(c, ofs, len)
#else
#define jffs2_dbg_fragtree_paranoia_check(f)
#define jffs2_dbg_fragtree_paranoia_check_nolock(f)
#define jffs2_dbg_acct_paranoia_check(c, jeb)
#define jffs2_dbg_acct_paranoia_check_nolock(c, jeb)
#define jffs2_dbg_prewrite_paranoia_check(c, ofs, len)
#endif /* !JFFS2_DBG_PARANOIA_CHECKS */
#ifdef JFFS2_DBG_DUMPS
#define jffs2_dbg_dump_jeb(c, jeb) \
__jffs2_dbg_dump_jeb(c, jeb);
#define jffs2_dbg_dump_jeb_nolock(jeb) \
__jffs2_dbg_dump_jeb_nolock(jeb);
#define jffs2_dbg_dump_block_lists(c) \
__jffs2_dbg_dump_block_lists(c)
#define jffs2_dbg_dump_block_lists_nolock(c) \
__jffs2_dbg_dump_block_lists_nolock(c)
#define jffs2_dbg_dump_fragtree(f) \
__jffs2_dbg_dump_fragtree(f);
#define jffs2_dbg_dump_fragtree_nolock(f) \
__jffs2_dbg_dump_fragtree_nolock(f);
#define jffs2_dbg_dump_buffer(buf, len, offs) \
__jffs2_dbg_dump_buffer(*buf, len, offs);
#define jffs2_dbg_dump_node(c, ofs) \
__jffs2_dbg_dump_node(c, ofs);
#else
#define jffs2_dbg_dump_jeb(c, jeb)
#define jffs2_dbg_dump_jeb_nolock(jeb)
#define jffs2_dbg_dump_block_lists(c)
#define jffs2_dbg_dump_block_lists_nolock(c)
#define jffs2_dbg_dump_fragtree(f)
#define jffs2_dbg_dump_fragtree_nolock(f)
#define jffs2_dbg_dump_buffer(buf, len, offs)
#define jffs2_dbg_dump_node(c, ofs)
#endif /* !JFFS2_DBG_DUMPS */
#ifdef JFFS2_DBG_SANITY_CHECKS
#define jffs2_dbg_acct_sanity_check(c, jeb) \
__jffs2_dbg_acct_sanity_check(c, jeb)
#define jffs2_dbg_acct_sanity_check_nolock(c, jeb) \
__jffs2_dbg_acct_sanity_check_nolock(c, jeb)
#else
#define jffs2_dbg_acct_sanity_check(c, jeb)
#define jffs2_dbg_acct_sanity_check_nolock(c, jeb)
#endif /* !JFFS2_DBG_SANITY_CHECKS */
#endif /* _JFFS2_DEBUG_H_ */
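Since CONFIG_JFFS2_FS_DEBUG is a compile-time constant, the level test inside jffs2_dbg() is folded away and disabled messages cost nothing at runtime. A couple of hypothetical call sites showing the intended usage:

/* Printed at debug level 1 and above (compiles down to pr_debug). */
jffs2_dbg(1, "scanning eraseblock at %#08x\n", jeb->offset);

/* Level 2 only: the extra-noisy variants such as fragtree tracing. */
jffs2_dbg(2, "frag %#04x-%#04x\n", frag->ofs, frag->ofs + frag->size);

/* Legacy style: the whole statement vanishes when level 1 is off. */
D1(printk(KERN_DEBUG "jffs2: old-style level 1 message\n"));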

862
fs/jffs2/dir.c Normal file
View file

@ -0,0 +1,862 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/crc32.h>
#include <linux/jffs2.h>
#include "jffs2_fs_i.h"
#include "jffs2_fs_sb.h"
#include <linux/time.h>
#include "nodelist.h"
static int jffs2_readdir (struct file *, struct dir_context *);
static int jffs2_create (struct inode *,struct dentry *,umode_t,
bool);
static struct dentry *jffs2_lookup (struct inode *,struct dentry *,
unsigned int);
static int jffs2_link (struct dentry *,struct inode *,struct dentry *);
static int jffs2_unlink (struct inode *,struct dentry *);
static int jffs2_symlink (struct inode *,struct dentry *,const char *);
static int jffs2_mkdir (struct inode *,struct dentry *,umode_t);
static int jffs2_rmdir (struct inode *,struct dentry *);
static int jffs2_mknod (struct inode *,struct dentry *,umode_t,dev_t);
static int jffs2_rename (struct inode *, struct dentry *,
struct inode *, struct dentry *);
const struct file_operations jffs2_dir_operations =
{
.read = generic_read_dir,
.iterate = jffs2_readdir,
.unlocked_ioctl=jffs2_ioctl,
.fsync = jffs2_fsync,
.llseek = generic_file_llseek,
};
const struct inode_operations jffs2_dir_inode_operations =
{
.create = jffs2_create,
.lookup = jffs2_lookup,
.link = jffs2_link,
.unlink = jffs2_unlink,
.symlink = jffs2_symlink,
.mkdir = jffs2_mkdir,
.rmdir = jffs2_rmdir,
.mknod = jffs2_mknod,
.rename = jffs2_rename,
.get_acl = jffs2_get_acl,
.set_acl = jffs2_set_acl,
.setattr = jffs2_setattr,
.setxattr = jffs2_setxattr,
.getxattr = jffs2_getxattr,
.listxattr = jffs2_listxattr,
.removexattr = jffs2_removexattr
};
/***********************************************************************/
/* We keep the dirent list sorted in increasing order of name hash,
and we use the same hash function as the dentries. Makes this
nice and simple
*/
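/*
 * Illustration with hypothetical hash values: dirents whose name hashes
 * are 0x0021, 0x0105, 0x0105 and 0x09f3 are kept and walked in that order.
 * jffs2_lookup() below stops scanning as soon as fd_list->nhash exceeds
 * the hash it is looking for, and where two entries share a hash (a real
 * collision, or an old dirent plus its replacement) the one with the
 * higher version wins.
 */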
static struct dentry *jffs2_lookup(struct inode *dir_i, struct dentry *target,
unsigned int flags)
{
struct jffs2_inode_info *dir_f;
struct jffs2_full_dirent *fd = NULL, *fd_list;
uint32_t ino = 0;
struct inode *inode = NULL;
jffs2_dbg(1, "jffs2_lookup()\n");
if (target->d_name.len > JFFS2_MAX_NAME_LEN)
return ERR_PTR(-ENAMETOOLONG);
dir_f = JFFS2_INODE_INFO(dir_i);
mutex_lock(&dir_f->sem);
/* NB: The 2.2 backport will need to explicitly check for '.' and '..' here */
for (fd_list = dir_f->dents; fd_list && fd_list->nhash <= target->d_name.hash; fd_list = fd_list->next) {
if (fd_list->nhash == target->d_name.hash &&
(!fd || fd_list->version > fd->version) &&
strlen(fd_list->name) == target->d_name.len &&
!strncmp(fd_list->name, target->d_name.name, target->d_name.len)) {
fd = fd_list;
}
}
if (fd)
ino = fd->ino;
mutex_unlock(&dir_f->sem);
if (ino) {
inode = jffs2_iget(dir_i->i_sb, ino);
if (IS_ERR(inode))
pr_warn("iget() failed for ino #%u\n", ino);
}
return d_splice_alias(inode, target);
}
/***********************************************************************/
static int jffs2_readdir(struct file *file, struct dir_context *ctx)
{
struct inode *inode = file_inode(file);
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
struct jffs2_full_dirent *fd;
unsigned long curofs = 1;
jffs2_dbg(1, "jffs2_readdir() for dir_i #%lu\n", inode->i_ino);
if (!dir_emit_dots(file, ctx))
return 0;
mutex_lock(&f->sem);
for (fd = f->dents; fd; fd = fd->next) {
curofs++;
/* First loop: curofs = 2; pos = 2 */
if (curofs < ctx->pos) {
jffs2_dbg(2, "Skipping dirent: \"%s\", ino #%u, type %d, because curofs %ld < offset %ld\n",
fd->name, fd->ino, fd->type, curofs, (unsigned long)ctx->pos);
continue;
}
if (!fd->ino) {
jffs2_dbg(2, "Skipping deletion dirent \"%s\"\n",
fd->name);
ctx->pos++;
continue;
}
jffs2_dbg(2, "Dirent %ld: \"%s\", ino #%u, type %d\n",
(unsigned long)ctx->pos, fd->name, fd->ino, fd->type);
if (!dir_emit(ctx, fd->name, strlen(fd->name), fd->ino, fd->type))
break;
ctx->pos++;
}
mutex_unlock(&f->sem);
return 0;
}
/***********************************************************************/
static int jffs2_create(struct inode *dir_i, struct dentry *dentry,
umode_t mode, bool excl)
{
struct jffs2_raw_inode *ri;
struct jffs2_inode_info *f, *dir_f;
struct jffs2_sb_info *c;
struct inode *inode;
int ret;
ri = jffs2_alloc_raw_inode();
if (!ri)
return -ENOMEM;
c = JFFS2_SB_INFO(dir_i->i_sb);
jffs2_dbg(1, "%s()\n", __func__);
inode = jffs2_new_inode(dir_i, mode, ri);
if (IS_ERR(inode)) {
jffs2_dbg(1, "jffs2_new_inode() failed\n");
jffs2_free_raw_inode(ri);
return PTR_ERR(inode);
}
inode->i_op = &jffs2_file_inode_operations;
inode->i_fop = &jffs2_file_operations;
inode->i_mapping->a_ops = &jffs2_file_address_operations;
inode->i_mapping->nrpages = 0;
f = JFFS2_INODE_INFO(inode);
dir_f = JFFS2_INODE_INFO(dir_i);
/* jffs2_do_create() will want to lock it, _after_ reserving
space and taking c->alloc_sem. If we keep it locked here,
lockdep gets unhappy (although it's a false positive;
nothing else will be looking at this inode yet so there's
no chance of AB-BA deadlock involving its f->sem). */
mutex_unlock(&f->sem);
ret = jffs2_do_create(c, dir_f, f, ri, &dentry->d_name);
if (ret)
goto fail;
dir_i->i_mtime = dir_i->i_ctime = ITIME(je32_to_cpu(ri->ctime));
jffs2_free_raw_inode(ri);
jffs2_dbg(1, "%s(): Created ino #%lu with mode %o, nlink %d(%d). nrpages %ld\n",
__func__, inode->i_ino, inode->i_mode, inode->i_nlink,
f->inocache->pino_nlink, inode->i_mapping->nrpages);
unlock_new_inode(inode);
d_instantiate(dentry, inode);
return 0;
fail:
iget_failed(inode);
jffs2_free_raw_inode(ri);
return ret;
}
/***********************************************************************/
static int jffs2_unlink(struct inode *dir_i, struct dentry *dentry)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(dir_i->i_sb);
struct jffs2_inode_info *dir_f = JFFS2_INODE_INFO(dir_i);
struct jffs2_inode_info *dead_f = JFFS2_INODE_INFO(dentry->d_inode);
int ret;
uint32_t now = get_seconds();
ret = jffs2_do_unlink(c, dir_f, dentry->d_name.name,
dentry->d_name.len, dead_f, now);
if (dead_f->inocache)
set_nlink(dentry->d_inode, dead_f->inocache->pino_nlink);
if (!ret)
dir_i->i_mtime = dir_i->i_ctime = ITIME(now);
return ret;
}
/***********************************************************************/
static int jffs2_link (struct dentry *old_dentry, struct inode *dir_i, struct dentry *dentry)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(old_dentry->d_inode->i_sb);
struct jffs2_inode_info *f = JFFS2_INODE_INFO(old_dentry->d_inode);
struct jffs2_inode_info *dir_f = JFFS2_INODE_INFO(dir_i);
int ret;
uint8_t type;
uint32_t now;
/* Don't let people make hard links to bad inodes. */
if (!f->inocache)
return -EIO;
if (S_ISDIR(old_dentry->d_inode->i_mode))
return -EPERM;
/* XXX: This is ugly */
type = (old_dentry->d_inode->i_mode & S_IFMT) >> 12;
if (!type) type = DT_REG;
now = get_seconds();
ret = jffs2_do_link(c, dir_f, f->inocache->ino, type, dentry->d_name.name, dentry->d_name.len, now);
if (!ret) {
mutex_lock(&f->sem);
set_nlink(old_dentry->d_inode, ++f->inocache->pino_nlink);
mutex_unlock(&f->sem);
d_instantiate(dentry, old_dentry->d_inode);
dir_i->i_mtime = dir_i->i_ctime = ITIME(now);
ihold(old_dentry->d_inode);
}
return ret;
}
/***********************************************************************/
static int jffs2_symlink (struct inode *dir_i, struct dentry *dentry, const char *target)
{
struct jffs2_inode_info *f, *dir_f;
struct jffs2_sb_info *c;
struct inode *inode;
struct jffs2_raw_inode *ri;
struct jffs2_raw_dirent *rd;
struct jffs2_full_dnode *fn;
struct jffs2_full_dirent *fd;
int namelen;
uint32_t alloclen;
int ret, targetlen = strlen(target);
/* FIXME: If you care. We'd need to use frags for the target
if it grows much more than this */
if (targetlen > 254)
return -ENAMETOOLONG;
ri = jffs2_alloc_raw_inode();
if (!ri)
return -ENOMEM;
c = JFFS2_SB_INFO(dir_i->i_sb);
/* Try to reserve enough space for both node and dirent.
* Just the node will do for now, though
*/
namelen = dentry->d_name.len;
ret = jffs2_reserve_space(c, sizeof(*ri) + targetlen, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
if (ret) {
jffs2_free_raw_inode(ri);
return ret;
}
inode = jffs2_new_inode(dir_i, S_IFLNK | S_IRWXUGO, ri);
if (IS_ERR(inode)) {
jffs2_free_raw_inode(ri);
jffs2_complete_reservation(c);
return PTR_ERR(inode);
}
inode->i_op = &jffs2_symlink_inode_operations;
f = JFFS2_INODE_INFO(inode);
inode->i_size = targetlen;
ri->isize = ri->dsize = ri->csize = cpu_to_je32(inode->i_size);
ri->totlen = cpu_to_je32(sizeof(*ri) + inode->i_size);
ri->hdr_crc = cpu_to_je32(crc32(0, ri, sizeof(struct jffs2_unknown_node)-4));
ri->compr = JFFS2_COMPR_NONE;
ri->data_crc = cpu_to_je32(crc32(0, target, targetlen));
ri->node_crc = cpu_to_je32(crc32(0, ri, sizeof(*ri)-8));
fn = jffs2_write_dnode(c, f, ri, target, targetlen, ALLOC_NORMAL);
jffs2_free_raw_inode(ri);
if (IS_ERR(fn)) {
/* Eeek. Wave bye bye */
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = PTR_ERR(fn);
goto fail;
}
/* We use the f->target field to store the target path. */
f->target = kmemdup(target, targetlen + 1, GFP_KERNEL);
if (!f->target) {
pr_warn("Can't allocate %d bytes of memory\n", targetlen + 1);
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = -ENOMEM;
goto fail;
}
jffs2_dbg(1, "%s(): symlink's target '%s' cached\n",
__func__, (char *)f->target);
/* No data here. Only a metadata node, which will be
obsoleted by the first data write
*/
f->metadata = fn;
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = jffs2_init_security(inode, dir_i, &dentry->d_name);
if (ret)
goto fail;
ret = jffs2_init_acl_post(inode);
if (ret)
goto fail;
ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_DIRENT_SIZE(namelen));
if (ret)
goto fail;
rd = jffs2_alloc_raw_dirent();
if (!rd) {
/* Argh. Now we treat it like a normal delete */
jffs2_complete_reservation(c);
ret = -ENOMEM;
goto fail;
}
dir_f = JFFS2_INODE_INFO(dir_i);
mutex_lock(&dir_f->sem);
rd->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
rd->nodetype = cpu_to_je16(JFFS2_NODETYPE_DIRENT);
rd->totlen = cpu_to_je32(sizeof(*rd) + namelen);
rd->hdr_crc = cpu_to_je32(crc32(0, rd, sizeof(struct jffs2_unknown_node)-4));
rd->pino = cpu_to_je32(dir_i->i_ino);
rd->version = cpu_to_je32(++dir_f->highest_version);
rd->ino = cpu_to_je32(inode->i_ino);
rd->mctime = cpu_to_je32(get_seconds());
rd->nsize = namelen;
rd->type = DT_LNK;
rd->node_crc = cpu_to_je32(crc32(0, rd, sizeof(*rd)-8));
rd->name_crc = cpu_to_je32(crc32(0, dentry->d_name.name, namelen));
fd = jffs2_write_dirent(c, dir_f, rd, dentry->d_name.name, namelen, ALLOC_NORMAL);
if (IS_ERR(fd)) {
/* dirent failed to write. Delete the inode normally
as if it were the final unlink() */
jffs2_complete_reservation(c);
jffs2_free_raw_dirent(rd);
mutex_unlock(&dir_f->sem);
ret = PTR_ERR(fd);
goto fail;
}
dir_i->i_mtime = dir_i->i_ctime = ITIME(je32_to_cpu(rd->mctime));
jffs2_free_raw_dirent(rd);
/* Link the fd into the inode's list, obsoleting an old
one if necessary. */
jffs2_add_fd_to_list(c, fd, &dir_f->dents);
mutex_unlock(&dir_f->sem);
jffs2_complete_reservation(c);
unlock_new_inode(inode);
d_instantiate(dentry, inode);
return 0;
fail:
iget_failed(inode);
return ret;
}
static int jffs2_mkdir (struct inode *dir_i, struct dentry *dentry, umode_t mode)
{
struct jffs2_inode_info *f, *dir_f;
struct jffs2_sb_info *c;
struct inode *inode;
struct jffs2_raw_inode *ri;
struct jffs2_raw_dirent *rd;
struct jffs2_full_dnode *fn;
struct jffs2_full_dirent *fd;
int namelen;
uint32_t alloclen;
int ret;
mode |= S_IFDIR;
ri = jffs2_alloc_raw_inode();
if (!ri)
return -ENOMEM;
c = JFFS2_SB_INFO(dir_i->i_sb);
/* Try to reserve enough space for both node and dirent.
* Just the node will do for now, though
*/
namelen = dentry->d_name.len;
ret = jffs2_reserve_space(c, sizeof(*ri), &alloclen, ALLOC_NORMAL,
JFFS2_SUMMARY_INODE_SIZE);
if (ret) {
jffs2_free_raw_inode(ri);
return ret;
}
inode = jffs2_new_inode(dir_i, mode, ri);
if (IS_ERR(inode)) {
jffs2_free_raw_inode(ri);
jffs2_complete_reservation(c);
return PTR_ERR(inode);
}
inode->i_op = &jffs2_dir_inode_operations;
inode->i_fop = &jffs2_dir_operations;
f = JFFS2_INODE_INFO(inode);
/* Directories get nlink 2 at start */
set_nlink(inode, 2);
/* but ic->pino_nlink is the parent ino# */
f->inocache->pino_nlink = dir_i->i_ino;
ri->data_crc = cpu_to_je32(0);
ri->node_crc = cpu_to_je32(crc32(0, ri, sizeof(*ri)-8));
fn = jffs2_write_dnode(c, f, ri, NULL, 0, ALLOC_NORMAL);
jffs2_free_raw_inode(ri);
if (IS_ERR(fn)) {
/* Eeek. Wave bye bye */
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = PTR_ERR(fn);
goto fail;
}
/* No data here. Only a metadata node, which will be
obsoleted by the first data write
*/
f->metadata = fn;
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = jffs2_init_security(inode, dir_i, &dentry->d_name);
if (ret)
goto fail;
ret = jffs2_init_acl_post(inode);
if (ret)
goto fail;
ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_DIRENT_SIZE(namelen));
if (ret)
goto fail;
rd = jffs2_alloc_raw_dirent();
if (!rd) {
/* Argh. Now we treat it like a normal delete */
jffs2_complete_reservation(c);
ret = -ENOMEM;
goto fail;
}
dir_f = JFFS2_INODE_INFO(dir_i);
mutex_lock(&dir_f->sem);
rd->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
rd->nodetype = cpu_to_je16(JFFS2_NODETYPE_DIRENT);
rd->totlen = cpu_to_je32(sizeof(*rd) + namelen);
rd->hdr_crc = cpu_to_je32(crc32(0, rd, sizeof(struct jffs2_unknown_node)-4));
rd->pino = cpu_to_je32(dir_i->i_ino);
rd->version = cpu_to_je32(++dir_f->highest_version);
rd->ino = cpu_to_je32(inode->i_ino);
rd->mctime = cpu_to_je32(get_seconds());
rd->nsize = namelen;
rd->type = DT_DIR;
rd->node_crc = cpu_to_je32(crc32(0, rd, sizeof(*rd)-8));
rd->name_crc = cpu_to_je32(crc32(0, dentry->d_name.name, namelen));
fd = jffs2_write_dirent(c, dir_f, rd, dentry->d_name.name, namelen, ALLOC_NORMAL);
if (IS_ERR(fd)) {
/* dirent failed to write. Delete the inode normally
as if it were the final unlink() */
jffs2_complete_reservation(c);
jffs2_free_raw_dirent(rd);
mutex_unlock(&dir_f->sem);
ret = PTR_ERR(fd);
goto fail;
}
dir_i->i_mtime = dir_i->i_ctime = ITIME(je32_to_cpu(rd->mctime));
inc_nlink(dir_i);
jffs2_free_raw_dirent(rd);
/* Link the fd into the inode's list, obsoleting an old
one if necessary. */
jffs2_add_fd_to_list(c, fd, &dir_f->dents);
mutex_unlock(&dir_f->sem);
jffs2_complete_reservation(c);
unlock_new_inode(inode);
d_instantiate(dentry, inode);
return 0;
fail:
iget_failed(inode);
return ret;
}
static int jffs2_rmdir (struct inode *dir_i, struct dentry *dentry)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(dir_i->i_sb);
struct jffs2_inode_info *dir_f = JFFS2_INODE_INFO(dir_i);
struct jffs2_inode_info *f = JFFS2_INODE_INFO(dentry->d_inode);
struct jffs2_full_dirent *fd;
int ret;
uint32_t now = get_seconds();
for (fd = f->dents ; fd; fd = fd->next) {
if (fd->ino)
return -ENOTEMPTY;
}
ret = jffs2_do_unlink(c, dir_f, dentry->d_name.name,
dentry->d_name.len, f, now);
if (!ret) {
dir_i->i_mtime = dir_i->i_ctime = ITIME(now);
clear_nlink(dentry->d_inode);
drop_nlink(dir_i);
}
return ret;
}
static int jffs2_mknod (struct inode *dir_i, struct dentry *dentry, umode_t mode, dev_t rdev)
{
struct jffs2_inode_info *f, *dir_f;
struct jffs2_sb_info *c;
struct inode *inode;
struct jffs2_raw_inode *ri;
struct jffs2_raw_dirent *rd;
struct jffs2_full_dnode *fn;
struct jffs2_full_dirent *fd;
int namelen;
union jffs2_device_node dev;
int devlen = 0;
uint32_t alloclen;
int ret;
if (!new_valid_dev(rdev))
return -EINVAL;
ri = jffs2_alloc_raw_inode();
if (!ri)
return -ENOMEM;
c = JFFS2_SB_INFO(dir_i->i_sb);
if (S_ISBLK(mode) || S_ISCHR(mode))
devlen = jffs2_encode_dev(&dev, rdev);
/* Try to reserve enough space for both node and dirent.
* Just the node will do for now, though
*/
namelen = dentry->d_name.len;
ret = jffs2_reserve_space(c, sizeof(*ri) + devlen, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
if (ret) {
jffs2_free_raw_inode(ri);
return ret;
}
inode = jffs2_new_inode(dir_i, mode, ri);
if (IS_ERR(inode)) {
jffs2_free_raw_inode(ri);
jffs2_complete_reservation(c);
return PTR_ERR(inode);
}
inode->i_op = &jffs2_file_inode_operations;
init_special_inode(inode, inode->i_mode, rdev);
f = JFFS2_INODE_INFO(inode);
ri->dsize = ri->csize = cpu_to_je32(devlen);
ri->totlen = cpu_to_je32(sizeof(*ri) + devlen);
ri->hdr_crc = cpu_to_je32(crc32(0, ri, sizeof(struct jffs2_unknown_node)-4));
ri->compr = JFFS2_COMPR_NONE;
ri->data_crc = cpu_to_je32(crc32(0, &dev, devlen));
ri->node_crc = cpu_to_je32(crc32(0, ri, sizeof(*ri)-8));
fn = jffs2_write_dnode(c, f, ri, (char *)&dev, devlen, ALLOC_NORMAL);
jffs2_free_raw_inode(ri);
if (IS_ERR(fn)) {
/* Eeek. Wave bye bye */
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = PTR_ERR(fn);
goto fail;
}
/* No data here. Only a metadata node, which will be
obsoleted by the first data write
*/
f->metadata = fn;
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = jffs2_init_security(inode, dir_i, &dentry->d_name);
if (ret)
goto fail;
ret = jffs2_init_acl_post(inode);
if (ret)
goto fail;
ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_DIRENT_SIZE(namelen));
if (ret)
goto fail;
rd = jffs2_alloc_raw_dirent();
if (!rd) {
/* Argh. Now we treat it like a normal delete */
jffs2_complete_reservation(c);
ret = -ENOMEM;
goto fail;
}
dir_f = JFFS2_INODE_INFO(dir_i);
mutex_lock(&dir_f->sem);
rd->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
rd->nodetype = cpu_to_je16(JFFS2_NODETYPE_DIRENT);
rd->totlen = cpu_to_je32(sizeof(*rd) + namelen);
rd->hdr_crc = cpu_to_je32(crc32(0, rd, sizeof(struct jffs2_unknown_node)-4));
rd->pino = cpu_to_je32(dir_i->i_ino);
rd->version = cpu_to_je32(++dir_f->highest_version);
rd->ino = cpu_to_je32(inode->i_ino);
rd->mctime = cpu_to_je32(get_seconds());
rd->nsize = namelen;
/* XXX: This is ugly. */
rd->type = (mode & S_IFMT) >> 12;
rd->node_crc = cpu_to_je32(crc32(0, rd, sizeof(*rd)-8));
rd->name_crc = cpu_to_je32(crc32(0, dentry->d_name.name, namelen));
fd = jffs2_write_dirent(c, dir_f, rd, dentry->d_name.name, namelen, ALLOC_NORMAL);
if (IS_ERR(fd)) {
/* dirent failed to write. Delete the inode normally
as if it were the final unlink() */
jffs2_complete_reservation(c);
jffs2_free_raw_dirent(rd);
mutex_unlock(&dir_f->sem);
ret = PTR_ERR(fd);
goto fail;
}
dir_i->i_mtime = dir_i->i_ctime = ITIME(je32_to_cpu(rd->mctime));
jffs2_free_raw_dirent(rd);
/* Link the fd into the inode's list, obsoleting an old
one if necessary. */
jffs2_add_fd_to_list(c, fd, &dir_f->dents);
mutex_unlock(&dir_f->sem);
jffs2_complete_reservation(c);
unlock_new_inode(inode);
d_instantiate(dentry, inode);
return 0;
fail:
iget_failed(inode);
return ret;
}
static int jffs2_rename (struct inode *old_dir_i, struct dentry *old_dentry,
struct inode *new_dir_i, struct dentry *new_dentry)
{
int ret;
struct jffs2_sb_info *c = JFFS2_SB_INFO(old_dir_i->i_sb);
struct jffs2_inode_info *victim_f = NULL;
uint8_t type;
uint32_t now;
/* The VFS will check for us and prevent trying to rename a
* file over a directory and vice versa, but if it's a directory,
* the VFS can't check whether the victim is empty. The filesystem
* needs to do that for itself.
*/
if (new_dentry->d_inode) {
victim_f = JFFS2_INODE_INFO(new_dentry->d_inode);
if (S_ISDIR(new_dentry->d_inode->i_mode)) {
struct jffs2_full_dirent *fd;
mutex_lock(&victim_f->sem);
for (fd = victim_f->dents; fd; fd = fd->next) {
if (fd->ino) {
mutex_unlock(&victim_f->sem);
return -ENOTEMPTY;
}
}
mutex_unlock(&victim_f->sem);
}
}
/* XXX: We probably ought to alloc enough space for
both nodes at the same time. Writing the new link,
then getting -ENOSPC, is quite bad :)
*/
/* Make a hard link */
/* XXX: This is ugly */
type = (old_dentry->d_inode->i_mode & S_IFMT) >> 12;
if (!type) type = DT_REG;
now = get_seconds();
ret = jffs2_do_link(c, JFFS2_INODE_INFO(new_dir_i),
old_dentry->d_inode->i_ino, type,
new_dentry->d_name.name, new_dentry->d_name.len, now);
if (ret)
return ret;
if (victim_f) {
/* There was a victim. Kill it off nicely */
if (S_ISDIR(new_dentry->d_inode->i_mode))
clear_nlink(new_dentry->d_inode);
else
drop_nlink(new_dentry->d_inode);
/* Don't oops if the victim was a dirent pointing to an
inode which didn't exist. */
if (victim_f->inocache) {
mutex_lock(&victim_f->sem);
if (S_ISDIR(new_dentry->d_inode->i_mode))
victim_f->inocache->pino_nlink = 0;
else
victim_f->inocache->pino_nlink--;
mutex_unlock(&victim_f->sem);
}
}
/* If it was a directory we moved, and there was no victim,
increase i_nlink on its new parent */
if (S_ISDIR(old_dentry->d_inode->i_mode) && !victim_f)
inc_nlink(new_dir_i);
/* Unlink the original */
ret = jffs2_do_unlink(c, JFFS2_INODE_INFO(old_dir_i),
old_dentry->d_name.name, old_dentry->d_name.len, NULL, now);
/* We don't touch inode->i_nlink */
if (ret) {
/* Oh shit. We really ought to make a single node which can do both atomically */
struct jffs2_inode_info *f = JFFS2_INODE_INFO(old_dentry->d_inode);
mutex_lock(&f->sem);
inc_nlink(old_dentry->d_inode);
if (f->inocache && !S_ISDIR(old_dentry->d_inode->i_mode))
f->inocache->pino_nlink++;
mutex_unlock(&f->sem);
pr_notice("%s(): Link succeeded, unlink failed (err %d). You now have a hard link\n",
__func__, ret);
/* Might as well let the VFS know */
d_instantiate(new_dentry, old_dentry->d_inode);
ihold(old_dentry->d_inode);
new_dir_i->i_mtime = new_dir_i->i_ctime = ITIME(now);
return ret;
}
if (S_ISDIR(old_dentry->d_inode->i_mode))
drop_nlink(old_dir_i);
new_dir_i->i_mtime = new_dir_i->i_ctime = old_dir_i->i_mtime = old_dir_i->i_ctime = ITIME(now);
return 0;
}

515
fs/jffs2/erase.c Normal file

@ -0,0 +1,515 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/mtd/mtd.h>
#include <linux/compiler.h>
#include <linux/crc32.h>
#include <linux/sched.h>
#include <linux/pagemap.h>
#include "nodelist.h"
struct erase_priv_struct {
struct jffs2_eraseblock *jeb;
struct jffs2_sb_info *c;
};
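/* One of these is carved out of the same kmalloc() as the erase_info
itself (see jffs2_erase_block() below); instr->priv points at it so the
erase completion callback can recover both the superblock info and the
eraseblock being erased. */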
#ifndef __ECOS
static void jffs2_erase_callback(struct erase_info *);
#endif
static void jffs2_erase_failed(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t bad_offset);
static void jffs2_erase_succeeded(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
static void jffs2_mark_erased_block(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
static void jffs2_erase_block(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb)
{
int ret;
uint32_t bad_offset;
#ifdef __ECOS
ret = jffs2_flash_erase(c, jeb);
if (!ret) {
jffs2_erase_succeeded(c, jeb);
return;
}
bad_offset = jeb->offset;
#else /* Linux */
struct erase_info *instr;
jffs2_dbg(1, "%s(): erase block %#08x (range %#08x-%#08x)\n",
__func__,
jeb->offset, jeb->offset, jeb->offset + c->sector_size);
instr = kmalloc(sizeof(struct erase_info) + sizeof(struct erase_priv_struct), GFP_KERNEL);
if (!instr) {
pr_warn("kmalloc for struct erase_info in jffs2_erase_block failed. Refiling block for later\n");
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
list_move(&jeb->list, &c->erase_pending_list);
c->erasing_size -= c->sector_size;
c->dirty_size += c->sector_size;
jeb->dirty_size = c->sector_size;
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
return;
}
memset(instr, 0, sizeof(*instr));
instr->mtd = c->mtd;
instr->addr = jeb->offset;
instr->len = c->sector_size;
instr->callback = jffs2_erase_callback;
instr->priv = (unsigned long)(&instr[1]);
((struct erase_priv_struct *)instr->priv)->jeb = jeb;
((struct erase_priv_struct *)instr->priv)->c = c;
ret = mtd_erase(c->mtd, instr);
if (!ret)
return;
bad_offset = instr->fail_addr;
kfree(instr);
#endif /* __ECOS */
if (ret == -ENOMEM || ret == -EAGAIN) {
/* Erase failed immediately. Refile it on the list */
jffs2_dbg(1, "Erase at 0x%08x failed: %d. Refiling on erase_pending_list\n",
jeb->offset, ret);
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
list_move(&jeb->list, &c->erase_pending_list);
c->erasing_size -= c->sector_size;
c->dirty_size += c->sector_size;
jeb->dirty_size = c->sector_size;
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
return;
}
if (ret == -EROFS)
pr_warn("Erase at 0x%08x failed immediately: -EROFS. Is the sector locked?\n",
jeb->offset);
else
pr_warn("Erase at 0x%08x failed immediately: errno %d\n",
jeb->offset, ret);
jffs2_erase_failed(c, jeb, bad_offset);
}
int jffs2_erase_pending_blocks(struct jffs2_sb_info *c, int count)
{
struct jffs2_eraseblock *jeb;
int work_done = 0;
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
while (!list_empty(&c->erase_complete_list) ||
!list_empty(&c->erase_pending_list)) {
if (!list_empty(&c->erase_complete_list)) {
jeb = list_entry(c->erase_complete_list.next, struct jffs2_eraseblock, list);
list_move(&jeb->list, &c->erase_checking_list);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
jffs2_mark_erased_block(c, jeb);
work_done++;
if (!--count) {
jffs2_dbg(1, "Count reached. jffs2_erase_pending_blocks leaving\n");
goto done;
}
} else if (!list_empty(&c->erase_pending_list)) {
jeb = list_entry(c->erase_pending_list.next, struct jffs2_eraseblock, list);
jffs2_dbg(1, "Starting erase of pending block 0x%08x\n",
jeb->offset);
list_del(&jeb->list);
c->erasing_size += c->sector_size;
c->wasted_size -= jeb->wasted_size;
c->free_size -= jeb->free_size;
c->used_size -= jeb->used_size;
c->dirty_size -= jeb->dirty_size;
jeb->wasted_size = jeb->used_size = jeb->dirty_size = jeb->free_size = 0;
jffs2_free_jeb_node_refs(c, jeb);
list_add(&jeb->list, &c->erasing_list);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
jffs2_erase_block(c, jeb);
} else {
BUG();
}
/* Be nice */
cond_resched();
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
}
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
done:
jffs2_dbg(1, "jffs2_erase_pending_blocks completed\n");
return work_done;
}
static void jffs2_erase_succeeded(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
{
jffs2_dbg(1, "Erase completed successfully at 0x%08x\n", jeb->offset);
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
list_move_tail(&jeb->list, &c->erase_complete_list);
/* Wake the GC thread to mark them clean */
jffs2_garbage_collect_trigger(c);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
wake_up(&c->erase_wait);
}
static void jffs2_erase_failed(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t bad_offset)
{
/* For NAND, if the failure did not occur at the device level for a
specific physical page, don't bother updating the bad block table. */
if (jffs2_cleanmarker_oob(c) && (bad_offset != (uint32_t)MTD_FAIL_ADDR_UNKNOWN)) {
/* We had a device-level failure to erase. Let's see if we've
failed too many times. */
if (!jffs2_write_nand_badblock(c, jeb, bad_offset)) {
/* We'd like to give this block another try. */
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
list_move(&jeb->list, &c->erase_pending_list);
c->erasing_size -= c->sector_size;
c->dirty_size += c->sector_size;
jeb->dirty_size = c->sector_size;
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
return;
}
}
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
c->erasing_size -= c->sector_size;
c->bad_size += c->sector_size;
list_move(&jeb->list, &c->bad_list);
c->nr_erasing_blocks--;
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
wake_up(&c->erase_wait);
}
#ifndef __ECOS
static void jffs2_erase_callback(struct erase_info *instr)
{
struct erase_priv_struct *priv = (void *)instr->priv;
if(instr->state != MTD_ERASE_DONE) {
pr_warn("Erase at 0x%08llx finished, but state != MTD_ERASE_DONE. State is 0x%x instead.\n",
(unsigned long long)instr->addr, instr->state);
jffs2_erase_failed(priv->c, priv->jeb, instr->fail_addr);
} else {
jffs2_erase_succeeded(priv->c, priv->jeb);
}
kfree(instr);
}
#endif /* !__ECOS */
/* Hmmm. Maybe we should accept the extra space it takes and make
this a standard doubly-linked list? */
static inline void jffs2_remove_node_refs_from_ino_list(struct jffs2_sb_info *c,
struct jffs2_raw_node_ref *ref, struct jffs2_eraseblock *jeb)
{
struct jffs2_inode_cache *ic = NULL;
struct jffs2_raw_node_ref **prev;
prev = &ref->next_in_ino;
/* Walk the inode's list once, removing any nodes from this eraseblock */
while (1) {
if (!(*prev)->next_in_ino) {
/* We're looking at the jffs2_inode_cache, which is
at the end of the linked list. Stash it and continue
from the beginning of the list */
ic = (struct jffs2_inode_cache *)(*prev);
prev = &ic->nodes;
continue;
}
if (SECTOR_ADDR((*prev)->flash_offset) == jeb->offset) {
/* It's in the block we're erasing */
struct jffs2_raw_node_ref *this;
this = *prev;
*prev = this->next_in_ino;
this->next_in_ino = NULL;
if (this == ref)
break;
continue;
}
/* Not to be deleted. Skip */
prev = &((*prev)->next_in_ino);
}
/* PARANOIA */
if (!ic) {
JFFS2_WARNING("inode_cache/xattr_datum/xattr_ref"
" not found in remove_node_refs()!!\n");
return;
}
jffs2_dbg(1, "Removed nodes in range 0x%08x-0x%08x from ino #%u\n",
jeb->offset, jeb->offset + c->sector_size, ic->ino);
D2({
int i=0;
struct jffs2_raw_node_ref *this;
printk(KERN_DEBUG "After remove_node_refs_from_ino_list: \n");
this = ic->nodes;
printk(KERN_DEBUG);
while(this) {
pr_cont("0x%08x(%d)->",
ref_offset(this), ref_flags(this));
if (++i == 5) {
printk(KERN_DEBUG);
i=0;
}
this = this->next_in_ino;
}
pr_cont("\n");
});
switch (ic->class) {
#ifdef CONFIG_JFFS2_FS_XATTR
case RAWNODE_CLASS_XATTR_DATUM:
jffs2_release_xattr_datum(c, (struct jffs2_xattr_datum *)ic);
break;
case RAWNODE_CLASS_XATTR_REF:
jffs2_release_xattr_ref(c, (struct jffs2_xattr_ref *)ic);
break;
#endif
default:
if (ic->nodes == (void *)ic && ic->pino_nlink == 0)
jffs2_del_ino_cache(c, ic);
}
}
void jffs2_free_jeb_node_refs(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
{
struct jffs2_raw_node_ref *block, *ref;
jffs2_dbg(1, "Freeing all node refs for eraseblock offset 0x%08x\n",
jeb->offset);
block = ref = jeb->first_node;
while (ref) {
if (ref->flash_offset == REF_LINK_NODE) {
ref = ref->next_in_ino;
jffs2_free_refblock(block);
block = ref;
continue;
}
if (ref->flash_offset != REF_EMPTY_NODE && ref->next_in_ino)
jffs2_remove_node_refs_from_ino_list(c, ref, jeb);
/* else it was a non-inode node or already removed, so don't bother */
ref++;
}
jeb->first_node = jeb->last_node = NULL;
}
static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *bad_offset)
{
void *ebuf;
uint32_t ofs;
size_t retlen;
int ret;
unsigned long *wordebuf;
ret = mtd_point(c->mtd, jeb->offset, c->sector_size, &retlen,
&ebuf, NULL);
if (ret != -EOPNOTSUPP) {
if (ret) {
jffs2_dbg(1, "MTD point failed %d\n", ret);
goto do_flash_read;
}
if (retlen < c->sector_size) {
/* Don't muck about if it won't let us point to the whole erase sector */
jffs2_dbg(1, "MTD point returned len too short: 0x%zx\n",
retlen);
mtd_unpoint(c->mtd, jeb->offset, retlen);
goto do_flash_read;
}
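/* Start one word _before_ the mapped buffer so the pre-increment in
the loop below visits every word exactly once; retlen has already
been converted from bytes to words. */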
wordebuf = ebuf-sizeof(*wordebuf);
retlen /= sizeof(*wordebuf);
do {
if (*++wordebuf != ~0)
break;
} while(--retlen);
mtd_unpoint(c->mtd, jeb->offset, c->sector_size);
if (retlen) {
pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08tx\n",
*wordebuf,
jeb->offset +
c->sector_size-retlen * sizeof(*wordebuf));
return -EIO;
}
return 0;
}
do_flash_read:
ebuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!ebuf) {
pr_warn("Failed to allocate page buffer for verifying erase at 0x%08x. Refiling\n",
jeb->offset);
return -EAGAIN;
}
jffs2_dbg(1, "Verifying erase at 0x%08x\n", jeb->offset);
for (ofs = jeb->offset; ofs < jeb->offset + c->sector_size; ) {
uint32_t readlen = min((uint32_t)PAGE_SIZE, jeb->offset + c->sector_size - ofs);
int i;
*bad_offset = ofs;
ret = mtd_read(c->mtd, ofs, readlen, &retlen, ebuf);
if (ret) {
pr_warn("Read of newly-erased block at 0x%08x failed: %d. Putting on bad_list\n",
ofs, ret);
ret = -EIO;
goto fail;
}
if (retlen != readlen) {
pr_warn("Short read from newly-erased block at 0x%08x. Wanted %d, got %zd\n",
ofs, readlen, retlen);
ret = -EIO;
goto fail;
}
for (i=0; i<readlen; i += sizeof(unsigned long)) {
/* It's OK. We know it's properly aligned */
unsigned long *datum = ebuf + i;
if (*datum + 1) {
*bad_offset += i;
pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08x\n",
*datum, *bad_offset);
ret = -EIO;
goto fail;
}
}
ofs += readlen;
cond_resched();
}
ret = 0;
fail:
kfree(ebuf);
return ret;
}
static void jffs2_mark_erased_block(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
{
size_t retlen;
int ret;
uint32_t uninitialized_var(bad_offset);
switch (jffs2_block_check_erase(c, jeb, &bad_offset)) {
case -EAGAIN: goto refile;
case -EIO: goto filebad;
}
/* Write the erase complete marker */
jffs2_dbg(1, "Writing erased marker to block at 0x%08x\n", jeb->offset);
bad_offset = jeb->offset;
/* Cleanmarker in oob area or no cleanmarker at all ? */
if (jffs2_cleanmarker_oob(c) || c->cleanmarker_size == 0) {
if (jffs2_cleanmarker_oob(c)) {
if (jffs2_write_nand_cleanmarker(c, jeb))
goto filebad;
}
} else {
struct kvec vecs[1];
struct jffs2_unknown_node marker = {
.magic = cpu_to_je16(JFFS2_MAGIC_BITMASK),
.nodetype = cpu_to_je16(JFFS2_NODETYPE_CLEANMARKER),
.totlen = cpu_to_je32(c->cleanmarker_size)
};
jffs2_prealloc_raw_node_refs(c, jeb, 1);
marker.hdr_crc = cpu_to_je32(crc32(0, &marker, sizeof(struct jffs2_unknown_node)-4));
vecs[0].iov_base = (unsigned char *) &marker;
vecs[0].iov_len = sizeof(marker);
ret = jffs2_flash_direct_writev(c, vecs, 1, jeb->offset, &retlen);
if (ret || retlen != sizeof(marker)) {
if (ret)
pr_warn("Write clean marker to block at 0x%08x failed: %d\n",
jeb->offset, ret);
else
pr_warn("Short write to newly-erased block at 0x%08x: Wanted %zd, got %zd\n",
jeb->offset, sizeof(marker), retlen);
goto filebad;
}
}
/* Everything else got zeroed before the erase */
jeb->free_size = c->sector_size;
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
c->erasing_size -= c->sector_size;
c->free_size += c->sector_size;
/* Account for cleanmarker now, if it's in-band */
if (c->cleanmarker_size && !jffs2_cleanmarker_oob(c))
jffs2_link_node_ref(c, jeb, jeb->offset | REF_NORMAL, c->cleanmarker_size, NULL);
list_move_tail(&jeb->list, &c->free_list);
c->nr_erasing_blocks--;
c->nr_free_blocks++;
jffs2_dbg_acct_sanity_check_nolock(c, jeb);
jffs2_dbg_acct_paranoia_check_nolock(c, jeb);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
wake_up(&c->erase_wait);
return;
filebad:
jffs2_erase_failed(c, jeb, bad_offset);
return;
refile:
/* Stick it back on the list from whence it came and come back later */
mutex_lock(&c->erase_free_sem);
spin_lock(&c->erase_completion_lock);
jffs2_garbage_collect_trigger(c);
list_move(&jeb->list, &c->erase_complete_list);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->erase_free_sem);
return;
}

339
fs/jffs2/file.c Normal file

@ -0,0 +1,339 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/time.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/crc32.h>
#include <linux/jffs2.h>
#include "nodelist.h"
static int jffs2_write_end(struct file *filp, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *pg, void *fsdata);
static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
loff_t pos, unsigned len, unsigned flags,
struct page **pagep, void **fsdata);
static int jffs2_readpage (struct file *filp, struct page *pg);
int jffs2_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
{
struct inode *inode = filp->f_mapping->host;
struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
int ret;
ret = filemap_write_and_wait_range(inode->i_mapping, start, end);
if (ret)
return ret;
mutex_lock(&inode->i_mutex);
/* Trigger GC to flush any pending writes for this inode */
jffs2_flush_wbuf_gc(c, inode->i_ino);
mutex_unlock(&inode->i_mutex);
return 0;
}
const struct file_operations jffs2_file_operations =
{
.llseek = generic_file_llseek,
.open = generic_file_open,
.read = new_sync_read,
.read_iter = generic_file_read_iter,
.write = new_sync_write,
.write_iter = generic_file_write_iter,
.unlocked_ioctl=jffs2_ioctl,
.mmap = generic_file_readonly_mmap,
.fsync = jffs2_fsync,
.splice_read = generic_file_splice_read,
};
/* jffs2_file_inode_operations */
const struct inode_operations jffs2_file_inode_operations =
{
.get_acl = jffs2_get_acl,
.set_acl = jffs2_set_acl,
.setattr = jffs2_setattr,
.setxattr = jffs2_setxattr,
.getxattr = jffs2_getxattr,
.listxattr = jffs2_listxattr,
.removexattr = jffs2_removexattr
};
const struct address_space_operations jffs2_file_address_operations =
{
.readpage = jffs2_readpage,
.write_begin = jffs2_write_begin,
.write_end = jffs2_write_end,
};
static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg)
{
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
unsigned char *pg_buf;
int ret;
jffs2_dbg(2, "%s(): ino #%lu, page at offset 0x%lx\n",
__func__, inode->i_ino, pg->index << PAGE_CACHE_SHIFT);
BUG_ON(!PageLocked(pg));
pg_buf = kmap(pg);
/* FIXME: Can kmap fail? */
ret = jffs2_read_inode_range(c, f, pg_buf, pg->index << PAGE_CACHE_SHIFT, PAGE_CACHE_SIZE);
if (ret) {
ClearPageUptodate(pg);
SetPageError(pg);
} else {
SetPageUptodate(pg);
ClearPageError(pg);
}
flush_dcache_page(pg);
kunmap(pg);
jffs2_dbg(2, "readpage finished\n");
return ret;
}
int jffs2_do_readpage_unlock(struct inode *inode, struct page *pg)
{
int ret = jffs2_do_readpage_nolock(inode, pg);
unlock_page(pg);
return ret;
}
static int jffs2_readpage (struct file *filp, struct page *pg)
{
struct jffs2_inode_info *f = JFFS2_INODE_INFO(pg->mapping->host);
int ret;
mutex_lock(&f->sem);
ret = jffs2_do_readpage_unlock(pg->mapping->host, pg);
mutex_unlock(&f->sem);
return ret;
}
static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
loff_t pos, unsigned len, unsigned flags,
struct page **pagep, void **fsdata)
{
struct page *pg;
struct inode *inode = mapping->host;
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
struct jffs2_raw_inode ri;
uint32_t alloc_len = 0;
pgoff_t index = pos >> PAGE_CACHE_SHIFT;
uint32_t pageofs = index << PAGE_CACHE_SHIFT;
int ret = 0;
jffs2_dbg(1, "%s()\n", __func__);
if (pageofs > inode->i_size) {
ret = jffs2_reserve_space(c, sizeof(ri), &alloc_len,
ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
if (ret)
return ret;
}
mutex_lock(&f->sem);
pg = grab_cache_page_write_begin(mapping, index, flags);
if (!pg) {
if (alloc_len)
jffs2_complete_reservation(c);
mutex_unlock(&f->sem);
return -ENOMEM;
}
*pagep = pg;
if (alloc_len) {
/* Make new hole frag from old EOF to new page */
struct jffs2_full_dnode *fn;
jffs2_dbg(1, "Writing new hole frag 0x%x-0x%x between current EOF and new page\n",
(unsigned int)inode->i_size, pageofs);
memset(&ri, 0, sizeof(ri));
ri.magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
ri.nodetype = cpu_to_je16(JFFS2_NODETYPE_INODE);
ri.totlen = cpu_to_je32(sizeof(ri));
ri.hdr_crc = cpu_to_je32(crc32(0, &ri, sizeof(struct jffs2_unknown_node)-4));
ri.ino = cpu_to_je32(f->inocache->ino);
ri.version = cpu_to_je32(++f->highest_version);
ri.mode = cpu_to_jemode(inode->i_mode);
ri.uid = cpu_to_je16(i_uid_read(inode));
ri.gid = cpu_to_je16(i_gid_read(inode));
ri.isize = cpu_to_je32(max((uint32_t)inode->i_size, pageofs));
ri.atime = ri.ctime = ri.mtime = cpu_to_je32(get_seconds());
ri.offset = cpu_to_je32(inode->i_size);
ri.dsize = cpu_to_je32(pageofs - inode->i_size);
ri.csize = cpu_to_je32(0);
ri.compr = JFFS2_COMPR_ZERO;
ri.node_crc = cpu_to_je32(crc32(0, &ri, sizeof(ri)-8));
ri.data_crc = cpu_to_je32(0);
fn = jffs2_write_dnode(c, f, &ri, NULL, 0, ALLOC_NORMAL);
if (IS_ERR(fn)) {
ret = PTR_ERR(fn);
jffs2_complete_reservation(c);
goto out_page;
}
ret = jffs2_add_full_dnode_to_inode(c, f, fn);
if (f->metadata) {
jffs2_mark_node_obsolete(c, f->metadata->raw);
jffs2_free_full_dnode(f->metadata);
f->metadata = NULL;
}
if (ret) {
jffs2_dbg(1, "Eep. add_full_dnode_to_inode() failed in write_begin, returned %d\n",
ret);
jffs2_mark_node_obsolete(c, fn->raw);
jffs2_free_full_dnode(fn);
jffs2_complete_reservation(c);
goto out_page;
}
jffs2_complete_reservation(c);
inode->i_size = pageofs;
}
/*
* Read in the page if it wasn't already present. Cannot optimize away
* the whole page write case until jffs2_write_end can handle the
* case of a short-copy.
*/
if (!PageUptodate(pg)) {
ret = jffs2_do_readpage_nolock(inode, pg);
if (ret)
goto out_page;
}
mutex_unlock(&f->sem);
jffs2_dbg(1, "end write_begin(). pg->flags %lx\n", pg->flags);
return ret;
out_page:
unlock_page(pg);
page_cache_release(pg);
mutex_unlock(&f->sem);
return ret;
}
static int jffs2_write_end(struct file *filp, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *pg, void *fsdata)
{
/* Actually commit the write from the page cache page we're looking at.
* For now, we write the full page out each time. It sucks, but it's simple
*/
struct inode *inode = mapping->host;
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
struct jffs2_raw_inode *ri;
unsigned start = pos & (PAGE_CACHE_SIZE - 1);
unsigned end = start + copied;
unsigned aligned_start = start & ~3;
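/* aligned_start rounds the region we actually write down to a 4-byte
boundary; the padding bytes between aligned_start and start are written
out but subtracted from writtenlen again further down. */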
int ret = 0;
uint32_t writtenlen = 0;
jffs2_dbg(1, "%s(): ino #%lu, page at 0x%lx, range %d-%d, flags %lx\n",
__func__, inode->i_ino, pg->index << PAGE_CACHE_SHIFT,
start, end, pg->flags);
/* We need to avoid deadlock with page_cache_read() in
jffs2_garbage_collect_pass(). So the page must be
up to date to prevent page_cache_read() from trying
to re-lock it. */
BUG_ON(!PageUptodate(pg));
if (end == PAGE_CACHE_SIZE) {
/* When writing out the end of a page, write out the
_whole_ page. This helps to reduce the number of
nodes in files which have many short writes, like
syslog files. */
aligned_start = 0;
}
ri = jffs2_alloc_raw_inode();
if (!ri) {
jffs2_dbg(1, "%s(): Allocation of raw inode failed\n",
__func__);
unlock_page(pg);
page_cache_release(pg);
return -ENOMEM;
}
/* Set the fields that the generic jffs2_write_inode_range() code can't find */
ri->ino = cpu_to_je32(inode->i_ino);
ri->mode = cpu_to_jemode(inode->i_mode);
ri->uid = cpu_to_je16(i_uid_read(inode));
ri->gid = cpu_to_je16(i_gid_read(inode));
ri->isize = cpu_to_je32((uint32_t)inode->i_size);
ri->atime = ri->ctime = ri->mtime = cpu_to_je32(get_seconds());
/* In 2.4, it was already kmapped by generic_file_write(). Doesn't
hurt to do it again. The alternative is ifdefs, which are ugly. */
kmap(pg);
ret = jffs2_write_inode_range(c, f, ri, page_address(pg) + aligned_start,
(pg->index << PAGE_CACHE_SHIFT) + aligned_start,
end - aligned_start, &writtenlen);
kunmap(pg);
if (ret) {
/* There was an error writing. */
SetPageError(pg);
}
/* Adjust writtenlen for the padding we did, so we don't confuse our caller */
writtenlen -= min(writtenlen, (start - aligned_start));
if (writtenlen) {
if (inode->i_size < pos + writtenlen) {
inode->i_size = pos + writtenlen;
inode->i_blocks = (inode->i_size + 511) >> 9;
inode->i_ctime = inode->i_mtime = ITIME(je32_to_cpu(ri->ctime));
}
}
jffs2_free_raw_inode(ri);
if (start+writtenlen < end) {
/* generic_file_write has written more to the page cache than we've
actually written to the medium. Mark the page !Uptodate so that
it gets reread */
jffs2_dbg(1, "%s(): Not all bytes written. Marking page !uptodate\n",
__func__);
SetPageError(pg);
ClearPageUptodate(pg);
}
jffs2_dbg(1, "%s() returning %d\n",
__func__, writtenlen > 0 ? writtenlen : ret);
unlock_page(pg);
page_cache_release(pg);
return writtenlen > 0 ? writtenlen : ret;
}

766
fs/jffs2/fs.c Normal file

@ -0,0 +1,766 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/capability.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/mtd/mtd.h>
#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/vfs.h>
#include <linux/crc32.h>
#include "nodelist.h"
static int jffs2_flash_setup(struct jffs2_sb_info *c);
int jffs2_do_setattr (struct inode *inode, struct iattr *iattr)
{
struct jffs2_full_dnode *old_metadata, *new_metadata;
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
struct jffs2_raw_inode *ri;
union jffs2_device_node dev;
unsigned char *mdata = NULL;
int mdatalen = 0;
unsigned int ivalid;
uint32_t alloclen;
int ret;
int alloc_type = ALLOC_NORMAL;
jffs2_dbg(1, "%s(): ino #%lu\n", __func__, inode->i_ino);
/* Special cases - we don't want more than one data node
for these types on the medium at any time. So setattr
must read the original data associated with the node
(i.e. the device numbers or the target name) and write
it out again with the appropriate data attached */
if (S_ISBLK(inode->i_mode) || S_ISCHR(inode->i_mode)) {
/* For these, we don't actually need to read the old node */
mdatalen = jffs2_encode_dev(&dev, inode->i_rdev);
mdata = (char *)&dev;
jffs2_dbg(1, "%s(): Writing %d bytes of kdev_t\n",
__func__, mdatalen);
} else if (S_ISLNK(inode->i_mode)) {
mutex_lock(&f->sem);
mdatalen = f->metadata->size;
mdata = kmalloc(f->metadata->size, GFP_USER);
if (!mdata) {
mutex_unlock(&f->sem);
return -ENOMEM;
}
ret = jffs2_read_dnode(c, f, f->metadata, mdata, 0, mdatalen);
if (ret) {
mutex_unlock(&f->sem);
kfree(mdata);
return ret;
}
mutex_unlock(&f->sem);
jffs2_dbg(1, "%s(): Writing %d bytes of symlink target\n",
__func__, mdatalen);
}
ri = jffs2_alloc_raw_inode();
if (!ri) {
if (S_ISLNK(inode->i_mode))
kfree(mdata);
return -ENOMEM;
}
ret = jffs2_reserve_space(c, sizeof(*ri) + mdatalen, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
if (ret) {
jffs2_free_raw_inode(ri);
if (S_ISLNK(inode->i_mode))
kfree(mdata);
return ret;
}
mutex_lock(&f->sem);
ivalid = iattr->ia_valid;
ri->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
ri->nodetype = cpu_to_je16(JFFS2_NODETYPE_INODE);
ri->totlen = cpu_to_je32(sizeof(*ri) + mdatalen);
ri->hdr_crc = cpu_to_je32(crc32(0, ri, sizeof(struct jffs2_unknown_node)-4));
ri->ino = cpu_to_je32(inode->i_ino);
ri->version = cpu_to_je32(++f->highest_version);
ri->uid = cpu_to_je16((ivalid & ATTR_UID)?
from_kuid(&init_user_ns, iattr->ia_uid):i_uid_read(inode));
ri->gid = cpu_to_je16((ivalid & ATTR_GID)?
from_kgid(&init_user_ns, iattr->ia_gid):i_gid_read(inode));
if (ivalid & ATTR_MODE)
ri->mode = cpu_to_jemode(iattr->ia_mode);
else
ri->mode = cpu_to_jemode(inode->i_mode);
ri->isize = cpu_to_je32((ivalid & ATTR_SIZE)?iattr->ia_size:inode->i_size);
ri->atime = cpu_to_je32(I_SEC((ivalid & ATTR_ATIME)?iattr->ia_atime:inode->i_atime));
ri->mtime = cpu_to_je32(I_SEC((ivalid & ATTR_MTIME)?iattr->ia_mtime:inode->i_mtime));
ri->ctime = cpu_to_je32(I_SEC((ivalid & ATTR_CTIME)?iattr->ia_ctime:inode->i_ctime));
ri->offset = cpu_to_je32(0);
ri->csize = ri->dsize = cpu_to_je32(mdatalen);
ri->compr = JFFS2_COMPR_NONE;
if (ivalid & ATTR_SIZE && inode->i_size < iattr->ia_size) {
/* It's an extension. Make it a hole node */
ri->compr = JFFS2_COMPR_ZERO;
ri->dsize = cpu_to_je32(iattr->ia_size - inode->i_size);
ri->offset = cpu_to_je32(inode->i_size);
} else if (ivalid & ATTR_SIZE && !iattr->ia_size) {
/* For truncate-to-zero, treat it as deletion because
it'll always be obsoleting all previous nodes */
alloc_type = ALLOC_DELETION;
}
ri->node_crc = cpu_to_je32(crc32(0, ri, sizeof(*ri)-8));
if (mdatalen)
ri->data_crc = cpu_to_je32(crc32(0, mdata, mdatalen));
else
ri->data_crc = cpu_to_je32(0);
new_metadata = jffs2_write_dnode(c, f, ri, mdata, mdatalen, alloc_type);
if (S_ISLNK(inode->i_mode))
kfree(mdata);
if (IS_ERR(new_metadata)) {
jffs2_complete_reservation(c);
jffs2_free_raw_inode(ri);
mutex_unlock(&f->sem);
return PTR_ERR(new_metadata);
}
/* It worked. Update the inode */
inode->i_atime = ITIME(je32_to_cpu(ri->atime));
inode->i_ctime = ITIME(je32_to_cpu(ri->ctime));
inode->i_mtime = ITIME(je32_to_cpu(ri->mtime));
inode->i_mode = jemode_to_cpu(ri->mode);
i_uid_write(inode, je16_to_cpu(ri->uid));
i_gid_write(inode, je16_to_cpu(ri->gid));
old_metadata = f->metadata;
if (ivalid & ATTR_SIZE && inode->i_size > iattr->ia_size)
jffs2_truncate_fragtree (c, &f->fragtree, iattr->ia_size);
if (ivalid & ATTR_SIZE && inode->i_size < iattr->ia_size) {
jffs2_add_full_dnode_to_inode(c, f, new_metadata);
inode->i_size = iattr->ia_size;
inode->i_blocks = (inode->i_size + 511) >> 9;
f->metadata = NULL;
} else {
f->metadata = new_metadata;
}
if (old_metadata) {
jffs2_mark_node_obsolete(c, old_metadata->raw);
jffs2_free_full_dnode(old_metadata);
}
jffs2_free_raw_inode(ri);
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
/* We have to do the truncate_setsize() without f->sem held, since
some pages may be locked and waiting for it in readpage().
We are protected from a simultaneous write() extending i_size
back past iattr->ia_size, because do_truncate() holds the
generic inode semaphore. */
if (ivalid & ATTR_SIZE && inode->i_size > iattr->ia_size) {
truncate_setsize(inode, iattr->ia_size);
inode->i_blocks = (inode->i_size + 511) >> 9;
}
return 0;
}
int jffs2_setattr(struct dentry *dentry, struct iattr *iattr)
{
struct inode *inode = dentry->d_inode;
int rc;
rc = inode_change_ok(inode, iattr);
if (rc)
return rc;
rc = jffs2_do_setattr(inode, iattr);
if (!rc && (iattr->ia_valid & ATTR_MODE))
rc = posix_acl_chmod(inode, inode->i_mode);
return rc;
}
int jffs2_statfs(struct dentry *dentry, struct kstatfs *buf)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(dentry->d_sb);
unsigned long avail;
buf->f_type = JFFS2_SUPER_MAGIC;
buf->f_bsize = 1 << PAGE_SHIFT;
buf->f_blocks = c->flash_size >> PAGE_SHIFT;
buf->f_files = 0;
buf->f_ffree = 0;
buf->f_namelen = JFFS2_MAX_NAME_LEN;
buf->f_fsid.val[0] = JFFS2_SUPER_MAGIC;
buf->f_fsid.val[1] = c->mtd->index;
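/* Available space is whatever is dirty (reclaimable by GC) plus
genuinely free, less the blocks reserved for normal writes; it is
reported below in f_bsize (page-sized) units. */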
spin_lock(&c->erase_completion_lock);
avail = c->dirty_size + c->free_size;
if (avail > c->sector_size * c->resv_blocks_write)
avail -= c->sector_size * c->resv_blocks_write;
else
avail = 0;
spin_unlock(&c->erase_completion_lock);
buf->f_bavail = buf->f_bfree = avail >> PAGE_SHIFT;
return 0;
}
void jffs2_evict_inode (struct inode *inode)
{
/* We can forget about this inode for now - drop all
* the nodelists associated with it, etc.
*/
struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
jffs2_dbg(1, "%s(): ino #%lu mode %o\n",
__func__, inode->i_ino, inode->i_mode);
truncate_inode_pages_final(&inode->i_data);
clear_inode(inode);
jffs2_do_clear_inode(c, f);
}
struct inode *jffs2_iget(struct super_block *sb, unsigned long ino)
{
struct jffs2_inode_info *f;
struct jffs2_sb_info *c;
struct jffs2_raw_inode latest_node;
union jffs2_device_node jdev;
struct inode *inode;
dev_t rdev = 0;
int ret;
jffs2_dbg(1, "%s(): ino == %lu\n", __func__, ino);
inode = iget_locked(sb, ino);
if (!inode)
return ERR_PTR(-ENOMEM);
if (!(inode->i_state & I_NEW))
return inode;
f = JFFS2_INODE_INFO(inode);
c = JFFS2_SB_INFO(inode->i_sb);
jffs2_init_inode_info(f);
mutex_lock(&f->sem);
ret = jffs2_do_read_inode(c, f, inode->i_ino, &latest_node);
if (ret) {
mutex_unlock(&f->sem);
iget_failed(inode);
return ERR_PTR(ret);
}
inode->i_mode = jemode_to_cpu(latest_node.mode);
i_uid_write(inode, je16_to_cpu(latest_node.uid));
i_gid_write(inode, je16_to_cpu(latest_node.gid));
inode->i_size = je32_to_cpu(latest_node.isize);
inode->i_atime = ITIME(je32_to_cpu(latest_node.atime));
inode->i_mtime = ITIME(je32_to_cpu(latest_node.mtime));
inode->i_ctime = ITIME(je32_to_cpu(latest_node.ctime));
set_nlink(inode, f->inocache->pino_nlink);
inode->i_blocks = (inode->i_size + 511) >> 9;
switch (inode->i_mode & S_IFMT) {
case S_IFLNK:
inode->i_op = &jffs2_symlink_inode_operations;
break;
case S_IFDIR:
{
struct jffs2_full_dirent *fd;
set_nlink(inode, 2); /* parent and '.' */
for (fd=f->dents; fd; fd = fd->next) {
if (fd->type == DT_DIR && fd->ino)
inc_nlink(inode);
}
/* Root dir gets i_nlink 3 for some reason */
if (inode->i_ino == 1)
inc_nlink(inode);
inode->i_op = &jffs2_dir_inode_operations;
inode->i_fop = &jffs2_dir_operations;
break;
}
case S_IFREG:
inode->i_op = &jffs2_file_inode_operations;
inode->i_fop = &jffs2_file_operations;
inode->i_mapping->a_ops = &jffs2_file_address_operations;
inode->i_mapping->nrpages = 0;
break;
case S_IFBLK:
case S_IFCHR:
/* Read the device numbers from the media */
if (f->metadata->size != sizeof(jdev.old_id) &&
f->metadata->size != sizeof(jdev.new_id)) {
pr_notice("Device node has strange size %d\n",
f->metadata->size);
goto error_io;
}
jffs2_dbg(1, "Reading device numbers from flash\n");
ret = jffs2_read_dnode(c, f, f->metadata, (char *)&jdev, 0, f->metadata->size);
if (ret < 0) {
/* Eep */
pr_notice("Read device numbers for inode %lu failed\n",
(unsigned long)inode->i_ino);
goto error;
}
if (f->metadata->size == sizeof(jdev.old_id))
rdev = old_decode_dev(je16_to_cpu(jdev.old_id));
else
rdev = new_decode_dev(je32_to_cpu(jdev.new_id));
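/* fall through: all special inodes share init_special_inode() below */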
case S_IFSOCK:
case S_IFIFO:
inode->i_op = &jffs2_file_inode_operations;
init_special_inode(inode, inode->i_mode, rdev);
break;
default:
pr_warn("%s(): Bogus i_mode %o for ino %lu\n",
__func__, inode->i_mode, (unsigned long)inode->i_ino);
}
mutex_unlock(&f->sem);
jffs2_dbg(1, "jffs2_read_inode() returning\n");
unlock_new_inode(inode);
return inode;
error_io:
ret = -EIO;
error:
mutex_unlock(&f->sem);
jffs2_do_clear_inode(c, f);
iget_failed(inode);
return ERR_PTR(ret);
}
void jffs2_dirty_inode(struct inode *inode, int flags)
{
struct iattr iattr;
if (!(inode->i_state & I_DIRTY_DATASYNC)) {
jffs2_dbg(2, "%s(): not calling setattr() for ino #%lu\n",
__func__, inode->i_ino);
return;
}
jffs2_dbg(1, "%s(): calling setattr() for ino #%lu\n",
__func__, inode->i_ino);
iattr.ia_valid = ATTR_MODE|ATTR_UID|ATTR_GID|ATTR_ATIME|ATTR_MTIME|ATTR_CTIME;
iattr.ia_mode = inode->i_mode;
iattr.ia_uid = inode->i_uid;
iattr.ia_gid = inode->i_gid;
iattr.ia_atime = inode->i_atime;
iattr.ia_mtime = inode->i_mtime;
iattr.ia_ctime = inode->i_ctime;
jffs2_do_setattr(inode, &iattr);
}
int jffs2_do_remount_fs(struct super_block *sb, int *flags, char *data)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(sb);
if (c->flags & JFFS2_SB_FLAG_RO && !(sb->s_flags & MS_RDONLY))
return -EROFS;
/* We stop if it was running, then restart if it needs to.
This also catches the case where it was stopped and this
is just a remount to restart it.
Flush the writebuffer, if necessary, else we lose it */
if (!(sb->s_flags & MS_RDONLY)) {
jffs2_stop_garbage_collect_thread(c);
mutex_lock(&c->alloc_sem);
jffs2_flush_wbuf_pad(c);
mutex_unlock(&c->alloc_sem);
}
if (!(*flags & MS_RDONLY))
jffs2_start_garbage_collect_thread(c);
*flags |= MS_NOATIME;
return 0;
}
/* jffs2_new_inode: allocate a new inode and inocache, add it to the hash,
fill in the raw_inode while you're at it. */
struct inode *jffs2_new_inode (struct inode *dir_i, umode_t mode, struct jffs2_raw_inode *ri)
{
struct inode *inode;
struct super_block *sb = dir_i->i_sb;
struct jffs2_sb_info *c;
struct jffs2_inode_info *f;
int ret;
jffs2_dbg(1, "%s(): dir_i %ld, mode 0x%x\n",
__func__, dir_i->i_ino, mode);
c = JFFS2_SB_INFO(sb);
inode = new_inode(sb);
if (!inode)
return ERR_PTR(-ENOMEM);
f = JFFS2_INODE_INFO(inode);
jffs2_init_inode_info(f);
mutex_lock(&f->sem);
memset(ri, 0, sizeof(*ri));
/* Set OS-specific defaults for new inodes */
ri->uid = cpu_to_je16(from_kuid(&init_user_ns, current_fsuid()));
if (dir_i->i_mode & S_ISGID) {
ri->gid = cpu_to_je16(i_gid_read(dir_i));
if (S_ISDIR(mode))
mode |= S_ISGID;
} else {
ri->gid = cpu_to_je16(from_kgid(&init_user_ns, current_fsgid()));
}
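/* As usual for SGID directories: new inodes inherit the directory's
group, and new subdirectories also inherit the SGID bit itself. */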
/* POSIX ACLs have to be processed now, at least partly.
The umask is only applied if there's no default ACL */
ret = jffs2_init_acl_pre(dir_i, inode, &mode);
if (ret) {
mutex_unlock(&f->sem);
make_bad_inode(inode);
iput(inode);
return ERR_PTR(ret);
}
ret = jffs2_do_new_inode (c, f, mode, ri);
if (ret) {
mutex_unlock(&f->sem);
make_bad_inode(inode);
iput(inode);
return ERR_PTR(ret);
}
set_nlink(inode, 1);
inode->i_ino = je32_to_cpu(ri->ino);
inode->i_mode = jemode_to_cpu(ri->mode);
i_gid_write(inode, je16_to_cpu(ri->gid));
i_uid_write(inode, je16_to_cpu(ri->uid));
inode->i_atime = inode->i_ctime = inode->i_mtime = CURRENT_TIME_SEC;
ri->atime = ri->mtime = ri->ctime = cpu_to_je32(I_SEC(inode->i_mtime));
inode->i_blocks = 0;
inode->i_size = 0;
if (insert_inode_locked(inode) < 0) {
mutex_unlock(&f->sem);
make_bad_inode(inode);
iput(inode);
return ERR_PTR(-EINVAL);
}
return inode;
}
static int calculate_inocache_hashsize(uint32_t flash_size)
{
/*
* Pick an inocache hash size based on the size of the medium.
* Count how many megabytes we're dealing with, use twice that as the
* number of hash buckets, rounded down to a multiple of 64, and keep
* it within sensible bounds.
*/
int size_mb = flash_size / 1024 / 1024;
int hashsize = (size_mb * 2) & ~0x3f;
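/* e.g. a 128MiB medium gives 256 buckets; anything under 32MiB
computes to 0 here and is clamped to INOCACHE_HASHSIZE_MIN below. */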
if (hashsize < INOCACHE_HASHSIZE_MIN)
return INOCACHE_HASHSIZE_MIN;
if (hashsize > INOCACHE_HASHSIZE_MAX)
return INOCACHE_HASHSIZE_MAX;
return hashsize;
}
int jffs2_do_fill_super(struct super_block *sb, void *data, int silent)
{
struct jffs2_sb_info *c;
struct inode *root_i;
int ret;
size_t blocks;
c = JFFS2_SB_INFO(sb);
/* Do not support MLC NAND */
if (c->mtd->type == MTD_MLCNANDFLASH)
return -EINVAL;
#ifndef CONFIG_JFFS2_FS_WRITEBUFFER
if (c->mtd->type == MTD_NANDFLASH) {
pr_err("Cannot operate on NAND flash unless jffs2 NAND support is compiled in\n");
return -EINVAL;
}
if (c->mtd->type == MTD_DATAFLASH) {
pr_err("Cannot operate on DataFlash unless jffs2 DataFlash support is compiled in\n");
return -EINVAL;
}
#endif
c->flash_size = c->mtd->size;
c->sector_size = c->mtd->erasesize;
blocks = c->flash_size / c->sector_size;
/*
* Size alignment check
*/
if ((c->sector_size * blocks) != c->flash_size) {
c->flash_size = c->sector_size * blocks;
pr_info("Flash size not aligned to erasesize, reducing to %dKiB\n",
c->flash_size / 1024);
}
if (c->flash_size < 5*c->sector_size) {
pr_err("Too few erase blocks (%d)\n",
c->flash_size / c->sector_size);
return -EINVAL;
}
c->cleanmarker_size = sizeof(struct jffs2_unknown_node);
/* NAND (or other bizarre) flash... do setup accordingly */
ret = jffs2_flash_setup(c);
if (ret)
return ret;
c->inocache_hashsize = calculate_inocache_hashsize(c->flash_size);
c->inocache_list = kcalloc(c->inocache_hashsize, sizeof(struct jffs2_inode_cache *), GFP_KERNEL);
if (!c->inocache_list) {
ret = -ENOMEM;
goto out_wbuf;
}
jffs2_init_xattr_subsystem(c);
if ((ret = jffs2_do_mount_fs(c)))
goto out_inohash;
jffs2_dbg(1, "%s(): Getting root inode\n", __func__);
root_i = jffs2_iget(sb, 1);
if (IS_ERR(root_i)) {
jffs2_dbg(1, "get root inode failed\n");
ret = PTR_ERR(root_i);
goto out_root;
}
ret = -ENOMEM;
jffs2_dbg(1, "%s(): d_make_root()\n", __func__);
sb->s_root = d_make_root(root_i);
if (!sb->s_root)
goto out_root;
sb->s_maxbytes = 0xFFFFFFFF;
sb->s_blocksize = PAGE_CACHE_SIZE;
sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
sb->s_magic = JFFS2_SUPER_MAGIC;
if (!(sb->s_flags & MS_RDONLY))
jffs2_start_garbage_collect_thread(c);
return 0;
out_root:
jffs2_free_ino_caches(c);
jffs2_free_raw_node_refs(c);
if (jffs2_blocks_use_vmalloc(c))
vfree(c->blocks);
else
kfree(c->blocks);
out_inohash:
jffs2_clear_xattr_subsystem(c);
kfree(c->inocache_list);
out_wbuf:
jffs2_flash_cleanup(c);
return ret;
}
void jffs2_gc_release_inode(struct jffs2_sb_info *c,
struct jffs2_inode_info *f)
{
iput(OFNI_EDONI_2SFFJ(f));
}
struct jffs2_inode_info *jffs2_gc_fetch_inode(struct jffs2_sb_info *c,
int inum, int unlinked)
{
struct inode *inode;
struct jffs2_inode_cache *ic;
if (unlinked) {
/* The inode has zero nlink but its nodes weren't yet marked
obsolete. This has to be because we're still waiting for
the final (close() and) iput() to happen.
There's a possibility that the final iput() could have
happened while we were contemplating. In order to ensure
that we don't cause a new read_inode() (which would fail)
for the inode in question, we use ilookup() in this case
instead of iget().
The nlink can't _become_ zero at this point because we're
holding the alloc_sem, and jffs2_do_unlink() would also
need that while decrementing nlink on any inode.
*/
inode = ilookup(OFNI_BS_2SFFJ(c), inum);
if (!inode) {
jffs2_dbg(1, "ilookup() failed for ino #%u; inode is probably deleted.\n",
inum);
spin_lock(&c->inocache_lock);
ic = jffs2_get_ino_cache(c, inum);
if (!ic) {
jffs2_dbg(1, "Inode cache for ino #%u is gone\n",
inum);
spin_unlock(&c->inocache_lock);
return NULL;
}
if (ic->state != INO_STATE_CHECKEDABSENT) {
/* Wait for progress. Don't just loop */
jffs2_dbg(1, "Waiting for ino #%u in state %d\n",
ic->ino, ic->state);
sleep_on_spinunlock(&c->inocache_wq, &c->inocache_lock);
} else {
spin_unlock(&c->inocache_lock);
}
return NULL;
}
} else {
/* Inode has links to it still; they're not going away because
jffs2_do_unlink() would need the alloc_sem and we have it.
Just iget() it, and if read_inode() is necessary that's OK.
*/
inode = jffs2_iget(OFNI_BS_2SFFJ(c), inum);
if (IS_ERR(inode))
return ERR_CAST(inode);
}
if (is_bad_inode(inode)) {
pr_notice("Eep. read_inode() failed for ino #%u. unlinked %d\n",
inum, unlinked);
/* NB. This will happen again. We need to do something appropriate here. */
iput(inode);
return ERR_PTR(-EIO);
}
return JFFS2_INODE_INFO(inode);
}
unsigned char *jffs2_gc_fetch_page(struct jffs2_sb_info *c,
struct jffs2_inode_info *f,
unsigned long offset,
unsigned long *priv)
{
struct inode *inode = OFNI_EDONI_2SFFJ(f);
struct page *pg;
pg = read_cache_page(inode->i_mapping, offset >> PAGE_CACHE_SHIFT,
(void *)jffs2_do_readpage_unlock, inode);
if (IS_ERR(pg))
return (void *)pg;
*priv = (unsigned long)pg;
return kmap(pg);
}
void jffs2_gc_release_page(struct jffs2_sb_info *c,
unsigned char *ptr,
unsigned long *priv)
{
struct page *pg = (void *)*priv;
kunmap(pg);
page_cache_release(pg);
}
static int jffs2_flash_setup(struct jffs2_sb_info *c) {
int ret = 0;
if (jffs2_cleanmarker_oob(c)) {
/* NAND flash... do setup accordingly */
ret = jffs2_nand_flash_setup(c);
if (ret)
return ret;
}
/* and Dataflash */
if (jffs2_dataflash(c)) {
ret = jffs2_dataflash_setup(c);
if (ret)
return ret;
}
/* and Intel "Sibley" flash */
if (jffs2_nor_wbuf_flash(c)) {
ret = jffs2_nor_wbuf_flash_setup(c);
if (ret)
return ret;
}
/* and a UBI volume */
if (jffs2_ubivol(c)) {
ret = jffs2_ubivol_setup(c);
if (ret)
return ret;
}
return ret;
}
void jffs2_flash_cleanup(struct jffs2_sb_info *c) {
if (jffs2_cleanmarker_oob(c)) {
jffs2_nand_flash_cleanup(c);
}
/* and DataFlash */
if (jffs2_dataflash(c)) {
jffs2_dataflash_cleanup(c);
}
/* and Intel "Sibley" flash */
if (jffs2_nor_wbuf_flash(c)) {
jffs2_nor_wbuf_flash_cleanup(c);
}
/* and a UBI volume */
if (jffs2_ubivol(c)) {
jffs2_ubivol_cleanup(c);
}
}

1378
fs/jffs2/gc.c Normal file

File diff suppressed because it is too large

22
fs/jffs2/ioctl.c Normal file

@ -0,0 +1,22 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#include <linux/fs.h>
#include "nodelist.h"
long jffs2_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
/* Later, this will provide for lsattr.jffs2 and chattr.jffs2, which
will include compression support etc. */
return -ENOTTY;
}

56
fs/jffs2/jffs2_fs_i.h Normal file

@ -0,0 +1,56 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef _JFFS2_FS_I
#define _JFFS2_FS_I
#include <linux/rbtree.h>
#include <linux/posix_acl.h>
#include <linux/mutex.h>
struct jffs2_inode_info {
/* We need an internal mutex similar to inode->i_mutex.
Unfortunately, we can't use the existing one, because
either the GC would deadlock, or we'd have to release it
before letting GC proceed. Or we'd have to put ugliness
into the GC code so it didn't attempt to obtain the i_mutex
for the inode(s) which are already locked */
struct mutex sem;
/* The highest (datanode) version number used for this ino */
uint32_t highest_version;
/* List of data fragments which make up the file */
struct rb_root fragtree;
/* There may be one datanode which isn't referenced by any of the
above fragments, if it contains a metadata update but no actual
data - or if this is a directory inode */
/* This also holds the _only_ dnode for symlinks/device nodes,
etc. */
struct jffs2_full_dnode *metadata;
/* Directory entries */
struct jffs2_full_dirent *dents;
/* The target path if this is the inode of a symlink */
unsigned char *target;
/* Some stuff we just have to keep in-core at all times, for each inode. */
struct jffs2_inode_cache *inocache;
uint16_t flags;
uint8_t usercompr;
struct inode vfs_inode;
};
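/* Editorial sketch, not part of the original header: because the VFS inode is
embedded as 'vfs_inode' above, JFFS2 recovers its private per-inode data with
container_of(). The real helpers (JFFS2_INODE_INFO(), OFNI_EDONI_2SFFJ()) live
in os-linux.h, which is not shown in this diff; the name below is illustrative
only and assumes <linux/fs.h> and container_of() are visible here. */
static inline struct jffs2_inode_info *example_jffs2_inode_info(struct inode *inode)
{
return container_of(inode, struct jffs2_inode_info, vfs_inode);
}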
#endif /* _JFFS2_FS_I */

162
fs/jffs2/jffs2_fs_sb.h Normal file
View file

@ -0,0 +1,162 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
* Copyright © 2004-2010 David Woodhouse <dwmw2@infradead.org>
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef _JFFS2_FS_SB
#define _JFFS2_FS_SB
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/mutex.h>
#include <linux/timer.h>
#include <linux/wait.h>
#include <linux/list.h>
#include <linux/rwsem.h>
#define JFFS2_SB_FLAG_RO 1
#define JFFS2_SB_FLAG_SCANNING 2 /* Flash scanning is in progress */
#define JFFS2_SB_FLAG_BUILDING 4 /* File system building is in progress */
struct jffs2_inodirty;
struct jffs2_mount_opts {
bool override_compr;
unsigned int compr;
/* The size of the reserved pool. The reserved pool is the JFFS2 flash
* space which may only be used by root and cannot be used by other
* users. This is implemented simply by not allowing those users to
* write to the file system if the amount of available space is less
* than 'rp_size'. */
unsigned int rp_size;
};
/* A struct for the overall file system control. Pointers to
jffs2_sb_info structs are named `c' in the source code.
Nee jffs_control
*/
struct jffs2_sb_info {
struct mtd_info *mtd;
uint32_t highest_ino;
uint32_t checked_ino;
unsigned int flags;
struct task_struct *gc_task; /* GC task struct */
struct completion gc_thread_start; /* GC thread start completion */
struct completion gc_thread_exit; /* GC thread exit completion port */
struct mutex alloc_sem; /* Used to protect all the following
fields, and also to protect against
out-of-order writing of nodes. And GC. */
uint32_t cleanmarker_size; /* Size of an _inline_ CLEANMARKER
(i.e. zero for OOB CLEANMARKER) */
uint32_t flash_size;
uint32_t used_size;
uint32_t dirty_size;
uint32_t wasted_size;
uint32_t free_size;
uint32_t erasing_size;
uint32_t bad_size;
uint32_t sector_size;
uint32_t unchecked_size;
uint32_t nr_free_blocks;
uint32_t nr_erasing_blocks;
/* Number of free blocks there must be before we... */
uint8_t resv_blocks_write; /* ... allow a normal filesystem write */
uint8_t resv_blocks_deletion; /* ... allow a normal filesystem deletion */
uint8_t resv_blocks_gctrigger; /* ... wake up the GC thread */
uint8_t resv_blocks_gcbad; /* ... pick a block from the bad_list to GC */
uint8_t resv_blocks_gcmerge; /* ... merge pages when garbage collecting */
/* Number of 'very dirty' blocks before we trigger immediate GC */
uint8_t vdirty_blocks_gctrigger;
uint32_t nospc_dirty_size;
uint32_t nr_blocks;
struct jffs2_eraseblock *blocks; /* The whole array of blocks. Used for getting blocks
* from the offset (blocks[ofs / sector_size]) */
struct jffs2_eraseblock *nextblock; /* The block we're currently filling */
struct jffs2_eraseblock *gcblock; /* The block we're currently garbage-collecting */
struct list_head clean_list; /* Blocks 100% full of clean data */
struct list_head very_dirty_list; /* Blocks with lots of dirty space */
struct list_head dirty_list; /* Blocks with some dirty space */
struct list_head erasable_list; /* Blocks which are completely dirty, and need erasing */
struct list_head erasable_pending_wbuf_list; /* Blocks which need erasing but only after the current wbuf is flushed */
struct list_head erasing_list; /* Blocks which are currently erasing */
struct list_head erase_checking_list; /* Blocks which are being checked and marked */
struct list_head erase_pending_list; /* Blocks which need erasing now */
struct list_head erase_complete_list; /* Blocks which are erased and need the clean marker written to them */
struct list_head free_list; /* Blocks which are free and ready to be used */
struct list_head bad_list; /* Bad blocks. */
struct list_head bad_used_list; /* Bad blocks with valid data in. */
spinlock_t erase_completion_lock; /* Protect free_list and erasing_list
against erase completion handler */
wait_queue_head_t erase_wait; /* For waiting for erases to complete */
wait_queue_head_t inocache_wq;
int inocache_hashsize;
struct jffs2_inode_cache **inocache_list;
spinlock_t inocache_lock;
/* Sem to allow jffs2_garbage_collect_deletion_dirent to
drop the erase_completion_lock while it's holding a pointer
to an obsoleted node. I don't like this. Alternatives welcomed. */
struct mutex erase_free_sem;
uint32_t wbuf_pagesize; /* 0 for NOR and other flashes with no wbuf */
#ifdef CONFIG_JFFS2_FS_WBUF_VERIFY
unsigned char *wbuf_verify; /* read-back buffer for verification */
#endif
#ifdef CONFIG_JFFS2_FS_WRITEBUFFER
unsigned char *wbuf; /* Write-behind buffer for NAND flash */
uint32_t wbuf_ofs;
uint32_t wbuf_len;
struct jffs2_inodirty *wbuf_inodes;
struct rw_semaphore wbuf_sem; /* Protects the write buffer */
struct delayed_work wbuf_dwork; /* write-buffer write-out work */
unsigned char *oobbuf;
int oobavail; /* How many bytes are available for JFFS2 in OOB */
#endif
struct jffs2_summary *summary; /* Summary information */
struct jffs2_mount_opts mount_opts;
#ifdef CONFIG_JFFS2_FS_XATTR
#define XATTRINDEX_HASHSIZE (57)
uint32_t highest_xid;
uint32_t highest_xseqno;
struct list_head xattrindex[XATTRINDEX_HASHSIZE];
struct list_head xattr_unchecked;
struct list_head xattr_dead_list;
struct jffs2_xattr_ref *xref_dead_list;
struct jffs2_xattr_ref *xref_temp;
struct rw_semaphore xattr_sem;
uint32_t xdatum_mem_usage;
uint32_t xdatum_mem_threshold;
#endif
/* OS-private pointer for getting back to master superblock info */
void *os_priv;
};
#endif /* _JFFS2_FS_SB */

324
fs/jffs2/malloc.c Normal file
View file

@ -0,0 +1,324 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/jffs2.h>
#include "nodelist.h"
/* These are initialised to NULL in the kernel startup code.
If you're porting to other operating systems, beware */
static struct kmem_cache *full_dnode_slab;
static struct kmem_cache *raw_dirent_slab;
static struct kmem_cache *raw_inode_slab;
static struct kmem_cache *tmp_dnode_info_slab;
static struct kmem_cache *raw_node_ref_slab;
static struct kmem_cache *node_frag_slab;
static struct kmem_cache *inode_cache_slab;
#ifdef CONFIG_JFFS2_FS_XATTR
static struct kmem_cache *xattr_datum_cache;
static struct kmem_cache *xattr_ref_cache;
#endif
int __init jffs2_create_slab_caches(void)
{
full_dnode_slab = kmem_cache_create("jffs2_full_dnode",
sizeof(struct jffs2_full_dnode),
0, 0, NULL);
if (!full_dnode_slab)
goto err;
raw_dirent_slab = kmem_cache_create("jffs2_raw_dirent",
sizeof(struct jffs2_raw_dirent),
0, SLAB_HWCACHE_ALIGN, NULL);
if (!raw_dirent_slab)
goto err;
raw_inode_slab = kmem_cache_create("jffs2_raw_inode",
sizeof(struct jffs2_raw_inode),
0, SLAB_HWCACHE_ALIGN, NULL);
if (!raw_inode_slab)
goto err;
tmp_dnode_info_slab = kmem_cache_create("jffs2_tmp_dnode",
sizeof(struct jffs2_tmp_dnode_info),
0, 0, NULL);
if (!tmp_dnode_info_slab)
goto err;
raw_node_ref_slab = kmem_cache_create("jffs2_refblock",
sizeof(struct jffs2_raw_node_ref) * (REFS_PER_BLOCK + 1),
0, 0, NULL);
if (!raw_node_ref_slab)
goto err;
node_frag_slab = kmem_cache_create("jffs2_node_frag",
sizeof(struct jffs2_node_frag),
0, 0, NULL);
if (!node_frag_slab)
goto err;
inode_cache_slab = kmem_cache_create("jffs2_inode_cache",
sizeof(struct jffs2_inode_cache),
0, 0, NULL);
if (!inode_cache_slab)
goto err;
#ifdef CONFIG_JFFS2_FS_XATTR
xattr_datum_cache = kmem_cache_create("jffs2_xattr_datum",
sizeof(struct jffs2_xattr_datum),
0, 0, NULL);
if (!xattr_datum_cache)
goto err;
xattr_ref_cache = kmem_cache_create("jffs2_xattr_ref",
sizeof(struct jffs2_xattr_ref),
0, 0, NULL);
if (!xattr_ref_cache)
goto err;
#endif
return 0;
err:
jffs2_destroy_slab_caches();
return -ENOMEM;
}
void jffs2_destroy_slab_caches(void)
{
if(full_dnode_slab)
kmem_cache_destroy(full_dnode_slab);
if(raw_dirent_slab)
kmem_cache_destroy(raw_dirent_slab);
if(raw_inode_slab)
kmem_cache_destroy(raw_inode_slab);
if(tmp_dnode_info_slab)
kmem_cache_destroy(tmp_dnode_info_slab);
if(raw_node_ref_slab)
kmem_cache_destroy(raw_node_ref_slab);
if(node_frag_slab)
kmem_cache_destroy(node_frag_slab);
if(inode_cache_slab)
kmem_cache_destroy(inode_cache_slab);
#ifdef CONFIG_JFFS2_FS_XATTR
if (xattr_datum_cache)
kmem_cache_destroy(xattr_datum_cache);
if (xattr_ref_cache)
kmem_cache_destroy(xattr_ref_cache);
#endif
}
struct jffs2_full_dirent *jffs2_alloc_full_dirent(int namesize)
{
struct jffs2_full_dirent *ret;
ret = kmalloc(sizeof(struct jffs2_full_dirent) + namesize, GFP_KERNEL);
dbg_memalloc("%p\n", ret);
return ret;
}
void jffs2_free_full_dirent(struct jffs2_full_dirent *x)
{
dbg_memalloc("%p\n", x);
kfree(x);
}
struct jffs2_full_dnode *jffs2_alloc_full_dnode(void)
{
struct jffs2_full_dnode *ret;
ret = kmem_cache_alloc(full_dnode_slab, GFP_KERNEL);
dbg_memalloc("%p\n", ret);
return ret;
}
void jffs2_free_full_dnode(struct jffs2_full_dnode *x)
{
dbg_memalloc("%p\n", x);
kmem_cache_free(full_dnode_slab, x);
}
struct jffs2_raw_dirent *jffs2_alloc_raw_dirent(void)
{
struct jffs2_raw_dirent *ret;
ret = kmem_cache_alloc(raw_dirent_slab, GFP_KERNEL);
dbg_memalloc("%p\n", ret);
return ret;
}
void jffs2_free_raw_dirent(struct jffs2_raw_dirent *x)
{
dbg_memalloc("%p\n", x);
kmem_cache_free(raw_dirent_slab, x);
}
struct jffs2_raw_inode *jffs2_alloc_raw_inode(void)
{
struct jffs2_raw_inode *ret;
ret = kmem_cache_alloc(raw_inode_slab, GFP_KERNEL);
dbg_memalloc("%p\n", ret);
return ret;
}
void jffs2_free_raw_inode(struct jffs2_raw_inode *x)
{
dbg_memalloc("%p\n", x);
kmem_cache_free(raw_inode_slab, x);
}
struct jffs2_tmp_dnode_info *jffs2_alloc_tmp_dnode_info(void)
{
struct jffs2_tmp_dnode_info *ret;
ret = kmem_cache_alloc(tmp_dnode_info_slab, GFP_KERNEL);
dbg_memalloc("%p\n",
ret);
return ret;
}
void jffs2_free_tmp_dnode_info(struct jffs2_tmp_dnode_info *x)
{
dbg_memalloc("%p\n", x);
kmem_cache_free(tmp_dnode_info_slab, x);
}
static struct jffs2_raw_node_ref *jffs2_alloc_refblock(void)
{
struct jffs2_raw_node_ref *ret;
ret = kmem_cache_alloc(raw_node_ref_slab, GFP_KERNEL);
if (ret) {
int i = 0;
for (i=0; i < REFS_PER_BLOCK; i++) {
ret[i].flash_offset = REF_EMPTY_NODE;
ret[i].next_in_ino = NULL;
}
ret[i].flash_offset = REF_LINK_NODE;
ret[i].next_in_ino = NULL;
}
return ret;
}
int jffs2_prealloc_raw_node_refs(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb, int nr)
{
struct jffs2_raw_node_ref **p, *ref;
int i = nr;
dbg_memalloc("%d\n", nr);
p = &jeb->last_node;
ref = *p;
dbg_memalloc("Reserving %d refs for block @0x%08x\n", nr, jeb->offset);
/* If jeb->last_node is really a valid node then skip over it */
if (ref && ref->flash_offset != REF_EMPTY_NODE)
ref++;
while (i) {
if (!ref) {
dbg_memalloc("Allocating new refblock linked from %p\n", p);
ref = *p = jffs2_alloc_refblock();
if (!ref)
return -ENOMEM;
}
if (ref->flash_offset == REF_LINK_NODE) {
p = &ref->next_in_ino;
ref = *p;
continue;
}
i--;
ref++;
}
jeb->allocated_refs = nr;
dbg_memalloc("Reserved %d refs for block @0x%08x, last_node is %p (%08x,%p)\n",
nr, jeb->offset, jeb->last_node, jeb->last_node->flash_offset,
jeb->last_node->next_in_ino);
return 0;
}
void jffs2_free_refblock(struct jffs2_raw_node_ref *x)
{
dbg_memalloc("%p\n", x);
kmem_cache_free(raw_node_ref_slab, x);
}
struct jffs2_node_frag *jffs2_alloc_node_frag(void)
{
struct jffs2_node_frag *ret;
ret = kmem_cache_alloc(node_frag_slab, GFP_KERNEL);
dbg_memalloc("%p\n", ret);
return ret;
}
void jffs2_free_node_frag(struct jffs2_node_frag *x)
{
dbg_memalloc("%p\n", x);
kmem_cache_free(node_frag_slab, x);
}
struct jffs2_inode_cache *jffs2_alloc_inode_cache(void)
{
struct jffs2_inode_cache *ret;
ret = kmem_cache_alloc(inode_cache_slab, GFP_KERNEL);
dbg_memalloc("%p\n", ret);
return ret;
}
void jffs2_free_inode_cache(struct jffs2_inode_cache *x)
{
dbg_memalloc("%p\n", x);
kmem_cache_free(inode_cache_slab, x);
}
#ifdef CONFIG_JFFS2_FS_XATTR
struct jffs2_xattr_datum *jffs2_alloc_xattr_datum(void)
{
struct jffs2_xattr_datum *xd;
xd = kmem_cache_zalloc(xattr_datum_cache, GFP_KERNEL);
dbg_memalloc("%p\n", xd);
if (!xd)
return NULL;
xd->class = RAWNODE_CLASS_XATTR_DATUM;
xd->node = (void *)xd;
INIT_LIST_HEAD(&xd->xindex);
return xd;
}
void jffs2_free_xattr_datum(struct jffs2_xattr_datum *xd)
{
dbg_memalloc("%p\n", xd);
kmem_cache_free(xattr_datum_cache, xd);
}
struct jffs2_xattr_ref *jffs2_alloc_xattr_ref(void)
{
struct jffs2_xattr_ref *ref;
ref = kmem_cache_zalloc(xattr_ref_cache, GFP_KERNEL);
dbg_memalloc("%p\n", ref);
if (!ref)
return NULL;
ref->class = RAWNODE_CLASS_XATTR_REF;
ref->node = (void *)ref;
return ref;
}
void jffs2_free_xattr_ref(struct jffs2_xattr_ref *ref)
{
dbg_memalloc("%p\n", ref);
kmem_cache_free(xattr_ref_cache, ref);
}
#endif

755
fs/jffs2/nodelist.c Normal file
View file

@ -0,0 +1,755 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/mtd/mtd.h>
#include <linux/rbtree.h>
#include <linux/crc32.h>
#include <linux/pagemap.h>
#include "nodelist.h"
static void jffs2_obsolete_node_frag(struct jffs2_sb_info *c,
struct jffs2_node_frag *this);
void jffs2_add_fd_to_list(struct jffs2_sb_info *c, struct jffs2_full_dirent *new, struct jffs2_full_dirent **list)
{
struct jffs2_full_dirent **prev = list;
dbg_dentlist("add dirent \"%s\", ino #%u\n", new->name, new->ino);
while ((*prev) && (*prev)->nhash <= new->nhash) {
if ((*prev)->nhash == new->nhash && !strcmp((*prev)->name, new->name)) {
/* Duplicate. Free one */
if (new->version < (*prev)->version) {
dbg_dentlist("Eep! Marking new dirent node obsolete, old is \"%s\", ino #%u\n",
(*prev)->name, (*prev)->ino);
jffs2_mark_node_obsolete(c, new->raw);
jffs2_free_full_dirent(new);
} else {
dbg_dentlist("marking old dirent \"%s\", ino #%u obsolete\n",
(*prev)->name, (*prev)->ino);
new->next = (*prev)->next;
/* It may have been a 'placeholder' deletion dirent,
if jffs2_can_mark_obsolete() (see jffs2_do_unlink()) */
if ((*prev)->raw)
jffs2_mark_node_obsolete(c, ((*prev)->raw));
jffs2_free_full_dirent(*prev);
*prev = new;
}
return;
}
prev = &((*prev)->next);
}
new->next = *prev;
*prev = new;
}
uint32_t jffs2_truncate_fragtree(struct jffs2_sb_info *c, struct rb_root *list, uint32_t size)
{
struct jffs2_node_frag *frag = jffs2_lookup_node_frag(list, size);
dbg_fragtree("truncating fragtree to 0x%08x bytes\n", size);
/* We know frag->ofs <= size. That's what lookup does for us */
if (frag && frag->ofs != size) {
if (frag->ofs+frag->size > size) {
frag->size = size - frag->ofs;
}
frag = frag_next(frag);
}
while (frag && frag->ofs >= size) {
struct jffs2_node_frag *next = frag_next(frag);
frag_erase(frag, list);
jffs2_obsolete_node_frag(c, frag);
frag = next;
}
if (size == 0)
return 0;
frag = frag_last(list);
/* Sanity check for truncation to longer than we started with... */
if (!frag)
return 0;
if (frag->ofs + frag->size < size)
return frag->ofs + frag->size;
/* If the last fragment starts at the RAM page boundary, it is
* REF_PRISTINE irrespective of its size. */
if (frag->node && (frag->ofs & (PAGE_CACHE_SIZE - 1)) == 0) {
dbg_fragtree2("marking the last fragment 0x%08x-0x%08x REF_PRISTINE.\n",
frag->ofs, frag->ofs + frag->size);
frag->node->raw->flash_offset = ref_offset(frag->node->raw) | REF_PRISTINE;
}
return size;
}
static void jffs2_obsolete_node_frag(struct jffs2_sb_info *c,
struct jffs2_node_frag *this)
{
if (this->node) {
this->node->frags--;
if (!this->node->frags) {
/* The node has no valid frags left. It's totally obsoleted */
dbg_fragtree2("marking old node @0x%08x (0x%04x-0x%04x) obsolete\n",
ref_offset(this->node->raw), this->node->ofs, this->node->ofs+this->node->size);
jffs2_mark_node_obsolete(c, this->node->raw);
jffs2_free_full_dnode(this->node);
} else {
dbg_fragtree2("marking old node @0x%08x (0x%04x-0x%04x) REF_NORMAL. frags is %d\n",
ref_offset(this->node->raw), this->node->ofs, this->node->ofs+this->node->size, this->node->frags);
mark_ref_normal(this->node->raw);
}
}
jffs2_free_node_frag(this);
}
static void jffs2_fragtree_insert(struct jffs2_node_frag *newfrag, struct jffs2_node_frag *base)
{
struct rb_node *parent = &base->rb;
struct rb_node **link = &parent;
dbg_fragtree2("insert frag (0x%04x-0x%04x)\n", newfrag->ofs, newfrag->ofs + newfrag->size);
while (*link) {
parent = *link;
base = rb_entry(parent, struct jffs2_node_frag, rb);
if (newfrag->ofs > base->ofs)
link = &base->rb.rb_right;
else if (newfrag->ofs < base->ofs)
link = &base->rb.rb_left;
else {
JFFS2_ERROR("duplicate frag at %08x (%p,%p)\n", newfrag->ofs, newfrag, base);
BUG();
}
}
rb_link_node(&newfrag->rb, &base->rb, link);
}
/*
* Allocates and initializes a new fragment.
*/
static struct jffs2_node_frag * new_fragment(struct jffs2_full_dnode *fn, uint32_t ofs, uint32_t size)
{
struct jffs2_node_frag *newfrag;
newfrag = jffs2_alloc_node_frag();
if (likely(newfrag)) {
newfrag->ofs = ofs;
newfrag->size = size;
newfrag->node = fn;
} else {
JFFS2_ERROR("cannot allocate a jffs2_node_frag object\n");
}
return newfrag;
}
/*
* Called when no overlapping fragment exists. Inserts a hole before the new
* fragment and inserts the new fragment into the fragtree.
*/
static int no_overlapping_node(struct jffs2_sb_info *c, struct rb_root *root,
struct jffs2_node_frag *newfrag,
struct jffs2_node_frag *this, uint32_t lastend)
{
if (lastend < newfrag->node->ofs) {
/* put a hole in before the new fragment */
struct jffs2_node_frag *holefrag;
holefrag= new_fragment(NULL, lastend, newfrag->node->ofs - lastend);
if (unlikely(!holefrag)) {
jffs2_free_node_frag(newfrag);
return -ENOMEM;
}
if (this) {
/* By definition, the 'this' node has no right-hand child,
because there are no frags with offset greater than it.
So that's where we want to put the hole */
dbg_fragtree2("add hole frag %#04x-%#04x on the right of the new frag.\n",
holefrag->ofs, holefrag->ofs + holefrag->size);
rb_link_node(&holefrag->rb, &this->rb, &this->rb.rb_right);
} else {
dbg_fragtree2("Add hole frag %#04x-%#04x to the root of the tree.\n",
holefrag->ofs, holefrag->ofs + holefrag->size);
rb_link_node(&holefrag->rb, NULL, &root->rb_node);
}
rb_insert_color(&holefrag->rb, root);
this = holefrag;
}
if (this) {
/* By definition, the 'this' node has no right-hand child,
because there are no frags with offset greater than it.
So that's where we want to put new fragment */
dbg_fragtree2("add the new node at the right\n");
rb_link_node(&newfrag->rb, &this->rb, &this->rb.rb_right);
} else {
dbg_fragtree2("insert the new node at the root of the tree\n");
rb_link_node(&newfrag->rb, NULL, &root->rb_node);
}
rb_insert_color(&newfrag->rb, root);
return 0;
}
/* Doesn't set inode->i_size */
static int jffs2_add_frag_to_fragtree(struct jffs2_sb_info *c, struct rb_root *root, struct jffs2_node_frag *newfrag)
{
struct jffs2_node_frag *this;
uint32_t lastend;
/* Skip all the nodes which are completed before this one starts */
this = jffs2_lookup_node_frag(root, newfrag->node->ofs);
if (this) {
dbg_fragtree2("lookup gave frag 0x%04x-0x%04x; phys 0x%08x (*%p)\n",
this->ofs, this->ofs+this->size, this->node?(ref_offset(this->node->raw)):0xffffffff, this);
lastend = this->ofs + this->size;
} else {
dbg_fragtree2("lookup gave no frag\n");
lastend = 0;
}
/* See if we ran off the end of the fragtree */
if (lastend <= newfrag->ofs) {
/* We did */
/* Check if 'this' node was on the same page as the new node.
If so, both 'this' and the new node get marked REF_NORMAL so
the GC can take a look.
*/
if (lastend && (lastend-1) >> PAGE_CACHE_SHIFT == newfrag->ofs >> PAGE_CACHE_SHIFT) {
if (this->node)
mark_ref_normal(this->node->raw);
mark_ref_normal(newfrag->node->raw);
}
return no_overlapping_node(c, root, newfrag, this, lastend);
}
if (this->node)
dbg_fragtree2("dealing with frag %u-%u, phys %#08x(%d).\n",
this->ofs, this->ofs + this->size,
ref_offset(this->node->raw), ref_flags(this->node->raw));
else
dbg_fragtree2("dealing with hole frag %u-%u.\n",
this->ofs, this->ofs + this->size);
/* OK. 'this' is pointing at the first frag that newfrag->ofs at least partially obsoletes,
* - i.e. newfrag->ofs < this->ofs+this->size && newfrag->ofs >= this->ofs
*/
if (newfrag->ofs > this->ofs) {
/* This node isn't completely obsoleted. The start of it remains valid */
/* Mark the new node and the partially covered node REF_NORMAL -- let
the GC take a look at them */
mark_ref_normal(newfrag->node->raw);
if (this->node)
mark_ref_normal(this->node->raw);
if (this->ofs + this->size > newfrag->ofs + newfrag->size) {
/* The new node splits 'this' frag into two */
struct jffs2_node_frag *newfrag2;
if (this->node)
dbg_fragtree2("split old frag 0x%04x-0x%04x, phys 0x%08x\n",
this->ofs, this->ofs+this->size, ref_offset(this->node->raw));
else
dbg_fragtree2("split old hole frag 0x%04x-0x%04x\n",
this->ofs, this->ofs+this->size);
/* New second frag pointing to this's node */
newfrag2 = new_fragment(this->node, newfrag->ofs + newfrag->size,
this->ofs + this->size - newfrag->ofs - newfrag->size);
if (unlikely(!newfrag2))
return -ENOMEM;
if (this->node)
this->node->frags++;
/* Adjust size of original 'this' */
this->size = newfrag->ofs - this->ofs;
/* Now, we know there's no node with offset
greater than this->ofs but smaller than
newfrag2->ofs or newfrag->ofs, for obvious
reasons. So we can do a tree insert from
'this' to insert newfrag, and a tree insert
from newfrag to insert newfrag2. */
jffs2_fragtree_insert(newfrag, this);
rb_insert_color(&newfrag->rb, root);
jffs2_fragtree_insert(newfrag2, newfrag);
rb_insert_color(&newfrag2->rb, root);
return 0;
}
/* New node just reduces 'this' frag in size, doesn't split it */
this->size = newfrag->ofs - this->ofs;
/* Again, we know it lives down here in the tree */
jffs2_fragtree_insert(newfrag, this);
rb_insert_color(&newfrag->rb, root);
} else {
/* New frag starts at the same point as 'this' used to. Replace
it in the tree without doing a delete and insertion */
dbg_fragtree2("inserting newfrag (*%p),%d-%d in before 'this' (*%p),%d-%d\n",
newfrag, newfrag->ofs, newfrag->ofs+newfrag->size, this, this->ofs, this->ofs+this->size);
rb_replace_node(&this->rb, &newfrag->rb, root);
if (newfrag->ofs + newfrag->size >= this->ofs+this->size) {
dbg_fragtree2("obsoleting node frag %p (%x-%x)\n", this, this->ofs, this->ofs+this->size);
jffs2_obsolete_node_frag(c, this);
} else {
this->ofs += newfrag->size;
this->size -= newfrag->size;
jffs2_fragtree_insert(this, newfrag);
rb_insert_color(&this->rb, root);
return 0;
}
}
/* OK, now we have newfrag added in the correct place in the tree, but
frag_next(newfrag) may be a fragment which is overlapped by it
*/
while ((this = frag_next(newfrag)) && newfrag->ofs + newfrag->size >= this->ofs + this->size) {
/* 'this' frag is obsoleted completely. */
dbg_fragtree2("obsoleting node frag %p (%x-%x) and removing from tree\n",
this, this->ofs, this->ofs+this->size);
rb_erase(&this->rb, root);
jffs2_obsolete_node_frag(c, this);
}
/* Now we're pointing at the first frag which isn't totally obsoleted by
the new frag */
if (!this || newfrag->ofs + newfrag->size == this->ofs)
return 0;
/* Still some overlap but we don't need to move it in the tree */
this->size = (this->ofs + this->size) - (newfrag->ofs + newfrag->size);
this->ofs = newfrag->ofs + newfrag->size;
/* And mark them REF_NORMAL so the GC takes a look at them */
if (this->node)
mark_ref_normal(this->node->raw);
mark_ref_normal(newfrag->node->raw);
return 0;
}
/*
* Given an inode, probably with an existing tree of fragments, add the new node
* to the fragment tree.
*/
int jffs2_add_full_dnode_to_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_full_dnode *fn)
{
int ret;
struct jffs2_node_frag *newfrag;
if (unlikely(!fn->size))
return 0;
newfrag = new_fragment(fn, fn->ofs, fn->size);
if (unlikely(!newfrag))
return -ENOMEM;
newfrag->node->frags = 1;
dbg_fragtree("adding node %#04x-%#04x @0x%08x on flash, newfrag *%p\n",
fn->ofs, fn->ofs+fn->size, ref_offset(fn->raw), newfrag);
ret = jffs2_add_frag_to_fragtree(c, &f->fragtree, newfrag);
if (unlikely(ret))
return ret;
/* If we now share a page with other nodes, mark either previous
or next node REF_NORMAL, as appropriate. */
if (newfrag->ofs & (PAGE_CACHE_SIZE-1)) {
struct jffs2_node_frag *prev = frag_prev(newfrag);
mark_ref_normal(fn->raw);
/* If we don't start at zero there's _always_ a previous */
if (prev->node)
mark_ref_normal(prev->node->raw);
}
if ((newfrag->ofs+newfrag->size) & (PAGE_CACHE_SIZE-1)) {
struct jffs2_node_frag *next = frag_next(newfrag);
if (next) {
mark_ref_normal(fn->raw);
if (next->node)
mark_ref_normal(next->node->raw);
}
}
jffs2_dbg_fragtree_paranoia_check_nolock(f);
return 0;
}
void jffs2_set_inocache_state(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic, int state)
{
spin_lock(&c->inocache_lock);
ic->state = state;
wake_up(&c->inocache_wq);
spin_unlock(&c->inocache_lock);
}
/* During mount, this needs no locking. During normal operation, its
callers want to do other stuff while still holding the inocache_lock.
Rather than introducing special case get_ino_cache functions or
callbacks, we just let the caller do the locking itself. */
struct jffs2_inode_cache *jffs2_get_ino_cache(struct jffs2_sb_info *c, uint32_t ino)
{
struct jffs2_inode_cache *ret;
ret = c->inocache_list[ino % c->inocache_hashsize];
while (ret && ret->ino < ino) {
ret = ret->next;
}
if (ret && ret->ino != ino)
ret = NULL;
return ret;
}
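/* Editorial sketch, illustrative only: the locking convention described above --
callers of jffs2_get_ino_cache() take and release inocache_lock themselves
around the lookup. */
static void __maybe_unused example_inspect_ino_cache(struct jffs2_sb_info *c, uint32_t ino)
{
struct jffs2_inode_cache *ic;
spin_lock(&c->inocache_lock);
ic = jffs2_get_ino_cache(c, ino);
if (ic)
dbg_inocache("ino #%u is in state %d\n", ic->ino, ic->state);
spin_unlock(&c->inocache_lock);
}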
void jffs2_add_ino_cache (struct jffs2_sb_info *c, struct jffs2_inode_cache *new)
{
struct jffs2_inode_cache **prev;
spin_lock(&c->inocache_lock);
if (!new->ino)
new->ino = ++c->highest_ino;
dbg_inocache("add %p (ino #%u)\n", new, new->ino);
prev = &c->inocache_list[new->ino % c->inocache_hashsize];
while ((*prev) && (*prev)->ino < new->ino) {
prev = &(*prev)->next;
}
new->next = *prev;
*prev = new;
spin_unlock(&c->inocache_lock);
}
void jffs2_del_ino_cache(struct jffs2_sb_info *c, struct jffs2_inode_cache *old)
{
struct jffs2_inode_cache **prev;
#ifdef CONFIG_JFFS2_FS_XATTR
BUG_ON(old->xref);
#endif
dbg_inocache("del %p (ino #%u)\n", old, old->ino);
spin_lock(&c->inocache_lock);
prev = &c->inocache_list[old->ino % c->inocache_hashsize];
while ((*prev) && (*prev)->ino < old->ino) {
prev = &(*prev)->next;
}
if ((*prev) == old) {
*prev = old->next;
}
/* Free it now unless it's in READING or CLEARING state, which
are the transitions upon read_inode() and clear_inode(). The
rest of the time we know nobody else is looking at it, and
if it's held by read_inode() or clear_inode() they'll free it
for themselves. */
if (old->state != INO_STATE_READING && old->state != INO_STATE_CLEARING)
jffs2_free_inode_cache(old);
spin_unlock(&c->inocache_lock);
}
void jffs2_free_ino_caches(struct jffs2_sb_info *c)
{
int i;
struct jffs2_inode_cache *this, *next;
for (i=0; i < c->inocache_hashsize; i++) {
this = c->inocache_list[i];
while (this) {
next = this->next;
jffs2_xattr_free_inode(c, this);
jffs2_free_inode_cache(this);
this = next;
}
c->inocache_list[i] = NULL;
}
}
void jffs2_free_raw_node_refs(struct jffs2_sb_info *c)
{
int i;
struct jffs2_raw_node_ref *this, *next;
for (i=0; i<c->nr_blocks; i++) {
this = c->blocks[i].first_node;
while (this) {
if (this[REFS_PER_BLOCK].flash_offset == REF_LINK_NODE)
next = this[REFS_PER_BLOCK].next_in_ino;
else
next = NULL;
jffs2_free_refblock(this);
this = next;
}
c->blocks[i].first_node = c->blocks[i].last_node = NULL;
}
}
struct jffs2_node_frag *jffs2_lookup_node_frag(struct rb_root *fragtree, uint32_t offset)
{
/* The common case in lookup is that there will be a node
which precisely matches. So we go looking for that first */
struct rb_node *next;
struct jffs2_node_frag *prev = NULL;
struct jffs2_node_frag *frag = NULL;
dbg_fragtree2("root %p, offset %d\n", fragtree, offset);
next = fragtree->rb_node;
while(next) {
frag = rb_entry(next, struct jffs2_node_frag, rb);
if (frag->ofs + frag->size <= offset) {
/* Remember the closest smaller match on the way down */
if (!prev || frag->ofs > prev->ofs)
prev = frag;
next = frag->rb.rb_right;
} else if (frag->ofs > offset) {
next = frag->rb.rb_left;
} else {
return frag;
}
}
/* Exact match not found. Go back up looking at each parent,
and return the closest smaller one */
if (prev)
dbg_fragtree2("no match. Returning frag %#04x-%#04x, closest previous\n",
prev->ofs, prev->ofs+prev->size);
else
dbg_fragtree2("returning NULL, empty fragtree\n");
return prev;
}
/* Pass 'c' argument to indicate that nodes should be marked obsolete as
they're killed. */
void jffs2_kill_fragtree(struct rb_root *root, struct jffs2_sb_info *c)
{
struct jffs2_node_frag *frag, *next;
dbg_fragtree("killing\n");
rbtree_postorder_for_each_entry_safe(frag, next, root, rb) {
if (frag->node && !(--frag->node->frags)) {
/* Not a hole, and it's the final remaining frag
of this node. Free the node */
if (c)
jffs2_mark_node_obsolete(c, frag->node->raw);
jffs2_free_full_dnode(frag->node);
}
jffs2_free_node_frag(frag);
cond_resched();
}
}
struct jffs2_raw_node_ref *jffs2_link_node_ref(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb,
uint32_t ofs, uint32_t len,
struct jffs2_inode_cache *ic)
{
struct jffs2_raw_node_ref *ref;
BUG_ON(!jeb->allocated_refs);
jeb->allocated_refs--;
ref = jeb->last_node;
dbg_noderef("Last node at %p is (%08x,%p)\n", ref, ref->flash_offset,
ref->next_in_ino);
while (ref->flash_offset != REF_EMPTY_NODE) {
if (ref->flash_offset == REF_LINK_NODE)
ref = ref->next_in_ino;
else
ref++;
}
dbg_noderef("New ref is %p (%08x becomes %08x,%p) len 0x%x\n", ref,
ref->flash_offset, ofs, ref->next_in_ino, len);
ref->flash_offset = ofs;
if (!jeb->first_node) {
jeb->first_node = ref;
BUG_ON(ref_offset(ref) != jeb->offset);
} else if (unlikely(ref_offset(ref) != jeb->offset + c->sector_size - jeb->free_size)) {
uint32_t last_len = ref_totlen(c, jeb, jeb->last_node);
JFFS2_ERROR("Adding new ref %p at (0x%08x-0x%08x) not immediately after previous (0x%08x-0x%08x)\n",
ref, ref_offset(ref), ref_offset(ref)+len,
ref_offset(jeb->last_node),
ref_offset(jeb->last_node)+last_len);
BUG();
}
jeb->last_node = ref;
if (ic) {
ref->next_in_ino = ic->nodes;
ic->nodes = ref;
} else {
ref->next_in_ino = NULL;
}
switch(ref_flags(ref)) {
case REF_UNCHECKED:
c->unchecked_size += len;
jeb->unchecked_size += len;
break;
case REF_NORMAL:
case REF_PRISTINE:
c->used_size += len;
jeb->used_size += len;
break;
case REF_OBSOLETE:
c->dirty_size += len;
jeb->dirty_size += len;
break;
}
c->free_size -= len;
jeb->free_size -= len;
#ifdef TEST_TOTLEN
/* Set (and test) __totlen field... for now */
ref->__totlen = len;
ref_totlen(c, jeb, ref);
#endif
return ref;
}
/* No locking, no reservation of 'ref'. Do not use on a live file system */
int jffs2_scan_dirty_space(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,
uint32_t size)
{
if (!size)
return 0;
if (unlikely(size > jeb->free_size)) {
pr_crit("Dirty space 0x%x larger then free_size 0x%x (wasted 0x%x)\n",
size, jeb->free_size, jeb->wasted_size);
BUG();
}
/* REF_EMPTY_NODE is !obsolete, so that works OK */
if (jeb->last_node && ref_obsolete(jeb->last_node)) {
#ifdef TEST_TOTLEN
jeb->last_node->__totlen += size;
#endif
c->dirty_size += size;
c->free_size -= size;
jeb->dirty_size += size;
jeb->free_size -= size;
} else {
uint32_t ofs = jeb->offset + c->sector_size - jeb->free_size;
ofs |= REF_OBSOLETE;
jffs2_link_node_ref(c, jeb, ofs, size, NULL);
}
return 0;
}
/* Calculate totlen from surrounding nodes or eraseblock */
static inline uint32_t __ref_totlen(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb,
struct jffs2_raw_node_ref *ref)
{
uint32_t ref_end;
struct jffs2_raw_node_ref *next_ref = ref_next(ref);
if (next_ref)
ref_end = ref_offset(next_ref);
else {
if (!jeb)
jeb = &c->blocks[ref->flash_offset / c->sector_size];
/* Last node in block. Use free_space */
if (unlikely(ref != jeb->last_node)) {
pr_crit("ref %p @0x%08x is not jeb->last_node (%p @0x%08x)\n",
ref, ref_offset(ref), jeb->last_node,
jeb->last_node ?
ref_offset(jeb->last_node) : 0);
BUG();
}
ref_end = jeb->offset + c->sector_size - jeb->free_size;
}
return ref_end - ref_offset(ref);
}
uint32_t __jffs2_ref_totlen(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,
struct jffs2_raw_node_ref *ref)
{
uint32_t ret;
ret = __ref_totlen(c, jeb, ref);
#ifdef TEST_TOTLEN
if (unlikely(ret != ref->__totlen)) {
if (!jeb)
jeb = &c->blocks[ref->flash_offset / c->sector_size];
pr_crit("Totlen for ref at %p (0x%08x-0x%08x) miscalculated as 0x%x instead of %x\n",
ref, ref_offset(ref), ref_offset(ref) + ref->__totlen,
ret, ref->__totlen);
if (ref_next(ref)) {
pr_crit("next %p (0x%08x-0x%08x)\n",
ref_next(ref), ref_offset(ref_next(ref)),
ref_offset(ref_next(ref)) + ref->__totlen);
} else
pr_crit("No next ref. jeb->last_node is %p\n",
jeb->last_node);
pr_crit("jeb->wasted_size %x, dirty_size %x, used_size %x, free_size %x\n",
jeb->wasted_size, jeb->dirty_size, jeb->used_size,
jeb->free_size);
#if defined(JFFS2_DBG_DUMPS) || defined(JFFS2_DBG_PARANOIA_CHECKS)
__jffs2_dbg_dump_node_refs_nolock(c, jeb);
#endif
WARN_ON(1);
ret = ref->__totlen;
}
#endif /* TEST_TOTLEN */
return ret;
}

480
fs/jffs2/nodelist.h Normal file
View file

@ -0,0 +1,480 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef __JFFS2_NODELIST_H__
#define __JFFS2_NODELIST_H__
#include <linux/fs.h>
#include <linux/types.h>
#include <linux/jffs2.h>
#include "jffs2_fs_sb.h"
#include "jffs2_fs_i.h"
#include "xattr.h"
#include "acl.h"
#include "summary.h"
#ifdef __ECOS
#include "os-ecos.h"
#else
#include "os-linux.h"
#endif
#define JFFS2_NATIVE_ENDIAN
/* Note we handle mode bits conversion from JFFS2 (i.e. Linux) to/from
whatever OS we're actually running on here too. */
#if defined(JFFS2_NATIVE_ENDIAN)
#define cpu_to_je16(x) ((jint16_t){x})
#define cpu_to_je32(x) ((jint32_t){x})
#define cpu_to_jemode(x) ((jmode_t){os_to_jffs2_mode(x)})
#define constant_cpu_to_je16(x) ((jint16_t){x})
#define constant_cpu_to_je32(x) ((jint32_t){x})
#define je16_to_cpu(x) ((x).v16)
#define je32_to_cpu(x) ((x).v32)
#define jemode_to_cpu(x) (jffs2_to_os_mode((x).m))
#elif defined(JFFS2_BIG_ENDIAN)
#define cpu_to_je16(x) ((jint16_t){cpu_to_be16(x)})
#define cpu_to_je32(x) ((jint32_t){cpu_to_be32(x)})
#define cpu_to_jemode(x) ((jmode_t){cpu_to_be32(os_to_jffs2_mode(x))})
#define constant_cpu_to_je16(x) ((jint16_t){__constant_cpu_to_be16(x)})
#define constant_cpu_to_je32(x) ((jint32_t){__constant_cpu_to_be32(x)})
#define je16_to_cpu(x) (be16_to_cpu(x.v16))
#define je32_to_cpu(x) (be32_to_cpu(x.v32))
#define jemode_to_cpu(x) (be32_to_cpu(jffs2_to_os_mode((x).m)))
#elif defined(JFFS2_LITTLE_ENDIAN)
#define cpu_to_je16(x) ((jint16_t){cpu_to_le16(x)})
#define cpu_to_je32(x) ((jint32_t){cpu_to_le32(x)})
#define cpu_to_jemode(x) ((jmode_t){cpu_to_le32(os_to_jffs2_mode(x))})
#define constant_cpu_to_je16(x) ((jint16_t){__constant_cpu_to_le16(x)})
#define constant_cpu_to_je32(x) ((jint32_t){__constant_cpu_to_le32(x)})
#define je16_to_cpu(x) (le16_to_cpu(x.v16))
#define je32_to_cpu(x) (le32_to_cpu(x.v32))
#define jemode_to_cpu(x) (le32_to_cpu(jffs2_to_os_mode((x).m)))
#else
#error wibble
#endif
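/* Editorial sketch, not part of the original header: typical use of the je32
helpers above when filling an on-flash structure from <linux/jffs2.h>. The
field shown (totlen) is a jint32_t in struct jffs2_raw_inode. */
static inline void example_set_totlen(struct jffs2_raw_inode *ri, uint32_t totlen)
{
ri->totlen = cpu_to_je32(totlen); /* CPU order -> on-flash representation */
/* je32_to_cpu(ri->totlen) now yields 'totlen' again */
}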
/* The minimal node header size */
#define JFFS2_MIN_NODE_HEADER sizeof(struct jffs2_raw_dirent)
/*
This is all we need to keep in-core for each raw node during normal
operation. As and when we do read_inode on a particular inode, we can
scan the nodes which are listed for it and build up a proper map of
which nodes are currently valid. JFFSv1 always used to keep that whole
map in core for each inode.
*/
struct jffs2_raw_node_ref
{
struct jffs2_raw_node_ref *next_in_ino; /* Points to the next raw_node_ref
for this object. If this _is_ the last, it points to the inode_cache,
xattr_ref or xattr_datum instead. The common part of those structures
has NULL in the first word. See jffs2_raw_ref_to_ic() below */
uint32_t flash_offset;
#undef TEST_TOTLEN
#ifdef TEST_TOTLEN
uint32_t __totlen; /* This may die; use ref_totlen(c, jeb, ) below */
#endif
};
#define REF_LINK_NODE ((int32_t)-1)
#define REF_EMPTY_NODE ((int32_t)-2)
/* Use blocks of about 256 bytes */
#define REFS_PER_BLOCK ((255/sizeof(struct jffs2_raw_node_ref))-1)
static inline struct jffs2_raw_node_ref *ref_next(struct jffs2_raw_node_ref *ref)
{
ref++;
/* Link to another block of refs */
if (ref->flash_offset == REF_LINK_NODE) {
ref = ref->next_in_ino;
if (!ref)
return ref;
}
/* End of chain */
if (ref->flash_offset == REF_EMPTY_NODE)
return NULL;
return ref;
}
static inline struct jffs2_inode_cache *jffs2_raw_ref_to_ic(struct jffs2_raw_node_ref *raw)
{
while(raw->next_in_ino)
raw = raw->next_in_ino;
/* NB. This can be a jffs2_xattr_datum or jffs2_xattr_ref and
not actually a jffs2_inode_cache. Check ->class */
return ((struct jffs2_inode_cache *)raw);
}
/* flash_offset & 3 always has to be zero, because nodes are
always aligned at 4 bytes. So we have a couple of extra bits
to play with, which indicate the node's status; see below: */
#define REF_UNCHECKED 0 /* We haven't yet checked the CRC or built its inode */
#define REF_OBSOLETE 1 /* Obsolete, can be completely ignored */
#define REF_PRISTINE 2 /* Completely clean. GC without looking */
#define REF_NORMAL 3 /* Possibly overlapped. Read the page and write again on GC */
#define ref_flags(ref) ((ref)->flash_offset & 3)
#define ref_offset(ref) ((ref)->flash_offset & ~3)
#define ref_obsolete(ref) (((ref)->flash_offset & 3) == REF_OBSOLETE)
#define mark_ref_normal(ref) do { (ref)->flash_offset = ref_offset(ref) | REF_NORMAL; } while(0)
/* Dirent nodes should be REF_PRISTINE only if they are not a deletion
dirent. Deletion dirents should be REF_NORMAL so that GC gets to
throw them away when appropriate */
#define dirent_node_state(rd) ( (je32_to_cpu((rd)->ino)?REF_PRISTINE:REF_NORMAL) )
/* NB: REF_PRISTINE for an inode-less node (ref->next_in_ino == NULL) indicates
it is an unknown node of type JFFS2_NODETYPE_RWCOMPAT_COPY, so it'll get
copied. If you need to do anything different to GC inode-less nodes, then
you need to modify gc.c accordingly. */
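/* Editorial sketch, illustrative only: how the two low bits of flash_offset
carry the node state described above, while ref_offset() recovers the real,
4-byte-aligned flash address. The offset value used here is hypothetical. */
static inline void example_flash_offset_bits(struct jffs2_raw_node_ref *ref)
{
ref->flash_offset = 0x00040000 | REF_UNCHECKED; /* state lives in bits 0-1 */
/* ref_offset(ref) == 0x00040000, ref_flags(ref) == REF_UNCHECKED */
mark_ref_normal(ref); /* after checking, possibly-overlapped nodes become REF_NORMAL */
}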
/* For each inode in the filesystem, we need to keep a record of
nlink, because it would be a PITA to scan the whole directory tree
at read_inode() time to calculate it, and to keep sufficient information
in the raw_node_ref (basically both parent and child inode number for
dirent nodes) would take more space than this does. We also keep
a pointer to the first physical node which is part of this inode, too.
*/
struct jffs2_inode_cache {
/* First part of structure is shared with other objects which
can terminate the raw node refs' next_in_ino list -- which are
currently struct jffs2_xattr_datum and struct jffs2_xattr_ref. */
struct jffs2_full_dirent *scan_dents; /* Used during scan to hold
temporary lists of dirents, and later must be set to
NULL to mark the end of the raw_node_ref->next_in_ino
chain. */
struct jffs2_raw_node_ref *nodes;
uint8_t class; /* It's used for identification */
/* end of shared structure */
uint8_t flags;
uint16_t state;
uint32_t ino;
struct jffs2_inode_cache *next;
#ifdef CONFIG_JFFS2_FS_XATTR
struct jffs2_xattr_ref *xref;
#endif
uint32_t pino_nlink; /* Directories store parent inode
here; other inodes store nlink.
Zero always means that it's
completely unlinked. */
};
/* Inode states for 'state' above. We need the 'GC' state to prevent
someone from doing a read_inode() while we're moving a 'REF_PRISTINE'
node without going through all the iget() nonsense */
#define INO_STATE_UNCHECKED 0 /* CRC checks not yet done */
#define INO_STATE_CHECKING 1 /* CRC checks in progress */
#define INO_STATE_PRESENT 2 /* In core */
#define INO_STATE_CHECKEDABSENT 3 /* Checked, cleared again */
#define INO_STATE_GC 4 /* GCing a 'pristine' node */
#define INO_STATE_READING 5 /* In read_inode() */
#define INO_STATE_CLEARING 6 /* In clear_inode() */
#define INO_FLAGS_XATTR_CHECKED 0x01 /* has no duplicate xattr_ref */
#define RAWNODE_CLASS_INODE_CACHE 0
#define RAWNODE_CLASS_XATTR_DATUM 1
#define RAWNODE_CLASS_XATTR_REF 2
#define INOCACHE_HASHSIZE_MIN 128
#define INOCACHE_HASHSIZE_MAX 1024
#define write_ofs(c) ((c)->nextblock->offset + (c)->sector_size - (c)->nextblock->free_size)
/*
Larger representation of a raw node, kept in-core only when the
struct inode for this particular ino is instantiated.
*/
struct jffs2_full_dnode
{
struct jffs2_raw_node_ref *raw;
uint32_t ofs; /* The offset to which the data of this node belongs */
uint32_t size;
uint32_t frags; /* Number of fragments which currently refer
to this node. When this reaches zero,
the node is obsolete. */
};
/*
Even larger representation of a raw node, kept in-core only while
we're actually building up the original map of which nodes go where,
in read_inode()
*/
struct jffs2_tmp_dnode_info
{
struct rb_node rb;
struct jffs2_full_dnode *fn;
uint32_t version;
uint32_t data_crc;
uint32_t partial_crc;
uint32_t csize;
uint16_t overlapped;
};
/* Temporary data structure used during readinode. */
struct jffs2_readinode_info
{
struct rb_root tn_root;
struct jffs2_tmp_dnode_info *mdata_tn;
uint32_t highest_version;
uint32_t latest_mctime;
uint32_t mctime_ver;
struct jffs2_full_dirent *fds;
struct jffs2_raw_node_ref *latest_ref;
};
struct jffs2_full_dirent
{
struct jffs2_raw_node_ref *raw;
struct jffs2_full_dirent *next;
uint32_t version;
uint32_t ino; /* == zero for unlink */
unsigned int nhash;
unsigned char type;
unsigned char name[0];
};
/*
Fragments - used to build a map of which raw node to obtain
data from for each part of the ino
*/
struct jffs2_node_frag
{
struct rb_node rb;
struct jffs2_full_dnode *node; /* NULL for holes */
uint32_t size;
uint32_t ofs; /* The offset to which this fragment belongs */
};
struct jffs2_eraseblock
{
struct list_head list;
int bad_count;
uint32_t offset; /* of this block in the MTD */
uint32_t unchecked_size;
uint32_t used_size;
uint32_t dirty_size;
uint32_t wasted_size;
uint32_t free_size; /* Note that sector_size - free_size
is the address of the first free space */
uint32_t allocated_refs;
struct jffs2_raw_node_ref *first_node;
struct jffs2_raw_node_ref *last_node;
struct jffs2_raw_node_ref *gc_node; /* Next node to be garbage collected */
};
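/* Editorial sketch, illustrative only: walking every raw node ref recorded for
an eraseblock. ref_next(), defined above, transparently follows REF_LINK_NODE
entries from one refblock into the next and returns NULL at REF_EMPTY_NODE. */
static inline int example_count_refs(struct jffs2_eraseblock *jeb)
{
struct jffs2_raw_node_ref *ref;
int n = 0;
for (ref = jeb->first_node; ref; ref = ref_next(ref))
n++;
return n;
}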
static inline int jffs2_blocks_use_vmalloc(struct jffs2_sb_info *c)
{
return ((c->flash_size / c->sector_size) * sizeof (struct jffs2_eraseblock)) > (128 * 1024);
}
#define ref_totlen(a, b, c) __jffs2_ref_totlen((a), (b), (c))
#define ALLOC_NORMAL 0 /* Normal allocation */
#define ALLOC_DELETION 1 /* Deletion node. Best to allow it */
#define ALLOC_GC 2 /* Space requested for GC. Give it or die */
#define ALLOC_NORETRY 3 /* For jffs2_write_dnode: On failure, return -EAGAIN instead of retrying */
/* How much dirty space before it goes on the very_dirty_list */
#define VERYDIRTY(c, size) ((size) >= ((c)->sector_size / 2))
/* check if dirty space is more than 255 Byte */
#define ISDIRTY(size) ((size) > sizeof (struct jffs2_raw_inode) + JFFS2_MIN_DATA_LEN)
#define PAD(x) (((x)+3)&~3)
static inline int jffs2_encode_dev(union jffs2_device_node *jdev, dev_t rdev)
{
if (old_valid_dev(rdev)) {
jdev->old_id = cpu_to_je16(old_encode_dev(rdev));
return sizeof(jdev->old_id);
} else {
jdev->new_id = cpu_to_je32(new_encode_dev(rdev));
return sizeof(jdev->new_id);
}
}
static inline struct jffs2_node_frag *frag_first(struct rb_root *root)
{
struct rb_node *node = rb_first(root);
if (!node)
return NULL;
return rb_entry(node, struct jffs2_node_frag, rb);
}
static inline struct jffs2_node_frag *frag_last(struct rb_root *root)
{
struct rb_node *node = rb_last(root);
if (!node)
return NULL;
return rb_entry(node, struct jffs2_node_frag, rb);
}
#define frag_next(frag) rb_entry(rb_next(&(frag)->rb), struct jffs2_node_frag, rb)
#define frag_prev(frag) rb_entry(rb_prev(&(frag)->rb), struct jffs2_node_frag, rb)
#define frag_parent(frag) rb_entry(rb_parent(&(frag)->rb), struct jffs2_node_frag, rb)
#define frag_left(frag) rb_entry((frag)->rb.rb_left, struct jffs2_node_frag, rb)
#define frag_right(frag) rb_entry((frag)->rb.rb_right, struct jffs2_node_frag, rb)
#define frag_erase(frag, list) rb_erase(&frag->rb, list);
#define tn_next(tn) rb_entry(rb_next(&(tn)->rb), struct jffs2_tmp_dnode_info, rb)
#define tn_prev(tn) rb_entry(rb_prev(&(tn)->rb), struct jffs2_tmp_dnode_info, rb)
#define tn_parent(tn) rb_entry(rb_parent(&(tn)->rb), struct jffs2_tmp_dnode_info, rb)
#define tn_left(tn) rb_entry((tn)->rb.rb_left, struct jffs2_tmp_dnode_info, rb)
#define tn_right(tn) rb_entry((tn)->rb.rb_right, struct jffs2_tmp_dnode_info, rb)
#define tn_erase(tn, list) rb_erase(&tn->rb, list);
#define tn_last(list) rb_entry(rb_last(list), struct jffs2_tmp_dnode_info, rb)
#define tn_first(list) rb_entry(rb_first(list), struct jffs2_tmp_dnode_info, rb)
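/* Editorial sketch, illustrative only: an in-order walk of an inode's fragment
tree with the frag_first()/frag_next() helpers above. Hole fragments have
frag->node == NULL; the walk relies on 'rb' being the first member of
struct jffs2_node_frag, so frag_next() yields NULL past the last fragment. */
static inline uint32_t example_fragtree_end(struct rb_root *fragtree)
{
struct jffs2_node_frag *frag;
uint32_t end = 0;
for (frag = frag_first(fragtree); frag; frag = frag_next(frag))
end = frag->ofs + frag->size;
return end; /* offset just past the last fragment */
}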
/* nodelist.c */
void jffs2_add_fd_to_list(struct jffs2_sb_info *c, struct jffs2_full_dirent *new, struct jffs2_full_dirent **list);
void jffs2_set_inocache_state(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic, int state);
struct jffs2_inode_cache *jffs2_get_ino_cache(struct jffs2_sb_info *c, uint32_t ino);
void jffs2_add_ino_cache (struct jffs2_sb_info *c, struct jffs2_inode_cache *new);
void jffs2_del_ino_cache(struct jffs2_sb_info *c, struct jffs2_inode_cache *old);
void jffs2_free_ino_caches(struct jffs2_sb_info *c);
void jffs2_free_raw_node_refs(struct jffs2_sb_info *c);
struct jffs2_node_frag *jffs2_lookup_node_frag(struct rb_root *fragtree, uint32_t offset);
void jffs2_kill_fragtree(struct rb_root *root, struct jffs2_sb_info *c_delete);
int jffs2_add_full_dnode_to_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_full_dnode *fn);
uint32_t jffs2_truncate_fragtree (struct jffs2_sb_info *c, struct rb_root *list, uint32_t size);
struct jffs2_raw_node_ref *jffs2_link_node_ref(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb,
uint32_t ofs, uint32_t len,
struct jffs2_inode_cache *ic);
extern uint32_t __jffs2_ref_totlen(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb,
struct jffs2_raw_node_ref *ref);
/* nodemgmt.c */
int jffs2_thread_should_wake(struct jffs2_sb_info *c);
int jffs2_reserve_space(struct jffs2_sb_info *c, uint32_t minsize,
uint32_t *len, int prio, uint32_t sumsize);
int jffs2_reserve_space_gc(struct jffs2_sb_info *c, uint32_t minsize,
uint32_t *len, uint32_t sumsize);
struct jffs2_raw_node_ref *jffs2_add_physical_node_ref(struct jffs2_sb_info *c,
uint32_t ofs, uint32_t len,
struct jffs2_inode_cache *ic);
void jffs2_complete_reservation(struct jffs2_sb_info *c);
void jffs2_mark_node_obsolete(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *raw);
/* write.c */
int jffs2_do_new_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, uint32_t mode, struct jffs2_raw_inode *ri);
struct jffs2_full_dnode *jffs2_write_dnode(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_raw_inode *ri, const unsigned char *data,
uint32_t datalen, int alloc_mode);
struct jffs2_full_dirent *jffs2_write_dirent(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_raw_dirent *rd, const unsigned char *name,
uint32_t namelen, int alloc_mode);
int jffs2_write_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_raw_inode *ri, unsigned char *buf,
uint32_t offset, uint32_t writelen, uint32_t *retlen);
int jffs2_do_create(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, struct jffs2_inode_info *f,
struct jffs2_raw_inode *ri, const struct qstr *qstr);
int jffs2_do_unlink(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, const char *name,
int namelen, struct jffs2_inode_info *dead_f, uint32_t time);
int jffs2_do_link(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, uint32_t ino,
uint8_t type, const char *name, int namelen, uint32_t time);
/* readinode.c */
int jffs2_do_read_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
uint32_t ino, struct jffs2_raw_inode *latest_node);
int jffs2_do_crccheck_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic);
void jffs2_do_clear_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f);
/* malloc.c */
int jffs2_create_slab_caches(void);
void jffs2_destroy_slab_caches(void);
struct jffs2_full_dirent *jffs2_alloc_full_dirent(int namesize);
void jffs2_free_full_dirent(struct jffs2_full_dirent *);
struct jffs2_full_dnode *jffs2_alloc_full_dnode(void);
void jffs2_free_full_dnode(struct jffs2_full_dnode *);
struct jffs2_raw_dirent *jffs2_alloc_raw_dirent(void);
void jffs2_free_raw_dirent(struct jffs2_raw_dirent *);
struct jffs2_raw_inode *jffs2_alloc_raw_inode(void);
void jffs2_free_raw_inode(struct jffs2_raw_inode *);
struct jffs2_tmp_dnode_info *jffs2_alloc_tmp_dnode_info(void);
void jffs2_free_tmp_dnode_info(struct jffs2_tmp_dnode_info *);
int jffs2_prealloc_raw_node_refs(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb, int nr);
void jffs2_free_refblock(struct jffs2_raw_node_ref *);
struct jffs2_node_frag *jffs2_alloc_node_frag(void);
void jffs2_free_node_frag(struct jffs2_node_frag *);
struct jffs2_inode_cache *jffs2_alloc_inode_cache(void);
void jffs2_free_inode_cache(struct jffs2_inode_cache *);
#ifdef CONFIG_JFFS2_FS_XATTR
struct jffs2_xattr_datum *jffs2_alloc_xattr_datum(void);
void jffs2_free_xattr_datum(struct jffs2_xattr_datum *);
struct jffs2_xattr_ref *jffs2_alloc_xattr_ref(void);
void jffs2_free_xattr_ref(struct jffs2_xattr_ref *);
#endif
/* gc.c */
int jffs2_garbage_collect_pass(struct jffs2_sb_info *c);
/* read.c */
int jffs2_read_dnode(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_full_dnode *fd, unsigned char *buf,
int ofs, int len);
int jffs2_read_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
unsigned char *buf, uint32_t offset, uint32_t len);
char *jffs2_getlink(struct jffs2_sb_info *c, struct jffs2_inode_info *f);
/* scan.c */
int jffs2_scan_medium(struct jffs2_sb_info *c);
void jffs2_rotate_lists(struct jffs2_sb_info *c);
struct jffs2_inode_cache *jffs2_scan_make_ino_cache(struct jffs2_sb_info *c, uint32_t ino);
int jffs2_scan_classify_jeb(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
int jffs2_scan_dirty_space(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t size);
/* build.c */
int jffs2_do_mount_fs(struct jffs2_sb_info *c);
/* erase.c */
int jffs2_erase_pending_blocks(struct jffs2_sb_info *c, int count);
void jffs2_free_jeb_node_refs(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
#ifdef CONFIG_JFFS2_FS_WRITEBUFFER
/* wbuf.c */
int jffs2_flush_wbuf_gc(struct jffs2_sb_info *c, uint32_t ino);
int jffs2_flush_wbuf_pad(struct jffs2_sb_info *c);
int jffs2_check_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
#endif
#include "debug.h"
#endif /* __JFFS2_NODELIST_H__ */

883
fs/jffs2/nodemgmt.c Normal file
View file

@ -0,0 +1,883 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/mtd/mtd.h>
#include <linux/compiler.h>
#include <linux/sched.h> /* For cond_resched() */
#include "nodelist.h"
#include "debug.h"
/*
* Check whether the user is allowed to write.
*/
static int jffs2_rp_can_write(struct jffs2_sb_info *c)
{
uint32_t avail;
struct jffs2_mount_opts *opts = &c->mount_opts;
avail = c->dirty_size + c->free_size + c->unchecked_size +
c->erasing_size - c->resv_blocks_write * c->sector_size
- c->nospc_dirty_size;
if (avail < 2 * opts->rp_size)
jffs2_dbg(1, "rpsize %u, dirty_size %u, free_size %u, "
"erasing_size %u, unchecked_size %u, "
"nr_erasing_blocks %u, avail %u, resrv %u\n",
opts->rp_size, c->dirty_size, c->free_size,
c->erasing_size, c->unchecked_size,
c->nr_erasing_blocks, avail, c->nospc_dirty_size);
if (avail > opts->rp_size)
return 1;
/* Always allow root */
if (capable(CAP_SYS_RESOURCE))
return 1;
jffs2_dbg(1, "forbid writing\n");
return 0;
}
/**
* jffs2_reserve_space - request physical space to write nodes to flash
* @c: superblock info
* @minsize: Minimum acceptable size of allocation
* @len: Returned value of allocation length
* @prio: Allocation type - ALLOC_{NORMAL,DELETION}
*
* Requests a block of physical space on the flash. Returns zero for success
* and puts 'len' into the appropriate place, or returns -ENOSPC or another
* error if appropriate.
*
* If it returns zero, jffs2_reserve_space() also downs the per-filesystem
* allocation semaphore, to prevent more than one allocation from being
* active at any time. The semaphore is later released by jffs2_complete_reservation().
*
* jffs2_reserve_space() may trigger garbage collection in order to make room
* for the requested allocation.
*/
static int jffs2_do_reserve_space(struct jffs2_sb_info *c, uint32_t minsize,
uint32_t *len, uint32_t sumsize);
int jffs2_reserve_space(struct jffs2_sb_info *c, uint32_t minsize,
uint32_t *len, int prio, uint32_t sumsize)
{
int ret = -EAGAIN;
int blocksneeded = c->resv_blocks_write;
/* align it */
minsize = PAD(minsize);
jffs2_dbg(1, "%s(): Requested 0x%x bytes\n", __func__, minsize);
mutex_lock(&c->alloc_sem);
jffs2_dbg(1, "%s(): alloc sem got\n", __func__);
spin_lock(&c->erase_completion_lock);
/*
* Check if the free space is greater than the size of the reserved pool.
* If not, only allow root to proceed with writing.
*/
if (prio != ALLOC_DELETION && !jffs2_rp_can_write(c)) {
ret = -ENOSPC;
goto out;
}
/* this needs a little more thought (true <tglx> :)) */
while(ret == -EAGAIN) {
while(c->nr_free_blocks + c->nr_erasing_blocks < blocksneeded) {
uint32_t dirty, avail;
/* calculate real dirty size
* dirty_size contains blocks on erase_pending_list
* those blocks are counted in c->nr_erasing_blocks.
* If one block is actually erased, it is no longer counted as dirty_space
* but it is counted in c->nr_erasing_blocks, so we add it and subtract it
* with c->nr_erasing_blocks * c->sector_size again.
* Blocks on erasable_list are counted as dirty_size, but not in c->nr_erasing_blocks
* This helps us to force gc and pick eventually a clean block to spread the load.
* We add unchecked_size here, as we hopefully will find some space to use.
* This will affect the sum only once, as gc first finishes checking
* of nodes.
*/
dirty = c->dirty_size + c->erasing_size - c->nr_erasing_blocks * c->sector_size + c->unchecked_size;
if (dirty < c->nospc_dirty_size) {
if (prio == ALLOC_DELETION && c->nr_free_blocks + c->nr_erasing_blocks >= c->resv_blocks_deletion) {
jffs2_dbg(1, "%s(): Low on dirty space to GC, but it's a deletion. Allowing...\n",
__func__);
break;
}
jffs2_dbg(1, "dirty size 0x%08x + unchecked_size 0x%08x < nospc_dirty_size 0x%08x, returning -ENOSPC\n",
dirty, c->unchecked_size,
c->sector_size);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->alloc_sem);
return -ENOSPC;
}
/* Calc possibly available space. Possibly available means that we
* don't know whether the unchecked size contains obsoleted nodes, which could give us
* some more usable space. This will affect the sum only once, as gc first finishes
* checking of nodes.
* Return -ENOSPC if the maximum possibly available space is less than or equal to
* blocksneeded * sector_size.
* This blocks endless gc looping on a filesystem, which is nearly full, even if
* the check above passes.
*/
avail = c->free_size + c->dirty_size + c->erasing_size + c->unchecked_size;
if ( (avail / c->sector_size) <= blocksneeded) {
if (prio == ALLOC_DELETION && c->nr_free_blocks + c->nr_erasing_blocks >= c->resv_blocks_deletion) {
jffs2_dbg(1, "%s(): Low on possibly available space, but it's a deletion. Allowing...\n",
__func__);
break;
}
jffs2_dbg(1, "max. available size 0x%08x < blocksneeded * sector_size 0x%08x, returning -ENOSPC\n",
avail, blocksneeded * c->sector_size);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->alloc_sem);
return -ENOSPC;
}
mutex_unlock(&c->alloc_sem);
jffs2_dbg(1, "Triggering GC pass. nr_free_blocks %d, nr_erasing_blocks %d, free_size 0x%08x, dirty_size 0x%08x, wasted_size 0x%08x, used_size 0x%08x, erasing_size 0x%08x, bad_size 0x%08x (total 0x%08x of 0x%08x)\n",
c->nr_free_blocks, c->nr_erasing_blocks,
c->free_size, c->dirty_size, c->wasted_size,
c->used_size, c->erasing_size, c->bad_size,
c->free_size + c->dirty_size +
c->wasted_size + c->used_size +
c->erasing_size + c->bad_size,
c->flash_size);
spin_unlock(&c->erase_completion_lock);
ret = jffs2_garbage_collect_pass(c);
if (ret == -EAGAIN) {
spin_lock(&c->erase_completion_lock);
if (c->nr_erasing_blocks &&
list_empty(&c->erase_pending_list) &&
list_empty(&c->erase_complete_list)) {
DECLARE_WAITQUEUE(wait, current);
set_current_state(TASK_UNINTERRUPTIBLE);
add_wait_queue(&c->erase_wait, &wait);
jffs2_dbg(1, "%s waiting for erase to complete\n",
__func__);
spin_unlock(&c->erase_completion_lock);
schedule();
remove_wait_queue(&c->erase_wait, &wait);
} else
spin_unlock(&c->erase_completion_lock);
} else if (ret)
return ret;
cond_resched();
if (signal_pending(current))
return -EINTR;
mutex_lock(&c->alloc_sem);
spin_lock(&c->erase_completion_lock);
}
ret = jffs2_do_reserve_space(c, minsize, len, sumsize);
if (ret) {
jffs2_dbg(1, "%s(): ret is %d\n", __func__, ret);
}
}
out:
spin_unlock(&c->erase_completion_lock);
if (!ret)
ret = jffs2_prealloc_raw_node_refs(c, c->nextblock, 1);
if (ret)
mutex_unlock(&c->alloc_sem);
return ret;
}
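/* Variant used by the garbage collector: unlike jffs2_reserve_space() it
* does not take c->alloc_sem and never triggers another GC pass; it simply
* retries jffs2_do_reserve_space() until it stops returning -EAGAIN.
*/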
int jffs2_reserve_space_gc(struct jffs2_sb_info *c, uint32_t minsize,
uint32_t *len, uint32_t sumsize)
{
int ret;
minsize = PAD(minsize);
jffs2_dbg(1, "%s(): Requested 0x%x bytes\n", __func__, minsize);
while (true) {
spin_lock(&c->erase_completion_lock);
ret = jffs2_do_reserve_space(c, minsize, len, sumsize);
if (ret) {
jffs2_dbg(1, "%s(): looping, ret is %d\n",
__func__, ret);
}
spin_unlock(&c->erase_completion_lock);
if (ret == -EAGAIN)
cond_resched();
else
break;
}
if (!ret)
ret = jffs2_prealloc_raw_node_refs(c, c->nextblock, 1);
return ret;
}
/* Classify nextblock (clean, dirty or verydirty) and force selection of another one */
static void jffs2_close_nextblock(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
{
if (c->nextblock == NULL) {
jffs2_dbg(1, "%s(): Erase block at 0x%08x has already been placed in a list\n",
__func__, jeb->offset);
return;
}
/* Check whether we have a dirty block now, or whether it was already dirty */
if (ISDIRTY (jeb->wasted_size + jeb->dirty_size)) {
c->dirty_size += jeb->wasted_size;
c->wasted_size -= jeb->wasted_size;
jeb->dirty_size += jeb->wasted_size;
jeb->wasted_size = 0;
if (VERYDIRTY(c, jeb->dirty_size)) {
jffs2_dbg(1, "Adding full erase block at 0x%08x to very_dirty_list (free 0x%08x, dirty 0x%08x, used 0x%08x\n",
jeb->offset, jeb->free_size, jeb->dirty_size,
jeb->used_size);
list_add_tail(&jeb->list, &c->very_dirty_list);
} else {
jffs2_dbg(1, "Adding full erase block at 0x%08x to dirty_list (free 0x%08x, dirty 0x%08x, used 0x%08x\n",
jeb->offset, jeb->free_size, jeb->dirty_size,
jeb->used_size);
list_add_tail(&jeb->list, &c->dirty_list);
}
} else {
jffs2_dbg(1, "Adding full erase block at 0x%08x to clean_list (free 0x%08x, dirty 0x%08x, used 0x%08x\n",
jeb->offset, jeb->free_size, jeb->dirty_size,
jeb->used_size);
list_add_tail(&jeb->list, &c->clean_list);
}
c->nextblock = NULL;
}
/* Select a new jeb for nextblock */
static int jffs2_find_nextblock(struct jffs2_sb_info *c)
{
struct list_head *next;
/* Take the next block off the 'free' list */
if (list_empty(&c->free_list)) {
if (!c->nr_erasing_blocks &&
!list_empty(&c->erasable_list)) {
struct jffs2_eraseblock *ejeb;
ejeb = list_entry(c->erasable_list.next, struct jffs2_eraseblock, list);
list_move_tail(&ejeb->list, &c->erase_pending_list);
c->nr_erasing_blocks++;
jffs2_garbage_collect_trigger(c);
jffs2_dbg(1, "%s(): Triggering erase of erasable block at 0x%08x\n",
__func__, ejeb->offset);
}
if (!c->nr_erasing_blocks &&
!list_empty(&c->erasable_pending_wbuf_list)) {
jffs2_dbg(1, "%s(): Flushing write buffer\n",
__func__);
/* c->nextblock is NULL, no update to c->nextblock allowed */
spin_unlock(&c->erase_completion_lock);
jffs2_flush_wbuf_pad(c);
spin_lock(&c->erase_completion_lock);
/* Have another go. It'll be on the erasable_list now */
return -EAGAIN;
}
if (!c->nr_erasing_blocks) {
/* Ouch. We're in GC, or we wouldn't have got here.
And there's no space left. At all. */
pr_crit("Argh. No free space left for GC. nr_erasing_blocks is %d. nr_free_blocks is %d. (erasableempty: %s, erasingempty: %s, erasependingempty: %s)\n",
c->nr_erasing_blocks, c->nr_free_blocks,
list_empty(&c->erasable_list) ? "yes" : "no",
list_empty(&c->erasing_list) ? "yes" : "no",
list_empty(&c->erase_pending_list) ? "yes" : "no");
return -ENOSPC;
}
spin_unlock(&c->erase_completion_lock);
/* Don't wait for it; just erase one right now */
jffs2_erase_pending_blocks(c, 1);
spin_lock(&c->erase_completion_lock);
/* An erase may have failed, decreasing the
amount of free space available. So we must
restart from the beginning */
return -EAGAIN;
}
next = c->free_list.next;
list_del(next);
c->nextblock = list_entry(next, struct jffs2_eraseblock, list);
c->nr_free_blocks--;
jffs2_sum_reset_collected(c->summary); /* reset collected summary */
#ifdef CONFIG_JFFS2_FS_WRITEBUFFER
/* Adjust the write buffer offset, else we get a non-contiguous write bug */
if (!(c->wbuf_ofs % c->sector_size) && !c->wbuf_len)
c->wbuf_ofs = 0xffffffff;
#endif
jffs2_dbg(1, "%s(): new nextblock = 0x%08x\n",
__func__, c->nextblock->offset);
return 0;
}
/* Called with alloc sem _and_ erase_completion_lock */
static int jffs2_do_reserve_space(struct jffs2_sb_info *c, uint32_t minsize,
uint32_t *len, uint32_t sumsize)
{
struct jffs2_eraseblock *jeb = c->nextblock;
uint32_t reserved_size; /* for summary information at the end of the jeb */
int ret;
restart:
reserved_size = 0;
if (jffs2_sum_active() && (sumsize != JFFS2_SUMMARY_NOSUM_SIZE)) {
/* NOSUM_SIZE means not to generate summary */
if (jeb) {
reserved_size = PAD(sumsize + c->summary->sum_size + JFFS2_SUMMARY_FRAME_SIZE);
dbg_summary("minsize=%d , jeb->free=%d ,"
"summary->size=%d , sumsize=%d\n",
minsize, jeb->free_size,
c->summary->sum_size, sumsize);
}
/* Is there enough space to write out the current node, or do we have to
write out the summary information now, close this jeb and select a new nextblock? */
if (jeb && (PAD(minsize) + PAD(c->summary->sum_size + sumsize +
JFFS2_SUMMARY_FRAME_SIZE) > jeb->free_size)) {
/* Has summary been disabled for this jeb? */
if (jffs2_sum_is_disabled(c->summary)) {
sumsize = JFFS2_SUMMARY_NOSUM_SIZE;
goto restart;
}
/* Writing out the collected summary information */
dbg_summary("generating summary for 0x%08x.\n", jeb->offset);
ret = jffs2_sum_write_sumnode(c);
if (ret)
return ret;
if (jffs2_sum_is_disabled(c->summary)) {
/* jffs2_sum_write_sumnode() couldn't write out the summary information;
summary has been disabled for this jeb and the collected information freed
*/
sumsize = JFFS2_SUMMARY_NOSUM_SIZE;
goto restart;
}
jffs2_close_nextblock(c, jeb);
jeb = NULL;
/* always keep a valid value in reserved_size */
reserved_size = PAD(sumsize + c->summary->sum_size + JFFS2_SUMMARY_FRAME_SIZE);
}
} else {
if (jeb && minsize > jeb->free_size) {
uint32_t waste;
/* Skip the end of this block and file it as having some dirty space */
/* If there's a pending write to it, flush now */
if (jffs2_wbuf_dirty(c)) {
spin_unlock(&c->erase_completion_lock);
jffs2_dbg(1, "%s(): Flushing write buffer\n",
__func__);
jffs2_flush_wbuf_pad(c);
spin_lock(&c->erase_completion_lock);
jeb = c->nextblock;
goto restart;
}
spin_unlock(&c->erase_completion_lock);
ret = jffs2_prealloc_raw_node_refs(c, jeb, 1);
/* Just lock it again and continue. Nothing much can change because
we hold c->alloc_sem anyway. In fact, it's not entirely clear why
we hold c->erase_completion_lock in the majority of this function...
but that's a question for another (more caffeine-rich) day. */
spin_lock(&c->erase_completion_lock);
if (ret)
return ret;
waste = jeb->free_size;
jffs2_link_node_ref(c, jeb,
(jeb->offset + c->sector_size - waste) | REF_OBSOLETE,
waste, NULL);
/* FIXME: that made it count as dirty. Convert to wasted */
jeb->dirty_size -= waste;
c->dirty_size -= waste;
jeb->wasted_size += waste;
c->wasted_size += waste;
jffs2_close_nextblock(c, jeb);
jeb = NULL;
}
}
if (!jeb) {
ret = jffs2_find_nextblock(c);
if (ret)
return ret;
jeb = c->nextblock;
if (jeb->free_size != c->sector_size - c->cleanmarker_size) {
pr_warn("Eep. Block 0x%08x taken from free_list had free_size of 0x%08x!!\n",
jeb->offset, jeb->free_size);
goto restart;
}
}
/* OK, jeb (==c->nextblock) is now pointing at a block which definitely has
enough space */
*len = jeb->free_size - reserved_size;
if (c->cleanmarker_size && jeb->used_size == c->cleanmarker_size &&
!jeb->first_node->next_in_ino) {
/* Only node in it beforehand was a CLEANMARKER node (we think).
So mark it obsolete now that there's going to be another node
in the block. This will reduce used_size to zero, but we've
already set c->nextblock so that jffs2_mark_node_obsolete()
won't try to refile it to the dirty_list.
*/
spin_unlock(&c->erase_completion_lock);
jffs2_mark_node_obsolete(c, jeb->first_node);
spin_lock(&c->erase_completion_lock);
}
jffs2_dbg(1, "%s(): Giving 0x%x bytes at 0x%x\n",
__func__,
*len, jeb->offset + (c->sector_size - jeb->free_size));
return 0;
}
/**
* jffs2_add_physical_node_ref - add a physical node reference to the list
* @c: superblock info
* @ofs: physical offset of the new node, with the REF_* state in the low two bits
* @len: length of this physical node
* @ic: inode cache to which the node belongs, or NULL
*
* Should only be used to report nodes for which space has been allocated
* by jffs2_reserve_space.
*
* Must be called with the alloc_sem held.
*/
struct jffs2_raw_node_ref *jffs2_add_physical_node_ref(struct jffs2_sb_info *c,
uint32_t ofs, uint32_t len,
struct jffs2_inode_cache *ic)
{
struct jffs2_eraseblock *jeb;
struct jffs2_raw_node_ref *new;
jeb = &c->blocks[ofs / c->sector_size];
jffs2_dbg(1, "%s(): Node at 0x%x(%d), size 0x%x\n",
__func__, ofs & ~3, ofs & 3, len);
#if 1
/* Allow non-obsolete nodes only to be added at the end of c->nextblock,
if c->nextblock is set. Note that wbuf.c will file obsolete nodes
even after refiling c->nextblock */
if ((c->nextblock || ((ofs & 3) != REF_OBSOLETE))
&& (jeb != c->nextblock || (ofs & ~3) != jeb->offset + (c->sector_size - jeb->free_size))) {
pr_warn("argh. node added in wrong place at 0x%08x(%d)\n",
ofs & ~3, ofs & 3);
if (c->nextblock)
pr_warn("nextblock 0x%08x", c->nextblock->offset);
else
pr_warn("No nextblock");
pr_cont(", expected at %08x\n",
jeb->offset + (c->sector_size - jeb->free_size));
return ERR_PTR(-EINVAL);
}
#endif
spin_lock(&c->erase_completion_lock);
new = jffs2_link_node_ref(c, jeb, ofs, len, ic);
if (!jeb->free_size && !jeb->dirty_size && !ISDIRTY(jeb->wasted_size)) {
/* If it lives on the dirty_list, jffs2_reserve_space will put it there */
jffs2_dbg(1, "Adding full erase block at 0x%08x to clean_list (free 0x%08x, dirty 0x%08x, used 0x%08x\n",
jeb->offset, jeb->free_size, jeb->dirty_size,
jeb->used_size);
if (jffs2_wbuf_dirty(c)) {
/* Flush the last write in the block if it's outstanding */
spin_unlock(&c->erase_completion_lock);
jffs2_flush_wbuf_pad(c);
spin_lock(&c->erase_completion_lock);
}
list_add_tail(&jeb->list, &c->clean_list);
c->nextblock = NULL;
}
jffs2_dbg_acct_sanity_check_nolock(c,jeb);
jffs2_dbg_acct_paranoia_check_nolock(c, jeb);
spin_unlock(&c->erase_completion_lock);
return new;
}
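/* Release the reservation taken by jffs2_reserve_space(): trigger the
* garbage collector and drop c->alloc_sem.
*/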
void jffs2_complete_reservation(struct jffs2_sb_info *c)
{
jffs2_dbg(1, "jffs2_complete_reservation()\n");
spin_lock(&c->erase_completion_lock);
jffs2_garbage_collect_trigger(c);
spin_unlock(&c->erase_completion_lock);
mutex_unlock(&c->alloc_sem);
}
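/* Debug helper: return 1 if 'obj' is linked on the list headed at 'head' */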
static inline int on_list(struct list_head *obj, struct list_head *head)
{
struct list_head *this;
list_for_each(this, head) {
if (this == obj) {
jffs2_dbg(1, "%p is on list at %p\n", obj, head);
return 1;
}
}
return 0;
}
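/* Mark 'ref' obsolete: adjust the space accounting of its eraseblock, refile
* the block onto the appropriate list and, where the flash allows it, clear
* the node's ACCURATE bit on the medium so a later scan also sees it as
* obsolete.
*/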
void jffs2_mark_node_obsolete(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *ref)
{
struct jffs2_eraseblock *jeb;
int blocknr;
struct jffs2_unknown_node n;
int ret, addedsize;
size_t retlen;
uint32_t freed_len;
if(unlikely(!ref)) {
pr_notice("EEEEEK. jffs2_mark_node_obsolete called with NULL node\n");
return;
}
if (ref_obsolete(ref)) {
jffs2_dbg(1, "%s(): called with already obsolete node at 0x%08x\n",
__func__, ref_offset(ref));
return;
}
blocknr = ref->flash_offset / c->sector_size;
if (blocknr >= c->nr_blocks) {
pr_notice("raw node at 0x%08x is off the end of device!\n",
ref->flash_offset);
BUG();
}
jeb = &c->blocks[blocknr];
if (jffs2_can_mark_obsolete(c) && !jffs2_is_readonly(c) &&
!(c->flags & (JFFS2_SB_FLAG_SCANNING | JFFS2_SB_FLAG_BUILDING))) {
/* Hm. This may confuse static lock analysis. If any of the above
three conditions is false, we're going to return from this
function without actually obliterating any nodes or freeing
any jffs2_raw_node_refs. So we don't need to stop erases from
happening, or protect against people holding an obsolete
jffs2_raw_node_ref without the erase_completion_lock. */
mutex_lock(&c->erase_free_sem);
}
spin_lock(&c->erase_completion_lock);
freed_len = ref_totlen(c, jeb, ref);
if (ref_flags(ref) == REF_UNCHECKED) {
D1(if (unlikely(jeb->unchecked_size < freed_len)) {
pr_notice("raw unchecked node of size 0x%08x freed from erase block %d at 0x%08x, but unchecked_size was already 0x%08x\n",
freed_len, blocknr,
ref->flash_offset, jeb->unchecked_size);
BUG();
})
jffs2_dbg(1, "Obsoleting previously unchecked node at 0x%08x of len %x\n",
ref_offset(ref), freed_len);
jeb->unchecked_size -= freed_len;
c->unchecked_size -= freed_len;
} else {
D1(if (unlikely(jeb->used_size < freed_len)) {
pr_notice("raw node of size 0x%08x freed from erase block %d at 0x%08x, but used_size was already 0x%08x\n",
freed_len, blocknr,
ref->flash_offset, jeb->used_size);
BUG();
})
jffs2_dbg(1, "Obsoleting node at 0x%08x of len %#x: ",
ref_offset(ref), freed_len);
jeb->used_size -= freed_len;
c->used_size -= freed_len;
}
// Take care that the wasted size is taken into account
if ((jeb->dirty_size || ISDIRTY(jeb->wasted_size + freed_len)) && jeb != c->nextblock) {
jffs2_dbg(1, "Dirtying\n");
addedsize = freed_len;
jeb->dirty_size += freed_len;
c->dirty_size += freed_len;
/* Convert wasted space to dirty, if not a bad block */
if (jeb->wasted_size) {
if (on_list(&jeb->list, &c->bad_used_list)) {
jffs2_dbg(1, "Leaving block at %08x on the bad_used_list\n",
jeb->offset);
addedsize = 0; /* To fool the refiling code later */
} else {
jffs2_dbg(1, "Converting %d bytes of wasted space to dirty in block at %08x\n",
jeb->wasted_size, jeb->offset);
addedsize += jeb->wasted_size;
jeb->dirty_size += jeb->wasted_size;
c->dirty_size += jeb->wasted_size;
c->wasted_size -= jeb->wasted_size;
jeb->wasted_size = 0;
}
}
} else {
jffs2_dbg(1, "Wasting\n");
addedsize = 0;
jeb->wasted_size += freed_len;
c->wasted_size += freed_len;
}
ref->flash_offset = ref_offset(ref) | REF_OBSOLETE;
jffs2_dbg_acct_sanity_check_nolock(c, jeb);
jffs2_dbg_acct_paranoia_check_nolock(c, jeb);
if (c->flags & JFFS2_SB_FLAG_SCANNING) {
/* Flash scanning is in progress. Don't muck about with the block
lists because they're not ready yet, and don't actually
obliterate nodes that look obsolete. If they weren't
marked obsolete on the flash at the time they _became_
obsolete, there was probably a reason for that. */
spin_unlock(&c->erase_completion_lock);
/* We didn't lock the erase_free_sem */
return;
}
if (jeb == c->nextblock) {
jffs2_dbg(2, "Not moving nextblock 0x%08x to dirty/erase_pending list\n",
jeb->offset);
} else if (!jeb->used_size && !jeb->unchecked_size) {
if (jeb == c->gcblock) {
jffs2_dbg(1, "gcblock at 0x%08x completely dirtied. Clearing gcblock...\n",
jeb->offset);
c->gcblock = NULL;
} else {
jffs2_dbg(1, "Eraseblock at 0x%08x completely dirtied. Removing from (dirty?) list...\n",
jeb->offset);
list_del(&jeb->list);
}
if (jffs2_wbuf_dirty(c)) {
jffs2_dbg(1, "...and adding to erasable_pending_wbuf_list\n");
list_add_tail(&jeb->list, &c->erasable_pending_wbuf_list);
} else {
if (jiffies & 127) {
/* Most of the time, we just erase it immediately. Otherwise we
spend ages scanning it on mount, etc. */
jffs2_dbg(1, "...and adding to erase_pending_list\n");
list_add_tail(&jeb->list, &c->erase_pending_list);
c->nr_erasing_blocks++;
jffs2_garbage_collect_trigger(c);
} else {
/* Sometimes, however, we leave it elsewhere so it doesn't get
immediately reused, and we spread the load a bit. */
jffs2_dbg(1, "...and adding to erasable_list\n");
list_add_tail(&jeb->list, &c->erasable_list);
}
}
jffs2_dbg(1, "Done OK\n");
} else if (jeb == c->gcblock) {
jffs2_dbg(2, "Not moving gcblock 0x%08x to dirty_list\n",
jeb->offset);
} else if (ISDIRTY(jeb->dirty_size) && !ISDIRTY(jeb->dirty_size - addedsize)) {
jffs2_dbg(1, "Eraseblock at 0x%08x is freshly dirtied. Removing from clean list...\n",
jeb->offset);
list_del(&jeb->list);
jffs2_dbg(1, "...and adding to dirty_list\n");
list_add_tail(&jeb->list, &c->dirty_list);
} else if (VERYDIRTY(c, jeb->dirty_size) &&
!VERYDIRTY(c, jeb->dirty_size - addedsize)) {
jffs2_dbg(1, "Eraseblock at 0x%08x is now very dirty. Removing from dirty list...\n",
jeb->offset);
list_del(&jeb->list);
jffs2_dbg(1, "...and adding to very_dirty_list\n");
list_add_tail(&jeb->list, &c->very_dirty_list);
} else {
jffs2_dbg(1, "Eraseblock at 0x%08x not moved anywhere. (free 0x%08x, dirty 0x%08x, used 0x%08x)\n",
jeb->offset, jeb->free_size, jeb->dirty_size,
jeb->used_size);
}
spin_unlock(&c->erase_completion_lock);
if (!jffs2_can_mark_obsolete(c) || jffs2_is_readonly(c) ||
(c->flags & JFFS2_SB_FLAG_BUILDING)) {
/* We didn't lock the erase_free_sem */
return;
}
/* The erase_free_sem is locked, and has been since before we marked the node obsolete
and potentially put its eraseblock onto the erase_pending_list. Thus, we know that
the block hasn't _already_ been erased, and that 'ref' itself hasn't been freed yet
by jffs2_free_jeb_node_refs() in erase.c. Which is nice. */
jffs2_dbg(1, "obliterating obsoleted node at 0x%08x\n",
ref_offset(ref));
ret = jffs2_flash_read(c, ref_offset(ref), sizeof(n), &retlen, (char *)&n);
if (ret) {
pr_warn("Read error reading from obsoleted node at 0x%08x: %d\n",
ref_offset(ref), ret);
goto out_erase_sem;
}
if (retlen != sizeof(n)) {
pr_warn("Short read from obsoleted node at 0x%08x: %zd\n",
ref_offset(ref), retlen);
goto out_erase_sem;
}
if (PAD(je32_to_cpu(n.totlen)) != PAD(freed_len)) {
pr_warn("Node totlen on flash (0x%08x) != totlen from node ref (0x%08x)\n",
je32_to_cpu(n.totlen), freed_len);
goto out_erase_sem;
}
if (!(je16_to_cpu(n.nodetype) & JFFS2_NODE_ACCURATE)) {
jffs2_dbg(1, "Node at 0x%08x was already marked obsolete (nodetype 0x%04x)\n",
ref_offset(ref), je16_to_cpu(n.nodetype));
goto out_erase_sem;
}
/* XXX FIXME: This is ugly now */
n.nodetype = cpu_to_je16(je16_to_cpu(n.nodetype) & ~JFFS2_NODE_ACCURATE);
ret = jffs2_flash_write(c, ref_offset(ref), sizeof(n), &retlen, (char *)&n);
if (ret) {
pr_warn("Write error in obliterating obsoleted node at 0x%08x: %d\n",
ref_offset(ref), ret);
goto out_erase_sem;
}
if (retlen != sizeof(n)) {
pr_warn("Short write in obliterating obsoleted node at 0x%08x: %zd\n",
ref_offset(ref), retlen);
goto out_erase_sem;
}
/* Nodes which have been marked obsolete no longer need to be
associated with any inode. Remove them from the per-inode list.
Note we can't do this for NAND at the moment because we need
obsolete dirent nodes to stay on the lists, because of the
horridness in jffs2_garbage_collect_deletion_dirent(). Also
because we delete the inocache, and on NAND we need that to
stay around until all the nodes are actually erased, in order
to stop us from giving the same inode number to another newly
created inode. */
if (ref->next_in_ino) {
struct jffs2_inode_cache *ic;
struct jffs2_raw_node_ref **p;
spin_lock(&c->erase_completion_lock);
ic = jffs2_raw_ref_to_ic(ref);
for (p = &ic->nodes; (*p) != ref; p = &((*p)->next_in_ino))
;
*p = ref->next_in_ino;
ref->next_in_ino = NULL;
switch (ic->class) {
#ifdef CONFIG_JFFS2_FS_XATTR
case RAWNODE_CLASS_XATTR_DATUM:
jffs2_release_xattr_datum(c, (struct jffs2_xattr_datum *)ic);
break;
case RAWNODE_CLASS_XATTR_REF:
jffs2_release_xattr_ref(c, (struct jffs2_xattr_ref *)ic);
break;
#endif
default:
if (ic->nodes == (void *)ic && ic->pino_nlink == 0)
jffs2_del_ino_cache(c, ic);
break;
}
spin_unlock(&c->erase_completion_lock);
}
out_erase_sem:
mutex_unlock(&c->erase_free_sem);
}
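/* Decide whether the garbage-collection thread has work to do: pending
* erases, unchecked space still to be verified, too few free blocks with
* enough dirty space to reclaim, or too many very dirty blocks.
*/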
int jffs2_thread_should_wake(struct jffs2_sb_info *c)
{
int ret = 0;
uint32_t dirty;
int nr_very_dirty = 0;
struct jffs2_eraseblock *jeb;
if (!list_empty(&c->erase_complete_list) ||
!list_empty(&c->erase_pending_list))
return 1;
if (c->unchecked_size) {
jffs2_dbg(1, "jffs2_thread_should_wake(): unchecked_size %d, checked_ino #%d\n",
c->unchecked_size, c->checked_ino);
return 1;
}
/* dirty_size includes blocks on the erase_pending_list; those blocks are
* also counted in c->nr_erasing_blocks. Once a block is actually erased it
* is no longer counted as dirty_space, but it is still counted in
* c->nr_erasing_blocks, so we add c->erasing_size and subtract
* c->nr_erasing_blocks * c->sector_size again.
* Blocks on the erasable_list are counted in dirty_size, but not in
* c->nr_erasing_blocks. This helps us to force GC and eventually pick a
* clean block, spreading the load.
*/
dirty = c->dirty_size + c->erasing_size - c->nr_erasing_blocks * c->sector_size;
if (c->nr_free_blocks + c->nr_erasing_blocks < c->resv_blocks_gctrigger &&
(dirty > c->nospc_dirty_size))
ret = 1;
list_for_each_entry(jeb, &c->very_dirty_list, list) {
nr_very_dirty++;
if (nr_very_dirty == c->vdirty_blocks_gctrigger) {
ret = 1;
/* In debug mode, actually go through and count them all */
D1(continue);
break;
}
}
jffs2_dbg(1, "%s(): nr_free_blocks %d, nr_erasing_blocks %d, dirty_size 0x%x, vdirty_blocks %d: %s\n",
__func__, c->nr_free_blocks, c->nr_erasing_blocks,
c->dirty_size, nr_very_dirty, ret ? "yes" : "no");
return ret;
}

199
fs/jffs2/os-linux.h Normal file
View file

@ -0,0 +1,199 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef __JFFS2_OS_LINUX_H__
#define __JFFS2_OS_LINUX_H__
/* JFFS2 uses Linux mode bits natively -- no need for conversion */
#define os_to_jffs2_mode(x) (x)
#define jffs2_to_os_mode(x) (x)
struct kstatfs;
struct kvec;
#define JFFS2_INODE_INFO(i) (list_entry(i, struct jffs2_inode_info, vfs_inode))
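/* The inverse mapping (jffs2_inode_info back to its VFS inode); the macro name is JFFS2_INODE_INFO spelled backwards */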
#define OFNI_EDONI_2SFFJ(f) (&(f)->vfs_inode)
#define JFFS2_SB_INFO(sb) (sb->s_fs_info)
#define OFNI_BS_2SFFJ(c) ((struct super_block *)c->os_priv)
#define JFFS2_F_I_SIZE(f) (OFNI_EDONI_2SFFJ(f)->i_size)
#define JFFS2_F_I_MODE(f) (OFNI_EDONI_2SFFJ(f)->i_mode)
#define JFFS2_F_I_UID(f) (i_uid_read(OFNI_EDONI_2SFFJ(f)))
#define JFFS2_F_I_GID(f) (i_gid_read(OFNI_EDONI_2SFFJ(f)))
#define JFFS2_F_I_RDEV(f) (OFNI_EDONI_2SFFJ(f)->i_rdev)
#define ITIME(sec) ((struct timespec){sec, 0})
#define I_SEC(tv) ((tv).tv_sec)
#define JFFS2_F_I_CTIME(f) (OFNI_EDONI_2SFFJ(f)->i_ctime.tv_sec)
#define JFFS2_F_I_MTIME(f) (OFNI_EDONI_2SFFJ(f)->i_mtime.tv_sec)
#define JFFS2_F_I_ATIME(f) (OFNI_EDONI_2SFFJ(f)->i_atime.tv_sec)
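/* Put the current task on 'wq', drop spinlock 's', and sleep uninterruptibly until woken */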
#define sleep_on_spinunlock(wq, s) \
do { \
DECLARE_WAITQUEUE(__wait, current); \
add_wait_queue((wq), &__wait); \
set_current_state(TASK_UNINTERRUPTIBLE); \
spin_unlock(s); \
schedule(); \
remove_wait_queue((wq), &__wait); \
} while(0)
static inline void jffs2_init_inode_info(struct jffs2_inode_info *f)
{
f->highest_version = 0;
f->fragtree = RB_ROOT;
f->metadata = NULL;
f->dents = NULL;
f->target = NULL;
f->flags = 0;
f->usercompr = 0;
}
#define jffs2_is_readonly(c) (OFNI_BS_2SFFJ(c)->s_flags & MS_RDONLY)
#define SECTOR_ADDR(x) ( (((unsigned long)(x) / c->sector_size) * c->sector_size) )
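/* Without write-buffer support, the wbuf hooks below compile away to constants and no-ops */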
#ifndef CONFIG_JFFS2_FS_WRITEBUFFER
#ifdef CONFIG_JFFS2_SUMMARY
#define jffs2_can_mark_obsolete(c) (0)
#else
#define jffs2_can_mark_obsolete(c) (1)
#endif
#define jffs2_is_writebuffered(c) (0)
#define jffs2_cleanmarker_oob(c) (0)
#define jffs2_write_nand_cleanmarker(c,jeb) (-EIO)
#define jffs2_flash_write(c, ofs, len, retlen, buf) jffs2_flash_direct_write(c, ofs, len, retlen, buf)
#define jffs2_flash_read(c, ofs, len, retlen, buf) (mtd_read((c)->mtd, ofs, len, retlen, buf))
#define jffs2_flush_wbuf_pad(c) ({ do{} while(0); (void)(c), 0; })
#define jffs2_flush_wbuf_gc(c, i) ({ do{} while(0); (void)(c), (void) i, 0; })
#define jffs2_write_nand_badblock(c,jeb,bad_offset) (1)
#define jffs2_nand_flash_setup(c) (0)
#define jffs2_nand_flash_cleanup(c) do {} while(0)
#define jffs2_wbuf_dirty(c) (0)
#define jffs2_flash_writev(a,b,c,d,e,f) jffs2_flash_direct_writev(a,b,c,d,e)
#define jffs2_wbuf_timeout NULL
#define jffs2_wbuf_process NULL
#define jffs2_dataflash(c) (0)
#define jffs2_dataflash_setup(c) (0)
#define jffs2_dataflash_cleanup(c) do {} while (0)
#define jffs2_nor_wbuf_flash(c) (0)
#define jffs2_nor_wbuf_flash_setup(c) (0)
#define jffs2_nor_wbuf_flash_cleanup(c) do {} while (0)
#define jffs2_ubivol(c) (0)
#define jffs2_ubivol_setup(c) (0)
#define jffs2_ubivol_cleanup(c) do {} while (0)
#define jffs2_dirty_trigger(c) do {} while (0)
#else /* NAND and/or ECC'd NOR support present */
#define jffs2_is_writebuffered(c) (c->wbuf != NULL)
#ifdef CONFIG_JFFS2_SUMMARY
#define jffs2_can_mark_obsolete(c) (0)
#else
#define jffs2_can_mark_obsolete(c) (c->mtd->flags & (MTD_BIT_WRITEABLE))
#endif
#define jffs2_cleanmarker_oob(c) (c->mtd->type == MTD_NANDFLASH)
#define jffs2_wbuf_dirty(c) (!!(c)->wbuf_len)
/* wbuf.c */
int jffs2_flash_writev(struct jffs2_sb_info *c, const struct kvec *vecs, unsigned long count, loff_t to, size_t *retlen, uint32_t ino);
int jffs2_flash_write(struct jffs2_sb_info *c, loff_t ofs, size_t len, size_t *retlen, const u_char *buf);
int jffs2_flash_read(struct jffs2_sb_info *c, loff_t ofs, size_t len, size_t *retlen, u_char *buf);
int jffs2_check_oob_empty(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,int mode);
int jffs2_check_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb);
int jffs2_write_nand_badblock(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t bad_offset);
void jffs2_wbuf_timeout(unsigned long data);
void jffs2_wbuf_process(void *data);
int jffs2_flush_wbuf_gc(struct jffs2_sb_info *c, uint32_t ino);
int jffs2_flush_wbuf_pad(struct jffs2_sb_info *c);
int jffs2_nand_flash_setup(struct jffs2_sb_info *c);
void jffs2_nand_flash_cleanup(struct jffs2_sb_info *c);
#define jffs2_dataflash(c) (c->mtd->type == MTD_DATAFLASH)
int jffs2_dataflash_setup(struct jffs2_sb_info *c);
void jffs2_dataflash_cleanup(struct jffs2_sb_info *c);
#define jffs2_ubivol(c) (c->mtd->type == MTD_UBIVOLUME)
int jffs2_ubivol_setup(struct jffs2_sb_info *c);
void jffs2_ubivol_cleanup(struct jffs2_sb_info *c);
#define jffs2_nor_wbuf_flash(c) (c->mtd->type == MTD_NORFLASH && ! (c->mtd->flags & MTD_BIT_WRITEABLE))
int jffs2_nor_wbuf_flash_setup(struct jffs2_sb_info *c);
void jffs2_nor_wbuf_flash_cleanup(struct jffs2_sb_info *c);
void jffs2_dirty_trigger(struct jffs2_sb_info *c);
#endif /* WRITEBUFFER */
/* background.c */
int jffs2_start_garbage_collect_thread(struct jffs2_sb_info *c);
void jffs2_stop_garbage_collect_thread(struct jffs2_sb_info *c);
void jffs2_garbage_collect_trigger(struct jffs2_sb_info *c);
/* dir.c */
extern const struct file_operations jffs2_dir_operations;
extern const struct inode_operations jffs2_dir_inode_operations;
/* file.c */
extern const struct file_operations jffs2_file_operations;
extern const struct inode_operations jffs2_file_inode_operations;
extern const struct address_space_operations jffs2_file_address_operations;
int jffs2_fsync(struct file *, loff_t, loff_t, int);
int jffs2_do_readpage_unlock (struct inode *inode, struct page *pg);
/* ioctl.c */
long jffs2_ioctl(struct file *, unsigned int, unsigned long);
/* symlink.c */
extern const struct inode_operations jffs2_symlink_inode_operations;
/* fs.c */
int jffs2_setattr (struct dentry *, struct iattr *);
int jffs2_do_setattr (struct inode *, struct iattr *);
struct inode *jffs2_iget(struct super_block *, unsigned long);
void jffs2_evict_inode (struct inode *);
void jffs2_dirty_inode(struct inode *inode, int flags);
struct inode *jffs2_new_inode (struct inode *dir_i, umode_t mode,
struct jffs2_raw_inode *ri);
int jffs2_statfs (struct dentry *, struct kstatfs *);
int jffs2_do_remount_fs(struct super_block *, int *, char *);
int jffs2_do_fill_super(struct super_block *sb, void *data, int silent);
void jffs2_gc_release_inode(struct jffs2_sb_info *c,
struct jffs2_inode_info *f);
struct jffs2_inode_info *jffs2_gc_fetch_inode(struct jffs2_sb_info *c,
int inum, int unlinked);
unsigned char *jffs2_gc_fetch_page(struct jffs2_sb_info *c,
struct jffs2_inode_info *f,
unsigned long offset,
unsigned long *priv);
void jffs2_gc_release_page(struct jffs2_sb_info *c,
unsigned char *pg,
unsigned long *priv);
void jffs2_flash_cleanup(struct jffs2_sb_info *c);
/* writev.c */
int jffs2_flash_direct_writev(struct jffs2_sb_info *c, const struct kvec *vecs,
unsigned long count, loff_t to, size_t *retlen);
int jffs2_flash_direct_write(struct jffs2_sb_info *c, loff_t ofs, size_t len,
size_t *retlen, const u_char *buf);
#endif /* __JFFS2_OS_LINUX_H__ */

228
fs/jffs2/read.c Normal file
View file

@ -0,0 +1,228 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/crc32.h>
#include <linux/pagemap.h>
#include <linux/mtd/mtd.h>
#include <linux/compiler.h>
#include "nodelist.h"
#include "compr.h"
int jffs2_read_dnode(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_full_dnode *fd, unsigned char *buf,
int ofs, int len)
{
struct jffs2_raw_inode *ri;
size_t readlen;
uint32_t crc;
unsigned char *decomprbuf = NULL;
unsigned char *readbuf = NULL;
int ret = 0;
ri = jffs2_alloc_raw_inode();
if (!ri)
return -ENOMEM;
ret = jffs2_flash_read(c, ref_offset(fd->raw), sizeof(*ri), &readlen, (char *)ri);
if (ret) {
jffs2_free_raw_inode(ri);
pr_warn("Error reading node from 0x%08x: %d\n",
ref_offset(fd->raw), ret);
return ret;
}
if (readlen != sizeof(*ri)) {
jffs2_free_raw_inode(ri);
pr_warn("Short read from 0x%08x: wanted 0x%zx bytes, got 0x%zx\n",
ref_offset(fd->raw), sizeof(*ri), readlen);
return -EIO;
}
crc = crc32(0, ri, sizeof(*ri)-8);
jffs2_dbg(1, "Node read from %08x: node_crc %08x, calculated CRC %08x. dsize %x, csize %x, offset %x, buf %p\n",
ref_offset(fd->raw), je32_to_cpu(ri->node_crc),
crc, je32_to_cpu(ri->dsize), je32_to_cpu(ri->csize),
je32_to_cpu(ri->offset), buf);
if (crc != je32_to_cpu(ri->node_crc)) {
pr_warn("Node CRC %08x != calculated CRC %08x for node at %08x\n",
je32_to_cpu(ri->node_crc), crc, ref_offset(fd->raw));
ret = -EIO;
goto out_ri;
}
/* There was a bug where we wrote hole nodes out with csize/dsize
swapped. Deal with it */
if (ri->compr == JFFS2_COMPR_ZERO && !je32_to_cpu(ri->dsize) &&
je32_to_cpu(ri->csize)) {
ri->dsize = ri->csize;
ri->csize = cpu_to_je32(0);
}
D1(if(ofs + len > je32_to_cpu(ri->dsize)) {
pr_warn("jffs2_read_dnode() asked for %d bytes at %d from %d-byte node\n",
len, ofs, je32_to_cpu(ri->dsize));
ret = -EINVAL;
goto out_ri;
});
if (ri->compr == JFFS2_COMPR_ZERO) {
memset(buf, 0, len);
goto out_ri;
}
/* Cases:
Reading whole node and it's uncompressed - read directly to buffer provided, check CRC.
Reading whole node and it's compressed - read into comprbuf, check CRC and decompress to buffer provided
Reading partial node and it's uncompressed - read into readbuf, check CRC, and copy
Reading partial node and it's compressed - read into readbuf, check checksum, decompress to decomprbuf and copy
*/
if (ri->compr == JFFS2_COMPR_NONE && len == je32_to_cpu(ri->dsize)) {
readbuf = buf;
} else {
readbuf = kmalloc(je32_to_cpu(ri->csize), GFP_KERNEL);
if (!readbuf) {
ret = -ENOMEM;
goto out_ri;
}
}
if (ri->compr != JFFS2_COMPR_NONE) {
if (len < je32_to_cpu(ri->dsize)) {
decomprbuf = kmalloc(je32_to_cpu(ri->dsize), GFP_KERNEL);
if (!decomprbuf) {
ret = -ENOMEM;
goto out_readbuf;
}
} else {
decomprbuf = buf;
}
} else {
decomprbuf = readbuf;
}
jffs2_dbg(2, "Read %d bytes to %p\n", je32_to_cpu(ri->csize),
readbuf);
ret = jffs2_flash_read(c, (ref_offset(fd->raw)) + sizeof(*ri),
je32_to_cpu(ri->csize), &readlen, readbuf);
if (!ret && readlen != je32_to_cpu(ri->csize))
ret = -EIO;
if (ret)
goto out_decomprbuf;
crc = crc32(0, readbuf, je32_to_cpu(ri->csize));
if (crc != je32_to_cpu(ri->data_crc)) {
pr_warn("Data CRC %08x != calculated CRC %08x for node at %08x\n",
je32_to_cpu(ri->data_crc), crc, ref_offset(fd->raw));
ret = -EIO;
goto out_decomprbuf;
}
jffs2_dbg(2, "Data CRC matches calculated CRC %08x\n", crc);
if (ri->compr != JFFS2_COMPR_NONE) {
jffs2_dbg(2, "Decompress %d bytes from %p to %d bytes at %p\n",
je32_to_cpu(ri->csize), readbuf,
je32_to_cpu(ri->dsize), decomprbuf);
ret = jffs2_decompress(c, f, ri->compr | (ri->usercompr << 8), readbuf, decomprbuf, je32_to_cpu(ri->csize), je32_to_cpu(ri->dsize));
if (ret) {
pr_warn("Error: jffs2_decompress returned %d\n", ret);
goto out_decomprbuf;
}
}
if (len < je32_to_cpu(ri->dsize)) {
memcpy(buf, decomprbuf+ofs, len);
}
out_decomprbuf:
if(decomprbuf != buf && decomprbuf != readbuf)
kfree(decomprbuf);
out_readbuf:
if(readbuf != buf)
kfree(readbuf);
out_ri:
jffs2_free_raw_inode(ri);
return ret;
}
int jffs2_read_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
unsigned char *buf, uint32_t offset, uint32_t len)
{
uint32_t end = offset + len;
struct jffs2_node_frag *frag;
int ret;
jffs2_dbg(1, "%s(): ino #%u, range 0x%08x-0x%08x\n",
__func__, f->inocache->ino, offset, offset + len);
frag = jffs2_lookup_node_frag(&f->fragtree, offset);
/* XXX FIXME: Where a single physical node actually shows up in two
frags, we read it twice. Don't do that. */
/* Now we're pointing at the first frag which overlaps our page
* (or perhaps is before it, if we've been asked to read off the
* end of the file). */
while(offset < end) {
jffs2_dbg(2, "%s(): offset %d, end %d\n",
__func__, offset, end);
if (unlikely(!frag || frag->ofs > offset ||
frag->ofs + frag->size <= offset)) {
uint32_t holesize = end - offset;
if (frag && frag->ofs > offset) {
jffs2_dbg(1, "Eep. Hole in ino #%u fraglist. frag->ofs = 0x%08x, offset = 0x%08x\n",
f->inocache->ino, frag->ofs, offset);
holesize = min(holesize, frag->ofs - offset);
}
jffs2_dbg(1, "Filling non-frag hole from %d-%d\n",
offset, offset + holesize);
memset(buf, 0, holesize);
buf += holesize;
offset += holesize;
continue;
} else if (unlikely(!frag->node)) {
uint32_t holeend = min(end, frag->ofs + frag->size);
jffs2_dbg(1, "Filling frag hole from %d-%d (frag 0x%x 0x%x)\n",
offset, holeend, frag->ofs,
frag->ofs + frag->size);
memset(buf, 0, holeend - offset);
buf += holeend - offset;
offset = holeend;
frag = frag_next(frag);
continue;
} else {
uint32_t readlen;
uint32_t fragofs; /* offset within the frag to start reading */
fragofs = offset - frag->ofs;
readlen = min(frag->size - fragofs, end - offset);
jffs2_dbg(1, "Reading %d-%d from node at 0x%08x (%d)\n",
frag->ofs+fragofs,
frag->ofs + fragofs+readlen,
ref_offset(frag->node->raw),
ref_flags(frag->node->raw));
ret = jffs2_read_dnode(c, f, frag->node, buf, fragofs + frag->ofs - frag->node->ofs, readlen);
jffs2_dbg(2, "node read done\n");
if (ret) {
jffs2_dbg(1, "%s(): error %d\n",
__func__, ret);
memset(buf, 0, readlen);
return ret;
}
buf += readlen;
offset += readlen;
frag = frag_next(frag);
jffs2_dbg(2, "node read was OK. Looping\n");
}
}
return 0;
}

1451
fs/jffs2/readinode.c Normal file

File diff suppressed because it is too large

1176
fs/jffs2/scan.c Normal file

File diff suppressed because it is too large

89
fs/jffs2/security.c Normal file
View file

@ -0,0 +1,89 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2006 NEC Corporation
*
* Created by KaiGai Kohei <kaigai@ak.jp.nec.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/time.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/crc32.h>
#include <linux/jffs2.h>
#include <linux/xattr.h>
#include <linux/mtd/mtd.h>
#include <linux/security.h>
#include "nodelist.h"
/* ---- Initial Security Label(s) Attachment callback --- */
static int jffs2_initxattrs(struct inode *inode,
const struct xattr *xattr_array, void *fs_info)
{
const struct xattr *xattr;
int err = 0;
for (xattr = xattr_array; xattr->name != NULL; xattr++) {
err = do_jffs2_setxattr(inode, JFFS2_XPREFIX_SECURITY,
xattr->name, xattr->value,
xattr->value_len, 0);
if (err < 0)
break;
}
return err;
}
/* ---- Initial Security Label(s) Attachment ----------- */
int jffs2_init_security(struct inode *inode, struct inode *dir,
const struct qstr *qstr)
{
return security_inode_init_security(inode, dir, qstr,
&jffs2_initxattrs, NULL);
}
/* ---- XATTR Handler for "security.*" ----------------- */
static int jffs2_security_getxattr(struct dentry *dentry, const char *name,
void *buffer, size_t size, int type)
{
if (!strcmp(name, ""))
return -EINVAL;
return do_jffs2_getxattr(dentry->d_inode, JFFS2_XPREFIX_SECURITY,
name, buffer, size);
}
static int jffs2_security_setxattr(struct dentry *dentry, const char *name,
const void *buffer, size_t size, int flags, int type)
{
if (!strcmp(name, ""))
return -EINVAL;
return do_jffs2_setxattr(dentry->d_inode, JFFS2_XPREFIX_SECURITY,
name, buffer, size, flags);
}
static size_t jffs2_security_listxattr(struct dentry *dentry, char *list,
size_t list_size, const char *name, size_t name_len, int type)
{
size_t retlen = XATTR_SECURITY_PREFIX_LEN + name_len + 1;
if (list && retlen <= list_size) {
strcpy(list, XATTR_SECURITY_PREFIX);
strcpy(list + XATTR_SECURITY_PREFIX_LEN, name);
}
return retlen;
}
const struct xattr_handler jffs2_security_xattr_handler = {
.prefix = XATTR_SECURITY_PREFIX,
.list = jffs2_security_listxattr,
.set = jffs2_security_setxattr,
.get = jffs2_security_getxattr
};

873
fs/jffs2/summary.c Normal file
View file

@ -0,0 +1,873 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2004 Ferenc Havasi <havasi@inf.u-szeged.hu>,
* Zoltan Sogor <weth@inf.u-szeged.hu>,
* Patrik Kluba <pajko@halom.u-szeged.hu>,
* University of Szeged, Hungary
* 2006 KaiGai Kohei <kaigai@ak.jp.nec.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/mtd/mtd.h>
#include <linux/pagemap.h>
#include <linux/crc32.h>
#include <linux/compiler.h>
#include <linux/vmalloc.h>
#include "nodelist.h"
#include "debug.h"
int jffs2_sum_init(struct jffs2_sb_info *c)
{
uint32_t sum_size = min_t(uint32_t, c->sector_size, MAX_SUMMARY_SIZE);
c->summary = kzalloc(sizeof(struct jffs2_summary), GFP_KERNEL);
if (!c->summary) {
JFFS2_WARNING("Can't allocate memory for summary information!\n");
return -ENOMEM;
}
c->summary->sum_buf = kmalloc(sum_size, GFP_KERNEL);
if (!c->summary->sum_buf) {
JFFS2_WARNING("Can't allocate buffer for writing out summary information!\n");
kfree(c->summary);
return -ENOMEM;
}
dbg_summary("returned successfully\n");
return 0;
}
void jffs2_sum_exit(struct jffs2_sb_info *c)
{
dbg_summary("called\n");
jffs2_sum_disable_collecting(c->summary);
kfree(c->summary->sum_buf);
c->summary->sum_buf = NULL;
kfree(c->summary);
c->summary = NULL;
}
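/* Append a collected record to the in-memory summary list for the current
* eraseblock and account for the space it will occupy in the on-flash
* summary node.
*/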
static int jffs2_sum_add_mem(struct jffs2_summary *s, union jffs2_sum_mem *item)
{
if (!s->sum_list_head)
s->sum_list_head = (union jffs2_sum_mem *) item;
if (s->sum_list_tail)
s->sum_list_tail->u.next = (union jffs2_sum_mem *) item;
s->sum_list_tail = (union jffs2_sum_mem *) item;
switch (je16_to_cpu(item->u.nodetype)) {
case JFFS2_NODETYPE_INODE:
s->sum_size += JFFS2_SUMMARY_INODE_SIZE;
s->sum_num++;
dbg_summary("inode (%u) added to summary\n",
je32_to_cpu(item->i.inode));
break;
case JFFS2_NODETYPE_DIRENT:
s->sum_size += JFFS2_SUMMARY_DIRENT_SIZE(item->d.nsize);
s->sum_num++;
dbg_summary("dirent (%u) added to summary\n",
je32_to_cpu(item->d.ino));
break;
#ifdef CONFIG_JFFS2_FS_XATTR
case JFFS2_NODETYPE_XATTR:
s->sum_size += JFFS2_SUMMARY_XATTR_SIZE;
s->sum_num++;
dbg_summary("xattr (xid=%u, version=%u) added to summary\n",
je32_to_cpu(item->x.xid), je32_to_cpu(item->x.version));
break;
case JFFS2_NODETYPE_XREF:
s->sum_size += JFFS2_SUMMARY_XREF_SIZE;
s->sum_num++;
dbg_summary("xref added to summary\n");
break;
#endif
default:
JFFS2_WARNING("UNKNOWN node type %u\n",
je16_to_cpu(item->u.nodetype));
return 1;
}
return 0;
}
/* The following 3 functions are called from scan.c to collect summary info for a jeb that is not yet closed */
int jffs2_sum_add_padding_mem(struct jffs2_summary *s, uint32_t size)
{
dbg_summary("called with %u\n", size);
s->sum_padded += size;
return 0;
}
int jffs2_sum_add_inode_mem(struct jffs2_summary *s, struct jffs2_raw_inode *ri,
uint32_t ofs)
{
struct jffs2_sum_inode_mem *temp = kmalloc(sizeof(struct jffs2_sum_inode_mem), GFP_KERNEL);
if (!temp)
return -ENOMEM;
temp->nodetype = ri->nodetype;
temp->inode = ri->ino;
temp->version = ri->version;
temp->offset = cpu_to_je32(ofs); /* relative offset from the beginning of the jeb */
temp->totlen = ri->totlen;
temp->next = NULL;
return jffs2_sum_add_mem(s, (union jffs2_sum_mem *)temp);
}
int jffs2_sum_add_dirent_mem(struct jffs2_summary *s, struct jffs2_raw_dirent *rd,
uint32_t ofs)
{
struct jffs2_sum_dirent_mem *temp =
kmalloc(sizeof(struct jffs2_sum_dirent_mem) + rd->nsize, GFP_KERNEL);
if (!temp)
return -ENOMEM;
temp->nodetype = rd->nodetype;
temp->totlen = rd->totlen;
temp->offset = cpu_to_je32(ofs); /* relative from the beginning of the jeb */
temp->pino = rd->pino;
temp->version = rd->version;
temp->ino = rd->ino;
temp->nsize = rd->nsize;
temp->type = rd->type;
temp->next = NULL;
memcpy(temp->name, rd->name, rd->nsize);
return jffs2_sum_add_mem(s, (union jffs2_sum_mem *)temp);
}
#ifdef CONFIG_JFFS2_FS_XATTR
int jffs2_sum_add_xattr_mem(struct jffs2_summary *s, struct jffs2_raw_xattr *rx, uint32_t ofs)
{
struct jffs2_sum_xattr_mem *temp;
temp = kmalloc(sizeof(struct jffs2_sum_xattr_mem), GFP_KERNEL);
if (!temp)
return -ENOMEM;
temp->nodetype = rx->nodetype;
temp->xid = rx->xid;
temp->version = rx->version;
temp->offset = cpu_to_je32(ofs);
temp->totlen = rx->totlen;
temp->next = NULL;
return jffs2_sum_add_mem(s, (union jffs2_sum_mem *)temp);
}
int jffs2_sum_add_xref_mem(struct jffs2_summary *s, struct jffs2_raw_xref *rr, uint32_t ofs)
{
struct jffs2_sum_xref_mem *temp;
temp = kmalloc(sizeof(struct jffs2_sum_xref_mem), GFP_KERNEL);
if (!temp)
return -ENOMEM;
temp->nodetype = rr->nodetype;
temp->offset = cpu_to_je32(ofs);
temp->next = NULL;
return jffs2_sum_add_mem(s, (union jffs2_sum_mem *)temp);
}
#endif
/* Free all collected summary information */
static void jffs2_sum_clean_collected(struct jffs2_summary *s)
{
union jffs2_sum_mem *temp;
if (!s->sum_list_head) {
dbg_summary("already empty\n");
}
while (s->sum_list_head) {
temp = s->sum_list_head;
s->sum_list_head = s->sum_list_head->u.next;
kfree(temp);
}
s->sum_list_tail = NULL;
s->sum_padded = 0;
s->sum_num = 0;
}
void jffs2_sum_reset_collected(struct jffs2_summary *s)
{
dbg_summary("called\n");
jffs2_sum_clean_collected(s);
s->sum_size = 0;
}
void jffs2_sum_disable_collecting(struct jffs2_summary *s)
{
dbg_summary("called\n");
jffs2_sum_clean_collected(s);
s->sum_size = JFFS2_SUMMARY_NOSUM_SIZE;
}
int jffs2_sum_is_disabled(struct jffs2_summary *s)
{
return (s->sum_size == JFFS2_SUMMARY_NOSUM_SIZE);
}
/* Move the collected summary information into sb (called from scan.c) */
void jffs2_sum_move_collected(struct jffs2_sb_info *c, struct jffs2_summary *s)
{
dbg_summary("oldsize=0x%x oldnum=%u => newsize=0x%x newnum=%u\n",
c->summary->sum_size, c->summary->sum_num,
s->sum_size, s->sum_num);
c->summary->sum_size = s->sum_size;
c->summary->sum_num = s->sum_num;
c->summary->sum_padded = s->sum_padded;
c->summary->sum_list_head = s->sum_list_head;
c->summary->sum_list_tail = s->sum_list_tail;
s->sum_list_head = s->sum_list_tail = NULL;
}
/* Called from wbuf.c to collect information about written nodes */
int jffs2_sum_add_kvec(struct jffs2_sb_info *c, const struct kvec *invecs,
unsigned long count, uint32_t ofs)
{
union jffs2_node_union *node;
struct jffs2_eraseblock *jeb;
if (c->summary->sum_size == JFFS2_SUMMARY_NOSUM_SIZE) {
dbg_summary("Summary is disabled for this jeb! Skipping summary info!\n");
return 0;
}
node = invecs[0].iov_base;
jeb = &c->blocks[ofs / c->sector_size];
ofs -= jeb->offset;
switch (je16_to_cpu(node->u.nodetype)) {
case JFFS2_NODETYPE_INODE: {
struct jffs2_sum_inode_mem *temp =
kmalloc(sizeof(struct jffs2_sum_inode_mem), GFP_KERNEL);
if (!temp)
goto no_mem;
temp->nodetype = node->i.nodetype;
temp->inode = node->i.ino;
temp->version = node->i.version;
temp->offset = cpu_to_je32(ofs);
temp->totlen = node->i.totlen;
temp->next = NULL;
return jffs2_sum_add_mem(c->summary, (union jffs2_sum_mem *)temp);
}
case JFFS2_NODETYPE_DIRENT: {
struct jffs2_sum_dirent_mem *temp =
kmalloc(sizeof(struct jffs2_sum_dirent_mem) + node->d.nsize, GFP_KERNEL);
if (!temp)
goto no_mem;
temp->nodetype = node->d.nodetype;
temp->totlen = node->d.totlen;
temp->offset = cpu_to_je32(ofs);
temp->pino = node->d.pino;
temp->version = node->d.version;
temp->ino = node->d.ino;
temp->nsize = node->d.nsize;
temp->type = node->d.type;
temp->next = NULL;
switch (count) {
case 1:
memcpy(temp->name,node->d.name,node->d.nsize);
break;
case 2:
memcpy(temp->name,invecs[1].iov_base,node->d.nsize);
break;
default:
BUG(); /* impossible count value */
break;
}
return jffs2_sum_add_mem(c->summary, (union jffs2_sum_mem *)temp);
}
#ifdef CONFIG_JFFS2_FS_XATTR
case JFFS2_NODETYPE_XATTR: {
struct jffs2_sum_xattr_mem *temp;
temp = kmalloc(sizeof(struct jffs2_sum_xattr_mem), GFP_KERNEL);
if (!temp)
goto no_mem;
temp->nodetype = node->x.nodetype;
temp->xid = node->x.xid;
temp->version = node->x.version;
temp->totlen = node->x.totlen;
temp->offset = cpu_to_je32(ofs);
temp->next = NULL;
return jffs2_sum_add_mem(c->summary, (union jffs2_sum_mem *)temp);
}
case JFFS2_NODETYPE_XREF: {
struct jffs2_sum_xref_mem *temp;
temp = kmalloc(sizeof(struct jffs2_sum_xref_mem), GFP_KERNEL);
if (!temp)
goto no_mem;
temp->nodetype = node->r.nodetype;
temp->offset = cpu_to_je32(ofs);
temp->next = NULL;
return jffs2_sum_add_mem(c->summary, (union jffs2_sum_mem *)temp);
}
#endif
case JFFS2_NODETYPE_PADDING:
dbg_summary("node PADDING\n");
c->summary->sum_padded += je32_to_cpu(node->u.totlen);
break;
case JFFS2_NODETYPE_CLEANMARKER:
dbg_summary("node CLEANMARKER\n");
break;
case JFFS2_NODETYPE_SUMMARY:
dbg_summary("node SUMMARY\n");
break;
default:
/* If you implement a new node type you should also implement
summary support for it or disable summary.
*/
BUG();
break;
}
return 0;
no_mem:
JFFS2_WARNING("MEMORY ALLOCATION ERROR!");
return -ENOMEM;
}
static struct jffs2_raw_node_ref *sum_link_node_ref(struct jffs2_sb_info *c,
struct jffs2_eraseblock *jeb,
uint32_t ofs, uint32_t len,
struct jffs2_inode_cache *ic)
{
/* If there was a gap, mark it dirty */
if ((ofs & ~3) > c->sector_size - jeb->free_size) {
/* Ew. Summary doesn't actually tell us explicitly about dirty space */
jffs2_scan_dirty_space(c, jeb, (ofs & ~3) - (c->sector_size - jeb->free_size));
}
return jffs2_link_node_ref(c, jeb, jeb->offset + ofs, len, ic);
}
/* Process the stored summary information - helper function for jffs2_sum_scan_sumnode() */
static int jffs2_sum_process_sum_data(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,
struct jffs2_raw_summary *summary, uint32_t *pseudo_random)
{
struct jffs2_inode_cache *ic;
struct jffs2_full_dirent *fd;
void *sp;
int i, ino;
int err;
sp = summary->sum;
for (i=0; i<je32_to_cpu(summary->sum_num); i++) {
dbg_summary("processing summary index %d\n", i);
cond_resched();
/* Make sure there's a spare ref for dirty space */
err = jffs2_prealloc_raw_node_refs(c, jeb, 2);
if (err)
return err;
switch (je16_to_cpu(((struct jffs2_sum_unknown_flash *)sp)->nodetype)) {
case JFFS2_NODETYPE_INODE: {
struct jffs2_sum_inode_flash *spi;
spi = sp;
ino = je32_to_cpu(spi->inode);
dbg_summary("Inode at 0x%08x-0x%08x\n",
jeb->offset + je32_to_cpu(spi->offset),
jeb->offset + je32_to_cpu(spi->offset) + je32_to_cpu(spi->totlen));
ic = jffs2_scan_make_ino_cache(c, ino);
if (!ic) {
JFFS2_NOTICE("scan_make_ino_cache failed\n");
return -ENOMEM;
}
sum_link_node_ref(c, jeb, je32_to_cpu(spi->offset) | REF_UNCHECKED,
PAD(je32_to_cpu(spi->totlen)), ic);
*pseudo_random += je32_to_cpu(spi->version);
sp += JFFS2_SUMMARY_INODE_SIZE;
break;
}
case JFFS2_NODETYPE_DIRENT: {
struct jffs2_sum_dirent_flash *spd;
int checkedlen;
spd = sp;
dbg_summary("Dirent at 0x%08x-0x%08x\n",
jeb->offset + je32_to_cpu(spd->offset),
jeb->offset + je32_to_cpu(spd->offset) + je32_to_cpu(spd->totlen));
/* This should never happen, but https://dev.laptop.org/ticket/4184 */
checkedlen = strnlen(spd->name, spd->nsize);
if (!checkedlen) {
pr_err("Dirent at %08x has zero at start of name. Aborting mount.\n",
jeb->offset +
je32_to_cpu(spd->offset));
return -EIO;
}
if (checkedlen < spd->nsize) {
pr_err("Dirent at %08x has zeroes in name. Truncating to %d chars\n",
jeb->offset +
je32_to_cpu(spd->offset),
checkedlen);
}
fd = jffs2_alloc_full_dirent(checkedlen+1);
if (!fd)
return -ENOMEM;
memcpy(&fd->name, spd->name, checkedlen);
fd->name[checkedlen] = 0;
ic = jffs2_scan_make_ino_cache(c, je32_to_cpu(spd->pino));
if (!ic) {
jffs2_free_full_dirent(fd);
return -ENOMEM;
}
fd->raw = sum_link_node_ref(c, jeb, je32_to_cpu(spd->offset) | REF_UNCHECKED,
PAD(je32_to_cpu(spd->totlen)), ic);
fd->next = NULL;
fd->version = je32_to_cpu(spd->version);
fd->ino = je32_to_cpu(spd->ino);
fd->nhash = full_name_hash(fd->name, checkedlen);
fd->type = spd->type;
jffs2_add_fd_to_list(c, fd, &ic->scan_dents);
*pseudo_random += je32_to_cpu(spd->version);
sp += JFFS2_SUMMARY_DIRENT_SIZE(spd->nsize);
break;
}
#ifdef CONFIG_JFFS2_FS_XATTR
case JFFS2_NODETYPE_XATTR: {
struct jffs2_xattr_datum *xd;
struct jffs2_sum_xattr_flash *spx;
spx = (struct jffs2_sum_xattr_flash *)sp;
dbg_summary("xattr at %#08x-%#08x (xid=%u, version=%u)\n",
jeb->offset + je32_to_cpu(spx->offset),
jeb->offset + je32_to_cpu(spx->offset) + je32_to_cpu(spx->totlen),
je32_to_cpu(spx->xid), je32_to_cpu(spx->version));
xd = jffs2_setup_xattr_datum(c, je32_to_cpu(spx->xid),
je32_to_cpu(spx->version));
if (IS_ERR(xd))
return PTR_ERR(xd);
if (xd->version > je32_to_cpu(spx->version)) {
/* node is not the newest one */
struct jffs2_raw_node_ref *raw
= sum_link_node_ref(c, jeb, je32_to_cpu(spx->offset) | REF_UNCHECKED,
PAD(je32_to_cpu(spx->totlen)), NULL);
raw->next_in_ino = xd->node->next_in_ino;
xd->node->next_in_ino = raw;
} else {
xd->version = je32_to_cpu(spx->version);
sum_link_node_ref(c, jeb, je32_to_cpu(spx->offset) | REF_UNCHECKED,
PAD(je32_to_cpu(spx->totlen)), (void *)xd);
}
*pseudo_random += je32_to_cpu(spx->xid);
sp += JFFS2_SUMMARY_XATTR_SIZE;
break;
}
case JFFS2_NODETYPE_XREF: {
struct jffs2_xattr_ref *ref;
struct jffs2_sum_xref_flash *spr;
spr = (struct jffs2_sum_xref_flash *)sp;
dbg_summary("xref at %#08x-%#08x\n",
jeb->offset + je32_to_cpu(spr->offset),
jeb->offset + je32_to_cpu(spr->offset) +
(uint32_t)PAD(sizeof(struct jffs2_raw_xref)));
ref = jffs2_alloc_xattr_ref();
if (!ref) {
JFFS2_NOTICE("allocation of xattr_datum failed\n");
return -ENOMEM;
}
ref->next = c->xref_temp;
c->xref_temp = ref;
sum_link_node_ref(c, jeb, je32_to_cpu(spr->offset) | REF_UNCHECKED,
PAD(sizeof(struct jffs2_raw_xref)), (void *)ref);
*pseudo_random += ref->node->flash_offset;
sp += JFFS2_SUMMARY_XREF_SIZE;
break;
}
#endif
default : {
uint16_t nodetype = je16_to_cpu(((struct jffs2_sum_unknown_flash *)sp)->nodetype);
JFFS2_WARNING("Unsupported node type %x found in summary! Exiting...\n", nodetype);
if ((nodetype & JFFS2_COMPAT_MASK) == JFFS2_FEATURE_INCOMPAT)
return -EIO;
/* For compatible node types, just fall back to the full scan */
c->wasted_size -= jeb->wasted_size;
c->free_size += c->sector_size - jeb->free_size;
c->used_size -= jeb->used_size;
c->dirty_size -= jeb->dirty_size;
jeb->wasted_size = jeb->used_size = jeb->dirty_size = 0;
jeb->free_size = c->sector_size;
jffs2_free_jeb_node_refs(c, jeb);
return -ENOTRECOVERABLE;
}
}
}
return 0;
}
/* Process the summary node - called from jffs2_scan_eraseblock() */
int jffs2_sum_scan_sumnode(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,
struct jffs2_raw_summary *summary, uint32_t sumsize,
uint32_t *pseudo_random)
{
struct jffs2_unknown_node crcnode;
int ret, ofs;
uint32_t crc;
ofs = c->sector_size - sumsize;
dbg_summary("summary found for 0x%08x at 0x%08x (0x%x bytes)\n",
jeb->offset, jeb->offset + ofs, sumsize);
/* OK, now check for node validity and CRC */
crcnode.magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
crcnode.nodetype = cpu_to_je16(JFFS2_NODETYPE_SUMMARY);
crcnode.totlen = summary->totlen;
crc = crc32(0, &crcnode, sizeof(crcnode)-4);
if (je32_to_cpu(summary->hdr_crc) != crc) {
dbg_summary("Summary node header is corrupt (bad CRC or "
"no summary at all)\n");
goto crc_err;
}
if (je32_to_cpu(summary->totlen) != sumsize) {
dbg_summary("Summary node is corrupt (wrong erasesize?)\n");
goto crc_err;
}
crc = crc32(0, summary, sizeof(struct jffs2_raw_summary)-8);
if (je32_to_cpu(summary->node_crc) != crc) {
dbg_summary("Summary node is corrupt (bad CRC)\n");
goto crc_err;
}
crc = crc32(0, summary->sum, sumsize - sizeof(struct jffs2_raw_summary));
if (je32_to_cpu(summary->sum_crc) != crc) {
dbg_summary("Summary node data is corrupt (bad CRC)\n");
goto crc_err;
}
if ( je32_to_cpu(summary->cln_mkr) ) {
dbg_summary("Summary : CLEANMARKER node \n");
ret = jffs2_prealloc_raw_node_refs(c, jeb, 1);
if (ret)
return ret;
if (je32_to_cpu(summary->cln_mkr) != c->cleanmarker_size) {
dbg_summary("CLEANMARKER node has totlen 0x%x != normal 0x%x\n",
je32_to_cpu(summary->cln_mkr), c->cleanmarker_size);
if ((ret = jffs2_scan_dirty_space(c, jeb, PAD(je32_to_cpu(summary->cln_mkr)))))
return ret;
} else if (jeb->first_node) {
dbg_summary("CLEANMARKER node not first node in block "
"(0x%08x)\n", jeb->offset);
if ((ret = jffs2_scan_dirty_space(c, jeb, PAD(je32_to_cpu(summary->cln_mkr)))))
return ret;
} else {
jffs2_link_node_ref(c, jeb, jeb->offset | REF_NORMAL,
je32_to_cpu(summary->cln_mkr), NULL);
}
}
ret = jffs2_sum_process_sum_data(c, jeb, summary, pseudo_random);
/* -ENOTRECOVERABLE isn't a fatal error -- it means we should do a full
scan of this eraseblock. So return zero */
if (ret == -ENOTRECOVERABLE)
return 0;
if (ret)
return ret; /* real error */
/* for PARANOIA_CHECK */
ret = jffs2_prealloc_raw_node_refs(c, jeb, 2);
if (ret)
return ret;
sum_link_node_ref(c, jeb, ofs | REF_NORMAL, sumsize, NULL);
if (unlikely(jeb->free_size)) {
JFFS2_WARNING("Free size 0x%x bytes in eraseblock @0x%08x with summary?\n",
jeb->free_size, jeb->offset);
jeb->wasted_size += jeb->free_size;
c->wasted_size += jeb->free_size;
c->free_size -= jeb->free_size;
jeb->free_size = 0;
}
return jffs2_scan_classify_jeb(c, jeb);
crc_err:
JFFS2_WARNING("Summary node crc error, skipping summary information.\n");
return 0;
}
/* Write summary data to flash - helper function for jffs2_sum_write_sumnode() */
static int jffs2_sum_write_data(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,
uint32_t infosize, uint32_t datasize, int padsize)
{
struct jffs2_raw_summary isum;
union jffs2_sum_mem *temp;
struct jffs2_sum_marker *sm;
struct kvec vecs[2];
uint32_t sum_ofs;
void *wpage;
int ret;
size_t retlen;
if (padsize + datasize > MAX_SUMMARY_SIZE) {
/* It won't fit in the buffer. Abort summary for this jeb */
jffs2_sum_disable_collecting(c->summary);
JFFS2_WARNING("Summary too big (%d data, %d pad) in eraseblock at %08x\n",
datasize, padsize, jeb->offset);
/* Non-fatal */
return 0;
}
/* Is there enough space for summary? */
if (padsize < 0) {
/* don't try to write out summary for this jeb */
jffs2_sum_disable_collecting(c->summary);
JFFS2_WARNING("Not enough space for summary, padsize = %d\n",
padsize);
/* Non-fatal */
return 0;
}
memset(c->summary->sum_buf, 0xff, datasize);
memset(&isum, 0, sizeof(isum));
isum.magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
isum.nodetype = cpu_to_je16(JFFS2_NODETYPE_SUMMARY);
isum.totlen = cpu_to_je32(infosize);
isum.hdr_crc = cpu_to_je32(crc32(0, &isum, sizeof(struct jffs2_unknown_node) - 4));
isum.padded = cpu_to_je32(c->summary->sum_padded);
isum.cln_mkr = cpu_to_je32(c->cleanmarker_size);
isum.sum_num = cpu_to_je32(c->summary->sum_num);
wpage = c->summary->sum_buf;
while (c->summary->sum_num) {
temp = c->summary->sum_list_head;
switch (je16_to_cpu(temp->u.nodetype)) {
case JFFS2_NODETYPE_INODE: {
struct jffs2_sum_inode_flash *sino_ptr = wpage;
sino_ptr->nodetype = temp->i.nodetype;
sino_ptr->inode = temp->i.inode;
sino_ptr->version = temp->i.version;
sino_ptr->offset = temp->i.offset;
sino_ptr->totlen = temp->i.totlen;
wpage += JFFS2_SUMMARY_INODE_SIZE;
break;
}
case JFFS2_NODETYPE_DIRENT: {
struct jffs2_sum_dirent_flash *sdrnt_ptr = wpage;
sdrnt_ptr->nodetype = temp->d.nodetype;
sdrnt_ptr->totlen = temp->d.totlen;
sdrnt_ptr->offset = temp->d.offset;
sdrnt_ptr->pino = temp->d.pino;
sdrnt_ptr->version = temp->d.version;
sdrnt_ptr->ino = temp->d.ino;
sdrnt_ptr->nsize = temp->d.nsize;
sdrnt_ptr->type = temp->d.type;
memcpy(sdrnt_ptr->name, temp->d.name,
temp->d.nsize);
wpage += JFFS2_SUMMARY_DIRENT_SIZE(temp->d.nsize);
break;
}
#ifdef CONFIG_JFFS2_FS_XATTR
case JFFS2_NODETYPE_XATTR: {
struct jffs2_sum_xattr_flash *sxattr_ptr = wpage;
temp = c->summary->sum_list_head;
sxattr_ptr->nodetype = temp->x.nodetype;
sxattr_ptr->xid = temp->x.xid;
sxattr_ptr->version = temp->x.version;
sxattr_ptr->offset = temp->x.offset;
sxattr_ptr->totlen = temp->x.totlen;
wpage += JFFS2_SUMMARY_XATTR_SIZE;
break;
}
case JFFS2_NODETYPE_XREF: {
struct jffs2_sum_xref_flash *sxref_ptr = wpage;
temp = c->summary->sum_list_head;
sxref_ptr->nodetype = temp->r.nodetype;
sxref_ptr->offset = temp->r.offset;
wpage += JFFS2_SUMMARY_XREF_SIZE;
break;
}
#endif
default : {
if ((je16_to_cpu(temp->u.nodetype) & JFFS2_COMPAT_MASK)
== JFFS2_FEATURE_RWCOMPAT_COPY) {
dbg_summary("Writing unknown RWCOMPAT_COPY node type %x\n",
je16_to_cpu(temp->u.nodetype));
jffs2_sum_disable_collecting(c->summary);
} else {
BUG(); /* unknown node in summary information */
}
}
}
c->summary->sum_list_head = temp->u.next;
kfree(temp);
c->summary->sum_num--;
}
jffs2_sum_reset_collected(c->summary);
wpage += padsize;
sm = wpage;
sm->offset = cpu_to_je32(c->sector_size - jeb->free_size);
sm->magic = cpu_to_je32(JFFS2_SUM_MAGIC);
isum.sum_crc = cpu_to_je32(crc32(0, c->summary->sum_buf, datasize));
isum.node_crc = cpu_to_je32(crc32(0, &isum, sizeof(isum) - 8));
vecs[0].iov_base = &isum;
vecs[0].iov_len = sizeof(isum);
vecs[1].iov_base = c->summary->sum_buf;
vecs[1].iov_len = datasize;
sum_ofs = jeb->offset + c->sector_size - jeb->free_size;
dbg_summary("writing out data to flash to pos : 0x%08x\n", sum_ofs);
ret = jffs2_flash_writev(c, vecs, 2, sum_ofs, &retlen, 0);
if (ret || (retlen != infosize)) {
JFFS2_WARNING("Write of %u bytes at 0x%08x failed. returned %d, retlen %zd\n",
infosize, sum_ofs, ret, retlen);
if (retlen) {
/* Waste remaining space */
spin_lock(&c->erase_completion_lock);
jffs2_link_node_ref(c, jeb, sum_ofs | REF_OBSOLETE, infosize, NULL);
spin_unlock(&c->erase_completion_lock);
}
c->summary->sum_size = JFFS2_SUMMARY_NOSUM_SIZE;
return 0;
}
spin_lock(&c->erase_completion_lock);
jffs2_link_node_ref(c, jeb, sum_ofs | REF_NORMAL, infosize, NULL);
spin_unlock(&c->erase_completion_lock);
return 0;
}
/* Write out summary information - called from jffs2_do_reserve_space */
int jffs2_sum_write_sumnode(struct jffs2_sb_info *c)
{
int datasize, infosize, padsize;
struct jffs2_eraseblock *jeb;
int ret = 0;
dbg_summary("called\n");
spin_unlock(&c->erase_completion_lock);
jeb = c->nextblock;
jffs2_prealloc_raw_node_refs(c, jeb, 1);
if (!c->summary->sum_num || !c->summary->sum_list_head) {
JFFS2_WARNING("Empty summary info!!!\n");
BUG();
}
datasize = c->summary->sum_size + sizeof(struct jffs2_sum_marker);
infosize = sizeof(struct jffs2_raw_summary) + datasize;
padsize = jeb->free_size - infosize;
infosize += padsize;
datasize += padsize;
ret = jffs2_sum_write_data(c, jeb, infosize, datasize, padsize);
spin_lock(&c->erase_completion_lock);
return ret;
}
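/*
 * Illustrative sizing example for the calculation above (all figures are
 * assumptions, not taken from a real medium): with 1000 bytes of collected
 * records, the 8-byte jffs2_sum_marker and the 32-byte jffs2_raw_summary
 * header, an eraseblock that still has 2048 bytes free gives
 *
 *   datasize = 1000 + 8        = 1008
 *   infosize = 32 + 1008       = 1040
 *   padsize  = 2048 - 1040     = 1008
 *   infosize += padsize        ->  2048  (== jeb->free_size)
 *   datasize += padsize        ->  2016
 *
 * so the summary node is padded to consume exactly the remaining free
 * space of the eraseblock, and free_size drops to zero once it is written.
 */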

213
fs/jffs2/summary.h Normal file
View file

@ -0,0 +1,213 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2004 Ferenc Havasi <havasi@inf.u-szeged.hu>,
* Zoltan Sogor <weth@inf.u-szeged.hu>,
* Patrik Kluba <pajko@halom.u-szeged.hu>,
* University of Szeged, Hungary
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef JFFS2_SUMMARY_H
#define JFFS2_SUMMARY_H
/* Limit summary size to 64KiB so that we can kmalloc it. If the summary
is larger than that, we have to just ditch it and avoid using summary
for the eraseblock in question... and it probably doesn't hurt us much
anyway. */
#define MAX_SUMMARY_SIZE 65536
#include <linux/uio.h>
#include <linux/jffs2.h>
#define BLK_STATE_ALLFF 0
#define BLK_STATE_CLEAN 1
#define BLK_STATE_PARTDIRTY 2
#define BLK_STATE_CLEANMARKER 3
#define BLK_STATE_ALLDIRTY 4
#define BLK_STATE_BADBLOCK 5
#define JFFS2_SUMMARY_NOSUM_SIZE 0xffffffff
#define JFFS2_SUMMARY_INODE_SIZE (sizeof(struct jffs2_sum_inode_flash))
#define JFFS2_SUMMARY_DIRENT_SIZE(x) (sizeof(struct jffs2_sum_dirent_flash) + (x))
#define JFFS2_SUMMARY_XATTR_SIZE (sizeof(struct jffs2_sum_xattr_flash))
#define JFFS2_SUMMARY_XREF_SIZE (sizeof(struct jffs2_sum_xref_flash))
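/*
 * Example of how these record sizes work out (name and length are assumed
 * for illustration): a directory entry named "config.txt" (nsize = 10) is
 * summarised in JFFS2_SUMMARY_DIRENT_SIZE(10) bytes, i.e.
 * sizeof(struct jffs2_sum_dirent_flash) + 10 = 24 + 10 = 34 bytes with the
 * packed layouts defined below, while inode, xattr and xref records always
 * occupy the fixed size of their respective *_flash structures.
 */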
/* Summary structures used on flash */
struct jffs2_sum_unknown_flash
{
jint16_t nodetype; /* node type */
};
struct jffs2_sum_inode_flash
{
jint16_t nodetype; /* node type */
jint32_t inode; /* inode number */
jint32_t version; /* inode version */
jint32_t offset; /* offset on jeb */
jint32_t totlen; /* record length */
} __attribute__((packed));
struct jffs2_sum_dirent_flash
{
jint16_t nodetype; /* == JFFS2_NODETYPE_DIRENT */
jint32_t totlen; /* record length */
jint32_t offset; /* offset on jeb */
jint32_t pino; /* parent inode */
jint32_t version; /* dirent version */
jint32_t ino; /* == zero for unlink */
uint8_t nsize; /* dirent name size */
uint8_t type; /* dirent type */
uint8_t name[0]; /* dirent name */
} __attribute__((packed));
struct jffs2_sum_xattr_flash
{
jint16_t nodetype; /* == JFFS2_NODETYPE_XATTR */
jint32_t xid; /* xattr identifier */
jint32_t version; /* version number */
jint32_t offset; /* offset on jeb */
jint32_t totlen; /* node length */
} __attribute__((packed));
struct jffs2_sum_xref_flash
{
jint16_t nodetype; /* == JFFS2_NODETYPE_XREF */
jint32_t offset; /* offset on jeb */
} __attribute__((packed));
union jffs2_sum_flash
{
struct jffs2_sum_unknown_flash u;
struct jffs2_sum_inode_flash i;
struct jffs2_sum_dirent_flash d;
struct jffs2_sum_xattr_flash x;
struct jffs2_sum_xref_flash r;
};
/* Summary structures used in the memory */
struct jffs2_sum_unknown_mem
{
union jffs2_sum_mem *next;
jint16_t nodetype; /* node type */
};
struct jffs2_sum_inode_mem
{
union jffs2_sum_mem *next;
jint16_t nodetype; /* node type */
jint32_t inode; /* inode number */
jint32_t version; /* inode version */
jint32_t offset; /* offset on jeb */
jint32_t totlen; /* record length */
} __attribute__((packed));
struct jffs2_sum_dirent_mem
{
union jffs2_sum_mem *next;
jint16_t nodetype; /* == JFFS2_NODETYPE_DIRENT */
jint32_t totlen; /* record length */
jint32_t offset; /* offset on jeb */
jint32_t pino; /* parent inode */
jint32_t version; /* dirent version */
jint32_t ino; /* == zero for unlink */
uint8_t nsize; /* dirent name size */
uint8_t type; /* dirent type */
uint8_t name[0]; /* dirent name */
} __attribute__((packed));
struct jffs2_sum_xattr_mem
{
union jffs2_sum_mem *next;
jint16_t nodetype;
jint32_t xid;
jint32_t version;
jint32_t offset;
jint32_t totlen;
} __attribute__((packed));
struct jffs2_sum_xref_mem
{
union jffs2_sum_mem *next;
jint16_t nodetype;
jint32_t offset;
} __attribute__((packed));
union jffs2_sum_mem
{
struct jffs2_sum_unknown_mem u;
struct jffs2_sum_inode_mem i;
struct jffs2_sum_dirent_mem d;
struct jffs2_sum_xattr_mem x;
struct jffs2_sum_xref_mem r;
};
/* Summary related information stored in superblock */
struct jffs2_summary
{
uint32_t sum_size; /* collected summary information for nextblock */
uint32_t sum_num;
uint32_t sum_padded;
union jffs2_sum_mem *sum_list_head;
union jffs2_sum_mem *sum_list_tail;
jint32_t *sum_buf; /* buffer for writing out summary */
};
/* Summary marker is stored at the end of every summarized erase block */
struct jffs2_sum_marker
{
jint32_t offset; /* offset of the summary node in the jeb */
jint32_t magic; /* == JFFS2_SUM_MAGIC */
};
#define JFFS2_SUMMARY_FRAME_SIZE (sizeof(struct jffs2_raw_summary) + sizeof(struct jffs2_sum_marker))
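/*
 * Rough on-flash picture of a summarised eraseblock (sketch, not to scale):
 *
 *   | ordinary nodes ... | raw_summary header | records | padding | marker |
 *   ^ jeb->offset                                           last 8 bytes ^
 *
 * The jffs2_sum_marker occupies the final bytes of the block, its 'offset'
 * field pointing back at the start of the summary node, so the mount-time
 * scan only needs to read the tail of each block, check for JFFS2_SUM_MAGIC
 * and then process the summary instead of walking every node.
 */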
#ifdef CONFIG_JFFS2_SUMMARY /* SUMMARY SUPPORT ENABLED */
#define jffs2_sum_active() (1)
int jffs2_sum_init(struct jffs2_sb_info *c);
void jffs2_sum_exit(struct jffs2_sb_info *c);
void jffs2_sum_disable_collecting(struct jffs2_summary *s);
int jffs2_sum_is_disabled(struct jffs2_summary *s);
void jffs2_sum_reset_collected(struct jffs2_summary *s);
void jffs2_sum_move_collected(struct jffs2_sb_info *c, struct jffs2_summary *s);
int jffs2_sum_add_kvec(struct jffs2_sb_info *c, const struct kvec *invecs,
unsigned long count, uint32_t to);
int jffs2_sum_write_sumnode(struct jffs2_sb_info *c);
int jffs2_sum_add_padding_mem(struct jffs2_summary *s, uint32_t size);
int jffs2_sum_add_inode_mem(struct jffs2_summary *s, struct jffs2_raw_inode *ri, uint32_t ofs);
int jffs2_sum_add_dirent_mem(struct jffs2_summary *s, struct jffs2_raw_dirent *rd, uint32_t ofs);
int jffs2_sum_add_xattr_mem(struct jffs2_summary *s, struct jffs2_raw_xattr *rx, uint32_t ofs);
int jffs2_sum_add_xref_mem(struct jffs2_summary *s, struct jffs2_raw_xref *rr, uint32_t ofs);
int jffs2_sum_scan_sumnode(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,
struct jffs2_raw_summary *summary, uint32_t sumlen,
uint32_t *pseudo_random);
#else /* SUMMARY DISABLED */
#define jffs2_sum_active() (0)
#define jffs2_sum_init(a) (0)
#define jffs2_sum_exit(a)
#define jffs2_sum_disable_collecting(a)
#define jffs2_sum_is_disabled(a) (0)
#define jffs2_sum_reset_collected(a)
#define jffs2_sum_add_kvec(a,b,c,d) (0)
#define jffs2_sum_move_collected(a,b)
#define jffs2_sum_write_sumnode(a) (0)
#define jffs2_sum_add_padding_mem(a,b)
#define jffs2_sum_add_inode_mem(a,b,c)
#define jffs2_sum_add_dirent_mem(a,b,c)
#define jffs2_sum_add_xattr_mem(a,b,c)
#define jffs2_sum_add_xref_mem(a,b,c)
#define jffs2_sum_scan_sumnode(a,b,c,d,e) (0)
#endif /* CONFIG_JFFS2_SUMMARY */
#endif /* JFFS2_SUMMARY_H */

442
fs/jffs2/super.c Normal file
View file

@ -0,0 +1,442 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/fs.h>
#include <linux/err.h>
#include <linux/mount.h>
#include <linux/parser.h>
#include <linux/jffs2.h>
#include <linux/pagemap.h>
#include <linux/mtd/super.h>
#include <linux/ctype.h>
#include <linux/namei.h>
#include <linux/seq_file.h>
#include <linux/exportfs.h>
#include "compr.h"
#include "nodelist.h"
static void jffs2_put_super(struct super_block *);
static struct kmem_cache *jffs2_inode_cachep;
static struct inode *jffs2_alloc_inode(struct super_block *sb)
{
struct jffs2_inode_info *f;
f = kmem_cache_alloc(jffs2_inode_cachep, GFP_KERNEL);
if (!f)
return NULL;
return &f->vfs_inode;
}
static void jffs2_i_callback(struct rcu_head *head)
{
struct inode *inode = container_of(head, struct inode, i_rcu);
kmem_cache_free(jffs2_inode_cachep, JFFS2_INODE_INFO(inode));
}
static void jffs2_destroy_inode(struct inode *inode)
{
call_rcu(&inode->i_rcu, jffs2_i_callback);
}
static void jffs2_i_init_once(void *foo)
{
struct jffs2_inode_info *f = foo;
mutex_init(&f->sem);
inode_init_once(&f->vfs_inode);
}
static const char *jffs2_compr_name(unsigned int compr)
{
switch (compr) {
case JFFS2_COMPR_MODE_NONE:
return "none";
#ifdef CONFIG_JFFS2_LZO
case JFFS2_COMPR_MODE_FORCELZO:
return "lzo";
#endif
#ifdef CONFIG_JFFS2_ZLIB
case JFFS2_COMPR_MODE_FORCEZLIB:
return "zlib";
#endif
default:
/* should never happen; programmer error */
WARN_ON(1);
return "";
}
}
static int jffs2_show_options(struct seq_file *s, struct dentry *root)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(root->d_sb);
struct jffs2_mount_opts *opts = &c->mount_opts;
if (opts->override_compr)
seq_printf(s, ",compr=%s", jffs2_compr_name(opts->compr));
if (opts->rp_size)
seq_printf(s, ",rp_size=%u", opts->rp_size / 1024);
return 0;
}
static int jffs2_sync_fs(struct super_block *sb, int wait)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(sb);
#ifdef CONFIG_JFFS2_FS_WRITEBUFFER
cancel_delayed_work_sync(&c->wbuf_dwork);
#endif
mutex_lock(&c->alloc_sem);
jffs2_flush_wbuf_pad(c);
mutex_unlock(&c->alloc_sem);
return 0;
}
static struct inode *jffs2_nfs_get_inode(struct super_block *sb, uint64_t ino,
uint32_t generation)
{
/* We don't care about i_generation. We'll destroy the flash
before we start re-using inode numbers anyway. And even
if that wasn't true, we'd have other problems...*/
return jffs2_iget(sb, ino);
}
static struct dentry *jffs2_fh_to_dentry(struct super_block *sb, struct fid *fid,
int fh_len, int fh_type)
{
return generic_fh_to_dentry(sb, fid, fh_len, fh_type,
jffs2_nfs_get_inode);
}
static struct dentry *jffs2_fh_to_parent(struct super_block *sb, struct fid *fid,
int fh_len, int fh_type)
{
return generic_fh_to_parent(sb, fid, fh_len, fh_type,
jffs2_nfs_get_inode);
}
static struct dentry *jffs2_get_parent(struct dentry *child)
{
struct jffs2_inode_info *f;
uint32_t pino;
BUG_ON(!S_ISDIR(child->d_inode->i_mode));
f = JFFS2_INODE_INFO(child->d_inode);
pino = f->inocache->pino_nlink;
JFFS2_DEBUG("Parent of directory ino #%u is #%u\n",
f->inocache->ino, pino);
return d_obtain_alias(jffs2_iget(child->d_inode->i_sb, pino));
}
static const struct export_operations jffs2_export_ops = {
.get_parent = jffs2_get_parent,
.fh_to_dentry = jffs2_fh_to_dentry,
.fh_to_parent = jffs2_fh_to_parent,
};
/*
* JFFS2 mount options.
*
* Opt_override_compr: override default compressor
* Opt_rp_size: size of reserved pool in KiB
* Opt_err: just end of array marker
*/
enum {
Opt_override_compr,
Opt_rp_size,
Opt_err,
};
static const match_table_t tokens = {
{Opt_override_compr, "compr=%s"},
{Opt_rp_size, "rp_size=%u"},
{Opt_err, NULL},
};
static int jffs2_parse_options(struct jffs2_sb_info *c, char *data)
{
substring_t args[MAX_OPT_ARGS];
char *p, *name;
unsigned int opt;
if (!data)
return 0;
while ((p = strsep(&data, ","))) {
int token;
if (!*p)
continue;
token = match_token(p, tokens, args);
switch (token) {
case Opt_override_compr:
name = match_strdup(&args[0]);
if (!name)
return -ENOMEM;
if (!strcmp(name, "none"))
c->mount_opts.compr = JFFS2_COMPR_MODE_NONE;
#ifdef CONFIG_JFFS2_LZO
else if (!strcmp(name, "lzo"))
c->mount_opts.compr = JFFS2_COMPR_MODE_FORCELZO;
#endif
#ifdef CONFIG_JFFS2_ZLIB
else if (!strcmp(name, "zlib"))
c->mount_opts.compr =
JFFS2_COMPR_MODE_FORCEZLIB;
#endif
else {
pr_err("Error: unknown compressor \"%s\"\n",
name);
kfree(name);
return -EINVAL;
}
kfree(name);
c->mount_opts.override_compr = true;
break;
case Opt_rp_size:
if (match_int(&args[0], &opt))
return -EINVAL;
opt *= 1024;
if (opt > c->mtd->size) {
pr_warn("Too large reserve pool specified, max "
"is %llu KB\n", c->mtd->size / 1024);
return -EINVAL;
}
c->mount_opts.rp_size = opt;
break;
default:
pr_err("Error: unrecognized mount option '%s' or missing value\n",
p);
return -EINVAL;
}
}
return 0;
}
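/*
 * For illustration (device and mount point names are assumptions), the two
 * options handled above correspond to mount invocations such as
 *
 *   mount -t jffs2 -o compr=zlib,rp_size=256 mtd2 /mnt/flash
 *
 * "compr=" picks none/lzo/zlib (the latter two only when the matching
 * compressor is configured in), and "rp_size=" is given in KiB and is
 * converted to bytes here before being checked against the MTD size.
 */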
static int jffs2_remount_fs(struct super_block *sb, int *flags, char *data)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(sb);
int err;
sync_filesystem(sb);
err = jffs2_parse_options(c, data);
if (err)
return -EINVAL;
return jffs2_do_remount_fs(sb, flags, data);
}
static const struct super_operations jffs2_super_operations =
{
.alloc_inode = jffs2_alloc_inode,
.destroy_inode = jffs2_destroy_inode,
.put_super = jffs2_put_super,
.statfs = jffs2_statfs,
.remount_fs = jffs2_remount_fs,
.evict_inode = jffs2_evict_inode,
.dirty_inode = jffs2_dirty_inode,
.show_options = jffs2_show_options,
.sync_fs = jffs2_sync_fs,
};
/*
* fill in the superblock
*/
static int jffs2_fill_super(struct super_block *sb, void *data, int silent)
{
struct jffs2_sb_info *c;
int ret;
jffs2_dbg(1, "jffs2_get_sb_mtd():"
" New superblock for device %d (\"%s\")\n",
sb->s_mtd->index, sb->s_mtd->name);
c = kzalloc(sizeof(*c), GFP_KERNEL);
if (!c)
return -ENOMEM;
c->mtd = sb->s_mtd;
c->os_priv = sb;
sb->s_fs_info = c;
ret = jffs2_parse_options(c, data);
if (ret) {
kfree(c);
return -EINVAL;
}
/* Initialize JFFS2 superblock locks, the further initialization will
* be done later */
mutex_init(&c->alloc_sem);
mutex_init(&c->erase_free_sem);
init_waitqueue_head(&c->erase_wait);
init_waitqueue_head(&c->inocache_wq);
spin_lock_init(&c->erase_completion_lock);
spin_lock_init(&c->inocache_lock);
sb->s_op = &jffs2_super_operations;
sb->s_export_op = &jffs2_export_ops;
sb->s_flags = sb->s_flags | MS_NOATIME;
sb->s_xattr = jffs2_xattr_handlers;
#ifdef CONFIG_JFFS2_FS_POSIX_ACL
sb->s_flags |= MS_POSIXACL;
#endif
ret = jffs2_do_fill_super(sb, data, silent);
return ret;
}
static struct dentry *jffs2_mount(struct file_system_type *fs_type,
int flags, const char *dev_name,
void *data)
{
return mount_mtd(fs_type, flags, dev_name, data, jffs2_fill_super);
}
static void jffs2_put_super (struct super_block *sb)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(sb);
jffs2_dbg(2, "%s()\n", __func__);
mutex_lock(&c->alloc_sem);
jffs2_flush_wbuf_pad(c);
mutex_unlock(&c->alloc_sem);
jffs2_sum_exit(c);
jffs2_free_ino_caches(c);
jffs2_free_raw_node_refs(c);
if (jffs2_blocks_use_vmalloc(c))
vfree(c->blocks);
else
kfree(c->blocks);
jffs2_flash_cleanup(c);
kfree(c->inocache_list);
jffs2_clear_xattr_subsystem(c);
mtd_sync(c->mtd);
jffs2_dbg(1, "%s(): returning\n", __func__);
}
static void jffs2_kill_sb(struct super_block *sb)
{
struct jffs2_sb_info *c = JFFS2_SB_INFO(sb);
if (!(sb->s_flags & MS_RDONLY))
jffs2_stop_garbage_collect_thread(c);
kill_mtd_super(sb);
kfree(c);
}
static struct file_system_type jffs2_fs_type = {
.owner = THIS_MODULE,
.name = "jffs2",
.mount = jffs2_mount,
.kill_sb = jffs2_kill_sb,
};
MODULE_ALIAS_FS("jffs2");
static int __init init_jffs2_fs(void)
{
int ret;
/* Paranoia checks for on-medium structures. If we ask GCC
to pack them with __attribute__((packed)) then it _also_
assumes that they're not aligned -- so it emits crappy
code on some architectures. Ideally we want an attribute
which means just 'no padding', without the alignment
thing. But GCC doesn't have that -- we have to just
hope the structs are the right sizes, instead. */
BUILD_BUG_ON(sizeof(struct jffs2_unknown_node) != 12);
BUILD_BUG_ON(sizeof(struct jffs2_raw_dirent) != 40);
BUILD_BUG_ON(sizeof(struct jffs2_raw_inode) != 68);
BUILD_BUG_ON(sizeof(struct jffs2_raw_summary) != 32);
pr_info("version 2.2."
#ifdef CONFIG_JFFS2_FS_WRITEBUFFER
" (NAND)"
#endif
#ifdef CONFIG_JFFS2_SUMMARY
" (SUMMARY) "
#endif
" © 2001-2006 Red Hat, Inc.\n");
jffs2_inode_cachep = kmem_cache_create("jffs2_i",
sizeof(struct jffs2_inode_info),
0, (SLAB_RECLAIM_ACCOUNT|
SLAB_MEM_SPREAD),
jffs2_i_init_once);
if (!jffs2_inode_cachep) {
pr_err("error: Failed to initialise inode cache\n");
return -ENOMEM;
}
ret = jffs2_compressors_init();
if (ret) {
pr_err("error: Failed to initialise compressors\n");
goto out;
}
ret = jffs2_create_slab_caches();
if (ret) {
pr_err("error: Failed to initialise slab caches\n");
goto out_compressors;
}
ret = register_filesystem(&jffs2_fs_type);
if (ret) {
pr_err("error: Failed to register filesystem\n");
goto out_slab;
}
return 0;
out_slab:
jffs2_destroy_slab_caches();
out_compressors:
jffs2_compressors_exit();
out:
kmem_cache_destroy(jffs2_inode_cachep);
return ret;
}
static void __exit exit_jffs2_fs(void)
{
unregister_filesystem(&jffs2_fs_type);
jffs2_destroy_slab_caches();
jffs2_compressors_exit();
/*
* Make sure all delayed rcu free inodes are flushed before we
* destroy cache.
*/
rcu_barrier();
kmem_cache_destroy(jffs2_inode_cachep);
}
module_init(init_jffs2_fs);
module_exit(exit_jffs2_fs);
MODULE_DESCRIPTION("The Journalling Flash File System, v2");
MODULE_AUTHOR("Red Hat, Inc.");
MODULE_LICENSE("GPL"); // Actually dual-licensed, but it doesn't matter for
// the sake of this tag. It's Free Software.

66
fs/jffs2/symlink.c Normal file
View file

@ -0,0 +1,66 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/namei.h>
#include "nodelist.h"
static void *jffs2_follow_link(struct dentry *dentry, struct nameidata *nd);
const struct inode_operations jffs2_symlink_inode_operations =
{
.readlink = generic_readlink,
.follow_link = jffs2_follow_link,
.setattr = jffs2_setattr,
.setxattr = jffs2_setxattr,
.getxattr = jffs2_getxattr,
.listxattr = jffs2_listxattr,
.removexattr = jffs2_removexattr
};
static void *jffs2_follow_link(struct dentry *dentry, struct nameidata *nd)
{
struct jffs2_inode_info *f = JFFS2_INODE_INFO(dentry->d_inode);
char *p = (char *)f->target;
/*
* We don't acquire the f->sem mutex here since the only data we
* use is f->target.
*
* 1. If we are here the inode has already been built and f->target has
* to point to the target path.
* 2. Nobody uses f->target (if the inode is symlink's inode). The
* exception is inode freeing function which frees f->target. But
* it can't be called while we are here and before VFS has
* stopped using our f->target string which we provide by means of
* nd_set_link() call.
*/
if (!p) {
pr_err("%s(): can't find symlink target\n", __func__);
p = ERR_PTR(-EIO);
}
jffs2_dbg(1, "%s(): target path is '%s'\n",
__func__, (char *)f->target);
nd_set_link(nd, p);
/*
* We will unlock the f->sem mutex but VFS will use the f->target string. This is safe
* since the only way that may cause f->target to be changed is iput() operation.
* But VFS will not use f->target after iput() has been called.
*/
return NULL;
}

1353
fs/jffs2/wbuf.c Normal file

File diff suppressed because it is too large Load diff

722
fs/jffs2/write.c Normal file
View file

@ -0,0 +1,722 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/crc32.h>
#include <linux/pagemap.h>
#include <linux/mtd/mtd.h>
#include "nodelist.h"
#include "compr.h"
int jffs2_do_new_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
uint32_t mode, struct jffs2_raw_inode *ri)
{
struct jffs2_inode_cache *ic;
ic = jffs2_alloc_inode_cache();
if (!ic) {
return -ENOMEM;
}
memset(ic, 0, sizeof(*ic));
f->inocache = ic;
f->inocache->pino_nlink = 1; /* Will be overwritten shortly for directories */
f->inocache->nodes = (struct jffs2_raw_node_ref *)f->inocache;
f->inocache->state = INO_STATE_PRESENT;
jffs2_add_ino_cache(c, f->inocache);
jffs2_dbg(1, "%s(): Assigned ino# %d\n", __func__, f->inocache->ino);
ri->ino = cpu_to_je32(f->inocache->ino);
ri->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
ri->nodetype = cpu_to_je16(JFFS2_NODETYPE_INODE);
ri->totlen = cpu_to_je32(PAD(sizeof(*ri)));
ri->hdr_crc = cpu_to_je32(crc32(0, ri, sizeof(struct jffs2_unknown_node)-4));
ri->mode = cpu_to_jemode(mode);
f->highest_version = 1;
ri->version = cpu_to_je32(f->highest_version);
return 0;
}
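/*
 * Note on the hdr_crc computed above and reused throughout this file: the
 * common node header (struct jffs2_unknown_node) is 12 bytes -- magic,
 * nodetype, totlen, hdr_crc -- so crc32(0, ri, sizeof(struct
 * jffs2_unknown_node) - 4) covers the first 8 bytes, i.e. everything except
 * the CRC field itself. A reader performs the matching check, roughly:
 *
 *   if (je32_to_cpu(node->hdr_crc) !=
 *       crc32(0, node, sizeof(struct jffs2_unknown_node) - 4))
 *           treat the node as corrupt and skip it;
 *
 * (sketch only; the real checks live in the scan and read-inode paths).
 */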
/* jffs2_write_dnode - given a raw_inode, allocate a full_dnode for it,
write it to the flash, link it into the existing inode/fragment list */
struct jffs2_full_dnode *jffs2_write_dnode(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_raw_inode *ri, const unsigned char *data,
uint32_t datalen, int alloc_mode)
{
struct jffs2_full_dnode *fn;
size_t retlen;
uint32_t flash_ofs;
struct kvec vecs[2];
int ret;
int retried = 0;
unsigned long cnt = 2;
D1(if(je32_to_cpu(ri->hdr_crc) != crc32(0, ri, sizeof(struct jffs2_unknown_node)-4)) {
pr_crit("Eep. CRC not correct in jffs2_write_dnode()\n");
BUG();
}
);
vecs[0].iov_base = ri;
vecs[0].iov_len = sizeof(*ri);
vecs[1].iov_base = (unsigned char *)data;
vecs[1].iov_len = datalen;
if (je32_to_cpu(ri->totlen) != sizeof(*ri) + datalen) {
pr_warn("%s(): ri->totlen (0x%08x) != sizeof(*ri) (0x%08zx) + datalen (0x%08x)\n",
__func__, je32_to_cpu(ri->totlen),
sizeof(*ri), datalen);
}
fn = jffs2_alloc_full_dnode();
if (!fn)
return ERR_PTR(-ENOMEM);
/* check number of valid vecs */
if (!datalen || !data)
cnt = 1;
retry:
flash_ofs = write_ofs(c);
jffs2_dbg_prewrite_paranoia_check(c, flash_ofs, vecs[0].iov_len + vecs[1].iov_len);
if ((alloc_mode!=ALLOC_GC) && (je32_to_cpu(ri->version) < f->highest_version)) {
BUG_ON(!retried);
jffs2_dbg(1, "%s(): dnode_version %d, highest version %d -> updating dnode\n",
__func__,
je32_to_cpu(ri->version), f->highest_version);
ri->version = cpu_to_je32(++f->highest_version);
ri->node_crc = cpu_to_je32(crc32(0, ri, sizeof(*ri)-8));
}
ret = jffs2_flash_writev(c, vecs, cnt, flash_ofs, &retlen,
(alloc_mode==ALLOC_GC)?0:f->inocache->ino);
if (ret || (retlen != sizeof(*ri) + datalen)) {
pr_notice("Write of %zd bytes at 0x%08x failed. returned %d, retlen %zd\n",
sizeof(*ri) + datalen, flash_ofs, ret, retlen);
/* Mark the space as dirtied */
if (retlen) {
/* Don't change raw->size to match retlen. We may have
written the node header already, and only the data will
seem corrupted, in which case the scan would skip over
any node we write before the original intended end of
this node */
jffs2_add_physical_node_ref(c, flash_ofs | REF_OBSOLETE, PAD(sizeof(*ri)+datalen), NULL);
} else {
pr_notice("Not marking the space at 0x%08x as dirty because the flash driver returned retlen zero\n",
flash_ofs);
}
if (!retried && alloc_mode != ALLOC_NORETRY) {
/* Try to reallocate space and retry */
uint32_t dummy;
struct jffs2_eraseblock *jeb = &c->blocks[flash_ofs / c->sector_size];
retried = 1;
jffs2_dbg(1, "Retrying failed write.\n");
jffs2_dbg_acct_sanity_check(c,jeb);
jffs2_dbg_acct_paranoia_check(c, jeb);
if (alloc_mode == ALLOC_GC) {
ret = jffs2_reserve_space_gc(c, sizeof(*ri) + datalen, &dummy,
JFFS2_SUMMARY_INODE_SIZE);
} else {
/* Locking pain */
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = jffs2_reserve_space(c, sizeof(*ri) + datalen, &dummy,
alloc_mode, JFFS2_SUMMARY_INODE_SIZE);
mutex_lock(&f->sem);
}
if (!ret) {
flash_ofs = write_ofs(c);
jffs2_dbg(1, "Allocated space at 0x%08x to retry failed write.\n",
flash_ofs);
jffs2_dbg_acct_sanity_check(c,jeb);
jffs2_dbg_acct_paranoia_check(c, jeb);
goto retry;
}
jffs2_dbg(1, "Failed to allocate space to retry failed write: %d!\n",
ret);
}
/* Release the full_dnode which is now useless, and return */
jffs2_free_full_dnode(fn);
return ERR_PTR(ret?ret:-EIO);
}
/* Mark the space used */
/* If node covers at least a whole page, or if it starts at the
beginning of a page and runs to the end of the file, or if
it's a hole node, mark it REF_PRISTINE, else REF_NORMAL.
*/
if ((je32_to_cpu(ri->dsize) >= PAGE_CACHE_SIZE) ||
( ((je32_to_cpu(ri->offset)&(PAGE_CACHE_SIZE-1))==0) &&
(je32_to_cpu(ri->dsize)+je32_to_cpu(ri->offset) == je32_to_cpu(ri->isize)))) {
flash_ofs |= REF_PRISTINE;
} else {
flash_ofs |= REF_NORMAL;
}
fn->raw = jffs2_add_physical_node_ref(c, flash_ofs, PAD(sizeof(*ri)+datalen), f->inocache);
if (IS_ERR(fn->raw)) {
void *hold_err = fn->raw;
/* Release the full_dnode which is now useless, and return */
jffs2_free_full_dnode(fn);
return ERR_CAST(hold_err);
}
fn->ofs = je32_to_cpu(ri->offset);
fn->size = je32_to_cpu(ri->dsize);
fn->frags = 0;
jffs2_dbg(1, "jffs2_write_dnode wrote node at 0x%08x(%d) with dsize 0x%x, csize 0x%x, node_crc 0x%08x, data_crc 0x%08x, totlen 0x%08x\n",
flash_ofs & ~3, flash_ofs & 3, je32_to_cpu(ri->dsize),
je32_to_cpu(ri->csize), je32_to_cpu(ri->node_crc),
je32_to_cpu(ri->data_crc), je32_to_cpu(ri->totlen));
if (retried) {
jffs2_dbg_acct_sanity_check(c,NULL);
}
return fn;
}
struct jffs2_full_dirent *jffs2_write_dirent(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_raw_dirent *rd, const unsigned char *name,
uint32_t namelen, int alloc_mode)
{
struct jffs2_full_dirent *fd;
size_t retlen;
struct kvec vecs[2];
uint32_t flash_ofs;
int retried = 0;
int ret;
jffs2_dbg(1, "%s(ino #%u, name at *0x%p \"%s\"->ino #%u, name_crc 0x%08x)\n",
__func__,
je32_to_cpu(rd->pino), name, name, je32_to_cpu(rd->ino),
je32_to_cpu(rd->name_crc));
D1(if(je32_to_cpu(rd->hdr_crc) != crc32(0, rd, sizeof(struct jffs2_unknown_node)-4)) {
pr_crit("Eep. CRC not correct in jffs2_write_dirent()\n");
BUG();
});
if (strnlen(name, namelen) != namelen) {
/* This should never happen, but seems to have done on at least one
occasion: https://dev.laptop.org/ticket/4184 */
pr_crit("Error in jffs2_write_dirent() -- name contains zero bytes!\n");
pr_crit("Directory inode #%u, name at *0x%p \"%s\"->ino #%u, name_crc 0x%08x\n",
je32_to_cpu(rd->pino), name, name, je32_to_cpu(rd->ino),
je32_to_cpu(rd->name_crc));
WARN_ON(1);
return ERR_PTR(-EIO);
}
vecs[0].iov_base = rd;
vecs[0].iov_len = sizeof(*rd);
vecs[1].iov_base = (unsigned char *)name;
vecs[1].iov_len = namelen;
fd = jffs2_alloc_full_dirent(namelen+1);
if (!fd)
return ERR_PTR(-ENOMEM);
fd->version = je32_to_cpu(rd->version);
fd->ino = je32_to_cpu(rd->ino);
fd->nhash = full_name_hash(name, namelen);
fd->type = rd->type;
memcpy(fd->name, name, namelen);
fd->name[namelen]=0;
retry:
flash_ofs = write_ofs(c);
jffs2_dbg_prewrite_paranoia_check(c, flash_ofs, vecs[0].iov_len + vecs[1].iov_len);
if ((alloc_mode!=ALLOC_GC) && (je32_to_cpu(rd->version) < f->highest_version)) {
BUG_ON(!retried);
jffs2_dbg(1, "%s(): dirent_version %d, highest version %d -> updating dirent\n",
__func__,
je32_to_cpu(rd->version), f->highest_version);
rd->version = cpu_to_je32(++f->highest_version);
fd->version = je32_to_cpu(rd->version);
rd->node_crc = cpu_to_je32(crc32(0, rd, sizeof(*rd)-8));
}
ret = jffs2_flash_writev(c, vecs, 2, flash_ofs, &retlen,
(alloc_mode==ALLOC_GC)?0:je32_to_cpu(rd->pino));
if (ret || (retlen != sizeof(*rd) + namelen)) {
pr_notice("Write of %zd bytes at 0x%08x failed. returned %d, retlen %zd\n",
sizeof(*rd) + namelen, flash_ofs, ret, retlen);
/* Mark the space as dirtied */
if (retlen) {
jffs2_add_physical_node_ref(c, flash_ofs | REF_OBSOLETE, PAD(sizeof(*rd)+namelen), NULL);
} else {
pr_notice("Not marking the space at 0x%08x as dirty because the flash driver returned retlen zero\n",
flash_ofs);
}
if (!retried) {
/* Try to reallocate space and retry */
uint32_t dummy;
struct jffs2_eraseblock *jeb = &c->blocks[flash_ofs / c->sector_size];
retried = 1;
jffs2_dbg(1, "Retrying failed write.\n");
jffs2_dbg_acct_sanity_check(c,jeb);
jffs2_dbg_acct_paranoia_check(c, jeb);
if (alloc_mode == ALLOC_GC) {
ret = jffs2_reserve_space_gc(c, sizeof(*rd) + namelen, &dummy,
JFFS2_SUMMARY_DIRENT_SIZE(namelen));
} else {
/* Locking pain */
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = jffs2_reserve_space(c, sizeof(*rd) + namelen, &dummy,
alloc_mode, JFFS2_SUMMARY_DIRENT_SIZE(namelen));
mutex_lock(&f->sem);
}
if (!ret) {
flash_ofs = write_ofs(c);
jffs2_dbg(1, "Allocated space at 0x%08x to retry failed write\n",
flash_ofs);
jffs2_dbg_acct_sanity_check(c,jeb);
jffs2_dbg_acct_paranoia_check(c, jeb);
goto retry;
}
jffs2_dbg(1, "Failed to allocate space to retry failed write: %d!\n",
ret);
}
/* Release the full_dnode which is now useless, and return */
jffs2_free_full_dirent(fd);
return ERR_PTR(ret?ret:-EIO);
}
/* Mark the space used */
fd->raw = jffs2_add_physical_node_ref(c, flash_ofs | dirent_node_state(rd),
PAD(sizeof(*rd)+namelen), f->inocache);
if (IS_ERR(fd->raw)) {
void *hold_err = fd->raw;
/* Release the full_dirent which is now useless, and return */
jffs2_free_full_dirent(fd);
return ERR_CAST(hold_err);
}
if (retried) {
jffs2_dbg_acct_sanity_check(c,NULL);
}
return fd;
}
/* The OS-specific code fills in the metadata in the jffs2_raw_inode for us, so that
we don't have to go digging in struct inode or its equivalent. It should set:
mode, uid, gid, (starting)isize, atime, ctime, mtime */
int jffs2_write_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
struct jffs2_raw_inode *ri, unsigned char *buf,
uint32_t offset, uint32_t writelen, uint32_t *retlen)
{
int ret = 0;
uint32_t writtenlen = 0;
jffs2_dbg(1, "%s(): Ino #%u, ofs 0x%x, len 0x%x\n",
__func__, f->inocache->ino, offset, writelen);
while(writelen) {
struct jffs2_full_dnode *fn;
unsigned char *comprbuf = NULL;
uint16_t comprtype = JFFS2_COMPR_NONE;
uint32_t alloclen;
uint32_t datalen, cdatalen;
int retried = 0;
retry:
jffs2_dbg(2, "jffs2_commit_write() loop: 0x%x to write to 0x%x\n",
writelen, offset);
ret = jffs2_reserve_space(c, sizeof(*ri) + JFFS2_MIN_DATA_LEN,
&alloclen, ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
if (ret) {
jffs2_dbg(1, "jffs2_reserve_space returned %d\n", ret);
break;
}
mutex_lock(&f->sem);
datalen = min_t(uint32_t, writelen, PAGE_CACHE_SIZE - (offset & (PAGE_CACHE_SIZE-1)));
cdatalen = min_t(uint32_t, alloclen - sizeof(*ri), datalen);
comprtype = jffs2_compress(c, f, buf, &comprbuf, &datalen, &cdatalen);
ri->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
ri->nodetype = cpu_to_je16(JFFS2_NODETYPE_INODE);
ri->totlen = cpu_to_je32(sizeof(*ri) + cdatalen);
ri->hdr_crc = cpu_to_je32(crc32(0, ri, sizeof(struct jffs2_unknown_node)-4));
ri->ino = cpu_to_je32(f->inocache->ino);
ri->version = cpu_to_je32(++f->highest_version);
ri->isize = cpu_to_je32(max(je32_to_cpu(ri->isize), offset + datalen));
ri->offset = cpu_to_je32(offset);
ri->csize = cpu_to_je32(cdatalen);
ri->dsize = cpu_to_je32(datalen);
ri->compr = comprtype & 0xff;
ri->usercompr = (comprtype >> 8 ) & 0xff;
ri->node_crc = cpu_to_je32(crc32(0, ri, sizeof(*ri)-8));
ri->data_crc = cpu_to_je32(crc32(0, comprbuf, cdatalen));
fn = jffs2_write_dnode(c, f, ri, comprbuf, cdatalen, ALLOC_NORETRY);
jffs2_free_comprbuf(comprbuf, buf);
if (IS_ERR(fn)) {
ret = PTR_ERR(fn);
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
if (!retried) {
/* Write error to be retried */
retried = 1;
jffs2_dbg(1, "Retrying node write in jffs2_write_inode_range()\n");
goto retry;
}
break;
}
ret = jffs2_add_full_dnode_to_inode(c, f, fn);
if (f->metadata) {
jffs2_mark_node_obsolete(c, f->metadata->raw);
jffs2_free_full_dnode(f->metadata);
f->metadata = NULL;
}
if (ret) {
/* Eep */
jffs2_dbg(1, "Eep. add_full_dnode_to_inode() failed in commit_write, returned %d\n",
ret);
jffs2_mark_node_obsolete(c, fn->raw);
jffs2_free_full_dnode(fn);
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
break;
}
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
if (!datalen) {
pr_warn("Eep. We didn't actually write any data in jffs2_write_inode_range()\n");
ret = -EIO;
break;
}
jffs2_dbg(1, "increasing writtenlen by %d\n", datalen);
writtenlen += datalen;
offset += datalen;
writelen -= datalen;
buf += datalen;
}
*retlen = writtenlen;
return ret;
}
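/*
 * Worked example of the chunking in the loop above (assuming a 4096-byte
 * PAGE_CACHE_SIZE and ignoring compression and the cdatalen clamp; the
 * figures are illustrative): a 10000-byte write starting at offset 0x1320
 * is split at page boundaries as
 *
 *   pass 1: datalen = min(10000, 4096 - 0x320) = 3296   (reaches 0x2000)
 *   pass 2: datalen = min(6704,  4096)         = 4096   (reaches 0x3000)
 *   pass 3: datalen = min(2608,  4096)         = 2608
 *
 * so no dnode ever spans a page boundary, which ties in with the
 * REF_PRISTINE marking of whole-page nodes in jffs2_write_dnode() above.
 */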
int jffs2_do_create(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f,
struct jffs2_inode_info *f, struct jffs2_raw_inode *ri,
const struct qstr *qstr)
{
struct jffs2_raw_dirent *rd;
struct jffs2_full_dnode *fn;
struct jffs2_full_dirent *fd;
uint32_t alloclen;
int ret;
/* Try to reserve enough space for both node and dirent.
* Just the node will do for now, though
*/
ret = jffs2_reserve_space(c, sizeof(*ri), &alloclen, ALLOC_NORMAL,
JFFS2_SUMMARY_INODE_SIZE);
jffs2_dbg(1, "%s(): reserved 0x%x bytes\n", __func__, alloclen);
if (ret)
return ret;
mutex_lock(&f->sem);
ri->data_crc = cpu_to_je32(0);
ri->node_crc = cpu_to_je32(crc32(0, ri, sizeof(*ri)-8));
fn = jffs2_write_dnode(c, f, ri, NULL, 0, ALLOC_NORMAL);
jffs2_dbg(1, "jffs2_do_create created file with mode 0x%x\n",
jemode_to_cpu(ri->mode));
if (IS_ERR(fn)) {
jffs2_dbg(1, "jffs2_write_dnode() failed\n");
/* Eeek. Wave bye bye */
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
return PTR_ERR(fn);
}
/* No data here. Only a metadata node, which will be
obsoleted by the first data write
*/
f->metadata = fn;
mutex_unlock(&f->sem);
jffs2_complete_reservation(c);
ret = jffs2_init_security(&f->vfs_inode, &dir_f->vfs_inode, qstr);
if (ret)
return ret;
ret = jffs2_init_acl_post(&f->vfs_inode);
if (ret)
return ret;
ret = jffs2_reserve_space(c, sizeof(*rd)+qstr->len, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_DIRENT_SIZE(qstr->len));
if (ret) {
/* Eep. */
jffs2_dbg(1, "jffs2_reserve_space() for dirent failed\n");
return ret;
}
rd = jffs2_alloc_raw_dirent();
if (!rd) {
/* Argh. Now we treat it like a normal delete */
jffs2_complete_reservation(c);
return -ENOMEM;
}
mutex_lock(&dir_f->sem);
rd->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
rd->nodetype = cpu_to_je16(JFFS2_NODETYPE_DIRENT);
rd->totlen = cpu_to_je32(sizeof(*rd) + qstr->len);
rd->hdr_crc = cpu_to_je32(crc32(0, rd, sizeof(struct jffs2_unknown_node)-4));
rd->pino = cpu_to_je32(dir_f->inocache->ino);
rd->version = cpu_to_je32(++dir_f->highest_version);
rd->ino = ri->ino;
rd->mctime = ri->ctime;
rd->nsize = qstr->len;
rd->type = DT_REG;
rd->node_crc = cpu_to_je32(crc32(0, rd, sizeof(*rd)-8));
rd->name_crc = cpu_to_je32(crc32(0, qstr->name, qstr->len));
fd = jffs2_write_dirent(c, dir_f, rd, qstr->name, qstr->len, ALLOC_NORMAL);
jffs2_free_raw_dirent(rd);
if (IS_ERR(fd)) {
/* dirent failed to write. Delete the inode normally
as if it were the final unlink() */
jffs2_complete_reservation(c);
mutex_unlock(&dir_f->sem);
return PTR_ERR(fd);
}
/* Link the fd into the inode's list, obsoleting an old
one if necessary. */
jffs2_add_fd_to_list(c, fd, &dir_f->dents);
jffs2_complete_reservation(c);
mutex_unlock(&dir_f->sem);
return 0;
}
int jffs2_do_unlink(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f,
const char *name, int namelen, struct jffs2_inode_info *dead_f,
uint32_t time)
{
struct jffs2_raw_dirent *rd;
struct jffs2_full_dirent *fd;
uint32_t alloclen;
int ret;
if (!jffs2_can_mark_obsolete(c)) {
/* We can't mark stuff obsolete on the medium. We need to write a deletion dirent */
rd = jffs2_alloc_raw_dirent();
if (!rd)
return -ENOMEM;
ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &alloclen,
ALLOC_DELETION, JFFS2_SUMMARY_DIRENT_SIZE(namelen));
if (ret) {
jffs2_free_raw_dirent(rd);
return ret;
}
mutex_lock(&dir_f->sem);
/* Build a deletion node */
rd->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
rd->nodetype = cpu_to_je16(JFFS2_NODETYPE_DIRENT);
rd->totlen = cpu_to_je32(sizeof(*rd) + namelen);
rd->hdr_crc = cpu_to_je32(crc32(0, rd, sizeof(struct jffs2_unknown_node)-4));
rd->pino = cpu_to_je32(dir_f->inocache->ino);
rd->version = cpu_to_je32(++dir_f->highest_version);
rd->ino = cpu_to_je32(0);
rd->mctime = cpu_to_je32(time);
rd->nsize = namelen;
rd->type = DT_UNKNOWN;
rd->node_crc = cpu_to_je32(crc32(0, rd, sizeof(*rd)-8));
rd->name_crc = cpu_to_je32(crc32(0, name, namelen));
fd = jffs2_write_dirent(c, dir_f, rd, name, namelen, ALLOC_DELETION);
jffs2_free_raw_dirent(rd);
if (IS_ERR(fd)) {
jffs2_complete_reservation(c);
mutex_unlock(&dir_f->sem);
return PTR_ERR(fd);
}
/* File it. This will mark the old one obsolete. */
jffs2_add_fd_to_list(c, fd, &dir_f->dents);
mutex_unlock(&dir_f->sem);
} else {
uint32_t nhash = full_name_hash(name, namelen);
fd = dir_f->dents;
/* We don't actually want to reserve any space, but we do
want to be holding the alloc_sem when we write to flash */
mutex_lock(&c->alloc_sem);
mutex_lock(&dir_f->sem);
for (fd = dir_f->dents; fd; fd = fd->next) {
if (fd->nhash == nhash &&
!memcmp(fd->name, name, namelen) &&
!fd->name[namelen]) {
jffs2_dbg(1, "Marking old dirent node (ino #%u) @%08x obsolete\n",
fd->ino, ref_offset(fd->raw));
jffs2_mark_node_obsolete(c, fd->raw);
/* We don't want to remove it from the list immediately,
because that screws up getdents()/seek() semantics even
more than they're screwed already. Turn it into a
node-less deletion dirent instead -- a placeholder */
fd->raw = NULL;
fd->ino = 0;
break;
}
}
mutex_unlock(&dir_f->sem);
}
/* dead_f is NULL if this was a rename not a real unlink */
/* Also catch the !f->inocache case, where there was a dirent
pointing to an inode which didn't exist. */
if (dead_f && dead_f->inocache) {
mutex_lock(&dead_f->sem);
if (S_ISDIR(OFNI_EDONI_2SFFJ(dead_f)->i_mode)) {
while (dead_f->dents) {
/* There can be only deleted ones */
fd = dead_f->dents;
dead_f->dents = fd->next;
if (fd->ino) {
pr_warn("Deleting inode #%u with active dentry \"%s\"->ino #%u\n",
dead_f->inocache->ino,
fd->name, fd->ino);
} else {
jffs2_dbg(1, "Removing deletion dirent for \"%s\" from dir ino #%u\n",
fd->name,
dead_f->inocache->ino);
}
if (fd->raw)
jffs2_mark_node_obsolete(c, fd->raw);
jffs2_free_full_dirent(fd);
}
dead_f->inocache->pino_nlink = 0;
} else
dead_f->inocache->pino_nlink--;
/* NB: Caller must set inode nlink if appropriate */
mutex_unlock(&dead_f->sem);
}
jffs2_complete_reservation(c);
return 0;
}
int jffs2_do_link (struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, uint32_t ino, uint8_t type, const char *name, int namelen, uint32_t time)
{
struct jffs2_raw_dirent *rd;
struct jffs2_full_dirent *fd;
uint32_t alloclen;
int ret;
rd = jffs2_alloc_raw_dirent();
if (!rd)
return -ENOMEM;
ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &alloclen,
ALLOC_NORMAL, JFFS2_SUMMARY_DIRENT_SIZE(namelen));
if (ret) {
jffs2_free_raw_dirent(rd);
return ret;
}
mutex_lock(&dir_f->sem);
/* Build a deletion node */
rd->magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
rd->nodetype = cpu_to_je16(JFFS2_NODETYPE_DIRENT);
rd->totlen = cpu_to_je32(sizeof(*rd) + namelen);
rd->hdr_crc = cpu_to_je32(crc32(0, rd, sizeof(struct jffs2_unknown_node)-4));
rd->pino = cpu_to_je32(dir_f->inocache->ino);
rd->version = cpu_to_je32(++dir_f->highest_version);
rd->ino = cpu_to_je32(ino);
rd->mctime = cpu_to_je32(time);
rd->nsize = namelen;
rd->type = type;
rd->node_crc = cpu_to_je32(crc32(0, rd, sizeof(*rd)-8));
rd->name_crc = cpu_to_je32(crc32(0, name, namelen));
fd = jffs2_write_dirent(c, dir_f, rd, name, namelen, ALLOC_NORMAL);
jffs2_free_raw_dirent(rd);
if (IS_ERR(fd)) {
jffs2_complete_reservation(c);
mutex_unlock(&dir_f->sem);
return PTR_ERR(fd);
}
/* File it. This will mark the old one obsolete. */
jffs2_add_fd_to_list(c, fd, &dir_f->dents);
jffs2_complete_reservation(c);
mutex_unlock(&dir_f->sem);
return 0;
}

51
fs/jffs2/writev.c Normal file
View file

@ -0,0 +1,51 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2001-2007 Red Hat, Inc.
*
* Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#include <linux/kernel.h>
#include <linux/mtd/mtd.h>
#include "nodelist.h"
int jffs2_flash_direct_writev(struct jffs2_sb_info *c, const struct kvec *vecs,
unsigned long count, loff_t to, size_t *retlen)
{
if (!jffs2_is_writebuffered(c)) {
if (jffs2_sum_active()) {
int res;
res = jffs2_sum_add_kvec(c, vecs, count, (uint32_t) to);
if (res) {
return res;
}
}
}
return mtd_writev(c->mtd, vecs, count, to, retlen);
}
int jffs2_flash_direct_write(struct jffs2_sb_info *c, loff_t ofs, size_t len,
size_t *retlen, const u_char *buf)
{
int ret;
ret = mtd_write(c->mtd, ofs, len, retlen, buf);
if (jffs2_sum_active()) {
struct kvec vecs[1];
int res;
vecs[0].iov_base = (unsigned char *) buf;
vecs[0].iov_len = len;
res = jffs2_sum_add_kvec(c, vecs, 1, (uint32_t) ofs);
if (res) {
return res;
}
}
return ret;
}

1341
fs/jffs2/xattr.c Normal file

File diff suppressed because it is too large Load diff

133
fs/jffs2/xattr.h Normal file
View file

@ -0,0 +1,133 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2006 NEC Corporation
*
* Created by KaiGai Kohei <kaigai@ak.jp.nec.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#ifndef _JFFS2_FS_XATTR_H_
#define _JFFS2_FS_XATTR_H_
#include <linux/xattr.h>
#include <linux/list.h>
#define JFFS2_XFLAGS_HOT (0x01) /* This datum is HOT */
#define JFFS2_XFLAGS_BIND (0x02) /* This datum is not reclaimed */
#define JFFS2_XFLAGS_DEAD (0x40) /* This datum is already dead */
#define JFFS2_XFLAGS_INVALID (0x80) /* This datum contains crc error */
struct jffs2_xattr_datum
{
void *always_null;
struct jffs2_raw_node_ref *node;
uint8_t class;
uint8_t flags;
uint16_t xprefix; /* see JFFS2_XATTR_PREFIX_* */
struct list_head xindex; /* chained from c->xattrindex[n] */
atomic_t refcnt; /* # of xattr_ref refers this */
uint32_t xid;
uint32_t version;
uint32_t data_crc;
uint32_t hashkey;
char *xname; /* XATTR name without prefix */
uint32_t name_len; /* length of xname */
char *xvalue; /* XATTR value */
uint32_t value_len; /* length of xvalue */
};
struct jffs2_inode_cache;
struct jffs2_xattr_ref
{
void *always_null;
struct jffs2_raw_node_ref *node;
uint8_t class;
uint8_t flags; /* Currently unused */
u16 unused;
uint32_t xseqno;
union {
struct jffs2_inode_cache *ic; /* reference to jffs2_inode_cache */
uint32_t ino; /* only used in scanning/building */
};
union {
struct jffs2_xattr_datum *xd; /* reference to jffs2_xattr_datum */
uint32_t xid; /* only used in scanning/building */
};
struct jffs2_xattr_ref *next; /* chained from ic->xref_list */
};
#define XREF_DELETE_MARKER (0x00000001)
static inline int is_xattr_ref_dead(struct jffs2_xattr_ref *ref)
{
return ((ref->xseqno & XREF_DELETE_MARKER) != 0);
}
#ifdef CONFIG_JFFS2_FS_XATTR
extern void jffs2_init_xattr_subsystem(struct jffs2_sb_info *c);
extern void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c);
extern void jffs2_clear_xattr_subsystem(struct jffs2_sb_info *c);
extern struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c,
uint32_t xid, uint32_t version);
extern void jffs2_xattr_do_crccheck_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic);
extern void jffs2_xattr_delete_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic);
extern void jffs2_xattr_free_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic);
extern int jffs2_garbage_collect_xattr_datum(struct jffs2_sb_info *c, struct jffs2_xattr_datum *xd,
struct jffs2_raw_node_ref *raw);
extern int jffs2_garbage_collect_xattr_ref(struct jffs2_sb_info *c, struct jffs2_xattr_ref *ref,
struct jffs2_raw_node_ref *raw);
extern int jffs2_verify_xattr(struct jffs2_sb_info *c);
extern void jffs2_release_xattr_datum(struct jffs2_sb_info *c, struct jffs2_xattr_datum *xd);
extern void jffs2_release_xattr_ref(struct jffs2_sb_info *c, struct jffs2_xattr_ref *ref);
extern int do_jffs2_getxattr(struct inode *inode, int xprefix, const char *xname,
char *buffer, size_t size);
extern int do_jffs2_setxattr(struct inode *inode, int xprefix, const char *xname,
const char *buffer, size_t size, int flags);
extern const struct xattr_handler *jffs2_xattr_handlers[];
extern const struct xattr_handler jffs2_user_xattr_handler;
extern const struct xattr_handler jffs2_trusted_xattr_handler;
extern ssize_t jffs2_listxattr(struct dentry *, char *, size_t);
#define jffs2_getxattr generic_getxattr
#define jffs2_setxattr generic_setxattr
#define jffs2_removexattr generic_removexattr
#else
#define jffs2_init_xattr_subsystem(c)
#define jffs2_build_xattr_subsystem(c)
#define jffs2_clear_xattr_subsystem(c)
#define jffs2_xattr_do_crccheck_inode(c, ic)
#define jffs2_xattr_delete_inode(c, ic)
#define jffs2_xattr_free_inode(c, ic)
#define jffs2_verify_xattr(c) (1)
#define jffs2_xattr_handlers NULL
#define jffs2_listxattr NULL
#define jffs2_getxattr NULL
#define jffs2_setxattr NULL
#define jffs2_removexattr NULL
#endif /* CONFIG_JFFS2_FS_XATTR */
#ifdef CONFIG_JFFS2_FS_SECURITY
extern int jffs2_init_security(struct inode *inode, struct inode *dir,
const struct qstr *qstr);
extern const struct xattr_handler jffs2_security_xattr_handler;
#else
#define jffs2_init_security(inode,dir,qstr) (0)
#endif /* CONFIG_JFFS2_FS_SECURITY */
#endif /* _JFFS2_FS_XATTR_H_ */

55
fs/jffs2/xattr_trusted.c Normal file
View file

@ -0,0 +1,55 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2006 NEC Corporation
*
* Created by KaiGai Kohei <kaigai@ak.jp.nec.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/jffs2.h>
#include <linux/xattr.h>
#include <linux/mtd/mtd.h>
#include "nodelist.h"
static int jffs2_trusted_getxattr(struct dentry *dentry, const char *name,
void *buffer, size_t size, int type)
{
if (!strcmp(name, ""))
return -EINVAL;
return do_jffs2_getxattr(dentry->d_inode, JFFS2_XPREFIX_TRUSTED,
name, buffer, size);
}
static int jffs2_trusted_setxattr(struct dentry *dentry, const char *name,
const void *buffer, size_t size, int flags, int type)
{
if (!strcmp(name, ""))
return -EINVAL;
return do_jffs2_setxattr(dentry->d_inode, JFFS2_XPREFIX_TRUSTED,
name, buffer, size, flags);
}
static size_t jffs2_trusted_listxattr(struct dentry *dentry, char *list,
size_t list_size, const char *name, size_t name_len, int type)
{
size_t retlen = XATTR_TRUSTED_PREFIX_LEN + name_len + 1;
if (list && retlen<=list_size) {
strcpy(list, XATTR_TRUSTED_PREFIX);
strcpy(list + XATTR_TRUSTED_PREFIX_LEN, name);
}
return retlen;
}
const struct xattr_handler jffs2_trusted_xattr_handler = {
.prefix = XATTR_TRUSTED_PREFIX,
.list = jffs2_trusted_listxattr,
.set = jffs2_trusted_setxattr,
.get = jffs2_trusted_getxattr
};
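/*
 * Illustration of the listxattr contract (attribute name "foo" is an
 * assumption): for an attribute stored through this handler, the helper
 * above appends
 *
 *   "trusted.foo\0"   ->  retlen = 8 + 3 + 1 = 12 bytes
 *
 * to the caller's buffer. When 'list' is NULL or the remaining space is too
 * small, only retlen is returned so the caller can size a buffer and retry;
 * the user handler below follows the same pattern.
 */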

55
fs/jffs2/xattr_user.c Normal file
View file

@ -0,0 +1,55 @@
/*
* JFFS2 -- Journalling Flash File System, Version 2.
*
* Copyright © 2006 NEC Corporation
*
* Created by KaiGai Kohei <kaigai@ak.jp.nec.com>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
*/
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/jffs2.h>
#include <linux/xattr.h>
#include <linux/mtd/mtd.h>
#include "nodelist.h"
static int jffs2_user_getxattr(struct dentry *dentry, const char *name,
void *buffer, size_t size, int type)
{
if (!strcmp(name, ""))
return -EINVAL;
return do_jffs2_getxattr(dentry->d_inode, JFFS2_XPREFIX_USER,
name, buffer, size);
}
static int jffs2_user_setxattr(struct dentry *dentry, const char *name,
const void *buffer, size_t size, int flags, int type)
{
if (!strcmp(name, ""))
return -EINVAL;
return do_jffs2_setxattr(dentry->d_inode, JFFS2_XPREFIX_USER,
name, buffer, size, flags);
}
static size_t jffs2_user_listxattr(struct dentry *dentry, char *list,
size_t list_size, const char *name, size_t name_len, int type)
{
size_t retlen = XATTR_USER_PREFIX_LEN + name_len + 1;
if (list && retlen <= list_size) {
strcpy(list, XATTR_USER_PREFIX);
strcpy(list + XATTR_USER_PREFIX_LEN, name);
}
return retlen;
}
const struct xattr_handler jffs2_user_xattr_handler = {
.prefix = XATTR_USER_PREFIX,
.list = jffs2_user_listxattr,
.set = jffs2_user_setxattr,
.get = jffs2_user_getxattr
};