[f2fs-dev] [PATCH 2/2] f2fs: support data compression
From: Jaegeuk K. <ja...@ke...> - 2019-10-22 17:16:19
From: Chao Yu <yu...@hu...>
This patch tries to support compression in f2fs.
- A new term named cluster is defined as the basic unit of compression; a
file can be divided into multiple clusters logically. One cluster includes
4 << n (n >= 0) logical pages, the compression size equals the cluster size,
and each cluster can be compressed independently or left uncompressed (see
the sketch after this list).
- In the cluster metadata layout, one special flag indicates whether the
cluster is a compressed one or a normal one; for a compressed cluster, the
following metadata maps the cluster to [1, 4 << n - 1] physical blocks, in
which f2fs stores the compress header and the compressed data.
- In order to eliminate write amplification during overwrite, F2FS only
supports compression on write-once files; data can be compressed only when
all logical blocks in the file are valid and the cluster's compression
ratio is lower than the specified threshold.
- To enable compression on a regular inode, there are three ways:
* chattr +c file
* chattr +c dir; touch dir/file
* mount w/ -o compress_extension=ext; touch file.ext
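A minimal sketch of the index math implied above (illustration only; it
mirrors the offset_in_cluster()/cluster_idx() helpers added in compress.c,
with cluster_size = 4 << n):

	/* which cluster a page index belongs to, and its slot inside it */
	unsigned int cluster = index / cluster_size;
	unsigned int offset  = index % cluster_size;
	pgoff_t start_idx    = cluster * cluster_size;	/* first page of cluster */

	/* e.g. cluster_size = 4 (n = 0): index 13 -> cluster 3, offset 1 */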
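As a concrete example of the third way (option names as added to
Documentation/filesystems/f2fs.txt below; device and mount point are
placeholders):

	# mount -t f2fs -o compress_algorithm=lz4,compress_log_size=2,compress_extension=mp4 /dev/sdX /mnt
	# touch /mnt/video.mp4	# compression inherited via its extension

Here compress_log_size=2 gives 4KB * (1 << 2) = 16KB clusters, i.e. the
default cluster size.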
Compress metadata layout:
[Dnode Structure]
+-----------------------------------------------+
| cluster 1 | cluster 2 | ......... | cluster N |
+-----------------------------------------------+
. . . .
. . . .
. Compressed Cluster . . Normal Cluster .
+----------+---------+---------+---------+ +---------+---------+---------+---------+
|compr flag| block 1 | block 2 | block 3 | | block 1 | block 2 | block 3 | block 4 |
+----------+---------+---------+---------+ +---------+---------+---------+---------+
. .
. .
. .
+-------------+-------------+----------+----------------------------+
| data length | data chksum | reserved | compressed data |
+-------------+-------------+----------+----------------------------+
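The header fields in the diagram correspond to what compress.c accesses
through cc->cbuf (clen, chksum, cdata). A sketch of the on-disk layout
under those names — the authoritative definition lives in the f2fs.h hunk,
which is truncated in this posting, so the struct name and the reserved
width are assumptions here:

	struct compress_data {
		__le32 clen;		/* data length of compressed data */
		__le32 chksum;		/* data chksum (written as 0 for now) */
		__le32 reserved[4];	/* reserved: width assumed */
		u8 cdata[];		/* compressed data */
	};
	/* COMPRESS_HEADER_SIZE covers everything before cdata[] */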
Changelog:
20190326:
- fix error handling of read_end_io().
- remove unneeded comments in f2fs_encrypt_one_page().
20190327:
- fix wrong use of f2fs_cluster_is_full() in f2fs_mpage_readpages().
- don't jump into loop directly to avoid uninitialized variables.
- add TODO tag in error path of f2fs_write_cache_pages().
20190328:
- fix wrong merge condition in f2fs_read_multi_pages().
- check compressed file in f2fs_post_read_required().
20190401:
- allow overwrite on non-compressed cluster.
- check cluster meta before writing compressed data.
20190402:
- don't preallocate blocks for compressed file.
- add lz4 compress algorithm.
- process multiple post read works in one workqueue.
Previously, f2fs processed post-read works in multiple workqueues, which
showed low performance due to the scheduling overhead of multiple
workqueues executing in order.
- compress: support buffered overwrite
C: compress cluster flag
V: valid block address
N: NEW_ADDR
One cluster contains 4 blocks:
before overwrite after overwrite
- VVVV -> CVNN
- CVNN -> VVVV
- CVNN -> CVNN
- CVNN -> CVVV
- CVVV -> CVNN
- CVVV -> CVVV
For example, VVVV -> CVNN means a raw cluster of four valid blocks is
rewritten as a compressed cluster: the compress flag, one valid data
block, and two NEW_ADDR placeholders.
[Jaegeuk Kim]
- add tracepoint for f2fs_{,de}compress_pages()
- fix many bugs and add some compression stats
Signed-off-by: Chao Yu <yu...@hu...>
Signed-off-by: Jaegeuk Kim <ja...@ke...>
---
Documentation/filesystems/f2fs.txt | 48 ++
fs/f2fs/Kconfig | 4 +
fs/f2fs/Makefile | 2 +-
fs/f2fs/compress.c | 1066 ++++++++++++++++++++++++++++
fs/f2fs/data.c | 468 ++++++++++--
fs/f2fs/debug.c | 6 +
fs/f2fs/f2fs.h | 205 +++++-
fs/f2fs/file.c | 124 +++-
fs/f2fs/inode.c | 29 +
fs/f2fs/namei.c | 47 ++
fs/f2fs/segment.c | 5 +-
fs/f2fs/segment.h | 12 -
fs/f2fs/super.c | 115 ++-
fs/f2fs/sysfs.c | 7 +
include/linux/f2fs_fs.h | 8 +
include/trace/events/f2fs.h | 99 +++
16 files changed, 2139 insertions(+), 106 deletions(-)
create mode 100644 fs/f2fs/compress.c
diff --git a/Documentation/filesystems/f2fs.txt b/Documentation/filesystems/f2fs.txt
index 29020af0cff9..d1accf665c86 100644
--- a/Documentation/filesystems/f2fs.txt
+++ b/Documentation/filesystems/f2fs.txt
@@ -235,6 +235,13 @@ checkpoint=%s[:%u[%]] Set to "disable" to turn off checkpointing. Set to "en
hide up to all remaining free space. The actual space that
would be unusable can be viewed at /sys/fs/f2fs/<disk>/unusable
This space is reclaimed once checkpoint=enable.
+compress_algorithm=%s Control compress algorithm, currently f2fs supports "lzo"
+ and "lz4" algorithms.
+compress_log_size=%u Support configuring compress cluster size, the size will
+ be 4KB * (1 << %u); 16KB is the minimum size and also
+ the default size.
+compress_extension=%s Support adding specified extensions, so that f2fs can
+ enable compression on files with those extensions.
================================================================================
DEBUGFS ENTRIES
@@ -837,3 +844,44 @@ zero or random data, which is useful to the below scenario where:
4. address = fibmap(fd, offset)
5. open(blkdev)
6. write(blkdev, address)
+
+Compression implementation
+--------------------------
+
+- A new term named cluster is defined as the basic unit of compression; a
+file can be divided into multiple clusters logically. One cluster includes
+4 << n (n >= 0) logical pages, the compression size equals the cluster
+size, and each cluster can be compressed independently or left uncompressed.
+
+- In the cluster metadata layout, one special flag indicates whether the
+cluster is a compressed one or a normal one; for a compressed cluster, the
+following metadata maps the cluster to [1, 4 << n - 1] physical blocks, in
+which f2fs stores the compress header and the compressed data.
+
+- In order to eliminate write amplification during overwrite, F2FS only
+supports compression on write-once files; data can be compressed only when
+all logical blocks in the file are valid and the cluster's compression
+ratio is lower than the specified threshold.
+
+- To enable compression on a regular inode, there are three ways:
+* chattr +c file
+* chattr +c dir; touch dir/file
+* mount w/ -o compress_extension=ext; touch file.ext
+
+Compress metadata layout:
+ [Dnode Structure]
+ +-----------------------------------------------+
+ | cluster 1 | cluster 2 | ......... | cluster N |
+ +-----------------------------------------------+
+ . . . .
+ . . . .
+ . Compressed Cluster . . Normal Cluster .
++----------+---------+---------+---------+ +---------+---------+---------+---------+
+|compr flag| block 1 | block 2 | block 3 | | block 1 | block 2 | block 3 | block 4 |
++----------+---------+---------+---------+ +---------+---------+---------+---------+
+ . .
+ . .
+ . .
+ +-------------+-------------+----------+----------------------------+
+ | data length | data chksum | reserved | compressed data |
+ +-------------+-------------+----------+----------------------------+
diff --git a/fs/f2fs/Kconfig b/fs/f2fs/Kconfig
index 652fd2e2b23d..c12854c3b1a1 100644
--- a/fs/f2fs/Kconfig
+++ b/fs/f2fs/Kconfig
@@ -6,6 +6,10 @@ config F2FS_FS
select CRYPTO
select CRYPTO_CRC32
select F2FS_FS_XATTR if FS_ENCRYPTION
+ select LZO_COMPRESS
+ select LZO_DECOMPRESS
+ select LZ4_COMPRESS
+ select LZ4_DECOMPRESS
help
F2FS is based on Log-structured File System (LFS), which supports
versatile "flash-friendly" features. The design has been focused on
diff --git a/fs/f2fs/Makefile b/fs/f2fs/Makefile
index 2aaecc63834f..a5c771a6367d 100644
--- a/fs/f2fs/Makefile
+++ b/fs/f2fs/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_F2FS_FS) += f2fs.o
f2fs-y := dir.o file.o inode.o namei.o hash.o super.o inline.o
f2fs-y += checkpoint.o gc.o data.o node.o segment.o recovery.o
-f2fs-y += shrinker.o extent_cache.o sysfs.o
+f2fs-y += shrinker.o extent_cache.o sysfs.o compress.o
f2fs-$(CONFIG_F2FS_STAT_FS) += debug.o
f2fs-$(CONFIG_F2FS_FS_XATTR) += xattr.o
f2fs-$(CONFIG_F2FS_FS_POSIX_ACL) += acl.o
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
new file mode 100644
index 000000000000..f276d82a67aa
--- /dev/null
+++ b/fs/f2fs/compress.c
@@ -0,0 +1,1066 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * f2fs compress support
+ *
+ * Copyright (c) 2019 Chao Yu <ch...@ke...>
+ */
+
+#include <linux/fs.h>
+#include <linux/f2fs_fs.h>
+#include <linux/writeback.h>
+#include <linux/lzo.h>
+#include <linux/lz4.h>
+
+#include "f2fs.h"
+#include "node.h"
+#include <trace/events/f2fs.h>
+
+struct f2fs_compress_ops {
+ int (*init_compress_ctx)(struct compress_ctx *cc);
+ void (*destroy_compress_ctx)(struct compress_ctx *cc);
+ int (*compress_pages)(struct compress_ctx *cc);
+ int (*decompress_pages)(struct decompress_io_ctx *dic);
+};
+
+static unsigned int offset_in_cluster(struct compress_ctx *cc, pgoff_t index)
+{
+ return index % cc->cluster_size;
+}
+
+static unsigned int cluster_idx(struct compress_ctx *cc, pgoff_t index)
+{
+ return index / cc->cluster_size;
+}
+
+static unsigned int start_idx_of_cluster(struct compress_ctx *cc)
+{
+ return cc->cluster_idx * cc->cluster_size;
+}
+
+bool f2fs_is_compressed_page(struct page *page)
+{
+ if (!page_private(page))
+ return false;
+ if (IS_ATOMIC_WRITTEN_PAGE(page) || IS_DUMMY_WRITTEN_PAGE(page))
+ return false;
+ return *((u32 *)page_private(page)) == F2FS_COMPRESSED_PAGE_MAGIC;
+}
+
+static void f2fs_set_compressed_page(struct page *page,
+ struct inode *inode, pgoff_t index, void *data, refcount_t *r)
+{
+ SetPagePrivate(page);
+ set_page_private(page, (unsigned long)data);
+
+ /* i_crypto_info and iv index */
+ page->index = index;
+ page->mapping = inode->i_mapping;
+ if (r)
+ refcount_inc(r);
+}
+
+static void f2fs_put_compressed_page(struct page *page)
+{
+ set_page_private(page, (unsigned long)NULL);
+ ClearPagePrivate(page);
+ page->mapping = NULL;
+ unlock_page(page);
+ put_page(page);
+}
+
+struct page *f2fs_compress_control_page(struct page *page)
+{
+ return ((struct compress_io_ctx *)page_private(page))->rpages[0];
+}
+
+int f2fs_init_compress_ctx(struct compress_ctx *cc)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
+
+ if (cc->rpages)
+ return 0;
+ cc->rpages = f2fs_kzalloc(sbi, sizeof(struct page *) * cc->cluster_size,
+ GFP_KERNEL);
+ if (!cc->rpages)
+ return -ENOMEM;
+ return 0;
+}
+
+void f2fs_destroy_compress_ctx(struct compress_ctx *cc)
+{
+ kvfree(cc->rpages);
+}
+
+int f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page)
+{
+ unsigned int cluster_ofs;
+
+ if (!f2fs_cluster_can_merge_page(cc, page->index))
+ return -EAGAIN;
+
+ cluster_ofs = offset_in_cluster(cc, page->index);
+ cc->rpages[cluster_ofs] = page;
+ cc->nr_rpages++;
+ cc->cluster_idx = cluster_idx(cc, page->index);
+ return 0;
+}
+
+static int lzo_init_compress_ctx(struct compress_ctx *cc)
+{
+ cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
+ LZO1X_MEM_COMPRESS, GFP_KERNEL);
+ if (!cc->private)
+ return -ENOMEM;
+
+ cc->clen = lzo1x_worst_compress(PAGE_SIZE * cc->cluster_size);
+ return 0;
+}
+
+static void lzo_destroy_compress_ctx(struct compress_ctx *cc)
+{
+ kvfree(cc->private);
+ cc->private = NULL;
+}
+
+static int lzo_compress_pages(struct compress_ctx *cc)
+{
+ int ret;
+
+ ret = lzo1x_1_compress(cc->rbuf, cc->rlen, cc->cbuf->cdata,
+ &cc->clen, cc->private);
+ if (ret != LZO_E_OK) {
+ printk_ratelimited("%sF2FS-fs: lzo compress failed, ret:%d\n",
+ KERN_ERR, ret);
+ return -EIO;
+ }
+ return 0;
+}
+
+static int lzo_decompress_pages(struct decompress_io_ctx *dic)
+{
+ int ret;
+
+ ret = lzo1x_decompress_safe(dic->cbuf->cdata, dic->clen,
+ dic->rbuf, &dic->rlen);
+ if (ret != LZO_E_OK) {
+ printk_ratelimited("%sF2FS-fs: lzo decompress failed, ret:%d\n",
+ KERN_ERR, ret);
+ return -EIO;
+ }
+
+ if (dic->rlen != PAGE_SIZE * dic->cluster_size) {
+ printk_ratelimited("%sF2FS-fs: lzo invalid rlen:%zu, "
+ "expected:%lu\n", KERN_ERR, dic->rlen,
+ PAGE_SIZE * dic->cluster_size);
+ return -EIO;
+ }
+ return 0;
+}
+
+static const struct f2fs_compress_ops f2fs_lzo_ops = {
+ .init_compress_ctx = lzo_init_compress_ctx,
+ .destroy_compress_ctx = lzo_destroy_compress_ctx,
+ .compress_pages = lzo_compress_pages,
+ .decompress_pages = lzo_decompress_pages,
+};
+
+static int lz4_init_compress_ctx(struct compress_ctx *cc)
+{
+ cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
+ LZ4_MEM_COMPRESS, GFP_KERNEL);
+ if (!cc->private)
+ return -ENOMEM;
+
+ cc->clen = LZ4_compressBound(PAGE_SIZE * cc->cluster_size);
+ return 0;
+}
+
+static void lz4_destroy_compress_ctx(struct compress_ctx *cc)
+{
+ kvfree(cc->private);
+ cc->private = NULL;
+}
+
+static int lz4_compress_pages(struct compress_ctx *cc)
+{
+ int len;
+
+ len = LZ4_compress_default(cc->rbuf, cc->cbuf->cdata, cc->rlen,
+ cc->clen, cc->private);
+ if (!len) {
+ printk_ratelimited("%sF2FS-fs: lz4 compress failed\n",
+ KERN_ERR);
+ return -EIO;
+ }
+ cc->clen = len;
+ return 0;
+}
+
+static int lz4_decompress_pages(struct decompress_io_ctx *dic)
+{
+ int ret;
+
+ ret = LZ4_decompress_safe(dic->cbuf->cdata, dic->rbuf,
+ dic->clen, dic->rlen);
+ if (ret < 0) {
+ printk_ratelimited("%sF2FS-fs: lz4 decompress failed, ret:%d\n",
+ KERN_ERR, ret);
+ return -EIO;
+ }
+
+ if (ret != PAGE_SIZE * dic->cluster_size) {
+ printk_ratelimited("%sF2FS-fs: lz4 invalid decompressed size:%d, "
+ "expected:%lu\n", KERN_ERR, ret,
+ PAGE_SIZE * dic->cluster_size);
+ return -EIO;
+ }
+ return 0;
+}
+
+static const struct f2fs_compress_ops f2fs_lz4_ops = {
+ .init_compress_ctx = lz4_init_compress_ctx,
+ .destroy_compress_ctx = lz4_destroy_compress_ctx,
+ .compress_pages = lz4_compress_pages,
+ .decompress_pages = lz4_decompress_pages,
+};
+
+static void f2fs_release_cluster_pages(struct compress_ctx *cc)
+{
+ int i;
+
+ for (i = 0; i < cc->nr_rpages; i++) {
+ inode_dec_dirty_pages(cc->inode);
+ unlock_page(cc->rpages[i]);
+ }
+}
+
+static struct page *f2fs_grab_page(void)
+{
+ struct page *page;
+
+ page = alloc_pages(GFP_KERNEL, 0);
+ if (!page)
+ return NULL;
+ lock_page(page);
+ return page;
+}
+
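+/*
+ * Compress the rpages of one cluster into cc->cpages. Returns -EAGAIN
+ * when the compressed size would not save at least one block (header
+ * included), so the caller can fall back to writing raw pages.
+ */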
+static int f2fs_compress_pages(struct compress_ctx *cc)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
+ struct f2fs_inode_info *fi = F2FS_I(cc->inode);
+ const struct f2fs_compress_ops *cops =
+ sbi->cops[fi->i_compress_algorithm];
+ unsigned int max_len, nr_cpages;
+ int i, ret;
+
+ trace_f2fs_compress_pages_start(cc->inode, cc->cluster_idx,
+ cc->cluster_size, fi->i_compress_algorithm);
+
+ ret = cops->init_compress_ctx(cc);
+ if (ret)
+ goto out;
+
+ max_len = COMPRESS_HEADER_SIZE + cc->clen;
+ cc->nr_cpages = roundup(max_len, PAGE_SIZE) / PAGE_SIZE;
+
+ cc->cpages = f2fs_kzalloc(sbi, sizeof(struct page *) *
+ cc->nr_cpages, GFP_KERNEL);
+ if (!cc->cpages) {
+ ret = -ENOMEM;
+ goto destroy_compress_ctx;
+ }
+
+ for (i = 0; i < cc->nr_cpages; i++) {
+ cc->cpages[i] = f2fs_grab_page();
+ if (!cc->cpages[i]) {
+ ret = -ENOMEM;
+ goto out_free_cpages;
+ }
+ }
+
+ cc->rbuf = vmap(cc->rpages, cc->cluster_size, VM_MAP, PAGE_KERNEL);
+ if (!cc->rbuf) {
+ ret = -ENOMEM;
+ goto out_free_cpages;
+ }
+
+ cc->cbuf = vmap(cc->cpages, cc->nr_cpages, VM_MAP, PAGE_KERNEL);
+ if (!cc->cbuf) {
+ ret = -ENOMEM;
+ goto out_vunmap_rbuf;
+ }
+
+ ret = cops->compress_pages(cc);
+ if (ret)
+ goto out_vunmap_cbuf;
+
+ max_len = PAGE_SIZE * (cc->cluster_size - 1) - COMPRESS_HEADER_SIZE;
+
+ if (cc->clen > max_len) {
+ ret = -EAGAIN;
+ goto out_vunmap_cbuf;
+ }
+
+ cc->cbuf->clen = cpu_to_le32(cc->clen);
+ cc->cbuf->chksum = 0;
+
+ vunmap(cc->cbuf);
+ vunmap(cc->rbuf);
+
+ nr_cpages = roundup(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE) /
+ PAGE_SIZE;
+
+ for (i = nr_cpages; i < cc->nr_cpages; i++) {
+ f2fs_put_compressed_page(cc->cpages[i]);
+ cc->cpages[i] = NULL;
+ }
+
+ cc->nr_cpages = nr_cpages;
+
+ trace_f2fs_compress_pages_end(cc->inode, cc->cluster_idx,
+ cc->clen, ret);
+ return 0;
+out_vunmap_cbuf:
+ vunmap(cc->cbuf);
+out_vunmap_rbuf:
+ vunmap(cc->rbuf);
+out_free_cpages:
+ for (i = 0; i < cc->nr_cpages; i++) {
+ if (cc->cpages[i])
+ f2fs_put_compressed_page(cc->cpages[i]);
+ }
+ kvfree(cc->cpages);
+ cc->cpages = NULL;
+destroy_compress_ctx:
+ cops->destroy_compress_ctx(cc);
+out:
+ trace_f2fs_compress_pages_end(cc->inode, cc->cluster_idx,
+ cc->clen, ret);
+ return ret;
+}
+
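+/*
+ * Called on read completion for each compressed page of a cluster; only
+ * the holder of the last dic->ref reference maps the pages and runs the
+ * actual decompression, then marks the cluster pages uptodate.
+ */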
+void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)
+{
+ struct decompress_io_ctx *dic =
+ (struct decompress_io_ctx *)page_private(page);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode);
+ struct f2fs_inode_info *fi = F2FS_I(dic->inode);
+ const struct f2fs_compress_ops *cops =
+ sbi->cops[fi->i_compress_algorithm];
+ int ret;
+
+ dec_page_count(sbi, F2FS_RD_DATA);
+
+ if (bio->bi_status)
+ dic->err = true;
+
+ if (refcount_dec_not_one(&dic->ref))
+ return;
+
+ trace_f2fs_decompress_pages_start(dic->inode, dic->cluster_idx,
+ dic->cluster_size, fi->i_compress_algorithm);
+
+ /* submit partial compressed pages */
+ if (dic->err) {
+ ret = dic->err;
+ goto out_free_dic;
+ }
+
+ dic->rbuf = vmap(dic->tpages, dic->cluster_size, VM_MAP, PAGE_KERNEL);
+ if (!dic->rbuf) {
+ ret = -ENOMEM;
+ goto out_free_dic;
+ }
+
+ dic->cbuf = vmap(dic->cpages, dic->nr_cpages, VM_MAP, PAGE_KERNEL);
+ if (!dic->cbuf) {
+ ret = -ENOMEM;
+ goto out_vunmap_rbuf;
+ }
+
+ dic->clen = le32_to_cpu(dic->cbuf->clen);
+ dic->rlen = PAGE_SIZE * dic->cluster_size;
+
+ if (dic->clen > PAGE_SIZE * dic->nr_cpages - COMPRESS_HEADER_SIZE) {
+ ret = -EFAULT;
+ goto out_vunmap_cbuf;
+ }
+
+ ret = cops->decompress_pages(dic);
+
+out_vunmap_cbuf:
+ vunmap(dic->cbuf);
+out_vunmap_rbuf:
+ vunmap(dic->rbuf);
+out_free_dic:
+ f2fs_set_cluster_uptodate(dic->rpages, dic->cluster_size, ret, verity);
+ f2fs_free_dic(dic);
+
+ trace_f2fs_decompress_pages_end(dic->inode, dic->cluster_idx,
+ dic->clen, ret);
+}
+
+static bool is_page_in_cluster(struct compress_ctx *cc, pgoff_t index)
+{
+ if (cc->cluster_idx == NULL_CLUSTER)
+ return true;
+ return cc->cluster_idx == cluster_idx(cc, index);
+}
+
+bool f2fs_cluster_is_empty(struct compress_ctx *cc)
+{
+ return cc->nr_rpages == 0;
+}
+
+static bool f2fs_cluster_is_full(struct compress_ctx *cc)
+{
+ return cc->cluster_size == cc->nr_rpages;
+}
+
+bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index)
+{
+ if (f2fs_cluster_is_empty(cc))
+ return true;
+ if (f2fs_cluster_is_full(cc))
+ return false;
+ return is_page_in_cluster(cc, index);
+}
+
+static bool __cluster_may_compress(struct compress_ctx *cc)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
+ loff_t i_size = i_size_read(cc->inode);
+ const pgoff_t end_index = ((unsigned long long)i_size)
+ >> PAGE_SHIFT;
+ unsigned offset;
+ int i;
+
+ for (i = 0; i < cc->cluster_size; i++) {
+ struct page *page = cc->rpages[i];
+
+ f2fs_bug_on(sbi, !page);
+
+ if (unlikely(f2fs_cp_error(sbi)))
+ return false;
+ if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ return false;
+ if (f2fs_is_drop_cache(cc->inode))
+ return false;
+ if (f2fs_is_volatile_file(cc->inode))
+ return false;
+
+ offset = i_size & (PAGE_SIZE - 1);
+ if ((page->index > end_index) ||
+ (page->index == end_index && !offset))
+ return false;
+ }
+ return true;
+}
+
+int f2fs_is_cluster_existed(struct compress_ctx *cc)
+{
+ struct dnode_of_data dn;
+ unsigned int start_idx = start_idx_of_cluster(cc);
+ int ret;
+ int i;
+
+ set_new_dnode(&dn, cc->inode, NULL, NULL, 0);
+ ret = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < cc->cluster_size; i++, dn.ofs_in_node++) {
+ block_t blkaddr = datablock_addr(dn.inode, dn.node_page,
+ dn.ofs_in_node);
+ if (blkaddr == COMPRESS_ADDR) {
+ ret = 1;
+ break;
+ }
+ if (__is_valid_data_blkaddr(blkaddr)) {
+ ret = 2;
+ break;
+ }
+ }
+ f2fs_put_dnode(&dn);
+ return ret;
+}
+
+static bool cluster_may_compress(struct compress_ctx *cc)
+{
+ if (!f2fs_compressed_file(cc->inode))
+ return false;
+ if (!f2fs_cluster_is_full(cc))
+ return false;
+ return __cluster_may_compress(cc);
+}
+
+void f2fs_reset_compress_ctx(struct compress_ctx *cc)
+{
+ if (cc->rpages)
+ memset(cc->rpages, 0, sizeof(struct page *) * cc->cluster_size);
+ cc->nr_rpages = 0;
+ cc->nr_cpages = 0;
+ cc->cluster_idx = NULL_CLUSTER;
+}
+
+static void set_cluster_writeback(struct compress_ctx *cc)
+{
+ int i;
+
+ for (i = 0; i < cc->cluster_size; i++)
+ set_page_writeback(cc->rpages[i]);
+}
+
+static void set_cluster_dirty(struct compress_ctx *cc)
+{
+ int i;
+
+ for (i = 0; i < cc->cluster_size; i++)
+ set_page_dirty(cc->rpages[i]);
+}
+
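+/*
+ * Called from f2fs_write_begin() when the target cluster is compressed:
+ * read and lock every page of the cluster up front, so that a partial
+ * overwrite can later be written back as a whole cluster.
+ */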
+int f2fs_prepare_compress_overwrite(struct compress_ctx *cc,
+ struct page **pagep, pgoff_t index,
+ void **fsdata, bool prealloc)
+{
+ struct inode *inode = cc->inode;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct bio *bio = NULL;
+ struct address_space *mapping = inode->i_mapping;
+ struct page *page;
+ struct dnode_of_data dn;
+ sector_t last_block_in_bio;
+ unsigned fgp_flag = FGP_LOCK | FGP_WRITE | FGP_CREAT;
+ unsigned int start_idx = cluster_idx(cc, index) * cc->cluster_size;
+ int i, idx;
+ int ret;
+
+ ret = f2fs_init_compress_ctx(cc);
+ if (ret)
+ goto out;
+retry:
+ /* keep page reference to avoid page reclaim */
+ for (i = 0; i < cc->cluster_size; i++) {
+ page = f2fs_pagecache_get_page(mapping, start_idx + i,
+ fgp_flag, GFP_NOFS);
+ if (!page) {
+ ret = -ENOMEM;
+ goto unlock_pages;
+ }
+
+ if (PageUptodate(page)) {
+ unlock_page(page);
+ continue;
+ }
+
+ ret = f2fs_compress_ctx_add_page(cc, page);
+ f2fs_bug_on(sbi, ret);
+ }
+
+ if (!f2fs_cluster_is_empty(cc)) {
+ ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size,
+ &last_block_in_bio, false);
+ if (ret)
+ goto out;
+
+ if (bio)
+ f2fs_submit_bio(sbi, bio, DATA);
+
+ ret = f2fs_init_compress_ctx(cc);
+ if (ret)
+ goto out;
+ }
+
+ for (i = 0; i < cc->cluster_size; i++) {
+ page = find_lock_page(mapping, start_idx + i);
+ f2fs_bug_on(sbi, !page);
+
+ f2fs_wait_on_page_writeback(page, DATA, true, true);
+
+ cc->rpages[i] = page;
+ f2fs_put_page(page, 0);
+
+ if (!PageUptodate(page)) {
+ for (idx = i; idx >= 0; idx--) {
+ f2fs_put_page(cc->rpages[idx], 0);
+ f2fs_put_page(cc->rpages[idx], 1);
+ }
+ goto retry;
+ }
+
+ }
+
+ if (prealloc) {
+ __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true);
+
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+
+ for (i = cc->cluster_size - 1; i > 0; i--) {
+ ret = f2fs_get_block(&dn, start_idx + i);
+ if (ret)
+ /* TODO: release preallocate blocks */
+ goto release_pages;
+
+ if (dn.data_blkaddr != NEW_ADDR)
+ break;
+ }
+
+ __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false);
+ }
+
+ *fsdata = cc->rpages;
+ *pagep = cc->rpages[offset_in_cluster(cc, index)];
+ return 0;
+unlock_pages:
+ for (idx = 0; idx < i; idx++) {
+ if (cc->rpages[idx])
+ unlock_page(cc->rpages[idx]);
+ }
+release_pages:
+ for (idx = 0; idx < i; idx++) {
+ page = find_lock_page(mapping, start_idx + idx);
+ f2fs_put_page(page, 0);
+ f2fs_put_page(page, 1);
+ }
+ f2fs_destroy_compress_ctx(cc);
+out:
+ return ret;
+}
+
+void f2fs_compress_write_end(struct inode *inode, void *fsdata,
+ bool written)
+{
+ struct compress_ctx cc = {
+ .cluster_size = F2FS_I(inode)->i_cluster_size,
+ .rpages = fsdata,
+ };
+ int i;
+
+ if (written)
+ set_cluster_dirty(&cc);
+
+ for (i = 0; i < cc.cluster_size; i++)
+ f2fs_put_page(cc.rpages[i], 1);
+}
+
+static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ int *submitted,
+ struct writeback_control *wbc,
+ enum iostat_type io_type)
+{
+ struct inode *inode = cc->inode;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+ struct f2fs_io_info fio = {
+ .sbi = sbi,
+ .ino = cc->inode->i_ino,
+ .type = DATA,
+ .op = REQ_OP_WRITE,
+ .op_flags = wbc_to_write_flags(wbc),
+ .old_blkaddr = NEW_ADDR,
+ .page = NULL,
+ .encrypted_page = NULL,
+ .compressed_page = NULL,
+ .submitted = false,
+ .need_lock = LOCK_RETRY,
+ .io_type = io_type,
+ .io_wbc = wbc,
+ .compressed = true,
+ .encrypted = f2fs_encrypted_file(cc->inode),
+ };
+ struct dnode_of_data dn;
+ struct node_info ni;
+ struct compress_io_ctx *cic;
+ unsigned int start_idx = start_idx_of_cluster(cc);
+ unsigned int last_index = cc->cluster_size - 1;
+ loff_t psize;
+ int pre_compressed_blocks = 0;
+ int i, err;
+
+ set_new_dnode(&dn, cc->inode, NULL, NULL, 0);
+
+ f2fs_lock_op(sbi);
+
+ err = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE);
+ if (err)
+ goto out_unlock_op;
+
+ psize = (cc->rpages[last_index]->index + 1) << PAGE_SHIFT;
+
+ err = f2fs_get_node_info(fio.sbi, dn.nid, &ni);
+ if (err)
+ goto out_put_dnode;
+
+ fio.version = ni.version;
+
+ cic = f2fs_kzalloc(sbi, sizeof(struct compress_io_ctx), GFP_NOFS);
+ if (!cic)
+ goto out_put_dnode;
+
+ cic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
+ cic->inode = inode;
+ refcount_set(&cic->ref, 1);
+ cic->rpages = cc->rpages;
+ cic->nr_rpages = cc->cluster_size;
+
+ for (i = 0; i < cc->nr_cpages; i++) {
+ f2fs_set_compressed_page(cc->cpages[i], inode,
+ cc->rpages[i + 1]->index,
+ cic, i ? &cic->ref : NULL);
+ fio.compressed_page = cc->cpages[i];
+ if (fio.encrypted) {
+ fio.page = cc->rpages[i + 1];
+ err = f2fs_encrypt_one_page(&fio);
+ if (err)
+ goto out_destroy_crypt;
+ cc->cpages[i] = fio.encrypted_page;
+ }
+ }
+
+ set_cluster_writeback(cc);
+
+ for (i = 0; i < cc->cluster_size; i++, dn.ofs_in_node++) {
+ block_t blkaddr;
+
+ blkaddr = datablock_addr(dn.inode, dn.node_page,
+ dn.ofs_in_node);
+
+ /* cluster header */
+ if (i == 0) {
+ if (blkaddr == COMPRESS_ADDR)
+ pre_compressed_blocks++;
+ if (__is_valid_data_blkaddr(blkaddr))
+ f2fs_invalidate_blocks(sbi, blkaddr);
+ f2fs_update_data_blkaddr(&dn, COMPRESS_ADDR);
+ continue;
+ }
+
+ if (pre_compressed_blocks && __is_valid_data_blkaddr(blkaddr))
+ pre_compressed_blocks++;
+
+ if (i > cc->nr_cpages) {
+ if (__is_valid_data_blkaddr(blkaddr)) {
+ f2fs_invalidate_blocks(sbi, blkaddr);
+ f2fs_update_data_blkaddr(&dn, NEW_ADDR);
+ }
+ continue;
+ }
+
+ f2fs_bug_on(fio.sbi, blkaddr == NULL_ADDR);
+
+ fio.page = cc->rpages[i];
+ fio.old_blkaddr = blkaddr;
+
+ if (fio.encrypted)
+ fio.encrypted_page = cc->cpages[i - 1];
+ else if (fio.compressed)
+ fio.compressed_page = cc->cpages[i - 1];
+ else
+ f2fs_bug_on(sbi, 1);
+ cc->cpages[i - 1] = NULL;
+ f2fs_outplace_write_data(&dn, &fio);
+ (*submitted)++;
+ }
+
+ if (pre_compressed_blocks) {
+ stat_sub_compr_blocks(inode,
+ cc->cluster_size - pre_compressed_blocks + 1);
+ F2FS_I(inode)->i_compressed_blocks -=
+ (cc->cluster_size - pre_compressed_blocks + 1);
+ }
+ stat_add_compr_blocks(inode, cc->cluster_size - cc->nr_cpages);
+ F2FS_I(inode)->i_compressed_blocks += cc->cluster_size - cc->nr_cpages;
+ f2fs_mark_inode_dirty_sync(inode, true);
+
+ set_inode_flag(cc->inode, FI_APPEND_WRITE);
+ if (cc->cluster_idx == 0)
+ set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
+
+ f2fs_put_dnode(&dn);
+ f2fs_unlock_op(sbi);
+
+ f2fs_release_cluster_pages(cc);
+
+ cc->rpages = NULL;
+
+ if (err) {
+ file_set_keep_isize(inode);
+ } else {
+ down_write(&fi->i_sem);
+ if (fi->last_disk_size < psize)
+ fi->last_disk_size = psize;
+ up_write(&fi->i_sem);
+ }
+ return 0;
+out_destroy_crypt:
+ for (i -= 1; i >= 0; i--)
+ fscrypt_finalize_bounce_page(&cc->cpages[i]);
+ for (i = 0; i < cc->nr_cpages; i++) {
+ if (!cc->cpages[i])
+ continue;
+ f2fs_put_page(cc->cpages[i], 1);
+ }
+out_put_dnode:
+ f2fs_put_dnode(&dn);
+out_unlock_op:
+ f2fs_unlock_op(sbi);
+ return -EAGAIN;
+}
+
+void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
+{
+ struct f2fs_sb_info *sbi = bio->bi_private;
+ struct compress_io_ctx *cic =
+ (struct compress_io_ctx *)page_private(page);
+ int i;
+
+ if (unlikely(bio->bi_status))
+ mapping_set_error(cic->inode->i_mapping, -EIO);
+
+ f2fs_put_compressed_page(page);
+
+ dec_page_count(sbi, F2FS_WB_DATA);
+
+ if (refcount_dec_not_one(&cic->ref))
+ return;
+
+ for (i = 0; i < cic->nr_rpages; i++) {
+ clear_cold_data(cic->rpages[i]);
+ end_page_writeback(cic->rpages[i]);
+ }
+
+ kvfree(cic->rpages);
+ kvfree(cic);
+}
+
+static int f2fs_write_raw_pages(struct compress_ctx *cc,
+ int *submitted,
+ struct writeback_control *wbc,
+ enum iostat_type io_type)
+{
+ int i, _submitted;
+ int ret, err = 0;
+
+ for (i = 0; i < cc->cluster_size; i++) {
+ if (!cc->rpages[i])
+ continue;
+ ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted,
+ NULL, NULL, wbc, io_type);
+ if (ret) {
+ if (ret == AOP_WRITEPAGE_ACTIVATE)
+ unlock_page(cc->rpages[i]);
+ err = ret;
+ goto out_fail;
+ }
+
+ *submitted += _submitted;
+ }
+ return 0;
+out_fail:
+ /* TODO: revoke partially updated block addresses */
+ for (i += 1; i < cc->cluster_size; i++) {
+ if (!cc->rpages[i])
+ continue;
+ redirty_page_for_writepage(wbc, cc->rpages[i]);
+ unlock_page(cc->rpages[i]);
+ }
+ return err;
+}
+
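+/*
+ * Writeback entry point for a cluster: try a compressed write first and,
+ * on -EAGAIN (cluster not compressible or compression saves no block),
+ * fall back to writing the raw pages one by one.
+ */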
+int f2fs_write_multi_pages(struct compress_ctx *cc,
+ int *submitted,
+ struct writeback_control *wbc,
+ enum iostat_type io_type)
+{
+ struct f2fs_inode_info *fi = F2FS_I(cc->inode);
+ const struct f2fs_compress_ops *cops =
+ F2FS_I_SB(cc->inode)->cops[fi->i_compress_algorithm];
+ int err = -EAGAIN;
+
+ *submitted = 0;
+
+ if (cluster_may_compress(cc)) {
+ err = f2fs_compress_pages(cc);
+ if (err) {
+ err = -EAGAIN;
+ goto write;
+ }
+ err = f2fs_write_compressed_pages(cc, submitted,
+ wbc, io_type);
+ cops->destroy_compress_ctx(cc);
+ }
+write:
+ if (err == -EAGAIN) {
+ f2fs_bug_on(F2FS_I_SB(cc->inode), *submitted);
+ err = f2fs_write_raw_pages(cc, submitted, wbc, io_type);
+ if (f2fs_is_cluster_existed(cc) == 1) {
+ stat_sub_compr_blocks(cc->inode, *submitted);
+ F2FS_I(cc->inode)->i_compressed_blocks -= *submitted;
+ f2fs_mark_inode_dirty_sync(cc->inode, true);
+ }
+ }
+ f2fs_reset_compress_ctx(cc);
+ return err;
+}
+
+int f2fs_is_compressed_cluster(struct compress_ctx *cc, pgoff_t index)
+{
+ struct dnode_of_data dn;
+ unsigned int start_idx = cluster_idx(cc, index) * cc->cluster_size;
+ int ret, i;
+
+ set_new_dnode(&dn, cc->inode, NULL, NULL, 0);
+ ret = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE);
+ if (ret) {
+ if (ret == -ENOENT)
+ ret = 0;
+ goto fail;
+ }
+ if (dn.data_blkaddr == COMPRESS_ADDR) {
+ ret = CLUSTER_IS_FULL;
+ for (i = 1; i < cc->cluster_size; i++) {
+ block_t blkaddr;
+
+ blkaddr = datablock_addr(dn.inode,
+ dn.node_page, dn.ofs_in_node + i);
+ if (blkaddr == NULL_ADDR) {
+ ret = CLUSTER_HAS_SPACE;
+ break;
+ }
+ }
+ }
+fail:
+ f2fs_put_dnode(&dn);
+ return ret;
+}
+
+struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
+ struct decompress_io_ctx *dic;
+ unsigned int start_idx = start_idx_of_cluster(cc);
+ int i;
+
+ dic = f2fs_kzalloc(sbi, sizeof(struct decompress_io_ctx), GFP_KERNEL);
+ if (!dic)
+ goto out;
+
+ dic->inode = cc->inode;
+ refcount_set(&dic->ref, 1);
+ dic->cluster_idx = cc->cluster_idx;
+ dic->cluster_size = cc->cluster_size;
+ dic->nr_cpages = cc->nr_cpages;
+ dic->err = false;
+
+ dic->cpages = f2fs_kzalloc(sbi, sizeof(struct page *) *
+ dic->nr_cpages, GFP_KERNEL);
+ if (!dic->cpages)
+ goto out_free;
+
+ for (i = 0; i < dic->nr_cpages; i++) {
+ struct page *page;
+
+ page = f2fs_grab_page();
+ if (!page)
+ goto out_free;
+
+ f2fs_set_compressed_page(page, cc->inode,
+ start_idx + i + 1,
+ dic, i ? &dic->ref : NULL);
+ dic->cpages[i] = page;
+ }
+
+ dic->tpages = f2fs_kzalloc(sbi, sizeof(struct page *) *
+ dic->cluster_size, GFP_KERNEL);
+ if (!dic->tpages)
+ goto out_free;
+
+ for (i = 0; i < dic->cluster_size; i++) {
+ if (cc->rpages[i])
+ continue;
+
+ dic->tpages[i] = f2fs_grab_page();
+ if (!dic->tpages[i])
+ goto out_free;
+ }
+
+ for (i = 0; i < dic->cluster_size; i++) {
+ if (dic->tpages[i])
+ continue;
+ dic->tpages[i] = cc->rpages[i];
+ }
+
+ dic->rpages = cc->rpages;
+ dic->nr_rpages = cc->cluster_size;
+
+ cc->rpages = NULL;
+ return dic;
+out_free:
+ f2fs_free_dic(dic);
+out:
+ return ERR_PTR(-ENOMEM);
+}
+
+void f2fs_free_dic(struct decompress_io_ctx *dic)
+{
+ int i;
+
+ if (dic->tpages) {
+ for (i = 0; i < dic->cluster_size; i++) {
+ if (dic->rpages[i])
+ continue;
+ unlock_page(dic->tpages[i]);
+ put_page(dic->tpages[i]);
+ }
+ kvfree(dic->tpages);
+ }
+
+ if (dic->cpages) {
+ for (i = 0; i < dic->nr_cpages; i++) {
+ if (!dic->cpages[i])
+ continue;
+ f2fs_put_compressed_page(dic->cpages[i]);
+ }
+ kvfree(dic->cpages);
+ }
+
+ kvfree(dic->rpages);
+ kvfree(dic);
+}
+
+void f2fs_set_cluster_uptodate(struct page **rpages,
+ unsigned int cluster_size, bool err, bool verity)
+{
+ int i;
+
+ for (i = 0; i < cluster_size; i++) {
+ struct page *rpage = rpages[i];
+
+ if (!rpage)
+ continue;
+
+ if (err || PageError(rpage)) {
+ ClearPageUptodate(rpage);
+ ClearPageError(rpage);
+ } else {
+ if (!verity || fsverity_verify_page(rpage))
+ SetPageUptodate(rpage);
+ else
+ SetPageError(rpage);
+ }
+ unlock_page(rpage);
+ }
+}
+
+static void f2fs_init_compress_ops(struct f2fs_sb_info *sbi)
+{
+ sbi->cops[COMPRESS_LZO] = &f2fs_lzo_ops;
+ sbi->cops[COMPRESS_LZ4] = &f2fs_lz4_ops;
+}
+
+void f2fs_init_compress_info(struct f2fs_sb_info *sbi)
+{
+ if (!f2fs_sb_has_compression(sbi))
+ return;
+
+ f2fs_init_compress_ops(sbi);
+}
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index ba3bcf4c7889..bac96c3a8bc9 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -41,6 +41,9 @@ static bool __is_cp_guaranteed(struct page *page)
if (!mapping)
return false;
+ if (f2fs_is_compressed_page(page))
+ return false;
+
inode = mapping->host;
sbi = F2FS_I_SB(inode);
@@ -73,19 +76,19 @@ static enum count_type __read_io_type(struct page *page)
/* postprocessing steps for read bios */
enum bio_post_read_step {
- STEP_INITIAL = 0,
STEP_DECRYPT,
+ STEP_DECOMPRESS,
STEP_VERITY,
};
struct bio_post_read_ctx {
struct bio *bio;
+ struct f2fs_sb_info *sbi;
struct work_struct work;
- unsigned int cur_step;
unsigned int enabled_steps;
};
-static void __read_end_io(struct bio *bio)
+static void __read_end_io(struct bio *bio, bool compr, bool verity)
{
struct page *page;
struct bio_vec *bv;
@@ -94,6 +97,11 @@ static void __read_end_io(struct bio *bio)
bio_for_each_segment_all(bv, bio, iter_all) {
page = bv->bv_page;
+ if (compr && PagePrivate(page)) {
+ f2fs_decompress_pages(bio, page, verity);
+ continue;
+ }
+
/* PG_error was set if any post_read step failed */
if (bio->bi_status || PageError(page)) {
ClearPageUptodate(page);
@@ -110,60 +118,67 @@ static void __read_end_io(struct bio *bio)
bio_put(bio);
}
+static void f2fs_decompress_bio(struct bio *bio, bool verity)
+{
+ __read_end_io(bio, true, verity);
+}
+
static void bio_post_read_processing(struct bio_post_read_ctx *ctx);
-static void decrypt_work(struct work_struct *work)
+static void decrypt_work(struct bio_post_read_ctx *ctx)
{
- struct bio_post_read_ctx *ctx =
- container_of(work, struct bio_post_read_ctx, work);
-
fscrypt_decrypt_bio(ctx->bio);
+}
+
+static void decompress_work(struct bio_post_read_ctx *ctx, bool verity)
+{
+ f2fs_decompress_bio(ctx->bio, verity);
+}
- bio_post_read_processing(ctx);
+static void verity_work(struct bio_post_read_ctx *ctx)
+{
+ fsverity_verify_bio(ctx->bio);
}
-static void verity_work(struct work_struct *work)
+static void f2fs_post_read_work(struct work_struct *work)
{
struct bio_post_read_ctx *ctx =
container_of(work, struct bio_post_read_ctx, work);
- fsverity_verify_bio(ctx->bio);
+ if (ctx->enabled_steps & (1 << STEP_DECRYPT))
+ decrypt_work(ctx);
- bio_post_read_processing(ctx);
+ if (ctx->enabled_steps & (1 << STEP_DECOMPRESS)) {
+ decompress_work(ctx,
+ ctx->enabled_steps & (1 << STEP_VERITY));
+ return;
+ }
+
+ if (ctx->enabled_steps & (1 << STEP_VERITY))
+ verity_work(ctx);
+
+ __read_end_io(ctx->bio, false, false);
+}
+
+static void f2fs_enqueue_post_read_work(struct f2fs_sb_info *sbi,
+ struct work_struct *work)
+{
+ queue_work(sbi->post_read_wq, work);
}
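+
+/*
+ * All enabled post-read steps now run from a single work item on
+ * sbi->post_read_wq, in order: decrypt, then decompress (which also
+ * applies fs-verity per page), then verity for non-compressed bios.
+ */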
static void bio_post_read_processing(struct bio_post_read_ctx *ctx)
{
- /*
- * We use different work queues for decryption and for verity because
- * verity may require reading metadata pages that need decryption, and
- * we shouldn't recurse to the same workqueue.
- */
- switch (++ctx->cur_step) {
- case STEP_DECRYPT:
- if (ctx->enabled_steps & (1 << STEP_DECRYPT)) {
- INIT_WORK(&ctx->work, decrypt_work);
- fscrypt_enqueue_decrypt_work(&ctx->work);
- return;
- }
- ctx->cur_step++;
- /* fall-through */
- case STEP_VERITY:
- if (ctx->enabled_steps & (1 << STEP_VERITY)) {
- INIT_WORK(&ctx->work, verity_work);
- fsverity_enqueue_verify_work(&ctx->work);
- return;
- }
- ctx->cur_step++;
- /* fall-through */
- default:
- __read_end_io(ctx->bio);
+ if (ctx->enabled_steps) {
+ INIT_WORK(&ctx->work, f2fs_post_read_work);
+ f2fs_enqueue_post_read_work(ctx->sbi, &ctx->work);
+ return;
}
+ __read_end_io(ctx->bio, false, false);
}
static bool f2fs_bio_post_read_required(struct bio *bio)
{
- return bio->bi_private && !bio->bi_status;
+ return bio->bi_private;
}
static void f2fs_read_end_io(struct bio *bio)
@@ -177,12 +192,11 @@ static void f2fs_read_end_io(struct bio *bio)
if (f2fs_bio_post_read_required(bio)) {
struct bio_post_read_ctx *ctx = bio->bi_private;
- ctx->cur_step = STEP_INITIAL;
bio_post_read_processing(ctx);
return;
}
- __read_end_io(bio);
+ __read_end_io(bio, false, false);
}
static void f2fs_write_end_io(struct bio *bio)
@@ -213,6 +227,11 @@ static void f2fs_write_end_io(struct bio *bio)
fscrypt_finalize_bounce_page(&page);
+ if (f2fs_is_compressed_page(page)) {
+ f2fs_compress_write_end_io(bio, page);
+ continue;
+ }
+
if (unlikely(bio->bi_status)) {
mapping_set_error(page->mapping, -EIO);
if (type == F2FS_WB_CP_DATA)
@@ -357,6 +376,12 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
submit_bio(bio);
}
+void f2fs_submit_bio(struct f2fs_sb_info *sbi,
+ struct bio *bio, enum page_type type)
+{
+ __submit_bio(sbi, bio, type);
+}
+
static void __submit_merged_bio(struct f2fs_bio_info *io)
{
struct f2fs_io_info *fio = &io->fio;
@@ -379,7 +404,6 @@ static bool __has_merged_page(struct bio *bio, struct inode *inode,
struct page *page, nid_t ino)
{
struct bio_vec *bvec;
- struct page *target;
struct bvec_iter_all iter_all;
if (!bio)
@@ -389,10 +413,12 @@ static bool __has_merged_page(struct bio *bio, struct inode *inode,
return true;
bio_for_each_segment_all(bvec, bio, iter_all) {
+ struct page *target = bvec->bv_page;
- target = bvec->bv_page;
if (fscrypt_is_bounce_page(target))
target = fscrypt_pagecache_page(target);
+ if (f2fs_is_compressed_page(target))
+ target = f2fs_compress_control_page(target);
if (inode && inode == target->mapping->host)
return true;
@@ -727,7 +753,12 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
verify_fio_blkaddr(fio);
- bio_page = fio->encrypted_page ? fio->encrypted_page : fio->page;
+ if (fio->encrypted_page)
+ bio_page = fio->encrypted_page;
+ else if (fio->compressed_page)
+ bio_page = fio->compressed_page;
+ else
+ bio_page = fio->page;
/* set submitted = true as a return value */
fio->submitted = true;
@@ -796,7 +827,8 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
if (f2fs_encrypted_file(inode))
post_read_steps |= 1 << STEP_DECRYPT;
-
+ if (f2fs_compressed_file(inode))
+ post_read_steps |= 1 << STEP_DECOMPRESS;
if (f2fs_need_verity(inode, first_idx))
post_read_steps |= 1 << STEP_VERITY;
@@ -807,6 +839,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
return ERR_PTR(-ENOMEM);
}
ctx->bio = bio;
+ ctx->sbi = sbi;
ctx->enabled_steps = post_read_steps;
bio->bi_private = ctx;
}
@@ -1871,6 +1904,142 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
return ret;
}
+int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ unsigned nr_pages, sector_t *last_block_in_bio,
+ bool is_readahead)
+{
+ struct dnode_of_data dn;
+ struct inode *inode = cc->inode;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct bio *bio = *bio_ret;
+ unsigned int start_idx = cc->cluster_idx * cc->cluster_size;
+ sector_t last_block_in_file;
+ const unsigned blkbits = inode->i_blkbits;
+ const unsigned blocksize = 1 << blkbits;
+ struct decompress_io_ctx *dic = NULL;
+ int i;
+ int ret = 0;
+
+ f2fs_bug_on(sbi, f2fs_cluster_is_empty(cc));
+
+ last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits;
+
+ /* get rid of pages beyond EOF */
+ for (i = cc->nr_rpages - 1; i >= 0; i--) {
+ struct page *page = cc->rpages[i];
+
+ if (!page)
+ continue;
+ if ((sector_t)page->index < last_block_in_file)
+ break;
+
+ zero_user_segment(page, 0, PAGE_SIZE);
+ if (!PageUptodate(page))
+ SetPageUptodate(page);
+
+ unlock_page(page);
+ cc->rpages[i] = NULL;
+ cc->nr_rpages--;
+ }
+
+ /* we are done since all pages are beyond EOF */
+ if (f2fs_cluster_is_empty(cc))
+ goto out;
+
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+ ret = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE);
+ if (ret)
+ goto out;
+
+ f2fs_bug_on(sbi, dn.data_blkaddr != COMPRESS_ADDR);
+
+ for (i = 1; i < cc->cluster_size; i++) {
+ block_t blkaddr;
+
+ blkaddr = datablock_addr(dn.inode, dn.node_page,
+ dn.ofs_in_node + i);
+
+ if (!__is_valid_data_blkaddr(blkaddr))
+ break;
+
+ if (!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC)) {
+ ret = -EFAULT;
+ goto out_put_dnode;
+ }
+ cc->nr_cpages++;
+ }
+
+ /* nothing to decompress */
+ if (cc->nr_cpages == 0) {
+ ret = 0;
+ goto out_put_dnode;
+ }
+
+ dic = f2fs_alloc_dic(cc);
+ if (IS_ERR(dic)) {
+ ret = PTR_ERR(dic);
+ goto out_put_dnode;
+ }
+
+ for (i = 0; i < dic->nr_cpages; i++) {
+ struct page *page = dic->cpages[i];
+ block_t blkaddr;
+
+ blkaddr = datablock_addr(dn.inode, dn.node_page,
+ dn.ofs_in_node + i + 1);
+
+ if (bio && !page_is_mergeable(sbi, bio,
+ *last_block_in_bio, blkaddr)) {
+submit_and_realloc:
+ __submit_bio(sbi, bio, DATA);
+ bio = NULL;
+ }
+
+ if (!bio) {
+ bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+ is_readahead ? REQ_RAHEAD : 0,
+ page->index);
+ if (IS_ERR(bio)) {
+ ret = PTR_ERR(bio);
+ bio = NULL;
+ dic->err = true;
+ if (refcount_sub_and_test(dic->nr_cpages - i,
+ &dic->ref))
+ f2fs_set_cluster_uptodate(dic->rpages,
+ cc->cluster_size, true,
+ false);
+ f2fs_free_dic(dic);
+ f2fs_put_dnode(&dn);
+ f2fs_reset_compress_ctx(cc);
+ *bio_ret = bio;
+ return ret;
+ }
+ }
+
+ f2fs_wait_on_block_writeback(inode, blkaddr);
+
+ if (bio_add_page(bio, page, blocksize, 0) < blocksize)
+ goto submit_and_realloc;
+
+ inc_page_count(sbi, F2FS_RD_DATA);
+ ClearPageError(page);
+ *last_block_in_bio = blkaddr;
+ }
+
+ f2fs_put_dnode(&dn);
+
+ f2fs_reset_compress_ctx(cc);
+ *bio_ret = bio;
+ return 0;
+out_put_dnode:
+ f2fs_put_dnode(&dn);
+out:
+ f2fs_set_cluster_uptodate(cc->rpages, cc->cluster_size, true, false);
+ f2fs_reset_compress_ctx(cc);
+ *bio_ret = bio;
+ return ret;
+}
+
/*
* This function was originally taken from fs/mpage.c, and customized for f2fs.
* Major change was from block_size == page_size in f2fs by default.
@@ -1880,7 +2049,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
* use ->readpage() or do the necessary surgery to decouple ->readpages()
* from read-ahead.
*/
-static int f2fs_mpage_readpages(struct address_space *mapping,
+int f2fs_mpage_readpages(struct address_space *mapping,
struct list_head *pages, struct page *page,
unsigned nr_pages, bool is_readahead)
{
@@ -1888,6 +2057,16 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
sector_t last_block_in_bio = 0;
struct inode *inode = mapping->host;
struct f2fs_map_blocks map;
+ struct compress_ctx cc = {
+ .inode = inode,
+ .cluster_size = F2FS_I(inode)->i_cluster_size,
+ .cluster_idx = NULL_CLUSTER,
+ .rpages = NULL,
+ .cpages = NULL,
+ .nr_rpages = 0,
+ .nr_cpages = 0,
+ };
+ unsigned max_nr_pages = nr_pages;
int ret = 0;
map.m_pblk = 0;
@@ -1911,9 +2090,36 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
goto next_page;
}
- ret = f2fs_read_single_page(inode, page, nr_pages, &map, &bio,
- &last_block_in_bio, is_readahead);
+ if (f2fs_compressed_file(inode)) {
+ /* there are remaining compressed pages, submit them */
+ if (!f2fs_cluster_can_merge_page(&cc, page->index)) {
+ ret = f2fs_read_multi_pages(&cc, &bio,
+ max_nr_pages,
+ &last_block_in_bio,
+ is_readahead);
+ if (ret)
+ goto set_error_page;
+ }
+ ret = f2fs_is_compressed_cluster(&cc, page->index);
+ if (ret < 0)
+ goto set_error_page;
+ else if (!ret)
+ goto read_single_page;
+
+ ret = f2fs_init_compress_ctx(&cc);
+ if (ret)
+ goto set_error_page;
+
+ ret = f2fs_compress_ctx_add_page(&cc, page);
+ f2fs_bug_on(F2FS_I_SB(inode), ret);
+
+ goto next_page;
+ }
+read_single_page:
+ ret = f2fs_read_single_page(inode, page, max_nr_pages, &map,
+ &bio, &last_block_in_bio, is_readahead);
if (ret) {
+set_error_page:
SetPageError(page);
zero_user_segment(page, 0, PAGE_SIZE);
unlock_page(page);
@@ -1921,6 +2127,15 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
next_page:
if (pages)
put_page(page);
+
+ if (f2fs_compressed_file(inode)) {
+ /* last page */
+ if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc))
+ ret = f2fs_read_multi_pages(&cc, &bio,
+ max_nr_pages,
+ &last_block_in_bio,
+ is_readahead);
+ }
}
BUG_ON(pages && !list_empty(pages));
if (bio)
@@ -1960,22 +2175,23 @@ static int f2fs_read_data_pages(struct file *file,
return f2fs_mpage_readpages(mapping, pages, NULL, nr_pages, true);
}
-static int encrypt_one_page(struct f2fs_io_info *fio)
+int f2fs_encrypt_one_page(struct f2fs_io_info *fio)
{
struct inode *inode = fio->page->mapping->host;
- struct page *mpage;
+ struct page *mpage, *page;
gfp_t gfp_flags = GFP_NOFS;
if (!f2fs_encrypted_file(inode))
return 0;
+ page = fio->compressed_page ? fio->compressed_page : fio->page;
+
/* wait for GCed page writeback via META_MAPPING */
f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
retry_encrypt:
- fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page,
- PAGE_SIZE, 0,
- gfp_flags);
+ fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(page,
+ PAGE_SIZE, 0, gfp_flags);
if (IS_ERR(fio->encrypted_page)) {
/* flush pending IOs and wait for a while in the ENOMEM case */
if (PTR_ERR(fio->encrypted_page) == -ENOMEM) {
@@ -2135,7 +2351,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
if (ipu_force ||
(__is_valid_data_blkaddr(fio->old_blkaddr) &&
need_inplace_update(fio))) {
- err = encrypt_one_page(fio);
+ err = f2fs_encrypt_one_page(fio);
if (err)
goto out_writepage;
@@ -2171,7 +2387,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
fio->version = ni.version;
- err = encrypt_one_page(fio);
+ err = f2fs_encrypt_one_page(fio);
if (err)
goto out_writepage;
@@ -2192,7 +2408,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
return err;
}
-static int __write_data_page(struct page *page, bool *submitted,
+int f2fs_write_single_data_page(struct page *page, int *submitted,
struct bio **bio,
sector_t *last_block,
struct writeback_control *wbc,
@@ -2201,7 +2417,7 @@ static int __write_data_page(struct page *page, bool *submitted,
struct inode *inode = page->mapping->host;
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
loff_t i_size = i_size_read(inode);
- const pgoff_t end_index = ((unsigned long long) i_size)
+ const pgoff_t end_index = ((unsigned long long)i_size)
>> PAGE_SHIFT;
loff_t psize = (page->index + 1) << PAGE_SHIFT;
unsigned offset = 0;
@@ -2330,7 +2546,7 @@ static int __write_data_page(struct page *page, bool *submitted,
}
if (submitted)
- *submitted = fio.submitted;
+ *submitted = fio.submitted ? 1 : 0;
return 0;
@@ -2351,7 +2567,8 @@ static int __write_data_page(struct page *page, bool *submitted,
static int f2fs_write_data_page(struct page *page,
struct writeback_control *wbc)
{
- return __write_data_page(page, NULL, NULL, NULL, wbc, FS_DATA_IO);
+ return f2fs_write_single_data_page(page, NULL, NULL, NULL,
+ wbc, FS_DATA_IO);
}
/*
@@ -2369,6 +2586,19 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
struct bio *bio = NULL;
sector_t last_block;
+ struct inode *inode = mapping->host;
+ struct compress_ctx cc = {
+ .inode = inode,
+ .cluster_size = F2FS_I(inode)->i_cluster_size,
+ .cluster_idx = NULL_CLUSTER,
+ .rpages = NULL,
+ .nr_rpages = 0,
+ .cpages = NULL,
+ .rbuf = NULL,
+ .cbuf = NULL,
+ .rlen = PAGE_SIZE * F2FS_I(inode)->i_cluster_size,
+ .private = NULL,
+ };
int nr_pages;
pgoff_t uninitialized_var(writeback_index);
pgoff_t index;
@@ -2378,6 +2608,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
int range_whole = 0;
xa_mark_t tag;
int nwritten = 0;
+ int submitted = 0;
+ int i;
pagevec_init(&pvec);
@@ -2411,8 +2643,6 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
tag_pages_for_writeback(mapping, index, end);
done_index = index;
while (!done && (index <= end)) {
- int i;
-
nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
tag);
if (nr_pages == 0)
@@ -2420,7 +2650,24 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
for (i = 0; i < nr_pages; i++) {
struct page *page = pvec.pages[i];
- bool submitted = false;
+ bool need_readd = false;
+
+readd:
+ if (f2fs_compressed_file(inode)) {
+ ret = f2fs_init_compress_ctx(&cc);
+ if (ret) {
+ done = 1;
+ break;
+ }
+
+ if (!f2fs_cluster_can_merge_page(&cc,
+ page->index)) {
+ need_readd = true;
+ ret = f2fs_write_multi_pages(&cc,
+ &submitted, wbc, io_type);
+ goto result;
+ }
+ }
/* give a priority to WB_SYNC threads */
if (atomic_read(&sbi->wb_sync_req[DATA]) &&
@@ -2430,7 +2677,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
}
done_index = page->index;
-retry_write:
+
lock_page(page);
if (unlikely(page->mapping != mapping)) {
@@ -2455,44 +2702,58 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
if (!clear_page_dirty_for_io(page))
goto continue_unlock;
- ret = __write_data_page(page, &submitted, &bio,
- &last_block, wbc, io_type);
+ if (f2fs_compressed_file(mapping->host)) {
+ ret = f2fs_compress_ctx_add_page(&cc, page);
+ f2fs_bug_on(sbi, ret);
+ continue;
+ }
+ ret = f2fs_write_single_data_page(page, &submitted,
+ &bio, &last_block, wbc, io_type);
+ if (ret == AOP_WRITEPAGE_ACTIVATE)
+ unlock_page(page);
+result:
+ nwritten += submitted;
+ wbc->nr_to_write -= submitted;
+
if (unlikely(ret)) {
/*
* keep nr_to_write, since vfs uses this to
* get # of written pages.
*/
if (ret == AOP_WRITEPAGE_ACTIVATE) {
- unlock_page(page);
ret = 0;
- continue;
+ goto next;
} else if (ret == -EAGAIN) {
ret = 0;
- if (wbc->sync_mode == WB_SYNC_ALL) {
- cond_resched();
- congestion_wait(BLK_RW_ASYNC,
- HZ/50);
- goto retry_write;
- }
- continue;
+ goto next;
}
done_index = page->index + 1;
done = 1;
break;
- } else if (submitted) {
- nwritten++;
}
- if (--wbc->nr_to_write <= 0 &&
+ if (wbc->nr_to_write <= 0 &&
wbc->sync_mode == WB_SYNC_NONE) {
done = 1;
break;
}
+next:
+ if (need_readd)
+ goto readd;
}
+
pagevec_release(&pvec);
cond_resched();
}
+ /* flush remaining pages in the compress cluster */
+ if (f2fs_compressed_file(inode) && !f2fs_cluster_is_empty(&cc)) {
+ ret = f2fs_write_multi_pages(&cc, &submitted, wbc, io_type);
+ nwritten += submitted;
+ wbc->nr_to_write -= submitted;
+ /* TODO: error handling */
+ }
+
if (!cycled && !done) {
cycled = 1;
index = 0;
@@ -2509,6 +2770,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
if (bio)
f2fs_submit_merged_ipu_write(sbi, &bio, NULL);
+ f2fs_destroy_compress_ctx(&cc);
+
return ret;
}
@@ -2517,6 +2780,8 @@ static inline bool __should_serialize_io(struct inode *inode,
{
if (!S_ISREG(inode->i_mode))
return false;
+ if (f2fs_compressed_file(inode))
+ return true;
if (IS_NOQUOTA(inode))
return false;
/* to avoid deadlock in path of data flush */
@@ -2659,6 +2924,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
__do_map_lock(sbi, flag, true);
locked = true;
}
+
restart:
/* check inline_data */
ipage = f2fs_get_node_page(sbi, inode->i_ino);
@@ -2749,6 +3015,30 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
if (err)
goto fail;
}
+
+ if (f2fs_compressed_file(inode)) {
+ struct compress_ctx cc = {
+ .inode = inode,
+ .cluster_size = F2FS_I(inode)->i_cluster_size,
+ .cluster_idx = NULL_CLUSTER,
+ .rpages = NULL,
+ .nr_rpages = 0,
+ };
+
+ *fsdata = NULL;
+
+ err = f2fs_is_compressed_cluster(&cc, index);
+ if (err < 0)
+ goto fail;
+ if (!err)
+ goto repeat;
+
+ err = f2fs_prepare_compress_overwrite(&cc, pagep, index, fsdata,
+ err == CLUSTER_HAS_SPACE);
+ /* need to goto fail? */
+ return err;
+ }
+
repeat:
/*
* Do not use grab_cache_page_write_begin() to avoid deadlock due to
@@ -2761,6 +3051,8 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
goto fail;
}
+ /* TODO: cluster can be compressed due to race with .writepage */
+
*pagep = page;
err = prepare_write_begin(sbi, page, pos, len,
@@ -2844,6 +3136,13 @@ static int f2fs_write_end(struct file *file,
else
SetPageUptodate(page);
}
+
+ /* overwrite compressed file */
+ if (f2fs_compressed_file(inode) && fsdata) {
+ f2fs_compress_write_end(inode, fsdata, copied);
+ goto update_time;
+ }
+
if (!copied)
goto unlock_out;
@@ -2854,6 +3153,7 @@ static int f2fs_write_end(struct file *file,
f2fs_i_size_write(inode, pos + copied);
unlock_out:
f2fs_put_page(page, 1);
+update_time:
f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
return copied;
}
@@ -3318,6 +3618,26 @@ void f2fs_destroy_post_read_processing(void)
kmem_cache_destroy(bio_post_read_ctx_cache);
}
+int f2fs_init_post_read_wq(struct f2fs_sb_info *sbi)
+{
+ if (!f2fs_sb_has_encrypt(sbi) &&
+ !f2fs_sb_has_compression(sbi))
+ return 0;
+
+ sbi->post_read_wq = alloc_workqueue("f2fs_post_read_wq",
+ WQ_UNBOUND | WQ_HIGHPRI,
+ num_online_cpus());
+ if (!sbi->post_read_wq)
+ return -ENOMEM;
+ return 0;
+}
+
+void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi)
+{
+ if (sbi->post_read_wq)
+ destroy_workqueue(sbi->post_read_wq);
+}
+
int __init f2fs_init_bio_entry_cache(void)
{
bio_entry_slab = f2fs_kmem_cache_create("bio_entry_slab",
diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
index 9b0bedd82581..498207c9dbe2 100644
--- a/fs/f2fs/debug.c
+++ b/fs/f2fs/debug.c
@@ -94,6 +94,8 @@ static void update_general_status(struct f2fs_sb_info *sbi)
si->inline_xattr = atomic_read(&sbi->inline_xattr);
si->inline_inode = atomic_read(&sbi->inline_inode);
si->inline_dir = atomic_read(&sbi->inline_dir);
+ si->compr_inode = atomic_read(&sbi->compr_inode);
+ si->compr_blocks = atomic_read(&sbi->compr_blocks);
si->append = sbi->im[APPEND_INO].ino_num;
si->update = sbi->im[UPDATE_INO].ino_num;
si->orphans = sbi->im[ORPHAN_INO].ino_num;
@@ -315,6 +317,8 @@ static int stat_show(struct seq_file *s, void *v)
si->inline_inode);
seq_printf(s, " - Inline_dentry Inode: %u\n",
si->inline_dir);
+ seq_printf(s, " - Compressed Inode: %u, Blocks: %u\n",
+ si->compr_inode, si->compr_blocks);
seq_printf(s, " - Orphan/Append/Update Inode: %u, %u, %u\n",
si->orphans, si->append, si->update);
seq_printf(s, "\nMain area: %d segs, %d secs %d zones\n",
@@ -491,6 +495,8 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
atomic_set(&sbi->inline_xattr, 0);
atomic_set(&sbi->inline_inode, 0);
atomic_set(&sbi->inline_dir, 0);
+ atomic_set(&sbi->compr_inode, 0);
+ atomic_set(&sbi->compr_blocks, 0);
atomic_set(&sbi->inplace_count, 0);
for (i = META_CP; i < META_MAX; i++)
atomic_set(&sbi->meta_count[i], 0);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index c681f51e351b..775c96291490 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -116,6 +116,8 @@ typedef u32 block_t; /*
*/
typedef u32 nid_t;
+#define COMPRESS_EXT_NUM 16
+
struct f2fs_mount_info {
unsigned int opt;
int write_io_size_bits; /* Write IO size bits */
@@ -140,6 +142,12 @@ struct f2fs_mount_info {
block_t unusable_cap; /* Amount of space allowed to be
* unusable when disabling checkpoint
*/
+
+ /* For compression */
+ unsigned char compress_algorithm; /* algorithm type */
+ unsigned compress_log_size; /* cluster log size */
+ unsigned char compress_ext_cnt; /* extension count */
+ unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN]; /* extensions */
};
#define F2FS_FEATURE_ENCRYPT 0x0001
@@ -155,6 +163,7 @@ struct f2fs_mount_info {
#define F2FS_FEATURE_VERITY 0x0400
#define F2FS_FEATURE_SB_CHKSUM 0x0800
#define F2FS_FEATURE_CASEFOLD 0x1000
+#define F2FS_FEATURE_COMPRESSION 0x2000
#define __F2FS_HAS_FEATURE(raw_super, mask) \
((raw_super->feature & cpu_to_le32(mask)) != 0)
@@ -712,6 +721,12 @@ struct f2fs_inode_info {
int i_inline_xattr_size; /* inline xattr size */
struct timespec64 i_crtime; /* inode creation time */
struct timespec64 i_disk_time[4];/* inode disk times */
+
+ /* for file compress */
+ u64 i_compressed_blocks; /* # of compressed blocks */
+ unsigned char i_compress_algorithm; /* algorithm type */
+ unsigned char i_log_cluster_size; /* log of cluster size */
+ unsigned int i_cluster_size; /* cluster size */
};
static inline void get_extent_info(struct extent_info *ext,
@@ -1056,12 +1071,15 @@ struct f2fs_io_info {
block_t old_blkaddr; /* old block address before Cow */
struct page *page; /* page to be written */
struct page *encrypted_page; /* encrypted page */
+ struct page *compressed_page; /* compressed page */
struct list_head list; /* serialize IOs */
bool submitted; /* indicate IO submission */
int need_lock; /* indicate we need to lock cp_rwsem */
bool in_list; /* indicate fio is in io_list */
bool is_por; /* indicate IO is from recovery or not */
bool retry; /* need to reallocate block address */
+ bool compressed; /* indicate cluster is compressed */
+ bool encrypted; /* indicate file is encrypted */
enum iostat_type io_type; /* io type */
struct writeback_control *io_wbc; /* writeback control */
struct bio **bio; /* bio for ipu */
@@ -1169,6 +1187,18 @@ enum fsync_mode {
FSYNC_MODE_NOBARRIER, /* fsync behaves nobarrier based on posix */
};
+/*
+ * this value is set in page as a private data which indicate that
+ * the page is atomically written, and it is in inmem_pages list.
+ */
+#define ATOMIC_WRITTEN_PAGE ((unsigned long)-1)
+#define DUMMY_WRITTEN_PAGE ((unsigned long)-2)
+
+#define IS_ATOMIC_WRITTEN_PAGE(page) \
+ (page_privat...
[truncated message content]