Kent Overstreet [Sun, 1 Sep 2024 18:54:42 +0000 (14:54 -0400)]
bcachefs: bch_stripe.disk_label
When reshaping existing stripes, we should keep them on the same target
that they were allocated on; to do this, we need to add a field to the
btree stripe type.
This is a tad awkward, because we only have 8 bits left, and targets are
16 bits - but we only need to store a label, not a full target.
Kent Overstreet [Thu, 5 Sep 2024 00:49:37 +0000 (20:49 -0400)]
bcachefs: Rework btree node pinning
In backpointers fsck, we do a sequential scan of one btree, and check
references to another: extents <-> backpointers
Checking references generates random lookups, so we want to pin that
btree in memory (or only a range, if it doesn't fit in RAM).
Previously, this was done with a simple check in the shrinker - "if
btree node is in range being pinned, don't free it" - but this generated
OOMs, as our shrinker wasn't well behaved if there was less memory
available than expected.
Instead, we now have two different shrinkers and LRU lists; the second
shrinker is for pinned nodes, with seeks set much higher than normal -
so they can still be freed if necessary, but we'll prefer not to.
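As a rough sketch of the shape of this (illustrative names only, using the
generic shrinker API rather than the actual bcachefs shrinkers; the seeks
multiplier is a placeholder):
```
#include <linux/shrinker.h>

/* A second shrinker just for pinned nodes: setting ->seeks much higher
 * than DEFAULT_SEEKS tells reclaim these objects are expensive to
 * recreate, so they're only freed under real memory pressure.
 */
static unsigned long example_pinned_count(struct shrinker *s,
					  struct shrink_control *sc)
{
	return 0;		/* would return the length of the pinned-node LRU */
}

static unsigned long example_pinned_scan(struct shrinker *s,
					 struct shrink_control *sc)
{
	return SHRINK_STOP;	/* would walk the pinned-node LRU and free nodes */
}

static int example_register_pinned_shrinker(void)
{
	struct shrinker *s = shrinker_alloc(0, "example-btree-cache-pinned");

	if (!s)
		return -ENOMEM;

	s->count_objects = example_pinned_count;
	s->scan_objects  = example_pinned_scan;
	s->seeks         = DEFAULT_SEEKS * 8;	/* prefer not to free pinned nodes */
	shrinker_register(s);
	return 0;
}
```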
Hongbo Li [Wed, 4 Sep 2024 07:15:32 +0000 (15:15 +0800)]
bcachefs: Fix compilation error for bch2_sb_member_alloc
Fix the following compilation error:
```
fs/bcachefs/sb-members.c: In function ‘bch2_sb_member_alloc’:
fs/bcachefs/sb-members.c:508:2: error: a label can only be part of a statement and a declaration is not a statement
508 | unsigned nr_devices = max_t(unsigned, dev_idx + 1, c->sb.nr_devices);
```
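For reference, a minimal reproduction of the error class and a common fix
(this is not the bcachefs code itself): before C23, a label must be followed
by a statement, and a declaration is not a statement, so an empty statement
(or a new block) after the label resolves it:
```
int example(int dev_idx, int nr_devices)
{
	if (dev_idx < 0)
		goto out;
out:
	;	/* empty statement: without it, the declaration below sits
		 * directly after the label and triggers the error above */
	unsigned nr = nr_devices > dev_idx + 1 ? nr_devices : dev_idx + 1;
	return nr;
}
```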
Fixes: a7d364a133c7 ("bcachefs: bch2_sb_member_alloc()")
Signed-off-by: Hongbo Li <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>
Kent Overstreet [Mon, 2 Sep 2024 02:39:42 +0000 (22:39 -0400)]
bcachefs: Options for recovery_passes, recovery_passes_exclude
This adds mount options for specifying recovery passes to run, or
exclude; the immediate need for this is that backpointers fsck is having
trouble completing, so we need a way to skip it.
Kent Overstreet [Wed, 4 Sep 2024 19:30:48 +0000 (15:30 -0400)]
bcachefs: Use mm_account_reclaimed_pages() when freeing btree nodes
When freeing in a shrinker callback, we need to notify memory reclaim,
so it knows forward progress has been made.
Normally this is done in e.g. slab code, but we're not freeing through
slab - or rather we are, but these allocations are big, and use the
kmalloc_large() path.
This is really a bug in the slub code, but we're working around it here
for now.
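A minimal sketch of the pattern (the helper name is made up, and it assumes
mm_account_reclaimed_pages() is declared somewhere reachable from fs code):
```
#include <linux/mm.h>
#include <linux/slab.h>

/* Declaration repeated here only to keep the sketch self-contained. */
void mm_account_reclaimed_pages(unsigned long pages);

static void example_free_node_data(void *data, size_t bytes)
{
	/* Freeing a kmalloc_large()-backed buffer bypasses per-object slab
	 * accounting, so tell reclaim how many pages were just returned so
	 * it can see forward progress from this shrinker.
	 */
	kfree(data);
	mm_account_reclaimed_pages(bytes >> PAGE_SHIFT);
}
```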
Kent Overstreet [Sun, 1 Sep 2024 21:32:22 +0000 (17:32 -0400)]
bcachefs: BCH_WRITE_ALLOC_NOWAIT no longer applies to open bucket allocation
Rebalance writes must be BCH_WRITE_ALLOC_NOWAIT because they don't
allocate from the full filesystem - but we don't want spurious
allocation failures due to open buckets.
Thorsten Blum [Mon, 26 Aug 2024 10:11:36 +0000 (12:11 +0200)]
bcachefs: Annotate bch_replicas_entry_{v0,v1} with __counted_by()
Add the __counted_by compiler attribute to the flexible array members
devs to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE.
Increment nr_devs before adding a new device to the devs array and
adjust the array indexes accordingly. Add a helper macro for adding a
new device.
In bch2_journal_read(), explicitly set nr_devs to 0.
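As an illustration of what the annotation buys (the struct below is an
approximation for illustration, not the exact on-disk layout):
__counted_by(nr_devs) tells the compiler how many elements devs[] holds,
which is also why nr_devs has to be bumped before the new device is written:
```
#include <linux/compiler_attributes.h>
#include <linux/types.h>

struct example_replicas_entry {
	__u8	data_type;
	__u8	nr_devs;
	__u8	nr_required;
	__u8	devs[] __counted_by(nr_devs);	/* bounds-checked against nr_devs */
};

static void example_replicas_add_dev(struct example_replicas_entry *e, __u8 dev)
{
	/* Increment nr_devs first so the FORTIFY/UBSAN bound covers the new slot */
	e->nr_devs++;
	e->devs[e->nr_devs - 1] = dev;
}
```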
Thorsten Blum [Sat, 24 Aug 2024 13:57:41 +0000 (15:57 +0200)]
bcachefs: Annotate struct bch_xattr with __counted_by()
Add the __counted_by compiler attribute to the flexible array member
x_name to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE.
Alan Huang [Thu, 15 Aug 2024 15:40:53 +0000 (23:40 +0800)]
bcachefs: Refactor bch2_bset_fix_lookup_table
bch2_bset_fix_lookup_table is too complicated to be easily understood,
and the comment "l now > where" there is also incorrect when where ==
t->end_offset. This patch therefore refactors the function; the idea is
that when where >= rw_aux_tree(b, t)[t->size - 1].offset, we don't need
to adjust the rw aux tree.
Kent Overstreet [Sun, 30 Jun 2024 13:25:56 +0000 (09:25 -0400)]
bcachefs: Assert that we don't lock nodes when !trans->locked
We rely on trans->locked to know if a trans has nodes locked, for
assertions about deadlocks; there can't be more than one trans in the
same process that is locked.
folio_has_private() is an attractive nuisance; filesystem authors
generally don't realise that it actually checks two flags (one of which
is never set by bcachefs). There's no need to check the private flag at
all; for folios owned by bcachefs, we know that folio->private is NULL
when the private flag is clear and non-NULL when the private flag is set.
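A sketch of the replacement idiom (illustrative, not the actual bcachefs
buffered-IO helpers): folio_has_private() tests both PG_private and
PG_private_2, whereas for folios bcachefs owns it's sufficient to look at
folio->private directly:
```
#include <linux/pagemap.h>

/* For folios owned by bcachefs, folio->private is non-NULL exactly when
 * PG_private is set, so checking the pointer is enough - no need for
 * folio_has_private(), which also tests PG_private_2, a flag bcachefs
 * never sets.
 */
static inline bool example_folio_has_state(struct folio *folio)
{
	return folio_get_private(folio) != NULL;
}
```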
Kent Overstreet [Mon, 19 Aug 2024 19:11:20 +0000 (15:11 -0400)]
bcachefs: Drop memalloc_nofs_save() in bch2_btree_node_mem_alloc()
It's really not needed: the only locks used here are the btree cache
lock, which we drop for GFP_WAIT allocations, and btree node locks - but
we also drop those for GFP_WAIT allocations.
Youling Tang [Thu, 15 Aug 2024 08:57:43 +0000 (16:57 +0800)]
bcachefs: drop unused posix acl handlers
Remove struct nop_posix_acl_{access,default} for the bcachefs
filesystem, which doesn't depend on the xattr handler in its
inode->i_op->listxattr() method in any way. There's nothing more to do
than to simply remove the handler. It's been effectively unused ever
since we introduced the new posix acl api. See [1] for details.
Link [1]: https://patchwork.kernel.org/project/linux-fsdevel/cover/20230125-fs-acl-remove-generic-xattr-handlers-v3-0-f760cc58967d@kernel.org/
Alan Huang [Mon, 12 Aug 2024 08:06:09 +0000 (16:06 +0800)]
bcachefs: Minimize the search range used to calculate the mantissa
When the search key's mantissa is larger than node i's, we know that
the search key is larger than the first key of the cacheline
corresponding to node i, so when calculating the mantissa of the nodes
to the right of node i, the left side of the search range can be the
first key of node i. Once the search range is minimized, the mantissa
we are calculating can have more useful bits, thus reducing slow path
comparisons. Besides, we can now remove all the prev array stuff.
The macro allocate_dropping_locks accepts a parameter _trans, but it
was not used; instead, the variable trans was used directly, which may
be a local variable inside a function that calls the macro.
The macro allocate_dropping_locks_errcode accepts a parameter _trans,
but it was not used; instead, the variable trans was used directly,
which may be a local variable inside a function that calls the macro.
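A generic illustration of this bug class (these are not the bcachefs
macros): the macro body has to reference its parameter rather than
capturing whatever caller-local variable happens to share the name:
```
struct example_trans { int restart_count; };

static inline void example_trans_begin(struct example_trans *t)
{
	t->restart_count++;
}

/* Buggy: expands to example_trans_begin(trans), silently picking up any
 * local variable named 'trans' at the call site and ignoring _trans.
 */
#define example_restart_bad(_trans)	example_trans_begin(trans)

/* Fixed: uses the parameter the caller actually passed in. */
#define example_restart_good(_trans)	example_trans_begin(_trans)
```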
After commit 230e9fc28604 ("slab: add SLAB_ACCOUNT flag"), we need to mark
the inode cache as SLAB_ACCOUNT, similar to commit 5d097056c9a0 ("kmemcg:
account for certain kmem allocations to memcg")
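A minimal sketch of the change (the names are illustrative): pass
SLAB_ACCOUNT when creating the inode cache so inode allocations are charged
to the allocating task's memcg:
```
#include <linux/fs.h>
#include <linux/slab.h>

struct example_inode_info {
	struct inode	vfs_inode;
};

static struct kmem_cache *example_inode_cache;

int example_init_inode_cache(void)
{
	example_inode_cache = KMEM_CACHE(example_inode_info,
					 SLAB_RECLAIM_ACCOUNT | SLAB_ACCOUNT);
	return example_inode_cache ? 0 : -ENOMEM;
}
```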
Thorsten Blum [Wed, 21 Aug 2024 16:29:22 +0000 (18:29 +0200)]
bcachefs: Annotate struct bucket_array with __counted_by()
Add the __counted_by compiler attribute to the flexible array member
bucket to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE.
bcachefs: Fix format specifier in bch2_btree_key_cache_to_text()
When building for a 32-bit architecture, for which 'size_t' is
'unsigned int', there is a compiler warning due to use of '%lu':
In file included from fs/bcachefs/vstructs.h:5,
from fs/bcachefs/bcachefs_format.h:80,
from fs/bcachefs/bcachefs.h:207,
from fs/bcachefs/btree_key_cache.c:3:
fs/bcachefs/btree_key_cache.c: In function 'bch2_btree_key_cache_to_text':
fs/bcachefs/btree_key_cache.c:795:25: error: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t' {aka 'unsigned int'} [-Werror=format=]
795 | prt_printf(out, "pending:\t%lu\r\n", per_cpu_sum(bc->nr_pending));
| ^~~~~~~~~~~~~~~~~~~
fs/bcachefs/util.h:78:63: note: in definition of macro 'prt_printf'
78 | #define prt_printf(_out, ...) bch2_prt_printf(_out, __VA_ARGS__)
| ^~~~~~~~~~~
fs/bcachefs/btree_key_cache.c:795:38: note: format string is defined here
795 | prt_printf(out, "pending:\t%lu\r\n", per_cpu_sum(bc->nr_pending));
| ~~^
| |
| long unsigned int
| %u
cc1: all warnings being treated as errors
Use the proper specifier, '%zu', to resolve the warning.
Fixes: e447e49977b8 ("bcachefs: key cache can now allocate from pending")
Signed-off-by: Nathan Chancellor <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>
Kent Overstreet [Thu, 13 Jun 2024 19:35:47 +0000 (15:35 -0400)]
bcachefs: key cache can now allocate from pending
btree_trans objects can hold the btree_trans_barrier srcu read lock for
an extended amount of time (they shouldn't, but it's difficult to
guarantee).
The SRCU barrier blocks memory reclaim, so to avoid too many stranded
key cache items, this uses the new pending_rcu_items to allocate from
pending items - like we did before, but now without a global lock on the
key cache.
Kent Overstreet [Tue, 11 Jun 2024 00:47:03 +0000 (20:47 -0400)]
bcachefs: rcu_pending
Generic data structure for explicitly tracking pending RCU items,
allowing items to be dequeued (i.e. allocate from items pending
freeing). Works with conventional RCU and SRCU, and possibly other RCU
flavors in the future, meaning this can serve as a more generic
replacement for SLAB_TYPESAFE_BY_RCU.
Pending items are tracked in radix trees; if memory allocation fails, we
fall back to linked lists.
An rcu_pending is initialized with a callback, which is invoked when
pending items' grace periods have expired. Two types of callback
processing are handled specially:
- RCU_PENDING_KVFREE_FN
  New backend for kvfree_rcu(). Slightly faster, and eliminates the
  synchronize_rcu() slowpath in kvfree_rcu_mightsleep() - instead, an
  rcu_head is allocated if we don't have one and can't use the radix
  tree.
  TODO:
  - add a shrinker (as in the existing kvfree_rcu implementation) so that
    memory reclaim can free expired objects if callback processing isn't
    keeping up, and to expedite a grace period if we're under memory
    pressure and too much memory is stranded by RCU
  - add a counter for the amount of memory pending
- RCU_PENDING_CALL_RCU_FN
  Accelerated backend for call_rcu() - pending callbacks are tracked in
  a radix tree to eliminate linked list overhead.
These serve as replacement backends for kvfree_rcu() and call_rcu(), and
may be of interest to other users (e.g. SLAB_TYPESAFE_BY_RCU users).
Note:
Internally, we're using a single rearming call_rcu() callback for
notifications from the core RCU subsystem when objects are ready to be
processed.
Ideally we would be getting a callback every time a grace period
completes for which we have objects, but that would require multiple
rcu_heads in flight, and since the number of gp sequence numbers with
uncompleted callbacks is not bounded, we can't do that yet.
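A conceptual sketch of the dequeue-instead-of-allocate idea; the type and
function names below are hypothetical stand-ins invented for illustration,
not the actual rcu_pending API added by this commit:
```
#include <linux/container_of.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical stand-ins, declared only so the sketch is self-contained: */
struct hypothetical_rcu_pending;
struct rcu_head *hypothetical_rcu_pending_dequeue(struct hypothetical_rcu_pending *p);

struct my_obj {
	struct rcu_head	rcu;
	int		payload;
};

static struct my_obj *my_obj_alloc(struct hypothetical_rcu_pending *pending)
{
	/* Prefer reusing an object still waiting out its grace period -
	 * typesafe reuse in the style of SLAB_TYPESAFE_BY_RCU, without a
	 * fresh allocation.
	 */
	struct rcu_head *r = hypothetical_rcu_pending_dequeue(pending);

	if (r)
		return container_of(r, struct my_obj, rcu);
	return kzalloc(sizeof(struct my_obj), GFP_KERNEL);
}
```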
Kent Overstreet [Fri, 16 Aug 2024 16:31:53 +0000 (12:31 -0400)]
bcachefs: Fix deadlock in __wait_on_freeing_inode()
We can't call __wait_on_freeing_inode() with btree locks held; we're
waiting on another thread that's in evict(), and before it clears that
bit it needs to write that inode to flush timestamps - deadlock.
Fixing this involves a fair amount of re-jiggering to plumb a new
transaction restart.
Kent Overstreet [Thu, 8 Aug 2024 15:18:21 +0000 (11:18 -0400)]
inode: make __iget() a static inline
bcachefs is switching to an rhashtable for vfs inodes instead of the
standard inode.c hashtable, so we need this exported - or rather, a
static inline makes more sense for a single atomic_inc().
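Per the description above, the change amounts to something like the
following sketch (name changed for illustration):
```
#include <linux/fs.h>

/* Taking a reference on an inode we already hold a pointer to is just a
 * single atomic increment, so a static inline in a header is a better
 * fit than exporting the symbol.
 */
static inline void example_iget(struct inode *inode)
{
	atomic_inc(&inode->i_count);
}
```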
bcachefs: Replace div_u64 with div64_u64 where second param is u64
Bcachefs often uses this function to divide by nanosecond-scale values,
which can easily cause problems when the divisor is cast to u32. For
example, `cat /sys/fs/bcachefs/*/internal/rebalance_status` would return
invalid data in the `duration waited` field, because dividing by the
number of nanoseconds in a minute requires the divisor parameter to be
u64.
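For example (a sketch, not an actual bcachefs call site): div_u64() takes a
u32 divisor, so dividing by the number of nanoseconds in a minute (60 *
10^9, which doesn't fit in 32 bits) silently truncates the divisor, while
div64_u64() divides by the full u64:
```
#include <linux/math64.h>
#include <linux/time64.h>

static u64 example_minutes_waited(u64 duration_ns)
{
	u64 nsec_per_minute = 60ULL * NSEC_PER_SEC;	/* 60,000,000,000: overflows u32 */

	/* div_u64(duration_ns, nsec_per_minute) would truncate the divisor
	 * to u32 and return garbage; div64_u64() takes a u64 divisor.
	 */
	return div64_u64(duration_ns, nsec_per_minute);
}
```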
Alyssa Ross [Sat, 7 Sep 2024 16:00:26 +0000 (18:00 +0200)]
bcachefs: Fix negative timespecs
This fixes two problems in the handling of negative times:
• rem is signed, but the rem * c->sb.nsec_per_time_unit operation
produced a bogus unsigned result, because s32 * u32 = u32.
• The timespec was not normalized (it could contain more than a
billion nanoseconds).
For example, { .tv_sec = -14245441, .tv_nsec = 750000000 }, after
being round tripped through timespec_to_bch2_time and then
bch2_time_to_timespec would come back as
{ .tv_sec = -14245440, .tv_nsec = 4044967296 } (more than 4 billion
nanoseconds).
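A sketch of a conversion that avoids both problems (field and helper names
are illustrative, not the exact bcachefs code): keep the multiply signed,
then renormalize a negative nanosecond part into tv_sec:
```
#include <linux/math64.h>
#include <linux/time64.h>

static struct timespec64 example_time_to_timespec(s64 time,
						  u32 time_units_per_sec,
						  u32 nsec_per_time_unit)
{
	struct timespec64 ts;
	s32 rem;

	ts.tv_sec  = div_s64_rem(time, time_units_per_sec, &rem);
	/* Cast so the multiply is done signed: s32 * u32 would be unsigned */
	ts.tv_nsec = (s64) rem * nsec_per_time_unit;

	/* Normalize: fold a negative remainder into tv_sec */
	if (ts.tv_nsec < 0) {
		ts.tv_sec--;
		ts.tv_nsec += NSEC_PER_SEC;
	}
	return ts;
}
```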
Kent Overstreet [Wed, 4 Sep 2024 21:49:20 +0000 (17:49 -0400)]
bcachefs: Simplify bch2_bkey_drop_ptrs()
bch2_bkey_drop_ptrs() had some complicated machinery for avoiding
O(n^2) when dropping multiple pointers - but when n is only going to be
~4, it's not worth it.