
Howto btrfs » History » Version 1

Jeremias Keihsler, 13.01.2017 09:46

h1. btrfs-file-system

h2. preliminary note
Most of this is taken from:

* https://btrfs.wiki.kernel.org/index.php/Problem_FAQ
* http://nathantypanski.com/blog/2014-07-14-no-space-left.html
* http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html
h2. check for empty space on file system

First, check how much space has been allocated on your filesystem:

<pre><code class="bash">
btrfs fi show
</code></pre>

Next, check how much of your metadata allocation has been used up:

<pre><code class="bash">
btrfs fi df /mount/point
</code></pre>

If there is space left for data but none left for metadata, a partial balance may solve the issue:

<pre><code class="bash">
btrfs balance start -dusage=5 /mount/point
</code></pre>
h2. Fixing Btrfs Filesystem Full Problems

h3. Clear space now

If you have historical snapshots, the quickest way to get space back, so that you can look at the filesystem and apply better fixes and cleanups, is to drop the oldest historical snapshots.
Two things to note:

* If you have historical snapshots as described here, delete the oldest ones first, and wait (see below). However, if you just deleted 100GB and replaced it with another 100GB that failed to fully write, leaving you out of space, all your snapshots will have to be deleted to clear the blocks of the old file you removed to make room for the new one. (If you know exactly which file it is, you can go into all your snapshots and delete it manually, but in the common case it will be multiple files and you won't know which ones, so you'll have to drop all your snapshots before you get the space back.)
* After deleting snapshots, it can take a minute or more for btrfs fi show to show the space freed. Do not be too impatient: run btrfs fi show in a loop and see if the number changes every minute. If it does not, carry on and delete other snapshots or look at rebalancing.
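The wait-and-check loop described above can be sketched as a small shell function. This is only a sketch: the function name, the mount point, and the iteration count are placeholders to adapt to your setup.

```shell
# Print the allocation summary once a minute so you can watch space
# being freed after a snapshot deletion.  Stop early with Ctrl-C once
# the numbers settle; mount point and check count are placeholders.
watch_btrfs_space() {
    local mnt="$1" checks="${2:-10}"
    for _ in $(seq "$checks"); do
        btrfs fi show "$mnt"
        sleep 60
    done
}

# Example: watch_btrfs_space /mnt/btrfs_pool1 10
```

If the "used" number stops shrinking between iterations, move on to deleting further snapshots or rebalancing.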
Note that even in the cases described below, you may have to clear one or more snapshots to make space before btrfs balance can run. As a corollary, btrfs can get into states that are hard to recover from once it reports 'no space'. As a result, even if you don't need snapshots, keeping at least one around to free up space, should you hit that mis-feature/bug, can be handy.
h3. Is your filesystem really full? Mis-balanced data chunks

Look at the filesystem show output:

<pre><code class="bash">
legolas:~# btrfs fi show
Label: btrfs_pool1  uuid: 4850ee22-bf32-4131-a841-02abdb4a5ba6
	Total devices 1 FS bytes used 441.69GiB
	devid    1 size 865.01GiB used 751.04GiB path /dev/mapper/cryptroot
</code></pre>
Only about 50% of the space is used (441 out of 865GB), but the device is 87% full (751 out of 865GB). Unfortunately it's not uncommon for a btrfs device to fill up this way, due to the fact that it does not rebalance chunks (3.18+ has started freeing empty chunks, which is a step in the right direction).

In the case above, because the filesystem is only about 50% full, I can ask balance to rewrite all chunks that have less than 55% space used. Rebalancing those blocks actually means taking the data in those blocks and putting it in fuller blocks, so that you end up being able to free the less-used blocks.
This means the bigger the -dusage value, the more work balance will have to do (i.e. taking fuller and fuller blocks and trying to free them up by putting their data elsewhere). Also, if your FS is 55% full, using -dusage=55 is ok, but there isn't a 1-to-1 correlation and you'll likely be ok with a smaller dusage number, so start small and ramp up as needed.
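The start-small-and-ramp-up approach can be sketched as a shell function. The function name, the mount point, and the percentage steps are placeholders, not part of any btrfs tooling:

```shell
# Rebalance data chunks with progressively larger -dusage cutoffs.
# Stops at the first failure (e.g. out of space) so you can inspect
# the filesystem before pushing further.  Placeholder values throughout.
ramp_data_balance() {
    local mnt="$1"
    for pct in 5 10 25 40 55; do
        echo "balancing data chunks with usage <= ${pct}%"
        btrfs balance start -dusage="$pct" "$mnt" || return 1
    done
}

# Example: ramp_data_balance /mnt/btrfs_pool1
```

Each pass is cheap if few chunks match the cutoff, so starting low wastes little time and often frees enough space that the later, more expensive passes become unnecessary.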
<pre><code class="bash">
legolas:~# btrfs balance start -dusage=55 /mnt/btrfs_pool1 &

# Follow the progress along with:
legolas:~# while :; do btrfs balance status -v /mnt/btrfs_pool1; sleep 60; done
Balance on '/mnt/btrfs_pool1' is running
10 out of about 315 chunks balanced (22 considered), 97% left
Dumping filters: flags 0x1, state 0x1, force is off
  DATA (flags 0x2): balancing, usage=55
Balance on '/mnt/btrfs_pool1' is running
16 out of about 315 chunks balanced (28 considered), 95% left
Dumping filters: flags 0x1, state 0x1, force is off
  DATA (flags 0x2): balancing, usage=55
(...)
</code></pre>
When it's over, the filesystem now looks like this (note devid used is now 513GB instead of 751GB):

<pre><code class="bash">
legolas:~# btrfs fi show
Label: btrfs_pool1  uuid: 4850ee22-bf32-4131-a841-02abdb4a5ba6
	Total devices 1 FS bytes used 441.64GiB
	devid    1 size 865.01GiB used 513.04GiB path /dev/mapper/cryptroot
</code></pre>
Before you ask: yes, btrfs should do this for you on its own, but currently doesn't (as of 3.14).

h3. Is your filesystem really full? Misbalanced metadata
Unfortunately btrfs has another failure case where the metadata space can fill up. When this happens, even though you have data space left, no new files will be writable.
In the example below, you can see Metadata DUP 9.5GB out of 10GB. Btrfs keeps 0.5GB for itself, so metadata is effectively full and prevents new writes.
One suggested way out is to force a full rebalance, and in the example below you can see metadata go back down to 7.39GB after it's done. Yes, there again, it would be nice if btrfs did this on its own; it will one day (some of it is now in 3.18).
Sometimes just using -dusage=0 is enough to rebalance metadata (this is now done automatically in 3.18 and above), but if it's not enough, you'll have to increase the number.
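As a sketch, btrfs-progs also accepts a -musage filter that targets metadata chunks the same way -dusage targets data chunks, which can be ramped up in the same fashion. The function name, mount point, and percentage steps below are placeholders:

```shell
# Try to compact metadata chunks with progressively larger -musage
# cutoffs (mirrors the -dusage ramp used for data chunks).
# Stops at the first failure; placeholder values throughout.
ramp_metadata_balance() {
    local mnt="$1"
    for pct in 0 5 10; do
        echo "balancing metadata chunks with usage <= ${pct}%"
        btrfs balance start -musage="$pct" "$mnt" || return 1
    done
}

# Example: ramp_metadata_balance /mnt/btrfs_pool2
```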
<pre><code class="bash">
legolas:/mnt/btrfs_pool2# btrfs fi df .
Data, single: total=800.42GiB, used=636.91GiB
System, DUP: total=8.00MiB, used=92.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=10.00GiB, used=9.50GiB
Metadata, single: total=8.00MiB, used=0.00

legolas:/mnt/btrfs_pool2# btrfs balance start -v -dusage=0 /mnt/btrfs_pool2
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=0
Done, had to relocate 91 out of 823 chunks

legolas:/mnt/btrfs_pool2# btrfs fi df .
Data, single: total=709.01GiB, used=603.85GiB
System, DUP: total=8.00MiB, used=88.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=10.00GiB, used=7.39GiB
Metadata, single: total=8.00MiB, used=0.00
</code></pre>
h3. Balance cannot run because the filesystem is full

One trick to get around this is to add a device (even a USB key will do) to your btrfs filesystem. This should allow balance to start, and you can then remove the device with btrfs device delete when the balance is finished.
It's also been said on the list that kernel 3.14 can fix some balancing issues that older kernels can't, so give that a shot if your kernel is old.

Note that it's even possible for a filesystem to be so full that you cannot even delete snapshots to free space. The following shows how to work around that:
<pre><code class="bash">
root@polgara:/mnt/btrfs_pool2# btrfs fi df .
Data, single: total=159.67GiB, used=80.33GiB
System, single: total=4.00MiB, used=24.00KiB
Metadata, single: total=8.01GiB, used=7.51GiB <<<< BAD
root@polgara:/mnt/btrfs_pool2# btrfs balance start -v -dusage=0 /mnt/btrfs_pool2
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=0
Done, had to relocate 0 out of 170 chunks
root@polgara:/mnt/btrfs_pool2# btrfs balance start -v -dusage=1 /mnt/btrfs_pool2
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=1
ERROR: error during balancing '/mnt/btrfs_pool2' - No space left on device
There may be more info in syslog - try dmesg | tail
root@polgara:/mnt/btrfs_pool2# dd if=/dev/zero of=/var/tmp/btrfs bs=1G count=5
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB) copied, 7.68099 s, 699 MB/s
root@polgara:/mnt/btrfs_pool2# losetup -v -f /var/tmp/btrfs
Loop device is /dev/loop0
root@polgara:/mnt/btrfs_pool2# btrfs device add /dev/loop0 .
Performing full device TRIM (5.00GiB) ...
root@polgara:/mnt/btrfs_pool2# btrfs subvolume delete space2_daily_20140603_00:05:01
Delete subvolume '/mnt/btrfs_pool2/space2_daily_20140603_00:05:01'
root@polgara:/mnt/btrfs_pool2# for i in *daily*; do btrfs subvolume delete $i; done
Delete subvolume '/mnt/btrfs_pool2/space2_daily_20140604_00:05:01'
Delete subvolume '/mnt/btrfs_pool2/space2_daily_20140605_00:05:01'
Delete subvolume '/mnt/btrfs_pool2/space2_daily_20140606_00:05:01'
Delete subvolume '/mnt/btrfs_pool2/space2_daily_20140607_00:05:01'
Delete subvolume '/mnt/btrfs_pool2/space2_daily_20140608_00:05:01'
Delete subvolume '/mnt/btrfs_pool2/space2_daily_20140609_00:05:01'
root@polgara:/mnt/btrfs_pool2# btrfs device delete /dev/loop0 .
root@polgara:/mnt/btrfs_pool2# btrfs balance start -v -dusage=1 /mnt/btrfs_pool2
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=1
Done, had to relocate 5 out of 169 chunks

root@polgara:/mnt/btrfs_pool2# btrfs fi df .
Data, single: total=154.01GiB, used=80.06GiB
System, single: total=4.00MiB, used=28.00KiB
Metadata, single: total=8.01GiB, used=4.88GiB <<< GOOD
</code></pre>
h3. Misc Balance Resources

For more info, please read:

* https://btrfs.wiki.kernel.org/index.php/FAQ#Raw_disk_usage
* https://btrfs.wiki.kernel.org/index.php/Balance_Filters