The Wiert Corner – irregular stream of stuff

Jeroen W. Pluimers on .NET, C#, Delphi, databases, and personal interests


Archive for the ‘btrfs’ Category

More on empty files

Posted by jpluimers on 2021/10/07

TL;DR: Empty files are indeed of size zero, but some disk space is still involved for their metadata (name, permissions, timestamps, and so on).

Some links (via [WayBack] create zero sized file – Google Search):

  • [WayBack] Zero-byte file – Wikipedia
  • [WayBack] filesystems – How can a file size be zero? – Super User (thanks [WayBack] phuclv):

    Filesystems store a lot of information about a file, such as the file name, file size, creation time, access time, modified time, creating user, user and group permissions, fragments, pointers to the clusters that store the file, hard/soft links, attributes… These are called file metadata. Why would you count that metadata into the file size when users do not (need to) care about it and don’t know about it? They only really care about the file content.

    Moreover, each filesystem stores different types of metadata, which take different amounts of space on disk. For example, POSIX permissions are very different from NTFS permissions, and there are also inode numbers in POSIX which do not exist on Windows. Even POSIX filesystems vary a lot: ext3 uses 32-bit block addresses, ext4 48-bit, Btrfs 64-bit and ZFS 128-bit addresses. So how would you count that metadata into the file size?

    Take another example: a 100-byte file whose metadata consumes 56 bytes on the current filesystem. We copy the file to another filesystem, where it now takes 128 bytes of metadata. The file contents are exactly the same, and the number of bytes in the file is also the same. So displaying the file size as 156 bytes on one system but 228 bytes on another would be very confusing and counter-intuitive.

  • [WayBack] What is the concept of creating a file with zero bytes in Linux? – Unix & Linux Stack Exchange:

    touch will create an inode, and ls -i or stat will show info about the inode:

    $ touch test
    $ ls -i test
    28971114 test
    $ stat test
      File: ‘test’
      Size: 0           Blocks: 0          IO Block: 4096   regular empty file
    Device: fc01h/64513d    Inode: 28971114    Links: 1
    Access: (0664/-rw-rw-r--)  Uid: ( 1000/1000)   Gid: ( 1000/1000)
    Access: 2017-03-28 17:38:07.221131925 +0200
    Modify: 2017-03-28 17:38:07.221131925 +0200
    Change: 2017-03-28 17:38:07.221131925 +0200
     Birth: -
    

    Notice that test uses 0 blocks. To store the data displayed, the inode uses some bytes. Those bytes are stored in the inode table. Look at the ext2 page for an example of an inode structure [WayBack].
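
    A quick way to see that empty files still consume filesystem resources is the free inode count. A minimal sketch, assuming bash and a filesystem with a fixed inode table such as ext4 (btrfs allocates inodes dynamically, so df -i is not meaningful there; the /tmp path is just an example):

    df -i /tmp                  # note the IFree column
    touch /tmp/empty-{1..100}   # create 100 zero-byte files
    df -i /tmp                  # IFree dropped by 100: one inode per empty file
    rm /tmp/empty-{1..100}      # clean up; the inodes are freed again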

Oh and a nice NTFS thing (thanks [WayBack] Paweł Bulwan):

and in the case of NTFS, the size of a file reported by Windows and most tools is actually the size of the main stream of the file, which we perceive as the content of the file. A file stored on an NTFS partition can additionally have data stored in alternate data streams, and still have a reported size of 0. It’s a nice filesystem feature to know about if you want the full picture :)
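
A minimal sketch of this at a Windows command prompt (the file name empty.txt and the stream name ads are made up for illustration):

rem create a zero-byte file, then hide data in an alternate data stream
type nul > empty.txt
echo hidden payload > empty.txt:ads
rem dir still reports a size of 0 bytes: only the main stream counts
dir empty.txt
rem dir /r lists the alternate streams; more can read a stream back
dir /r empty.txt
more < empty.txt:ads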

Related: my really old post command line – create empty text file from a batch file (via: Stack Overflow)

–jeroen

Posted in *nix, btrfs, Development, File-Systems, NTFS, Power User, Software Development, Windows

When your btrfs partition is damaged.

Posted by jpluimers on 2019/05/27

A while ago I ended up with a damaged btrfs partition, which I only discovered after the virtualisation host decided to reboot for no apparent reason.

I’m not sure what caused it (by now the machine has been retired, as it was already getting a bit old), but btrfs was panicking shortly after boot, so the VM as-is was unusable.

In the end I had to (a sketch of steps 2-4 follows the list):

  1. Boot from a Tumbleweed Rescue DVD (download the Rescue CD – x86_64 from [WayBack] openSUSE:Tumbleweed installation – openSUSE)
  2. Add a fresh backup hard disk in read-write mode
  3. Mount the old one in read-only mode
  4. rsync -avloz over as much as I could
  5. Restore the VM from a backup
  6. Attach the backup hard disk
  7. Diff what I missed (only a few bits in the /etc tree, plus parts of my home directory for which I had not yet pushed the git repositories).
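
A minimal sketch of steps 2-4, assuming /dev/sdb1 is the fresh backup disk and /dev/sda2 the damaged btrfs partition (both device names are assumptions):

mkdir -p /mnt/backup /mnt/damaged
mount /dev/sdb1 /mnt/backup               # the fresh backup disk, read-write
mount -o ro /dev/sda2 /mnt/damaged        # the damaged btrfs, read-only
# if a plain read-only mount fails, -o ro,usebackuproot may still get you in
rsync -avloz /mnt/damaged/ /mnt/backup/   # copy over as much as possible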

These tips didn’t work for me, but might work for others: [WayBack] SDB:BTRFS – openSUSE – How to repair a broken/unmountable btrfs filesystem

–jeroen

Posted in *nix, btrfs, File-Systems, Linux, openSuSE, Power User, SuSE Linux, Tumbleweed

Not sure if this btrfs error was benign or not.

Posted by jpluimers on 2019/05/17

After booting the VM, running a full btrfs balance gave this in the log:

# btrfs balance /
WARNING:

    Full balance without filters requested. This operation is very
    intense and takes potentially very long. It is recommended to
    use the balance filters to narrow down the scope of balance.
    Use 'btrfs balance start --full-balance' option to skip this
    warning. The operation will start in 10 seconds.
    Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting balance without any filters.
ERROR: error during balancing '/': Input/output error
There may be more info in syslog - try dmesg | tail
diaspore:~# dmesg | tail
[ 2261.857360] BTRFS info (device sda2): found 144 extents
[ 2261.922014] BTRFS info (device sda2): found 144 extents
[ 2262.003653] BTRFS info (device sda2): found 144 extents
[ 2262.146557] BTRFS info (device sda2): found 144 extents
[ 2262.268034] BTRFS info (device sda2): relocating block group 20951597056 flags data
[ 2268.255631] BTRFS info (device sda2): found 19765 extents
[ 2278.541549] BTRFS info (device sda2): found 19758 extents
[ 2278.685372] BTRFS info (device sda2): relocating block group 14558429184 flags data
[ 2278.714483] BTRFS warning (device sda2): csum failed root -9 ino 269 off 65150976 csum 0x27374190 expected csum 0x7091fbbc mirror 1
[ 2278.714619] BTRFS warning (device sda2): csum failed root -9 ino 269 off 65150976 csum 0x27374190 expected csum 0x7091fbbc mirror 1

Booting from a rescue DVD and then checking the unmounted /dev/sda2, nothing appeared to be wrong:

localhost:~ # btrfs check /dev/sda2
Checking filesystem on /dev/sda2
UUID: 23d33d0f-0468-4408-b73c-b0eec9387d82
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
checking quota groups
found 6460166144 bytes used, no error found
total csum bytes: 5795704
total tree bytes: 214171648
total fs tree bytes: 193740800
total extent tree bytes: 12910592
btree space waste bytes: 40754145
file data blocks allocated: 35720101888
 referenced 11352182784
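
Note that btrfs check on an unmounted device mainly validates the filesystem structures; as far as I understand, the data checksums that produced the warnings above are only actually re-read by a scrub (or by btrfs check --check-data-csum). A minimal sketch, assuming the filesystem is mounted at /:

btrfs scrub start -B /    # -B: stay in the foreground and print statistics
btrfs scrub status /      # progress/result when the scrub runs in the background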

Both the VM and the rescue system had the same btrfs-progs version:

# btrfs version
btrfs-progs v4.13

If I ever need recovery, then these links will likely help:

–jeroen

Posted in *nix, *nix-tools, btrfs, File-Systems, Power User

Follow up on “btrfs free space. It’s complicated. Still.”

Posted by jpluimers on 2019/02/14

In the meantime I’ve made a bit of progress on btrfs free space. It’s complicated. Still.

Let me start with an example system; the full details are further below.

  • the total of the quotas is slightly more than 1.1 Gibibyte
    • Sometimes a rescan helps make the quota list more accurate (see the sketch after this list):
      btrfs quota rescan /
  • the disk partition itself is 10 Gibibyte
  • btrfs indicates there is 6.6 Gibibyte used
  • df indicates there is 11 Gigabyte total, 6.9 Gigabyte used and 2.6 Gigabyte available.
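
For reference, the qgroup numbers only mean anything once quotas are enabled; a minimal sketch (the / mount point is an assumption):

btrfs quota enable /     # start tracking per-subvolume usage in qgroups
btrfs quota rescan -w /  # recount usage and wait (-w) for the rescan to finish
btrfs qgroup show /      # list referenced/exclusive sizes per qgroup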

In short: the used 6.6 Gibibyte (which matches df’s 6.9 Gigabyte) does not match the 1.1 Gibibyte quota total. A situation very similar to [WayBack] Disk usage is more than double the snapshots exclusive data — Linux BTRFS.

Reminder to self: try bedup, which is supposed to deduplicate btrfs data: [WayBack] Re: Disk usage is more than double the snapshots exclusive data — Linux BTRFS

I need to look into de-duplication (I know this particular machine holds quite a bit of duplicate data).

But first let’s get the size down a bit with this series of commands:

sftp-host:~ # btrfs balance start -dusage=0 -musage=0 /
Done, had to relocate 0 out of 18 chunks
sftp-host:~ # btrfs balance start -dusage=10 -musage=10 /
Done, had to relocate 1 out of 18 chunks
sftp-host:~ # btrfs balance start -dusage=20 -musage=20 /
Done, had to relocate 1 out of 18 chunks
sftp-host:~ # btrfs balance start -dusage=30 -musage=30 /
Done, had to relocate 2 out of 18 chunks
sftp-host:~ # btrfs balance start -dusage=40 -musage=40 /
Done, had to relocate 1 out of 17 chunks
sftp-host:~ # btrfs balance start -dusage=50 -musage=40 /
Done, had to relocate 2 out of 17 chunks
sftp-host:~ # btrfs balance start -dusage=60 -musage=40 /
Done, had to relocate 2 out of 17 chunks
sftp-host:~ # btrfs balance start -dusage=60 -musage=60 /
sftp-host:~ # btrfs filesystem show
Label: none  uuid: 6492a1c6-5fbc-4938-bf11-57d6194e6b8f
    Total devices 1 FS bytes used 6.61GiB
    devid    1 size 10.00GiB used 8.88GiB path /dev/sda2

sftp-host:~ # btrfs filesystem df /
Data, single: total=7.82GiB, used=6.35GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=512.00MiB, used=263.47MiB
GlobalReserve, single: total=22.67MiB, used=0.00B

Compare this to the initial situation:

sftp-host:~ # btrfs filesystem show
Label: none  uuid: 6492a1c6-5fbc-4938-bf11-57d6194e6b8f
    Total devices 1 FS bytes used 6.61GiB
    devid    1 size 10.00GiB used 10.00GiB path /dev/sda2

sftp-host:~ # btrfs filesystem df /
Data, single: total=8.94GiB, used=6.35GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=512.00MiB, used=264.27MiB
GlobalReserve, single: total=23.48MiB, used=0.00B

Now you see that:

  • less of the partition is now allocated by the filesystem (was 10 Gibibyte, now 8.88 Gibibyte)
  • less storage is allocated for the data (was 8.94 Gibibyte, now 7.82 Gibibyte, to store the same 6.35 Gibibyte)

If the above succeeds

Continue with -dusage/-musage values in steps closer to 99 (the values are percentages) and, if that succeeds, try a full balance:

btrfs balance start --full-balance /

In my experience it needs at least 60% free disk space (as reported by df -h) to run to completion. If it fails, that’s no problem: what fails is merging the final, almost-full chunks, and those would soon be split again by filesystem write activity anyway.
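
The stepped invocations above lend themselves to a small loop; a minimal sketch (the 10-point step and the / mount point are assumptions, adjust to taste):

# relocate progressively fuller chunks; stop at the first failure
for pct in 10 20 30 40 50 60 70 80 90 99; do
    btrfs balance start -dusage=$pct -musage=$pct / || break
done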

Nicer overview

You can even get a nicer view by executing btrfs filesystem usage -T / (which I did after continuing up to 99):

Overall:
    Device size:          10.00GiB
    Device allocated:          8.22GiB
    Device unallocated:        1.78GiB
    Device missing:          0.00B
    Used:              7.00GiB
    Free (estimated):          2.72GiB  (min: 1.83GiB)
    Data ratio:               1.00
    Metadata ratio:           2.00
    Global reserve:       24.55MiB  (used: 48.00KiB)

             Data    Metadata  System              
Id Path      single  DUP       DUP      Unallocated
-- --------- ------- --------- -------- -----------
 1 /dev/sda2 7.41GiB 768.00MiB 64.00MiB     1.78GiB
-- --------- ------- --------- -------- -----------
   Total     7.41GiB 384.00MiB 32.00MiB     1.78GiB
   Used      6.47GiB 269.88MiB 16.00KiB


If the above fails

Three things to try now:

  1. Start again with lower values of -dusage and -musage.
  2. Split -dusage and -musage into separate btrfs balance start commands (see the sketch below).
  3. Remove any snapper snapshots that you do not need.
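
A hedged example of item 2, balancing data and metadata chunks in separate passes (the 20% threshold is an assumption):

btrfs balance start -dusage=20 /   # only data chunks that are at most 20% full
btrfs balance start -musage=20 /   # then only metadata chunks at most 20% full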

Log:

sftp-host:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        176M     0  176M   0% /dev
tmpfs           182M     0  182M   0% /dev/shm
tmpfs           182M  868K  181M   1% /run
tmpfs           182M     0  182M   0% /sys/fs/cgroup
/dev/sda2        11G  6.9G  2.6G  73% /
/dev/sda2        11G  6.9G  2.6G  73% /var/spool
/dev/sda2        11G  6.9G  2.6G  73% /tmp
/dev/sda2        11G  6.9G  2.6G  73% /boot/grub2/i386-pc
/dev/sda2        11G  6.9G  2.6G  73% /boot/grub2/x86_64-efi
/dev/sda2        11G  6.9G  2.6G  73% /var/crash
/dev/sda2        11G  6.9G  2.6G  73% /var/lib/named
/dev/sda2        11G  6.9G  2.6G  73% /var/opt
/dev/sda2        11G  6.9G  2.6G  73% /var/lib/mailman
/dev/sda2        11G  6.9G  2.6G  73% /var/tmp
/dev/sda2        11G  6.9G  2.6G  73% /var/log
/dev/sda2        11G  6.9G  2.6G  73% /var/lib/pgsql
/dev/sda2        11G  6.9G  2.6G  73% /var/lib/machines
/dev/sda2        11G  6.9G  2.6G  73% /srv
/dev/sda2        11G  6.9G  2.6G  73% /usr/local
/dev/sda2        11G  6.9G  2.6G  73% /opt
/dev/sda2        11G  6.9G  2.6G  73% /.snapshots
/dev/sda3       5.5G   36M  5.5G   1% /home
tmpfs            37M     0   37M   0% /run/user/1000
sftp-host:~ # btrfs filesystem show
Label: none  uuid: 6492a1c6-5fbc-4938-bf11-57d6194e6b8f
    Total devices 1 FS bytes used 6.61GiB
    devid    1 size 10.00GiB used 10.00GiB path /dev/sda2

sftp-host:~ # btrfs filesystem df /
Data, single: total=8.94GiB, used=6.35GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=512.00MiB, used=264.27MiB
GlobalReserve, single: total=23.48MiB, used=0.00B
sftp-host:~ # btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5          16.00KiB     16.00KiB 
0/257         1.05MiB      1.05MiB 
0/258         2.55GiB     51.11MiB 
0/259         2.36MiB      2.36MiB 
0/260        16.00KiB     16.00KiB 
0/261        16.00KiB     16.00KiB 
0/262        16.00KiB     16.00KiB 
0/263        36.00KiB     36.00KiB 
0/264        16.00KiB     16.00KiB 
0/265        16.00KiB     16.00KiB 
0/266        16.00KiB     16.00KiB 
0/267        16.00KiB     16.00KiB 
0/268        16.00KiB     16.00KiB 
0/269       533.83MiB    533.83MiB 
0/270        16.00KiB     16.00KiB 
0/271        48.00KiB     48.00KiB 
0/272        16.00KiB     16.00KiB 
0/289        16.00KiB     16.00KiB 
0/401         2.80GiB    396.91MiB 
0/402         2.55GiB      9.57MiB 
0/403         2.55GiB     12.91MiB 
0/404         2.54GiB    676.00KiB 
0/405         2.54GiB    660.00KiB 
0/406         2.81GiB     60.34MiB 
0/407         2.55GiB      8.66MiB 
0/408         2.55GiB      4.57MiB 
0/409         2.56GiB     24.31MiB 
0/410         2.55GiB      7.28MiB 
0/411         2.57GiB     20.55MiB 
255/289      16.00KiB     16.00KiB 
sftp-host:~ # !~
~/Versioned/btrfs-size/btrfs-size.sh 
===============================================================================================
Snapshot / Subvolume                                               ID   Total    Exclusive Data
===============================================================================================
257 gen 505741 top level 5 path .snapshots                         257  1.05MB   1.05MB   
258 gen 505796 top level 257 path .snapshots/1/snapshot            258  2.55GB   51.11MB  
259 gen 505736 top level 5 path boot/grub2/i386-pc                 259  2.36MB   2.36MB   
260 gen 452028 top level 5 path boot/grub2/x86_64-efi              260  16.00KB  16.00KB  
261 gen 452028 top level 5 path opt                                261  16.00KB  16.00KB  
262 gen 505720 top level 5 path srv                                262  16.00KB  16.00KB  
263 gen 505791 top level 5 path tmp                                263  36.00KB  36.00KB  
264 gen 505717 top level 5 path usr/local                          264  16.00KB  16.00KB  
265 gen 452028 top level 5 path var/crash                          265  16.00KB  16.00KB  
266 gen 452028 top level 5 path var/lib/mailman                    266  16.00KB  16.00KB  
267 gen 452028 top level 5 path var/lib/named                      267  16.00KB  16.00KB  
268 gen 452028 top level 5 path var/lib/pgsql                      268  16.00KB  16.00KB  
269 gen 505795 top level 5 path var/log                            269  533.83MB 533.83MB 
270 gen 452028 top level 5 path var/opt                            270  16.00KB  16.00KB  
271 gen 505796 top level 5 path var/spool                          271  48.00KB  48.00KB  
272 gen 505771 top level 5 path var/tmp                            272  16.00KB  16.00KB  
289 gen 452028 top level 5 path var/lib/machines                   289  16.00KB  16.00KB  
401 gen 451786 top level 257 path .snapshots/92/snapshot           401  2.81GB   396.91MB 
402 gen 465358 top level 257 path .snapshots/93/snapshot           402  2.55GB   9.57MB   
403 gen 465363 top level 257 path .snapshots/94/snapshot           403  2.55GB   12.91MB  
404 gen 471598 top level 257 path .snapshots/95/snapshot           404  2.54GB   676.00KB 
405 gen 471603 top level 257 path .snapshots/96/snapshot           405  2.54GB   660.00KB 
406 gen 471658 top level 257 path .snapshots/97/snapshot           406  2.81GB   60.34MB  
407 gen 487231 top level 257 path .snapshots/98/snapshot           407  2.55GB   8.66MB   
408 gen 490073 top level 257 path .snapshots/99/snapshot           408  2.55GB   4.57MB   
409 gen 490081 top level 257 path .snapshots/100/snapshot          409  2.56GB   24.31MB  
410 gen 505715 top level 257 path .snapshots/101/snapshot          410  2.55GB   7.28MB   
411 gen 505739 top level 257 path .snapshots/102/snapshot          411  2.57GB   20.55MB  
===============================================================================================
                                                                Exclusive Total: 1.11GB    
sftp-host:~ # 

–jeroen

Posted in *nix, *nix-tools, btrfs, File-Systems, Power User

when btrfs-size shows a snapshot as 16777216.00TB or btrfs qgroup as 16.00EiB

Posted by jpluimers on 2018/10/19

A long time ago I wrote about the btrfs-size tool: [WayBack] A bash script to btrfs snapshot details like disk sizes (requires btrfs quota to be enabled).

One day, it showed a ridiculously large size for /tmp:

# ./btrfs-size.sh 
===============================================================================================
Snapshot / Subvolume                                               ID   Total    Exclusive Data
===============================================================================================
257 gen 855182 top level 5 path .snapshots                         257  4.30MB   4.30MB   
258 gen 856438 top level 257 path .snapshots/1/snapshot            258  1.84GB   193.01MB 
...
262 gen 856438 top level 5 path srv                                262  1.83GB   1.83GB   
263 gen 856438 top level 5 path tmp                                263  16777216.00TB16777216.00TB
264 gen 856438 top level 5 path usr/local                          264  260.00KB 260.00KB 
...
990 gen 849192 top level 257 path .snapshots/583/snapshot          990  1.83GB   8.23MB   
991 gen 849224 top level 257 path .snapshots/584/snapshot          991  2.09GB   62.66MB  
===============================================================================================
                                                                Exclusive Total: 3.26GB    

This tracks back to the output of this command, which I’ve shortened a bit:

# btrfs qgroup show /
qgroupid         rfer         excl
--------         ----         ----
0/5          16.00KiB     16.00KiB
0/257         4.30MiB      4.30MiB
...
0/262         1.83GiB      1.83GiB
0/263        16.00EiB     16.00EiB
0/264       260.00KiB    260.00KiB
...
255/274         0.00B        0.00B
255/797      16.00KiB     16.00KiB

This is a known issue, as quotas in btrfs – though workable – aren’t fully stable yet: [WayBack] Linux BTRFS Storage: Re: During a btrfs balance nearly all quotas of the subvolumes became exceeded. (16.00EiB is 2^64 bytes, so the exclusive counter presumably underflowed past zero and wrapped around as an unsigned 64-bit number.)

It also provides a simple solution.
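
The thread’s actual fix is behind the link; the usual remedies for wrapped qgroup counters, which may or may not be what it suggests, are a rescan or a full quota rebuild (a hedged sketch, again assuming the / mount point):

btrfs quota rescan -w /   # recount all qgroup usage and wait for completion
# more drastic: drop all quota data and rebuild it from scratch
btrfs quota disable /
btrfs quota enable /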


Posted in *nix, *nix-tools, btrfs, File-Systems, Power User
