Posted by jpluimers on 2019/05/27
When you get the response “web.archive.org unexpectedly closed the connection” without even an HTTP status code, but:
- it works in anonymous mode
- it works with all extensions turned off
then the likely cause is too many cookies for archive.org and/or web.archive.org: in my case, I had 90 of them.
Cleaning these cookies out resolved the problem (I used [WayBack] Awesome Cookie Manager for this).
Edit 20231230: Awesome Cookie Manager source repository at [Wayback/Archive] Phatsuo/awesome-cookie-manager: Awesome Cookie Manager.
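The per-host cookie tally an extension shows is easy to reproduce yourself. A minimal Python sketch, assuming a Chrome-style cookie store (the real “Cookies” SQLite file is usually locked while the browser runs, so this builds an in-memory stand-in with the same simplified shape):

```python
import sqlite3

# Chrome keeps cookies in an SQLite file ("Cookies") whose `cookies`
# table has a `host_key` column. The schema here is a simplified
# stand-in, populated to mirror the 90-cookie situation above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cookies (host_key TEXT, name TEXT)")
db.executemany(
    "INSERT INTO cookies VALUES (?, ?)",
    [(".archive.org", f"cookie{i}") for i in range(90)]
    + [("web.archive.org", "session")],
)

# Tally cookies per archive.org-related host, the same count a cookie
# manager extension would show before you clean them out.
rows = db.execute(
    "SELECT host_key, COUNT(*) FROM cookies "
    "WHERE host_key LIKE '%archive.org' "
    "GROUP BY host_key ORDER BY host_key"
).fetchall()
for host, count in rows:
    print(host, count)
```

Pointing the same query at a copy of the real cookie file would give the live counts.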
--
jeroen
Posted in Chrome, Google, Internet, InternetArchive, Power User, WayBack machine | Leave a Comment »
Posted by jpluimers on 2019/05/27
For my own memory:
[WayBack] Best Hard Drives for ZFS Server (Updated 2017) | b3n.org
My blog post Best Buy Guides (BBGs) – mux’ blog – Tweakblogs – Tweakers.
ZFS, dedupe and RAM:
ZFS, FreeBSD, ZoL (ZFS on Linux) and SSDs:
- Via [WayBack] Solved – ZFS with only one ssd | The FreeBSD Forums
- [WayBack] How I Learned to Stop Worrying and Love RAIDZ | Delphix (backed with plenty of tables and graphs)
- TL;DR: Choose a RAID-Z stripe width based on your IOPS needs and the amount of space you are willing to devote to parity information.
- Guidance on a choice between:
- best performance on random IOPS
- best reliability
- best space efficiency
- A misunderstanding of this overhead, has caused some people to recommend using “(2^n)+p” disks, where p is the number of parity “disks” (i.e. 2 for RAIDZ-2), and n is an integer. These people would claim that for example, a 9-wide (2^3+1) RAIDZ1 is better than 8-wide or 10-wide. This is not generally true. The primary flaw with this recommendation is that it assumes that you are using small blocks whose size is a power of 2. While some workloads (e.g. databases) do use 4KB or 8KB logical block sizes (i.e. recordsize=4K or 8K), these workloads benefit greatly from compression. At Delphix, we store Oracle, MS SQL Server, and PostgreSQL databases with LZ4 compression and typically see a 2-3x compression ratio. This compression is more beneficial than any RAID-Z sizing. Due to compression, the physical (allocated) block sizes are not powers of two, they are odd sizes like 3.5KB or 6KB. This means that we can not rely on any exact fit of (compressed) block size to the RAID-Z group width.
- If you are using RAID-Z with 512-byte sector devices with recordsize=4K or 8K and compression=off (but you probably want compression=lz4): use at least 5 disks with RAIDZ1; use at least 6 disks with RAIDZ2; and use at least 11 disks with RAIDZ3.
- To summarize: Use RAID-Z. Not too wide. Enable compression.
- [NoWayBack/Archive] FreeNAS All SSDs? – Hardware / Build a PC – Level1Techs Forums
- [WayBack] ZFS on all-sdd storage | iXsystems Community
I wouldn’t worry so much about the cost of the drives if you have to replace them in a few years. They’re constantly getting bigger and cheaper. If you really need to replace them in 3 years it’s not going to be the end of the world. Just think, a 256GB SSD can be purchased for about $100 today and 3 years ago the same drives were like $400+. To boot, they are faster than they were 3 years ago.
It’s quite possible that by the time you need to be worried about buying replacement drives for your pool you’ll be able to buy a single drive that can hold 1/2 your pool’s data for $100.
Don’t fret it. Buy the SSDs and be happy. Tell your boss you did the analysis and all is well. Just don’t buy those TLC drives. Those seem very scary for ZFS IMO.
…
There are some companies that have forked ZFS and set it up as you describe (separate vdevs for metadata using high-endurance SLC NAND) but there’s nothing like that in OpenZFS at the moment.
- [WayBack] ZFS with SSDs: Am I asking for a headache in the near future? | Proxmox Support Forum
- [WayBack] SSD Over-provisioning using hdparm – Thomas-Krenn-Wiki
- [WayBack] Optimize SSD Performance – Thomas-Krenn-Wiki
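The “not too wide” guidance above can be sanity-checked with a quick calculation. A minimal sketch (the function name is mine) of the ideal usable-space fraction of a RAID-Z vdev, which shows that 9-wide RAIDZ1 sits smoothly between 8- and 10-wide rather than being a special “(2^n)+p” sweet spot:

```python
# Ideal usable-space fraction of a RAID-Z vdev: `width` disks total,
# `parity` of them spent on parity. This deliberately ignores the
# per-block allocation rounding the Delphix article analyses, which
# is exactly why exact "(2^n)+p" widths buy little in practice once
# compression produces odd-sized blocks.
def usable_fraction(width: int, parity: int) -> float:
    return (width - parity) / width

for width in (8, 9, 10):
    print(f"RAIDZ1, {width} wide: {usable_fraction(width, 1):.1%} usable")
# -> 87.5%, 88.9%, 90.0% usable: a smooth progression, no sweet spot.
```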
OpenSuSE related
Samba/CIFS related
–jeroen
Posted in ESXi6.5, Power User, Virtualization, VMware, VMware ESXi | Leave a Comment »
Posted by jpluimers on 2019/05/27
A while ago, I discovered a damaged btrfs partition after the virtualisation host decided to reboot for no apparent reason.
I’m not sure what caused that (by now the machine has been retired, as it was already getting a bit old), but btrfs was panicking shortly after boot, so the VM as-is was unusable.
In the end I had to:
- Boot from a Tumbleweed Rescue DVD (download Rescue CD – x86_64 from [WayBack] openSUSE:Tumbleweed installation – openSUSE)
- Add a fresh backup hard disk in read-write mode
- Mount the old one in read-only mode
- rsync -avloz over as much as I could
- Restore the VM from a backup
- Attach the backup hard disk
- Diff what I missed (only a few bits in the /etc tree and my home directory, for which I hadn’t yet pushed the git repositories).
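The final diff step can be sketched in Python with filecmp; here temporary directories stand in for the restored /etc and the copy rsync’ed onto the backup disk:

```python
import filecmp
import pathlib
import tempfile

# Sketch of the "diff what I missed" step, using temporary directories
# as stand-ins for the restored tree and the rescued backup copy.
with tempfile.TemporaryDirectory() as restored, tempfile.TemporaryDirectory() as backup:
    (pathlib.Path(restored) / "hosts").write_text("127.0.0.1 localhost\n")
    (pathlib.Path(backup) / "hosts").write_text("127.0.0.1 localhost\n")
    (pathlib.Path(backup) / "fstab").write_text("# only rescued to the backup\n")

    result = filecmp.dircmp(restored, backup)
    print("only on backup:", result.right_only)   # files to copy back
    print("content differs:", result.diff_files)  # files to reconcile
```

Run against the real trees, right_only and diff_files list exactly the bits that still need to be merged back by hand.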
The suggestions in [WayBack] SDB:BTRFS – openSUSE – How to repair a broken/unmountable btrfs filesystem didn’t work for me, but might work for others.
–jeroen
Posted in *nix, btrfs, File-Systems, Linux, openSuSE, Power User, SuSE Linux, Tumbleweed | Leave a Comment »