<pre>
# Create a zpool with a mirror vdev.
zpool create -f -o ashift=12 -O compression=zstd $zpool_name mirror \
ata-diskA \
ata-diskB
# List existing snapshots.
zfs list -t snapshot
</pre>
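To verify the new pool, a quick status check works (a minimal sketch; <code>$zpool_name</code> is the same variable used in the create command above):
<syntaxhighlight lang="bash">
# Show pool health and the mirror vdev layout.
zpool status $zpool_name
# Confirm the compression setting inherited by datasets in the pool.
zfs get compression $zpool_name
</syntaxhighlight>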
==Caching==
ZFS has two read caches:
* ARC - enabled by default and allowed to use up to half of your memory. This memory is released if the system approaches an out-of-memory condition.
* L2ARC - an optional second level of cache; you can add an L2ARC device for the ARC to overflow to.
For writes:
* SLOG - a separate log device, typically an SSD-backed mirror, that holds the ZFS intent log (ZIL).
In general, you will want to use an Intel Optane SSD for caching, as they are supposed to last longer and have lower latency.<br>
A 16GB Optane stick can be had for ~$12.
===ARC===
<code>arc_summary</code> or <code>arcstat</code> will show you the memory used by the ARC. Because ARC memory is not attributed to any process, it does not appear in <code>htop</code>.
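The same figures can also be read straight from the kernel statistics (a minimal sketch; field names may differ slightly between OpenZFS versions):
<syntaxhighlight lang="bash">
# Current ARC size and its configured limits, in bytes.
awk '/^size|^c_min|^c_max/ {print $1": "$3}' /proc/spl/kstat/zfs/arcstats
# Live hit/miss statistics: one sample per second, five samples.
arcstat 1 5
</syntaxhighlight>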
If you want to reduce ARC memory usage, you can set limits by creating <code>/etc/modprobe.d/zfs.conf</code>:
{{hidden | <code>/etc/modprobe.d/zfs.conf</code> |
<pre>
# Set Max ARC size => 4GB == 4294967296 Bytes
options zfs zfs_arc_max=4294967296
# Set Min ARC size => 1GB == 1073741824 Bytes
options zfs zfs_arc_min=1073741824
</pre>
}}
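The modprobe settings only take effect after the zfs module is reloaded or on the next boot. The same limits can also be applied at runtime through the module parameters under <code>/sys</code>; a minimal sketch using the same 4GB/1GB values as above:
<syntaxhighlight lang="bash">
# Apply the new ARC limits immediately, without reloading the zfs module.
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_min
</syntaxhighlight>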
===L2ARC===
L2ARC headers cost about 80 bytes of ARC memory per record. Historically this used to be 320 bytes, but now it's mostly negligible.<br>
At the default 128K record size, 1 GiB of L2ARC holds 8192 records, hence requiring approx. 640 KiB of memory.<br>
At a 4K record size, you will need approx. 20 MiB of RAM per GiB of L2ARC.
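As a worked example (the 500 GiB device size below is purely illustrative):
<syntaxhighlight lang="bash">
# ARC memory needed to index a 500 GiB L2ARC at the default 128 KiB record size:
# (500 GiB / 128 KiB) records * 80 bytes per header ~= 312 MiB
echo $(( 500 * 1024 * 1024 * 1024 / (128 * 1024) * 80 / 1024 / 1024 ))  # prints 312 (MiB)
</syntaxhighlight>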
To add an L2ARC:
<syntaxhighlight lang="bash">
sudo zpool add $pool cache $device
</syntaxhighlight>
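To confirm the cache device was added and to watch it fill up, it is listed under a separate cache section in the per-vdev statistics (a minimal sketch):
<syntaxhighlight lang="bash">
# Per-vdev capacity and I/O, including the cache device, refreshed every 5 seconds.
sudo zpool iostat -v $pool 5
</syntaxhighlight>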
===SLOG===
<syntaxhighlight lang="bash">
sudo zpool add $pool log $device
# or
# sudo zpool add $pool log mirror $device1 $device2
</syntaxhighlight>
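Note that a SLOG only benefits synchronous writes; asynchronous writes never go through the ZIL. To verify the log device is attached (a minimal sketch):
<syntaxhighlight lang="bash">
# The log device (or mirror) appears under a "logs" section in the pool layout.
sudo zpool status $pool
</syntaxhighlight>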
==Expanding==
You can only expand by adding vdevs or replacing all drives in a vdev with larger ones.<br>
See [https://docs.oracle.com/cd/E19253-01/819-5461/githb/index.html]<br>
After replacing all drives in a vdev, you need to run
<code>sudo zpool online -e $pool $disk</code> on any disk in that vdev to expand the pool into the new capacity.
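For reference, a typical replace-and-grow sequence might look like the following (a sketch; the disk names are placeholders, and enabling <code>autoexpand</code> beforehand avoids the manual <code>zpool online -e</code> step):
<syntaxhighlight lang="bash">
# Optionally let the pool grow automatically once every disk in a vdev is larger.
sudo zpool set autoexpand=on $pool
# Replace each old disk with its larger counterpart, waiting for the resilver to finish.
sudo zpool replace $pool ata-oldDiskA ata-newDiskA
sudo zpool status $pool   # wait until the resilver completes before the next replace
sudo zpool replace $pool ata-oldDiskB ata-newDiskB
# If autoexpand was off, expand manually once all disks in the vdev are replaced.
sudo zpool online -e $pool ata-newDiskA
</syntaxhighlight>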
==Pros and Cons==