ZFS
<pre>
# Create a zpool with a mirror vdev.
# Note: ashift is a pool property (-o); compression is a filesystem property (-O).
zpool create -f -o ashift=12 -O compression=zstd $zpool_name mirror \
  ata-diskA \
  ata-diskB
</pre>
At the default 128K record size, 1 GiB of L2ARC holds 8192 records, requiring approx. 656 KiB of memory for headers.<br>
At a 4K record size, you will need approx. 20 MiB of RAM per GiB of L2ARC.
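The per-record arithmetic above can be checked directly in the shell. This sketch assumes roughly 80 bytes of ARC header per cached record, which is the figure implied by the numbers above; the exact header size varies between OpenZFS versions.

<syntaxhighlight lang="bash">
# RAM needed for L2ARC headers per GiB of cache, for a given recordsize.
hdr_bytes=80                      # approx. header size per record (assumption)
gib=$((1024 * 1024 * 1024))

for recordsize in $((128 * 1024)) $((4 * 1024)); do
    records=$((gib / recordsize))
    ram_kib=$((records * hdr_bytes / 1024))
    echo "recordsize=${recordsize}: ${records} records, ~${ram_kib} KiB RAM per GiB of L2ARC"
done
</syntaxhighlight>

This reproduces the figures above: 8192 records (~640 KiB) at 128K, and 262144 records (~20 MiB) at 4K.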
To add an L2ARC cache device:
<syntaxhighlight lang="bash">
sudo zpool add $pool cache $device
</syntaxhighlight>
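Because L2ARC contents are disposable (everything cached there also exists on the pool), a cache device can be removed again at any time without risk to data. A quick sketch, with <code>$pool</code> and <code>$device</code> as above:

<syntaxhighlight lang="bash">
# Remove the cache vdev; pool data is unaffected.
sudo zpool remove $pool $device
# Confirm it no longer appears under the "cache" section of the layout.
zpool status $pool
</syntaxhighlight>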
===SLOG===
<syntaxhighlight lang="bash">
sudo zpool add $pool log $device
# or
# sudo zpool add $pool log mirror $device1 $device2
</syntaxhighlight>
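A SLOG only accelerates synchronous writes; asynchronous writes never touch it. A quick sanity check after adding one (a sketch, names as above):

<syntaxhighlight lang="bash">
# The log device shows up under "logs" in the pool layout.
zpool status $pool
# Whether writes are synchronous is governed by the per-dataset sync property
# (standard / always / disabled).
zfs get sync $pool
</syntaxhighlight>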
==Expanding==
You can only expand a pool by adding vdevs or by replacing every drive in a vdev with larger ones.<br>
See [https://docs.oracle.com/cd/E19253-01/819-5461/githb/index.html]<br>
After replacing all drives in a vdev, run
<code>sudo zpool online -e $pool $disk</code> against any one of the new disks to expand into the added capacity.
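The grow-by-replacement workflow can be sketched as follows. The device names <code>ata-oldA</code>/<code>ata-newA</code> etc. are hypothetical placeholders; wait for each resilver to finish before replacing the next disk.

<syntaxhighlight lang="bash">
# Replace each disk in the vdev with a larger one, one at a time.
sudo zpool replace $pool ata-oldA ata-newA
zpool status $pool            # wait until the resilver completes
sudo zpool replace $pool ata-oldB ata-newB
zpool status $pool            # wait again

# Once every disk in the vdev is larger, expand into the new space:
sudo zpool online -e $pool ata-newA

# Alternatively, with autoexpand=on set on the pool, the expansion
# happens automatically after the last replacement.
sudo zpool set autoexpand=on $pool
</syntaxhighlight>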
==Pros and Cons==