
This is done by creating a redundant array of disks, and exporting a block device to represent that array. Then we initialize that exported block device as an LVM physical volume. If we have multiple RAID arrays, we do the same for each of them. We then add all of these block devices to a "volume group", which represents my pooled storage. If I had five exported RAID arrays of 1 TB each, then I would have 5 TB of pooled storage in this volume group. Now I need to decide how to divide up the volume group, to create logical volumes of a specific size. If this were for an Ubuntu or Debian installation, maybe I would give 100 GB to one logical volume for the root filesystem. That 100 GB is now marked as occupied by the volume group. I then give 500 GB to my home directory, and so forth. Each operation exports a block device representing my logical volume. It's these block devices that I format with ext4 or a filesystem of my choosing.

[[image:lvm.png||alt="lvm" height="386" width="526"]]
//Linux RAID, LVM, and filesystem stack. Each filesystem is limited in size.//
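
For comparison, here is a rough sketch of what building that traditional stack might look like. The device names, RAID level, and sizes are purely illustrative (two hypothetical RAID-5 arrays built from disks /dev/sdb through /dev/sdg):

{{code language="bash session"}}
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
# pvcreate /dev/md0 /dev/md1
# vgcreate pool /dev/md0 /dev/md1
# lvcreate -L 100G -n root pool
# lvcreate -L 500G -n home pool
# mkfs.ext4 /dev/pool/root
# mkfs.ext4 /dev/pool/home
{{/code}}

Every layer in this stack has to be sized, tracked, and grown by hand.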

ZFS handles filesystems a bit differently. First, there is no need to create this stacked approach to storage. We've already covered how to pool the storage; now we will cover how to use it. This is done by creating a dataset in the pool. By default, this dataset will have full access to the entire storage pool. If our storage pool is 5 TB in size, as previously mentioned, then our first dataset will have access to all 5 TB in the pool. If I create a second dataset, it too will have full access to all 5 TB in the pool. And so on and so forth.

[[image:zfs.png||alt="zfs" height="259" width="521"]]
//Each ZFS dataset can use the full underlying storage.//

In these examples, we will assume our ZFS shared storage is named "tank". Further, we will assume that the pool is created with 4 preallocated files of 1 GB in size each, in a RAIDZ-1 array. Let's create some datasets.
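
If you want to follow along first, a pool like that can be built on top of plain files. A quick sketch, assuming the backing files live under /tmp:

{{code language="bash session"}}
# for i in {1..4}; do fallocate -l 1G /tmp/file$i; done
# zpool create tank raidz1 /tmp/file1 /tmp/file2 /tmp/file3 /tmp/file4
{{/code}}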

{{code language="bash session"}}
# zfs create tank/test
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 175K 2.92G 43.4K /tank
tank/test 41.9K 2.92G 41.9K /tank/test
{{/code}}

Notice that the dataset "tank/test" is mounted to "/tank/test" by default, and that it has full access to the entire pool. Also notice that it is occupying only 41.9 KB of the pool. Let's create 4 more datasets, then look at the output:

{{code language="bash session"}}
# zfs create tank/test2
# zfs create tank/test3
# zfs create tank/test4
# zfs create tank/test5
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank/test 41.9K 2.92G 41.9K /tank/test
tank/test2 41.9K 2.92G 41.9K /tank/test2
tank/test3 41.9K 2.92G 41.9K /tank/test3
tank/test4 41.9K 2.92G 41.9K /tank/test4
tank/test5 41.9K 2.92G 41.9K /tank/test5
{{/code}}

Each dataset is automatically mounted to its respective mount point, and each dataset has full, unfettered access to the storage pool. Let's fill one of the datasets with some data, and see how that affects the underlying storage:

{{code language="bash session"}}
# cd /tank/test3
# for i in {1..10}; do dd if=/dev/urandom of=file$i.img bs=1024 count=$RANDOM &> /dev/null; done
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank/test 41.9K 2.77G 41.9K /tank/test
tank/test2 41.9K 2.77G 41.9K /tank/test2
tank/test3 158M 2.77G 158M /tank/test3
tank/test4 41.9K 2.77G 41.9K /tank/test4
tank/test5 41.9K 2.77G 41.9K /tank/test5
{{/code}}

Notice that in my case, "tank/test3" is occupying 158 MB of disk, so as reported by the rest of the datasets, there is now only 2.77 GB available in the pool, where previously there was 2.92 GB. So, as you can see, the big advantage here is that I do not need to worry about preallocated block devices, as I would with LVM. Instead, ZFS manages the entire stack, so it understands how much data has been occupied, and how much is available.

Because ZFS mounts its datasets automatically, there is nothing to add to the /etc/fstab file. So how do the filesystems get mounted? This is done by importing the pool, if necessary, then running the "zfs mount" command. Similarly, we have a "zfs unmount" command to unmount datasets, or we can use the standard "umount" utility:

{{code language="bash session"}}
# umount /tank/test5
# mount | grep tank
tank/test on /tank/test type zfs (rw,relatime,xattr)
tank/test2 on /tank/test2 type zfs (rw,relatime,xattr)
tank/test3 on /tank/test3 type zfs (rw,relatime,xattr)
tank/test4 on /tank/test4 type zfs (rw,relatime,xattr)
# zfs mount tank/test5
# mount | grep tank
tank/test on /tank/test type zfs (rw,relatime,xattr)
tank/test2 on /tank/test2 type zfs (rw,relatime,xattr)
tank/test3 on /tank/test3 type zfs (rw,relatime,xattr)
tank/test4 on /tank/test4 type zfs (rw,relatime,xattr)
tank/test5 on /tank/test5 type zfs (rw,relatime,xattr)
{{/code}}
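
The pool can also be handled in one shot, rather than one dataset at a time, which is essentially what happens automatically at boot: import the pool, then mount every dataset it contains. A brief sketch, reusing the "tank" pool from above:

{{code language="bash session"}}
# zpool import tank
# zfs mount -a
{{/code}}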

By default, the mount point for the dataset is "/<pool-name>/<dataset-name>". This can be changed by changing a dataset property. Just as storage pools have properties that can be tuned, so do datasets; we'll dedicate a full post to dataset properties later. For now, we only need to change the "mountpoint" property, as follows:

{{code language="bash session"}}
# zfs set mountpoint=/mnt/test tank/test
# mount | grep tank
tank on /tank type zfs (rw,relatime,xattr)
tank/test2 on /tank/test2 type zfs (rw,relatime,xattr)
tank/test3 on /tank/test3 type zfs (rw,relatime,xattr)
tank/test4 on /tank/test4 type zfs (rw,relatime,xattr)
tank/test5 on /tank/test5 type zfs (rw,relatime,xattr)
tank/test on /mnt/test type zfs (rw,relatime,xattr)
{{/code}}
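
The current value of any dataset property can be checked with "zfs get". A quick sketch, verifying the mount point we just set:

{{code language="bash session"}}
# zfs get mountpoint tank/test
NAME       PROPERTY    VALUE      SOURCE
tank/test  mountpoint  /mnt/test  local
{{/code}}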

== Nested Datasets ==

To create a nested dataset, create it like you would any other, by providing the parent storage pool //and// dataset. In this case we will create a nested log dataset in the test dataset:

{{code language="bash session"}}
# zfs create tank/test/log
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 159M 2.77G 47.9K /tank
tank/test 41.9K 2.77G 41.9K /mnt/test
tank/test/log 41.9K 2.77G 41.9K /mnt/test/log
tank/test2 41.9K 2.77G 41.9K /tank/test2
tank/test3 158M 2.77G 158M /tank/test3
tank/test4 41.9K 2.77G 41.9K /tank/test4
tank/test5 41.9K 2.77G 41.9K /tank/test5
{{/code}}

== Additional Dataset Administration ==

Just as you can create datasets, you can also destroy them when you no longer need them. Destroying a dataset frees up its blocks for use by other datasets, and cannot be reverted without a previous snapshot, which we'll cover later. To destroy a dataset:

{{code language="bash session"}}
# zfs destroy tank/test5
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 159M 2.77G 49.4K /tank
tank/test 41.9K 2.77G 41.9K /mnt/test
tank/test/log 41.9K 2.77G 41.9K /mnt/test/log
tank/test2 41.9K 2.77G 41.9K /tank/test2
tank/test3 158M 2.77G 158M /tank/test3
tank/test4 41.9K 2.77G 41.9K /tank/test4
{{/code}}
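
One caveat: a dataset with children, such as "tank/test" with its nested "tank/test/log", cannot be destroyed on its own; "zfs destroy" will refuse until you pass the "-r" flag to recurse into the children. A sketch of what that would look like (not run here, since we still need these datasets):

{{code language="bash session"}}
# zfs destroy -r tank/test
{{/code}}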

We can also rename a dataset if needed. This is handy when the purpose of the dataset changes, and you want the name to reflect that purpose. The command takes the source dataset as its first argument and the new name as its second. To rename the tank/test3 dataset to music:

{{code language="bash session"}}
# zfs rename tank/test3 tank/music
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 159M 2.77G 49.4K /tank
tank/music 158M 2.77G 158M /tank/music
tank/test 41.9K 2.77G 41.9K /mnt/test
tank/test/log 41.9K 2.77G 41.9K /mnt/test/log
tank/test2 41.9K 2.77G 41.9K /tank/test2
tank/test4 41.9K 2.77G 41.9K /tank/test4
{{/code}}
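
Renaming is not limited to the last component of the name; a dataset can also be moved under a different parent within the same pool. A sketch (again, not run here) that would relocate the nested log dataset under the new music dataset:

{{code language="bash session"}}
# zfs rename tank/test/log tank/music/log
{{/code}}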

== Conclusion ==