Changes for page ZFS Administration - Part III - The ZFS Intent Log
Last modified by Drunk Monkey on 2024-09-01 09:30
From version 1.2, edited by Drunk Monkey on 2024-09-01 09:27
Change comment: There is no comment for this version
To version 2.1, edited by Drunk Monkey on 2024-09-01 09:29
Change comment: There is no comment for this version
Summary

Details
- Page properties
- Content
@@ -30,15 +30,11 @@
This first image shows my disks from the hypervisor perspective. Notice that the throughput for each device was around 800 KBps. After adding the SSD SLOG, the throughput dropped to 400 KBps. This means that the underlying disks in the pool are doing less work, and as a result, will last longer.

[[image:hypervisor-disk-throughput.png]]
-
-[[image:https://web.archive.org/web/20210430212906im_/http://pthree.org/wp-content/uploads/2012/12/hypervisor-disk-throughput.png||alt="Image showing disk throughput on the hypervisor."]]
//Image showing all 4 disks' throughput in the zpool on the hypervisor.//

This next image shows my disks from the virtual machine perspective. Notice how disk latency, utilization, and system load all drop, as explained above.

[[image:virtual-machine-load.png]]
-
-[[image:https://web.archive.org/web/20210430212906im_/http://pthree.org/wp-content/uploads/2012/12/virtual-machine-load.png||alt="Image showing disk latency, utilization and system load on the virtual machine."]]
//Image showing what sort of load the virtual machine was under.//

I blogged about this just a few days ago at [[http:~~/~~/pthree.org/2012/12/03/how-a-zil-improves-disk-latencies/>>url:https://web.archive.org/web/20210430212906/http://pthree.org/2012/12/03/how-a-zil-improves-disk-latencies/]].

@@ -49,7 +49,8 @@

Adding a SLOG to your existing zpool is not difficult. However, it is considered best practice to mirror the SLOG, so I'll follow best practice in this example. Suppose I have 4 platter disks in my pool, and an OCZ Revodrive SSD that presents two 60 GB drives to the system. I'll create a 5 GB partition on each SSD drive, then mirror the partitions as my SLOG. This is how you would add the SLOG to the pool. Here, I am using GNU parted to create the partitions first, then adding the SSDs. FYI: the devices in /dev/disk/by-id/ point to /dev/sda and /dev/sdb.

-{{{# parted /dev/sda mklabel gpt mkpart primary zfs 0 5G
+{{code language="bash session"}}
+# parted /dev/sda mklabel gpt mkpart primary zfs 0 5G
# parted /dev/sdb mklabel gpt mkpart primary zfs 0 5G
# zpool add tank log mirror \
/dev/disk/by-id/ata-OCZ-REVODRIVE_OCZ-69ZO5475MT43KNTU-part1 \

@@ -70,7 +70,7 @@
logs
mirror-1 ONLINE 0 0 0
ata-OCZ-REVODRIVE_OCZ-69ZO5475MT43KNTU-part1 ONLINE 0 0 0
-ata-OCZ-REVODRIVE_OCZ-9724MG8BII8G3255-part1 ONLINE 0 0 0}}}
+ata-OCZ-REVODRIVE_OCZ-9724MG8BII8G3255-part1 ONLINE 0 0 0{{/code}}

== SLOG Life Expectancy ==

@@ -88,7 +88,8 @@

Just a short note to say that you will likely not need a large ZIL. I've partitioned my ZIL with only 4 GB of usable space, and it's barely occupying a MB or two of space. I've put all my virtual machines on the same hypervisor and run operating system upgrades while they were also doing a great amount of work, and only saw the ZIL get up to about 100 MB of cached data. I can't imagine what sort of workload you would need to get your ZIL north of 1 GB of used space, let alone the 4 GB I partitioned off. Here's a command you can run to check the size of your ZIL:

-{{{# zpool iostat -v tank
+{{code language="bash session"}}
+# zpool iostat -v tank
capacity operations bandwidth
tank alloc free read write read write
------------------------------------------------ ----- ----- ----- ----- ----- -----

@@ -102,16 +102,21 @@
mirror 1.46M 3.72G 20 0 285K 0
ata-OCZ-REVODRIVE_OCZ-69ZO5475MT43KNTU-part1 - - 20 0 285K 0
ata-OCZ-REVODRIVE_OCZ-9724MG8BII8G3255-part1 - - 20 0 285K 0
------------------------------------------------- ----- ----- ----- ----- ----- -----}}}
+------------------------------------------------ ----- ----- ----- ----- ----- -----{{/code}}

== Conclusion ==

A fast SLOG can provide amazing benefits for applications that need lower latencies on synchronous transactions. This works well for database servers or other applications that are more time sensitive. However, there is increased cost for adding a SLOG to your pool. Battery-backed DRAM chips are very, very expensive, usually on the order of $2,500 per 8 GB of DDR3 DIMMs, whereas a 40 GB MLC SSD can cost you only $100, and a 600 GB 15k SAS drive is $200. Again, though, capacity really isn't an issue here, while performance is. I would go for faster IOPS on the SSD and a smaller capacity, unless you want to partition it and share the L2ARC on the same drive, which is a great idea, and something I'll cover in the next post.

+----
+(% style="text-align: center;" %)
Posted by Aaron Toponce on Thursday, December 6, 2012, at 6:00 am.
Filed under [[Debian>>url:https://web.archive.org/web/20210430212906/https://pthree.org/category/debian/]], [[Linux>>url:https://web.archive.org/web/20210430212906/https://pthree.org/category/linux/]], [[Ubuntu>>url:https://web.archive.org/web/20210430212906/https://pthree.org/category/ubuntu/]], [[ZFS>>url:https://web.archive.org/web/20210430212906/https://pthree.org/category/zfs/]].
Follow any responses to this post with its [[comments RSS>>url:https://web.archive.org/web/20210430212906/https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/feed/]] feed.
You can [[post a comment>>url:https://web.archive.org/web/20210430212906/https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/#respond]] or [[trackback>>url:https://web.archive.org/web/20210430212906/https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/trackback/]] from your blog.
For IM, Email or Microblogs, here is the [[Shortlink>>url:https://web.archive.org/web/20210430212906/https://pthree.org/?p=2592]].
+----

-=== ===
+{{box title="**Archived From:**"}}
+[[https:~~/~~/web.archive.org/web/20210430213532/https:~~/~~/pthree.org/2012/12/04/zfs-administration-part-i-vdevs/>>https://web.archive.org/web/20210430213532/https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/]]
+{{/box}}
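
For anyone who wants to replay the procedure documented in the hunks above end to end, here is a minimal sketch of adding a mirrored SLOG and verifying it. It assumes the article's example pool name (tank) and OCZ Revodrive device paths; substitute your own pool name and /dev/disk/by-id/ paths before running anything.

{{code language="bash"}}
#!/bin/bash
# Sketch only: assumes the article's example devices (/dev/sda, /dev/sdb,
# an OCZ Revodrive SSD pair) and a pool named "tank". Run as root.

# Carve a 5 GB partition out of each SSD. NOTE: mklabel destroys any
# existing partition table on the device.
parted /dev/sda mklabel gpt mkpart primary zfs 0 5G
parted /dev/sdb mklabel gpt mkpart primary zfs 0 5G

# Add the two partitions to the pool as a mirrored log (SLOG) vdev.
# The /dev/disk/by-id/ names are stable across reboots, unlike /dev/sdX.
zpool add tank log mirror \
    /dev/disk/by-id/ata-OCZ-REVODRIVE_OCZ-69ZO5475MT43KNTU-part1 \
    /dev/disk/by-id/ata-OCZ-REVODRIVE_OCZ-9724MG8BII8G3255-part1

# The pool status should now show the mirror under a "logs" section.
zpool status tank
{{/code}}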
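Likewise, a quick sketch for checking how much of the SLOG the ZIL actually occupies, per the ZIL-size note in the hunks above (again assuming a pool named tank): the alloc column on the log mirror's row is the figure to watch.

{{code language="bash"}}
#!/bin/bash
# Per-vdev capacity and I/O statistics for the pool; the "alloc" value
# on the log mirror row shows how much ZIL data currently sits on the SLOG.
zpool iostat -v tank
{{/code}}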