zfs performance tuning
Added: 2012-02-20 9:59:43
I'm only just beginning to figure this out.
There was a spare second disk in the system and a 200 Mbit uplink to the outside world; the disk is 500 GB and the machine has 8 GB of RAM on board.
I created a pool and put a few entries in loader.conf.
Could someone suggest what else to tweak so the file cache is used as fully as possible? transmission is what reads from and writes to this disk, and I'd like to minimize the load during ordinary seeding. Here is what it looks like now:
Code:
pppoe# zfs-stats -a
------------------------------------------------------------------------
ZFS Subsystem Report Mon Feb 20 08:54:31 2012
------------------------------------------------------------------------
System Information:
Kernel Version: 900044 (osreldate)
Hardware Platform: amd64
Processor Architecture: amd64
ZFS Storage pool Version: 28
ZFS Filesystem Version: 5
FreeBSD 9.0-RELEASE #0: Tue Jan 3 07:46:30 UTC 2012 root
8:54AM up 3 days, 15 hrs, 1 user, load averages: 0.21, 0.21, 0.16
------------------------------------------------------------------------
System Memory:
14.35% 1.10 GiB Active, 61.62% 4.70 GiB Inact
20.07% 1.53 GiB Wired, 3.18% 248.55 MiB Cache
0.77% 60.40 MiB Free, 0.01% 868.00 KiB Gap
Real Installed: 8.00 GiB
Real Available: 98.71% 7.90 GiB
Real Managed: 96.65% 7.63 GiB
Logical Total: 8.00 GiB
Logical Used: 37.44% 3.00 GiB
Logical Free: 62.56% 5.00 GiB
Kernel Memory: 434.37 MiB
Data: 95.59% 415.20 MiB
Text: 4.41% 19.17 MiB
Kernel Memory Map: 1.48 GiB
Size: 26.37% 400.30 MiB
Free: 73.63% 1.09 GiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 10.34m
Recycle Misses: 319.80k
Mutex Misses: 25
Evict Skips: 25
ARC Size: 100.03% 384.13 MiB
Target Size: (Adaptive) 100.00% 384.00 MiB
Min Size (Hard Limit): 12.50% 48.00 MiB
Max Size (High Water): 8:1 384.00 MiB
ARC Size Breakdown:
Recently Used Cache Size: 92.82% 356.53 MiB
Frequently Used Cache Size: 7.18% 27.60 MiB
ARC Hash Breakdown:
Elements Max: 32.39k
Elements Current: 44.71% 14.48k
Collisions: 1.83m
Chain Max: 6
Chains: 889
------------------------------------------------------------------------
ARC Efficiency: 55.76m
Cache Hit Ratio: 82.17% 45.81m
Cache Miss Ratio: 17.83% 9.94m
Actual Hit Ratio: 82.13% 45.79m
Data Demand Efficiency: 83.52% 53.13m
Data Prefetch Efficiency: 5.45% 331.98k
CACHE HITS BY CACHE LIST:
Most Recently Used: 78.73% 36.07m
Most Frequently Used: 21.23% 9.72m
Most Recently Used Ghost: 0.96% 438.38k
Most Frequently Used Ghost: 1.29% 590.80k
CACHE HITS BY DATA TYPE:
Demand Data: 96.87% 44.38m
Prefetch Data: 0.04% 18.09k
Demand Metadata: 3.09% 1.41m
Prefetch Metadata: 0.00% 11
CACHE MISSES BY DATA TYPE:
Demand Data: 88.04% 8.75m
Prefetch Data: 3.16% 313.89k
Demand Metadata: 8.81% 875.68k
Prefetch Metadata: 0.00% 11
------------------------------------------------------------------------
L2ARC is disabled
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency: 1.71m
Hit Ratio: 89.51% 1.53m
Miss Ratio: 10.49% 179.78k
Colinear: 179.78k
Hit Ratio: 0.05% 85
Miss Ratio: 99.95% 179.70k
Stride: 1.46m
Hit Ratio: 99.96% 1.46m
Miss Ratio: 0.04% 656
DMU Misc:
Reclaim: 179.70k
Successes: 15.54% 27.93k
Failures: 84.46% 151.77k
Streams: 76.78k
+Resets: 0.78% 598
-Resets: 99.22% 76.19k
Bogus: 0
------------------------------------------------------------------------
VDEV cache is disabled
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 384
vm.kmem_size 1610612736
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 1610612736
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 275382272
vfs.zfs.mfu_ghost_metadata_lsize 96627712
vfs.zfs.mfu_ghost_size 372009984
vfs.zfs.mfu_data_lsize 5111808
vfs.zfs.mfu_metadata_lsize 1097728
vfs.zfs.mfu_size 16093184
vfs.zfs.mru_ghost_data_lsize 1310720
vfs.zfs.mru_ghost_metadata_lsize 28385280
vfs.zfs.mru_ghost_size 29696000
vfs.zfs.mru_data_lsize 368207360
vfs.zfs.mru_metadata_lsize 37376
vfs.zfs.mru_size 373821440
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 147456
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 100663296
vfs.zfs.arc_meta_used 29327248
vfs.zfs.arc_min 50331648
vfs.zfs.arc_max 402653184
vfs.zfs.dedup.prefetch 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.write_limit_override 0
vfs.zfs.write_limit_inflated 25438347264
vfs.zfs.write_limit_max 1059931136
vfs.zfs.write_limit_min 33554432
vfs.zfs.write_limit_shift 3
vfs.zfs.no_write_throttle 0
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 1
vfs.zfs.mg_alloc_failures 8
vfs.zfs.check_hostid 1
vfs.zfs.recover 0
vfs.zfs.txg.synctime_ms 1000
vfs.zfs.txg.timeout 5
vfs.zfs.scrub_limit 10
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 0
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.ramp_rate 2
vfs.zfs.vdev.time_shift 6
vfs.zfs.vdev.min_pending 4
vfs.zfs.vdev.max_pending 10
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.zio.use_uma 0
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 28
vfs.zfs.version.acl 1
vfs.zfs.debug 0
vfs.zfs.super_owner 0
------------------------------------------------------------------------
pppoe#
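For reference, the tunables section above shows vm.kmem_size and vm.kmem_size_max at 1610612736 (1.5 GiB), vfs.zfs.arc_max at 402653184 (384 MiB), vfs.zfs.arc_min at 50331648 (48 MiB), and prefetch disabled. A guess at the /boot/loader.conf entries that would produce those values — an untested sketch reconstructed from the sysctl output, not the actual file from this box:

```shell
# /boot/loader.conf — sketch reconstructed from the zfs-stats dump above
vm.kmem_size="1536M"            # matches vm.kmem_size = 1610612736
vm.kmem_size_max="1536M"        # matches vm.kmem_size_max = 1610612736
vfs.zfs.arc_max="384M"          # matches vfs.zfs.arc_max = 402653184
vfs.zfs.arc_min="48M"           # matches vfs.zfs.arc_min = 50331648
vfs.zfs.prefetch_disable="1"    # matches vfs.zfs.prefetch_disable = 1
```

Note that with 8 GB of RAM installed, these limits cap the ARC at 384 MiB, which matches the "ARC Size: 100.03% 384.13 MiB" line reported above.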