Storage Performance Tuning Guide – Best Practice

This article provides technical guidance for setting up iSCSI and Fibre Channel (FC) connections on QSAN XCubeFAS, XCubeSAN, XCubeNXT, and XCubeNAS in Windows, Unix-like, and VMware environments to achieve better performance.

Download: Performance Tuning Guide

Click here to download the Storage Performance Tuning Guide – Best Practice.

Key Sections:

  • Prerequisites: Storage topology example (XF2026 / XS5226 / XS3226 / XS1226), hosts, management, iSCSI, FC.
  • Guidelines for Hosts: Keep HBA/NIC/switch firmware and drivers updated; ensure the host's local drive is not a bottleneck in file-copy tests.
  • Guidelines for Configuring Storage Pools:
    • Prefer Thick provisioning; enable Disk Write Cache, Read-ahead, Command Queuing.
    • Enable Write-back cache on volumes; optional Video Editing Mode for steadier throughput.
    • Avoid snapshots during performance tests; consider multiple pools split across controllers when backend exceeds front-end ports.
    • Keep the default LUN masking (“*”) if masking is not required.
    • VMware: do not use 4K block size.
  • Guidelines for Configuring iSCSI Connections:
    • In Storage: Configure iSCSI data ports (XEVO/SANOS/QSM); verify link speed; ensure enough host NICs to match controller bandwidth.
    • Optional: Jumbo Frame (use a consistent MTU across host, switch, and storage; see the MTU sketch after this list); VLAN as needed; iSCSI Entity Name.
    • Trunking/LACP only for large multi-client topologies; otherwise MPIO is sufficient.
    • Ethernet Switch: Jumbo Frame, Flow Control (ON/OFF depends on environment), LACP if used; port mirroring + Wireshark for troubleshooting.
    • Windows: Use all NICs; specify the source IP per session; NIC tuning: set RSS Queues to ~2, maximize Receive/Transmit Buffers, disable Interrupt Moderation; run netsh int tcp set global autotuninglevel=restricted (or highlyrestricted); see the Windows tuning sketch after this list.
    • Unix-like: Use a different subnet per NIC; increase RAID read-ahead (e.g., 4096/8192); raise the TCP receive buffer (e.g., 524,284+ bytes); consider disabling Hyper-Threading; see the Linux tuning sketch after this list.
    • VMware: Use a different subnet per NIC; avoid 4K block size; set the MPIO policy to Round Robin with IOPS = 1 (see the esxcli sketch after this list); follow the Delayed ACK and ATS heartbeat guidance in the relevant VMware KB articles.
  • Guidelines for Configuring FC Connections:
    • In Storage: Configure FC data ports; default topology is P2P; 16Gb FC supports P2P only (Loop requires 8Gb/4Gb and a storage restart); link speed Auto.
    • FC Switch: Auto topology/speed; configure zoning if needed.
    • Windows: Update the FC HBA driver; for Marvell (QLogic) HBAs, optionally set the registry value DriverParameter to qd=255 (see the registry sketch after this list).
    • Unix-like: Update the FC HBA driver; set rr_min_io=1 in multipath.conf (see the multipath sketch after this list); increase read-ahead; consider disabling Hyper-Threading.
    • VMware: Same constraints as for iSCSI (no 4K block size; Round Robin with IOPS = 1).
  • Test Results and Use Cases:
    • Random Read 4K: ~500K IOPS at under 1 ms latency (maximum ~784K IOPS).
    • Random Write 4K: ~280K IOPS at under 1 ms latency (maximum ~384K IOPS).
    • Use cases: high-IOPS/low-latency workloads (VDI, databases); making use of the onboard 10GbE ports; extreme throughput (>12,000 MB/s achievable with combined ports).
  • Applies To: XEVO firmware 2.0.3 or later; SANOS firmware 2.0.1a or later; QSM firmware 3.3.0 or later.
  • References: XEVO/SANOS/QSM Software Manuals; related white papers; video tutorial playlists.
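
Example Configuration Sketches:

The sketches below illustrate several of the host-side settings summarized above. They are illustrative examples, not excerpts from the guide; interface names, device identifiers, and IP addresses are placeholders to adapt to your environment.

Jumbo Frame verification (Linux host side): a minimal sketch assuming the iSCSI data NIC is eth1 and the storage data port IP is 192.168.10.100 (both placeholders); the MTU must also be raised on the switch and the storage data port.

    # Raise the MTU on the iSCSI data NIC (placeholder name eth1)
    ip link set dev eth1 mtu 9000

    # Verify end-to-end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header), fragmentation disallowed
    ping -M do -s 8972 192.168.10.100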
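
Windows iSCSI NIC and TCP tuning: a minimal sketch run from an elevated PowerShell prompt, assuming the data NIC is named "Ethernet 2" (placeholder). Advanced-property display names vary by NIC driver, so list them first and adjust the values to what your driver exposes.

    # Restrict TCP window auto-tuning (use highlyrestricted if preferred)
    netsh int tcp set global autotuninglevel=restricted

    # List the advanced properties the NIC driver actually exposes
    Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

    # Typical iSCSI data NIC settings (display names below are driver-dependent examples)
    Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Maximum Number of RSS Queues" -DisplayValue "2 Queues"
    Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Receive Buffers" -DisplayValue "4096"
    Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Transmit Buffers" -DisplayValue "4096"
    Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"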
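
Unix-like (Linux) iSCSI host tuning: a minimal sketch assuming the mapped LUN appears as /dev/sdb (placeholder). blockdev takes the read-ahead value in 512-byte sectors; the sysctl values are examples only and should be persisted in /etc/sysctl.conf if kept.

    # Increase read-ahead on the block device backed by the QSAN LUN (value in 512-byte sectors)
    blockdev --setra 8192 /dev/sdb
    blockdev --getra /dev/sdb   # verify

    # Raise the maximum TCP receive buffer (bytes; example values)
    sysctl -w net.core.rmem_max=524288
    sysctl -w net.ipv4.tcp_rmem="4096 87380 524288"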
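
VMware Round Robin with IOPS = 1: a minimal sketch run in the ESXi shell; naa.xxxxxxxx is a placeholder for the QSAN LUN identifier reported by the first command.

    # List devices and note the naa.* identifier of the QSAN LUN
    esxcli storage nmp device list

    # Set the path selection policy for that device to Round Robin
    esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR

    # Rotate paths after every I/O instead of the default 1000
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxx --type=iops --iops=1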
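
Windows FC (Marvell QLogic) queue depth: a hedged sketch of the optional registry tweak. The driver service key name depends on the installed driver (ql2300 is shown only as a common example); confirm the actual service name before editing, and reboot the host for the change to take effect.

    rem Elevated command prompt; sets the QLogic driver parameter qd=255 (service key name is an assumption)
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\ql2300\Parameters\Device" /v DriverParameter /t REG_SZ /d "qd=255" /f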
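
Unix-like FC multipathing: a minimal sketch of a dm-multipath device stanza with rr_min_io=1. The vendor/product strings and grouping policy are assumptions; match them to the output of multipath -ll for the QSAN array, then reload multipathd.

    devices {
        device {
            vendor                 "Qsan"            # match the vendor string shown by 'multipath -ll'
            product                ".*"              # placeholder; narrow to the actual model if desired
            path_grouping_policy   multibus
            path_selector          "round-robin 0"
            rr_min_io              1                 # switch paths after every I/O
        }
    }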

Critical Notes:

  • CAUTION: Do not set 4K block size for VMware ESXi datastores.
  • TIP: With VMware MPIO, use Round Robin and set IOPS to 1 (instead of 1000).
  • INFORMATION: 16Gb FC supports P2P topology only; Loop requires 8G/4G and a storage restart.
  • TIP: Avoid Trunking/LACP unless serving many clients; standard MPIO is typically sufficient.

JetStor Support

For assistance with performance tuning on JetStor deployments:
📧 [email protected]
🎫 Submit Support Ticket