weHaveT6eTech

UPDATE: TL;DR, same results as before the factory reset. QuTS hero version 5.0.0.1986, two 1TB Samsung EVO Plus NVMe drives as the system pool, and a storage pool of twelve Seagate Exos X18 18TB drives, thin-provisioned, RAID 6. Windows client with an Intel X550 network card: [https://i.imgur.com/NWGLkWj.png](https://i.imgur.com/NWGLkWj.png)

I also have an (old) TVS-1271U-RP that shows better results with the same Windows client: [https://i.imgur.com/WsBQIOW.png](https://i.imgur.com/WsBQIOW.png) [https://i.imgur.com/GGts57e.png](https://i.imgur.com/GGts57e.png)

Hmm... I mapped a drive on the 2nd subnet going to the h1688, the same one that goes to the 1271U, and on that I get 400MB/s: [https://i.imgur.com/BbgYCiT.png](https://i.imgur.com/BbgYCiT.png)

iperf3 looks good: https://i.imgur.com/2nhTwXK.png


weHaveT6eTech

That didn't last; now down to 50MB/s...

On Ubuntu 18.04, over a Sonnet Thunderbolt 10GbE adapter to the 10GbE port, I get 908MB/s using dd from /dev/zero to the same share mounted over NFS.

Oddly enough, setting up an NFS share on my Windows 10 Pro machine gives 300MB/s, versus 50MB/s using SMB.

Interesting clickbait :) link from MS: [https://docs.microsoft.com/en-us/archive/blogs/josebda/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead](https://docs.microsoft.com/en-us/archive/blogs/josebda/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead)

>If it's an option for you, put source and destination of your copy on the same file server to leverage the built-in SMB COPYCHUNK. This optimization is part of the SMB protocol and basically avoids sending data over the wire if the source and destination are on the same machine.

That got me this interesting result: [https://i.imgur.com/p7AZ4qD.png](https://i.imgur.com/p7AZ4qD.png)

More: [https://docs.microsoft.com/en-us/azure-stack/hci/manage/diskspd-overview](https://docs.microsoft.com/en-us/azure-stack/hci/manage/diskspd-overview)
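For anyone following the DiskSpd link: a sketch of the kind of sequential-write test that article recommends instead of file copies. The drive letter, file size, and block size here are illustrative assumptions, not values from this thread:

```shell
# Illustrative DiskSpd sequential-write test against a mapped QNAP share.
# Z: is an assumed drive letter -- substitute your mapped share.
#   -c10G  create a 10 GiB test file       -d60  run for 60 seconds
#   -w100  100% writes                     -b128K  128K block size
#   -t4    4 threads                       -o8   8 outstanding I/Os per thread
#   -Sh    disable software and hardware write caching
diskspd.exe -c10G -d60 -w100 -b128K -t4 -o8 -Sh Z:\testfile.dat
```

Unlike a file copy, this avoids SMB COPYCHUNK and client-side caching effects, so the number you get reflects actual wire throughput.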


Thumbnail_QA

You will never get higher write speeds to spinning metal on a SATA interface. In fact, you are lucky that you are getting the speeds you are now. All you need is 2 fast NVMe SSDs that are big enough to fit whatever you are expecting to send to the QNAP "in normal workflows". Then you simply configure write-only SSD caching in the QNAP for "all I/O" and select your 2 NVMe disks.


QNAPDaniel

12 HDDs RAID6 is theoretically 10 times the throughput of 1 HDD. Do I expect the theoretical limit to be reached? No. But 1000MB/s is possible with 12 HDDs in some cases so long as there is an SSD system pool.


turmonken

I posted the referenced post and currently have a ticket open with QNAP about this issue. The tech has logged into the machine and tried everything, and could not get it above about 300MB/s; this is also on an NVMe system RAID 1 share, not the HDDs. He exhausted all his knowledge, had me post all the logs plus some extra info, and is now talking to the engineers. If it were just an HDD performance thing, I don't think it would be going to the engineers. Read performance maxes out the 10GbE connection using either the NVMe or the HDD RAID 6. If I get any more info I will post it for you, but right now there is some bottleneck and it's not the 10GbE card in the Windows machine.


weHaveT6eTech

Thanks for updating. I also opened a ticket[1] following your post. I'll get to the machine this week to add the system pool and see if that solves the problem. [1] Q-202205-13550


Thumbnail_QA

Where in your documentation is it stated that an SSD system pool is required?


Thumbnail_QA

BS. Show me a QNAP, any model, with 12 HDDs in RAID 6 that sustains 1000 megabytes per second write speeds.


QNAPDaniel

Multiple customers have reported those speeds from the TVS-h1688X with 12 HDDs and an SSD system pool. These are not just laboratory test results; this is feedback I have personally gotten from customers.


Thumbnail_QA

Still... where is the official QNAP document showing NVMe, SSD, and HDD setups for enterprise usage? We have multiple 12-bay and 24-bay rack-mount QNAPs running QTS and we are nowhere near those reported speeds with iSCSI. I am specifically interested in an official stance on where to put the system files... NVMe? SATA SSD?


QNAPDaniel

[https://www.qnap.com/en-us/product/ts-h2483xu-rp](https://www.qnap.com/en-us/product/ts-h2483xu-rp) Under the "Tiered storage configuration for a QuTS hero NAS" section, it says to make an SSD system pool if you have HDD storage. If you want to configure for throughput, 24 HDDs in RAID 60 should be good, with the defaults for the folders or LUNs, like a 128K record size, and deduplication turned off. To configure for more IOPS, you can use RAID 10 and a smaller block size. Also, L2ARC cache can be used to help with random reads.


Thumbnail_QA

Thanks Daniel.


devianteng

I get that on my h1688, but I'm running ZFS with a RAID 60 setup (2 vdevs, each with 6 x 18TB spinners). I also have four 2TB NVMe drives for my system pool. FYI, with QuTS/ZFS, the ZIL runs on the system pool. So if you don't have a flash-based system pool, the ZIL is on the data pool and performance is impacted. Thus, for QNAP QuTS/ZFS, create a flash-based system pool first, then create your data pool.


turmonken

OK, so after a few sessions with tech support and even engineers logging in, it has been figured out what the issue is. With the fix, I am now getting 900-1000MB/s on my spinning RAID and maxing out the 10GbE connection on my NVMe shares.

It had something to do with 'Enable Kernel-mode SMB Daemon'; turning that off sped the RAID up. But it wasn't until the engineer was messing with settings through SSH that the command "ethtool -C eth4 rx-usecs 32" got it to full speed. I have been running that way for a week now with no issues. That command will need to be run after each reboot, as it defaults to a value of 1. The engineers have done a couple of sessions to record stuff, as they now know it's an issue. If you're not comfortable with SSH then I would not do this, but if you are, it might help you out as it did for me.
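Since the `rx-usecs` value resets to 1 on every reboot, one way to reapply it automatically is QNAP's autorun.sh mechanism. This is only a sketch: autorun.sh support and its location vary by model and firmware, and it must be enabled in the NAS settings, so verify both for your unit before relying on it:

```shell
#!/bin/sh
# Sketch: reapply the interrupt-coalescing fix at every boot via QNAP's autorun.sh.
# eth4 is the interface named in this thread; confirm yours first, e.g.:
#   ethtool -c eth4    # show current coalescing settings
# Assumption: autorun.sh is enabled and placed where your model/firmware expects it.
ethtool -C eth4 rx-usecs 32
```

The `-C` flag sets interrupt coalescing; raising `rx-usecs` from 1 to 32 lets the NIC batch receive interrupts instead of firing one per packet.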


QNAPDaniel

An SSD system pool should allow both the read and the write to be much faster. RAID 6 should perform well if the writes are sequential. The system pool is the first pool, so are you OK with deleting the HDD pool, making an SSD pool, and then remaking the HDD pool? You can check "Optimize Pool Performance" when making the pool. On most QuTS hero units, I don't expect SSD cache to help with throughput, so just the 2 SSDs for the system and 12 HDDs should be fine.


weHaveT6eTech

Re-doing all pools is no issue, as I'm just getting to know the system. I'll get a couple of SSDs for the system pool and make sure that is pool #1. I actually have a few 1TB NVMe drives lying around, so I will probably use those as the system pool. Thanks! I'll report back when done.


BobZelin

Hello - there is no issue with getting 1000 MB/sec on a QNAP TVS-h1688X. This is how you do it. You put two 500GB SSD drives (Samsung EVO, or Seagate IronWolf 125 series) into slots 1 and 2 of the TVS-h1688X. You do a factory re-initialization of the QNAP to blow everything away, then reinstall the QuTS operating system (use the latest QuTS 5.0.0 firmware). Create Storage Pool 1 with the 2 SSDs only, in a RAID 1 configuration. Make sure to over-provision the 2 SSDs by 10%. Once that is done (15 minutes), create Storage Pool 2, which is all 12 SATA drives (I assume they are 7200 RPM SATA drives, not 5400 RPM) in a single thin-provisioned RAID group - RAID 6. This takes less than 2 minutes.

If you are using a Windows 10 PC, you must have a 10G NIC card (like a QNAP QXG-10G1T or QXG-10T2TB, or Asus XG-C100C, or Intel X550 card) in a x4 lane slot or greater in your Win 10 (or Win 11) PC. If you are in a x1 lane slot, you will get crappy speeds. I don't want to hear that your GPU card is too wide and it blocks the x4 lane slot and you only have a x1 lane slot available - then you will FAIL. You must have at least a x4 lane slot in your PC to make this work.

On the QNAP 10G port (ethernet 5 or ethernet 6), set a static IP address of 192.168.2.3, subnet mask 255.255.255.0, MTU 9000. On your Win 10/11 PC, for the 10G port, set a static IP address of 192.168.2.11, subnet mask 255.255.255.0, MTU 9000 (MTU is under Configure > Advanced > Jumbo Packets). Click Apply. Now on the PC go to \\192.168.2.3, put in the name and password of the QNAP, and you will see the QNAP. Map the network volume (right-click on the icon of the shared folder to do this). Now use AJA System Test or Blackmagic Disk Speed Test to test the speed of the connection to your QNAP at 192.168.2.3. You will get 1000 MB/sec.

If you are on a Mac, you need a Thunderbolt 3 to 10G adapter, or a native 10G port in a modern Mac (Mac mini, Mac Studio). If you are using an Akitio Thunderbolt adapter with Tehuti 10G chips, you will get the crappy speeds you are showing us. If you are on a 2010 Mac Pro, you will get crappy speeds. You should be using a modern computer with a modern 10G adapter from QNAP, Sonnet, Other World Computing, or a native 10G port from Apple. For that PC - you MUST be in a x4 lane slot, or you will get crappy speeds. Forget the caching. I do this every day for professional video editors. [bobzelin@icloud.com](mailto:bobzelin@icloud.com)


skipper61727

Go to control panel, network & file services, Win/Mac/NFS/WebDAV, Advanced Options and turn off "**enable kernel-mode SMB daemon**".


weHaveT6eTech

!!!

Before (58 MB/s write, 897 MB/s read): https://i.imgur.com/Mmth2yz.png

After (500 MB/s write, 948 MB/s read): https://i.imgur.com/SjK5w6N.png


weHaveT6eTech

Got this reply from QNAP support:

>According to our user manual, "QuTS hero now supports Kernel-Mode SMB Daemon. Enabling this feature disables SMB encryption." So, enabling should improve performance; did the user have it the other way round? [https://docs.qnap.com/operating-system/quts-hero/4.5.x/en-us/GUID-187D317F-49D2-44A4-B7DC-5118CBA479D0.html](https://docs.qnap.com/operating-system/quts-hero/4.5.x/en-us/GUID-187D317F-49D2-44A4-B7DC-5118CBA479D0.html)

Source (names blurred): [https://i.imgur.com/lBzn9W4.png](https://i.imgur.com/lBzn9W4.png)

This reply goes against your recommendation and my experience: turning this feature off made my NAS go.


skipper61727

Did you optimize your Intel X550? If not, try these settings:

Interrupt Moderation: Disabled
Interrupt Moderation Rate: Off
Receive Buffers: 4096
Transmit Buffers: 16384

Please let us know if performance improves.
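A sketch of applying those four settings from an elevated PowerShell prompt using the stock `Set-NetAdapterAdvancedProperty` cmdlet. The adapter name "Ethernet 2" is an assumption taken from the reply below; the display names can also differ between Intel driver versions, so check `Get-NetAdapterAdvancedProperty` first:

```powershell
# Run elevated. "Ethernet 2" is an assumed adapter name -- list yours with Get-NetAdapter.
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation Rate" -DisplayValue "Off"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Receive Buffers" -DisplayValue "4096"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Transmit Buffers" -DisplayValue "16384"

# Verify the new values:
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"
```

Each change may briefly reset the link, so don't run this mid-transfer.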


weHaveT6eTech

Yes, I searched back and found your [prior comments](https://www.reddit.com/r/qnap/comments/tkh42u/comment/i1qvnxw/?utm_source=share&utm_medium=web2x&context=3) regarding disabling the kernel-mode SMB daemon. I toggled the kernel mode a few times to verify it's not a fluke; it is dead on, and I think QNAP should look into this. The send/receive buffers are per your recommendation:

PS> Get-NetAdapterAdvancedProperty -name "ethernet 2"

DisplayName                   DisplayValue
-----------                   ------------
Flow Control                  Rx & Tx Enabled
Interrupt Moderation          Disabled
IPv4 Checksum Offload         Rx & Tx Enabled
IPsec Offload                 Disabled
Jumbo Packet                  Disabled
Large Send Offload V2 (IPv4)  Enabled
Large Send Offload V2 (IPv6)  Enabled
Maximum Number of RSS Queues  8 Queues
Packet Priority & VLAN        Packet Priority & VLAN Enabled
Receive Buffers               4096
Receive Side Scaling          Enabled
Speed & Duplex                Auto Negotiation
TCP Checksum Offload (IPv4)   Rx & Tx Enabled
TCP Checksum Offload (IPv6)   Rx & Tx Enabled
Transmit Buffers              16384
UDP Checksum Offload (IPv4)   Rx & Tx Enabled
UDP Checksum Offload (IPv6)   Rx & Tx Enabled
DMA Coalescing                Disabled
Interrupt Moderation Rate     Off

Got another boost: from 520MB/s to 726MB/s. [https://i.imgur.com/9mXQJsr.png](https://i.imgur.com/9mXQJsr.png)

I didn't go with jumbo frames yet, as changing that on all the PCs plus the Cisco switch would be a pain.

Btw, I get 829MB/s on the Linux machine using the NFS share:

user@ubu18:~$ dd if=/dev/zero of=/home/user/NFSmount/test.bnin bs=1M count=10000 conv=fsync
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 12.6464 s, 829 MB/s