This issue only occurs on Windows and is likely caused by a cache problem. Users can delete the Qcentral.key and Qcentral.password files to fix it. The two files are typically located at "C:\Users\XXXXX\AppData\Roaming\QSAN\QCentral".
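For convenience, the cleanup can be scripted. A minimal Python sketch, assuming a Windows client where %APPDATA% resolves to C:\Users\&lt;user&gt;\AppData\Roaming; the QSAN\QCentral folder name is taken from the path above:

```python
import os
from pathlib import Path

# Resolve C:\Users\<user>\AppData\Roaming via the APPDATA environment variable.
qcentral_dir = Path(os.environ["APPDATA"]) / "QSAN" / "QCentral"

# Delete the two cached credential files if they exist.
for name in ("Qcentral.key", "Qcentral.password"):
    target = qcentral_dir / name
    if target.exists():
        target.unlink()
        print(f"Removed {target}")
    else:
        print(f"Not found: {target}")
```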
Yes, we always suggest using the latest version, since known bugs may have been fixed in the latest release. Users can download QCentral and its release notes from our partner portal.
QCentral uses SLP (Service Location Protocol) to scan for QSAN devices. When the user starts a scan, QCentral sends an SLP request packet to the host's broadcast domain, and every SLP client that receives the packet replies with its service type value. QCentral then lists the clients whose service type matches; clients that do not match are not listed.
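For illustration, here is a minimal Python sketch of that request/reply flow, building an SLPv2 service request (SrvRqst, per RFC 2608) and broadcasting it on UDP port 427. The service-type string is a placeholder; the exact value QCentral and the devices agree on is not documented here:

```python
import socket
import struct

SERVICE_TYPE = b"service:qsan"  # hypothetical service type, for illustration only

def build_srv_rqst(service_type: bytes) -> bytes:
    lang = b"en"
    # SrvRqst body: <PRList> <service-type> <scope-list> <predicate> <SLP SPI>
    body = struct.pack("!H", 0)                                # empty PRList
    body += struct.pack("!H", len(service_type)) + service_type
    scope = b"DEFAULT"
    body += struct.pack("!H", len(scope)) + scope
    body += struct.pack("!H", 0)                               # empty predicate
    body += struct.pack("!H", 0)                               # empty SLP SPI
    total = 14 + len(lang) + len(body)                         # header + body
    header = bytes([2, 1])                                     # version 2, SrvRqst
    header += total.to_bytes(3, "big")                         # packet length
    header += struct.pack("!H", 0x2000)                        # MCAST flag set
    header += (0).to_bytes(3, "big")                           # next-ext offset
    header += struct.pack("!H", 1)                             # XID
    header += struct.pack("!H", len(lang)) + lang
    return header + body

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.settimeout(3.0)
sock.sendto(build_srv_rqst(SERVICE_TYPE), ("255.255.255.255", 427))

# Collect replies; only devices whose service type matches will answer.
while True:
    try:
        data, addr = sock.recvfrom(4096)
        print(f"SLP reply from {addr[0]}: {len(data)} bytes")
    except socket.timeout:
        break
```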
QCentral is a Java-based program (it runs on the JRE), so it can run on any OS that supports Java.
NFS access isn't restricted by account-based ACLs; it uses its own access rules instead. These NFS access rules work independently of the POSIX-layer ACL and the Windows ACL.
On the share configuration page, first enable the NFS share and click Apply, then re-enter the same page to find the NFS access rules section and add rules as needed; a rough analogy is sketched below.
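For readers familiar with Linux NFS, these rules play the same role as /etc/exports entries. The following is only an analogy with illustrative paths, hosts, and options, not QSAN's actual rule syntax:

```
# /etc/exports-style analogy: one rule per host or subnet
/share/public   192.168.1.0/24(rw,async,root_squash)
/share/archive  10.0.0.5(ro,sync,root_squash)
```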
Root squash prevents an NFS client from mapping the root account UID to the storage's root account. If root squash is selected and an NFS client accesses the share with the root UID, the access credential is automatically mapped to "nobody". Select this option to avoid security threats from unwanted root access.
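The effect can be verified from a client. A minimal sketch, assuming the share is already mounted at /mnt/nfs (a hypothetical mount point) and the script runs as root on the client:

```python
import os

# Create a file on the NFS mount while running as root (UID 0).
path = "/mnt/nfs/root_squash_test"
with open(path, "w") as f:
    f.write("written by root\n")

# With root squash enabled, the server records the owner as "nobody"
# (typically UID 65534) instead of root (UID 0).
st = os.stat(path)
print(f"owner uid on server: {st.st_uid}")  # expect 65534, not 0
```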
Using the Sync access rule causes NFS performance to drop to roughly 10%~30% of Async access on average. We therefore recommend Async access whenever possible, unless you strongly demand safety in data transactions and are less concerned about access performance.
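The gap comes from the server acknowledging each write only after it is committed to stable storage. The cost of per-write commits can be illustrated locally with a rough Python sketch comparing buffered writes against writes followed by fsync(); the file path, block size, and count are arbitrary:

```python
import os
import time

def timed_writes(path: str, use_fsync: bool, count: int = 200) -> float:
    """Write `count` 4 KiB blocks, optionally forcing each to stable storage."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        block = b"\0" * 4096
        for _ in range(count):
            f.write(block)
            if use_fsync:
                f.flush()
                os.fsync(f.fileno())   # commit before the next write, like sync NFS
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

async_like = timed_writes("bench.tmp", use_fsync=False)
sync_like = timed_writes("bench.tmp", use_fsync=True)
print(f"buffered: {async_like:.3f}s, fsync per write: {sync_like:.3f}s")
```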
Deduplication is a feature that uses a DDT (deduplication table) stored in the system cache space (RAM or SSD) to keep mapping records of unique data. By checking the DDT to verify whether a data block being written is a duplicate, the system decides whether the block can be deduplicated.
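Conceptually, the DDT is a hash map keyed by a block fingerprint. A toy Python sketch of the write-path decision (the hash, block size, and table layout are simplified illustrations, not the on-disk ZFS format):

```python
import hashlib

BLOCK_SIZE = 4096
ddt = {}          # fingerprint -> (block address, reference count)
next_addr = 0

def write_block(block: bytes) -> int:
    """Return the address the block is stored at, deduplicating when possible."""
    global next_addr
    fp = hashlib.sha256(block).digest()    # fingerprint of the incoming block
    if fp in ddt:
        addr, refs = ddt[fp]
        ddt[fp] = (addr, refs + 1)         # duplicate: bump refcount, skip the write
        return addr
    addr = next_addr                       # unique: allocate space, record in DDT
    next_addr += BLOCK_SIZE
    ddt[fp] = (addr, 1)
    return addr

a = write_block(b"\x01" * BLOCK_SIZE)
b = write_block(b"\x01" * BLOCK_SIZE)
assert a == b                              # second write deduplicated, no new space
```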
In cases where a lot of duplicated data is stored, deduplication lets users reclaim the space occupied by the duplicate copies.
Yes, enabling deduplication greatly reduces ZFS performance, since every read and write requires a lookup through the DDT. In theory, the larger the DDT grows, the greater the impact on write performance.
QSAN NAS products use block-level deduplication: data is checked block by block, a fingerprint is generated for each block, and the fingerprint is recorded in the DDT along with the block's address.
Block-level deduplication works more efficiently than file-level deduplication, because duplicates are more likely at the block level. (A file typically consists of multiple blocks, so two files must match in their entirety to deduplicate at the file level; such whole-file matches are less likely than matches between individual blocks.)
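To see the difference, consider two files that differ in only one block. A short Python sketch (toy fingerprints, 4 KiB blocks) shows block-level deduplication still reclaiming most of the space while file-level deduplication reclaims none:

```python
import hashlib

BLOCK = 4096

def block_fingerprints(data: bytes) -> list:
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

common = b"".join(bytes([i]) * BLOCK for i in range(7))  # 7 distinct blocks
file_a = common + b"X" * BLOCK                           # 8 blocks total
file_b = common + b"Y" * BLOCK                           # differs in one block only

# File level: the whole-file fingerprints differ, so nothing deduplicates.
print(hashlib.sha256(file_a).digest() == hashlib.sha256(file_b).digest())  # False

# Block level: 7 of the 8 blocks are shared and can be deduplicated.
shared = set(block_fingerprints(file_a)) & set(block_fingerprints(file_b))
print(f"shared blocks: {len(shared)} of 8")              # 7
```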