Latest QSAN Storage Management System


Smart Snapshot Retention

4.2x Longer Protection Time

Borrowing its retention policy from the tape backup world, GFS helps system administrators maximize the number of snapshot versions kept while using storage capacity more efficiently. The GFS backup policy, a.k.a. the Grandfather-Father-Son backup rotation policy, lets you set the maximum number of snapshot versions to keep, for example by defining weekly, monthly, and yearly policies in the snapshot rotation, and IT can adjust the frequency to suit company demands. The weekly (Son) backups can rotate on a weekly basis, with one graduating to Father status each month; the monthly (Father) snapshots can rotate on a yearly basis, with one graduating to Grandfather status each year. At the same time, system administrators can lock particular snapshots to prevent their automatic removal by the retention policy. With a GFS rotation policy in place, you can perform daily incremental backups and weekly full backups to the primary copy, keep a dedicated copy for the monthly full backups, and keep another dedicated copy for the yearly full backups.
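As a rough sketch of how such a rotation can be decided, the snippet below (illustrative only; the function name and parameters are assumptions, not QSM's actual API) picks which snapshot dates survive under weekly (Son), monthly (Father), and yearly (Grandfather) limits while honoring locked snapshots:

```python
from datetime import date

def gfs_retain(snapshots, weekly=4, monthly=12, yearly=3, locked=frozenset()):
    """Decide which snapshot dates a GFS rotation keeps.

    weekly/monthly/yearly are the maximum numbers of Son, Father and
    Grandfather versions; snapshots in `locked` are never removed.
    (Hypothetical helper -- QSM implements this internally.)
    """
    snaps = sorted(snapshots, reverse=True)   # newest first
    keep = set(locked)

    def keep_newest_per_bucket(key, limit):
        buckets = []
        for snap in snaps:
            b = key(snap)
            if b not in buckets:
                buckets.append(b)
                if len(buckets) <= limit:
                    keep.add(snap)            # newest snapshot in this bucket

    keep_newest_per_bucket(lambda d: d.isocalendar()[:2], weekly)   # Son
    keep_newest_per_bucket(lambda d: (d.year, d.month), monthly)    # Father
    keep_newest_per_bucket(lambda d: d.year, yearly)                # Grandfather
    return keep
```

Everything outside the returned set is eligible for automatic removal on the next rotation; locked snapshots always survive.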


Single File Snapshot Recovery

Most Precise Restore

Snapshots help system administrators assure data security, and single file recovery gives administrators a powerful tool to reduce the time required for actual failover and to recover files effectively. Snapshot single file recovery offers flexible methods for IT to restore files locally, remotely, or even from a public cloud service.


Flexible Protection

One-way Sync

1-way (unidirectional): data modifications on the local NAS are synchronized to the remote NAS, but data modifications on the remote NAS are not reflected back.

Two-way Sync

2-way (bi-directional): data modifications on any NAS are synchronized to all the others.

Multi-Node Sync

Supports real-time data synchronization across multiple devices.

XMirror is one of the functions of QSM 3.0. It regularly synchronizes a volume or the contents of a folder between multiple XCubeNAS devices; any modification to documents is replicated to the other XCubeNAS units via QSM. By using XMirror, data consistency and availability across different XCubeNAS systems is ensured.
• Supports volume and folder mirroring.
• 1-way (unidirectional): data modifications on the local NAS are synchronized to the remote NAS, but data modifications on the remote NAS are not reflected back.
• 2-way (bi-directional): data modifications on any NAS are synchronized to all the others.
• Version control (64 versions).
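To make the unidirectional rule concrete, here is a minimal sketch of a 1-way folder mirror (a hypothetical helper based on file modification times; XMirror's real engine works at the volume/folder level inside QSM, not through this function):

```python
import shutil
from pathlib import Path

def one_way_sync(src: Path, dst: Path):
    """Minimal 1-way (unidirectional) mirror: changes on the source
    side propagate to the destination, and destination-only files are
    removed -- but nothing ever flows back from destination to source.
    """
    dst.mkdir(parents=True, exist_ok=True)
    src_files = {p.relative_to(src) for p in src.rglob("*") if p.is_file()}
    for rel in src_files:
        s, d = src / rel, dst / rel
        if not d.exists() or s.stat().st_mtime > d.stat().st_mtime:
            d.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(s, d)            # copy newer source file over
    for p in list(dst.rglob("*")):
        if p.is_file() and p.relative_to(dst) not in src_files:
            p.unlink()                    # destination-only file: dropped
```

A 2-way sync would instead let the newer copy win on either side, and a multi-node setup repeats this pairwise with version history (up to 64 versions in XMirror) to resolve conflicts.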

File Retention

Privacy Management & Capacity Utilization

  • Professional Privacy Management
  • Flexible Capacity Utilization

With organizations managing so many threats to the security of data, backup and recovery systems remain very important aspects of data protection. How long data should be kept is still a question the industry continues to ask. Data retention describes the continued storage of an organization's data for compliance or business reasons, and industry standards and legislation may mandate long-term retention. Most organizations keep their files from their earliest days until now, yet not all of that data is genuinely useful for day-to-day operations. Setting a retention period on folders helps ensure storage capacity is used in the right place.
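The idea of a per-folder retention window can be sketched as follows (illustrative only: the function name, the use of modification time, and the day-based window are assumptions for the example, not QSM's actual mechanism):

```python
import time
from pathlib import Path

def expired_files(folder: Path, retention_days: int, now: float = None):
    """List files whose last-modified time falls outside the
    retention window and are therefore candidates for removal
    or archiving.  (Hypothetical helper for illustration.)
    """
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400     # window start, in seconds
    return [p for p in folder.rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]
```

A compliance-driven policy would typically archive rather than delete the expired files, but the selection logic is the same.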


Support All New RAID Z3

  • Ultimate system reliability for giant storage
  • 1.5x safer than RAID 6

The mission of QSM is to simplify storage and to build enterprise-level quality from storage components through smarter software; indeed, that notion is at the heart of ZFS (the Zettabyte File System). The most important part of an enterprise storage device is RAID (Redundant Array of Independent Disks). RAID-Z is a data and parity scheme like traditional RAID, but it uses a dynamic stripe width: every block is its own RAID-Z stripe regardless of block size, which means every RAID-Z write is a full-stripe write. At the same time, RAID-Z combines with the copy-on-write (COW) transactional semantics of ZFS to eliminate the RAID write hole.

Nowadays, the time to populate a drive dominates RAID rebuild time. As disks in RAID systems take longer to reconstruct, the reliability of the total system decreases because of the longer period spent running in a degraded state. As hard drive capacities continue to outpace their throughput, the time has come for a new level of RAID: triple-parity RAID and beyond. Today, repairing a RAID group can easily take more than a day, and the problem grows significantly more pronounced as HDD capacity continues to outpace throughput. On average, 0.8 percent of disk failures would result in data loss due to an uncorrectable bit error. The time to repair a failed drive is therefore increasing, and the lengthening duration of a scrub means that errors are more likely to be encountered during repair. With RAID-Z2 (similar to RAID-6) increasingly unable to meet reliability requirements, there is an impending and urgent need for triple-parity RAID. Data reliability on RAID-Z3 is 5x safer than RAID-Z2 and 30x safer than RAID-Z1 (similar to RAID-5).
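The reliability argument above can be made concrete with a back-of-the-envelope calculation. The sketch below (illustrative; the 1e-15 bit error rate is a typical enterprise-HDD spec-sheet figure, not a QSAN number) estimates the chance of hitting at least one unrecoverable bit error while reading every surviving disk to rebuild a failed one:

```python
def p_ure_during_rebuild(capacity_tb: float, n_surviving: int,
                         ber: float = 1e-15) -> float:
    """Probability of at least one unrecoverable read error while
    reading all surviving disks during a rebuild, assuming
    independent bit errors at rate `ber` per bit read."""
    bits_read = capacity_tb * 1e12 * 8 * n_surviving
    return 1.0 - (1.0 - ber) ** bits_read
```

With single parity (RAID-Z1, similar to RAID-5), that one error during a rebuild already means data loss; each extra parity level (Z2, Z3) lets the array absorb another failure while degraded, which is why larger, slower-to-rebuild disks push the industry toward triple parity.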