No Data Corruption & Data Integrity in Cloud Web Hosting
The integrity of the data that you upload to your new cloud web hosting account is guaranteed by the ZFS file system that we use on our cloud platform. Most web hosting providers, including our company, store content on multiple hard drives, and since the drives work in a RAID, the exact same data is synchronized between them at all times. If a file on one drive gets corrupted for whatever reason, however, it is very likely that the damaged copy will be replicated to the other drives, because conventional file systems have no special check for this. Unlike them, ZFS keeps a digital fingerprint, or checksum, for every file. If a file gets damaged, its checksum will no longer match the one ZFS has on record, and the damaged copy is replaced with a healthy one from a different drive. Because this happens in real time, there is virtually no chance for any of your files to ever end up corrupted.
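To illustrate the general idea behind this self-healing mechanism, here is a minimal Python sketch of a mirrored store that records a checksum at write time and repairs bad copies on read. It is a simplified, hypothetical model for illustration only; the class name and structure are our own and do not reflect how ZFS is implemented internally.

```python
import hashlib


class MirroredStore:
    """Simplified model of a mirrored pool: the same file is kept on
    several drives, and a checksum recorded at write time acts as the
    authoritative 'fingerprint' for every copy. (Hypothetical example,
    not ZFS internals.)"""

    def __init__(self, num_drives=3):
        self.drives = [dict() for _ in range(num_drives)]  # path -> bytes
        self.checksums = {}                                # path -> sha256 hex

    def write(self, path, data):
        # Store the data on every drive and record its fingerprint once.
        self.checksums[path] = hashlib.sha256(data).hexdigest()
        for drive in self.drives:
            drive[path] = data

    def read(self, path):
        # Verify the copy we read against the recorded checksum; if it is
        # damaged, return a good copy from another drive and repair in place.
        expected = self.checksums[path]
        for drive in self.drives:
            data = drive[path]
            if hashlib.sha256(data).hexdigest() == expected:
                self._heal(path, data)          # overwrite any bad copies
                return data
        raise IOError(f"no intact copy of {path} found")

    def _heal(self, path, good_data):
        expected = self.checksums[path]
        for drive in self.drives:
            if hashlib.sha256(drive[path]).hexdigest() != expected:
                drive[path] = good_data


# Example: a copy silently corrupted on one drive is repaired on read.
store = MirroredStore()
store.write("index.html", b"<h1>Hello</h1>")
store.drives[1]["index.html"] = b"<h1>He11o</h1>"      # simulate bit rot
assert store.read("index.html") == b"<h1>Hello</h1>"
assert store.drives[1]["index.html"] == b"<h1>Hello</h1>"
```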
No Data Corruption & Data Integrity in Semi-dedicated Hosting
We have eliminated the risk of files getting damaged silently, because the servers where your semi-dedicated hosting account will be set up use a powerful file system called ZFS. Its key advantage over other file systems is that it keeps a unique checksum for each file - a digital fingerprint that is verified in real time. Since we store all content on a number of NVMe drives, ZFS checks whether the fingerprint of a file on one drive matches the fingerprint on the rest of the drives and the one it has on record. If there is a mismatch, the bad copy is replaced with a good one from another drive, and because this happens immediately, a damaged copy can neither remain on our web hosting servers nor be replicated to the other drives in the RAID. None of the other file systems include such checks; what is more, even during a file system check following an unexpected power failure, none of them will discover silently corrupted files. ZFS, in contrast, remains consistent after a power loss, and its continual checksum monitoring makes a lengthy file system check unnecessary.
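The continuous verification described above can be pictured as a background pass over every copy of every file, repairing any copy whose fingerprint no longer matches the record. The short Python sketch below models that idea; the scrub function, the in-memory "drives", and the file paths are hypothetical and only illustrate the concept, not how ZFS actually schedules or performs its checks.

```python
import hashlib


def scrub(drives, checksums):
    """Walk every file on every drive, compare each copy against its
    recorded checksum, and overwrite damaged copies with an intact one.
    A simplified, hypothetical model of a continuous background check."""
    repaired = []
    for path, expected in checksums.items():
        # Find one copy whose fingerprint still matches the record.
        good = next((d[path] for d in drives
                     if hashlib.sha256(d[path]).hexdigest() == expected), None)
        if good is None:
            raise IOError(f"no intact copy of {path} on any drive")
        for drive in drives:
            if hashlib.sha256(drive[path]).hexdigest() != expected:
                drive[path] = good
                repaired.append(path)
    return repaired


# Example: three mirrored drives, one copy of one file has rotted silently.
data = b"user uploaded content"
drives = [{"site/photo.jpg": data} for _ in range(3)]
checksums = {"site/photo.jpg": hashlib.sha256(data).hexdigest()}
drives[2]["site/photo.jpg"] = b"user uploaded c0ntent"   # silent corruption

print(scrub(drives, checksums))             # ['site/photo.jpg']
print(drives[2]["site/photo.jpg"] == data)  # True - the bad copy was healed
```

Because the verification runs continuously while the data is in use, there is never a need to take the storage offline for a lengthy consistency check, which is the point the paragraph above makes about recovery after a power failure.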