

TL;DR: if you want something reliable with minimal fuss, use Arq. If you want something to fiddle with, use Duplicacy.

I use Storj and host a node (I get it, might not be what you're after). If it helps, my node is in a datacenter, not in my home/SOHO. I'm using Arq to send data up to Storj on my and others' machines.

If I heavily encrypt all my files with something like Cryptomator or VeraCrypt before uploading them to a cloud storage service, does it matter what cloud storage provider we are using? If the use case is only for backups, check out.

HyperBackup: did we ever arrive at a final solution for retention? In your case, I don't think you can set a retention policy with HB.
I might add a six-month and a one-year backup too. Why? I enjoy thinking through scenarios and strategies, and learning about really cool tech for risk reduction (Duplicacy and ZFS being the latest). I also had an aerospace customer "back in the day" that lost six months' worth of work (head crash on the working disk pack, retrieved the backup (crash), retrieved the longer-term backup (crash)). No idea what it cost them in terms of lost work and contract delays.

Arq does deduplication, block-level backups, etc. Minimal fiddling: set it up and it works. Good options for retention (how many backups per day/week/month/months). The developer's livelihood depends on it, i.e. skin in the game (could be true of Duplicacy too).
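Duplicacy's counterpart to Arq's retention schedule is its prune command: each -keep n:m flag means "keep one snapshot every n days for snapshots older than m days," with n = 0 meaning delete outright, and the flags ordered from oldest cutoff to newest. A sketch of a hypothetical daily/weekly/monthly schedule (the exact numbers here are just an example, not my actual setup):

```shell
# Keep nothing older than a year, one snapshot a month past 180 days,
# one a week past 30 days, and one a day past 7 days.
duplicacy prune -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7
```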

The storage is SSH/SFTP running on my TrueNAS 12 CORE server (free). That's a total of 2688 GiB being backed up. The space occupied on the server (after deduplication and compression) is (401 GiB + 618 GiB + 64 GiB) = 1083 GiB, though checking the folder on the server, it shows 1303 GiB. Not sure where the discrepancy comes from - possibly block size on the ZFS file system.

Deduplication means that duplicated blocks in the files, whether on the same or different drives, and on the same or different computers, are saved only once and reused. This large reduction is due to my having multiple copies of the same dataset on the Data drive, and to my home folders on my iMac and MBP being largely the same. Using Duplicacy to back up to my TrueNAS system with 8 drives in a RAIDZ2 configuration seems like a safe and reliable system.

I use multiple backup programs (Duplicacy, Carbon Copy Cloner, Arq 5, Backblaze, and even one Time Machine task) to back up to multiple targets (drives that are always attached, drives that live in a drawer and are attached once a month, and SFTP, SMB, and S3-compatible targets on both my Synology NAS and TrueNAS). Backup intervals range from hourly to monthly. I also put a drive on the shelf about every two years. I think I'm pretty well covered for whatever might come up.
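The space figures above can be sanity-checked with a line of shell arithmetic (sizes in GiB; shell math is integer, so the percentage is truncated):

```shell
# Sizes in GiB, taken from the figures quoted above.
raw=2688                    # total data selected for backup
stored=$((401 + 618 + 64))  # space on the server after dedup + compression
echo "stored: ${stored} GiB"
echo "saved: $((100 - 100 * stored / raw))% of the raw size"
```

That works out to a roughly 60% reduction versus the raw data.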
The terminology used in the Duplicacy world is a little different: the things you back up are called repositories, and the places you back up to are called storage. The software supports multiple repositories and multiple storage destinations: local disk, SFTP, Dropbox, Amazon S3, Wasabi, DigitalOcean Spaces, Google Cloud Storage, Microsoft Azure, Backblaze B2, Google Drive, Microsoft OneDrive, Hubic, OpenStack Swift, WebDAV (under beta testing), pcloud (via WebDAV), Box (via WebDAV), and File Fabric. Intermediate to advanced skills required.
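That terminology maps directly onto the CLI: you run init from inside the repository (the directory you want backed up) and hand it a storage URL. A minimal sketch; the paths, hostname, bucket, snapshot id "mywork", and the Wasabi region/endpoint are all made-up examples, not values from my setup:

```shell
cd /path/to/repository                    # the directory to back up (the "repository")
duplicacy init mywork sftp://user@truenas/backups   # first (default) storage
duplicacy backup -stats                   # back up to the default storage

# The same repository can also back up to a second storage, e.g. Wasabi:
duplicacy add wasabi mywork wasabi://us-east-1@s3.wasabisys.com/my-bucket
duplicacy backup -storage wasabi -stats
```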
It seems creating a robust backup system has become somewhat of a hobby/obsession (hobbsession?) for me. The latest addition to my stable of software is Duplicacy. The command line version is free (GitHub or Homebrew).
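If you go the Homebrew route, installing the CLI version is a one-liner (assuming Homebrew itself is already set up; as far as I know the formula is simply named duplicacy):

```shell
brew install duplicacy
duplicacy   # with no arguments, prints the list of available commands
```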
