I've been using CrashPlan for just over three weeks now. In the past I used Mozy, and for a long time I also used my own set of rsync scripts that backed up to some "unlimited" web space I had access to.
Mozy never really worked for me: it took 5 months to do my initial backup, and it just couldn't cope with very large files (my large virtual hard discs caused it to stall the backup for days). The "unlimited" web space provider recently spotted I was storing 100GB+ on their server and asked me to remove it. So I needed something new.
Over the last three weeks CrashPlan has backed up 60GB of my data to their cloud. That covers my photos, documents, source code, email and other important things. In comparison, it took 5 months to back up the same data to Mozy!
I still have 150GB of music and other assorted files to back up. After that I'll try backing up my larger virtual disc images (it has already coped with some 6GB ones that Mozy choked on).
I'm also using it to back up the whole of my MacBook to my server, which works well and is painless to use. It is also running within my main Windows virtual machine, continuously backing up all my local source code repositories (CrashPlan on that virtual machine is also simultaneously backing up to their cloud).
So it all seems to work pretty well. I am about to sign up for their $6 a month service (to cover more than one computer and various extra features).
It isn't perfect though; I have a few complaints.
The first one is fairly minor: it isn't very good at estimating the remaining time. It very naively assumes that if it has backed up lots of compressible stuff so far, the rest of the backup set will be similar. For example, a few minutes ago it was claiming that I had 2.3 days left to back up 150GB...
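To illustrate, here's roughly the calculation I suspect it is doing (purely a guess on my part; the function and names are mine, not CrashPlan's):

```python
# A sketch of the naive estimate it appears to make (hypothetical --
# I have no knowledge of CrashPlan's actual internals).
def naive_eta_days(bytes_remaining, bytes_sent, seconds_elapsed):
    # Extrapolates the effective rate seen so far over everything left,
    # i.e. assumes the remaining data compresses/dedupes just as well.
    rate = bytes_sent / seconds_elapsed      # effective source bytes/sec
    return bytes_remaining / rate / 86400.0  # seconds -> days
```

If the first chunk of a backup set is mostly compressible documents and the rest is already-compressed music, that extrapolation is going to be wildly optimistic.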
Its bandwidth throttling is very inflexible. You don't get to vary the throttling based on time of day, and the selection of speed limits is oddly restricted; for example there is no option between 300Kbps and 1Mbps!? I'd really like to have it use very little upstream bandwidth during the day and fill most of the line overnight.
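Something like this is all I'm after (just a sketch of the behaviour I want; CrashPlan has no such setting today):

```python
# A schedule-based cap instead of a single fixed limit. Hypothetical --
# this is the feature I wish existed, not anything CrashPlan provides.
from datetime import datetime

def upload_cap_kbps(now=None):
    hour = (now or datetime.now()).hour
    if 8 <= hour < 23:
        return 100    # trickle during the day
    return None       # None = uncapped overnight
```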
It has, for me, one very serious bug/bad bit of design. There are a bunch of settings around whether you are backing up over the LAN (local network) or the WAN (the Internet), speed limits for example. So you can tell it to use unlimited speed when backing up to other local PCs while limiting the bandwidth used when backing up to their cloud.
All good, except they have screwed up their LAN/WAN detection. Rather than making use of netmasks, they decide that if they are backing up to a publicly routable IP address then it must be a WAN destination. So, for example, if I want my MacBook to back up to my local server, that is treated as a WAN connection because both machines are on public IP addresses. As a result I can't restrict the Internet bandwidth the Mac uses while also having it back up locally at full speed.
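The check I'd expect is a simple subnet test, something like this (a sketch; in practice the address and netmask would come from the machine's real interface configuration):

```python
# Decide LAN vs WAN by asking whether the peer is on my subnet,
# not whether its address happens to be publicly routable.
import ipaddress

def is_lan_peer(peer_ip, my_ip, netmask):
    local_net = ipaddress.ip_network(f"{my_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(peer_ip) in local_net

# Two publicly routable addresses on the same subnet are still LAN
# (198.51.100.0/24 is a documentation range standing in for my real IPs):
print(is_lan_peer("198.51.100.20", "198.51.100.10", "255.255.255.0"))  # True
```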
The final major negative point is the lack of account-wide de-duplication of data. If you have two copies of a file on one machine, it will only back the data up once. If however you have the same file on two different machines and they are both backing up to the cloud, that data will be sent twice. I believe they have plans to address this, which will be good.
I do wish that the local backups were stored as plain files, rather than stuffed into their own proprietary databases. But I understand why that is, as they actually back up blocks of data rather than files.
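That block-based design is also what should make account-wide de-duplication straightforward. My mental model of it looks something like this (a sketch, not their actual format; the block size and names are mine):

```python
# Split each file into blocks, hash each block, and only upload blocks
# whose hash hasn't been stored before. If the block store is shared
# across an account, the same file on two machines uploads only once.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024   # 4MB -- an arbitrary choice for the sketch
stored_blocks = {}             # digest -> block, standing in for the cloud

def backup_file(path):
    uploaded = 0
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(BLOCK_SIZE), b""):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in stored_blocks:   # seen this block before?
                stored_blocks[digest] = block
                uploaded += len(block)
    return uploaded  # bytes actually sent after de-duplication
```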
I have also had the backup stall a few times, for several hours, seemingly because the server my client was talking to was having problems. That's something I will have to keep an eye on.
You do, of course, have to worry about whether it will stay unlimited, though they have at least been very clear in explaining that they intend to stay unlimited, and their reasons for thinking they can where others have not. Only time will tell.