Backing up .. with RoboCopy

We all need to do backups. But do you do them?

I have lots of machines to back up, and all of them back up to a centralised location (a NAS), and then ultimately onto external hard disks for further redundancy. Lots of hassle, I know, but worth it in the worst-case scenario of something failing.

So … automation becomes key – simply so you don't have to manually drag and drop files every day / week / month / whatever your backup frequency is.

Microsoft have had a great command line application, Robust File Copy (Robocopy), out there for a while, and it truly is amazing. Most importantly, it can handle partial copies and resuming – which makes it ideal for backups, as it will only copy the changes.

There are a LOT of options for Robocopy – you can get a complete list by opening a command prompt and typing in “robocopy /?” and hitting enter. Be prepared to scroll.

The ones that I find the most important are listed below, with a worked example straight after the list:

/s = Copies subdirectories (but only if they have content)
/e = Copies subdirectories (even if they are empty)
/z = Copy using restartable mode (so if you have to stop it, you can resume)
/purge = Deletes files from the target if they no longer exist on the source
/v = Verbose logging – I like to see what's happening
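
As a worked example, a backup of a local data folder to a NAS share could look something like the following – the paths, share name and log file are made up purely for illustration, so swap in your own:

robocopy "D:\Data" "\\NAS\Backups\Data" /e /z /purge /v /log:"D:\Logs\Data-backup.log"

The /log switch simply writes the verbose output to a file, which is handy when the copy runs unattended. Drop a line like that into a .bat file, point Task Scheduler at it, and the backup looks after itself at whatever frequency you choose.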

Other than that, you use it the same as you would a file copy – there is support for jobs (i.e. you can save preset copy parameters and reuse them very easily), but beyond the quick sketch below I’ll leave that to you to explore.
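
As a rough sketch of the job support (again with hypothetical paths), you save a set of parameters to a named job file once, then reuse it whenever you like:

robocopy "D:\Data" "\\NAS\Backups\Data" /e /z /purge /v /save:NightlyData /quit

robocopy /job:NightlyData

The /save switch writes the parameters out to a job file and /quit stops it from actually copying at that point; /job then runs the copy using the saved parameters.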

.NET 4.0 – Platform Update 1

Today saw the release of what is, effectively, .NET 4.1 – although for some rather odd reason Microsoft have decided to name it slightly differently: .NET 4.0 Platform Update 1. Now there is a mouthful (and it is no doubt going to be highly confusing for end users when you ask them to install the .NET 4.0 Platform Update 1 runtime …)

So what’s new? Well, if you are not using Windows Workflow Foundation, it seems you might as well skip this update – as that’s what the changes are in. But then again, there are some very interesting changes here, with the addition of state machine workflows (as well as SQL backed persistence, which is supported in Azure).

Although this update is not really going to apply to many general .NET developers, what annoys me is the naming. And it seems I’m not alone. Why on earth someone had the bright idea to come up with this insane name I really don’t know. And to release it as three packages too.

I wonder, are we going to see the demise of the good old major.minor.release.build style of version numbers in favour of something more freehand? If we do, I think it’s a step in the wrong direction, walking towards a versioning / distribution hell …

Team Foundation Server 2010 – Process Template Editor

I can almost guarantee that if you use TFS, you will need to edit a process template sooner or later; the default forms that TFS provides, although good, always need tweaking to fit how your team works.

I even find the EMC Scrum pack needs tweaking at times (I mean, why is there no Assigned To field on a bug??).

So, the easiest way to do this is to ensure you have the Team Foundation Power Tools installed, fire up Visual Studio, then click Tools and select Process Editor – then you get to choose what you want to edit!

[Screenshot: the Process Editor options under the Tools menu in Visual Studio]

The ones I most commonly end up editing are Work Item Types – and specifically, I tend to cheat and edit them through this tool directly on the server.

Now, be sure to abide by all the warnings when editing process templates. These changes kick in immediately, and affect EVERYONE on the dev team using this project. You have been warned.

Also remember to export any modifications and re-import them on other project collections that use the same template for consistency.
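
For completeness, the same export / import can be done from the command line with the witadmin tool that ships with Team Explorer 2010, which is handy for pushing a tweaked work item type around consistently. The collection URLs, project names and file name below are just placeholders:

witadmin exportwitd /collection:http://tfsserver:8080/tfs/CollectionA /p:ProjectA /n:Bug /f:Bug.xml

witadmin importwitd /collection:http://tfsserver:8080/tfs/CollectionB /p:ProjectB /f:Bug.xml

(importwitd also takes a /v switch to validate the definition file without actually importing it – worth doing before touching a live collection.)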

Cloud Computing – Too risky?

Amazon recently posted their response to the outage that hammered their EC2 platform. It would seem that the outage itself was triggered by a piece of network maintenance that was not carried out properly, which in turn triggered a rather catastrophic chain of events within the custom Amazon systems. Ultimately, it resulted in data loss, as well as downtime, for many customers – such as Heroku, who posted their own post-mortem of the incident here.

Microsoft Azure also suffered problems recently, with parts of their system becoming unavailable. The first, in March, was blamed on an OS upgrade that went awry; then in April there was an Azure Storage outage, for which I’ve not actually seen any real detail on the cause (if anyone has a link, please point me to it – I’d love to know what happened). However, I think the stark contrast between these two vendors is the transparency and information given, both at the time and after the fact.

Amazon have gone the whole hog: totally admitting the fault, identifying exactly (in full Technicolor) the issues that occurred, and resolving – publicly – to fix them. They have also issued a decent amount of compute time as a refund. Microsoft? Well, I’ve not heard of any refunds, even partial ones, for the outages that occurred on their platform. I’ve also not heard of any refunds related to outages on another of their cloud platforms, the Business Productivity Online Suite, which has had its own problems of late.

So is using cloud technology too risky? In a nutshell, no, as long as you are sensible. I can’t say that I would advocate putting everything in the cloud unless it’s totally stateless and can carry on operating if any SQL instances and the like disappear. If you need to store state, or anything really sensitive, I still prefer the hybrid model – but I guess that’s because the vendors need to do more to convince me that they are as secure as they proclaim to be.

The biggest problem for people who have moved to the cloud and then suffered outages is, quite simply, education. They have put applications up into the cloud and expect them to be highly available by default. That’s not the case. Unfortunately you still need to understand the requirements of highly available design, and be sure to implement them – including setting your application up in different zones / regions, and ideally different geographical locations! If you don’t, all you are really doing is running a small cluster after all.

I know that many people will be screaming about the EC2 outage in particular, where this was caused by human error. But I’d love to see them do better in their own data centre. Human error occurs everywhere, but where do you think the resources (i.e. the skills AND money) are to mitigate it better? On premises with yourself, or out in a cloud?