Team Foundation Server - E-Mail Alerts

I love things that remind me to do things. I’m a forgetful person, and I need prompts. So I guess that's why I’m a big fan of e-mail alerts, and one of the first things I do in TFS is configure them.

The only annoyance I’ve ever hit with TFS alerts is the rather strange inability to set up authentication for the SMTP server used for the alerts – I like to use an external server, as on a lot of the jobs I’m involved in the users are widely geographically dispersed, and all on different providers.

While you can correct this shortcoming by editing the TFS web services config file (found at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config), I don’t like this approach, as I feel that it is risky – you never know if something will overwrite this file, especially a service pack etc.
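
For reference, the edit in question is usually just a standard .NET mail settings block dropped into that web.config – something along these lines (the host name and credentials below are placeholders, not real values):

<system.net>
  <mailSettings>
    <!-- Placeholder smart host and credentials - substitute your own -->
    <smtp deliveryMethod="Network" from="tfs@yourdomain.example">
      <network host="smtp.yourprovider.example" port="587"
               userName="alerts@yourdomain.example" password="your-password"
               defaultCredentials="false" />
    </smtp>
  </mailSettings>
</system.net>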

So, what do you do?

Simple: install the SMTP Server feature of IIS on the server, and run a locally restricted SMTP service that relays to your smart host.


Once you have installed the SMTP Server, it’s important that you secure it – ideally you want to grant relay permissions only to the local machine (127.0.0.1). Then go to the Delivery tab and click Advanced. Specify the details of your external smart host – if you need to provide authentication, you will find the relevant options under the Outbound Security button.

Then open up the Team Foundation Server Administration Console, click on Application Tier, and then Alert Settings (over on the right). Fill in the boxes, and away you go.


Oh, and one last thing – make sure the SMTP Service is running!
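
If you want a quick end-to-end check that the local relay is accepting and forwarding mail, a few throwaway lines of C# will do it (the addresses here are placeholders):

using System.Net.Mail;

class SmtpSmokeTest
{
    static void Main()
    {
        // Talk to the locally restricted IIS SMTP service; it relays on to the smart host.
        var client = new SmtpClient("127.0.0.1", 25);

        // Placeholder addresses - substitute real ones for your domain.
        client.Send("tfs@yourdomain.example", "you@yourdomain.example",
                    "TFS alert relay test", "If you can read this, the relay is working.");
    }
}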

The final step is to use the excellent Alerts Explorer tool that is in the TFS PowerTools pack to set up your alerts.


Team Foundation Server - Automated Backups

One of the things that never ceases to make me smile is the number of companies running Microsoft’s Team Foundation Server software … who don’t back it up.

For those that don’t know, TFS can be looked at as a central store for pretty much all the work that goes on inside a software company. Neglecting to back it up is opening yourself to disaster.

As the TFS databases are nothing more than SQL Server databases, you can back them up in the normal SQL way, or use a tool (there are multiple databases, and you have to capture them all at the same time and in the same state – not always easy to achieve). My tool of choice for handling these backups is actually part of the Microsoft Team Foundation Server PowerTools, and integrates neatly into the TFS Administration Console.

The first step after installing the tools is to Create Your Backup Plan.


Some things that you will need before you start are:

* A network location where you want your backups to go
* An idea of how long you want your backup retention to be (defaults to 30 days)
* An idea of how you want to schedule your backups – the default nightly runs at 2am local time

Now, my TFS server doubles up as the main file server, so I cheated and entered a local path as the Network Backup Path (these backups are in turn synced off to a remote device nightly). The path was accepted, but it failed the Readiness Check – as it’s not a network path.

One strange gotcha: I chose to run Full and Transactional backups, leaving Differential off – and you have to uncheck any day selection boxes that are still ticked before you can continue.

The other thing that caught me out was that the Grant Backup Permissions and Backup Tasks Verification steps were failing, saying that my own account did not have suitable rights for the backup location (strange, as I’m an Admin, and I have full rights to both the NTFS folder and the share). After checking the TFS and SQL Server logs, the problem turned out to be that my target share had a space in its name. Putting quotes round it doesn't help either – it just causes something else to fail.

And the third, and final, thing? Don’t use the Local System account. Remember to set up your own account, restricted where possible, for all services.


Mvc Scaffolding - Part two

I need to start tonight's blog post with an apology … in last night’s post, I neglected to explain what NuGet is – I kind of took it for granted that you developers out there would know.

NuGet is a Microsoft-backed package manager that is intended to make the introduction of new developer frameworks to Visual Studio and your projects easier – removing the need for you to remember all the assembly dependencies. All you do is open the Package Manager Console (View, Other Windows, Package Manager Console) and key in a few PowerShell commands – and NuGet will pull down the dependencies onto your machine and update your open project. Simples.

So, let’s get rocking. We have our sample project (MVC 3 Razor) up and running. For this sample we are going to build a very simple prototype around an order, and the information it would hold.

We need the following classes:

Address.cs

public class Address
{
    [Key]
    public Guid AddressID { get; set; }

    public string AddressLine1 { get; set; }
    public string AddressLine2 { get; set; }
    public string AddressLine3 { get; set; }
    public string City { get; set; }
    public string County { get; set; }
    public string PostCode { get; set; }
    public string Country { get; set; }
}

OrderItem.cs

public class OrderItem
{
    [Key]
    public Guid OrderItemID { get; set; }

    public int Quantity { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
}

Order.cs

public class Order
{
    [Key]
    public Guid OrderID { get; set; }

    public Address DeliveryAddress { get; set; }
    public Address InvoiceAddress { get; set; }

    public DateTime OrderDate { get; set; }

    public List<OrderItem> OrderItems { get; set; }
}

You’ll probably notice the [Key] decoration in that lot – these mark the key properties, which Entity Framework Code First needs in order to set up the necessary keys and indexes for CRUD operations on the database. You will need to add a using statement for the System.ComponentModel.DataAnnotations namespace to each of your class files.

So – that defines an order which has two addresses (one for delivery and one for invoicing), and a list of items.

Let’s go ahead and create a controller, and the CRUD pages, for the Order. Open up the Package Manager Console, and type in

scaffold controller Order

You will see that the MvcScaffolding framework gets busy and creates the relevant CRUD pages – but most importantly it creates a context class, also within the Models folder. In this example it’s called ScaffoldingExampleContext.cs. This brings us to the first couple of annoyances with Scaffolding and EF Code First.

- The database context gets placed in the same project – no easy separation here to give you a distinct data access layer.

- The context connection string expects you to have SQLExpress installed, and operating as a named instance of “SQLExpress”.
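
Incidentally, the generated context itself is nothing exotic – it’s a plain EF Code First DbContext, roughly along these lines (a sketch only; the namespace and exact contents depend on your project):

using System.Data.Entity;

namespace ScaffoldingExample.Models
{
    // Rough shape of the generated context - further DbSet properties are
    // added as you scaffold controllers for the other classes.
    public class ScaffoldingExampleContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }
}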

On my machines, that second assumption does not hold, so we need to pop into web.config and add a connection string to the configuration block:

<connectionStrings>
  <add name="ScaffoldingExampleContext" connectionString="data source=.;Integrated Security=SSPI;Database=ExampleData" providerName="System.Data.SqlClient"/>
</connectionStrings>

Fire up your project, and go to /Orders/ on the site that launches – you will be presented with a very simple CRUD interface for the Orders.

You would need to run the same MvcScaffolding command (scaffold controller <classname>) against each of the classes, and then you have a little bit of manual plumbing to do – such as navigation, and in this case, the actual selection of things like Address and Items.
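
In this example, that means two more trips to the Package Manager Console:

scaffold controller Address

scaffold controller OrderItem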

Scaffolding, at this time, cannot handle complex objects such as other classes or list definitions to give you a total-coverage CRUD interface, but it can generate most of the simple stuff that makes up a lot of the work involved in most prototypes.


Mvc Scaffolding - Part one

I love learning new things connected to software development – and anything that has the potential to save me time is always high up on the list of things to investigate. After all, time is money – either for yourself when you are a contractor, or your employer if you are an employee!

I’m sure you all have been using MVC (Model View Controller) structures for a while now, but I’ve only recently started looking at the “Scaffolding” approaches that are out there – these are quick, simple ways of building, essentially, your prototype application based only on the base models. So you create one class, and the Scaffolding framework builds the views, and the controller for you. Job done. Oh, and it also sorts out the persistence (read: database) for your models too.

Sounds like it’s too good to be true?

Well, I’m not going to say it’s perfect – in fact, it seems to have some insanely annoying “defaults” – some of which I’m going to introduce you to as we go through this introduction to Scaffolding over the next few days / posts.

First off we need to install the scaffolding stuff – the easiest way to do this is to use NuGet in Visual Studio 2010.

Create a new MVC 3 Razor project (my preference!).

Open up the Package Manager Console, and type in:

Install-package mvcscaffolding

Sit back and wait while the packages are installed. A reference to the scaffolding libraries will be added to your project automatically.

Now, I’ll go into actually using the scaffolding library in my next post, but first I want to bring your attention to what, in my opinion, is one of the most annoying things about this library.

It uses Entity Framework Code First (EF Code First). Basically, this is a layer over Entity Framework that lets you write your models and have it handle the generation of the database schema for you. So far so good. But it looks for an instance of SQLExpress. Total pain in the ass in my case, as I don’t have it installed as “SQLExpress” … so it fails out of the box. Wonderful. There isn’t a nice way that I’ve found to change this across the board, so you are down to changing it per project. Again, I’ll cover this in my next post, as you really need to have at least one model “scaffolded” first so you know what context names etc. will be used.

A year of contracting ..

It’s just under a year since I decided to return to the wonderfully unstable world of contracting in Software Development.

And I have to say it’s been an interesting time – it has certainly had its ups and downs …

But I’ve gotten to work on some truly interesting projects, been involved with some inspirational start-ups and extended my skills.

One thing that caught my attention early on in my re-entry to contracting was the telecoms sector – this has always been something I’ve been interested in, and having worked a little with Cisco kit in the past, I figured it was time to push forward with it. So I have.

Over the last year I’ve attained my Avaya certification, become an Avaya Technical Partner, and I keep pushing – extending my skills into other makes, including Cisco, Mitel and Nortel. I’m loving the variety that this industry brings to the table, and it’s a breath of fresh air from “normal” line-of-business applications.

Most scarily, I haven’t had to advertise for work. I’m starting to think that I might have to, as new projects are starting to thin out (damn economy), but so far things have been good. That isn’t to say I’m not keeping my eye on the job market, looking for that ever-elusive “perfect” position. You know, that fabled one that is a joy to work at, has free coffee and means you still have a weekend? Ok, I know, maybe a stretch there – I’d give up SOME of the weekend I guess :)

I wonder what the next 12 months will hold.

Software Development Buzzwords - Scrum and Agile

Everywhere you look now it seems the software industry is singing the praises of Agile development practices – and in particular Scrum.

The approach itself was identified back in the late 1980s, although it was only introduced to software development in the 1990s.

So what exactly is Scrum? Well, it means different things to different people, and it’s an approach – not a hard and fast set of rules, or an exact process. Essentially, it’s all about breaking work down into small segments that can be done in a short time frame of at most 30 days (known as a sprint). At the end of each sprint you have a new version that you can, if you so desire, release.

Many companies look at Scrum as being a silver bullet to solve all their development best practice woes. It’s not. If you have problems with your best practice (for example, source control, bug tracking and such) then Scrum will NOT magically solve them for you – and I can’t help but get annoyed when people imply that it can.

Or even when people imply that there is only ever one way to “do” Scrum. I fear that these days the words Agile and Scrum have become buzzwords for the development industry – much like Extreme Programming (XP) was a few years ago. Will Scrum head the same way? I don’t think so – most development houses have probably been running a Scrum-style approach without even realising it …

Small Dev Team - Tips for work scheduling

I’ve just been planning some work for fellow developers, and thought I’d throw up some notes on what I feel makes a good approach to working as a team.

Know the “critical path”
Before you get started, try and identify what problems you will encounter (easier said than done), and be sure to have a clear understanding of how you are going to approach the development and what needs to be done in what order (i.e. what will cause a delay if it’s not done in time!).

Keep work segments small
Try and ensure that any given task can easily be completed in a day – that includes researching it, building it and ultimately testing it. If it’s going to take longer, then break it down further. Keeping tasks small like this helps people stay motivated, as they feel that the project is progressing a little bit further every day – and it’s measurable.
If you find that you are stuck on something, or it’s taking longer than you expected, look at it again – sometimes scrapping it and starting again completely, although annoying, is the best thing to do.

Make use of people’s strengths
Development is a very broad term, and there are many subtle facets to it. For example, some developers are better at GUI work, others better at service or back-office code. Others might be more skilled at Silverlight, ASP.NET or maybe WinForms. Don’t always assign tasks purely based on strengths though – we all like a bit of variety, and it’s important not to let things get too routine.

Use source control
Now this one I really cannot stress enough. I don’t care if you use SVN, VCS, Team Foundation Server, Git, or whatever.
No matter how small your team, you MUST run source control. Personally, I have every project I work on (yes, even if I’m the only developer) under source control. Why? It means I know what I’ve done, what’s remaining to do (Trac or TFS are great for this) and it gives me the all-important backups. However, for source control to be really useful, it’s important to have some rules. Some simple ones that I tend to mandate are:
- Code is checked in at the end of every day. If it’s not finished (you did read the above, right?), then shelve it (again, TFS is very good for this), but locks must be cleared and code must go in.
- Run code analysis automatically
- Run gated check-ins. If you break the build, sorry, you are the one that has to fix it.
- Comments rule. Check-in comments should NOT be optional.
- Assign any tickets. If you are using TFS, Trac, or similar, then the relevant work item tickets should be linked to the check-in. Simple really.

.NET Exception Handling - The right and the wrong

Over the years I’ve seen both ends of the spectrum when it comes to handling unexpected errors in code. Having overzealous exception handling in an application is just as bad as having none (or worse – it hides the problem!). Junior developers tend to go to the extreme, capturing and suppressing everything, as they’ve had it drummed into them that code must be “stable” and never “fail”. Well, I guess it depends on what definition you put on stable and failure. Personally, I would rather have an application that behaves, does what it is told and doesn’t trash my data when something unexpected happens – I’ve seen lots of commercial apps, especially finance-related ones, that trash your data when they crash – not exactly fun when you then spend the next three weeks working out what’s missing since the last backup.

There is an excellent article that came out in 2005 on Exception Handling Best Practices in .NET (http://www.codeproject.com/KB/architecture/exceptionbestpractices.aspx), however, I have to admit, I don’t always follow them perfectly.

Personally, I try and follow these rules:

Never swallow an exception
If an exception has occurred, it’s because something isn't right. At the very least, log the FULL details (so that includes the stack trace) centrally. You do have centralised logging, right?
This is actually probably one of the most contentious issues among developers – some say you should never grab exceptions just to log them; instead you should let them bubble up to a higher layer to be handled. My feeling is that if the exception can be safely handled, without causing risk to user data, then it’s fine to handle it – as long as it’s logged!

Never re-throw an exception the wrong way
You’ve caught an exception and you want to pass it further back up the stack. Don't use throw new Exception(...); or throw ex;, as this will hide the original exception's call stack and message / inner exception information. Simply throw it again (i.e. throw; ).
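
A minimal illustration (DoWork and Log here are just stand-ins for your own code):

try
{
    DoWork();
}
catch (Exception ex)
{
    Log(ex); // record the full detail, including the stack trace

    // Wrong: throw ex; resets the call stack, so the log points here rather than at the real fault.
    // Right: a bare throw preserves the original stack trace and inner exception chain.
    throw;
}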

If an object implements IDisposable, use using
There is always a reason why an object implements IDisposable, and if it does, be sure to use the using keyword to ensure all resources are released when the object is done.
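
For example (the connection string and query here are just placeholders):

// Requires: using System.Data.SqlClient;
// using guarantees Dispose() is called even if an exception is thrown part-way through.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
{
    connection.Open();
    int orderCount = (int)command.ExecuteScalar();
}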

If you are expecting an exception, only catch that specific exception
Ok, I know that sounds strange, but bear with me! If you are handling conversion from GUI controls to, say, decimal values, then the odds are you are going to hit exceptions – and usually conversion-related ones. So instead of trapping Exception, you should be trapping the specific exceptions, such as ArgumentNullException, FormatException or OverflowException.
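
For that conversion example, something along these lines (priceTextBox, ApplyPrice and ShowValidationMessage are hypothetical stand-ins for your own UI code):

try
{
    decimal price = decimal.Parse(priceTextBox.Text);
    ApplyPrice(price);
}
catch (FormatException)
{
    ShowValidationMessage("Please enter a valid number.");
}
catch (OverflowException)
{
    ShowValidationMessage("That number is too large or too small.");
}
// Anything else is unexpected - let it bubble up to be logged and handled higher up.

(For simple input validation, decimal.TryParse avoids the exception altogether.)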

If all else has failed, record the detail before you exit
Too many applications these days just throw up the generic “something knackered” .NET exception dialog and then exit. What do you do next? Start it up again. And what if it dies again? How are you supposed to report the problem? If you are building an application, make use of Application.ThreadException and AppDomain.UnhandledException (note that the latter is really AppDomain.CurrentDomain.UnhandledException, and you will need to hook it on each AppDomain, obviously) – record as much information as you can about the exception, and THEN terminate.
You can even go one step further and have the application automatically report the fault – I tend to do this for any programs that are being used internally in an organisation, and I know that they will always have access to reporting APIs (or SQL Servers).
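
In a WinForms application, the wiring looks something like this (LogFatal and MainForm are placeholders for your own logging call and startup form):

// Requires: using System; using System.Windows.Forms;
[STAThread]
static void Main()
{
    // Exceptions thrown on the UI thread.
    Application.ThreadException += (sender, e) => LogFatal(e.Exception);

    // Exceptions thrown on any other thread in this AppDomain.
    AppDomain.CurrentDomain.UnhandledException +=
        (sender, e) => LogFatal(e.ExceptionObject as Exception);

    Application.EnableVisualStyles();
    Application.Run(new MainForm());
}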

When you compare .NET to my early days developing in Delphi, there is a stark contrast in the way you treat exceptions. In Delphi, you used exceptions a lot to control your program flow (you threw exceptions pretty much whenever anything went even a little wrong). These days, things are a little more restrained – exceptions are increasingly reserved for cataclysmic events that mean the application can no longer be relied upon.

Ultimately, no matter what happens when an exception is triggered, it’s down to the developer to take a reasoned approach to what to do next. Is it possible to continue? Or more importantly, is it possible to continue with no risk to stability, user data or other systems? If in doubt, exit!

I’ve personally built a series of frameworks that allow me to handle exceptions, logging and reporting very effectively, however, many developers are not in a position to do that (especially if you are working in a very small team, or are perhaps an independent). Or maybe you’d just rather use a product to help you. If that's the case, I can heartily recommend checking out Exceptioneer.

Cisco - Introducing Cisco Configuration Professional (aka SDM 2.0)

Carrying on from my earlier posting about SDM (Security Device Manager) I’d like to introduce you to Cisco Configuration Professional – also available from Cisco CCO.

In a nutshell, Cisco Configuration Pro is basically SDM 2.0. A lot of the screens incorporated within it are plainly the old SDM screens, although it does fix a number of the “new” issues that you encounter with Windows 7 and Vista. And at least it’s supported these days, unlike SDM.

When you first fire it up you are greeted with a nice, new, clean feeling UI. This quickly passes when you see the actual configuration screens!



My biggest gripe is that when you fire it up, and provide the details for the router(s), you still have to mess around and hit “Discover” in order to get it to actually interrogate the devices – but at least this now occurs in the background.

Some things of note though: CCP can actually receive events from IPS modules so you can get alerts – which is cute – as are the extended port / protocol monitoring screens.

It also runs a local web server if you install the software onto a PC, to host and operate the (still Java) app.

At least it works on Windows 7 though. Sigh. Can’t wait for a proper app, though.

Cisco SDM - Installing onto the router

If you are new to Cisco routers, and especially the SOHO range, Security Device Manager (or SDM) can be an absolute godsend – especially if you are only used to working with routers via their web interface.

If you pick up a SOHO Cisco router, you can find out if SDM is installed by simply pointing your web browser at its IP address. But be sure to check both http and https, as it can easily be configured to only respond on https (which is obviously the more ideal situation).

However, if SDM is NOT installed, and you want to install it, here are the steps to carry this out – note that you need the SDM installer first, which is available from Cisco CCO. At the present time the latest version is 2.5, which is actually rather ancient but still does the job.

So, to work.

* Start off by unpacking the zip file to a decent folder on your computer – somewhere temporary is fine.
* Open up the folder and run setup.exe.
* You might get a warning about not having the JRE installed – you will need this, so if you haven’t got it installed follow the prompts and install it.
* Follow the wizard through until you are asked where to install SDM. You have three options: This Computer, Cisco Router, and Both.

If you install on “This Computer” or “Both”, the installer will unpack SDM onto the computer itself – and if you select Both or Cisco Router, it will be installed onto the router – so you can access it anywhere (that you have the JRE installed) through your web browser by pointing it at your router. This can save you some hassle later, but needs space on the router – if you have the memory upgrades installed, this shouldn’t be a problem however. For this example, I’m installing on the “Cisco Router”.


The next prompt you will have is for the IP address and login details of your Cisco. Note that your user will need to be privilege level 15 in order to carry out this install.
You will then be prompted to select the modules to install – I would select Typical, as this will get the installer to only install what your router is capable of (based on your IOS install).
… and after a while, the install is done :)
To confirm, simply fire up your browser and point it at your router …


Now, once you have gotten used to your router, it’s time to start exploring the command line – for that, you want PuTTY!

Notes:

I’ve had a lot of problems with SDM on some newer machines – because of this I keep an XP virtual machine handy, with plain old XP running IE 6 and JRE 1.4.2_19 – these work, and don’t seem to cause any problems …
You can get the old version of JRE here.