Unable to create C# unit tests - VS2015 and Win10

I just encountered something I thought was odd - Visual Studio 2015 Enterprise was complaining when trying to create a new C# unit test project. It might be because I'm on Windows 10, I don't know - I only run Windows 10 for development now.

The error was:

Error: Could not load file or assembly Microsoft.VisualStudio.JSLS Version=14.0.0.0

Solution:

  1. Mount the VS 2015 ISO
  2. Run E:\packages\JavaScript_LanguageService\JavaScript_LanguageService.msi
  3. Restart Visual Studio

Updating a vCenter 6.0 Appliance

Updating from vCenter 6.0.0a to 6.0.0b would be a straightforward task, I'd thought. Not so, it seems.

First off, the appliance no longer auto-updates or has an admin UI - as it did in v5.
Now you have to download the patch ISO (not the normal install one), persuade it to mount and run a number of commands.

Simple, isn't it?

The steps to do an upgrade are:

  1. Find the patch ISO you need from https://my.vmware.com/group/vmware/patch#search and download it.
  2. Fire up the vSphere client, and connect to the HOST that is running the Appliance.
  3. Open the Console for the Appliance VM
  4. Mount the ISO in the normal way
  5. SSH to the Appliance (if you haven't enabled this, you need to first, obviously)
  6. Run: software-packages stage --iso
  7. Accept the EULA (read it first, of course)
  8. Run: software-packages install
  9. Reboot appliance
  10. Repeat process for other patch ISOs as required
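For reference, the SSH part of the process ends up looking roughly like this (this is the appliancesh on the vCSA; exact flags can vary by build, so check software-packages --help first, and the reboot command is from memory):

software-packages stage --iso
software-packages install
shutdown reboot -r "Applying patch"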

I'm wondering why it has to be like this ... what was wrong with the semi-automated web interface method?

Log4Net and Splunk

Splunk is one of the most impressive "On Premises" log aggregation tools that I have ever come across. Being able to bring a large number of disparate data sources together into one combined index is truly useful in a modern Ops environment.

One of the things I find helpful from a development perspective is consistent logging - and too often this is something that development teams overlook until things break.

However, getting data from a .NET / C# application into Splunk is not difficult, so these days I try to log absolutely everything (well, come on, the free tier gives you a decent chunk of an allowance too!).

The first thing I do is create a new index in Splunk - you do this by selecting Settings, Indexes and then clicking New.
The only box you need to fill in is the index name - leave everything else at the defaults for your installation.

Once you have the index created, we need to set up the input. Settings then Data Inputs will take you to the right screen. Click Add New next to UDP. Pop in an unused port, say 8081, then click Next. Make sure you select the index you created earlier, and specify the type as Generic Single Line - this basically tells Splunk it's unformatted data and not to pre-parse it.
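If you'd rather do the same thing in config files, the equivalent stanza in inputs.conf would look something like this (a sketch - the index name is whatever you created above, and the sourcetype string may differ slightly on your Splunk version):

[udp://8081]
index = dev_logs
sourcetype = generic_single_line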

The next thing you need to do is actually get your code to submit data to Splunk -- the easiest way I have found is to use log4net; in Visual Studio, install the log4net NuGet package and it will take care of creating the relevant config entries. If, like me, you prefer to put your logging code into a common assembly and then reference it elsewhere, remember to copy the assembly redirects and log4net-specific entries into your other configs (or things just don't work!).

In your code, you will probably have a common class for sending log data - something like:

using log4net;
namespace YourApp.Common
{
    public static class Logging
    {
        /// <summary>
        /// Application or Class that should be identified with the log statement that is passed
        /// </summary>
        public static string Application { get; set; }
        /// <summary>
        /// Initialise logging - must be called at application start
        /// </summary>
        public static void Initialise()
        {
            log4net.Config.XmlConfigurator.Configure();
        }
        /// <summary>
        ///  Log an information message
        /// </summary>
        /// <param name="message"></param>
        public static void Info(string message)
        {
            ILog logger = LogManager.GetLogger(Application);
            logger.Info(message);
        }
    }
}

That way you can specify the application name to be passed through with the logging data (handy for Splunk, as you can throw everything into one Index and then break out specifically what you need later) - and use the class from pretty much anywhere.

Your web.config needs to look something like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <log4net>
    <appender name="UdpAppender" type="log4net.Appender.UdpAppender">
      <param name="RemoteAddress" value="splunk-server" />
      <param name="RemotePort" value="8081" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%level - %date{MM/dd HH:mm:ss} - %c - %stacktrace{2} - %message" />
      </layout>
    </appender>
    <root>
      <level value="ALL" />
      <appender-ref ref="UdpAppender" />
    </root>
  </log4net>
</configuration>

Finally, call away to get your data logged:
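Something along these lines, using the Logging class from above (the application name and message are just placeholders):

using YourApp.Common;

// Once, at application start-up (e.g. Global.asax Application_Start or Program.Main)
Logging.Application = "MyWebApp";
Logging.Initialise();

// Then from anywhere in your code
Logging.Info("Order 1234 submitted successfully");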



And that, folks, is it - you can now push .NET C# app log data into Splunk.

A couple of points that some people might question me on:

Why use UDP Appender and not TCP?

UDP is a lossy transmission protocol, and it is entirely possible that log messages do not make it into the Splunk indexer; however, it is significantly lighter weight than establishing TCP/IP connections.

Can I log to multiple locations - such as Splunk but also a text file?

Yes - add another log appender; the log4net docs are pretty good on this one.
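As a sketch, a rolling file appender alongside the UDP one would look something like this (the file path and size limits are just examples) - and don't forget to add a matching <appender-ref ref="FileAppender" /> under <root>:

    <appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="logs\app.log" />
      <appendToFile value="true" />
      <rollingStyle value="Size" />
      <maximumFileSize value="10MB" />
      <maxSizeRollBackups value="5" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%level - %date{MM/dd HH:mm:ss} - %c - %message%newline" />
      </layout>
    </appender>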

Is there much point about having the date time in the log message?

That depends - if you are worried that the messages might get cached somewhere and don't always trust the date / time that Splunk adds to its indexed entries, then you probably want to keep it. Otherwise feel free to drop it from the pattern.

EE / Apple Wifi Calling

I've moved house, and the EE signal sucks. "No problem", I thought, "EE had enabled Wifi Calling a few days earlier - I'll give it a shot".

It works generally ok - but only on my wife's iPhone, and not mine. It seems that EE have only enabled it on personal contracts and not corporate contracts. They have, however, pushed out the carrier profile update so you see the option - although it does absolutely nothing but tease!

The one gripe that I have, other than not being able to use it, is that whenever the phone sees a tiny bit of network signal it tries to switch from Wifi - which means you drop the call. This happens way more than I'd put up with generally, and the only way round it that I've found is to enable Airplane mode and then re-enable Wifi. Not the best user experience, but I guess this one is Apple's mistake!

Here's hoping that EE and Apple can resolve the glitches with it.

Deploying Cloud Foundry with vSphere - Part 2


Now, I decided to try a "light" installation of Cloud Foundry as this isn't going to be production ... normally it seems you deploy BOSH and then deploy Cloud Foundry from that, but the light route uses bosh micro.

Create a folder to hold your installations, then run:

brew tap xoebus/homebrew-cloudfoundry

brew install spiff

git clone https://github.com/cloudfoundry/cf-release

git clone https://github.com/cloudfoundry/bosh-lite

./bosh-lite/bin/provision_cf

Things will then kick off -- downloading the basic stemcell and pushing things onto the bosh micro director for it to run.


Deploying Cloud Foundry with VMware vSphere - Part 1

Cloud Foundry is an interesting portal / management system from EMC's Pivotal Labs team that allows you to (essentially) manage Docker and the associated infrastructure it provides.

But one of the features I love the most is that it will directly integrate with VMware vSphere and take so much of the pain away.

However, the initial setup can be a bit ... awkward, so I thought I'd document how I got it working.

First off, I assume you have a fully working vSphere setup - ESXi hosts configured and running, along with the vSphere Appliance or an installation on a Windows box somewhere. I tested this with vSphere 6.

Next, give the instructions a read: http://docs.cloudfoundry.org/deploying/vsphere/
Now, my private setup for testing was nowhere near the minimum specifications (I run a couple of HP MicroServers at home for testing things), but this didn't stop me continuing!

One thing to bear in mind: if you have multiple ESXi servers, make sure you have a shared datastore mounted on all of them ...

The first main step to deploying Cloud Foundry is deploying the MicroBosh image onto vSphere.
Pretty straightforward, but some jumping around is needed. Create a folder on your machine to hold things, and create the manifest.yml as per the documentation. Go through and replace the various items (such as IP addresses and anything in caps) that need changing to make it valid -- note that you need to use a vSphere cluster and shared storage here -- I cheated, and had a cluster with a single node and an NFS datastore that I'd created earlier.

After that, grab MicroBOSH off the web. Installing this onto my Mac (OS X) literally took opening a terminal and running the command, thanks to Yosemite already shipping with a Ruby install - but I did need to run it with sudo!
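From memory, that command was something along these lines (the gem names may have moved on since):

sudo gem install bosh_cli bosh_cli_plugin_micro --no-ri --no-rdoc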

With BOSH on your machine, the next thing you need is a stemcell. This is basically the "starting point" for Cloud Foundry, it seems.

So ...

Downloaded the stemcell with: 

bosh download public stemcell bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz

Prepped the deployment with:

bosh micro deployment manifest.yml

When I went to deploy using: 

bosh micro deploy bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz 

I got an error: 

either of 'genisoimage' or 'mkisofs' commands must be present

Fixable using: 

brew install cdrtools 

(you need homebrew installed, google it).

I also encountered an issue with the stemcell I'd picked that wasn't actually provisioning the network ... initially I thought this was a bug with the process as no errors were given, but I then discovered (after a lot of googling) that I had an error in my manifest.yml file. It seems it's really sensitive and there's pretty much no validation.

Full console output of provisioning micro bosh:

iMac:micro-deployment andy$ bosh micro deploy bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz 
No `bosh-deployments.yml` file found in current directory.
Conventionally, `bosh-deployments.yml` should be saved in /Users/andy.
Is /Users/andy/micro-deployment a directory where you can save state? (type 'yes' to continue): yes
Deploying new micro BOSH instance `manifest.yml' to `https://192.168.0.30:25555' (type 'yes' to continue): yes
Verifying stemcell...
File exists and readable                                     OK
Verifying tarball...
Read tarball                                                 OK
Manifest exists                                              OK
Stemcell image file                                          OK
Stemcell properties                                          OK
Stemcell info
-------------
Name:    bosh-vsphere-esxi-ubuntu-trusty-go_agent
Version: 2969
  Started deploy micro bosh
  Started deploy micro bosh > Unpacking stemcell. Done (00:00:07)
  Started deploy micro bosh > Uploading stemcellat depth 0 - 20: unable to get local issuer certificate
at depth 1 - 19: self signed certificate in certificate chain
. Done (00:22:07)
  Started deploy micro bosh > Creating VM from sc-54cdca8c-6b13-4c3d-ae48-5bc57d9b93ffat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:21:46)
  Started deploy micro bosh > Waiting for the agentat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:02:23)
  Started deploy micro bosh > Updating persistent disk
  Started deploy micro bosh > Create disk. Done (00:00:02)at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
  Started deploy micro bosh > Mount diskat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:01:35)
     Done deploy micro bosh > Updating persistent disk (00:01:50)
  Started deploy micro bosh > Stopping agent services. Done (00:00:01)
  Started deploy micro bosh > Applying micro BOSH specat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:06:16)
  Started deploy micro bosh > Starting agent services. Done (00:00:00)
  Started deploy micro bosh > Waiting for the directorat depth 0 - 20: unable to get local issuer certificate
. Done (00:01:14)
     Done deploy micro bosh (00:55:44)
Deployed `manifest.yml' to `https://192.168.0.30:25555', took 00:55:44 to complete
at depth 0 - 20: unable to get local issuer certificate

After that you can check the deployment is valid by: 

bosh target 192.168.0.30

You should be prompted to log in; in my case (because my yml didn't define anything) the default credentials of admin / admin were valid.

Next step, deploying BOSH.


Umbraco 7 on Azure Website

This weekend I decided to finally get around to moving two static websites from being a Website MVC Project in Visual Studio to something that the wife could look after - so I set about rebuilding them in Umbraco.

Considering it's been a while since I used Umbraco, I was tempted to download it and create a new project, pulling it in via NuGet. But then I noticed it was in the Azure Website Gallery. A few clicks later, a website is working (New, Website, From Gallery, Umbraco), and even running on a free Web SQL Database (20 MB limit) -- however, I do wonder how this is going to pan out, as the Web tier is due to be retired this September.

Once there was a basic instance running, the next thing, obviously, is to start building out the templates and document types - and creating the sites (I run both sites off a single instance of Umbraco, and just use the hostname mapping feature it has).
The problem appeared when it came to FTPing to the instance - I keep forgetting that the nodes (in this case running in a cluster) take a while to synchronise - so if you edit, you need to wait ... Or, as I found out, it's easier to enable Monaco (the Visual Studio web editor) that works with Azure - that way you can do the edits, hit save and it's instant.

Finally, a quick upgrade: grab 7.2.4 from the Umbraco website, replace the bin, Umbraco, Umbraco_Client and Views\Partials\Grid folders, and I was done.

My only gripe? The Azure installer doesn't give any indication, nor method, for changing the configuration that's held in the Config\umbracoSettings.config file - some of which (like the mail server) users probably want to alter easily without messing with FTP.


BlogEngine.NET and MySQL Site Map Provider (.NET Connector)

I just encountered an error that had me stumped for a short while - I installed the .NET MySQL Connector onto one of my servers and suddenly my installation of BlogEngine.NET ceased to work - I was unable to log in to the admin, and just encountered the yellow screen of death.

After a little digging, I identified that the MySQL Connector had modified the machine.config and added its Site Map Provider into the list. As this wasn't configured, it was throwing an exception in the BlogEngine.NET code...

The fix? Simply adding <clear /> to the siteMap providers list within the system.web block in the root web.config - and the site sprang back into life!
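For reference, it ends up looking something like this in the root web.config (the rest of your system.web content stays as it is):

  <system.web>
    <siteMap>
      <providers>
        <clear />
        <!-- the existing BlogEngine.NET provider entries stay here -->
      </providers>
    </siteMap>
  </system.web>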

Octopus Deploy and Proget

This weekend I switched my local Octopus Deploy server over to using ProGet as the package repository.

Generally speaking, it was a pretty painless switch - but I was getting errors until I added an advanced MSBuild argument (/p:OctoPackPublishApiKey=BuildKey) to provide an API key; obviously you need to configure this key in ProGet :)
Initially I didn't think I would need to provide an API key, as the build agent was running under a user account that had full access to the feeds; it seems, however, that this is not enough when you are using normal authentication (i.e. not domain joined).
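For context, the OctoPack-related part of the build arguments ends up looking roughly like this (the solution name, feed URL and API key are placeholders for your own values):

msbuild YourSolution.sln /t:Build /p:RunOctoPack=true /p:OctoPackPublishPackageToHttp=http://your-proget-server/nuget/your-feed /p:OctoPackPublishApiKey=BuildKey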

TFS 2013 Update 2 - Problems with _oi Pages (TF400898)

I've been testing TFS 2013 Update 2 over the last few days, and encountered a couple of issues.
The first appears to be a problem specific to one Team Project Collection, where the Activity Log (accessed via _oi) does not render correctly (I'm progressing this with MS Support); the second one, however, is slightly more interesting.

If you go into the /_oi interface, select Job Monitoring and pick a job, you get to the detail page. Only now you also get something unfriendly.


I confirmed this on a clean install of Server 2012 R2 with TFS 2013 Update 2, so it seems that this is a breaking "change".
Hopefully a hot fix comes out soon for this one (and maybe the Activity Log issue).

Update: Just as I'm posting this, I get a call back from MS Support. Both issues are confirmed as bugs, and will be fixed in a future release. Issues that you might encounter with the Activity Log should automatically clear up after a month or two as old entries are purged - so if you encounter a TF400898 here, you will probably have to put up with it for a while!