Getting Windows 10 IoT for RPi2 ... without a physical Windows box...

Windows 10 IoT edition has been available for a while now, but I have only just gotten around to looking at deploying it to my Raspberry Pi 2 Model B. I had figured this would be a simple matter of downloading an image and then flashing it onto the SD card, job done. But it seems Microsoft have taken a different route, one which requires a physical Windows 10 box to successfully flash the card.

Now, I don't have access to this at home - all my Windows machines are virtualised, and my physical machines all run OS X.

After much googling, and many dead ends, the general process posted by MikeAtWeaved at https://www.raspberrypi.org/forums/viewtopic.php?f=105&t=108979 worked.

Generally this was:

- Grab the IoT download, and get the flash.ffu file
- Download ImgMount Tool: http://forum.xda-developers.com/showthread.php?t=2066903
- Download DiskInternals Linux Reader: http://www.diskinternals.com/download/

Using ImgMount, mount the flash.ffu file - it will appear empty, but ignore this.

Using Linux Reader, select the Virtual Disk and tell it to Create an Image (.img extension).

Compress this img and move it onto your Mac OS X machine. Decompress it.

Then use dd at a terminal (sudoed, of course) to write this onto an SD card.
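
Something like the following, assuming the image came across as flash.img and the SD card shows up as /dev/disk2 (check with diskutil list first - writing to the wrong disk here is destructive):

diskutil list
diskutil unmountDisk /dev/disk2
sudo dd if=./flash.img of=/dev/disk2 bs=1m
diskutil eject /dev/disk2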

I am, however, very surprised that the PowerShell Remoting services are enabled on HTTP and the Device Management web page (on port 8080) is HTTP only too. Why no SSL by default?

vSphere 6 gets Update Manager!

I'm really not sure how I missed it, but the vSphere 6.0 Update 1 release finally brings the Update Manager components into the web interface!

The upgrade is pretty easy too, if you have 6.0 U1 already installed, as you can now use the VAMI interface (on port 5480) to run the upgrade. Simply go to https://vcenter:5480, log in as your root user and go to Update. Then click Check for Updates followed by Install Updates... and sit back and wait.

You also need to upgrade the Update Manager component; now this is actually not as simple as I'd hoped - you need to download the full Windows installation package and grab just the Update Manager installer from it. At least it completes an upgrade pretty happily!

Tokenising Release Management in VSTS

Yesterday I spent a bit of time working with the new Release Management components on VSTS (http://www.visualstudio.com - essentially Microsoft's hosted TFS implementation) in the knowledge that this will be (almost) what appears in the TFS 2015 Update 2 builds.

The first thing I noticed was the distinct lack of support for pushing environment configuration into the SetParameters files used by WebDeploy; essentially all you could change was the connection string if you were willing to try and work out the advanced parameters to pass in. This just wouldn't be enough for most of the projects I was involved in, and I found it pretty bizarre considering the way the existing on-premise Release Management works with WebDeploy.

So I extended the Azure Web Deploy action to add in tokenisation. And here's how.

First step - extend the Azure PowerShell Publish-AzureWebsiteProject cmdlet.

Ultimately speaking, this is what VSTS is firing off in the background and so this is where we need to start.

A quick branch of the GitHub code for Azure-Powershell (thank you Microsoft for making this all Open Source!), and a dive into the code in src/ServiceManagement/Services/Commands/Websites/PublishAzureWebsiteProject.cs. It's pretty clear that this function would need to accept an additional set of arguments, and then carry out the tokenisation (i.e. replacement) in the SetParameters.xml file.

With that done, it was a case of compiling, overwriting the existing copy on my machine (in %ProgramFiles%\Microsoft SDKs\Azure\PowerShell), a quick test in an Azure PowerShell session, and time to move on.
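
For reference, that quick test looked something like this - bear in mind the -Tokens parameter (and its key=value;key=value format) is my own addition to the rebuilt cmdlet, not something the stock module understands:

Publish-AzureWebsiteProject -Name "MyWebApp" -Package ".\MyWebApp.zip" -Tokens "Environment=Test;ApiKey=abc123"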

Second step - clone the existing VSTS Deploy to Azure Website task

Ironically, this step took more work. 

In order to create a task for Build or Release pipelines on VSTS, you need to get your environment set up first. Microsoft have gone down the cross-platform route here, which makes a lot of sense given the direction VSTS is taking.

Download and install NodeJS for your platform (https://nodejs.org/en/).

Install the tfx-cli tooling using npm (https://www.npmjs.com/package/tfx-cli).

The first thing to do is authenticate with the service; you do this by issuing a tfx login command. You will be prompted for your VSTS URL as well as your personal access token.
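
The prompts can also be skipped by passing the values inline; something like this, with your own account URL and token substituted:

tfx login --service-url https://youraccount.visualstudio.com/DefaultCollection --token <personal access token>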

After you have done that, you need to create a task: issue a tfx build tasks create and you will be prompted for:

TFS Cross Platform Command Line Interface v0.3.9
Copyright Microsoft Corporation
> Task Name: Antask
> Friendly Task Name: Testing
> Task Description: Testing
> Task Author: Andy
created task @ C:\Users\Andy\Antask
id   : ----
name: Antask
A temporary task icon was created.  Replace with a 32x32 png with transparencies

At this point the task folder will contain a number of template files for you to modify. I grabbed the code for the existing actions from the Microsoft vso-agent-tasks GitHub page and copied the contents of the existing action (under Tasks/AzureWebPowerShellDeployment/), while retaining the existing copy of task.json from my template folder; this is the file you will need to merge some code into in order to be able to import the action again. The task.json file from the Microsoft GitHub repo is pretty complete - but things like the name, task id etc. need to be changed to those of the task.json template file. Do this and drop the merged file into your task folder.

Add a new parameter onto the task by adding a block to the inputs section; something like:
    {
      "name": "Tokens",
      "type": "string",
      "label": "Tokens to replace",
      "defaultValue": "",
      "helpMarkDown": "Tokens to replace in the SetParameters.xml as used by Web Deploy.",
      "required": false
    },

Then it's the PowerShell script that needs modifying; the new parameter needs to be added to the param block, and then passed through in the call built up in $azureCommandArguments. And that's essentially all the modifications needed.
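
In outline, the change looked something like this - the names here are paraphrased from memory rather than lifted verbatim from the task script, so treat it as a sketch:

param(
    # ... the existing task parameters remain unchanged ...

    # New: a "key=value;key=value" list of replacements for SetParameters.xml
    [String] [Parameter(Mandatory = $false)]
    $Tokens
)

# Where the script builds up the arguments for the cmdlet invocation,
# pass the new value straight through to the extended cmdlet
if ($Tokens)
{
    $azureCommandArguments += " -Tokens `"$Tokens`""
}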

Uploading the action is a simple matter of issuing a tfx build tasks upload --task-path ./ANTest

And that is it!

I have repos on GitHub for modified versions of both azure-powershell and vso-agent-tasks; hopefully I can get a couple of Pull Requests accepted with some work!

Cloudflare - How to get SSL on any site. For free.

With the advent of Let's Encrypt, everyone seems to be looking to put SSL on their website. That's not to say it is not a good thing to do, but it most definitely seems to be at the forefront of people's minds these days.

However, there's an easy way for many small website operators to get in on the SSL action - even if their host will not let them have an SSL certificate on their domain (or will charge through the nose for one). CloudFlare.

Simply register with CloudFlare, add your website on the Free plan (which is free!), and update your DNS to use the CloudFlare servers. Not only will you then get a degree of Distributed Denial of Service attack filtering, but you will also get SSL. Result.

There's a lot of other things that can be done, either to improve the security of the offering or to make things more performant - some have a cost, but many don't, and I would highly recommend people have a good look around the options CloudFlare present. One of the most important tips I've got is to add a rule redirecting everyone to your newly secured site - and that's on the CloudFlare knowledge base too.

Moving from Sophos UTM to Sophos XG

I've just upgraded my Sophos UTM 9 firewall to the newly released Sophos XG; this is a free upgrade for those that have been running a UTM with the Home License and thankfully removes the 50 device limit.

The best part is this time there is no messing around with UNetbootin or Rufus to convert the ISO image to something that can be USB booted; all it needed was a dd write and done.

So, on a Mac:

diskutil list

Work out which drive your USB stick is on

diskutil unmountDisk /dev/disk3 (replacing this obviously)
sudo dd if=./SW-SFOS_15.01.0-376.iso of=/dev/disk3 bs=1m
276+0 records in
276+0 records out
289406976 bytes transferred in 220.163896 secs (1314507 bytes/sec)
diskutil eject /dev/disk3

Boot the hardware - answer the prompt to allow it to overwrite the existing drive. Wait!

Connect a crossover cable between your PC and the internal NIC on the device; you'll find this means the PC gets an IP in the 172.16.16.x block.

Open a browser to https://172.16.16.16:4444; log in with admin / admin.

Accept the EULA; if you are upgrading, select the upgrading from UTM 9 option and provide your license file. Otherwise, enter the Serial Number Sophos will have sent you. I had problems here - I couldn't get the "upgrade" route to work, and ended up having to get a new Home usage serial for the XG.

Hit Activate Device - note that you need the WAN link connected to a valid external internet connection for this process to work.

And that's it - you now have a working XG!


Apple Watch - Calendar out of sync?

I recently noticed that my Apple Watch was telling me about events that had moved - events which were definitely not present on my phone.

The quick fix is to fire up the Apple Watch app on the phone, go to General, then Reset, and hit Reset Sync Data.

This cleared all the content off the watch (contacts and calendar entries) and let them reload correctly.

Strange bug, but easy fix.

Updating a vCenter Appliance to 6.0 Update 1

Well, I've just followed my own post, Updating a vCenter v6.0 Appliance and upgraded my appliance to 6.0 Update 1.

And I'm pleased to see the VAMI interface re-appear, along with the option to have it automatically upgrade! I've not explored the other updated / new elements, but I'm keen to finally see parts of Update Manager making an appearance. 

Sophos UTM 9 - "install.tar not on installation media"

Like most people who use the Sophos UTM Home Firewall Software, I do not use the CD-ROM installation method. Previously I've never had any problems converting it to a bootable USB stick, and firing things up.

This time - problems.

The installer came up, and promptly fell over after initialising the drive with "install.tar not on installation media" or something along those lines.

After a few hours searching the internet I found the solution; the Install folder was not being mounted properly by the installer.

So:

- Start the installation, and after the hardware detection do NOT hit Next.
- Hit Alt+F2 to switch to the console
- mount /dev/sdb2 /install (see the note after this list if your stick is not /dev/sdb)
- Switch back to the install, and complete as normal
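
A note on that mount command: /dev/sdb2 assumes the installer sees your USB stick as the second disk, which was the case for me. If you want to check what the kernel has actually detected, this works from the same console:

cat /proc/partitions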

You may need to copy the contents of install to the root of the USB stick - I did this previously anyway, as it was trial and error with another issue (don't know if it helped, but I'm not going back to find out!).

Trials and tribulations of UEFI BIOSes

I recently purchased a Gigabyte GA-J1800 motherboard to build a new server specifically to run my Sophos UTM system (thanks to actually having a decent broadband connection now, the UTM needs more horsepower than my little over-crowded ESX lab can spare).

However, I had MASSIVE problems with the board immediately. Getting into the BIOS was completely impossible, so I couldn't actually get past the EFI Boot Shell. Argh.

Solution? Install Windows 8.1 and upgrade the BIOS using the Gigabyte AppCenter - then everything was a success. A total nightmare, however, and a major FAIL in Gigabyte's quality control.

It's also really picky about what USB Keyboard you use too ...

Importing large data sets into MySQL

I found myself in need of loading data into a MySQL instance today for testing an application - lots of it (millions of rows). Unfortunately I don't have a MySQL instance in my home lab (it's pretty much all Windows stuff - and a bit of Mac). So what was the easiest solution?

First I grabbed a copy of the Turnkey MySQL VM from https://www.turnkeylinux.org/ - these guys have a lot of Debian images pre-configured to just unpack, lob into a virtualisation platform and get going. A massive time saver.

After that I installed MySQL on my Yosemite Mac - purely for the command line tool.

Then fire up Terminal, switch into /usr/local/mysql/bin and launch the mysql command line client, connecting to my instance:

mysql -h hostname -u root -p 

A few quick configuration settings to make things go more smoothly:

set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
set foreign_key_checks = 0; 

Then load the data from the file using:

source <path_to_file>

Finally, re-enable the checks:

set foreign_key_checks = 1; 

Thanks to this StackOverflow post for the details!