Good-bye NatWest, thanks for the lousy security

For many years I've banked with NatWest - since I was in my teens, in fact - and during that time I've had various "products" with them, including business accounts.

Now, with the recent run of security issues as well as a rather disappointing incident with their fraud team today, I've decided it's time to move on. But I figured it was worth highlighting WHY I feel the need to leave this rather old institution ... just to make the point about their lax security.

These days, everyone needs to be super vigilant to keep their money safe - and it's not enough to trust your bank to do the job on your behalf. We are constantly told never to share our internet banking credentials, to check cashpoints before putting our card in, and above all never to give potentially sensitive information to anyone over the phone for fear of it being used to socially engineer us. Which is why today my senses were pricked ...

Last night, around 6pm, I placed an order on Zeek - something I do pretty regularly in the everlasting bargain hunt that is life. And, as is typical with NatWest and Zeek (and me ??), they blocked it ... I shrugged, put it through with PayPal instead and didn't think anything more of it.

This morning, at 9:33, I received a text purporting to be from NatWest. There was no number associated with the sender, just a name, and the message itself gave two numbers to call.

Hmm - I searched the NatWest website and didn't find this number anywhere. In fact, on their contact us page, there is a completely different number listed for this exact situation:

You may receive a call or voicemail from us about your bank account or debit card, to help protect you against fraud, you can call us back on: 

UK: 0800 011 3312

What the hell, I thought, I'll give it a call. It was answered by an automated voice response (AVR) system requesting my card number ... after duly punching it in (I don't personally regard a card number as especially sensitive information), I was greeted by a chap who immediately challenged me to go through security. I politely declined, indicating I had no idea what this was about, and suggested he call me back on the number registered on the account - he said, 'what, the one you have called on?'. Interesting. I had called in on a withheld work number, which isn't linked to my account. Another red flag. I said no, the mobile number on the account. He agreed, and we hung up.

No call arrived.

So I messaged NatWest through the app...

Yes, you are reading that right. They hide this number intentionally. And this forms part of their security. WTF.

I challenged this, and was basically told that they felt there was nothing wrong, or potentially dangerous, about the way they contact people regarding potential fraud on their account - completely missing the point that we are always told NOT to call unpublished numbers claiming to be our bank, and ignoring the fact that the bank usually automates this process via text message anyway.

If the bank is willing to lead people into potentially malicious, scammy situations with this sort of approach, I want absolutely nothing to do with them - and as such, I will be voting with my feet. I'd suggest anyone who is remotely concerned about the safety of their money follows suit. As an aside, Barclays really do this whole security lark better ...

Malware testing using Cuckoo

This afternoon I decided it was time to update my Cuckoo malware analysis setup, and while I was at it, I figured it would make sense to write it up in case anyone else wants to build one!

Cuckoo Sandbox is a superb project but, as with many open source tools, it can be a bit fiddly to get running.

I start off with a clean Kali Linux installation and ensure that it is fully patched (apt-get update, apt-get upgrade and apt-get dist-upgrade). After that, install the prerequisites for Cuckoo:

apt-get install python-pip python-dev libffi-dev libssl-dev python-virtualenv python-setuptools libjpeg-dev zlib1g-dev swig

apt-get install mongodb

apt-get install virtualbox

apt-get install tcpdump apparmor-utils

And then some config tidy up:

setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump

At this point you really should create a dedicated user for Cuckoo; you can continue and run it as root, but that isn't advisable!

adduser cuckoo

usermod -a -G vboxusers cuckoo

Install the required Python modules:

pip install -U pip setuptools

pip install -U cuckoo

Next, load the current community definitions by running this command:

cuckoo community

And then it's time to set up the actual virtual machines that the malware is analysed in.

Start VirtualBox by typing virtualbox at the console.

Then go to File > Preferences > Network > Host-only networks and create a new network - this will create the default vboxnet0 network.
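If you prefer the command line, VBoxManage can do the same thing - something like this (the IP address is just the usual vboxnet0 default):

vboxmanage hostonlyif create
vboxmanage hostonlyif ipconfig vboxnet0 --ip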

Create a new VM; in my setup I started with Windows 10. Connect the CD-ROM through to your installation media and complete your install as you would normally.

Inside the VM, install Python 2.7 and Pillow, then install the Cuckoo agent as documented and ensure that you start it as Administrator.

On your Kali machine, you need to edit the virtualbox.conf file as appropriate (you will find it in $HOME/.cuckoo/conf). I changed mode to gui instead of headless as I like to see what's going on, then scrolled down to the cuckoo1 entry, changed the label to match the name of the VM I'd created, set snapshot to CuckooBase and changed the osprofile as appropriate. Make a note of the IP address assigned while you are here.
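To give you an idea, the relevant bits of mine ended up looking something like this (the label, IP address and osprofile are just examples - use whatever matches your VM):

[virtualbox]
mode = gui
machines = cuckoo1

[cuckoo1]
label = Win10-Cuckoo
platform = windows
ip =
snapshot = CuckooBase
osprofile = Win10x64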

Back in VirtualBox, ensure that the VM is on Host-only networking (on vboxnet0) and set the IP address on the adapter inside your virtual machine to the IP address from the virtualbox.conf file. The subnet mask will likely need to be, the vboxnet0 default.
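If you'd rather script that than click through the adapter settings, something like this from an elevated prompt inside the guest should do it (assuming the adapter is named "Ethernet" and you are using the example addressing above):

netsh interface ip set address name="Ethernet" static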

Take a snapshot of the machine powered on and in this state, and call it CuckooBase.

And now you are ready to use the Cuckoo setup!

At a console type cuckoo submit <file> to submit the analysis job, and then cuckoo to run the process.
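For example (the sample path here is just a placeholder, and ideally run these as the cuckoo user rather than root):

cuckoo submit /home/cuckoo/samples/suspect.exe
cuckoo

If you later want to browse the results in a browser, cuckoo web runserver should start the bundled web interface - it needs the MongoDB reporting option enabling in reporting.conf first.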

There is a LOT more you can do with Cuckoo, and these are really only the very basic steps to get a working environment, but it should give you a good starting point!

Importing SQL Azure database to SQL Express

This one frustrated me but, I have to say, it seems pretty common to hit issues when bringing a database down from SQL Azure to the on-premises version.

If you follow the normal route of exporting the database, downloading the bacpac, then importing it you might hit this error:

TITLE: Microsoft SQL Server Management Studio
Could not import package. Warning SQL72012: The object [data] exists in the
target, but it will not be dropped even though you selected the 'Generate drop statements
for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [log] exists in the target, but it will not be
dropped even though you selected the 'Generate drop statements for objects that are in the
target database but that are not in the source' check box. Error SQL72014: .Net SqlClient
Data Provider: Msg 33161, Level 15, State 1, Line 1 Database master keys without password
are not supported in this version of SQL Server. Error SQL72045: Script execution error.
The executed script: CREATE MASTER KEY; (Microsoft.SqlServer.Dac)

The cause? Well, in this case it's not that you are running an old version of SQL Server (in my case it was 2016 SP1), but that SQL Azure allows a database master key with no password, which on-premises SQL Server does not. To resolve this you need to run a piece of T-SQL against your SQL Azure database to set the master key password BEFORE you export the database.
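Something along these lines does the job (the password is just a placeholder - substitute a strong one of your own):

ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = '<a strong password here>';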


After that, export as normal and you should be able to import your database.

Getting assigned licenses in Azure AD / Office 365

This week I found myself looking at an Azure Active Directory setup, linked to Office 365, with a large number (1000+) of users - but where only a subset of these accounts actually had any Office 365 offerings assigned. To make matters worse, a mixture of products was being assigned, and the need was to standardise things and move to using Security Groups to assign the licenses instead of manual assignment.

Obviously the first thing to do was to get a feel for the current assignments, and PowerShell certainly came in handy here - it removed the need to check each user by hand.

I used the newer AzureAD PowerShell module (Install-Module AzureAD) and quickly knocked out the following script to dump the details to screen - this allowed me to easily work out the group memberships needed and then run through removing the directly (manually) assigned licenses.

$VerbosePreference = 'Continue'

Import-Module AzureAD -Force
# Assumes you have already connected to the tenant with Connect-AzureAD

# Only look at SKUs that are actually consumed in the tenant
$subscribedProducts = Get-AzureADSubscribedSku | Where-Object { $_.ConsumedUnits -ge 1 }

Write-Verbose "License types are:"
$licenses = $subscribedProducts | Select-Object -ExpandProperty ServicePlans | Format-Table -AutoSize | Out-String
Write-Verbose $licenses

$users = Get-AzureADUser -All $true
Write-Verbose ("There are " + $users.Count + " users in total")

foreach ($license in $subscribedProducts) {
    Write-Output ("Looking at SKUID " + $license.SkuId + ", " + $license.SkuPartNumber)

    # Users with the whole SKU directly assigned
    foreach ($user in $users) {
        if ($user.AssignedLicenses.SkuId -eq $license.SkuId) {
            Write-Output ("User has this assigned - " + $user.UserPrincipalName)
        }
    }

    # Now break the SKU down into its individual service plans
    foreach ($servicePlan in $license.ServicePlans) {
        Write-Output ("Service Plan: " + $servicePlan.ServicePlanId + ", " + $servicePlan.ServicePlanName)
        foreach ($user in $users) {
            if ($user.AssignedPlans.ServicePlanId -eq $servicePlan.ServicePlanId) {
                Write-Output ("User has this assigned - " + $user.UserPrincipalName)
            }
        }
    }
}


Office 365 - UK Data Residency deadline

Microsoft recently announced the deadline for existing Office 365 customers to request relocation of their data to the UK Azure data centres.

To do this, a service administrator should log in to the Office 365 portal and select Settings, then Organization Profile. Scroll down and you will find the Data Residency option box, where you can now elect to have your data reside in the UK.

The deadline for requesting the move is 15th September 2017, and the move itself will apparently be completed within 24 months of the above deadline. That's a fair wait!

It should also be noted that only "core" customer data will be moved - although finding clarity on exactly what that covers is challenging. More details can be found here.

Release Management: The art of build promotion - Part 1

For a while now the software development industry has been pushing a release practice that is ultimately all about build promotion - that is, the art of building once and then promoting those artifacts through different environments or stages as they progress towards the release point, changing only environmental configuration along the way. This has the excellent objective of confirming that what you release is actually what you tested. Of course, that's not entirely the case if you have employed any feature toggling and the toggles are not matched up across the environments, but that's a different story!

I have been working with this practice for a while now, but there is always the odd situation raised by development teams about the best way to handle certain scenarios - such as support or emergency releases. Before I get onto those, let's have a run through of the general theory.

The whole premise of Release Management stems from the desire to automate as much of the delivery pipeline as possible; partly to reduce risk, but more importantly to increase the speed at which changes can be applied.

So you usually start off with Continuous Integration - the practice of compiling your code often, say on each check-in to version control. This confirms that the code you have committed can at least be integrated with the rest.

After that you add your Unit Tests (and, if you are lucky enough to have the automation, Smoke or Integration Tests) and you get Continuous Delivery. You can, in theory, be confident enough to take any PASSING build and send it to a customer. I say in theory, as this tends not to be the reality in practice!

Finally you get Continuous Deployment. Some view this as the holy grail of release practices, as in essence you are deploying constantly. As soon as a build is passing and has been tested, you lob it out the door. Customers and users get to see new features really quickly, and developers get feedback quickly - with this practice you really only fix forwards, as you don't need to do masses of manual regression testing, so it's just as quick.

Build Promotion techniques come into play in the last two of these. They can be used when you are doing Continuous Delivery (you can select any build and promote it through the stages), but they also apply to Continuous Deployment, where you might allow business stakeholders to select when and which builds are deployed, as long as you are confident they will work from a technical perspective. At worst, you use the technique (and tooling) as a mechanism to get business stakeholder approval before allowing a release to go to production - something that is extremely important in regulated companies. In these cases Build Promotion is an auditor's dream, as you should be able to clearly identify what was deployed to which production environment, when, and exactly what was changed.

Tooling such as VSTS / TFS makes Release Management and Build Promotion easy to get into these days - and now, with the web based versions, it's actually usable. However, it really is not a holy grail; there are some things you need to consider.

Let's assume you have applied Release Management and Build Promotion techniques to your entire process - you will end up with a series of stages, or environments, that look something like this:

Dev -> Test -> UAT -> Production

A build drops in at the first stage, Dev, after it is compiled (or after being selected, if you are starting your process - or pipeline - manually). From there it goes through each stage sequentially, optionally needing further (manual) interaction or approval.

Getting version one out the door with this process is easy enough. But what happens if you find a bug in version one just as you have version two sitting at UAT getting its final rubber stamp? What would you do? Scrub version two, make the fix in that code base and restart the whole process again? Or scrub version two, make the fix on the version ONE code base and start a full sequence of deployments for THAT version?


And about now comes the realisation that you have approached Release Management and Build Promotion the wrong way. Instead of creating a process that can be quick and agile, you have created something about as agile as a steel girder.

[To be continued!]

Upgrading lab to Server 2016

As Server 2016 has now hit GA, I figured I'd have a shot at upgrading my lab from 2012 R2 to 2016.

The majority of the machines were upgraded in place.

Before the upgrade:

- AD Server, also holding ADFS and WSUS (server core)
- Applications Server (full fat desktop)
- SQL Server (server core)
- Web Application Proxy (server core)

After the upgrade:

- AD Server (server core)
- ADFS Server (full fat desktop)
- Applications Server (full fat desktop)
- SQL Server (server core)
- Web Application Proxy (server core)

All 2012 R2 boxes were fully patched before upgrading; ADFS was split out onto a separate box, as setup is not capable of an in-place upgrade of this role - the process documented here is fantastic for migrating it. I still haven't reinstalled WSUS, but I'm planning on rebuilding the SCCM element of my lab anyway, so I don't need it yet.

The only snag I encountered is that when you do the in-place upgrade for Server Core, you need to run Setup with the /Compat IgnoreWarning parameters - otherwise you just get a blue screen.
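In other words, from the root of the mounted 2016 media, something along the lines of (the drive letter is just an example):

D:\setup.exe /Compat IgnoreWarning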

The Web Application Proxy server upgraded fine, but the WAP role was left in an invalid state; I just removed it and reinstalled it, as I was resetting ADFS anyway - likewise, Azure AD Connect needed to be reinstalled and reconnected.

Finally, a disk cleanup frees up some space - just run:

dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

although it should be noted that this will prevent you from rolling back the upgrade.

Next task is to upgrade the two Hyper-V hosts for my lab ...

Chocolatey and keeping your local feed up to date

For a while now I've been using a combination of Boxstarter and Chocolatey to help me manage and maintain my devices. But one of the snags I have encountered is keeping my local, moderated feed up to date with the public feed.

You are probably wondering ... why do I bother with a private package feed? Well, two reasons:

- Offline support; I often end up having to rebuild devices while away from home - and hotels don't have great wifi. I travel with a pre-loaded USB key with a copy of my feed (and installers) for just this reason

- Control; I like to keep control of what versions are on my devices and don't particularly like being forced onto the latest and greatest - and I like to ensure all my devices are on the same version ;)

So I wrote a very simple tool to compare my local package feed with the Chocolatey public feed; you can find it on GitHub:

And here it is in action:

No settings found, please specify your repository locations
Chocolatey Repository [] (Please enter to keep):
Local Repository [] (Please enter to keep): D:\Choco\Packages
Checking package 7-Zip (Install); local version is; remote version is
Checking package Beyond Compare; local version is; remote version is
Update available for Beyond Compare to
Checking package Boxstarter; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Bootstrapper Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Chocolatey Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Common Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter HyperV Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter WinConfig Module; local version is 2.8.29; remote version is 2.8.29
Checking package Chocolatey; local version is 0.10.0; remote version is 0.10.1
Update available for Chocolatey to 0.10.1
Checking package ChocolateyGUI; local version is 0.13.2; remote version is 0.13.2
Checking package dashlane; local version is; remote version is
Checking package Fiddler; local version is; remote version is
Checking package Google Chrome; local version is 52.0.2743.116; remote version is 53.0.2785.116
Update available for Google Chrome to 53.0.2785.116
Checking package Windows Management Framework and PowerShell; local version is 5.0.10586.20151218; remote version is 5.0.10586.20151218
Checking package RSAT 1.0.5; local version is 1.0.5; remote version is 1.0.5
Checking package SQL Server Management Studio; local version is 13.0.15700.28; remote version is 13.0.15800.18
Update available for SQL Server Management Studio to 13.0.15800.18
Checking package Sublime Text 3; local version is; remote version is
Checking package Sysinternals; local version is 2016.07.29; remote version is 2016.08.29
Update available for Sysinternals to 2016.08.29
Checking package Visual Studio 2015 Enterprise Update 3; local version is 2015.03.01; remote version is 2015.03.02
Update available for Visual Studio 2015 Enterprise Update 3 to 2015.03.02
Finished checking packages; there are 6 packages to update.

Building a .NET App and need low cost cloud logging?

One of the most frustrating things about being an app developer is getting log files back from the applications you've built once they are deployed (although, don't forget, if you are deploying to end users you MUST get their permission!).

Here's an easy, and cost effective, way to get around this problem.

Simply use NLog in your app, then check out this NLog to Azure Tables extension.
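The application side is just standard NLog usage - here's a minimal sketch in C# (the class and messages are invented; the Azure Tables target itself gets wired up in your NLog configuration as per the extension's documentation):

using System;
using NLog;

public class OrderProcessor
{
    // Standard NLog logger; which targets it writes to (file, Azure Tables, etc.)
    // is decided purely by the NLog configuration, not by this code.
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    public void Process(int orderId)
    {
        Log.Info("Processing order {0}", orderId);
        try
        {
            // ... the actual work would go here ...
        }
        catch (Exception ex)
        {
            Log.Error(ex, "Failed to process order {0}", orderId);
            throw;
        }
    }
}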


Introduction to Credential Federation

There are many misconceptions and misunderstandings about how credential federation works. This post outlines the general practice of federation and details the transaction flows, in an attempt to clarify the technical piece.

With a traditional authentication approach for a web application, you go to the web page and are prompted by the application to log in. Your details are then checked against the details it holds (hopefully securely!). However, this means you are likely to have a different set of credentials to remember for every service you use online. For an enterprise, this poses a challenge not only for the users but also for the administrators when it comes to removing access when people move on to pastures new.

Federation addresses this by integrating online services with the on-premises Active Directory (or other) identity platform.

The moving parts

Federation Identity Server: This is the server that you, as the enterprise, deploy on your network and validate credentials against. It also serves up the login prompt to your users, so you can usually brand it as needed. It is usually only accessible over HTTPS (for obvious reasons, I'm sure).

Relying Party: This is the third party that consumes the claims your Federation Identity Server generates. When you set up the trust, you exchange a signing key (certificate) with this party so it can be 100% sure the claims came from you. You need to take great care of the private side of this key, otherwise you might as well hand over your authentication database to whoever has it (they can pretend to be anyone to this party, you see).

Claims: A set of attributes that denote things like name, email address, role, etc. They are pretty flexible, normally configured as needed by the relying party, and generally drawn from attributes in your authentication system. They are cryptographically signed so the originator can be verified. You'll probably hear the term SAML around this particular area.
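As a very rough illustration, a trimmed-down (and entirely hypothetical) SAML attribute statement carrying a couple of claims looks something like this - real ones carry rather more, plus the signature:

<saml:AttributeStatement>
  <saml:Attribute Name="emailaddress">
    <saml:AttributeValue>jane.doe@example.com</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="role">
    <saml:AttributeValue>Finance</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>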

So how does all this work?

The easiest way to explain it is to describe a user’s journey logging on via a federated approach.

1. User visits the federated identity login page for the given cloud application
Sometimes this is the same login they would normally go to, and the application will “detect” they are federated when they enter their username.
2. Web app redirects them to your federated identity server’s login page
3. User logs in
4. Federated identity server validates the identity and generates the claims.
These are often embedded in a response page after login, as a hidden form which is then submitted back to the relying party application (see the sketch after this list)
5. User is redirected back to the relying party application where the claims are processed
6. User receives a relying party authentication token as if they had logged in locally
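That hidden-form hop in steps 4 and 5 is typically just an auto-submitting HTML page along these lines (the URL is illustrative; this is the classic HTTP POST binding):

<form method="POST" action="https://app.example.com/acs">
  <input type="hidden" name="SAMLResponse" value="...base64 encoded, signed response..." />
</form>
<script>document.forms[0].submit();</script>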

Common myths

Myth: Credentials are transferred to the relying party.
Truth: In most federation setups, only claims are sent to the relying party, cryptographically signed with a key the relying party can validate. This allows it to be confident that the federated identity server - and nobody else - generated the claims it has received, and therefore that it can trust them.

Myth: The federation identity server is safe from attack and is not exposed.
Truth: In order to be useful - i.e. contactable - the federation identity server has to be internet accessible, unless you restrict your users to logging in with federation only from specific locations, which pretty much renders it useless.