Azure Cost Monitor announces private and shared filters

The Azure Cost Monitor Team is happy to announce the launch of the new filtering feature, starting today.

Now, users are able to create data filters to get instant access to important or costly services. Filtering lets users show only the results that match specific criteria. For example, a stakeholder may want to report only on a specific tag value or type of cloud service.
It is even possible for team administrators to share created filters with teammates and co-workers, so everyone can stay focused on the cost drivers in their Azure subscriptions.

[Screenshot: filter for expensive VMs]

The system supports top-to-bottom “and/or” logic, which means you can build filters the way you would naturally read a sentence. This allows you to combine different attributes in an “and” or an “or” clause.


How to get started?
Adding a new filter to the Azure Cost Monitor is as simple as this:

1) Log in to the Azure Cost Monitor dashboard and, if you don’t have a team account yet, migrate to a team (optional):

[Screenshot: migrating to a team account]

2) Select the “Create Filter” drop-down on the spending reports page:

[Screenshot: the “Create Filter” drop-down]

3) Add several conditions to the filter and save it:

[Screenshot: filter conditions for expensive VMs]

4) Switch between different filters by selecting the one you need:

[Screenshot: selecting a saved filter]

5) If you are the team administrator, share a created filter with the team by clicking the “Share with team” button:

[Screenshot: the “Share with team” button]


Interested in the filtering feature?
Try the new feature today by simply logging in to the Azure Cost Monitor. The feature is part of every plan, from the free Basic plan up to the Enterprise plan.

Any questions, wishes or ideas? Try our feedback portal or drop an email to help@azure-costs.com.


Azure Table Store: How to back up safely

Microsoft Azure Table storage is an amazing, simple, cheap and powerful service in the Microsoft Azure cloud. It sits somewhere between a full NoSQL database and a simple key-value store. In many of my recent projects the Azure Table Store served as a fast read cache or even as the whole persistence backend.

As soon as the tables in Azure contain data that is important for the application you provide to customers, it’s necessary to think about backup. Microsoft guarantees that the data cannot be corrupted on the storage side (check out my article about the different storage account options), but accidental deletion and data corruption caused by automated processes or simple mistakes can still happen. Where people work, things sometimes go wrong; nobody can change that.

Backing up table storage is not as easy as backing up blob storage, which can rely on the idea of stamp copies. Every table needs to be replicated into another table or exported to a blob account. In my current project we searched for the perfect solution and finally came up with the following stack of services: we use the Azure Cloud Backup from RedGate to export all tables into a geo-redundant storage account. With this solution we get a daily backup of our tables in parallel to Azure SQL backups based on Microsoft’s built-in features. The backups are stored in a geo-redundant storage account, which helps to ensure that we have access to these backups even when one Microsoft datacenter burns 🙂 or simply loses power.
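If you do not want to rely on a third-party tool, a small scheduled job that copies every entity of a table into a second storage account already covers the basics. The following is only a rough sketch based on the classic WindowsAzure.Storage SDK; the connection strings and the table name are placeholders you would have to replace:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class TableBackupJob
{
    static void Main()
    {
        // Placeholders: replace with your real connection strings and table name.
        var source = CloudStorageAccount.Parse("<<SOURCE CONNECTION STRING>>");
        var backup = CloudStorageAccount.Parse("<<BACKUP CONNECTION STRING>>");

        var sourceTable = source.CreateCloudTableClient().GetTableReference("mytable");
        var backupTable = backup.CreateCloudTableClient().GetTableReference("mytable");
        backupTable.CreateIfNotExists();

        // Stream all entities from the source table and upsert them into the backup table.
        foreach (var entity in sourceTable.ExecuteQuery(new TableQuery<DynamicTableEntity>()))
        {
            backupTable.Execute(TableOperation.InsertOrReplace(entity));
        }
    }
}

In a real backup job you would write into date-stamped table names, so that older snapshots stay available when rows get deleted by accident.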

This setup, in combination with the Microsoft backup support for Azure SQL, is very powerful and gives everybody the reassuring feeling of being able to recover when chaos strikes.

Pitfalls when you integrate Twitter, Facebook or other social networks

Today I received a request from a platform to vote for a specific product. I was willing to do this, because I use the product every day and I’m really happy with it.

With sentences like this, discussions about failed social media integrations start. In a world where many people are very concerned about their data privacy, a “but” follows very quickly:

Currently I can’t do that, because the website forces me to authorise via Twitter or Facebook. This would be acceptable if they only requested the rights they really need. I will not allow every system to post in my name, because that’s very personal.

At the end of the story you, as a service provider, lose a customer on the click path for only one reason: you tried to get more rights than you actually need to operate your application. And from my own experience, the right to post in someone’s name requires a strong, trusting relationship.

[Screenshot: social login authorisation dialog]

Neither Twitter nor Facebook forces developers to request more rights than they really need. I have done a lot of Twitter integrations, and whenever you build one, please check the documentation of the identity provider of your choice carefully and follow the principle of least privilege.

Following these ideas, you will lose fewer people on the click path to your voting system!

Azure: Never use EF auto migrations when working in teams

Today’s number one approach for building database access layers in ASP.NET based applications is the Entity Framework. Since Microsoft added support for migration-based database updates, which look largely borrowed from the approach Ruby on Rails has offered for ages, it’s possible to use them in scenarios where continuous deployment is key.

The core model that fits perfectly into a development process built around continuous delivery & deployment is the code-first approach. This means that the developer creates so-called POCOs (Plain Old C# Objects) and the system generates the required SQL changes based on the current state of the database.
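To give a rough idea of what that looks like, a code-first model is nothing more than a plain class plus a DbContext; the class and property names in this sketch are purely illustrative:

using System.Data.Entity;

// A POCO: no base class, no attributes required.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The context exposes the POCOs; EF derives the database schema from it.
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}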

The Entity Framework has two options for generating migrations:

  1. Auto Migrations
    The Entity Framework tries to generate the database changes at runtime, without any specific migrations implemented as code. Everything happens magically 😦
  2. Explicit Migrations
    The Entity Framework just runs migration scripts written in a special .NET-based DSL for database operations. Nothing happens magically 🙂 This requires the developer’s brain, but it is controllable even in large teams.

When you work in a team, auto migrations are a really bad idea and should be disabled from the beginning. Assume a developer changes the database manually: the magic behind auto migrations will then generate a different set of instructions against this database than the next developer will get. The results of auto migrations in team environments are neither reliable nor repeatable.

So it’s a really good idea to disable automatic migrations from the start:

public Configuration()
{
    // In the Migrations\Configuration.cs constructor: turn the magic off.
    AutomaticMigrationsEnabled = false;
}
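An explicit migration, scaffolded with Add-Migration, then spells out every change in the code-based DSL. The following sketch uses illustrative table and column names:

using System.Data.Entity.Migrations;

public partial class AddCustomerEmail : DbMigration
{
    public override void Up()
    {
        // The change is written out explicitly and lives in source control.
        AddColumn("dbo.Customers", "Email", c => c.String(maxLength: 256));
    }

    public override void Down()
    {
        // Every migration also knows how to roll itself back.
        DropColumn("dbo.Customers", "Email");
    }
}

Applied with Update-Database (or MigrateDatabaseToLatestVersion during deployment), the very same script runs on every environment.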

Auto migrations offer no benefit at all in projects with more than one developer or in projects that are deployed automatically. Continuous deployment relies on a strict unit of work and as little magic as possible. This also helps with troubleshooting when a deployment breaks production and a rollback is required.

Microsoft also published a nice article with more information on MSDN: http://msdn.microsoft.com/en-US/data/jj554735.aspx

Azure Storage: Is the Geo Redundant Mode really required?

Microsoft Azure offers different replication modes for Azure Storage, and every step up approximately doubles the cost for a TB of data. During a great workshop with Patrick Heyde we talked about stamp copies, and I asked myself which mode I really need.

First I took a step back to identify the requirements I typically have in my projects for highly scalable, fault-tolerant, redundant storage:

  • When a hard drive in a storage server breaks, my data still needs to be usable.
  • When Microsoft has a huge power outage in a whole datacenter, I want to be able to bring my app up and running again in another datacenter.
  • When I (or my customers) remove data by accident, there needs to be an option to revert to a former snapshot.

Reviewing the different replication modes against these requirements leads me to the following results:

When a hard drive in a storage server breaks, my data still needs to be usable:
Locally redundant storage (LRS) fulfils this requirement perfectly. Microsoft writes three copies of every bit within a single datacenter. When a hard drive on a stamp, or the whole stamp, goes down, another one takes over and all data stays available without any interruption.

When Microsoft has a huge power outage in a whole datacenter, I want to be able to bring my app up and running again in another datacenter:
Microsoft offers a geo-redundant storage mode (GRS) that stores another three copies in a second datacenter a hundred miles away. This helps a lot, because every application can use the secondary location to access the data – but is it worth the price? The price for GRS is three times higher than for LRS.
An automated replication between two different LRS storage accounts, hosted in an Azure WebJob, might be a good solution to fulfil this requirement as well (a rough sketch follows below).
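Such a WebJob could walk through all blobs of a container in the primary LRS account and trigger server-side copies into a second LRS account. The sketch again assumes the classic WindowsAzure.Storage SDK; connection strings and container names are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ReplicationJob
{
    static void Main()
    {
        var primary = CloudStorageAccount.Parse("<<PRIMARY LRS CONNECTION STRING>>");
        var secondary = CloudStorageAccount.Parse("<<SECONDARY LRS CONNECTION STRING>>");

        var sourceContainer = primary.CreateCloudBlobClient().GetContainerReference("data");
        var targetContainer = secondary.CreateCloudBlobClient().GetContainerReference("data");
        targetContainer.CreateIfNotExists();

        // Short-lived read-only SAS so the target account can pull from the source account.
        var sas = sourceContainer.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        });

        // Flat listing; this sketch assumes the container only holds block blobs.
        foreach (var item in sourceContainer.ListBlobs(null, useFlatBlobListing: true))
        {
            var sourceBlob = (CloudBlockBlob)item;
            var targetBlob = targetContainer.GetBlockBlobReference(sourceBlob.Name);

            // Server-side, asynchronous copy between the two storage accounts.
            targetBlob.StartCopy(new Uri(sourceBlob.Uri + sas));
        }
    }
}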

When I (or my customers) remove data by accident, there needs to be an option to revert to a former snapshot:
Geo-redundant storage is not helpful when it comes to removing data by accident. As soon as the data is removed from the primary storage, the system removes the data in the backup location as well, often within seconds.
But this requirement can also be fulfilled with a replication between two different LRS accounts, as described above. The whole application just needs to be designed for this use case.

So this review brings me to the conclusion that, in my personal opinion, GRS is not needed in most use cases. Normally several LRS accounts and application logic optimised for the specific data-safety requirements work well and preserve the budget.

What’s your opinion? Do you have use cases where GRS and Read-GRS are hard requirements? If you like, leave a short comment …

Understand your service types in the Azure Cost Monitor

The Azure Cost Monitor is able to analyse and manage every existing Azure service. In the past the different service types could sometimes be unclear, especially for people with big subscriptions and a lot of services, because the Azure Cost Monitor used only an icon to indicate the service type:

[Screenshot: service list without tooltips]

Starting today the Azure Cost Monitor also offers a tooltip with a clear explanation of the service type whenever a user moves the mouse over the icon. It also works on tablets and smartphones whenever users touch the icon of a specific service.

[Screenshot: service type tooltip]

This little feature should really help to reduce service type confusion in the Microsoft Azure Cloud.

Validate authentication_token from Microsoft LiveID with node & express-jwt

With live.com, Microsoft offers a service that can be used as an identity provider for your application. I lost a couple of hours when I tried to validate the authentication token issued by Microsoft’s IdP with the help of express-jwt in my node application.

When an application requests a token from live.com via the OAuth2 implicit flow, an access_token is issued which is meant to be used against the Live services themselves. In addition, an authentication_token is issued in the standard JWT format. This token can be used as the authentication token in your node server.

Normally, validating a JWT is very simple with the node + express + express-jwt stack. Just configure the middleware and enter your application secret:

var jwt = require('express-jwt');

app.use(jwt({ secret: '<<YOUR SECRET>>', audience: '<<YOUR AUDIENCE>>', issuer: 'urn:windows:liveid' }));

 

The Microsoft dashboard provides an application/client secret for your Live application. Microsoft uses this secret in a very specific way as the key for generating the signature of your JWT. I found the following solution in the history of the LiveSDK GitHub repository.

The signing key for the token is a SHA256 hash of the given application secret plus the fixed string “JWTSig”. I ended up with the following code to generate the real secret for validation:

var crypto = require("crypto");

// The application/client secret from the Microsoft developer dashboard.
var secretTxt = '<<YOUR APPLICATION SECRET>>';

// The signing key is the SHA-256 hash of the application secret plus the fixed string "JWTSig".
var sha256 = crypto.createHash("sha256");
sha256.update(secretTxt + 'JWTSig', "utf8");
var secretBase64 = sha256.digest("base64");
var secret = new Buffer(secretBase64, 'base64');

The generated secret can then be used in the express-jwt middleware as follows:

app.use(jwt({ secret: secret, audience: '<<YOUR AUDIENCE>>', issuer: 'urn:windows:liveid' }));

With this little piece of code it’s super simple to verify JWT tokens from live.com. I hope Microsoft starts documenting these little secrets better in the future.