Azure Storage: Is the Geo Redundant Mode really required?

Microsoft Azure offers different replication modes for Azure Storage. Every mode approximately doubles the costs for a TB of data. During a great workshop with Patrick Heyde we talked about stamp copies and I asked myself which mode I really needed.

First I took one step back to identify all requirements I typically have in my projects for a highly scalable, fault-tolerant, redundant storage:

  • When a hard drive in a storage server breaks, my data still needs to be usable.
  • When Microsoft has a huge power outage in a whole datacenter, I want to bring my app up and running again in another datacenter.
  • When I (or my customers) remove data by accident, there needs to be an option to revert to a former snapshot.

So a review of the different replication modes compared to these requirements leads me to the following results:

When a hard drive in a storage server breaks, my data still needs to be usable:
The Local Redundant Storage fulfils this requirement perfectly. Microsoft writes 3 different copies of every bit within one single datacenter. When a hard drive on a stamp or the whole stamp goes down, another one can take over and all data is available without any interruption.

When Microsoft has a huge power outage in a whole datacenter, I want to bring my app up and running again in another datacenter:
Microsoft offers a geographically redundant storage mode that stores another 3 copies in a second datacenter hundreds of miles away. This helps a lot, because every application can use the secondary location to access the data – but is it worth the price? The price for GRS is three times higher than for LRS.
An automated replication between two different LRS storages, hosted in an Azure WebJob, might be a good solution to fulfill this requirement as well.
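Such a WebJob could, for instance, compare the blob listings of both LRS accounts and copy only what differs. The following is a hypothetical sketch of that sync step (the helper name and the listing shapes are my own assumptions; the actual copy would then be kicked off per blob, e.g. with the azure-storage module's startCopyBlob):

```javascript
// Hypothetical sketch: decide which blobs of the primary LRS account still
// need to be copied to the secondary LRS account. A blob is due for copying
// when it is missing on the target or its content hash differs.
function blobsToReplicate(sourceBlobs, targetBlobs) {
  var onTarget = {};
  targetBlobs.forEach(function (b) { onTarget[b.name] = b.contentMD5; });
  return sourceBlobs.filter(function (b) {
    return onTarget[b.name] !== b.contentMD5;
  });
}
```

Running this diff periodically keeps the secondary account close to the primary without paying the GRS premium.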

When I (or my customers) remove data by accident, there needs to be an option to revert to a former snapshot:
Geo Redundant Storage is not helpful when it comes to removing data by accident. As soon as the data is removed from the primary storage, the system removes the data in the backup location as well, often within seconds.
But this requirement can also be fulfilled with a replication between two different LRS storages, as described above. The whole application needs to be designed for this use case.

So this review brings me to the conclusion that, in my personal opinion, GRS storage is not needed in most use cases. Normally, several LRS storages and an application logic optimised for the specific data-security requirements work well and preserve the budget.

What’s your opinion? Do you have use cases where GRS and Read-GRS are hard requirements? If you like, leave a short comment …


Azure Cost Monitor: Daily Spending Reports – a first résumé

A couple of days ago, on January 17th, the new Daily Spending Report Feature of the Azure Cost Monitor went live.

We are very pleased that the reports were adopted so well. We got some nice feedback from our users appreciating the new functionality.

spending-report-demo

This feature has been built to reflect what our users told us they need – a simple way of tracking all Azure cloud costs on a daily basis and transparency for every stakeholder in the company.

The daily spending reports make it easy for the Operations Department to track the cloud spendings and react upon this data immediately.

Controllers of the Finance Department need to understand what the company is spending throughout the month. Azure Cost Monitor reports give them a convenient way to have a good overview on a daily basis.

Managers always need to be aware about the most important KPIs within the company. The new daily spending reports of the Azure Cost Monitor give them the freedom to always have a quick and precise overview on the company’s cloud spendings.

Whenever you’ve got any questions, wishes or further ideas, please don’t hesitate to let us know by leaving a message or posting them in our feedback portal.

azure-queue-client: build azure queue based workers in node.js

Microsoft Azure offers a very powerful and cheap queueing system based on Azure Storage. For a node developer, the challenge is to build a simple-to-use system that consumes messages from the Azure queue. The Azure Cost Monitor, for instance, uses this module to process all cost-analytics tasks in the backend.

The module azure-queue-client is able to implement this in a couple of simple steps. It supports multiple workers in different processes and on different machines. The following example illustrates the usage:

// config with your settings
var qName = '<<YOURQUEUENAME>>';
var qStorageAccount = '<<YOURACCOUNTNAME>>';
var qStorageSecret = '<<YOURACCOUNTSECRET>>';
var qPolling = 2;

// load the modules
var queueListener = require('azure-queue-client').AzureQueueListener;
var q = require('q');

// establish a message handler
queueListener.onMessage(function(message) {
  var defer = q.defer();

  console.log('Message received: ' + JSON.stringify(message));
  defer.resolve();

  return defer.promise;
});

// start the listening
queueListener.listen(qName, qStorageAccount, qStorageSecret, qPolling, null);

Developers who are using the Azure Scheduler might notice that the payload of the scheduler is encapsulated in an XML wrapper. This XML wrapper can be handled by the module as well, so it doesn’t matter whether the message comes from another queue client or the Azure Scheduler.

This module makes writing job workers in node, hosted on Azure or any other cloud provider, a breeze.

Stay up to date – Azure Cost Monitor starts sending daily spending reports via mail

The azure cost monitor team is pleased to announce the launch of the daily spending mail reports starting January 17th. This great feature has been crafted to reflect what our users told us they need and it also builds upon new technology capable of addressing future needs:

spending-report-demo

The report will be sent once a day, at about 03:00 AM CET. If – for any reason – you do not wish to get the daily reports, it’s of course possible to disable them in the new notifications section of the Azure Cost Monitor portal:

notifications

We hope this feature brings more transparency into your Azure cloud spendings and makes it much easier for you to manage and control all costs.
Any questions, wishes or ideas? Try our feedback portal or drop a mail to tickets@azurecostmonitor.uservoice.com.

SEO – Is your Azure WebSites hosted AngularJS App ready for Google :-)

Every AngularJS application is just a website generated by executing JavaScript. The Google crawler, like most other crawlers, cannot collect information from such sites. To handle this problem, services like AjaxSnapshots or Prerender.io try to fill the gap. Basically, these services generate a snapshot of a website without any JavaScript in it. Whenever a search engine visits the page, the system delivers a plain HTML page without any JavaScript. Many technical details on how search engines crawl a page can be found here. Users who host on Azure WebSites and try to use this kind of pre-rendering tool may get stuck on some configuration issues.

The required rewrite rules for the web.config are quickly written or can be found in the AjaxSnapshots documentation:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <rule name="AjaxSnapshotsProxy" stopProcessing="true">
                    <!-- test all requests -->
                    <match url="(.*)" />
                    <conditions trackAllCaptures="true">
                        <!-- only proxy requests with an _escaped_fragment_ query parameter -->
                        <add input="{QUERY_STRING}" pattern="(.*_escaped_fragment_=.*)" />
                        <!-- used to capture the scheme/protocol for use in the rewrite below --> 
                        <add input="{CACHE_URL}" pattern="^(https?://)" />
                    </conditions>
                    <!-- send the request to the AjaxSnapshots service -->
                    <action type="Rewrite" 
                    url="http://api.ajaxsnapshots.com/makeSnapshot?url={UrlEncode:{C:2}{HTTP_HOST}:{SERVER_PORT}{UNENCODED_URL}}&amp;apikey=<YOUR API KEY>" 
                    logRewrittenUrl="true" appendQueryString="false" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>

An example for Prerender.io can be found here.

After applying the web.config to the website, the browser directly returns a 404, which means that the website behind the target URI is wrong. This is because Azure WebSites enables the rewrite module, but rewriting URLs to an external target additionally requires a reverse proxy module. This module, which is part of the Application Request Routing (ARR) in IIS, is not activated in Azure WebSites by default. Thanks to the following nice trick, it’s possible to use the reverse proxy module in Azure WebSites as well:

http://ruslany.net/2014/05/using-azure-web-site-as-a-reverse-proxy/
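In short, the trick described there drops an applicationHost.xdt transform into the site root that switches the ARR proxy feature on. A sketch of such a transform, based on that article, could look like this (verify against the linked post before using it):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <proxy xdt:Transform="InsertIfMissing" enabled="true" />
  </system.webServer>
</configuration>
```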

As usual, all technical frameworks, components and services described in this blog have been used in production for several weeks in different applications, e.g. the Azure Cost Monitor. These steps should help to get every AngularJS application ready for Google and other bots when the site is hosted on Azure WebSites.

Cache Fighters Part 3 – A real life example combined with grunt-ng-constant

In the last two parts of this article series, the caching behaviour of AngularJS applications was described and a solution for the issues in a typical app was presented.
This last part focuses on a real-life example which demonstrates the author’s favourite solution.

When following this tutorial, Yeoman should be installed on the machine. In addition, the AngularJS generator for Yeoman is required.

Generate a simple application
Yeoman initially generates a skeleton of an AngularJS application. Firing the following command starts the generation process:

yo angular testapp

All this should be done in a directory called testapp because Yeoman does not generate a subdirectory automatically. During the process Yeoman asks for some details, which need to be answered as follows:

yeoman-answers

Integrate ngHelperDynamicTemplateLoader
The dynamic template loader is the base component in AngularJS applications to handle the caching issues for views and partials. It can be installed with the following command line:

bower install ng-helper-dynamic-template-loader --save

After that the HTTP request interceptors need to be registered, which can be done by adding the following lines in the configuration section of the application, typically implemented in the app.js file:

'use strict';

angular
  .module('testappApp', [
    'ngAnimate',
    'ngCookies',
    'ngResource',
    'ngRoute',
    'ngSanitize',
    'ngTouch',
    'ngHelperDynamicTemplateLoader'
  ])
  .config(function ($routeProvider, $dynamicTemplateLoaderProvider) {

    $dynamicTemplateLoaderProvider.registerInterceptors();

    $routeProvider
      .when('/', {
        templateUrl: 'views/main.html',
        controller: 'MainCtrl'
      })
      .when('/about', {
        templateUrl: 'views/about.html',
        controller: 'AboutCtrl'
      })
      .otherwise({
        redirectTo: '/'
      });
  });

Generate a deployment UUID
The custom caching strategy will be based on a deployment UUID. Whenever the deployment UUID changes, the system changes the URI for the views & partials. Two additional modules are required: grunt-ng-constant is responsible for generating an individual build-related configuration file, and the node-uuid module generates a new deployment UUID whenever one is required. The modules can be installed via npm as follows:

npm install grunt-ng-constant --save-dev
npm install node-uuid --save-dev

After that a new grunt task is available and the following configuration needs to be added to the Gruntfile. This config section builds an app-env.js file during every build which contains the needed deployment UUID:

ngconstant: {
  options: {
    dest: '<%= yeoman.app %>/scripts/app-env.js',
    wrap: '"use strict";\n\n {%= __ngModule %}',
    name: 'app.env'
  },
  dist: {
    constants: {
      envDeployment: {
        deploymentUUID: uuid.v4()
      }
    }
  },
  serve: {
    constants: {
      envDeployment: {
        deploymentUUID: uuid.v4()
      }
    }
  }
}

Don’t forget to require the uuid module somewhere at the start of the existing grunt file:

var uuid = require('node-uuid');

Last but not least, the grunt-ng-constant task needs to be added to the build, test and serve tasks to ensure that the config file is generated:

grunt.registerTask('build', [
  'clean:dist',
  'ngconstant:dist',
  // ...
]);

// inside the 'serve' task:
grunt.task.run([
  'clean:server',
  'ngconstant:serve',
  // ...
]);

grunt.registerTask('test', [
  'clean:server',
  'ngconstant:serve',
  // ...
]);

When executing the configuration, a module called "app.env" is generated which needs to be added to the dependency list of the AngularJS application:

angular
  .module('testappApp', [
    'app.env',

Additionally, app-env.js needs to be registered in index.html so that it is loaded during application start:

<!-- build:js({.tmp,app}) scripts/scripts.js -->
<script src="scripts/app-env.js"></script>

Build a custom caching strategy
Everything is settled for the deploymentUUID-based caching strategy. In ngHelperDynamicTemplateLoader every caching strategy is implemented as a specific service, which can be scaffolded with Yeoman:

yo angular:service CustomTemplateCaching

The following service should be added to the application to ensure that the generated deploymentUUID is used:

'use strict';

angular.module('testappApp').service('CustomTemplateCaching', ['envDeployment', function(envDeployment) {
  var self = this;

  self.processRequest = function(requestConfig) {
    if (requestConfig.url.indexOf('?') === -1) {
      requestConfig.url = requestConfig.url + '?v=' + envDeployment.deploymentUUID;
    } else {
      requestConfig.url = requestConfig.url + '&v=' + envDeployment.deploymentUUID;
    }
    return requestConfig;
  };
}]);

The service can be assigned to the template loader as a caching strategy in the run region of the application:

.run([ '$dynamicTemplateLoader', 'CustomTemplateCaching', function($dynamicTemplateLoader, CustomTemplateCaching) {
  $dynamicTemplateLoader.setCachingStrategyService(CustomTemplateCaching);
}]);

Finally after starting the application with the command:

grunt serve

the browser loads all of the templates with a special version parameter in the URL. The version parameter contains the deploymentUUID, which means the system uses the cache as long as nothing has changed, but after an update it loads a fresh, uncached version of the views.

caching

The illustrated caching provider can be improved by using a grunt task that builds hash keys for every HTML file. The hash keys can then be used in the request, so that an HTML file is only reloaded from the server when the specific view has changed – a more granular approach.
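Such a per-file strategy could reuse the same service skeleton and simply look the hash up in a build-generated map instead of using one global UUID. A hedged sketch, in which the templateHashes map and the helper name are hypothetical and would be produced by a grunt task hashing every HTML file:

```javascript
// Hypothetical per-file caching strategy: append a content hash per template
// instead of one global deployment UUID. 'templateHashes' would be generated
// at build time by a grunt task.
var templateHashes = {
  'views/main.html': '1f3a9c',
  'views/about.html': '77b0e2'
};

function versionedUrl(url, hashes) {
  var hash = hashes[url];
  if (!hash) { return url; }           // unknown template: leave the URL as-is
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'v=' + hash;
}
```

Only templates whose hash changed get a new URL, so everything else keeps coming from the cache.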

The solution stays with the eager-loading approach for views & partials, which works well in bigger applications. For smaller and mid-size applications, compiling everything into a single JavaScript file is fine as well.

This article should help solve the view and partial caching issues in AngularJS applications. The described approach has been used successfully in production applications, e.g. the Azure Cost Monitor, over the last weeks. The sample application developed in this article can be found here.

Understand your service types in the Azure Cost Monitor

The Azure Cost Monitor is able to analyse and manage every existing Azure service. In the past, the different service types could sometimes be unclear, especially for people with big subscriptions and a lot of services, because the Azure Cost Monitor only used an icon to indicate the service type:

nontooltip

Starting today, the Azure Cost Monitor also offers a tooltip with a clear explanation of the service type whenever a user moves the mouse over the icon. It also works on tablets and smartphones, whenever users touch the icon of a specific service.

tooltip

This little feature should really help to reduce service type confusion in the Microsoft Azure Cloud.

Cache Fighters Part 2 – How to handle eager loaded partials/views

The previous part of this article series described the general behaviour of an AngularJS app when it comes to web caching. Using Yeoman and the generated grunt tasks solved many items on the existing caching list. This article focuses on the different strategies to handle views & partials.

Views & partials are HTML files that are loaded by the AngularJS application on first use. Whenever a user navigates to a new sub-page or uses a widget that was never loaded before, AngularJS triggers an AJAX call to load the corresponding HTML file. Inline templates that are part of the JavaScript files are an exception, of course.

Like every HTTP request, the AJAX call can be cached by a web cache such as the browser. Out of the box, Yeoman has no grunt task on board to rename these files, which would prevent caching.
But there are two options to solve this problem:

The first solution ensures that all external templates are combined into one single file as inline templates. During this process a grunt task generates a single JavaScript file which contains all the views & partials. The grunt-angular-templates module implements this behaviour, which, in combination with the JavaScript minification described at length in the last article, solves the caching issue.
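A hedged sketch of what such a Gruntfile section could look like – the paths and the usemin target follow the usual Yeoman layout and may differ in a concrete project:

```javascript
// Hypothetical grunt-angular-templates configuration: compile all view HTML
// files into Angular's $templateCache so no template AJAX calls remain.
var ngtemplatesConfig = {
  app: {
    cwd: 'app',                         // Yeoman's application root
    src: 'views/{,*/}*.html',           // all views & partials
    dest: '.tmp/templateCache.js',      // generated inline-template file
    options: {
      module: 'testappApp',             // module the templates are attached to
      usemin: 'scripts/scripts.js'      // merge into the minified bundle
    }
  }
};
```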

The second option is to implement behaviour similar to what Yeoman’s grunt tasks ensure for JavaScript files: whenever a view or partial is updated, it gets a different name and is reloaded from the server instead of the cache. This lets the application use as many cached files as possible and only reload over the network when something has changed.

For the Azure Cost Monitor, I wrote a component called ngHelperDynamicTemplateLoader which implements exactly this behaviour. The component registers a standard interceptor that adds a version parameter to every view-loading AJAX call. The default caching strategy simply adds a timestamp every time a view is loaded, which means no caching at all. Voilà, that is what we wanted to achieve.

In the next part of this article series I’m going to describe a real life example. This example will use grunt-ng-constant to generate context which can be used in a custom caching strategy.

Do you know any other options to deal with this topic? Then please leave a comment to kick off a short discussion about it.

Cache Fighters Part 1 – Caching behaviour of AngularJS Apps

Every website, and that means every AngularJS app as well, needs to deal with different web caches. These can be different proxy servers or even the browser cache as a very extensive cache.

Caching, on the one hand, can be very important because it helps any application run as fast as possible, but on the other hand it can cause different problems, e.g. when updating a published app. That’s what I had to deal with when I prepared the user interface update of the Azure Cost Monitor – an AngularJS app for cloud cost management.

So this article describes how an AngularJS app behaves under the influence of different caches and gives an outlook to different strategies how to deal with potential problems.

An AngularJS app is technically a normal website which uses a lot of AJAX calls. This means that the app contains the following elements:

  • index.html (the main entry point)
  • Views/Partials
  • Javascripts
  • Stylesheets
  • Media-Assets

There are different caching headers, as described here. As long as the author of an app doesn’t intervene, all files are cached, which means it’s hardly predictable when the browser will ever try to load the different assets from the server again.

Last-Modified is a “weak” caching header in that the browser applies a heuristic to determine whether to fetch the item from cache or not. (The heuristics are different among different browsers.)

This caching behaviour is good because the application should of course perform well. Loading big files from the cache is certainly faster than downloading them from a remote server – but when the application gets updated, different caching problems might occur that require some special handling.

Many people are using Yeoman as a scaffolding application. The awesome AngularJS generator in Yeoman generates a Gruntfile which compiles the AngularJS application. Folks coming from C/C++ might wonder what compiling means in this context. Here it just means preparing the different assets to be hosted on a production server, which normally includes dealing with a couple of caching issues. Yeoman uglifies and minifies the JavaScripts, stylesheets, images and HTML files. During this process all JavaScripts, stylesheets and images get a new, hash-based unique name. This is the first trick to prevent caching by a web cache like the browser. Keep in mind: cached resources are addressed by absolute URLs.
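In the generator’s Gruntfile this renaming is typically handled by a filerev task. A rough, hypothetical excerpt – the exact globs depend on the generator version:

```javascript
// Hypothetical Gruntfile excerpt: grunt-filerev appends a content hash to
// every static asset's file name; usemin then rewrites all references so the
// renamed files are actually used by index.html.
var filerevConfig = {
  dist: {
    src: [
      'dist/scripts/{,*/}*.js',
      'dist/styles/{,*/}*.css',
      'dist/images/{,*/}*.{png,jpg,jpeg,gif,webp,svg}',
      'dist/styles/fonts/*'
    ]
  }
};
```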

So, back to the list above, the caching behaviour of the following three elements is solved:

  • Javascripts
  • Stylesheets
  • Media-Assets

Pretty cool 😉 but how to handle the rest?

The index.html should never be cached in an AngularJS application because this file contains all references to stylesheets and JavaScripts. If a web cache, like the browser, cached this file, it would become very difficult to update a published app. In Azure WebSites, e.g., the preconfigured IIS takes care of this via correct cache headers. This ensures that the AngularJS application can be updated at any time.
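For hosters that don’t configure this automatically, the same effect can be sketched in a web.config fragment – a hypothetical example, to be verified against the concrete IIS setup:

```xml
<!-- hypothetical web.config fragment: never let clients cache index.html -->
<location path="index.html">
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</location>
```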

Last but not least, the views/partials are left on the list above. These are HTML files which are loaded via AJAX calls when they are needed. The next part of this article series will describe how to handle them, using caching as much as possible but preventing it in certain cases. So stay tuned…

Website Launch Announcement: azure cost monitor launches new site

The azure cost monitor team is pleased to announce the launch of their newly designed website, which goes live today and is located at the same address: https://costs.azurewebsites.net.

It has been crafted to reflect what our users told us they need and it also builds upon new technology capable of addressing future needs. The azure cost monitor now enables our users to analyse and manage their Microsoft Azure Costs even more intuitively:

More modern and user-friendly design
The new site design, aside from being aesthetically pleasing, is more agile, interactive, and is easier to scan, read and navigate.
It is now using a responsive design, which means that you’ll see essentially the same design optimized for your smart phone, tablet and desktop.

Azure Cost Monitor

Improved and re-designed landing page
We know that landing pages are very important so we have decided to totally re-design and re-organise ours. Our new landing page now displays all important information like features, screenshots, a contact form and log-in possibilities.

Azure Cost Monitor Landing Page


We hope you will visit the new website at https://costs.azurewebsites.net and acquaint yourself with the new design. And while you’re there, don’t hesitate and let us know what you think by leaving a message. In the coming months, we hope to continue improving the site, so that it best serves all of your Azure cloud cost monitoring needs.