Cache Fighters Part 2 – How to handle eager loaded partials/views

The previous part of this article series described the general behaviour of an AngularJS app when it comes to web caching. Using Yeoman and the generated Grunt tasks already solves many items on that caching list. This article focuses on the different strategies to handle views & partials.

Views & partials are HTML files which the AngularJS application loads on first use. Whenever a user navigates to a new subpage or uses a widget that was never loaded before, AngularJS triggers an AJAX call to load the corresponding HTML file. Inline templates, which are part of the JavaScript files, are of course an exception.
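For example, a route definition like the following (module, path and controller names are purely illustrative) causes `views/contract.html` to be fetched via an AJAX call the first time the route is visited:

```javascript
// Illustrative route definition: the templateUrl is fetched over HTTP on
// first use and is therefore subject to web caching like any other request.
angular.module('app', ['ngRoute']).config(['$routeProvider', function ($routeProvider) {
  $routeProvider.when('/contract', {
    templateUrl: 'views/contract.html', // loaded via AJAX on first visit
    controller: 'ContractCtrl'
  });
}]);
```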

Like every HTTP request, the AJAX call can be cached by a web cache such as the browser. Out of the box, Yeoman has no Grunt task on board to rename these files, which would prevent caching.
But there are two options to solve this problem:

The first solution combines all external templates into one single file of inline templates. During this process a Grunt task generates a single JavaScript file which contains all the views & partials. The grunt-angular-templates module implements this behaviour, which, in combination with the JavaScript minification described at full length in the last article, solves the caching issue.

The second option is to implement a behaviour similar to what Yeoman's Grunt tasks ensure for JavaScript files: whenever a view or partial is updated, it gets a different name and is reloaded from the server instead of the cache. This lets the application use as many cached files as possible and only go over the network when something has changed.

For the Azure Cost Monitor, I wrote a component called ngHelperDynamicTemplateLoader which implements exactly this behaviour. The component registers a standard interceptor that adds a version parameter to every view-loading AJAX call. The default caching strategy just adds a timestamp every time the view is loaded, which means no caching at all. Voilà, that is what we wanted to achieve.
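A minimal sketch of the idea (this is not the ngHelperDynamicTemplateLoader source; the names are illustrative) consists of a pure cache-busting helper plus a request interceptor that only touches `.html` template requests:

```javascript
// Pure helper: append a version parameter to a template URL.
function addVersionParam(url, version) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'v=' + version;
}

// Wiring it into Angular as a request interceptor. The timestamp "strategy"
// shown here disables template caching entirely, as described above.
if (typeof angular !== 'undefined') {
  angular.module('app').config(['$httpProvider', function ($httpProvider) {
    $httpProvider.interceptors.push(function () {
      return {
        request: function (config) {
          if (/\.html$/.test(config.url || '')) {
            config.url = addVersionParam(config.url, Date.now());
          }
          return config;
        }
      };
    });
  }]);
}
```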

In the next part of this article series I'm going to describe a real-life example. This example will use grunt-ng-constant to generate build-time constants which can be used in a custom caching strategy.

Do you know any other options to deal with this topic? Then please leave a comment to kick off a short discussion about it.

Cache Fighters Part 1 – Caching behaviour of AngularJS Apps

Every website, which also means every AngularJS app, needs to deal with different web caches. These can be proxy servers or even the browser cache, which is a very extensive cache.

Caching, on the one hand, can be very important because it helps any application run as fast as possible, but on the other hand it can cause different problems, e.g. when updating a published app. That's what I had to deal with when I prepared the user interface update of the Azure Cost Monitor, an AngularJS app for cloud cost management.

So this article describes how an AngularJS app behaves under the influence of different caches and gives an outlook on different strategies for dealing with potential problems.

An AngularJS app is technically a normal website which uses a lot of AJAX calls. This means that the app contains the following elements:

  • index.html (the main entry point)
  • Views/Partials
  • Javascripts
  • Stylesheets
  • Media-Assets

There are different caching headers as described here. As long as the author of an app doesn't intervene, all files are cached, which means that it's hardly predictable when the browser will ever try to load the different assets from the server again.

Last-Modified is a “weak” caching header in that the browser applies a heuristic to determine whether to fetch the item from cache or not. (The heuristics are different among different browsers.)

This caching behaviour is good because the application should of course perform well. Loading big files from the cache is certainly faster than downloading them from a remote server. But when the application gets updated, different caching problems might occur, and for these problems some special handling is required.

Many people use Yeoman as a scaffolding tool. The awesome AngularJS generator in Yeoman generates a Gruntfile which compiles the AngularJS application. Folks coming from C/C++ could wonder what compiling means in this context. Here it just means preparing the different assets to be hosted on a production server, which normally includes dealing with a couple of caching issues. Yeoman uglifies and minifies the JavaScript files, stylesheets, images and HTML files. During this process all JavaScript files, stylesheets and images get a hash-based new unique name. This is the first trick to prevent caching by a web cache like the browser. Keep in mind: cached resources are addressed by absolute URLs.

So back to the list above, the caching behaviour of the following three elements is solved:

  • Javascripts
  • Stylesheets
  • Media-Assets

Pretty cool 😉 but how to handle the rest?

The index.html should never be cached in an AngularJS application because this file contains all references to stylesheets and JavaScript files. If a web cache like the browser cached this file, it would become very difficult to update a published app. On Azure Websites, for example, the preconfigured IIS takes care of this via correct cache headers. This ensures that the AngularJS application can be updated at any time.

Last but not least, the views/partials are left on the list above. These are HTML files which are loaded via AJAX calls when they are needed. The next part of this article series will describe how to handle them by using caching as much as possible while preventing it in certain cases. So stay tuned…

ngHelperUserVoice: AngularJS service for the UserVoice API

UserVoice is a SaaS platform that offers different customer engagement tools, e.g. feedback and ticketing tools. A modern web application written in AngularJS should always offer the user the possibility to open a ticket whenever it is needed. One solution is to use the UserVoice contact widget, but if you prefer to use an API, for instance because your contact form has special styles that you don't want to lose, here is the solution:

The new ngHelper component ngHelperUserVoice is a lightweight Angular service for opening tickets in UserVoice. The component follows the Angular way of building single-page applications: it allows the configuration of the module via a provider and offers a standard service:

$uservoice.openTicket("NAME", "EMAIL", "SUBJECT", "MESSAGE").then(function(ticketId) {
  alert("Ticket with id: " + ticketId + " created");
}).catch(function(error) {
  alert("Error: " + error);
});

This component makes it super simple to interact with the UserVoice API. If you like the component, feel free to add other features; I will accept pull requests as fast as possible.

Azure Cost Monitor: X-Mas Update

The time around Christmas gave us the opportunity to fix a couple of small issues which are annoying but not critical in day-to-day business. With this article I would like to highlight the small but important changes in the Azure Cost Monitor:

HTTPS enforced
The Azure Cost Monitor is now available via HTTPS only. Every access to the HTTP URL is redirected to the secure endpoint. This prevents accidental transfer of sensitive data via unencrypted HTTP.

No more hashbangs
In the past the Azure Cost Monitor used URLs with hashbangs, which are unreadable and not easy to remember. The hashbangs aren't used in the URLs anymore, so please update your bookmarks.

Remove Contracts
Are you a customer with multiple EA contracts or a service provider who works for several EA customers? If so, this feature is for you. Users with multiple contracts now see a little trash icon in the contract dropdown, which allows removing a non-active contract from the system. This should give everybody the option to keep their data clean.

Daily Sync
The Azure Cost Monitor now syncs the data of the current month automatically every night. As soon as a new contract comes into the system, the existing historic data is synced as well. So the manually triggered data sync isn't needed anymore, and it will be removed in the next weeks.

Last Sync Time
The last sync time for every month is now visible in the report sheet. This should give everybody more transparency and control over the automated processes in the backend. Whenever you notice a last sync time older than 24 hours, please open a ticket.


We hope these little improvements will help everybody to get a much better experience with the Azure Cost Monitor. If you have any other wishes, feel free to request them in our feedback portal.

Enable AngularJS HTML5 Mode in Azure Websites

A brand-new web application written in JavaScript, e.g. with AngularJS, often uses the hashbang to build URIs which can be processed by the SPA router. These URIs cannot be indexed by most search engine crawlers, and visitors have problems remembering specific resources.

This problem is addressed by the HTML5 mode of AngularJS, or, to be more technical, by the History API of the browser. With HTML5 mode the Angular app is able to change the browser's URI without performing a full page reload. The following URI

https://example.com/#/contract

becomes

https://example.com/contract
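On the client side this boils down to enabling the mode in the module configuration (plus a `<base href="/">` tag in index.html); the module name `app` is illustrative:

```javascript
// Enable HTML5 mode so the router uses the History API instead of hashbangs.
angular.module('app').config(['$locationProvider', function ($locationProvider) {
  $locationProvider.html5Mode(true);
}]);
```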

How to enable HTML5 mode in an Angular application, including the customization of a development server, is described in this article: http://ericduran.io/2013/05/31/angular-html5Mode-with-yeoman/

Great, but when someone visits the app directly with such a URI, the server needs to redirect the request to the index.html page of the application. This requires help from the server side, normally realized with URL rewriting modules. Microsoft has everything on board to define the needed rules in a web.config, as described in this article: http://coderwall.com/p/mycbiq/deep-linking-angularjs-on-windows-azure-iis
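The linked article boils down to a rewrite rule along these lines (a sketch; adjust the conditions to your asset folders):

```xml
<!-- Sketch of an IIS rewrite rule for HTML5 mode: any request that is not
     a real file or directory is rewritten to index.html. -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="AngularJS HTML5 mode" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```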

Hint: The same method can be used to enforce HTTPS when someone visits via HTTP; just follow this link: http://stackoverflow.com/questions/9823010/how-to-force-https-using-a-web-config-file

These little changes should help to give your AngularJS app more visibility on the web.

Deploy AngularJS-Apps to Azure WebSites with Codeship

The Azure Cost Monitor is a SaaS application which is extended and improved frequently. Updates take place several times a week without interrupting end users. This agile approach of software development requires continuous integration and a structured deployment process to keep quality and development on a high level. Written in javascript – nodejs for the backend and angularjs for the frontend – the application is deployed to Azure WebSites, the fully managed web hosting solution of Microsoft.

Azure supports deployment from GitHub, which works very well for Node.js applications, but AngularJS applications need to be compiled to minify the code and rename the assets, which prevents caching issues in the browser. The guys from Codeship offer one of the best cloud platforms to enable continuous deployment for Azure WebSites and many other services.

This tutorial describes a continuous deployment process for an AngularJS application to Azure WebSites, based on Codeship. The source code is hosted in a public GitHub repository and the AngularJS app is scaffolded with Yeoman.

Step 1: Create your AngularJS application

First of all, a new Angular application needs to be created with the following Yeoman command:

yo angular

This example uses Sass with Compass and Bootstrap, so these options need to be selected in Yeoman's creation wizard. After creating the application, it makes sense to test whether everything was generated correctly by starting the development server with this command:

grunt serve

grunt-wiredep is a Grunt task which is responsible for injecting the Bower dependencies into the HTML files. This task has an issue in version 1.7.0, so it needs to be updated to version 1.9.0 via package.json. The following line needs to be replaced in the package.json:

"grunt-wiredep": "^1.7.0",

to

"grunt-wiredep": "^1.9.0",

Last but not least the updated package needs to be installed with

npm install

Bower is the component which is used to install all needed web components. The Codeship sandbox has no Bower installed out of the box, so the dependency needs to be added to the package.json as well, with the following command:

npm install bower --save-dev

Besides that, the command-line interface for Grunt needs to be part of the package.json, so that it gets installed into the Codeship sandbox:

npm install grunt-cli --save-dev

Step 2: Beam Compass & Sass into the Codeship sandbox

Yeoman uses Sass and Compass for modern CSS compilation, so both are required in the Codeship sandbox as well. All used tools are Ruby gems, so a simple Gemfile with the following content defines the dependencies:

source "https://rubygems.org"

gem 'sass', "3.2.9"
gem 'sass-globbing', ">= 1.1.0"
gem 'compass', "0.12.2"
gem 'breakpoint', "2.0.5"
gem 'singularitygs', "< 2.0.0"
gem 'chunky_png', "1.3.3"

All components need to be installed with the command

bundle install

which generates the Gemfile.lock.

Step 3: Activate the CI build in Codeship

All changes described in the steps above should be committed to a GitHub repository. After that, everything is prepared to create a new project in Codeship which is connected to the GitHub repository:


The following setup commands of the Codeship project should be entered to install all dependencies and kick off the Angular build:

rvm use 1.9.3
bundle install
npm install
bower install
grunt build

All build output is stored in the dist directory. After all this preparation, a simple push to the project's GitHub repository lets Codeship start building your application instantly. The first build takes a little longer because Codeship prepares the dependency caches; subsequent builds should be done in about 1:30 minutes.

Step 4: Prepare your Azure WebSite

A new Azure WebSite needs to be created in the Azure Management Portal. Every Azure WebSite supports different ways to publish content; besides traditional FTP, Azure WebSites support deployment from source control. In the specific case of this tutorial, the deployment-from-source-control feature should be set to the option “Local Git Repository”:


This means Azure hosts a Git repository, and everything that is committed into this repository becomes active in the Azure WebSite. After that, the next goal is to let Codeship commit all changes into this repository after every successful build.

Step 5: Bring deployment online

Codeship allows adding a deployment script for specific branches under the project settings. In this sample, every committed change from the master branch is deployed. In a production environment, the deployment should only be triggered from a specific deployment branch, e.g. deploy/azure.

The following script deploys the compiled angular app to the Azure Website:

# Configure git
git config --global user.email "$GITMAIL"
git config --global user.name "$GITUSER"
# Clone the whole azure repository
cd
git clone $GITREPO azure
# Add the compiled app
cd azure
rm -R -f *
cp -R -f ~/clone/dist/* .
git add -A
git commit -m "Code shipped"
# Push to azure
git push origin

Last but not least, all used variables need to be stored as environment settings in Codeship. This prevents Codeship from printing sensitive data to the logs and decouples the configuration settings from the deployment script:


The GITREPO variable contains the HTTPS-based Git URL of the local Git repository of the Azure WebSite. It includes username and password in the URL, e.g.

https://{{user}}:{{password}}@{{website}}.scm.azurewebsites.net:443/{{website}}.git

The deployment credentials can be found in the Azure portal for the Azure website.

Finally, a simple push into the Git repository of the project triggers a build via Codeship and a deployment into the right Azure WebSite. Users do not lose access to the SaaS application, except for a couple of seconds after redeploying while IIS loads the new application.

This fully automated process supports every agile software development process and helps you to focus on building features instead of updating servers.

Happy Deploying 🙂

Clear cost control based on service types

The Microsoft Azure cloud offers many different resources, e.g. virtual machines, cloud services or websites. Some of these resources produce comprehensible costs, e.g. virtual machines, but some resources produce costs that are indirect and hard to analyse, like costs for “Data Traffic”, “Visual Studio Online” or “Data Management”.


To solve this problem and facilitate cost control, all service types are now visible in the Azure Cost Monitor. The new category for grouping these service types helps to identify the cost drivers in different subscriptions and projects.

Azure Cost management with resource tagging

When starting with Microsoft Azure, teams or small departments often begin with one subscription. As time goes by, the subscription contains more and more resources, without a simple option to migrate them into new project-specific subscriptions. Managing costs for this kind of subscription is not easy. Because of that, the Azure Cost Monitor now offers a new feature which should really help to stay in control.

The new “Cost Tags” feature is an easy and comfortable way to categorize services. These tags can easily be used to visualize costs per responsible person, department, project or cost center. This allows cost management on a very granular level, aligned to the individual, existing organisational structure.

The following screenshots show some best practices:

Categorize your services by project groups that are responsible for the costs

Categorize your services by resource type, e.g. storage or compute

This should really help to bring more control and transparency into the Azure EA agreement. If you see any other requirements or needs, feel free to drop the idea in our feedback portal.

BTW: As soon as Microsoft Azure delivers the new Azure resource tagging, we will allow mixing our custom tags with the Azure resource tags. There will be no need to do anything manually on your side.

Azure Enterprise Agreement: Freedom & cost control

Microsoft offers a very lucrative deal for Azure customers: when your company is willing to make an upfront investment, it's possible to get an enterprise agreement. Besides a dramatic price reduction, this agreement gives your engineers the freedom to consume Azure services as much as they need, the operations team is able to assign subscriptions to every team, and your company gets an invoice that is compliant with local financial regulations.

But what about your financial controllers? How can they keep track of the costs to ensure that the limits are not exceeded?

Here are some best practices that Microsoft offers to partly achieve this:

  1. Check your EA reports on a weekly basis
    Microsoft delivers weekly consumption reports that need to be checked on a regular basis. A second important source is the monthly delivered summary e-mail about the monetary commitment balance.
  2. Assign your teams, business units or projects to different subscriptions
    Microsoft allows creating as many subscriptions as needed; this means it's possible to give every team or project a separate subscription paid from the enterprise agreement. That way, access rights and roles can be modelled.

After working with these two options, I recognised that not all of my requirements for managing our costs efficiently were fulfilled:

  1. Multiple usages of the same service should be cumulated into one report entry.
  2. Analysing costs on subscription level should work without a complex pivot table.
  3. Tagging different resources should help to manage multiple projects in a subscription.
  4. Cost prediction should be possible based on the data of the past months.
  5. Cost alerts should be sent to EA customers when specific limits are exceeded.

So I decided to build a little service based on the Azure platform, called Azure Cost Monitor:

The service is able to process the CSV files from the EA portal and gives a graphical overview including a subscription drill-down. It is based exclusively on modern cloud technologies, e.g. Azure Table Storage, Azure WebSites and Azure WebJobs. That's why the Azure Cost Monitor scales as well as Azure scales, and I would like to invite all of you to join this service (https://costs.azurewebsites.net).


Log in with your existing Azure account or LiveID. After that, enter your EA number to start analysing your data. A useful report about the cost consumption of your subscriptions will be shown. If you don't want to enter your personal data right now, feel free to check the demo mode by entering the demo EA number.


I would like to extend this service aligned to the requirements you bring from the field, so please visit the feedback portal of the Azure Cost Monitor and enter your ideas, or vote and comment on existing ones.

Validate authentication_token from Microsoft LiveID with node & express-jwt

With live.com, Microsoft offers a service which can be used as an identity provider for your application. I lost a couple of hours when I tried to validate the issued authentication token from Microsoft's IdP with the help of express-jwt in my Node application.

When an application requests a token from live.com via the OAuth2 implicit flow, an access_token is issued which is meant to be used for the Live services themselves. In addition, an authentication_token is issued in the standard JWT format. This token can be used as the authentication token in your Node server.

Normally, validating a JWT is very simple with the node + express + express-jwt stack. Just configure the middleware and enter your application secret:

var jwt = require('express-jwt');

app.use(jwt({ secret: '<<YOUR SECRET>>', audience: '<<YOUR AUDIENCE>>', issuer: 'urn:windows:liveid' }));

The Microsoft dashboard offers an application/client secret for your Live application. Microsoft uses this secret in a very specific way as the key for generating the signature of your JWT. I found the following solution in the history of the LiveSDK GitHub repository.

The signing key for the token is a SHA-256 hash of the given application secret plus the fixed string “JWTSig”. I ended up with the following code to generate the real secret for validation:

var crypto = require('crypto');
var secretTxt = '<<YOUR APPLICATION SECRET>>';
var sha256 = crypto.createHash('sha256');
sha256.update(secretTxt + 'JWTSig', 'utf8');
var secretBase64 = sha256.digest('base64');
var secret = new Buffer(secretBase64, 'base64');

The generated secret can then be used in the express-jwt middleware as follows:

app.use(jwt({ secret: secret, audience: '<<YOUR AUDIENCE>>', issuer: 'urn:windows:liveid' }));

With this little piece of code it's super simple to verify JWT tokens from live.com. I hope Microsoft starts documenting these little secrets better in the future.