Wednesday, December 21, 2011

Announcing the Enterprise Library Integration Pack for Windows Azure

Last week, we released a new integration pack for Enterprise Library, specifically targeting Windows Azure.

Most of Enterprise Library just works on Windows Azure, so the focus for this integration pack was to add support for additional Azure-specific scenarios by providing two new blocks:

  • Autoscaling Application Block (codename WASABI)
  • Transient Fault Handling Application Block (codename TOPAZ)


Tailspin Surveys reference implementation

To demonstrate the Windows Azure Integration Pack in a realistic application, we extended an existing reference implementation, called Tailspin Surveys, to use Enterprise Library.

Tailspin Surveys is a multi-tenant cloud application that allows tenants to create surveys and analyze survey results. To handle the fluctuations in load that are inherent in such applications, Tailspin Surveys uses the Enterprise Library Autoscaling Application Block to adjust the number of instances to match the load.

Tailspin IT Operators can use the management application to monitor the Tailspin Surveys application and adjust the autoscaling rules.

WASABI can gather a lot of information about the target environment, such as performance counter values and log messages. By turning this information into a graphical display, you can easily see how the load progressed over time and how the number of instances was adjusted to accommodate these fluctuations.


For example, you can see the number of instances as it changes over time, and exactly when it was adjusted.


When you click on a scaling event, you can see all the log messages associated with it: for example, which rules were considered and which rule ultimately triggered the scaling action.

You can also see the values for all the metrics that are gathered. For example, if you decide to monitor CPU levels, memory pressure, and queue lengths, you can see the values of these metrics as they change over time, giving you insight into how the load on your application develops.

Running the Tailspin reference application

The Autoscaling Application Block doesn’t work against the development fabric: you can’t autoscale your development environment. So the most realistic way of exploring the Tailspin reference application is to deploy it to Windows Azure and actually run it there. There is an extensive installation document in the developer guide.

However, if you don’t wish to run it in the cloud, you can also run it in simulated mode. In simulated mode, the Autoscaling Application Block is hosted in the management web application, all the data it uses is stored in memory, and the interactions with the Windows Azure Management API are simulated.

Simulated mode allows you to play with the management application, without having to deploy it. You can edit rules, simulate load and then see the results reflected in the graphs.

Friday, October 14, 2011

Windows Azure Autoscaling Block Beta out now

Today, we shipped the Beta for the first new block in the Enterprise Library Windows Azure Integration Pack, called the Windows Azure Autoscaling Block.

This block allows you to use your Windows Azure instances more effectively by automatically scaling the number of instances or changing your application’s configuration as the load changes.

By providing a set of configurable rules, you can closely control how your application should handle varying levels of load. For example, you can have the autoscaling block monitor several metrics, such as the CPU level and memory usage of your web roles or the number of messages in a queue, and increase or decrease the number of instances when certain threshold values are exceeded.

There are basically two types of rules you can configure:

  • Constraint rules, which set explicit boundaries on the number of instances. These rules guard your SLA (by ensuring there is a minimum number of instances) and your wallet (by ensuring there is never more than the maximum number of instances).
  • Reactive rules, which monitor a set of metrics (such as performance counters) and take actions when threshold values are exceeded.
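To make the interplay between the two rule types concrete, here is a minimal conceptual sketch. This is not the autoscaling block’s actual API (the block is a .NET library and its rules live in an XML rules store); the function and rule representation below are purely illustrative, and real WASABI reconciles multiple matching rules rather than taking the first match.

```python
# Conceptual sketch only: reactive rules propose a change,
# constraint rules clamp the result between a minimum and a maximum.

def desired_instance_count(current, metrics, reactive_rules, minimum, maximum):
    """Apply reactive rules, then clamp the result with the constraint rule."""
    target = current
    for rule in reactive_rules:
        if rule["when"](metrics):             # e.g. CPU above 80%
            target = current + rule["delta"]  # scale up or down
            break                             # first match wins (simplification)
    # Constraint rules guard the SLA (minimum) and the wallet (maximum).
    return max(minimum, min(maximum, target))

metrics = {"cpu": 85, "queue_length": 120}
rules = [
    {"when": lambda m: m["cpu"] > 80, "delta": +1},
    {"when": lambda m: m["queue_length"] < 10, "delta": -1},
]
print(desired_instance_count(current=3, metrics=metrics,
                             reactive_rules=rules, minimum=2, maximum=5))  # → 4
```

Even in this toy form you can see the key design point: reactive rules never get the final say, because the constraint boundaries are always applied last.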

The autoscaling block supports a new concept, called application throttling. This feature allows you to define several modes of operation in your application and switch between these modes as the load varies. For example, you can create a rich version and a lightweight version of your application; when the load increases beyond certain threshold values, you can automatically switch to the lightweight version.
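As a rough sketch of the throttling idea (again, not the block’s real API; the mode names and thresholds below are made up for illustration), mode selection boils down to mapping a load metric onto a mode of operation:

```python
# Illustrative sketch of application throttling: instead of (or before)
# adding instances, pick a cheaper mode of operation under high load.

THROTTLING_MODES = [
    # (load threshold in percent, mode) — checked from highest load down
    (80, "lightweight"),  # above 80% load: disable expensive features
    (0, "rich"),          # otherwise: full experience
]

def select_mode(cpu_percent):
    for threshold, mode in THROTTLING_MODES:
        if cpu_percent > threshold:
            return mode
    return "rich"

print(select_mode(90))  # → lightweight
print(select_mode(40))  # → rich
```

The attraction of throttling is that switching modes is near-instant, whereas spinning up extra instances takes time, so the two mechanisms complement each other.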

The Enterprise Library Windows Azure Integration Pack Beta also ships with a reference implementation, called Tailspin Surveys. This example application has been used by other patterns & practices projects in the past and has now been adapted to demonstrate features from the Enterprise Library Windows Azure Integration Pack.


The Tailspin Surveys reference implementation contains:

  • A sample rule editor, which can be used to edit the rules configuration file.
  • A sample service information editor, which can be used to edit the service information file that describes your Windows Azure environment.
  • Monitoring through several graphs, which allow you to visualize the information gathered by the autoscaling block. For example, you can see the actual, minimum, and maximum number of instances as they change over time. You will also see the scaling actions and be able to retrieve detailed log messages explaining which rules triggered them.
  • A sample log viewer, which provides a more readable way to view the log messages generated by the autoscaling block.
  • A sample load generator, which simulates load on the application and easily allows you to see the autoscaling block in action.

For more information about the Windows Azure Autoscaling Block, check out the public announcement, or grab the binaries from CodePlex.

Tuesday, July 19, 2011

Enterprise Library Azure Integration Pack Public Feature Voting

Hi All!

Patterns & practices is about to start a new Enterprise Library integration pack, called the Azure Integration Pack. The goal of this integration pack is to make developing enterprise-scale applications on Windows Azure easier, either by modifying the current application blocks or by creating new ones.

Since we only want to address problems that you as a developer are facing right now, we need your input!


Your opinion matters!

On the voting website, you’ll find a list of all the stories that we are thinking of addressing. You can vote for your favorite stories or even submit stories of your own. We’ll do our best to address the stories with the most votes. You get (only) 20 votes to do with as you please, so use them wisely.

This is a great opportunity to influence the direction for the Azure Integration Pack, so I’d say: Happy voting!

Monday, June 27, 2011

Enterprise Library 5.0 Windows Azure Integration Pack coming

I’m working for patterns & practices again! Though this time as a freelancer!

Right now, we’re starting work on the Enterprise Library 5.0 Windows Azure Integration Pack!

Windows Azure Integration Pack

Enterprise Library 5.0 already works very well in Windows Azure. Most of the blocks can easily be used within Windows Azure.

The goal for the Windows Azure Integration Pack is to adapt Enterprise Library to support many Azure-only scenarios. This might mean making changes to the current blocks, as well as creating new ones.

We’re still figuring out which scenarios the Azure Integration Pack should support, and you can help us frame this effort.

Your input is welcome

We have created a survey to determine how you are using Windows Azure and how Enterprise Library can help you build even better Azure applications.

Go to the survey!

The plans for the near future

In the near future, we’ll put up a list of scenarios and user stories for Enterprise Library so you can vote on their priorities.