A tutorial on Node.js for dummies

 

In this world of high-performance (low response times, almost real-time communication) and scalable (millions of user requests) Web applications, Node.js is gaining popularity every day. Some say that Node.js is the hottest technology in Silicon Valley, used by tech giants such as VMware, Microsoft, LinkedIn, eBay, Yahoo, etc.

So, what is Node.js?

In simple words, Node.js is an event-driven JavaScript runtime environment on the server side. Node.js runs on the V8 JavaScript engine developed by Google and achieves high throughput via non-blocking (asynchronous) I/O in a single-threaded event loop, as well as from the fact that V8 compiles JavaScript into native machine code at high speed.

The real magic behind the high performance of Node.js is the event loop. In order to scale to large volumes of user requests, all I/O-intensive operations are performed asynchronously. The traditional threaded approach, where a new thread is created to process each user request, is cumbersome and consumes unnecessary resources, specifically memory. To avoid this inefficiency and simplify the traditional multi-threaded programming model, Node.js maintains an event loop which manages all asynchronous operations for you. When a Node.js application needs to execute an expensive operation (I/O, heavy computation), an asynchronous task is sent to the event loop along with a callback function, and the application keeps on executing the rest of its code. The event loop then keeps track of the execution status of the asynchronous task, and when it completes, the callback is called with the results and they are returned to the application.

The event loop (together with the runtime’s underlying thread pool) efficiently schedules and optimizes the execution of these tasks. Leaving this responsibility to the event loop allows developers to focus on the application logic that solves the real problem, which ultimately simplifies asynchronous programming.
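To make the idea concrete, here is a minimal sketch (it uses the built-in fs module and an arbitrary file path, /etc/hostname, which you may need to change on your system) contrasting a blocking read with a non-blocking read delegated to the event loop:

var fs = require("fs");

// Blocking: the whole process waits until the file has been read.
var sync = fs.readFileSync("/etc/hostname", "utf8");
console.log("sync read:", sync);

// Non-blocking: the read is handed to the event loop together with a callback;
// execution continues and the callback fires later with the result.
fs.readFile("/etc/hostname", "utf8", function(err, data) {
  if (err) throw err;
  console.log("async read:", data);
});

console.log("this line runs before the async callback fires");

The last console.log line typically prints before the asynchronous callback runs, which is exactly the behavior the event loop enables.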

Now, let’s go to the practical part of this article. We’re going to install Node.js and create an MVC framework as a starting point for developing our Web application.

In order to install Node.js, we just need to follow the simple instructions on the official site at https://github.com/joyent/node/wiki/Installation.

Our MVC framework comprises the following architectural building blocks:

  • an HTTP server for serving HTTP requests. Unlike other development stacks, where the application server (e.g., Tomcat) and the web server (e.g., Apache) are distinct modules in the solution architecture, in Node.js we implement not only the application but also the HTTP server itself
  • a router to map HTTP requests to request handlers, based on the URL of the request
  • a set of request handlers to fulfill/process the HTTP requests that arrive at the server
  • a set of views to present data/forms to the end user
  • a request data binding mechanism to correctly interpret the data carried in the incoming HTTP request payload

Note that we’re going to implement each building block in its own module in order to follow a core architectural principle: separation of responsibilities, which gives us loose coupling and high cohesion.

The HTTP server

In this section, we’re going to implement the HTTP server building block of our architecture. Let’s create the server.js module in the root directory of the framework with the code shown in Listing 01.

var http = require("http");

function start() {
  http.createServer(function(request, response) {
    console.log("HTTP request has arrived");
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello World");
    response.end();
  }).listen(8888);

  console.log("Server has started at http://localhost:8888/");
}

exports.start = start;

Listing 01. Module server.js

Let’s analyze the previous code. The first line requires the http module that ships with Node.js and makes it accessible through the variable http. This module exposes the createServer function, which returns an object whose listen method receives as input parameter the port to listen on. The createServer function receives as input parameter an anonymous function containing the logic of the Node.js application. This anonymous function receives as input parameters the request and response objects used to handle the details of the communication with the client side. In this case we don’t use the request object; response.writeHead sets the HTTP status 200 (OK) and the content type, response.write sends the content/data to the client side, and finally response.end closes the communication. Whenever an HTTP request arrives at Node.js, the anonymous function is called (technically, this anonymous function is registered as a callback for the server’s request events). In this case, whenever a new request arrives, the message “HTTP request has arrived” is printed on the command line. You may see this message twice, because most modern browsers also try to load the favicon by requesting the URL http://localhost:8888/favicon.ico.

Finally, the server bootstrapping logic is encapsulated in the start function, which is in turn exported by the server.js module.

In the next step, let’s create the main module index.js, which is used to connect and initialize the remaining modules as shown in Listing 02. In this case, we’re going to import the internal server.js module and call the start function.

var httpServer = require("./server");

httpServer.start();

Listing 02. Module index.js

And finally, let’s run this first part of the application from your terminal as shown in Listing 03.

node index.js

Listing 03

The router and request handlers

Now we can process an HTTP request and send a response back to the client side. But a Web application commonly needs to do something different depending on the URL of the HTTP request. So we need another abstraction, called the router, whose main job is to decide which logic to execute based on URL patterns.

Let’s suppose that we’re developing a Web application with three features exposed as /app/feature01, /app/feature02 and /app/feature03.

First of all, let’s create the requestHandlers.js module with the handler definitions as shown in the following listing. Each handler is very simple: it just writes a string to the response to be displayed in the browser. If you want to parse the query string or bind posted form data, the querystring module must be used (a short sketch of it follows Listing 04).

function feature01(request, response) {
  console.log("HTTP request at /app/feature01 has arrived");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Feature01 is called");
  response.end();
}

function feature02(request, response) {
  console.log("HTTP request at /app/feature02 has arrived");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Feature02 is called");
  response.end();
}

function feature03(request, response) {
  console.log("HTTP request at /app/feature03 has arrived");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Feature03 is called");
  response.end();
}

exports.feature01 = feature01;
exports.feature02 = feature02;
exports.feature03 = feature03;

Listing 04. requestHandlers.js module
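As a side note, here is a minimal sketch (the sample string below is made up) of how the built-in querystring module parses URL-encoded data; the same idea applies later when we bind posted form data:

var querystring = require("querystring");

// Parse a URL-encoded string into a plain object.
var parsed = querystring.parse("data=hello&count=3");
console.log(parsed.data);  // "hello"
console.log(parsed.count); // "3" (parsed values are always strings)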

Then, let’s create the router.js module with the routing logic as shown in the following listing. In this case, the router receives as input parameters the list of handlers in dictionary form (key == URL pattern, value == reference to the underlying request handler), the pathname representing the current URL, and the request and response objects.

function route(handlers, pathname, request, response) {
  if (typeof handlers[pathname] === 'function') {
    handlers[pathname](request, response);
  } else {
    // No handler is registered for this path, so answer with a 404 status
    response.writeHead(404, {"Content-Type": "text/plain"});
    response.write("404 Resource not found");
    response.end();
  }
}

exports.route = route;

Listing 05. router.js module

Next, we need to refactor the server.js module as shown in the following listing. The url module is required to extract the pathname part of the URL.

var http = require("http");
var url = require("url");

function start(route, handlers) {
  http.createServer(function(request, response) {
    console.log("HTTP request has arrived");

    var pathname = url.parse(request.url).pathname;
    route(handlers, pathname, request, response);
  }).listen(8888);

  console.log("Server has started at http://localhost:8888/");
}

exports.start = start;

Listing 06. Refactoring of server.js module

And finally, let’s refactor the index.js module to register the application URLs and wire the framework modules as shown in the following listing.

var httpServer = require("./server");
var requestHandlers = require("./requestHandlers");
var router = require("./router");

var handlers = {};
handlers["/app/feature01"] = requestHandlers.feature01;
handlers["/app/feature02"] = requestHandlers.feature02;
handlers["/app/feature03"] = requestHandlers.feature03;

httpServer.start(router.route, handlers);

Listing 07. Refactoring module index.js

Now let’s refactor our application to present a Web form and process the input data. Let’s suppose /app/feature01 serves the Web form and /app/feature02 is the endpoint for processing the submitted form.

Now let’s create the views.js module with the definition of the Web form related to the /app/feature01 as shown in the following listing.

function feature01View() {
  var html = '<html>' +
             '<head>' +
             '<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />' +
             '</head>' +
             '<body>' +
             '<form action="/app/feature02" method="post">' +
             '<input type="text" name="data" id="name" />' +
             '<input type="submit" value="Submit data" />' +
             '</form>' +
             '</body>' +
             '</html>';

  return html;
}

exports.feature01View = feature01View;

Listing 08. views.js module

Next, we need to refactor the feature01 request handler to send the Web form back to the end user, as shown in the following listing.

var views = require("./views");

function feature01(request, response) {
  console.log("HTTP request at /app/feature01 has arrived");
  response.writeHead(200, {"Content-Type": "text/html"});

  var html = views.feature01View();
  response.write(html);
  response.end();
}

Listing 09. Refactoring the feature01 request handler function

Now we get to a very interesting part: processing an HTTP POST request. Because the key concept in Node.js is to process everything asynchronously, Node.js delivers the posted data to our application code in chunks, through callbacks bound to specific events. The data event is emitted when a new chunk of data is ready to be delivered to the application code, while the end event is emitted when all the data chunks have been received by Node.js and are ready to be delivered to the application code.

Now, we need to refactor the server.js module in order to add logic that selects how to process the GET or POST method, as shown in the following listing. In the case of the GET method, everything stays the same as before (no changes). In the case of the POST method, we do the following:

  1. declare that the expected data is encoded using UTF-8
  2. register a listener callback for the data event, invoked every time a new chunk of posted data arrives, and append every chunk to the resultPostedData variable
  3. register a listener callback for the end event, invoked when all the posted data has been received and is ready to be delivered to the underlying request handler, passing resultPostedData as a parameter

var http = require("http");
var url = require("url");

function start(route, handlers) {
  http.createServer(function(request, response) {
    console.log("HTTP request has arrived");

    var resultPostedData = "";
    var pathname = url.parse(request.url).pathname;

    if (request.method === 'GET') {
      route(handlers, pathname, request, response);
    } else if (request.method === 'POST') {
      request.setEncoding("utf8");

      request.addListener("data", function(chunk) {
        resultPostedData += chunk;
      });

      request.addListener("end", function() {
        route(handlers, pathname, request, response, resultPostedData);
      });
    }
  }).listen(8888);

  console.log("Server has started at http://localhost:8888/");
}

exports.start = start;

Listing 10. Refactoring of server.js module

We also need to refactor the router.js module as shown in the following listing.

function route(handlers, pathname, request, response, postedData) {
  if (typeof handlers[pathname] === 'function') {
    if (postedData === undefined) {
      handlers[pathname](request, response);
    } else {
      handlers[pathname](request, response, postedData);
    }
  } else {
    // No handler is registered for this path, so answer with a 404 status
    response.writeHead(404, {"Content-Type": "text/plain"});
    response.write("404 Resource not found");
    response.end();
  }
}

exports.route = route;

Listing 11. Refactoring of the router.js module

And finally, let’s refactor the feature02 request handler as shown in the following listing.

function feature02(request, response, postedData) {
  console.log("HTTP request at /app/feature02 has arrived");
  response.writeHead(200, {"Content-Type": "text/html"});
  response.write("Posted data is " + postedData);
  response.end();
}

Listing 12. Refactoring the feature02 request handler function

And to finish this tutorial, I will explain how to return JSON-formatted objects to the client side. This is one of the brightest features of Node.js, because we can build a very scalable and fast back end on Node.js and later consume the related services from any client-side technology that understands plain HTTP, such as a browser, a mobile application, etc.

Because we have developed a good framework, we only need to refactor the feature03 handler as shown in the following listing.

function feature03(request, response, postedData) {
  console.log("HTTP request at /app/feature03 has arrived");
  response.writeHead(200, {"Content-Type": "application/json"});

  var anObject = { prop01: "value01", prop02: "value02" };
  response.write(JSON.stringify(anObject));
  response.end();
}

Listing 13. Refactoring the feature03 request handler function
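As a quick usage sketch (not part of the framework), another Node.js process could consume this JSON endpoint with the built-in http module; the host, port and path below simply match the server we started earlier:

var http = require("http");

http.get({ host: "localhost", port: 8888, path: "/app/feature03" }, function(response) {
  var body = "";
  response.setEncoding("utf8");
  response.on("data", function(chunk) { body += chunk; });   // collect the chunks
  response.on("end", function() {
    var anObject = JSON.parse(body);                          // rebuild the object
    console.log(anObject.prop01);                             // prints "value01"
  });
});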

I hope this tutorial on Node.js for dummies is very helpful for you.

I look forward to hearing your comments anytime you like …

Lessons Learned: What could be a great business idea


I spend part of my time thinking about how to provide solutions using IT technologies. I would like to share some insights I’ve gathered during my short career as an entrepreneur. In this article, I’m talking about what could make a great business idea.

If you want to make this world a better place by providing your own technological solutions and decide to run your own business (aka becoming an entrepreneur), then, from my humble point of view, there are some elements that contribute to the success of a business idea, and they are weighed heavily by investors when evaluating your proposal.

If you were to apply math and create a formula factoring in the key elements for evaluating the potential of a business idea, it would look something like:

Successful business idea = big market + unfair advantage (defensible product) + very good traction + scalable business model

Let’s talk about these factors a little bit more.

  • Big market: A big market offers the potential to monetize at large scale, and it gives you room to pivot the business model in different directions without changing the target market. You can discover big markets by doing market research and reading IT market trend reports from IDC, Gartner, Yankee Group, etc.
  • Unfair advantage (defensible product): Defensibility is essential to survive in today’s highly competitive global markets, especially when you face large companies with unlimited resources that could act as copycats. There is no single right strategy or silver bullet here; it’s up to your creativity.
  • Very good traction: If you have good traction, your product is making your customers happy by solving a real pain (problem). Your customers use the product so frequently that the churn rate is low, which means the customer lifetime is extended, and therefore your revenue stream grows. Also, if your product is sticky (again, happy customers), your customers will speak well about it (positively impacting the word-of-mouth marketing channel; your own customers can indirectly become part of your sales force), producing the desired viral effect, increasing your customer base and, more importantly, reducing your cost of customer acquisition.
  • Scalable business model: This is the intersection between revenue streams and cost structure, or how you make money. If you achieve a scalable business model by acquiring customers at a cost (CAC) far lower than the profit over the entire lifetime of a customer using your product (CLTV > 3×CAC), and you achieve a very low marginal cost (the cost of providing one more unit of service related to your product), then you are doing very well and are ready to scale your business model to its maximum potential (it’s not costly to grow; the machine makes more money).

Although there could be many factors for evaluating what makes a great business idea, I consider the ones above essential for discovering a successful business model and running your own company.

I would really appreciate hearing different worldviews, so what do you think?

The simplest way for archiving and searching emails


In this article/post, I want to share with the entrepreneur community a new business idea that I have in mind, in order to get feedback, comments and thoughts (I would really appreciate it). As a software and enterprise architect, I’m always designing simple, usable (functional) and pretty (good user experience from the aesthetic point of view) architecture solutions and refactoring legacy systems to optimize their architecture. I’m fond of optimization problems (research and practice), specifically the topics of high-performance storage, indexing and processing. Applying the customer development and lean startup concepts, my vision is to (re-)segment the problem/solution space of data archiving toward email archiving. So, I’m thinking of making my contribution to the computing world by providing an email archiving solution optimized for storage cost effectiveness, with a simple user interface for organizing and searching emails very easily (this is my unique value proposition). I’m guessing that the target market (the customer segment in the business model canvas) is individuals and small and medium businesses, which mainly need to store/archive emails for legal and personal purposes over a long period of time. I want to name this service Archivrfy.

Business transactions can involve not only enterprise applications and OLTP systems but also a bunch of emails, for example to register contracts, sales conversations and agreements, as well as evidence of invoices and payment information. Most companies underestimate the effort required to maintain traditional tape backups and to index data in a way that simplifies searches.

In order to protect and retain mission-critical data contained in emails, companies need a new and very simple approach to easily capture, preserve, search and access emails. For example, legal departments see email as an essential factor in their discovery response strategy, to present messages as evidence in a court of law. The volume of email being generated every day is becoming a huge problem, so organizations and individuals can free up storage space by moving emails to archiving vaults.

You can archive emails at low cost, reducing local storage space and complying with legal requirements. For security, data is encrypted while in transit and preserved in the final vaults under an adjustable retention policy. The service is based on cloud computing technology, always available with 99.99% uptime, and scalable with unlimited growth (depending on the available resources in the cloud).

You get a very simple and fast user experience to search (supporting eDiscovery scenarios) and access the archived emails, improving productivity. It’s based on an optimized search algorithm that enriches the content with metadata to organize and dynamically extend the search criteria. It’s also a flexible technology, allowing simple integration with existing platforms and exporting to several portable email file formats.

In order to support the previous scenarios with scalability, high availability and high performance in mind, we need to design a robust architecture for the Archivrfy service, as shown below.

[Figure: Archivrfy solution architecture]

In this article, I’ve shared my vision of an email archiving solution in terms of its unique value proposition and the underlying technical architecture.

I really appreciate any thoughts and feedback to help me improve my business vision.

I look forward to hearing from you … very soon

Architecture patterns for scalability in the cloud

Scalability is the ability of a system (system == software, in the case of computer science) to grow without degradation. That is, if the amount of requests (work) increases significantly, the quality attributes of the software system, particularly its performance, are not impacted negatively, because we can easily add more computing resources.

Common scalability scenarios are requests growing over 30% a month, 500 million page requests a day, a peak rate of ~40k requests per second, and ~3 TB of new data to store a day.

So it’s important to architect/design a system with scalability in mind in advance, because this is a very important quality attribute and it’s very costly to address late in the product lifecycle. Today, scalability is not a hard constraint, because we can grow, in theory, without limit and very cheaply using cloud computing resources; the only requirement is to start from a good architecture.

In this article, I will cover several cloud architecture patterns that support scalability. I will follow a logical evolution of the software architecture according to the level of service the product must provide as the workload increases. In order to make the architecture concrete, I will provide examples based on Amazon AWS.

The simplest architecture pattern is to have the web server/application server software stack and the database server on the same node (AWS EC2), with database backups sent to a highly available storage medium (AWS S3). This infrastructure lives in a single availability zone or data center.

[Figure 01: single-node deployment with backups to S3]

The next architecture pattern improves the availability, scalability and performance of the system. It’s based on the idea of separating dynamically generated data from static data and from the underlying processing flow. We create a virtual volume (AWS EBS) to store dynamic data (mainly relational databases) independently of the node (AWS EC2) that runs the web/application server stack and the database server. If the main node fails, we can instantiate another node (AWS EC2) with the same configuration (web/application/database servers) and mount the dynamic data volume on this new node. Static data is stored in a highly available storage medium (AWS S3). We also make database and log-rotation backups, as well as volume snapshots, to the highly available storage medium (AWS S3). This infrastructure lives in a single availability zone or data center.

[Figure 02: dynamic data on a separate EBS volume, static data on S3]

The next architecture pattern mainly improves the performance and availability of the system. It’s based on the idea of setting up a content delivery network (AWS CloudFront) for static data (text, graphics, scripts, media files, documents, etc.) in order to bring the data closer to the final user and to cache the most frequently used content, reducing latency and increasing throughput when serving content. The AWS CloudFront technology is distributed across servers around the world. We need to register two domains: one for accessing dynamic data and the other for accessing static data. The (web/application/database server) infrastructure lives in a single availability zone or data center, while the AWS CloudFront servers are in different data centers across the world.

[Figure 03: CDN (CloudFront) in front of static content]

The next architecture pattern improves the availability, scalability, reliability, security and performance of the system. It’s based on the idea of separating the architecture artifacts and their underlying processing nodes according to their concerns in the whole picture. In order to achieve this goal, we use a multi-layer architecture, as described below:

  • Front-end layer: the only layer facing the final user. In this layer we have our public IPs and registered domain names (AWS Route 53 or another DNS provider) as well as the load balancers (AWS ELB or another technology such as HAProxy hosted on AWS EC2). Requests arrive at this point over a secure channel (SSL, to protect the data in transit over the Internet) and the processing flow is balanced/forwarded according to the workload of the back-end application servers (achieving high availability, scalability and high performance). The load balancers sit in different sub-networks than the back-end servers (possibly separated by network elements such as routers and firewalls), so an intruder who breaks into this layer cannot proceed further inside (achieving security)
  • Application server layer: the first back-end layer, which mainly hosts the farm of web/application servers that process the requests forwarded by the load balancers (achieving high availability, scalability and high performance). For request processing we can choose from a huge variety of web framework stacks; for example, a common stack might be the Apache HTTPD web server in front, serving HTTP requests, plus Tomcat application server(s) as the servlet container processing business logic. We can also improve performance in this layer by adding caching technology (AWS ElastiCache or another technology such as memcached) for storing master data and (pre-)processed results in memory, in order to avoid recurring processing and reduce the database server workload. We can also use database connection pooling technologies (improving performance) such as PgBouncer to maintain open connections to PostgreSQL servers (note: it’s very expensive to open connections to database servers). I recommend AWS High-CPU Extra Large EC2 machines for these servers (nodes in the web farm)
  • Storage system layer: where our data is persisted; it comprises the relational database servers (RDBMS) and the storage platform. In order to process a huge amount of transactions from the application layer, we establish a master-slave scheme for the RDBMS, so we have one active (master) server serving the transaction requests and replicating the changes to the slave servers (one or more) at a reasonable frequency to avoid outdated data (achieving high availability, scalability and high performance). The master-slave scheme is well supported by several RDBMS such as Oracle, SQL Server and PostgreSQL. Another configuration that improves performance is to have the applications send their database write requests to the master database, while read requests are directed to a load balancer, which distributes those read requests to a pool of slave databases (note: for applications that rapidly write and then read the same data object, this may not be the most effective method of database scaling). It’s worth noting that a master-master scheme is not a good scalable solution: with multiple master databases, each master can modify any data object, which requires either distributed transactions (very costly and locking many objects) or transaction replication with latency between masters, and it is extremely difficult to ensure consistency between masters, and thus the integrity of the data. The database server instances should be AWS High-Memory Extra Large machines in order to support the workload. Finally, for the storage platform, the idea is to have different volumes (AWS EBS) for each kind of data object (transactional data, partitioning and sharding data, master data, indexes, transaction logs, external files, etc.). For transactional data, we need to provision a high level of IOPS to speed up data requests. Static data is stored in a highly available storage medium (AWS S3). We also make database and log-rotation backups, as well as volume snapshots, to the highly available storage medium (AWS S3). It’s worth formatting the volumes with XFS to make snapshot creation easy and to improve file system performance

In general, we need to separate dynamically generated data from static data. The static data is served through CDN mechanisms to reduce latency and increase throughput. In order to implement disaster recovery mechanisms and improve the system’s availability, the idea is to distribute the nodes (load balancers, web/application servers, database servers and memcached servers) across different availability zones or data centers around the world.

The architecture vision is illustrated in the following figure.

[Figure 04: multi-layer cloud architecture across availability zones]

Another scalability technique at the database level is data sharding. A database shard is a horizontal partition of a database; that is, we take a large database and break it into a number of smaller databases spread across servers. With this design principle (horizontal partitioning), rows of a database table are held separately, rather than being split into columns (which is what normalization and vertical partitioning do, to differing extents). Each partition forms part of a shard, which may in turn be located on a separate database server.

Let’s illustrate this concept as shown below. The primary shard table is the customer entity. The customer table is the parent of the shard hierarchy, with the customer_order and order_item entities as child tables. The global tables are the common lookup tables, which have relatively low activity; these tables are replicated to all shards to avoid cross-shard joins. There are some design concerns when you architect your data shards:

  • Generate and assign a unique id to each piece of data
  • Choose the shard id based on the least used shard, and embed the shardId in the primary key itself
  • Whenever an object needs to be looked up, we parse the objectId to get the logical shard id; from this we look up the physical shard id, get a connection pool and run the query there. For example: objectId = 32.XXXXXXX maps to logical shard 32, logical shard 32 lives physically on shard 4, and shard 4 lives physically on host2, schema5 (a minimal lookup sketch follows the figure below)

[Figure 05: shard hierarchy with customer as the primary shard table]
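The lookup described in the last bullet could be sketched as follows (a minimal illustration with made-up shard maps and names, not a production implementation):

var logicalToPhysicalShard = { 32: 4 };                           // logical shard -> physical shard
var physicalShards = { 4: { host: "host2", schema: "schema5" } }; // physical shard -> location

function locateShard(objectId) {
  var logicalShard = parseInt(objectId.split(".")[0], 10); // shard id embedded in the key
  var physicalShard = logicalToPhysicalShard[logicalShard];
  return physicalShards[physicalShard];                    // where to get a connection pool
}

console.log(locateShard("32.1234567")); // { host: 'host2', schema: 'schema5' }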

In this article, I’ve covered the key principles, techniques and architectures related to the scalability of software systems, specifically from the cloud computing perspective, in order to take advantage of this now well-established technology, which fits very well when growing our business and the underlying applications.

Design patterns in Microsoft.NET. Part 2

Introduction

As solution architects, we have to deal with several design patterns when designing our solutions. A design pattern is a way to conceptualize a reusable solution to a commonly occurring problem in a given context in software design. Design patterns are today a way to communicate architectural ideas and decisions through a common language. In this series of articles, I will explain the essential design patterns (from the Gang of Four) with real-world code examples. In this second article, I want to talk about behavioral design patterns.

Talking about the design patterns

Behavioral design patterns

Chain of Responsibility design pattern: It consists of a source of command objects and a series of processing objects, and it’s a way of communicating between objects. Each processing object contains logic that describes the types of command objects it can handle, and how to pass those it cannot handle to the next processing object in the chain. A mechanism also exists for adding new processing objects to the end of this chain.

You can use this design pattern when more than one object may handle a request, when you need to pass a request to one of several objects without specifying the receiver, and when the handlers of a request should be specified dynamically.

The UML diagram for this design pattern is shown in Figure 1.


Figure 1

And an example of this design pattern is shown in Listing 1.

public abstract class Handler

{

public int RequestLimit { get; private set; }

public Handler NextHandler { get; private set; }

 

public abstract void HandleRequest(int request);

 

public Handler(Handler handler, int requestLimit)

{

NextHandler = handler;

RequestLimit = requestLimit;

}

}

 

public class Worker : Handler

{

public Worker(Handler handler)

: base(handler, 10000)

{}

 

public override void HandleRequest(int request)

{

if (request < RequestLimit)

{

Console.WriteLine("{0} handled a {1} request", GetType().Name, request);

}

else

{

if (NextHandler != null)

{

NextHandler.HandleRequest(request);

}

}

}

}

 

public class Manager : Handler

{

public Manager(Handler handler)

: base(handler, 20000)

{}

 

public override void HandleRequest(int request)

{

if (request < RequestLimit)

{

Console.WriteLine("{0} handled a {1} request", GetType().Name, request);

}

else

{

if (NextHandler != null)

{

NextHandler.HandleRequest(request);

}

}

}

}

 

public class SeniorManager : Handler

{

public SeniorManager(Handler handler)

: base(handler, 50000)

{}

 

public override void HandleRequest(int request)

{

if (request < RequestLimit)

{

Console.WriteLine("{0} handled a {1} request", GetType().Name, request);

}

else

{

if (NextHandler != null)

{

NextHandler.HandleRequest(request);

}

}

}

}

 

class Program

{

static void Main(string[] args)

{

SeniorManager seniorManager = new SeniorManager(null);

Manager manager = new Manager(seniorManager);

Worker worker = new Worker(manager);

 

// Run requests along the chain

worker.HandleRequest(5000);

worker.HandleRequest(15000);

worker.HandleRequest(35000);

Console.WriteLine();

manager.HandleRequest(5000);

manager.HandleRequest(15000);

manager.HandleRequest(35000);

Console.WriteLine();

seniorManager.HandleRequest(5000);

seniorManager.HandleRequest(15000);

seniorManager.HandleRequest(35000);

// Wait for user

 

Console.Read();

}

}

Listing 1

Command design pattern: It encapsulates a request as an object, and it’s mainly used to represent and encapsulate all the information needed to call a method at a later time. This information includes the method name, the object that owns the method and values for the method parameters.

Three terms always associated with the command pattern are client, invoker and receiver. The client instantiates the command object and provides the information required to call the method at a later time. The invoker decides when the method should be called. The receiver is an instance of the class that contains the method’s code.

The UML diagram for this design pattern is shown in Figure 2.


Figure 2

And an example of this design pattern is shown in Listing 2.

public interface Command

{

void Execute();

}

 

public class Light

{

public Light() { }

 

public void TurnOn()

{

System.Console.WriteLine("The light is on");

}

 

public void TurnOff()

{

System.Console.WriteLine("The light is off");

}

}

 

public class FlipUpCommand : Command

{

private Light _objLight;

 

public FlipUpCommand(Light light)

{

this._objLight = light;

}

 

public void Execute()

{

this._objLight.TurnOn();

}

}

 

public class FlipDownCommand : Command

{

private Light _objLight;

 

public FlipDownCommand(Light light)

{

this._objLight = light;

}

 

public void Execute()

{

this._objLight.TurnOff();

}

}

 

public class Switch : Command

{

private Command _objFlipUpCommand;

private Command _objFlipDownCommand;

 

private bool _bIsUp;

public bool IsUp

{

get

{

return this._bIsUp;

}

set

{

this._bIsUp = value;

}

}

 

public Switch()

{

Light objLamp = new Light();

 

this._objFlipUpCommand = new FlipUpCommand(objLamp);

this._objFlipDownCommand = new FlipDownCommand(objLamp);

}

 

public void Execute()

{

if (this._bIsUp)

{

this._objFlipUpCommand.Execute();

}

else

{

this._objFlipDownCommand.Execute();

}

}

}

 

public class PressSwitch {

 

public static void Main(String[] args)

{

Switch objSwitch = new Switch();

objSwitch.Execute();

}

}

Listing 2

Iterator design pattern: It provides a way to sequentially access the elements of an aggregate object without exposing the structure of the aggregate.

For example, a tree, linked list, hash table, and an array all need to be iterated with the methods search, sort, and next. Rather than having 12 different methods to manage (one implementation for each of the previous three methods in each structure), using the iterator pattern yields just seven: one for each class using the iterator to obtain the iterator and one for each of the three methods. Therefore, to run the search method on the array, you would call array.search(), which hides the call to array.iterator.search().

The pattern is widely used in C#; in the .NET Framework we have the IEnumerator and IEnumerable interfaces to help us implement iterators for aggregates. When you implement your own aggregate object, you should implement these interfaces to expose a way to traverse it.

You use this pattern when you need a uniform interface to traverse different aggregate structures, when there are various ways to traverse an aggregate structure, and when you don’t want to expose the aggregate object’s internal representation.

The UML diagram for this design pattern is shown in Figure 3.


Figure 3

And example C# code for this design pattern is shown in Listing 3.

public class AggregateItem

{

public string Data { get; set; }

public AggregateItem(string data)

{

this.Data = data;

}

}

 

interface Aggregate

{

Iterator GetIterator();

}

 

class AggregateImpl : Aggregate

{

private readonly List<AggregateItem> _aggregate;

public int Count

{

get

{

return _aggregate.Count;

}

}

 

public AggregateItem this[int index]

{

get

{

return _aggregate[index];

}

set

{

_aggregate[index] = value;

}

}

 

public AggregateImpl()

{

_aggregate = new List<AggregateItem>();

}

 

public Iterator GetIterator()

{

return new IteratorImpl(this);

}

}

 

interface Iterator

{

object First();

object Next();

bool IsDone();

object Current();

}

 

class IteratorImpl : Iterator

{

private readonly AggregateImpl _aggregate;

private int _nCurrentIndex;

 

public object First()

{

return _aggregate[0];

}

 

public object Next()

{

object result = null;

if (_nCurrentIndex < _aggregate.Count - 1)

{

result = _aggregate[_nCurrentIndex];

_nCurrentIndex++;

}

return result;

}

 

public bool IsDone()

{

return _nCurrentIndex >= _aggregate.Count;

}

 

public object Current()

{

return _aggregate[_nCurrentIndex];

}

 

public IteratorImpl(AggregateImpl aggregate)

{

_nCurrentIndex = 0;

_aggregate = aggregate;

}

}

Listing 3

Mediator design pattern: It encapsulates the interaction between a set of objects. The pattern helps to loosely couple the objects by keeping them from referring to each other explicitly. You use this design pattern when behavior distributed between several objects can be grouped or customized, when object reuse is difficult because an object communicates with many other objects, and when objects in the system communicate in well-defined but complex ways.

The UML diagram for this design pattern is shown in Figure 4.


Figure 4

And an example of this design pattern is shown in Listing 4.

public interface Mediator

{

void Send(string message, Colleague colleague);

}

 

public class ConcreteMediator : Mediator

{

public List<ConcreteColleague> Colleagues { get; private set; }

public ConcreteMediator()

{

Colleagues = new List<ConcreteColleague>();

}

 

public void Send(string message, Colleague colleague)

{

foreach (Colleague currentColleague in Colleagues)

{

if (!currentColleague.Equals(colleague))

{

currentColleague.Receive(message);

}

}

}

}

 

public abstract class Colleague

{

protected Mediator _mediator;

 

public Colleague(Mediator mediator)

{

_mediator = mediator;

}

public abstract void Send(string message);

public abstract void Receive(string message);

}

 

public class ConcreteColleague : Colleague

{

public int ID { get; set; }

 

public ConcreteColleague(Mediator mediator, int id)

: base(mediator)

{

ID = id;

}

public override void Send(string message)

{

_mediator.Send(message, this);

}

public override void Receive(string message)

{

Console.WriteLine("{0} received the message: {1}", ID, message);

}

}

 

class Program

{

static void Main(string[] args)

{

ConcreteMediator mediator = new ConcreteMediator();

ConcreteColleague colleague1 = new ConcreteColleague(mediator, 1);

ConcreteColleague colleague2 = new ConcreteColleague(mediator, 2);

mediator.Colleagues.Add(colleague1);

mediator.Colleagues.Add(colleague2);

colleague1.Send("Hello from colleague 1");

colleague2.Send("Hello from colleague 2");

Console.Read();

}

}

Listing 4

Memento design pattern: It helps to save an object’s internal state in an external place, enabling us to restore that state later when needed. The memento pattern doesn’t violate the encapsulation of the internal state. The pattern is rarely used, but it’s very helpful in scientific computing or in computer games.

It’s mainly used when you need to save an object’s state so you can restore it later, and you don’t want to expose the internal state of your object (see Figure 5).


Figure 5

Example code for this pattern is shown in Listing 5.

public class Originator<T>

{

public T State { get; set; }

public Memento<T> SaveMemento()

{

return (new Memento<T>(State));

}

public void RestoreMemento(Memento<T> memento)

{

State = memento.State;

}

}

 

public class Memento<T>

{

public T State { get; private set; }

public Memento(T state)

{

State = state;

}

}

 

public class Caretaker<T>

{

public Memento<T> Memento { get; set; }

}

 

class Program

{

static void Main(string[] args)

{

Originator<string> org = new Originator<string>();

org.State = "Old State";

Caretaker<string> caretaker = new Caretaker<string>();

caretaker.Memento = org.SaveMemento();

Console.WriteLine("This is the old state: {0}", org.State);

org.State = "New state";

Console.WriteLine("This is the new state: {0}", org.State);

org.RestoreMemento(caretaker.Memento);

Console.WriteLine("Old state was restored: {0}", org.State);

Console.Read();

}

}

Listing 5

Observer design pattern: It is a software design pattern in which an object, called the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods. It is mainly used to implement distributed event handling systems.

You mainly use this pattern when you have a publisher/subscriber model, when objects need to be notified of a change in other objects, and when the object that notifies about its state change should not need to know about its subscribers.

The UML diagram for this design pattern is shown in Figure 6.


Figure 6

Example code for this design pattern is shown in Listing 6.

public abstract class Subject

{

private List<IObserver> _observers;

 

public Subject()

{

_observers = new List<IObserver>();

}

 

public void Attach(IObserver observer)

{

_observers.Add(observer);

}

 

public void Detach(IObserver observer)

{

_observers.Remove(observer);

}

 

public void Notify()

{

foreach (IObserver observer in _observers)

{

observer.Update();

}

}

}

 

public class ConcreteSubject<T> : Subject

{

public T SubjectState { get; set; }

}

 

public interface IObserver

{

void Update();

}

 

public class ConcreteObserver<T> : IObserver

{

private T _observerState;

// The subject being observed; needed because Update() reads Subject.SubjectState.
public ConcreteSubject<T> Subject { get; private set; }

public ConcreteObserver(ConcreteSubject<T> subject)

{

Subject = subject;

}

 

public void Update()

{

_observerState = Subject.SubjectState;

Console.WriteLine("The new state of the observer: {0}", _observerState.ToString());

}

}

 

class Program

{

static void Main(string[] args)

{

ConcreteSubject<string> subject = new ConcreteSubject<string>();

subject.Attach(new ConcreteObserver<string>(subject));

subject.Attach(new ConcreteObserver<string>(subject));

subject.SubjectState = "Hello World";

subject.Notify();

System.Console.Read();

}

}

Listing 6

Strategy design pattern: It allows you to use multiple algorithms interchangeably.  One reason you might use a Strategy design pattern is to simplify an overly complex algorithm. Sometimes, as an algorithm evolves to handle more and more situations, it can become very complex and difficult to maintain. Breaking these complex algorithms down into smaller more manageable algorithms might make your code more readable and more easily maintained.

The UML diagram for this pattern is shown in Figure 7.


Figure 7

Example code for this design pattern is shown in Listing 7.

using System;

 

namespace Wikipedia.Patterns.Strategy

{

class MainApp

{

static void Main()

{

Context anObject;

 

// Three contexts following different strategies

anObject = new Context(new ConcreteStrategyA());

anObject.Execute();

 

anObject.UpdateContext(new ConcreteStrategyB());

anObject.Execute();

 

anObject.UpdateContext(new ConcreteStrategyC());

anObject.Execute();

}

}

 

interface IStrategy

{

void Execute();

}

 

class ConcreteStrategyA : IStrategy

{

public void Execute()

{

Console.WriteLine("Called ConcreteStrategyA.Execute()");

}

}

 

class ConcreteStrategyB : IStrategy

{

public void Execute()

{

Console.WriteLine("Called ConcreteStrategyB.Execute()");

}

}

 

class ConcreteStrategyC : IStrategy

{

public void Execute()

{

Console.WriteLine("Called ConcreteStrategyC.Execute()");

}

}

 

class Context

{

IStrategy strategy;

 

public Context(IStrategy strategy)

{

this.strategy = strategy;

}

 

public void UpdateContext(IStrategy strategy)

{

this.strategy = strategy;

}

 

public void Execute()

{

strategy.Execute();

}

}

}

Listing 7

Conclusion

In this article, I’ve explained the essential behavioral design patterns (from the Gang of Four) with UML diagrams and real-world code examples.

 

Customer Discovery, first phase of Customer Development


As part of my research into Steve Blank’s (@sgblank) Customer Development methodology, I’m now reading the book The Startup Owner’s Manual: The Step-By-Step Guide for Building a Great Company (http://www.amazon.com/The-Startup-Owners-Manual-Step-By-Step/dp/0984999302), which is a very good resource for every entrepreneur.

In this post, I would like to share my notes and insights about the first phase of the Customer Development method: Customer Discovery. The goal of Customer Discovery is to make sure that a specific product solves a known problem for an identifiable customer segment. This phase is executed by the founders.

A startup begins with the vision of its founders: a vision of a new product or service that solves a customer problem. The goal of customer discovery is to turn the founders’ initial hypotheses (guesses) about customers, market and solution (product/service) into facts, in order to search for the problem/solution fit. Facts exist only outside the building, where customers live, so we need to get out of the building and in front of customers (for days, months and even years). It’s done by the founders, so they can learn whether they have a vision or just a hallucination, and whether the value proposition matches the customer segment it plans to target. Remember that most business models fail because we waste money, effort and time building the wrong product.

It’s worth noting that there may be multiple value propositions and multiple customer segments, but the problem/solution fit is only achieved when the revenue and pricing model, value proposition, and customer acquisition efforts all match up with customers’ needs.

In a startup, on day one, we don’t have much customer knowledge, so the first product (the minimum viable product, MVP) is not designed to satisfy mainstream customers but a small group of early customers who have bought into the startup’s vision. They give the startup the feedback necessary to add features to the product over time, and they tell the world about the product. The idea is to put the MVP in front of customers to find out whether we have understood the customer problem well enough to define the key features of the solution. Then we iteratively refine the solution.

The business model canvas from Alex Osterwalder (@AlexOsterwalder) is the scorecard used in the customer discovery step to organize our thinking: it specifies the hypotheses (guesses) and experiments, and serves as a medium to record the results (pass/fail) of the experiments that validate those hypotheses in the search for the business model.

The business model canvas (http://en.wikipedia.org/wiki/Business_Model_Canvas) specifies how the company expects to make money in nine blocks: value proposition, customer segments, channels, customer relationships and demand creation, revenue and pricing model, partners, activities and cost structure.

The business model canvas can’t be a static snapshot; it must be dynamic. We need to update the canvas to reflect any pivots and iterations over a time period (let’s say a week). After the time period, we agree on the changes to the business model and integrate them into a new view of the canvas to work on during the next time period. At any precise moment in time, we have the current canvas and a stack of previous canvases.

We need to do market research using the Total Available Market (TAM) and the Served Available Market (SAM). TAM covers every way a customer can currently meet a need, and SAM is the portion of the TAM that our product covers. TAM answers the question: how big is the market (the total of all unit sales of all the competing products)? In short, TAM is the total potential market, expressed as a dollar value. Identifying the TAM and SAM can help to understand the target customers. We can use several tools, such as Google Insights, Google Trends and Facebook ads, industry-analyst reports, market-research reports and competitors’ press.

We also have to launch a landing page (with a call to action) as a product concept, for traffic analysis and as a medium for validating hypotheses, as well as to contact the target customers using a contact list in order to conduct surveys, get insights and receive feedback. After that, we need to launch a low-fidelity MVP.

The Customer Discovery phase has four sub-phases.

  • Phase 1 deconstructs the founders’ vision into the nine parts of the business model canvas
    • Goal: To sketch out the possible problems we’re solving, what product we’re building and how we believe this will create value for the customers; in other words, we’re stating our hypotheses
    • Description: We need to describe the jobs the customers are trying to get done and outline their pains and gains, as well as to list the products/services we’re trying to offer to alleviate pains and create gains. The team specifies the hypotheses for each part of the business model (value proposition, customer segment, channels, market type, customer relationship and demand creation, revenue and pricing model, partners, activities and cost structure), including the list of experiments to conduct to prove or disprove each one
    • Tools:
  • Phase 2 enables conducting experiments to test the problem-related hypotheses in the business model canvas
    • Goal: To understand the problem/solution fit by turning hypotheses into facts, or discarding them if they’re wrong and replacing them with new hypotheses
    • Description: We do so by listening to our customers and testing the most important elements of the business model, including the customer problems, value proposition, pricing, channel strategy and sales process, in order to understand how important the problem is and how big it can become (start getting out of the building to talk to as many potential customers as possible). Building a landing page is hard because we need customer insights, and iterating without talking directly to customers is slow; talking directly to customers yields more validated learning than any other method. To structure the problem presentation, we can use the following checklist:
    • State the top 3 problems
    • Ask the customer to prioritize the problems
    • Ask the customer how he works today and what his pains and gains are
    • Ask the customer how he solves the problems today
    • Very briefly, describe how we might solve the problem
    • Would the customer use the solution if it were free?
    • Would the customer pay $X per year?
    • Ask for referrals to others

Phase 3 enables testing the solution by presenting the value proposition and the low-fidelity MVP

Phase 4 enables stopping and assessing the results of the experiments we’ve conducted and verified

  • Goal: To verify whether we need to pivot or can start selling the product because we’ve achieved the problem/solution fit
  • Description: We should have a full understanding of customers’ problems and needs, have confirmed that the value proposition solves real problems, have determined that there is a sizable volume of customers, have learned that customers will pay for the product, and finally have made certain that the revenue will deliver a profitable business
  • Tools:

According to Ash Maurya (@ashmaurya), there are 3 rules for actionable metrics derived from Lean Startup principles.

Rule 1: Measure the right macro

Eric Ries recommends focusing on the macro effect of an experiment such as sign-ups versus button clicks.

There are only a handful of macro metrics (only 5 – AARRR) that really matter. These are the “metrics for pirates” from Dave McClure (@davemcclure), organized along the customer lifecycle as shown below:

  • Acquisition: Users come to the site from several channels
    • Key question:
      • How do users find you?
    • Examples:
      • X number of clicks
      • Y number of page views
      • Z time on the site
  • Activation: Users have a happy first experience with the product
    • Key question:
      • Do users have a great first experience?
    • Examples:
      • Conversion rate
      • Number of users that sign-up
      • Number of users that watch product demonstration
  • Retention: Users come several times to the site to use the product
    • Key question:
      • Do users come back?
    • Examples:
      • Number of users using the product per month
      • Number of email click-throughs
      • Amount of feedback received
      • Retention rate
  • Referral: Users like the product enough to refer it to others
    • Key question:
      • Do users tell others?
    • Examples:
      • Number of referrals
      • Number of activations
      • Viral factor > 1 (meaning every customer brings in more than one additional customer, so the product grows virally)
      • Conversion rate
  • Revenue: Users engage in monetization activities (purchase, subscription, etc.), so we can find out how much profit we make for every customer and scale the number of customers
    • Key question:
      • How do you make money?
    • Examples:
      • How much money do you make for every customer you acquire
      • Minimum revenue
      • Cancellation rate: the number of customers who cancel in any given month compared to total (paying) customers

Of the 5 metrics, only 2 matter before product/market fit: activation and retention. Before product/market fit, we’re building a product people want by providing a great first experience (activation) and, most importantly, customer engagement (retention).

Rule 2: Create simple reports

Reports that are hard to understand simply won't get used. Funnel reports are a great way to summarize key metrics: they are simple and map well to Dave McClure's AARRR startup metrics.
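
To make this concrete, here is a minimal sketch in plain Node.js (the event names and sample data are invented for illustration) that aggregates raw user events into an AARRR-style funnel report, counting distinct users per stage and the conversion rate from the previous stage:

// Hypothetical raw events: one record per user action in the reporting period.
var events = [
  { userId: 1, type: 'visit' }, { userId: 1, type: 'signup' }, { userId: 1, type: 'return' },
  { userId: 2, type: 'visit' }, { userId: 2, type: 'signup' },
  { userId: 3, type: 'visit' }
];

// Assumed mapping of each AARRR stage to the event type that counts for it.
var stages = [
  { name: 'Acquisition', type: 'visit' },
  { name: 'Activation',  type: 'signup' },
  { name: 'Retention',   type: 'return' },
  { name: 'Referral',    type: 'refer' },
  { name: 'Revenue',     type: 'purchase' }
];

function funnelReport(events) {
  var report = [];
  var previousCount = null;
  stages.forEach(function (stage) {
    // Count distinct users that performed the stage's event at least once.
    var users = {};
    events.forEach(function (e) {
      if (e.type === stage.type) { users[e.userId] = true; }
    });
    var count = Object.keys(users).length;
    var conversion;
    if (previousCount === null) { conversion = 100; }       // first stage
    else if (previousCount === 0) { conversion = 0; }       // avoid division by zero
    else { conversion = Math.round(100 * count / previousCount); }
    report.push({ stage: stage.name, users: count, conversion: conversion + '%' });
    previousCount = count;
  });
  return report;
}

console.log(funnelReport(events));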

Figure 1

Figure 2

Figure 3

David Cancel's funnel is shown below:

Figure 4

Funnel reports have a key drawback: because we're constantly changing the product, it's impossible to tie observed results back to specific actions taken a month ago; instead, a reporting period is used and the events generated in that period are aggregated across all users.

Funnel reports work well for micro-optimization experiments (such as landing page conversion) but fall short for macro-pivot experiments, so we need to combine them with cohorts.

Cohort analysis is a form of study design. For example, when the development team makes design choices, it can later go back and review traffic data to evaluate the success of those choices. A cohort is a group of people who share a common characteristic or experience that we wish to track; for example, we can bucket users into the month they joined. The most common cohort attribute is “join date”.

Let's illustrate the concept of cohort analysis with two figures:

Figure 5

The previous report is used for retention. It's generated using monthly cohorts by join date and tracking key activities over time.
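
As an illustration only (the data model below is invented for this sketch), a monthly cohort report of this kind could be computed in plain Node.js by bucketing users into the month they joined and counting, for each cohort, how many users were still active N months after joining:

// Hypothetical users: join date plus the dates on which they were active.
var users = [
  { id: 1, joined: '2012-01-15', activeOn: ['2012-01-20', '2012-02-03', '2012-03-10'] },
  { id: 2, joined: '2012-01-28', activeOn: ['2012-02-14'] },
  { id: 3, joined: '2012-02-02', activeOn: ['2012-02-05'] }
];

function monthKey(dateString) {
  return dateString.slice(0, 7); // '2012-01-15' -> '2012-01'
}

function monthsBetween(fromMonth, toMonth) {
  var from = fromMonth.split('-'), to = toMonth.split('-');
  return (Number(to[0]) - Number(from[0])) * 12 + (Number(to[1]) - Number(from[1]));
}

// Build: cohort (join month) -> percentage of the cohort active 0, 1, 2, ... months later.
function retentionByCohort(users, monthsToTrack) {
  var cohorts = {};
  users.forEach(function (user) {
    var cohort = monthKey(user.joined);
    cohorts[cohort] = cohorts[cohort] || { size: 0, activeCounts: [] };
    cohorts[cohort].size += 1;
    // Record the distinct month offsets in which this user was active.
    var offsets = {};
    user.activeOn.forEach(function (date) {
      offsets[monthsBetween(cohort, monthKey(date))] = true;
    });
    Object.keys(offsets).forEach(function (offset) {
      var i = Number(offset);
      cohorts[cohort].activeCounts[i] = (cohorts[cohort].activeCounts[i] || 0) + 1;
    });
  });

  var report = {};
  Object.keys(cohorts).forEach(function (cohort) {
    report[cohort] = [];
    for (var i = 0; i <= monthsToTrack; i++) {
      var active = cohorts[cohort].activeCounts[i] || 0;
      report[cohort].push(Math.round(100 * active / cohorts[cohort].size) + '%');
    }
  });
  return report;
}

console.log(retentionByCohort(users, 2));
// { '2012-01': [ '50%', '100%', '50%' ], '2012-02': [ '100%', '0%', '0%' ] }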

Figure 6

A good report combining funnels and cohorts is the “Weekly Cohort Report by Join Date”, as shown below:

Figure 7

In the previous report, we group users by the week of the year in which they signed up and then track all their events over time. We can see visible changes in the metrics, which can be tied back to specific activities done in a particular week.

Apart from reactively monitoring the funnel, cohorts can also be used to proactively measure A/B test experiments. For example, the report below shows an experiment with a cohort based on the “plan type” attribute, comparing Freemium versus Free Trial plans.

Figure 8

Rule 3: Metrics are people

Metrics can only tell you what users did. To make them actionable, we need to tie them to actual people. This is important before product/market fit, when we don't have a huge number of users and we rely on qualitative rather than quantitative validation.

For example, the report below shows a list of people who failed at the download step of the funnel.

Figure 9

Finally, there are some techniques for getting to actionable metrics:

  • Split test metrics. These produce the most actionable of all metrics, because a split test can confirm or refute a specific hypothesis. The real value comes when we integrate them into the decision loop: putting the ideas into practice, seeing what happens, and learning. A good rule of thumb is to ask: if this test turns out differently from how I expect, will that cast serious doubts on what I think I know about my customers? If not, try something bigger. For example, let's say we add a new feature and run an A/B test in which 50% of customers (group A) see the new feature and the other 50% (group B) do not. After some days, we measure the revenue and notice that group A's revenue is 20% higher. After that, we roll out the feature to 100% of customers (group A + group B) and keep on doing experiments/tests with more features in the same way (see the sketch after this list)
  • Per-customer metrics. This means that metrics are people too. For example, instead of looking at the total number of page views in a given month, consider looking at the number of page views per new and per returning customer. Those metrics should be relatively constant
  • Funnel metrics and cohort analysis. This is a kind of per-customer metric. For example, consider an ecommerce product that has a few key customer lifecycle events: registering for the product, signing up for a free trial, using the product, and becoming a paying customer. We can create a simple report that shows these metrics for subsequent cohorts (groups) over time: the report says what percentage of customers who registered subsequently went on to take each lifecycle action. If these numbers hold steady from cohort to cohort, we have feedback telling us that nothing is changing; if one cohort suddenly shifts up or down, we investigate
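
Here is a minimal sketch of the split test idea in plain Node.js (the assignment rule and the purchase data are invented for illustration): users are deterministically assigned to group A (sees the new feature) or group B (does not), and the macro metric, revenue per group, is compared at the end of the experiment.

// Naive 50/50 assignment: even user ids see the new feature, odd ones do not.
// A real split test would hash a stable user attribute instead.
function assignGroup(userId) {
  return userId % 2 === 0 ? 'A' : 'B';
}

// Hypothetical revenue events collected during the experiment.
var purchases = [
  { userId: 1, amount: 20 },
  { userId: 2, amount: 45 },
  { userId: 3, amount: 10 },
  { userId: 4, amount: 50 }
];

// Compare the macro metric (revenue per group), not a micro metric such as button clicks.
function revenueByGroup(purchases) {
  var totals = { A: 0, B: 0 };
  purchases.forEach(function (p) {
    totals[assignGroup(p.userId)] += p.amount;
  });
  return totals;
}

console.log(revenueByGroup(purchases)); // { A: 95, B: 30 }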

After the customer discovery phase is successfully finished, we can proceed to customer validation in order to validate the sales roadmap, so we can be sure that the market is saleable and large enough that a viable business can be built.

What do you think about the customer discovery phase? Please feel free to share your experience and comments.

Running Windows Instances on Amazon EC2

In this article, I want to explain step by step how to create and run Windows instances on Amazon EC2.

In order to create the Windows instance, open the AWS Management Console and select the EC2 tab at https://console.aws.amazon.com/console/home. Select the desired region in which to launch the instance (red circle) and click on the Launch Instance button (blue circle), as shown in the following figure.

Figure 1

Next, select the Microsoft Windows Server 2008 R2 with SQL Server Express and IIS image (see Figure 2).

Figure 2

The next page is for setting the instance details, such as the number of instances, the instance type and the availability zone, as shown in the following figure.

Figure 3

After that, we have a set of advanced settings for the instance configuration. When you reach the key pairs page, you have to click on Create and Download your Key Pair and save the myfirst_instance_keypair.pem file on your local machine. You need a key pair (public/private) to be authenticated when logging in. On Linux instances the public key is retained in the instance (copied into the .ssh/authorized_keys file in the primary user account's home directory), allowing you to use the private key (downloaded to your local machine) to log in securely over SSH without a password; for a Windows instance, the key pair is used to retrieve and decrypt the Administrator password, as we will see below. You can have several key pairs, and each key pair requires a name.

Figure 4
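
If you prefer to script this step, the same key pair can also be created with the AWS SDK for Node.js (a hedged sketch: it assumes the aws-sdk npm package is installed, your AWS credentials are already configured, and us-east-1 is just an example region):

// Create a key pair with the same name used in this walkthrough and save the
// returned private key locally; AWS keeps the public half.
var AWS = require('aws-sdk');
var fs = require('fs');

var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.createKeyPair({ KeyName: 'myfirst_instance_keypair' }, function (err, data) {
  if (err) { return console.error('Could not create key pair:', err); }
  // data.KeyMaterial is the private key in PEM format; keep it safe.
  fs.writeFileSync('myfirst_instance_keypair.pem', data.KeyMaterial, { mode: 0o600 });
  console.log('Key pair saved to myfirst_instance_keypair.pem');
});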

The next page allows you to create security groups to define your instance's firewall. By default, you will leave the HTTP (80) and Remote Desktop (3389) ports open to the outside. Enter a name and description for the security group (blue circle in the following figure).

Figure 5
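
As a scripted alternative (again a sketch with the aws-sdk package; the group name is just an example), an equivalent security group with ports 80 and 3389 open to the outside could be created like this:

// Create a security group and open HTTP (80) and Remote Desktop (3389) to the world.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.createSecurityGroup(
  { GroupName: 'windows-web', Description: 'HTTP and Remote Desktop access' },
  function (err, data) {
    if (err) { return console.error('Could not create security group:', err); }
    var rules = [80, 3389].map(function (port) {
      return { IpProtocol: 'tcp', FromPort: port, ToPort: port,
               IpRanges: [{ CidrIp: '0.0.0.0/0' }] };
    });
    ec2.authorizeSecurityGroupIngress(
      { GroupId: data.GroupId, IpPermissions: rules },
      function (err) {
        if (err) { return console.error('Could not open the ports:', err); }
        console.log('Security group ready:', data.GroupId);
      });
  });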

And finally, you're able to review all your inputs for the instance and click on the Launch button (see Figure 6).

Figure 6
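
For completeness, the launch itself can also be scripted with the same SDK (a sketch only: the AMI id below is a placeholder you would replace with the id of the Windows Server 2008 R2 image selected earlier, and it reuses the key pair and example security group from the previous steps):

// Launch one Windows instance with the key pair and security group created above.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.runInstances({
  ImageId: 'ami-xxxxxxxx',        // placeholder for the Windows Server 2008 R2 AMI id
  InstanceType: 't1.micro',
  MinCount: 1,
  MaxCount: 1,
  KeyName: 'myfirst_instance_keypair',
  SecurityGroups: ['windows-web'] // the example group created earlier
}, function (err, data) {
  if (err) { return console.error('Could not launch the instance:', err); }
  console.log('Launched instance:', data.Instances[0].InstanceId);
});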

Once the instance is running, you can connect using Remote Desktop. One thing to note is that you need to locate the myfirst_instance_keypair.pem file to get the Windows Administrator's password.

In order to get the Windows Administrator's password, you need to select the instance and choose the “Get Windows Admin Password” option (see Figure 7).

Figure 7 

After that, the Retrieve Default Windows Administrator Password pop-up opens; you need to open your key pair file, copy its entire content into the Private Key text field and finally click on the Decrypt Password button, as shown in Figure 8.

Figure 8
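
The same retrieval can, in principle, be scripted: the EC2 GetPasswordData call returns the Administrator password encrypted with the key pair's public key, and it can then be decrypted locally with the private .pem file. A hedged sketch follows (it assumes a Node.js version that provides crypto.privateDecrypt, and that the password is PKCS#1 padded, which is my understanding of how EC2 encrypts it; the instance id is a placeholder):

// Fetch the encrypted Administrator password and decrypt it with the local private key.
var AWS = require('aws-sdk');
var fs = require('fs');
var crypto = require('crypto');

var ec2 = new AWS.EC2({ region: 'us-east-1' });
var instanceId = 'i-xxxxxxxx'; // placeholder: the id of the launched instance

ec2.getPasswordData({ InstanceId: instanceId }, function (err, data) {
  if (err) { return console.error('Could not get password data:', err); }
  if (!data.PasswordData) {
    return console.log('Password not available yet; try again in a few minutes');
  }
  var privateKey = fs.readFileSync('myfirst_instance_keypair.pem', 'utf8');
  var password = crypto.privateDecrypt(
    { key: privateKey, padding: crypto.constants.RSA_PKCS1_PADDING },
    Buffer.from(data.PasswordData.trim(), 'base64')
  );
  console.log('Administrator password:', password.toString('utf8'));
});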

Now, we need to open the Remote Desktop Connection program and log in to the instance.

In this article, I've shown how to provision an EC2 Windows instance.