Mediator Design Pattern with .NET Core

Design patterns are documented and tested solutions to recurring problems. They are used to solve problems of object creation and integration that developers encounter on a day-to-day basis. So, it is a best practice to use design patterns, as they are reusable solutions.

It is important to remember that you should not overdo design patterns. Design patterns are for projects; projects are not for design patterns. So, use a pattern in your project if you think it is a good fit. Otherwise, it can make your project messy.

Types of Design Patterns

The Gang of Four categorised design patterns into three main categories:

1. Creational Design Pattern
2. Structural Design Pattern
3. Behavioural Design Pattern

In this blog, we will discuss the Mediator design pattern. It is a behavioural design pattern that promotes loose coupling.

Mediator Design Pattern

It is used to reduce direct communication between objects. The pattern introduces a mediator object, and that object handles all the communication between the other objects.

In the following diagram, you can see there are three objects (X, Y, and Z). Suppose object X has to send data to Y, and then Y has to send a message to Z. These objects need to know about each other; they are tightly coupled. In the real world, there can be hundreds or thousands of objects that have to call each other, which makes the ongoing management of the code an uphill task.

How does the Mediator pattern solve the problem?

If object X has to call objects Y and Z, it doesn't have to call them directly. It calls the mediator object, and it is the responsibility of the mediator object to pass the message to the destination object. The mediator promotes loose coupling by keeping objects from referring to and interacting with each other directly.

Implementation of Mediator Pattern

You can install MediatR via the NuGet Package Manager Console:

Install-Package MediatR

or via the .NET Core CLI:

dotnet add package MediatR

dotnet add package MediatR.Extensions.Microsoft.DependencyInjection

I have installed MediatR and MediatR.Extensions.Microsoft.DependencyInjection in my demo project.

mediator installation
Register MediatR with assembly

The next step is to inject MediatR into the controller.
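Before the controller can use it, MediatR must be registered with the DI container. A minimal sketch, assuming the MediatR.Extensions.Microsoft.DependencyInjection package installed above (Startup is the standard ASP.NET Core startup class):

```csharp
// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // Scans the given assembly for IRequest/IRequestHandler implementations
    services.AddMediatR(typeof(Startup).Assembly);
    services.AddControllers();
}
```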

A mediator call consists of a request and a handler. Every request has its own handler. The request could be anything from a complex object to an integer value.

In the following image, you can see we are making a request to the mediator which takes CreateUserCommand as a model.

Mediator request call
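Since the screenshot is not reproduced here, a hedged sketch of that controller call (the controller name and route are illustrative, not taken from the demo project):

```csharp
[ApiController]
[Route("api/[controller]")]
public class UsersController : ControllerBase
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator) => _mediator = mediator;

    [HttpPost]
    public async Task<ActionResult<AppUser>> Create(CreateUserCommand command)
    {
        // The controller knows nothing about the handler;
        // MediatR routes the request to it.
        var user = await _mediator.Send(command);
        return Ok(user);
    }
}
```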


Requests are of two types:

1. IRequest<TResponse> – returns a TResponse
2. IRequest – returns nothing

A request is a class that has a handler attached to it. You can see in the image below that I have implemented IRequest<TResponse>. The CreateUserCommand request returns an AppUser.
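For readers without the screenshot, the request class might look like the sketch below (the properties are illustrative assumptions):

```csharp
// CreateUserCommand is a request whose response type is AppUser
public class CreateUserCommand : IRequest<AppUser>
{
    public string Name { get; set; }
    public string Email { get; set; }
}
```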


Request handlers (responses) are of two types:

1. IRequestHandler<TRequest, TResponse> – returns TResponse
2. IRequestHandler<TRequest> – returns nothing

You can see in the image below that I have implemented IRequestHandler<TRequest, TResponse>. This handler handles the CreateUserCommand request and returns an AppUser.
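A hedged sketch of such a handler (AppDbContext and the Users set are assumptions standing in for the demo project's data access):

```csharp
public class CreateUserCommandHandler : IRequestHandler<CreateUserCommand, AppUser>
{
    private readonly AppDbContext _context; // assumed EF Core context

    public CreateUserCommandHandler(AppDbContext context) => _context = context;

    public async Task<AppUser> Handle(CreateUserCommand request, CancellationToken cancellationToken)
    {
        // Map the request to the entity and persist it
        var user = new AppUser { Name = request.Name, Email = request.Email };
        _context.Users.Add(user);
        await _context.SaveChangesAsync(cancellationToken);
        return user;
    }
}
```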

Mediator handler request


The mediator design pattern helps us reduce coupling and promotes SOLID principles. You should use it if you think it is a good fit for your project. In this blog, I have explained the pattern and its implementation. As a reference, you can find the complete source code in my demo app.

Single Responsibility Principle (SRP)

In this article, we are going to discuss SRP, which is one of the SOLID design principles. The S in SOLID denotes the Single Responsibility Principle. It is a recommended practice to implement it in our code to make the code leaner and less coupled.

What is SRP?

Each software module or class should have one, and only one, reason to change.

Robert C. Martin

We need to design the software so that every class has a single, well-defined responsibility. For example, suppose we are designing a module in which the user gets a welcome email on sign-up and all user actions are logged. If we write all of these functions in a single class, it becomes really complex, and in the future a single change would impact the whole class and could break things.

To keep the code manageable, which is one of the aims of OOP, we split the functionality into multiple classes so that each class has a single function to perform, resulting in maintainable software.

Without SRP

As you can see, the User module/class is doing multiple things: it is saving a user, sending an email, and logging all the activity. It is violating the Single Responsibility Principle. If we don't change the code, we will have to copy the email/logging code into every other class where it is needed. The User class should only do one thing, which is to save users to the database.
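The original code is shown as an image; a hedged sketch of the shape being criticised (method bodies are elided placeholders):

```csharp
// Violates SRP: persistence, email, and logging in one class
public class User
{
    public void Save(AppUser user) { /* save to database */ }
    public void SendWelcomeEmail(AppUser user) { /* SMTP code */ }
    public void LogActivity(string message) { /* write to log */ }
}
```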

With SRP

With SRP, you can see each class has its own set of functions. The User class communicates with the database, the Email class is responsible for emails, and the Logging class logs activity. As a result, the code enforces the Single Responsibility Principle.

The email and logging classes are reusable. You can inject them into any other modules where they are needed.
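The same split can be sketched as follows (class names are illustrative, not taken from the demo):

```csharp
// Each class now has one, and only one, reason to change
public class UserRepository
{
    public void Save(AppUser user) { /* save to database */ }
}

public class EmailService
{
    public void SendWelcomeEmail(AppUser user) { /* SMTP code */ }
}

public class ActivityLogger
{
    public void Log(string message) { /* write to log */ }
}
```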


In this blog, we discussed SRP, which is a very important principle in today's programming world. It makes code modular, leaner, and testable. In the next blog, we will discuss the Interface Segregation Principle and its importance.

What is the Cartesian Explosion in EF Core?

Entity Framework (EF) is an ORM that makes developers' lives easier. Using EF, we can interact with the database and perform CRUD operations using a programming language such as C#. In this blog, we are going to discuss the cartesian explosion with regard to loading data.

Cartesian Explosion

It is related to performing joins. When you perform a join, one table's columns are repeated once for every matched record in the joined table.

var customers = context.Customers
    .Include(c => c.Address)
    .ToList();

The LINQ query translates to the following SQL:

SELECT * FROM Customers
LEFT JOIN Address
  ON Address.Id = Customers.AddressId

You can see that the columns of the Customers table are repeated 500 times. Now imagine you are working on a complex dataset where you have to add a few more Includes. The returned data set would explode – this is known as the cartesian explosion.



The solution to this is to split the query and get the data as two data sets rather than one. It results in two database round trips but can improve performance (fewer reads and less CPU time):

var customers = context.Customers.ToList();

var addresses = context.Addresses.ToList();
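An alternative, assuming you are on EF Core 5.0 or later, is to keep the Include and let EF split the SQL for you with AsSplitQuery():

```csharp
// AsSplitQuery tells EF Core to issue one SQL query per included
// navigation instead of a single large join, avoiding the explosion.
var customers = context.Customers
    .Include(c => c.Address)
    .AsSplitQuery()
    .ToList();
```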


In this blog, we discussed the cartesian explosion in EF's eager-loading pattern. You have to be careful about when to avoid this problem. It is recommended to use the split-query approach when the dataset is too large and requires optimization. Otherwise, in normal scenarios, Include will do the job.

Docker Compose with Asp.Net Core Microservice

In the last blog, we learned about the core concepts of Docker and how we can create and run a .NET Core app in a container. Today, we are going to learn about docker-compose, which lets us run more than one container together.

Docker-compose file

It is a YAML file. By default, it’s named docker-compose.yml but it can be changed and specified as a parameter.

First of all, we specify the version of the file. This should be the latest version; I am using 3.6.

Next, we define our set of services. This is where the magic happens. We can define multiple services in one file and run all the services with a single command.

Each service has a name, image, build context, environment, network, and volume. You can see all the options here.
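Putting those options together, a minimal docker-compose.yml might look like the sketch below (the service name, image, and ports are illustrative, not taken from the demo project):

```yaml
version: "3.6"

services:
  user.api:
    build:
      context: .              # directory containing the Dockerfile
      dockerfile: Dockerfile
    image: user.api
    ports:
      - "3000:80"             # host:container
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
```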

Docker-compose commands

It's very easy to manage containers with docker-compose commands. There are two main commands: docker-compose up and docker-compose down.

Docker-compose up

docker-compose up -d --build

The docker-compose up command runs all the services in the compose file. You need to run this command from the directory where the docker-compose.yml file is placed.

The -d parameter is used to detach the console from the container.

By default, images are built when you run this command for the first time. If you make changes to the code, you need to use the --build parameter to tell docker to build the image(s) again.

Docker-compose down

docker-compose down

The docker-compose down command stops and removes all the services defined in the compose file. You need to run this command from the directory where the docker-compose.yml file is placed.


Docker-compose is very simple and helps us run multi-container services from a single file. It is very handy and powerful.

You can download the reference source code on my Github.

What is Docker and how do you dockerize an ASP.NET Core microservice?

Docker helps us develop, run, and ship apps anywhere. In this blog post, we will explore what Docker is and how it can be added to a project using Visual Studio Code.

What is Docker?

Docker is a container management service. It is written in Go and is open source. A Docker container enables developers to bundle an application and its dependencies and ship everything out as one package, which can then be deployed.

A container is just a process that shares the host system's kernel. It is limited in what resources it can access and stops when the process stops. It is much smaller than a virtual machine, which makes it very light and fast to start and stop.

You can download Docker for Windows or Linux based on your machine's OS.

Docker Terminologies

You should become familiar with some of the following terms and definitions:

Container Image: A set of all the dependencies and information needed to create a container.

Container: A runtime for a system, process, or service; it is an instance of a docker image.

Tag: Tag is used to label images to identify different versions and environments.

Dockerfile: It is a text file that holds commands on how to build a docker image.

Compose: A YAML file format with metadata for creating and running multi-container applications. It is a very handy CLI tool that lets us manage multiple containers with single commands.

Dockerize ASP.NET Core Application

.NET Core works on both Linux and Windows. To containerize a .NET Core application, add a Dockerfile to your project folder.

The Dockerfile should be in the directory of your project. If you have referenced other projects, you need to place the Dockerfile in the right context.


In my project, I have placed the Dockerfile in the solution folder because I need to copy my multi-layer API, domain, service, and data DLLs.

Dockerfile Overview

A Dockerfile is a text file that holds commands on how to build a docker image. It can contain multi-stage builds. The following file has four build stages.
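Since the file itself appears as an image in the original post, here is a hedged reconstruction of what such a four-stage Dockerfile typically looks like (the image tags and project names follow the post's User.API example and may differ from the actual demo):

```dockerfile
# Stage 1 (base): runtime image for the final container
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

# Stage 2 (build): SDK image, restore and build the projects
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet restore "User.API/User.API.csproj"

# Stage 3 (publish): compile in Release mode to /app/publish
FROM build AS publish
RUN dotnet publish "User.API/User.API.csproj" -c Release -o /app/publish

# Stage 4 (final): copy published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "User.API.dll"]
```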

The base stage gets the ASP.NET Core runtime image from Docker Hub. It then sets the working directory to '/app' and exposes ports 80 and 443 for network communication with the outside world.

The build stage gets the SDK image to build the application. It sets the working directory to '/src', copies all the project files, and stores the DLLs inside this folder. Next, it restores the NuGet packages of the User.API project.

COPY . . is a very important command. It copies all the files from your docker build context (the current project directory) into the docker image directory.

The publish stage publishes the application to the /app/publish directory. The parameters '-c Release -o /app/publish' compile the app with the Release configuration to the output path.

In the final stage, we set the path back to '/app' because that is where our app is going to run with the runtime environment (see stage 1), and we copy all the files from the /app/publish directory to /app.

When the docker container starts, ENTRYPOINT executes the 'dotnet project_name.dll' command. In our case, it runs the application with the help of User.API.dll.

Build and Run Docker Image

Now we have to build an image and run that image in a container. Go to the project directory where the Dockerfile exists and run the following command:

docker build -t user.api .

-t is optional and is used to tag the image. "." tells the docker build command to set the build context to the current directory of the project.

The following command will run the image in a container:

docker run -p 3000:80 --name userapi-container user.api

With the -p (--publish) option we bind the container's port to the host (using TCP). With the --name option we specify the name of the container. At the end, we pass the name of the image.


In this post, I walked you through the core concepts of Docker and how it can be added to a .NET Core project. In a fast DevOps culture, it is necessary to quickly change the application and deploy it in production. Docker makes this very easy.

In the next post, I will explain how you can use powerful commands like docker-compose to deploy multiple services with a single command.

You can download the reference source code on my Github.

CQRS pattern with ASP.NET Core

CQRS (Command Query Responsibility Segregation) is a pattern that separates read and write operations into different models. Commands (insert/update operations) have one model, and queries (read operations) have another. Separating them allows each model to be more focused on the task it performs.

Traditionally, commands have a large model, as it maps to the whole database table with some business logic validating the model before saving it to the database. Queries, on the other hand, have a simple model, returning a dataset according to UI requirements.

Traditional vs CQRS

In traditional applications, we use the same data model for read and write operations. This is fine for simple CRUD applications, but if you have a large, complex application with a large model, it gets really difficult to manage, as every other change can cause issues in testing, along with additional bloated code.

When we use CQRS, we can either have one database or two databases: one for commands and one for queries. If we create two databases, we have to keep them in sync, which is additional work.

How to implement it?

You will see two folders, one for commands and another for queries. The query folder has one model class and one handler. I am using the mediator pattern here, which I explain in my other blog post.


In the following image, you can see the handler class receives the query model and makes a query to the database to get all users from the database.


In the following image, you can see the handler class receives the command model and makes a write operation to save the user in the database.
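As the images are not reproduced here, a hedged sketch of both sides (class and property names are illustrative; AppDbContext stands in for the demo's EF Core context):

```csharp
// Query side: a read model and its handler
public class GetUsersQuery : IRequest<List<AppUser>> { }

public class GetUsersQueryHandler : IRequestHandler<GetUsersQuery, List<AppUser>>
{
    private readonly AppDbContext _context;
    public GetUsersQueryHandler(AppDbContext context) => _context = context;

    public Task<List<AppUser>> Handle(GetUsersQuery request, CancellationToken ct)
        => _context.Users.ToListAsync(ct); // read-only, shaped for the UI
}

// Command side: a write model and its handler
public class CreateUserCommand : IRequest<AppUser>
{
    public string Name { get; set; }
}

public class CreateUserCommandHandler : IRequestHandler<CreateUserCommand, AppUser>
{
    private readonly AppDbContext _context;
    public CreateUserCommandHandler(AppDbContext context) => _context = context;

    public async Task<AppUser> Handle(CreateUserCommand request, CancellationToken ct)
    {
        var user = new AppUser { Name = request.Name };
        _context.Users.Add(user); // write path, validated before saving
        await _context.SaveChangesAsync(ct);
        return user;
    }
}
```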


There are pros and cons to everything. Some of you may feel that implementing CQRS adds additional complexity to the project. In my opinion, it comes down to the use case of the project and how you use the pattern. I have also made a working demo app on GitHub with the CQRS pattern.

Introduction to Microservices

Microservices is a buzzword that is very popular nowadays. I have spoken to a few developers, and every single one of them defines it differently. Some say 'microservices is just a pattern to break a large monolithic application into smaller monolithic applications', while others describe it as 'microservices is just SOA (service-oriented architecture) done right'. In this blog, we will focus on microservices.

What is a Microservice?

It is an architectural style used to develop large, complex applications as small, modular, loosely coupled services that are independent, easily manageable, and deployable. In simple words, microservices help you break a large application into smaller applications that enforce the SRP (Single Responsibility Principle).

Companies run into trouble when a large application gets too difficult to manage and upgrade. Microservices are the answer and can help us break down complex, large applications into small independent applications that communicate with each other using language-independent interfaces (APIs).

History of Microservices

The term 'microservice' was coined in mid-2011.

The idea behind microservices is just good architecture principles, which have been around for a few decades. In the early 1980s, the first distribution technology, Remote Procedure Calls (RPC), was introduced by Sun Microsystems.

Who is using it?

Companies like Netflix, Amazon, Uber, eBay, and Spotify have a microservice architecture that helps them serve resources with intensive requests in a scalable manner. Also, a lot of other companies are moving their monolithic products to microservice architecture.

Microservices have many advantages and disadvantages.


Modular and independent

Microservices are broken down into multiple service components by design, which can easily be developed by small teams. Each service component is focused on a specific module and can be developed and deployed on docker independently. This helps new team members understand the functionality in less time.

Decentralised and cross-functional

As Conway's law states, 'The design of the software can tell us about the social fabric of the organisation and vice versa'. Small teams, where every team member is responsible for a business function, are the ideal organisation for the development of microservices. From architects, developers, QA engineers, product owners, and analysts to DevOps – each one is responsible for their own piece of the microservices.

Highly Resilient

Microservices are better for fault isolation: if one service fails, the other services continue to work without impacting the whole system. If a monolithic application fails, everything it is accountable for fails along with it. Microservices are therefore highly resilient, but they need an orchestrator and good cloud infrastructure for high availability.

Highly Scalable

Microservices are designed for scaling. Scaling allows applications to react to variable load by increasing and decreasing the number of instances of the services being used. Microservices are also cost-efficient, as you scale services based on their demand, whereas in a monolithic application you would have to scale the whole application irrespective of which service/function is heavily used.


Data Consistency

Microservices are designed to have their own data storage. This raises another question: how do we achieve data consistency? We use an event-driven approach and messaging technology (Azure Service Bus, RabbitMQ, etc.). For example, if the price of a product is changed in the product microservice, we have to update the price in the basket microservice.


Increased Latency

Microservices form a distributed architectural design pattern: we have to make calls to different services to get the data. This leads to more round trips and more latency. However, you can minimise this by using an API aggregation design to aggregate the data through an API gateway.

High Complexity

Microservices are small and modular, but when you have an application that consists of hundreds of microservices calling each other, it becomes really complex.


Today, we have discussed what a microservice is, along with its advantages and disadvantages. I have also made a demo app for you to understand the whole concept. You can download the code and play around with it. In upcoming articles, I will take a more hands-on approach and walk you through how you can develop a microservice.

Azure Locks – How to prevent accidental deletion of Azure resources

Microsoft Azure offers a feature known as 'Locks'. It enables you to prevent accidental deletion of, and unexpected changes to, Azure resources. By default, the Owner and User Access Administrator roles have access to apply locks.

There are two types of locks: CanNotDelete and ReadOnly.

CanNotDelete means authorized users can still read and modify a resource, but they can’t delete the resource.

ReadOnly means authorized users can read a resource, but they can't delete or update it. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

You can apply locks at the following levels in Azure:

  1. Subscription
  2. Resource Group
  3. Resource

How to apply Locks?

Go to the Azure Portal and select the resource. In my case, I am applying it at the resource level (a SQL database).

Azure Locks
How to find locks

Under Settings, click on 'Locks' to open the blade, enter the lock details, and click 'Add'.

Azure Locks
Adding Lock

Once the lock is created, go to the resource and click delete. Azure will give you an error message saying the delete operation cannot be performed because the resource is locked.
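Locks can also be created from the Azure CLI; a minimal sketch (the lock name and resource group name are illustrative):

```
# Apply a CanNotDelete lock to a whole resource group
az lock create --name DoNotDelete --lock-type CanNotDelete --resource-group my-resource-group

# List locks to verify
az lock list --resource-group my-resource-group
```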

Now we know what locks are and what they do. We can apply them to all kinds of resources, especially resources that cannot be recovered if deleted. For example, with a storage account, if 'soft delete' is not on, the account cannot be recovered once it is deleted. Another scenario: if you have multiple co-admins and contributors in your organisation and want to make sure no resource is deleted accidentally, you can lock those resources. In a nutshell, locks can be used in a lot of ways and can make our lives easy.

What are Minification and CDN?

Minification is the process of reducing the size of files so content loads faster, which results in lower bandwidth usage, faster results, and a better user experience.

Step by step minification process:

It is a very simple process:
1. Write the code in the development environment.
2. Minify the file.
3. Deploy it on the server and you are ready to go.

When the user opens the web page, a minified file is sent to the client’s browser.

Minified JS file


Normal JS file

You may have noticed the difference between the normal and minified files shown in the images above. When you minify a file, all the white space is removed and long variable names are shortened. The minified file is smaller, which results in lower bandwidth consumption and a significantly faster-loading web page. There's a difference of more than 150 KB between the normal and minified versions of the jQuery JS library file.

Content delivery network (CDN)

A best practice is to load all assets and JS files through a CDN. Different cloud vendors offer CDN services, such as Azure CDN and AWS CDN.

CDNs cache files for better performance and request management. It's recommended that you version your files so that when you push a new change to a JS file, the CDN loads the newer version.

For example, there's a file named login.min.js?v=1.0. When you make changes to this file, update its version to login.min.js?v=1.1 so the server always sends the updated version with the new changes.

Minification Tools

There are a lot of tools available on the market, but the following are the most popular ones.

BizSpark Program and Microsoft for Startups

Microsoft started the BizSpark program in 2009 to help startups and software entrepreneurs. It offered tons of benefits, like a $750/month credit for Microsoft Azure and free access to MSDN subscriptions, in which you could use licensed software like Visual Studio, Microsoft Word, Excel, and other Microsoft products. The main aim of this program was to help entrepreneurs bring ideas to life and build products/services that can help businesses and people.

To sign up for BizSpark, you simply had to fill out the form online and send the details of your idea. If the BizSpark team liked your idea, they would approve it and give you free three-year access to Microsoft Azure and other products.

The BizSpark program was discontinued on February 14th, 2018, and Microsoft created a new program named 'Microsoft for Startups'. Microsoft has partnered with startup accelerators, incubators, and VCs all around the world. To sign up for this program, you have to contact one of Microsoft's local program directors to learn if you qualify for the program's exclusive benefits. There is a list of partners you can look at on Microsoft's website.

This program can really help entrepreneurs who are working on their ideas or are in the middle of their startup journey. I wrote this blog just to spread the word about the program in case it can help anybody. If you need further help, shoot me an email and I can get you in touch with one of Microsoft's local representatives.