Single Responsibility Principle (SRP)

In this article, we are going to discuss SRP, one of the SOLID design principles. The S in SOLID stands for the Single Responsibility Principle. It is a recommended practice to apply it in our code to make the code leaner and less coupled.

What is SRP?

Each software module or class should have one, and only one, reason to change.

Robert C. Martin

We need to design software so that every class has one responsibility, or a closely related set of functions. For example, suppose we are designing a module in which the user gets a welcome email on sign-up and all user actions are logged. If we write all of this functionality in a single class, it becomes really complex, and in the future a single change would impact the whole class and could break things.

To keep the code manageable, we split the functionality into multiple classes so that each class has a single job to perform, resulting in maintainable software.

Without SRP

In this version, the User module/class is doing multiple things: it saves a user, sends an email, and logs all the activity. This violates the Single Responsibility Principle. If we don't change the code, we would have to copy the email and logging code into every other class that needs it. The User class should do only one thing, which is to save users to the database.
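A minimal sketch of what such a class might look like (the class and method names here are just for illustration):

using System;
using System.IO;
using System.Net.Mail;

// One class, three responsibilities: persistence, email, and logging.
public class User
{
    public void Register(string email)
    {
        // 1. Save the user to the database
        SaveToDatabase(email);

        // 2. Send a welcome email
        using var smtp = new SmtpClient("smtp.example.com");
        smtp.Send(new MailMessage("noreply@example.com", email, "Welcome!", "Thanks for signing up."));

        // 3. Log the activity
        File.AppendAllText("activity.log", $"{DateTime.UtcNow}: registered {email}\n");
    }

    private void SaveToDatabase(string email)
    {
        // database code goes here
    }
}

Any change to how emails are sent or how activity is logged forces a change to this class, so it has more than one reason to change.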

With SRP

With SRP, each class has its own set of functions: the User class communicates with the database, the Email class is responsible for emails, and the Logging class logs activity. As a result, the code follows the Single Responsibility Principle.

The Email and Logging classes are reusable; you can inject them into any other module where they are needed.
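A sketch of the same functionality split into focused classes (again, the names are illustrative):

using System;
using System.IO;
using System.Net.Mail;

// Each class now has a single reason to change.
public class User
{
    private readonly EmailService _email;
    private readonly ActivityLogger _logger;

    public User(EmailService email, ActivityLogger logger)
    {
        _email = email;
        _logger = logger;
    }

    public void Register(string address)
    {
        SaveToDatabase(address);          // only persistence logic lives here
        _email.SendWelcome(address);
        _logger.Log($"registered {address}");
    }

    private void SaveToDatabase(string address)
    {
        // database code goes here
    }
}

public class EmailService
{
    public void SendWelcome(string to)
    {
        using var smtp = new SmtpClient("smtp.example.com");
        smtp.Send(new MailMessage("noreply@example.com", to, "Welcome!", "Thanks for signing up."));
    }
}

public class ActivityLogger
{
    public void Log(string message) =>
        File.AppendAllText("activity.log", $"{DateTime.UtcNow}: {message}\n");
}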

Conclusion

In this blog, we discussed SRP, a very important principle in today’s programming world. It makes code modular, leaner, and more testable. In the next blog, we will discuss the Interface Segregation Principle and its importance.

What is the Cartesian Explosion in EF Core?

Entity Framework is an ORM that makes developers’ lives easier. Using EF, we can interact with the database and perform CRUD operations from a .NET language (C#, VB.NET). In this blog, we are going to discuss the cartesian explosion in relation to loading data.

Cartesian Explosion

It is related to performing joins. When you perform a join, one table’s columns are repeated once for every matching record in the joined table.

var customers = context.Customers
    .Include(c => c.Addresses)
    .ToList();

The LINQ query translates to SQL along these lines:

SELECT * FROM Customers
LEFT JOIN Addresses
    ON Addresses.CustomerId = Customers.Id
Customer Id | Address Details
1           | 1
1           | 2
…           | …
1           | 500

You can see that the columns of the Customers table are repeated 500 times. Now imagine you are working on a more complex dataset where you have to add a few more includes; the returned result set would explode in size. This is known as the cartesian explosion.
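For example, adding a second collection include (the Orders navigation below is hypothetical) multiplies the duplication, because every customer row is repeated once per address/order combination when EF Core generates a single SQL query:

var customers = context.Customers
    .Include(c => c.Addresses)
    .Include(c => c.Orders)   // hypothetical second collection navigation
    .ToList();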


Solution

The solution is to split the query and fetch the data as two result sets rather than one. This results in two database round trips, but it improves performance (fewer reads and less CPU time).

var customers = context.Customers.ToList();

var addresses = context.Addresses.ToList();
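Alternatively, if you are on EF Core 5 or later, you can keep the Include and ask EF Core to split the SQL for you; a minimal sketch:

// Executed as separate SQL queries, avoiding the duplicated customer columns
var customers = context.Customers
    .Include(c => c.Addresses)
    .AsSplitQuery()
    .ToList();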

Conclusion

In this blog, we discussed what the cartesian explosion is in EF’s eager-loading pattern. You have to be careful to avoid this problem. It is recommended to use the split-query approach when the dataset is large and requires optimization; otherwise, in normal scenarios, a regular Include will do the job.

Docker Compose with an ASP.NET Core Microservice

In the last blog, we learned about the core concepts of Docker and how to create and run a .NET Core app in a container. Today, we are going to learn about docker-compose, which lets us run more than one container together.

Docker-compose file

It is a YAML file. By default it is named docker-compose.yml, but a different file name can be used and passed in with the -f parameter.
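For example, assuming a file named my-compose.yml, you would point docker-compose at it like this:

docker-compose -f my-compose.yml up -d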

First of all, we specify the version of the compose file format. It should be the latest version your Docker installation supports; I am using 3.6.

Next, we define our set of services. This is where the magic happens: we can define multiple services in one file and run them all with a single command.

A service has a name, an image, a build context, environment variables, networks, and volumes. You can see all the available options in the Compose file reference.
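A minimal sketch of such a file for an ASP.NET Core service plus a database (the service names, ports, image, and build context below are illustrative assumptions):

version: "3.6"

services:
  webapi:
    build:
      context: ./WebApi           # folder containing the service's Dockerfile (assumed)
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "8080:80"
    networks:
      - backend
    depends_on:
      - db

  db:
    image: postgres:13            # any database image would work here
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - backend

networks:
  backend:

volumes:
  dbdata: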

Docker-compose commands

It’s very easy to manage containers with docker-compose commands. There are two main commands: docker-compose up and docker-compose down.

Docker-compose up

docker-compose up -d --build

The docker-compose up command runs all the services in the compose file. You need to run this command from the directory where the docker-compose.yml file is located.

The -d parameter runs the containers in detached mode, so the console does not stay attached to the container output.

By default, images are built only when you run this command for the first time. If you make changes to the code, you need to pass the --build parameter to tell Docker to build the image(s) again.

Docker-compose down

docker-compose down

The docker-compose down command stops and removes all the containers for the services defined in the compose file. Again, you need to run it from the directory where the docker-compose.yml file is located.

Conclusion

Docker-compose is very simple and helps us run multi-container applications from a single file. It is very handy and powerful.

You can download the reference source code from my GitHub.