Load balancing and Fail-over systems

Load balancing is a technique for distributing incoming requests across multiple servers when a single server is approaching its limits in CPU, disk, or database I/O capacity. The objective of load balancing is to optimize resource use and minimize response time, thereby avoiding overloading any single resource.
The goal of fail-over is to let another component or server take over the work of a particular network component, or of an entire server, should the first one fail. Fail-over also allows you to perform maintenance on individual servers or nodes without any interruption to your services.
It is important to note that load balancing and fail-over are not the same thing, but they go hand in hand in helping you achieve high availability.

Implementing Load Balancing:

Although the idea of load balancing is simple, its implementation is not. In this post, I will touch upon the basic approaches to implementing load balancing. Load balancing can be performed with hardware, with software, or sometimes with a combination of both.


The simplest way of load balancing is to use different servers for different services. For instance, you could run the web server on one instance, the database server on another, and serve static content through a CDN. This approach is easy because there is no need for data replication.
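
As a rough illustration, the Python sketch below simply maps each service to its own host. The hostnames are hypothetical placeholders, not a prescription for how your stack should be laid out.

```python
# A minimal sketch of service separation: each concern lives on its own host.
# All hostnames below are hypothetical placeholders.
SERVICES = {
    "web":    "web01.example.com",   # application / web server
    "db":     "db01.example.com",    # database server
    "static": "cdn.example.com",     # static assets served through a CDN
}

def host_for(service: str) -> str:
    """Return the host that handles the given service."""
    return SERVICES[service]

if __name__ == "__main__":
    print(host_for("db"))  # -> db01.example.com
```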
A second way to perform load balancing is to have multiple front-end servers. That means setting up multiple IP addresses for the same domain. When a client sends a request, it is handed one of the IP addresses at random, spreading the load around.
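
Below is a minimal Python sketch of that idea: a client is handed one address out of a pool of front-end IPs, either at random or in round-robin order. The addresses are hypothetical placeholders, and in practice this selection is usually done by DNS round-robin rather than by application code.

```python
import itertools
import random

# Pool of front-end servers for the same domain (documentation-range IPs).
FRONTEND_IPS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

def pick_random() -> str:
    """Hand each client a random front-end, roughly spreading the load."""
    return random.choice(FRONTEND_IPS)

# Alternatively, cycle through the pool in order (round-robin).
_rr = itertools.cycle(FRONTEND_IPS)

def pick_round_robin() -> str:
    return next(_rr)

if __name__ == "__main__":
    print([pick_random() for _ in range(3)])
    print([pick_round_robin() for _ in range(3)])
```

Random selection spreads load only on average; round-robin gives a more even spread but needs shared state if several resolvers hand out addresses.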

Implementing Fail-over:

Since fail-over involves a system going down (or failing) completely, the data needs to be present on all servers; in other words, there is a need for data replication. On Unix-based systems, file systems can be synced using rsync and cron jobs, whereas databases are typically kept in sync using their own replication mechanisms.
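
As a rough sketch, the following Python script wraps an rsync invocation that a cron job could run at a fixed interval. The paths and the destination host are hypothetical, and it assumes rsync is installed and SSH access is set up between the servers.

```python
import subprocess

SOURCE = "/var/www/"                             # directory to replicate (assumed)
DESTINATION = "backup01.example.com:/var/www/"   # hypothetical secondary server

def sync() -> None:
    # -a preserves permissions and timestamps, -z compresses the transfer,
    # --delete removes files on the destination that no longer exist at the source.
    subprocess.run(["rsync", "-az", "--delete", SOURCE, DESTINATION], check=True)

if __name__ == "__main__":
    sync()
```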
Fail-over typically involves two servers: a primary and a secondary. The primary takes the normal load and processes requests, while the secondary monitors the primary and waits to take over its services if it goes down.
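
A minimal sketch of that monitoring loop, assuming the primary exposes a service on port 80, might look like the following. The hostname and the take_over() hook are placeholders for whatever take-over action you actually need.

```python
import socket
import time

PRIMARY_HOST = "primary.example.com"   # hypothetical primary server
PRIMARY_PORT = 80
CHECK_INTERVAL = 5                     # seconds between health checks
MAX_FAILURES = 3                       # consecutive failures before declaring the primary dead

def primary_is_up() -> bool:
    """Return True if the primary still accepts TCP connections."""
    try:
        with socket.create_connection((PRIMARY_HOST, PRIMARY_PORT), timeout=2):
            return True
    except OSError:
        return False

def take_over() -> None:
    # Placeholder: start local services, claim a floating IP, or update DNS.
    print("Primary appears down - secondary taking over")

if __name__ == "__main__":
    failures = 0
    while True:
        failures = 0 if primary_is_up() else failures + 1
        if failures >= MAX_FAILURES:
            take_over()
            break
        time.sleep(CHECK_INTERVAL)
```

Requiring several consecutive failures before taking over helps avoid flapping when the primary is merely slow or a single check times out.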

For fail-over to take place successfully, you need to detect the failure of a system and then route requests to a new one. The switch can be triggered by changing the IP address that your domain points to. However, a DNS change takes a few minutes to propagate to clients.
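
To make the trigger concrete, here is a hedged sketch of repointing the domain's A record at the secondary server via a DNS provider's HTTP API. The endpoint, token, and payload format are entirely hypothetical; every provider exposes its own API, and the change still reaches clients only as fast as the record's TTL allows.

```python
import json
import urllib.request

DNS_API_URL = "https://dns.example-provider.com/v1/records/www.example.com"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                                      # hypothetical credential
SECONDARY_IP = "203.0.113.20"                                                 # secondary server's address

def point_domain_at_secondary() -> int:
    """Update the A record so the domain resolves to the secondary server."""
    payload = json.dumps({"type": "A", "content": SECONDARY_IP, "ttl": 60}).encode()
    req = urllib.request.Request(
        DNS_API_URL,
        data=payload,
        method="PUT",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(point_domain_at_secondary())
```

Keeping the record's TTL low (here, 60 seconds) shortens the window during which clients still resolve to the failed primary.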

We hope that this post helped you understand the basics of load balancers and fail-over systems and serves as a useful first step towards implementing these techniques in your product.
