Posts

Showing posts from April, 2019

Load balancing and Fail-over systems

Load balancing is a technique for distributing requests across a network when a server approaches the limits of its CPU, disk, or database I/O capacity. The objective of load balancing is to optimize resource use and minimize response time, thereby avoiding overburdening any single resource. The goal of fail-over is the ability of another network component or server to continue the work of one that fails. Fail-over allows you to perform maintenance on individual servers or nodes without any interruption of your services. It is important to note that load balancing and fail-over are not the same thing, but they go hand in hand in helping you achieve high availability.

Implementing load balancing: Although the idea of load balancing is very clear, its implementation is not. In this post, I touch upon the basic ideas of implementing load balancing. Load balancing can be performed with the help of hardware a
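The simplest distribution strategy hinted at above is round-robin. Here is a minimal sketch in Python; the backend addresses are hypothetical, and a real balancer would also health-check nodes and drop failed ones, which is where fail-over comes in:

```python
from itertools import cycle

# Hypothetical backend pool for illustration only.
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

class RoundRobinBalancer:
    """Hands out backends in rotation so no single node is overburdened."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_backend(self):
        return next(self._pool)

balancer = RoundRobinBalancer(backends)
# Six requests spread evenly: each backend receives exactly two.
assignments = [balancer.next_backend() for _ in range(6)]
```

Other common strategies (least-connections, IP hash, weighted round-robin) follow the same shape: the balancer owns the pool and picks the next backend per request.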

Only one usage of each socket address (protocol/network address/port) is normally permitted exception throwing in WCF

Let's say that you are invoking a web service from another web service, and both are on the same box. You might be making authenticated or unauthenticated calls, and perhaps you are setting KeepAlive = false. Intermittently, under load, you might get "Only one usage of each socket address (protocol/network address/port) is normally permitted." You might be wondering why you are getting a *SOCKET* exception... Here is the scoop:

1. When you make authenticated calls, the client closes the connection after each call. So when you make authenticated calls repeatedly to the same server, you are opening and closing connections repeatedly.

2. The same can happen when you are making regular HTTP [unauthenticated] calls but setting keep-alive = false.

When a connection is closed, on the side that closed it the 5-tuple { Protocol, Local IP, Local Port, Remote IP, Remote Port } goes into a TIME_WAIT state for 240 seconds by default. In t
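The numbers above explain why the exception appears only under load. A back-of-the-envelope calculation, assuming the classic Windows defaults mentioned in the post (a 240-second TIME_WAIT and a dynamic port range of 1024-5000):

```python
# Each open-then-close parks one local ephemeral port in TIME_WAIT,
# making it unavailable for a new connection to the same endpoint.
TIME_WAIT_SECONDS = 240        # default TcpTimedWaitDelay on older Windows
EPHEMERAL_PORTS = 5000 - 1024  # classic Windows dynamic port range

# Sustainable rate of open-and-close connections to one remote endpoint
# before the pool of client ports is exhausted:
max_rate = EPHEMERAL_PORTS / TIME_WAIT_SECONDS  # roughly 16-17 per second
```

Push past that rate for long enough and the OS has no free local port to bind, which surfaces as the socket exception above. Reusing connections (keep-alive, preauthentication) avoids the churn entirely.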

API Gateway

An API gateway is a service that is the entry point into the application from the outside world. It's responsible for request routing, API composition, and other functions, such as authentication. Let's take a look at the API gateway pattern.

Overview of the API gateway pattern

The drawbacks of clients making multiple requests in order to display information to the user are well known (I describe them in my book!). A much better approach is for a client to make a single request to what's known as an API gateway. An API gateway is a service that is the single entry point for API requests into an application from outside the firewall. It's similar to the Facade pattern from object-oriented design. Like a facade, an API gateway encapsulates the application's internal architecture and provides an API to its clients. It might also have other responsibilities, such as authentication, monitoring, and rate limiting. Figure 1 shows the relationship between the clients, the API gateway,
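The request-routing responsibility described above can be sketched in a few lines. This is a minimal illustration, not a production gateway; the service names and ports are hypothetical, and a real gateway would add authentication, rate limiting, and retries on top:

```python
# Map a public path prefix to the internal service that owns it.
# Upstream addresses are made up for the sketch.
ROUTES = {
    "/orders": "http://order-service:8080",
    "/customers": "http://customer-service:8080",
}

def route(path: str) -> str:
    """Return the internal URL a request path should be forwarded to."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream + path
    raise LookupError("no route for " + path)
```

API composition builds on the same idea: the gateway routes one client request to several internal services and merges their responses, so clients never see the internal topology.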