Posts

RabbitMQ Retry Architecture

In an event-driven architecture, services communicate through messages using message brokers like RabbitMQ. A transaction completes only when its message is processed successfully. While processing messages from a queue there is always a chance of failure, either because the data is invalid or because a resource is unavailable. If the data is invalid, the message will fail during processing; in that case we can reject the message from the queue and notify the corresponding service about the invalid data. Resource unavailability depends on the other parts of the distributed transaction, and the time until the resource becomes available can vary from milliseconds to seconds. Once the resource is available, we need to process the message again. Retrying the message on failure increases the reliability and accuracy of the system. Retry Mechanism: We can retry the message from RabbitMQ using two approaches. 1) Rollback ...
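As a rough illustration of the failure handling the post builds on, here is a minimal consumer sketch using the RabbitMQ.Client package (6.x-style API). The queue name, ProcessMessage handler and InvalidMessageException are placeholders; a rejected message would in practice also trigger a notification to the owning service.

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class InvalidMessageException : Exception { }

class RetryConsumer
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        channel.QueueDeclare(queue: "order-queue", durable: true, exclusive: false,
                             autoDelete: false, arguments: null);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            var body = Encoding.UTF8.GetString(ea.Body.ToArray());
            try
            {
                ProcessMessage(body);                                   // hypothetical business logic
                channel.BasicAck(ea.DeliveryTag, multiple: false);      // success: remove from queue
            }
            catch (InvalidMessageException)
            {
                // Invalid data: reject without requeue, then notify the producing service out of band.
                channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: false);
            }
            catch (Exception)
            {
                // Resource temporarily unavailable: put the message back so it gets retried.
                channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
            }
        };

        channel.BasicConsume(queue: "order-queue", autoAck: false, consumer: consumer);
        Console.ReadLine();
    }

    static void ProcessMessage(string body) { /* ... */ }
}
```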

RabbitMQ Designs

RabbitMQ is used to enable communication between services through messages, which is how we achieve an event-driven architecture. Most people have experienced RabbitMQ through the simple Push to Pull (P2P) mechanism: one service pushes messages into a queue and another service consumes them, a one-to-one mapping between producer and consumer. Many have read about exchanges and routing but never had the chance to implement them or map them to their own use cases. I started my RabbitMQ development with a simple P2P mechanism and later enhanced it into one-to-many relationships using exchanges and routing. P2P: In my eCommerce application I need to send an SMS to the end user once the user confirms an order. I created a separate service that sends the SMS based on a RabbitMQ message. The Order service pushes a message into the SMS queue once the user confirms the order. The SMS service reads the message from the SMS queue and...
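A minimal sketch of the producer side of this P2P flow using the RabbitMQ.Client package; the queue name and payload are placeholders, and the SMS service would consume from the same queue with BasicConsume, much like the consumer sketch above.

```csharp
using System.Text;
using RabbitMQ.Client;

// Order service: push a message into the SMS queue once an order is confirmed.
var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.QueueDeclare(queue: "sms-queue", durable: true, exclusive: false,
                     autoDelete: false, arguments: null);

// Placeholder payload; in practice this would be the serialized order/SMS details.
var payload = Encoding.UTF8.GetBytes("{\"orderId\":123,\"phone\":\"+10000000000\"}");

// Default exchange + routing key equal to the queue name = direct push to that queue.
channel.BasicPublish(exchange: "", routingKey: "sms-queue",
                     basicProperties: null, body: payload);
```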

Dapper Micro ORM (Connection Management)

Dapper is a micro-ORM, or data mapper, that internally uses ADO.NET. On top of that, Dapper maps the ADO.NET data structures to your custom POCO classes; since this is additional work, in theory it cannot be faster than ADO.NET. It helps prevent SQL injection from external user input by avoiding raw SQL query string building: Dapper provides methods to build parameterized queries as well as to pass sanitized parameters to stored procedures. Dapper offers multiple methods to query or execute stored procedures and SQL statements. At a high level we can split them into two categories, Query and Execute. Query: Mainly used to fetch data from the database. It supports querying synchronously or asynchronously, fetching single or multiple records, and reading single or multiple result sets. Execute: Mainly used to manipulate data in the database. We can pass stored procedures or SQL statements as input to execute them against the database. It ...
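A small sketch of the two categories with Dapper; the table name, stored procedure and connection string are placeholders.

```csharp
using System.Collections.Generic;
using System.Data;
using Dapper;
using Microsoft.Data.SqlClient;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrderRepository
{
    private readonly string _connectionString;
    public OrderRepository(string connectionString) => _connectionString = connectionString;

    // Query: fetch data with a parameterized statement (no raw string concatenation).
    public IEnumerable<Order> GetOrdersByStatus(string status)
    {
        using var connection = new SqlConnection(_connectionString);
        return connection.Query<Order>(
            "SELECT Id, Status FROM Orders WHERE Status = @Status",
            new { Status = status });
    }

    // Execute: manipulate data, here through a stored procedure with sanitized parameters.
    public int UpdateOrderStatus(int id, string status)
    {
        using var connection = new SqlConnection(_connectionString);
        return connection.Execute(
            "usp_UpdateOrderStatus",
            new { Id = id, Status = status },
            commandType: CommandType.StoredProcedure);
    }
}
```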

Distributed Caching (Redis)

Caching is mainly used to store data that is read very frequently or that is the result of some complex computation. It improves application performance and reduces API latency. Cache memory is made of high-speed static RAM, so it is faster than normal main memory, and it is expensive because of that speed. There are any number of in-memory caches available; Redis is one of the open-source in-memory data stores that keeps data in key-value format with optional durability. While designing distributed caching we need to take the following considerations into account in order to achieve low latency at low cost. Is Caching Required: Based on the points below we can identify whether caching is required for the application or not. 1) The number of read and write operations. 2) The number of read operations at the individual data or user level. 3) Whether data can be converted into a different format for a particular period of time. It will reduce the heav...
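A minimal sketch of the read/write path against Redis using the StackExchange.Redis package; the key, value, expiry and connection string are placeholders.

```csharp
using System;
using StackExchange.Redis;

class CacheExample
{
    static void Main()
    {
        // Connect to a local Redis instance (connection string is an assumption).
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        IDatabase db = redis.GetDatabase();

        // Cache a frequently read / precomputed value under a key with a TTL,
        // so stale entries expire instead of living forever.
        db.StringSet("user:42:profile", "{\"name\":\"Alice\"}", expiry: TimeSpan.FromMinutes(10));

        // Read path: check the cache first, fall back to the primary store on a miss.
        RedisValue cached = db.StringGet("user:42:profile");
        if (cached.HasValue)
        {
            Console.WriteLine($"cache hit: {cached}");
        }
        else
        {
            // Load from the database, then repopulate the cache.
        }
    }
}
```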

Access As a Service

For our day-to-day needs we use different kinds of applications, and each one has its own login mechanism. Each mechanism either enhances the user experience, such as federated login with an external provider, or improves application security, such as adding a verification layer to user login. Most applications are stateless and access their backend APIs using an access token. This access token should carry information about the user's access and minimal user information, and it should expire according to the security needs. We can generate the access token in different ways using different grant types. ClientCredentials: This grant type is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. Multiple applications can communicate with each other using this flow: they pass their client id and secret to the IDP and get a token, and using that token they acce...
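A minimal sketch of the client-credentials exchange; the token endpoint, client id, secret and scope are placeholders for whatever the IDP issues.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class ClientCredentialsExample
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // The calling service authenticates with its own client id and secret,
        // not with any end-user identity.
        var request = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = "order-service",
            ["client_secret"] = "<secret>",
            ["scope"] = "sms.send"
        });

        HttpResponseMessage response =
            await http.PostAsync("https://idp.example.com/connect/token", request);
        string tokenJson = await response.Content.ReadAsStringAsync();

        // tokenJson contains the access_token (plus expiry), which the caller then
        // sends as a Bearer token to the downstream API.
        Console.WriteLine(tokenJson);
    }
}
```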

.NET Core NuGet packages

.NET Core development is built on open-source NuGet packages. NuGet reduces work and makes the application easy to manage, but we need to select packages based on their purpose, otherwise they will impact application performance as well as application size. As part of the development lifecycle we need a different set of libraries at each stage. While creating a new service we need to communicate what APIs will be available and which fields belong to each API; this provides high-level information about the service, so we need clear documentation in the first phase of development itself. We can create documentation either manually or from code. Manual documentation doubles the work, whereas from code we can generate the document and update it later as needed. We can create high-level API documentation using Swashbuckle at the initial stage of development. Swashbuckle: We need to share the API specs while developi...
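A minimal sketch of wiring Swashbuckle into an ASP.NET Core service (Swashbuckle.AspNetCore package, minimal hosting model); the API title is a placeholder.

```csharp
// Program.cs
using Microsoft.OpenApi.Models;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Swashbuckle: generate an OpenAPI document from the controllers and actions.
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new OpenApiInfo { Title = "Order Service API", Version = "v1" });
});

var app = builder.Build();

// Serve the generated spec and the interactive Swagger UI so the API shape
// can be shared early in development.
app.UseSwagger();
app.UseSwaggerUI();

app.MapControllers();
app.Run();
```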

API Calls Evolution (Callback vs Promise vs Observables)

Very few applications in the market make no API calls at all; at a minimum they send analytics details to the backend to improve application features. Making an API call is simple, but the implementation carries a lot of complexity. In JavaScript, making API calls is a critical part in terms of performance: we shouldn't block the thread until we receive the data. The application needs to know when the data is returned and then execute the data processing, which can be simple or complex depending on the business logic. The data-processing side has improved a lot based on developer pain points. I started my development with callbacks, then promises, and now Observables, and each phase addressed a lot of problems. Callback: a function is called once a response is received; depending on whether the response is a success or a failure, the corresponding function gets executed. This is a simple use case. ...
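The post walks through this in JavaScript; as a rough sketch in C# terms (the language of the other examples above), the same progression maps onto a callback delegate, a Task (the analogue of a Promise), and an IObservable from the System.Reactive package (the analogue of an Observable). The endpoint URL is a placeholder.

```csharp
using System;
using System.Net.Http;
using System.Reactive.Linq;   // System.Reactive package, assumed for the Observable example
using System.Threading.Tasks;

class ApiCallStyles
{
    static readonly HttpClient Http = new HttpClient();
    const string Url = "https://api.example.com/orders"; // placeholder endpoint

    // 1) Callback style: hand over success/failure functions to be invoked when data arrives.
    static void FetchWithCallback(Action<string> onSuccess, Action<Exception> onError)
    {
        Http.GetStringAsync(Url).ContinueWith(t =>
        {
            if (t.IsFaulted) onError(t.Exception);
            else onSuccess(t.Result);
        });
    }

    // 2) Promise style: a Task represents the eventual result and can be awaited.
    static Task<string> FetchWithTask() => Http.GetStringAsync(Url);

    // 3) Observable style: a stream that can be subscribed to, composed, and retried.
    static IObservable<string> FetchWithObservable() =>
        Observable.FromAsync(() => Http.GetStringAsync(Url));

    static async Task Main()
    {
        FetchWithCallback(data => Console.WriteLine(data), ex => Console.WriteLine(ex.Message));

        string viaTask = await FetchWithTask();
        Console.WriteLine(viaTask);

        FetchWithObservable().Subscribe(
            data => Console.WriteLine(data),
            ex => Console.WriteLine(ex.Message));
    }
}
```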