An interesting thing about stream storage clients like libvegas is that they pack in a lot of functionality. It is therefore important to choose a programming language suited to the task. For us, one of the key considerations in choosing the language was the deployment model for libvegas. Below we present the two models we investigated.

Out-of-process deployment

In this model, we package libvegas as an executable and distribute it as part of the library for each programming language.
A few years ago, I wanted to dabble with an AWS service that I hadn’t used before. One day, I tried Amazon Kinesis Data Streams (KDS). I was shocked by how little has been said about it. It’s a stream storage service that is completely serverless, can auto scale (both in and out with a single API call), and is designed with high-throughput, low-latency applications in mind 🤯.
In this blog post, we are going to explore how to tune the AWS SDK’s HTTP connection pool for S3 in high-throughput, low-latency environments. Instead of diving straight into the deep end, we will discuss why connection pooling is important and how it works. Then we will go through a series of measurements collected while adjusting the connection pool for optimal performance. For the experiments I used the AWS Go SDK v2; however, the concepts discussed here apply to other SDKs as well.
Large systems are usually constructed by composing many small systems. My favorite browser, for example, distributes its work across different processes while I’m conveniently enjoying my morning dose of Hacker News. The same applies to server-side systems, with the added complexity of modules being distributed across multiple machines, built using different tool-chains, and so on. Modularisation is great, as it lets us reduce coupling and increase cohesion. Ultimately, however, we are interested in the overall state of the system rather than that of the individual modules.