Using signalfd and pidfd to make signals less painful under Linux

Anyone introduced to Unix programming gets to marvel at the clever construct of signals. In the life-cycle of a process, fortune and misfortune are present in good measure. Signals allow the operating system to tell the process about the occurrence of various events like the execution of illegal CPU instructions, a user typing and thus… Continue reading Using signalfd and pidfd to make signals less painful under Linux
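
Not the article's code, but a minimal sketch of the idea, assuming we only care about SIGINT and SIGCHLD: block the signals, ask the kernel for a signalfd, and then read() delivered signals like ordinary data.

```c
/* Minimal signalfd sketch: block SIGINT/SIGCHLD and read them as data. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigaddset(&mask, SIGCHLD);

    /* Block normal delivery so these signals queue up for the fd instead. */
    if (sigprocmask(SIG_BLOCK, &mask, NULL) == -1)
        exit(1);

    int sfd = signalfd(-1, &mask, 0);
    if (sfd == -1)
        exit(1);

    struct signalfd_siginfo si;
    /* Blocks like any read(); the fd fits naturally into poll()/epoll() loops. */
    if (read(sfd, &si, sizeof(si)) == sizeof(si))
        printf("got signal %u from pid %u\n", si.ssi_signo, si.ssi_pid);

    close(sfd);
    return 0;
}
```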

Containers the hard way: Gocker: A mini Docker written in Go

They are popular and they are misunderstood. Containers have become the default way applications are packaged and run on servers, initially popularized by Docker. Now, Docker itself is misunderstood. It is the name of a company and a command (a suite of commands, rather) that allows you to manage containers (create, run, delete, network) easily.… Continue reading Containers the hard way: Gocker: A mini Docker written in Go
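
Gocker itself is written in Go; for consistency with the rest of these articles, here is a rough C sketch of the primitive container runtimes build on: clone() with namespace flags. The hostname and shell are just illustrative, and it needs root.

```c
/* Sketch: run a shell in new UTS + PID + mount namespaces (needs root). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg)
{
    (void)arg;
    sethostname("container", 9);              /* visible only in this UTS ns */
    execlp("/bin/sh", "/bin/sh", (char *)NULL);
    perror("execlp");
    return 1;
}

int main(void)
{
    int flags = CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;
    pid_t pid = clone(child, child_stack + sizeof(child_stack), flags, NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```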

Sparkler: A KVM-based Virtual Machine Manager

[Join the discussion on Hacker News here.] Serverless computing is quite the rage these days, and AWS Lambda is at the forefront of it. A while ago, AWS released Firecracker, the engine behind Lambda. Unsurprisingly, it was based on Linux’s KVM (Kernel-based Virtual Machine) technology, but what was surprising was how it gave up the… Continue reading Sparkler: A KVM-based Virtual Machine Manager
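
For a taste of what any KVM-based VMM does first, here is a rough C sketch (not Sparkler's code) of the initial /dev/kvm handshake: open the device, check the API version, then create a VM and a vCPU, each of which is just another file descriptor.

```c
/* Sketch: the first few ioctl()s a KVM-based VMM issues. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }

    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);      /* one fd per virtual machine */
    if (vmfd < 0) { perror("KVM_CREATE_VM"); return 1; }

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0); /* one fd per virtual CPU */
    printf("vm fd=%d, vcpu fd=%d\n", vmfd, vcpufd);

    close(vcpufd);
    close(vmfd);
    close(kvm);
    return 0;
}
```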

Linux Applications Performance: Part VII: epoll Servers

This chapter is part of a series of articles on Linux application performance. If you came here without reading the poll()-based implementation description, you should read that first: all the explanation of how exactly we move from a process- or thread-based model to an event-based model is there in the poll()-based article. Without going through it, this article might… Continue reading Linux Applications Performance: Part VII: epoll Servers
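
For a flavour of the pattern this article builds up to, here is a minimal epoll sketch (not the article's code; socket setup and error handling are omitted, and a trivial echo handler stands in for real request processing):

```c
/* Sketch of the epoll pattern: one thread multiplexing many client sockets. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

static void handle_client(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n <= 0) { close(fd); return; }   /* closed fds drop out of epoll */
    write(fd, buf, n);
}

void event_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                int client = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                handle_client(fd);       /* never blocks waiting on idle fds */
            }
        }
    }
}
```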

Linux Applications Performance: Part IV: Threaded Servers

This chapter is part of a series of articles on Linux application performance. Threads were all the rage in the 90s. They allowed the cool guys to give jaw-dropping demos to their friends and colleagues. Like so many cool things from the 90s, today they are yet another tool in any programmer's arsenal.… Continue reading Linux Applications Performance: Part IV: Threaded Servers
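
As a rough illustration (not the article's code), a thread-per-connection accept loop might look like this; the helper names are made up, and an echo loop stands in for real request handling.

```c
/* Sketch: spawn one thread per accepted connection (socket setup omitted). */
#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static void *client_thread(void *arg)
{
    int fd = (int)(intptr_t)arg;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)   /* echo until client closes */
        write(fd, buf, n);
    close(fd);
    return NULL;
}

void serve_threaded(int listen_fd)
{
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;
        pthread_t tid;
        /* Detached threads clean themselves up; no pthread_join() needed. */
        if (pthread_create(&tid, NULL, client_thread, (void *)(intptr_t)client) == 0)
            pthread_detach(tid);
        else
            close(client);
    }
}
```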

Linux Applications Performance: Part V: Pre-threaded Servers

This chapter is part of a series of articles on Linux application performance. The design discussed in this article is more popularly known as “thread pool”. Essentially, there is a pre-created pool of threads that are ready to serve any incoming requests. This is comparable to the pre-forked server design. Whereas there was a process… Continue reading Linux Applications Performance: Part V: Pre-threaded Servers
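
A minimal sketch of the idea (not the article's code): a fixed pool of worker threads created up front, all sharing one listening socket, with a mutex serializing accept() across the pool.

```c
/* Sketch: pre-created thread pool, each worker blocking in accept(). */
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

#define POOL_SIZE 8

static int listen_fd;
static pthread_mutex_t accept_lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_client(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(fd, buf, n);                  /* echo stands in for real work */
    close(fd);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&accept_lock);   /* only one thread in accept() */
        int client = accept(listen_fd, NULL, NULL);
        pthread_mutex_unlock(&accept_lock);
        if (client >= 0)
            handle_client(client);
    }
    return NULL;
}

void serve_prethreaded(int fd)
{
    listen_fd = fd;
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)     /* no per-request thread creation */
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);        /* workers loop forever */
}
```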

Linux Applications Performance: Part VI: Polling Servers

This chapter is part of a series of articles on Linux application performance. When things happen sequentially, we get them. All our flowcharts, algorithms or workflows are sequential in nature. It’s easy for our brains to understand sequential happenings. Moving to a multi-process or a threaded model is also a simple extension of that model.… Continue reading Linux Applications Performance: Part VI: Polling Servers
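
For a flavour of that shift to an event-based model, here is a rough poll()-based loop (not the article's code; socket setup, error handling and real request processing are omitted):

```c
/* Sketch of a poll()-based loop: one pollfd slot per open socket. */
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 1024

void serve_poll(int listen_fd)
{
    struct pollfd fds[MAX_FDS];
    int nfds = 1;
    fds[0].fd = listen_fd;
    fds[0].events = POLLIN;

    for (;;) {
        poll(fds, nfds, -1);                     /* wait for any fd to be ready */
        for (int i = 0; i < nfds; i++) {
            if (!(fds[i].revents & POLLIN))
                continue;
            if (fds[i].fd == listen_fd) {
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0 && nfds < MAX_FDS) {
                    fds[nfds].fd = client;       /* watch the new client too */
                    fds[nfds].events = POLLIN;
                    nfds++;
                }
            } else {
                char buf[4096];
                ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                if (n <= 0) {                    /* client went away: drop slot */
                    close(fds[i].fd);
                    fds[i] = fds[--nfds];
                    i--;
                } else {
                    write(fds[i].fd, buf, n);    /* echo stands in for real work */
                }
            }
        }
    }
}
```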

Linux Applications Performance: Part III: Preforked Servers

This chapter is part of a series of articles on Linux application performance. While the iterative server has trouble serving clients in parallel, the forking server incurs a lot of overhead forking a child process every time a client request is received. We saw from the performance numbers that in our test setup, it has… Continue reading Linux Applications Performance: Part III: Preforked Servers
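
A rough sketch of the pre-forked pattern (not the article's code): fork the workers once up front, then let each one block in accept() on the shared listening socket, so there is no per-request fork() cost.

```c
/* Sketch: a fixed set of worker processes sharing one listening socket. */
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_WORKERS 8

static void handle_client(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(fd, buf, n);                       /* echo stands in for real work */
    close(fd);
}

void serve_preforked(int listen_fd)
{
    for (int i = 0; i < NUM_WORKERS; i++) {
        if (fork() == 0) {                       /* child: loop forever */
            for (;;) {
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0)
                    handle_client(client);
            }
        }
    }
    while (wait(NULL) > 0)                       /* parent just reaps */
        ;
}
```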

Linux Applications Performance: Part II: Forking Servers

This chapter is part of a series of articles on Linux application performance. In Part I: Iterative servers, we took a look at a server which deals with one client request at a time. This server called accept() whenever it was done serving one client so that it could accept more client connections and process… Continue reading Linux Applications Performance: Part II: Forking Servers
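
As a rough illustration (not the article's code), the fork-per-connection pattern looks something like this; the per-request fork() is exactly the overhead the pre-forked design in Part III avoids.

```c
/* Sketch: fork() once per accepted connection (socket setup omitted). */
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

static void handle_client(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(fd, buf, n);                    /* echo stands in for real work */
}

void serve_forking(int listen_fd)
{
    signal(SIGCHLD, SIG_IGN);                 /* let the kernel reap children */
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;
        if (fork() == 0) {                    /* child: serve this one client */
            close(listen_fd);
            handle_client(client);
            close(client);
            _exit(0);
        }
        close(client);                        /* parent keeps only listen_fd */
    }
}
```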

Linux Applications Performance: Part I: Iterative Servers

This chapter is part of a series of articles on Linux application performance. The iterative network server is one of the earliest programs you might have written if you ever took a course or read a book on network programming. This type of server is not very useful except for learning network programming or in… Continue reading Linux Applications Performance: Part I: Iterative Servers
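
A minimal sketch of the iterative pattern (not the article's code; an echo reply stands in for real request handling, and the port number is arbitrary). The defining property is that the next client waits until the current one is fully served and closed.

```c
/* Sketch: an iterative server handles exactly one client at a time. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                      /* illustrative port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    for (;;) {
        int client = accept(listen_fd, NULL, NULL);   /* one client at a time */
        if (client < 0)
            continue;
        char buf[4096];
        ssize_t n = read(client, buf, sizeof(buf));
        if (n > 0)
            write(client, buf, n);                    /* echo, then move on */
        close(client);         /* the next client waits until we get here */
    }
}
```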