This article shows you how to increase the performance of your NestJS app. You can use it as a checklist for improving your app's performance, whether you are refactoring an existing app or creating a new one.

Performance is a broad term that encompasses how well a system executes a task or set of tasks. Here we will consider the factors below to measure performance.

  • Response Time: the time the application takes to respond to a single user request.
  • Latency: the time the application takes to respond to a user under an enormous number of requests.
  • Throughput: the number of requests that can be handled per second.

I created a simple application that has poor performance in terms of the above measures. I will work on fixing this app and improving the measures as much as possible.

The application and the tool

The application I use here is very simple: it exposes a single API that returns an array of users after capitalizing the first letter of each word in the username and about fields. It reads the user data from a JSON file.

The tests run on a PC with these specs: 12th Gen Intel(R) Core(TM) i9-12900K processor, 3200 MHz, 16 cores, 24 logical processors, 64 GB RAM.

I will use the Chrome browser to measure a single request to this API.

I use autocannon to load test the app using this command:

 autocannon -c 10 -d 10s http://localhost:3000/users

With this command, autocannon opens 10 connections for 10 seconds and sends as many requests as my app can process. The first load test result looks like this.

The summary of this load test result is as follows:

  • Avg Latency: 73 ms
  • Avg Req/Sec: 134.4
  • Avg Bytes/Sec: 5.97 MB
  • Total Requests: 1k
  • Total Size of All Requests: 58.7 MB
  • Single Request Time: 4 ms
  • Single Request Size: 43.5 KB

I will arrange my results in this table to track the difference easily.

The code used in this application is intended to include bad practices to show the impact of each optimization step.

In this guide, I will assume that database performance practices are already optimized and will focus solely on the application side. I will mention caching as a crucial step in enhancing the performance of a NestJS app, but I will not measure its impact.

1- Logging

In Node.js, synchronous logging can significantly impact performance compared to asynchronous logging. When using synchronous logging, each log entry is written to the log file or console immediately, blocking the execution of the application until the write operation completes. This can lead to noticeable delays, particularly under high-load scenarios where frequent logging occurs. On the other hand, asynchronous logging decouples the logging operation from the main execution thread, allowing the application to continue processing requests while log entries are written in the background.

Although we have only one line that logs (“Get users”), it has an impact on the app's performance. There are alternative logging libraries out there like Winston, Pino, Morgan, Bunyan, Log4js, and Errsole. Here I chose nest-winston, a NestJS provider for the Winston library that operates asynchronously.
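Here is a minimal sketch of how nest-winston might be wired up as the application logger; the transport and format settings are illustrative, not the exact configuration used in the repository.

import { NestFactory } from '@nestjs/core';
import { WinstonModule } from 'nest-winston';
import * as winston from 'winston';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule, {
    // Replace the default Nest logger with a Winston-backed one
    logger: WinstonModule.createLogger({
      transports: [
        new winston.transports.Console({
          format: winston.format.combine(
            winston.format.timestamp(),
            winston.format.simple(),
          ),
        }),
      ],
    }),
  });
  await app.listen(3000);
}
bootstrap();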

After replacing the default NestJS logger (check the Logging branch), the load test results are as follows.

As you can see, the average latency declined from 73 ms to 11.7 ms, roughly a 6x improvement. The average requests/sec increased to 864, and the total requests processed by the app increased to 8k, which means the app has enough resources to serve more requests, thanks to async logging.

Advice: replace your synchronous logging library with an asynchronous one.

2- Injection Scope

Injection scope in NestJS can significantly impact performance. While the default singleton scope creates a single instance shared across all requests, request scope adds performance overhead: each request instantiates a new instance of the class and a new instance of each dependent class. This leads to increased memory consumption and potential bottlenecks under high load. In our load test, for instance, 1k requests hit the server and 1k UserService instances were created.

@Injectable({ scope: Scope.REQUEST })
export class UserService { ..}

While there is no reason to make this service request-scoped, you may have use cases where you need a request-scoped service. In such cases you can use durable providers or AsyncLocalStorage, as sketched below.
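As a rough sketch of the AsyncLocalStorage approach (the store shape and middleware are illustrative and not part of the demo app): the service stays a singleton while still reading per-request data.

import { AsyncLocalStorage } from 'node:async_hooks';
import { Injectable, NestMiddleware } from '@nestjs/common';
import { Request, Response, NextFunction } from 'express';

// A process-wide store holding per-request context
export const requestStore = new AsyncLocalStorage<{ requestId: string }>();

@Injectable()
export class RequestContextMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: NextFunction) {
    // Run the rest of the request pipeline inside a store scoped to this request
    const requestId = (req.headers['x-request-id'] as string) ?? 'n/a';
    requestStore.run({ requestId }, () => next());
  }
}

@Injectable()
export class UserService {
  getUsers() {
    // A singleton can still read per-request values from the store
    const ctx = requestStore.getStore();
    // ... use ctx?.requestId for logging, tracing, etc.
  }
}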

In our application I will just remove Scope.REQUEST and keep the default singleton.

@Injectable()
export class UserService { ..}

Let's run autocannon again... 🚀

Again, the average latency improved by 10% to 10.33 ms, and the total requests increased by 1k. This is much better now; let's continue improving.

Advice: assess your service design to decide whether it really needs request scope or whether a singleton will be sufficient.

3- Code Optimization

Bad code is the root of all evil, yet code optimization is a broad term that cannot be fully covered here. There are many ways to analyze algorithm performance, but the two most important measures are time (Big O notation) and space. It is recommended to profile your code to detect the functions that consume the most time and memory.

Anyway, let's check the capitalizeInefficient function. The function has a time complexity of O(w * k), which simplifies to O(n) because n = w * k (the total number of characters in the string). It builds the result by appending characters with +=, which creates a new string at every iteration. This produces many intermediate strings, leading to higher constant-factor overhead due to frequent memory allocation.
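The full original implementation lives in the repository; a hypothetical version of such a character-by-character approach could look like this:

// Illustrative only: builds the result one character at a time with +=
private capitalizeInefficient(str: string): string {
  let result = '';
  let capitalizeNext = true;
  for (let i = 0; i < str.length; i++) {
    const ch = str[i];
    // Each += allocates a new intermediate string
    result += capitalizeNext && ch !== ' ' ? ch.toUpperCase() : ch;
    capitalizeNext = ch === ' ';
  }
  return result;
}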

So let's improve it a little bit…

private capitalizeOptimized(str: string): string {
  return str
    .split(' ')
    .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
    .join(' ');
}

This enhancement uses charAt(0).toUpperCase() and slice(1). These methods are optimized internally by the JavaScript engine, reducing the overhead of creating new strings multiple times. Each call to slice(1) generates the substring in a single step, avoiding the repeated memory allocations that happen in the first version.

Let's run autocannon again. 🚀

We save 2 ms in average latency, total requests increased to 11k, and for the first time a single request takes as little as 3 ms. Good, but not enough.

4- Caching

While I am not going to measure the effect of caching on the application's performance, I want to mention the caching levels you can use in a NestJS app, from in-memory caching inside the process to a distributed cache such as Redis shared across instances.
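As an illustration only (its impact is not measured in this article), here is a minimal sketch of in-memory response caching, assuming the @nestjs/cache-manager package; swapping the default store for Redis would turn it into a distributed cache.

import { Controller, Get, Module, UseInterceptors } from '@nestjs/common';
import { CacheModule, CacheInterceptor, CacheTTL } from '@nestjs/cache-manager';
import { UserService } from './user.service';

@Controller('users')
@UseInterceptors(CacheInterceptor) // cache GET responses for this controller
export class UserController {
  constructor(private readonly userService: UserService) {}

  @Get()
  @CacheTTL(30) // time-to-live; check your cache-manager version for seconds vs milliseconds
  async getUsers() {
    return this.userService.getUsers();
  }
}

@Module({
  imports: [CacheModule.register()], // in-memory store by default
  controllers: [UserController],
  providers: [UserService],
})
export class UserModule {}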

5- Compression

In NestJS, using compression middleware can help improve performance by making data smaller and faster to transfer between the server and clients, which saves bandwidth and speeds up responses. However, you need to be careful with how much compression you use. More compression means more CPU work, which can slow things down if your server is under heavy load. I used Gzip for compression with level 1 to keep things fast. You should find a balance between speed and how much you compress your data for your specific needs.

import * as compression from 'compression';

app.use(compression({ level: 1 }));

autocannon… 🚀

autocannon result after gzip compression

The number of requests increased by 9%, and the response size decreased dramatically to 8.2 KB. The single request time went back up to 4 ms, which is normal because of the time taken by compression.

Let's try a better compression algorithm: Brotli, developed by Google.

app.use(
  compression({
    level: 1, // Adjust the compression level as needed
    filter: (req, res) => {
      // Compress only when the default filter passes and the client accepts Brotli
      return compression.filter(req, res) && req.headers['accept-encoding']?.includes('br');
    },
    brotli: {
      // Brotli-specific options can be added here
    },
  }),
);

autocannon… 🚀

autocannon result after brotli compression

Nothing significant here, except that the average latency and average req/sec improved a little. Brotli compresses well, but it doesn't improve performance in this case.

6- Fastify

Fastify is a framework known for its high performance and low overhead, making it an attractive alternative to Express for building Node.js applications. The key benefit of using Fastify lies in its asynchronous, schema-based approach, which optimizes request handling and significantly reduces the time spent on routing and middleware processing.

Switching your NestJS app to Fastify is easy unless your code depends on Express-specific features.

import { NestFactory } from '@nestjs/core';
import { FastifyAdapter, NestFastifyApplication } from '@nestjs/platform-fastify';
import * as compression from 'compression';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    new FastifyAdapter(),
  );
  // Apply compression middleware
  app.use(compression({ level: 1 }));
  await app.listen(3000);
}
bootstrap();

autocannon… 🚀

autocannon result after using Fastify

Switching to Fastify from Express provides modest performance improvements, reducing latency by about 3.4% and handling 3.4% more requests per second. Fastify also increases data throughput by 4.4%, processing more requests and data overall.
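One thing to keep in mind: the bootstrap snippet above keeps the Express compression middleware from the previous step. With the Fastify adapter, the Fastify-native plugin is another option; here is a sketch, assuming the @fastify/compress package is installed.

import { NestFactory } from '@nestjs/core';
import { FastifyAdapter, NestFastifyApplication } from '@nestjs/platform-fastify';
import compression from '@fastify/compress';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    new FastifyAdapter(),
  );
  // Register Fastify-native compression instead of the Express middleware
  await app.register(compression, { encodings: ['gzip', 'deflate'] });
  await app.listen(3000);
}
bootstrap();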

7- Async call

While our code has only one I/O operation, and it runs only once because the service is a singleton, using async in the controller still increases performance. Asynchronous functions allow the event loop to continue processing other tasks while waiting for the current task to complete. This non-blocking behavior enables the server to handle multiple requests concurrently, reducing idle time and improving overall throughput.

Let’s fix this


@Get()
async getUsers() {
  return this.userService.getUsers();
}

autocannon… 🚀

The load test results show that using async in the NestJS API significantly improves performance compared to the previous Fastify-only result. With async, the average latency drops by 17%, and the server handles 28% more requests per second. Additionally, data throughput increases by 26%. This means that async allows the server to process more requests simultaneously and more efficiently, leading to faster response times and better overall performance.
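On the service side, the same idea applies to the file read itself. Here is a sketch of a non-blocking read, assuming the service loads users from a JSON file on disk; the file path and caching field are illustrative.

import { Injectable } from '@nestjs/common';
import { promises as fs } from 'node:fs';

@Injectable()
export class UserService {
  private users: any[] | null = null;

  async getUsers() {
    if (!this.users) {
      // readFile does not block the event loop, unlike readFileSync
      const raw = await fs.readFile('users.json', 'utf8');
      this.users = JSON.parse(raw);
    }
    return this.users;
  }
}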

Other Enhancements

There are a few other enhancements worth mentioning.

Nestia is a set of helper libraries for NestJS that promises improved performance in serialization and validation. It might be worth trying to see if it makes a difference for your use case.

Worker threads (Node.js's worker_threads module) allow you to run tasks in parallel across multiple threads, but their effectiveness depends on having CPU-intensive logic; otherwise, you may not see a performance boost and could even experience slower performance.
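A minimal worker_threads sketch, assuming the compiled JavaScript file is run directly with Node; it only pays off for CPU-bound work like this deliberately expensive Fibonacci.

import { Worker, isMainThread, parentPort, workerData } from 'node:worker_threads';

if (isMainThread) {
  // Offload the CPU-heavy computation to a separate thread
  const worker = new Worker(__filename, { workerData: { n: 40 } });
  worker.on('message', (result) => console.log('fib =', result));
} else {
  const fib = (n: number): number => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort?.postMessage(fib(workerData.n));
}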

Additionally, testing HTTP/2.0 could be beneficial, especially for front-facing services serving the UI, due to its multiplexing capabilities which can enhance performance.

Final Thoughts

The performance enhancements in this article will not necessarily give you the same results in your application; many variables contribute to the gains, such as the hardware used and your code. However, they give you an indication of the impact of each performance practice.

Almost all the practices mentioned here can be done once, like switching to Fastify or changing your logging library. The only practice that you have to keep monitoring and optimizing is your code.

Improving application performance can lead to significant financial savings on your cloud bill. A faster, more efficient application reduces server load, the number of scaling instances, and network bandwidth usage.

While striving for excellence across velocity, performance, and adaptability is ideal, achieving a perfect balance is often challenging. Prioritizing velocity might lead to sacrifices in performance or adaptability. Conversely, focusing on performance could impact velocity or adaptability. Effective software development requires careful consideration of these trade-offs and strategic decisions to optimize the overall system's value.