My experience with Next.js has been that it's like working with Struts or Enterprise JavaBeans or something. It's a giant, batteries-included framework whose primary purpose is to lock you into the Vercel ecosystem. There are some bright spots - next-auth is decent - but all the SSR spaghetti with leaky abstractions dripping out (try rendering emails in the backend of a Next app) makes it really not worth it.
Also, compile times when working on it locally can get astronomical with relatively modest applications, for no obvious reason. Like Scala, but less predictable.
I suspect the percentage of apps built with Next.js that genuinely get a lot out of SSR is tiny; it's mostly the darling of B2B SaaS companies.
Did you have issues self-hosting? https://nextjs.org/docs/app/building-your-application/deploy...
I have also found that Next.js is shockingly slow.
I recently added some benchmarks to the TechEmpower Web Framework Benchmarks suite, and Next.js ranked near dead last, even for a simple JSON API endpoint (i.e. no React SSR involved): https://www.techempower.com/benchmarks/#section=data-r23&hw=...
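For reference, a JSON endpoint with no SSR involved is about as small as a Next.js handler gets - a minimal, illustrative sketch of an App Router route handler (the file path is made up, and this is not necessarily the exact benchmark implementation):

```js
// app/api/hello/route.js (illustrative path) — a minimal JSON endpoint
// with no React rendering involved, similar in spirit to the
// TechEmpower "JSON serialization" test.
export async function GET() {
  return Response.json({ message: 'Hello, World!' });
}
```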
I discussed it with a couple of Next.js maintainers (https://github.com/vercel/next.js/discussions/75930), and they indicated that it's only a problem for "standalone" deployments (i.e. not on Vercel). However, I'm not entirely convinced that is true. I wonder if there are major optimizations that could be made to, for example, the routing system.
I have no particular views on the performance itself, but the thing I keep hearing about Next - 'this is solved by using one particular hosting provider' - really surprises me as something people are OK with.
Kind of like how the AWS tools only really work if you use them with Amazon Web Services...
I think people reasonably expect, say, an AWS Lambda to be AWS-specific.
That's a very different story from React, which is supposed to be a library for general application UI development, yet the official React documentation recommends Next as the way to use it.
https://react.dev/learn/creating-a-react-app
I've been using React on and off since 2017 or so. Back in the day it was typical to bundle your React code and integrate it with the backend framework of your choice. It was stable and cheap. In the 2020s frontend tech took a bizarre turn...
Going through the GitHub discussion is eye-opening, given that the CEO of Vercel just publicly stated that Next.js is an API framework: https://x.com/rauchg/status/1895599156724711665
Is he wrong, though? Next.js's reason for existing is essentially that .json file navigation. The addition of server-side compute (i.e. middleware, the router, etc.) is mostly a business decision that has little to do with the framework itself, as it breaks your back-end/front-end separation (unless your application is simple enough to fit in a Next.js function).
> and they indicated that it's only a problem for "standalone" deployments (i.e. not on Vercel)
The fix is clearly to take out your wallet /not.
Next.js is slow. It doesn't have to be. With SWC, Turborepo, and everything else they have going on, they could just as well have made actual production usage fast.
Next has been a nightmare to use every step of the way. Endless undocumented ways to shoot yourself in the foot, especially if you’re trying to deploy somewhere other than Vercel. I can’t imagine anyone using it for anything other than the most basic CRUD app. And even then I’d recommend not using it.
If you use Next.js, you need to strictly limit yourself to their Link navigation and maybe their router. Optimally, you want to be able to generate static HTML at all times.
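For anyone unfamiliar, the Link navigation being referred to looks roughly like this (a minimal sketch; the routes are made up):

```jsx
// app/page.jsx (illustrative) — using next/link keeps navigation
// client-side and lets Next prefetch statically generated pages,
// instead of triggering a full document request per click.
import Link from 'next/link';

export default function Home() {
  return (
    <nav>
      <Link href="/about">About</Link>
      <Link href="/blog">Blog</Link>
    </nav>
  );
}
```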
> Next has been a nightmare to use every step of the way.
Next.js tech is interesting. Unfortunately, their business model relies on the integration being a nightmare, and thus on your complete dependence on their platform for deployment.
We switched to Sveltekit and are much happier. The output is also much faster and leaner.
I did expect SSR to be generally quite slow, but ~34 rps on the VPS is below my expectations. If I understand the blog post correctly, this tests an essentially static site with no DB requests. That would be pretty abysmal performance for something like that.
I'd be really concerned to have to work around a performance limitation like that in a more complex app.
We need a lot more details to make sense of this benchmark. I did a default Next.js install and ran an ApacheBench test against it, which gave me 2586 RPS on a Lenovo IdeaPad Slim 3 (a low-budget laptop). Are we benchmarking the cache implementation, the JSON parser, or maybe the proxy configuration?
Ditch next, build a static site, put it in your object storage of choice, cache/cdn in front… solved problem?
What are you, silly? There's no resume padding with a solution like that!!!
I don't know if I entirely buy the argument here. CSS/JS/images should at the very least be cached by your reverse proxy, if not served from a CDN (I'm a strong believer that you should never serve images/videos from your own VPS; even if you don't like Cloudflare, there are many CDN providers).
So at most it is the actual dynamic content being served, i.e. API calls.
Now, whether Next.js's bad deployment experience on your own hardware is to blame for the complexity of doing that easily is up for debate.
As much as I dislike the accidental complexity of React and its "frameworks" (react-router/Next), I can deploy a good-looking site with good accessibility very quickly using it, and every JS bootcamp dev won't be completely lost if they happen to need to work on the project.
Sometimes the technically best decision is decided by non-technical factors.
I am also surprised, and I suspect he's not using any reverse proxy... so is Node doing TLS, etc.?
I like this article and it directly relates to some things I'm working on at present, but I wish it had gone a bit deeper into server-side rendering.
In an earlier section, there was a statement about how there are something like 60 individual requests in a webpage load, so the "90% less" (1/10th speed) could actually be faster overall.
Also worth investigating is how many concurrent requests can exist. If it's a little slower, but a single server can handle 5x the number of concurrent requests because most of the interaction is busy-waiting for something else, that could be worthwhile.
Given how many job openings seem to be interested in Next.js and/or anything 100% JavaScript, it seems like some parts of the industry are pushing all JS all the time; but maybe that's not the right route to go, and that _also_ is interesting.
Just interesting things all around :)
I've never used Next.js. Wouldn't pre-rendering a site remove dependence on any framework? It should just be HTML, JS, and CSS, right?
I think pre-rendering here means running React on the server (so the React render method is called before the HTML hits the wire). Roughly the equivalent of Ruby on Rails rendering ERB, but with React instead.
They call the pregenerated static HTML "SSG", or "Static Site Generation".
That should be fast as hell: basically a CDN job.
But that is not pre-rendering. That is Server-Side Rendering (SSR).
NextJS has its own particular nomenclature:
https://nextjs.org/learn/pages-router/data-fetching-pre-rend...
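Roughly, in the Pages Router the label depends on which data-fetching function a page exports - a minimal sketch (file name made up):

```jsx
// pages/example.jsx (illustrative) — exporting getServerSideProps makes
// this page "SSR": React renders it on the Node server on every request.
export async function getServerSideProps() {
  return { props: { renderedAt: new Date().toISOString() } };
}

// Swap the export above for getStaticProps (a page exports only one of
// the two) and the same page becomes "SSG"/"pre-rendered": it is
// rendered once at build time and served as static HTML.
// export async function getStaticProps() {
//   return { props: { renderedAt: 'build time' } };
// }

export default function Example({ renderedAt }) {
  return <p>Rendered at {renderedAt}</p>;
}
```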
Ah this clears it up for me. No wonder OP’s results were the way they were. Next.js uses “fast” words to describe “slow” processes.
I migrated an app from Next.js to Django to make it fast. Django, because there was already a lot of Python code to interop with, but any of the common MVC frameworks in any language/runtime that let you cache etc. will do. They are pretty mature.
The big killer for me with Next.js was that optimising page speed was nuts. You always get a fat JS bundle whether the page is pre-rendered or not.
Normally, yes. But there are a couple of rendering modes with these frameworks. In this case, the rendering is most likely 'hybrid': some routes are statically pre-rendered, some are served via SSR. You'd need a JS server for the SSR, of course.
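In the App Router, that hybrid split is controlled per route - a rough sketch (the route paths are invented):

```jsx
// app/blog/page.jsx (illustrative) — force this route to be statically
// pre-rendered at build time:
export const dynamic = 'force-static';

export default function Blog() {
  return <main>Static, CDN-friendly content</main>;
}

// A sibling route such as app/dashboard/page.jsx could instead declare
//   export const dynamic = 'force-dynamic';
// and that route would be rendered by the Node server on every request.
```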
You can always just do HTML, JS, and CSS even without pre-rendering - just as a vanilla site.
This is the world of js frameworks. It's never _that_ simple because reasons.
Using Next.js for a blog/home website (not a PWA) is overengineering. Why not just put up a static website made in Publii, Hugo, or similar and sprinkle it with a bit of vanilla JS where appropriate? This is how it should be - progressive enhancement. Requesting hundreds of KB of JS code just to show static content is bad practice. KISS!
That is why requests per second has been a meaningless measurement for the past 20 years. A page view would be a better metric, whether that is 1 request or 100 requests.
That is especially true when comparing a site built with Rails + Hotwire against other stacks using React / client-side rendering.
Interesting. My hunch is that Next.js is not optimized for the dockerized Node server deployment. I would say that you could get much greater prerendering performance from Next.js by just fronting the assets directly using Caddy/Nginx.
Actually it is like any other Node server app, which is important if you're using dynamic routes. Of course there is a lot you can put behind client-side JS, but most people are interested in SSR for good reason. So while a separate web server can help in some scenarios, it sort of indicates Next.js might be more than you need.
Yeah but this just means that you can get better performance if someone else handles 70% of the requests.
Let me guess: significantly less than the same site as an SPA on a CDN. All with significantly more cost, complexity, and a whole additional domain of security concerns. The only upside is faster search engine indexing - about a week faster - if that even matters to you. The performance improvement is questionable. Debugging is very difficult. It's hard to imagine why you would want your team maintaining anything server-side for your web front end in 2025.
It's not that straightforward for SSR applications; however, yes, SSG is offered by a lot of frameworks, and Next's build times are horrendous in 2025. I'm glad to see the status quo changing.
Even aside from Next.js specifically - when did it become okay that visiting a website fires off dozens of HTTP requests? Back in the day the server would just render the site and return it in one request. It seems nowadays we make both the client and the server do a lot more work. And it's not like development has gotten easier; it's mostly gotten harder. So it's simply the worst of all worlds?
The server would almost always respond with at the very least an HTML page and a stylesheet, and even ages ago you would likely also make a request for JS.
Every image has always been a separate request, and video, which is chunked, is multiple requests.
Unless you are deploying an unstyled HTML page with no media, the server isn't serving 1 request even for old-school web pages.
For at least 15 years.
> One key thing to underline: each request in the test is just a single call to the pre-rendered Next.js page. No assets are requested. A real visitor would need 60+ additional requests. That means as few as three simultaneous visitors could push the server dangerously close to its limit.
OK, but remember that browsers will reuse HTTP connections, HTTP/2 introduces multiplexing, and browsers have tight mode. So you can't just take the figures for 1 request and divide by 60 to get the real-world performance.
Reusing HTTP connections is not relevant if the app is behind a reverse proxy like nginx, which, last time I checked, still didn't proxy to the app server over multiplexed HTTP/2 connections.
Depends where the bottleneck is. But basically, we're saying different server setups are also a factor and another reason why you need to do testing with representative loads :-)
Next.js is basically designed to run on servers that process one event at a time and in which there are many parallel deployments (i.e. 'serverless'). Still, it's shocking just how bad its performance actually is - virtually unusable for anything serious hosted off of Vercel or analogous providers.
I just don't understand how this can be this slow. What on earth is it doing to get 193 requests per second on static, cached content?
The article sadly doesn't dig too deep into this; it just accepts that it's slow and uses a different tool.
But seriously, this is responding to a request with the contents of a file. How can it be 100,000x slower than the naive solution? What can it possibly be doing, and why is it maxing out the CPU?
If no one else looks into this, I might get around to it.
I'm getting 2586 RPS with the default Next.js starter app on a low-budget laptop. Something isn't right here.
Maybe HN could periodically hide DDoSed sites on the front page.
I ran a script once that would show an archived copy for links that stopped working. Then I ended up hosting/stealing most of someone's website, which had moved to a different domain. The concept was nice, though.
I've been down the same path with Next.js. After seeing terrible performance when testing with k6, I pushed all static files (JS, CSS, images, etc.) to a CDN.
This way only the first request hits your actual server; the rest is handled for you.
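If anyone wants to replicate that, the relevant knob (for the bundled JS/CSS at least) is assetPrefix in next.config.js - a sketch, with a placeholder CDN domain; images may need their own CDN/loader setup:

```js
// next.config.js — serve the /_next/static/* bundles from a CDN instead
// of the Node server (cdn.example.com is a placeholder).
const isProd = process.env.NODE_ENV === 'production';

module.exports = {
  assetPrefix: isProd ? 'https://cdn.example.com' : undefined,
};
```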
No mention of https://nodejs.org/api/cluster.html#cluster? Not sure if Next.js even supports it, but if you are only using one core of your web server for Node, you are leaving performance on the table.
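For reference, the generic Node pattern is only a few lines - whether it composes cleanly with Next's standalone server is exactly the open question above:

```js
// cluster.js (generic Node sketch, not Next.js-specific) — fork one
// worker per CPU core so they share the listening socket.
const cluster = require('node:cluster');
const http = require('node:http');
const os = require('node:os');

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork();
  });
} else {
  // Each worker would start the real app server here; a plain HTTP
  // server stands in for it in this sketch.
  http.createServer((req, res) => res.end('ok')).listen(3000);
}
```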
I think these stats are for running with the built-in Next.js server?
For shits and giggles, you should try compiling to completely static files, then hosting them with nginx.
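That's roughly a one-line config change these days, assuming nothing on the site actually needs per-request rendering:

```js
// next.config.js — with this, `next build` emits a fully static site
// into ./out, which nginx (or any static file server / CDN) can serve
// directly. Only viable if no route depends on per-request SSR.
module.exports = {
  output: 'export',
};
```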
> Then again, I don't think <500 kB is too much traffic for a modern website, especially since the page is already usable after the first 100 kB.
I could not disagree more. Why does a "modern" website that just has simple static text and images need to be tens of times larger/slower to load than a simple static website with plain old HTML and CSS?
What kind of "developer experience" do you need for a static website? Just write HTML or markup and run it on a local server with hot reload -- what more do you want/need? Specifically what use cases is NextJS satisfying here?
When your site gets really large and complex, plain vanilla JS is hard to maintain, no?
Have you considered using Remix?