
I have been trying for hours to get my Node.js server to handle 2 requests in parallel, with no success.

Here is my full code:

var http = require('http');

http.createServer(function (req, res) {
  console.log("log 1");
  handleRequest().then(() => {
    console.log("request handled");
    res.write('Hello World!');
    res.end();
  });
  console.log("log 2");
}).listen(8080);

const handleRequest = () => {
  const p = new Promise((resolve, reject) => {
    setTimeout(() => resolve('hello'), 10000);
  });
  return p;
};

When I run this, I immediately open 2 tabs in my browser (Chrome) and watch the logs in the IDE. Here are the logs I'm getting:

log 1 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
request handled
log 1 Fri Mar 12 2021 23:27:49 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:49 GMT+0300 (GMT+03:00)
request handled

For individual requests, my "async" code seems to work as I expected: first the logs are printed, and after 10 seconds the request handling completes. But as you can see from the timestamps, even though I open the 2 tabs one right after the other (sending the two requests at practically the same time), they are not handled in parallel. Actually, I was hoping to get logs like this:

log 1 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 1 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
request handled
request handled

It seems my second request is not handled until the first one is completely done. What am I doing wrong here? Can you please give me some ideas?

  • Where does request handled come from in the logs? I don't see any code that includes that so we can't really say how it fits with the rest of the code. Commented Mar 12, 2021 at 20:46
  • Also, if you're sending the EXACT same request from the browser, it may serialize those requests in the interest of efficiency/possible caching. Make it a different URL for each request (just add a random query parameter) or use a client whose code you control and that won't do behind-the-scenes request management. Commented Mar 12, 2021 at 20:49
  • @jfriend00 Sorry. I've edited my code now. Commented Mar 12, 2021 at 20:53
  • @jfriend00 Omg! I've tried your advice and added query parameters to my URLs. Now it works exactly like I want. Thank you! Now I can sleep happily. Commented Mar 12, 2021 at 20:58

1 Answer


Some browsers will not send two identical GET requests to the same host at the same time (for caching/efficiency reasons: the browser waits for the response to the prior request to see if it's cacheable). So, if you're trying to bypass this browser sequencing, you can add a query string with some sort of random or always-different value in it, so that the GET requests do not have the exact same URL:

http://sample.com/somePath?r=1
http://sample.com/somePath?r=2
http://sample.com/somePath?r=3

To understand why this is, consider the classic example: imagine a web page that uses a small image for an expando glyph, with 100 uses of that image in the page. You do not want the browser making 100 requests to your server for that image.

Instead, you want it to make one request to your server for that image, wait for the response, and, if the headers look like they permit caching of that image, fetch the image from the cache for all 99 other occurrences. To make that work, the browser has to queue up identical requests for the same URL; when the first one comes back, it examines it for cacheability and then either uses the cached result or sends the next request.

So, for testing purposes, the way you bypass that browser optimization is to make sure each URL is unique and therefore the browser won't "hold" it in hopes of using a previously cached result.


FYI, you can implement this in a testing environment with either an incrementing counter:

const url = "http://sample.com/somepath";

// counter defined in a scope where it persists
// from one request to the next
let counter = 0;

fetch(`${url}?r=${++counter}`).then(...).catch(...);

Or with Math.random():

const url = "http://sample.com/somepath";

fetch(`${url}?r=${Math.random()}`).then(...).catch(...);

If the browser code is actually using the fetch() interface, then you can also use the {cache: "no-store"} option as in:

fetch(url, {cache: "no-store"}).then(...).catch(...); 

to tell the browser not to consider caching and this will also keep the browser from waiting for prior requests to the same URL to complete.


2 Comments

Yes, that's right. I had the same issue with browsers based on Chromium, but Firefox can handle multiple identical requests to the same host at the same time.
@Abdes - Interesting to know Firefox behaves differently. FYI (for other readers), Chrome will eventually send all the requests - it just may wait for prior responses to see if they are cacheable. Most of the time on the general web, identical requests are things like multiple references to the same image that will actually benefit from caching, so this is a tradeoff. And most of the time in general web programming, you aren't purposely sending the same request to the same host multiple times rapidly in a row.
