The latest version is out and solves quite a few problems!
This one is quite good, but even though it is the recommended one, there are still wrinkles here and there.
- The good: it loads the test HTML page completely and completes more than a dozen GETs, down to loading the favicon.ico at the end, all over a single socket. And it keeps the favicon in cache, so that part of the protocol is OK.
- The bad: it sends maybe 3 parallel requests after the first successful one. It also seems to limit the refresh rate for PING and the like (to not more than 1 or 2 per second?).
- The offense: I can't understand WHY it sends null-sized requests (which are, so far, welcomed with a close) some time (a minute or less?) after the page has finished loading. Is it a method to "keep alive"? Fortunately it doesn't interfere with an open/working socket.
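The close that welcomes those null-sized requests costs almost nothing on the server side. A minimal sketch (not the actual HTTaP code, and `handle_connection` is a hypothetical name) of how a serve loop can drop an empty connection gracefully: `recv()` returning 0 means the peer has already shut its side down, so there is nothing to answer.

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if a request was received, 0 if the connection was
 * empty (or already closed by the peer) and has been dropped. */
int handle_connection(int client_fd)
{
    char buf[1024];
    ssize_t n = recv(client_fd, buf, sizeof(buf) - 1, 0);
    if (n <= 0) {          /* empty probe, peer close, or error */
        close(client_fd);  /* the "close" the browser is welcomed with */
        return 0;
    }
    buf[n] = '\0';
    /* ... parse the request line and answer as usual ... */
    return 1;
}
```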
Conclusion: apparently, Firefox is smart enough to see that if one connection/socket is slammed in its face, the pending requests can/should go to the other working socket(s). Simple HTTaP /? requests seem to work rather smoothly.
So at least there is something that works, even though some behaviours need more investigation.
It can do simple things right, but it tries so hard to optimise things that it sometimes feels like a fanatic or a lunatic. Maybe it is really too focused on working with the websites of the Alphabet Group.
Let's start with an easy case: GET /?PING is fine, with the little detail that, apparently, Cache-Control: max-age=200 is not understood. So WHAT does Chrome require to keep that data in cache? Anyway, at least the function is performed (though at least one parallel connection is opened and slammed), but wait for the rest.
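I don't know yet which header Chrome actually wants, but one guess (unverified) is that it expects explicit freshness information next to the max-age: a Date header and a validator such as an ETag. A sketch of what the PING response could look like, with purely hypothetical values:

```http
HTTP/1.1 200 OK
Date: Sat, 01 Jan 2022 00:00:00 GMT
Cache-Control: max-age=200
Content-Type: text/plain
Content-Length: 4
ETag: "ping-1"

PONG
```

The Date, Content-Length and ETag lines are the additions to try; the Cache-Control line is the one already sent.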
Let's try to load a web page with about a dozen external links.
10 parallel sockets are opened and slammed, 5 resources (including index.html) are loaded in a seemingly random order, probably because of the extra sockets that carry the requests. The main socket closes on timeout without ever receiving the missing ten requests. What the... ???
Setting the HTTAP_TCP_BACKLOG #define to 0 or 1 makes the situation even worse: the page loads slooooowly and incompletely... Chrome won't take the hint!
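For context, HTTAP_TCP_BACKLOG ends up as the backlog argument of listen(), i.e. how many not-yet-accepted connections the kernel will queue. With 0 or 1, most of Chrome's ~10 parallel SYNs get refused or retried, which matches the slow, incomplete loads observed above. A sketch of the socket setup (the function name `open_server_socket` is hypothetical, not the actual HTTaP code):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef HTTAP_TCP_BACKLOG
#define HTTAP_TCP_BACKLOG 1   /* the value being experimented with */
#endif

/* Open a listening TCP socket on localhost:port; returns -1 on error. */
int open_server_socket(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0
     || listen(fd, HTTAP_TCP_BACKLOG) < 0) {   /* backlog applied here */
        close(fd);
        return -1;
    }
    return fd;
}
```

Note that the kernel treats the backlog as a hint: Linux, for instance, rounds small values up, which may explain why 0 and 1 behave alike.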
There is still the possibility of limiting the number of concurrent open sockets on the browser side, but it requires a user manipulation of the browser and it's too specific and intrusive. Meanwhile, HTTP/1.1 doesn't seem to specify any header to explicitly limit that number: RFC 2616 merely suggested that clients SHOULD NOT keep more than 2 connections per server, and RFC 7230 dropped even that recommendation.
And then there is the question: WHY does Firefox take the hint while Chrome won't? I suspect they use different queuing algorithms, but this does not help me so far.
Remediation: this problem seems to affect web pages, while the HTTaP part doesn't seem to be affected.
- Keep pages small, with few external elements.
- Inline elements (using base64-encoded "data:" URIs).
- Use JS to serialise and download the extra resources.
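The inlining remediation only needs a base64 encoder to turn a resource into a data: URI (e.g. `<img src="data:image/png;base64,...">`), which saves one GET per element. A self-contained sketch in plain RFC 4648 base64, no external library (the function name is mine, not part of HTTaP):

```c
#include <stdlib.h>

/* Returns a freshly malloc'ed, NUL-terminated base64 string,
 * or NULL on allocation failure. */
char *base64_encode(const unsigned char *src, size_t len)
{
    static const char tab[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    char *out = malloc(4 * ((len + 2) / 3) + 1);
    if (!out)
        return NULL;
    size_t i, o = 0;
    for (i = 0; i + 2 < len; i += 3) {     /* full 3-byte groups */
        unsigned v = (src[i] << 16) | (src[i + 1] << 8) | src[i + 2];
        out[o++] = tab[(v >> 18) & 63];
        out[o++] = tab[(v >> 12) & 63];
        out[o++] = tab[(v >> 6) & 63];
        out[o++] = tab[v & 63];
    }
    if (i < len) {                         /* 1 or 2 leftover bytes + '=' */
        unsigned v = src[i] << 16;
        if (i + 1 < len)
            v |= src[i + 1] << 8;
        out[o++] = tab[(v >> 18) & 63];
        out[o++] = tab[(v >> 12) & 63];
        out[o++] = (i + 1 < len) ? tab[(v >> 6) & 63] : '=';
        out[o++] = '=';
    }
    out[o] = '\0';
    return out;
}
```

For one-off conversions, the coreutils `base64` command produces the same output, but having it in C means the server (or a build step) can inline resources automatically.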
I hope that I can solve this problem soon anyway...
I just found out why FF sends empty packets!!!
It happens when you hover the cursor over a local link...
Firefox speculates that you will click on it and prepares the connection !