tcp server load test #7
Hey, thanks for the feedback. No, there's no limit on the number of coroutines. If you're throwing 500 concurrent connections at it, then you need to increase your listen backlog:

server:listen(1024) -- default is 128

I don't think that was the problem though; ab would just get connection refused and it would have to back off. So if you try calling client:shutdown() after client:write(), I think you'll get better behaviour from ab. close is a bit harsh, so it's probably getting incomplete responses. I've also seen these 20 second pauses (although node.js was much worse under the same workload), which might have to do with file-descriptor reuse and linger on them.

Finally, you can do ab -k for keepalive and try this guy: https://gist.github.com/3857202 and run with:

ab -k -r -n 10000 -c 500 http://127.0.0.1:8080/

I'm getting on the order of 16k req/sec with very few I/O errors.
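A minimal sketch of the write-then-shutdown pattern being suggested, using the method names from this thread (client:write, client:shutdown, client:close); the server setup and accept loop around it are elided:

```lua
-- Per-connection handler: shutdown() flushes pending writes and sends
-- a clean FIN before the handle is closed, so ab sees the complete
-- response instead of a truncated one (close() alone is harsher).
local function respond(client, http_response)
  client:write(http_response)
  client:shutdown()
  client:close()
end
```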
Seems I have the same issue:

Benchmarking 127.0.0.1 (be patient)

The server printed lots of: … I have modified the line to: … and then got:

```
accept: userdata<luv.net.tcp>: 0x945124c
ApacheBench/2.3
got zero, EOF? GET / HTTP/1.0 Accept: */*
got zero, EOF? GET / HTTP/1.0
```

So clearly there is some buffering issue: request data gets mixed up.
I've pushed a change to git head which should fix it. I wasn't pushing the length of the read buffer if there was pending data.
Try git head now.
Much better now: I get all the request data. But when it runs successfully, I get this at exit: … And most of the time it just hangs as in the original report, until ab times out.
Repeated the test with luajit instead of lua (and modified the print line), got:

```
got zero, EOF? GET / HTTP/1.0 �� � �� �
got zero, EOF? GET / HTTP/1.0 �� �
```

(In case it is not visible: the data has some binary bytes appended, which my terminal interprets as unicode.) After another modification: …
Ah, okay, buf.len is the size of the buffer and not how much is read. Try now.
This error comes when you try to read after an error has occurred. Try doing:

if client:write(http_response) == -1 then

I'll have to fix it so that it doesn't crash like that. Thanks again for the report.
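Completing that fragment one plausible way, inside the connection handler; the close-and-return handling is a guess at the intended usage, not verbatim from the thread:

```lua
-- If write() reports an error (-1), the peer has gone away; don't
-- call read() again afterwards -- that is what triggers the crash
-- described above.
if client:write(http_response) == -1 then
  client:close()
  return
end
```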
Seems to be behaving much better, thank you.
The buffer size issue is now resolved: when using ab everything works great! I do still get the uv__read assertion, and I do use client:close(), as in your httpd-k.lua example. The reason may be that ab just drops the connection, but still, the server should not crash like this. I get this also for ab -k -n 1 -c 1 http://127.0.0.1:8080/
dvv: I have put my version of httpd-k.lua with timers in https://gist.github.com/3857202#comments
Hi miko, can you try git head now? I've made stream:close() yield and made a couple of other changes which hopefully should fix it. Thanks for all your support.
I've been throwing ab at this thing all morning and it seems that it's a bit pathological. It'll pump in the same headers repeatedly even without keepalive, so I'm seeing some 20k buffers coming in in a single libuv read. Basically, libuv will keep happily reading from the socket as long as ab is writing, and ab doesn't stop writing as long as libuv is reading, so you get these big chunks occasionally. What I've done now is give stream:read a parameter which specifies the buffer size to allocate. So stream:read(1024) stops libuv reading past that size and fires the callback to rouse the coroutine.

Another issue is that I'm getting occasional 20 second stalls, and I think it's related to this discussion: … Unfortunately there's no uv_tcp_linger implementation (just the keepalive probes) :(

Another problem is reading EOF. Sometimes, immediately after a socket is accepted, reading from it gets an EOF from libuv. I think the only sane thing to do is to propagate this back to the caller and wake the coroutine, but if you close the socket, ab will barf with EPIPE or ECONNRESET, which sucks.

To summarize, I'm finding that libuv + ab aren't really happy with each other. Perhaps it's just my code. I'll keep at it, since you guys seem hell-bent on building an HTTP daemon out of this :)
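A sketch of a read loop using the new size argument on a connection handle; I'm assuming read() hands back the chunk, or nothing at EOF/error, and handle() stands in for the application's request handler:

```lua
-- Cap each read at 1024 bytes: libuv stops reading past that size and
-- fires the callback to rouse the coroutine with what it has so far,
-- instead of handing over an arbitrarily large buffer.
while true do
  local chunk = client:read(1024)
  if not chunk then break end   -- EOF or error: stop reading
  handle(chunk)                 -- hypothetical request handler
end
client:close()
```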
I switched to https://github.com/wg/wrk and things are going well so far. ab is too slow and dumb to test luv :)
Awesome, thanks for the tip!
That's what I got on my slow setup:

```
$ wrk -t8 -c2048 -r1m http://localhost:8080/
Making 1000000 requests to http://localhost:8080/
  8 threads and 2048 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     214.97ms   2.25s    1.88m   98.43%
    Req/Sec       0.07      8.53    1.00k   99.99%
  1000035 requests in 4.85m, 60.08MB read
  Socket errors: connect 0, read 2406, write 0, timeout 159084
Requests/sec:   3438.60
Transfer/sec:    211.55KB
```
Thanks, that fixed it for me! And yes, an HTTP daemon built into an application is nice ;)
I believe we just need an http-parser binding and the Lua-level HTTP request/response logic. I wonder if @creationix's luvit/web collection would fit.
I want luvit/web to be as portable as possible, but I don't think I can get away from having a defined spec for readable and writable streams, as well as a data format (currently Lua strings). We can probably add support for multiple data formats (Lua strings, ffi cdata, and lev cdata buffers). My stream interface is very simple, but it is callback based; I'm not sure how that fits into this project.

Readable stream: any table that has a :read()(callback(err, chunk)) method. read is a method that returns a function that accepts a callback, and the callback gets err and chunk.

Writable stream: any table that has a :write(chunk)(callback(err)) method. write is a method that accepts the chunk and returns a function that accepts a callback, and the callback gets err.

Since the continuable is returned from the methods, it should be easy to write wrappers for other systems. I have coro sugar in my continuable library where you can do things like the sketch below.
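To make the spec concrete, here is a toy in-memory implementation of that interface; everything except the read()(callback(err, chunk)) and write(chunk)(callback(err)) shapes is illustrative:

```lua
-- A minimal stream satisfying the interface: read() and write(chunk)
-- each return a continuable, i.e. a function that accepts a callback,
-- rather than taking the callback directly.
local function make_stream(chunks)
  local stream, i = {}, 0
  function stream:read()
    return function(callback)
      i = i + 1
      callback(nil, chunks[i])     -- chunk == nil signals EOF
    end
  end
  function stream:write(chunk)
    return function(callback)
      chunks[#chunks + 1] = chunk  -- stand-in for a real sink
      callback(nil)                -- no error
    end
  end
  return stream
end

-- Callback-style usage:
local s = make_stream({ "hello", "world" })
s:read()(function(err, chunk) print(err, chunk) end)  --> nil  hello
```

With the coro sugar mentioned, the call site could collapse to something like local chunk = await(s:read()), where a hypothetical await helper yields the current coroutine until the callback fires.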
For data encoding, we could add encoding arguments to both :read(encoding) and :write(chunk, encoding) to allow supporting multiple data types. :write could probably even auto-detect the type and convert for you. The harder issue for "luv" is that I use continuables (functions that return functions that accept callbacks).
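Call sites on a stream implementing the interface above might then look like this; the encoding names and the auto-detection are hypothetical, sketching the proposal:

```lua
-- Ask for the chunk as ffi cdata instead of a Lua string:
stream:read("cdata")(function(err, buf)
  -- buf would arrive as ffi cdata here
end)

-- Declare the chunk's encoding explicitly (write could also auto-detect it):
local chunk = "hello"
stream:write(chunk, "string")(function(err)
  assert(not err)
end)
```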
Yeah, I think we'd need to start with https://github.com/joyent/http-parser.git and work our way up from there, as you did.
I wonder how to write a wrapper for callback-style logic to be used in this project's fibers?
Just FYI, I explored common logic for http_parser in C at https://github.com/dvv/luv/blob/master/src/uhttp.c#L270 a while ago -- got stuck on C memory management. But if we want a generic HTTP parser which reports data/end events, I believe most of its callbacks could be hardcoded in C.
You could do something like:

my.wrapper.read = function() …
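Completing that fragment one plausible way: suspend the current coroutine inside read() and resume it from the callback. The my table, underlying_read, and the err/chunk convention are all hypothetical here:

```lua
-- Blocking-style wrapper over a callback-style read: the caller's
-- coroutine yields until the callback resumes it with the result.
my.wrapper.read = function()
  local co = coroutine.running()
  underlying_read(function(err, chunk)   -- hypothetical callback API
    assert(coroutine.resume(co, err, chunk))
  end)
  local err, chunk = coroutine.yield()
  if err then error(err) end
  return chunk
end

-- Inside a fiber/coroutine the call then reads sequentially:
--   local chunk = my.wrapper.read()
```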
I'd start at the top. I think the API should look something like this:

local req = httpd:recv()

where req is a RACK/PSGI [1] like environment table:

req = {
  …
}

The body of the request (if any) is read, by the application, from the … A response can be either streamed out via:

httpd:send({ 200, { ["Content-Type"] = "text/html", … }, … })

[1] http://search.cpan.org/~miyagawa/PSGI-1.101/PSGI.pod#The_Environment
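Fleshing that out for illustration only: the field names below are borrowed from the PSGI/Rack conventions linked above, not from luv, and the body-as-third-element shape in send() is a guess:

```lua
-- Hypothetical PSGI-flavored environment and response:
local req = httpd:recv()
-- req might look like:
--   {
--     method       = "GET",
--     uri          = "/",
--     query_string = "",
--     headers      = { ["host"] = "127.0.0.1:8080" },
--     input        = stream,  -- where the application reads the body
--   }

httpd:send({
  200,                                 -- status
  { ["Content-Type"] = "text/html" },  -- headers
  { "<h1>Hello</h1>" },                -- body chunks (assumed convention)
})
```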
That looks interesting, I'd love to try it.
After updating to the latest head I no longer get segfaults on Arch Linux. Thanks! I think this issue can be closed now. Regarding HTTP parsing, I suggest opening a new issue (feature request), as this one is getting hard to follow.
Indeed. #10
Hi!
I've drafted a simple hello-world HTTP server to test luv under load via:

ab -n100000 -c500 ...

The result is that the server stopped responding after circa 50000 requests. What can be wrong?

I wonder, do we have any explicit or implicit limit on the number of concurrent coroutines?
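For reference, a sketch of the kind of server being tested, pieced together from the method names used earlier in this thread (server:listen, accept into a luv.net.tcp handle, client:read/write/shutdown/close); the fiber-creation and scheduling calls (luv.fiber.create, :ready(), :join()) are an assumption about the API, not verbatim from the gist:

```lua
local luv = require("luv")

local main = luv.fiber.create(function()
  local server = luv.net.tcp()
  server:bind("127.0.0.1", 8080)
  server:listen(1024)             -- raised backlog, per the discussion above
  while true do
    local client = luv.net.tcp()
    server:accept(client)
    -- one fiber per connection; nothing here limits how many run at once
    local child = luv.fiber.create(function()
      client:read(1024)           -- drain the request (size-capped read)
      client:write("HTTP/1.0 200 OK\r\nContent-Length: 6\r\n\r\nhello\n")
      client:shutdown()
      client:close()
    end)
    child:ready()
  end
end)

main:ready()
main:join()
```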