    tom@Rias:/tmp$ wrk -t1 -c256 --latency http://127.0.0.1:8081/status
    Running 10s test @ http://127.0.0.1:8081/status
      1 threads and 256 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.73ms    4.22ms  228.72ms   99.90%
        Req/Sec    57.28k     1.06k   59.15k    76.00%
      Latency Distribution
         50%    1.68ms
         75%    1.75ms
         90%    1.78ms
         99%    1.86ms
      569899 requests in 10.04s, 46.20MB read
      Socket errors: connect 0, read 0, write 0, timeout 12
    Requests/sec:  56772.06
    Transfer/sec:      4.60MB
For the last couple of days I've been experimenting with PHP-FFI and tried building a simple web server with io_uring (or, more specifically, liburing-ffi). The results are pretty cool. The ergonomics are okay for something that is primarily a socket server and can be extended with a simple HttpServer "overlay".
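For the curious, the FFI side boils down to something like the sketch below. Treat it as a hedged, untested illustration: the cdef only declares the handful of calls needed to accept a connection, struct io_uring is handled as an opaque byte buffer, the soname 'liburing-ffi.so.2' may differ on your system, and $listenFd is a placeholder for the integer file descriptor of an already-listening socket.

    <?php
    // Hedged sketch: bind a few liburing calls through PHP-FFI.
    // liburing-ffi is the liburing build that exports the normally-inline
    // helpers as real symbols, which is what makes FFI bindings possible.
    // The io_uring_cqe fields below mirror __u64 user_data / __s32 res / __u32 flags.
    $uring = FFI::cdef(<<<'CDEF'
        typedef struct io_uring_sqe io_uring_sqe;
        typedef struct io_uring_cqe {
            unsigned long long user_data;
            int res;
            unsigned flags;
        } io_uring_cqe;

        int  io_uring_queue_init(unsigned entries, void *ring, unsigned flags);
        io_uring_sqe *io_uring_get_sqe(void *ring);
        void io_uring_prep_accept(io_uring_sqe *sqe, int fd, void *addr, void *addrlen, int flags);
        int  io_uring_submit(void *ring);
        int  io_uring_peek_cqe(void *ring, io_uring_cqe **cqe_ptr);
        void io_uring_cqe_seen(void *ring, io_uring_cqe *cqe);
        void io_uring_queue_exit(void *ring);
    CDEF, 'liburing-ffi.so.2'); // soname may differ per distro / liburing version

    // struct io_uring is treated as an opaque blob here; over-allocate to be safe.
    $ring    = FFI::new('char[512]');
    $ringPtr = FFI::addr($ring[0]);
    $uring->io_uring_queue_init(256, $ringPtr, 0);

    // $listenFd: integer fd of a listening TCP socket (placeholder, setup not shown).
    $sqe = $uring->io_uring_get_sqe($ringPtr);
    $uring->io_uring_prep_accept($sqe, $listenFd, null, null, 0);
    $uring->io_uring_submit($ringPtr);

Reads and writes follow the same pattern: grab an SQE, prep it, tag it with user_data so the completion can be mapped back to its connection, and submit.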
The whole thing is a single-threaded double event loop right now (double because both io_uring and Revolt are in use). Using Revolt in the PHP code made interacting with io_uring much easier, although integrating the two is tricky (there are no streams to wait on, for example). Using callbacks (accept/read/write), or abstracting the socket logic away entirely behind a simple on(path, closure) API, would probably improve performance a bit more, but it would also be less ergonomic and/or flexible. From what I could gather online (I couldn't get NodeJS to run on WSL2 because of incompatibilities between the shipped NodeJS version and the NPM packages), a multithreaded NodeJS server gets around 2 million requests on a 32-thread CPU in the scenario above, so I'd say the performance is not too shabby anyway.
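To make the double-event-loop idea more concrete, here is a rough sketch of how the two loops can be glued together. Again, this is untested and builds on the sketch above: $uring and $ringPtr come from there, and handleCompletion() is a hypothetical dispatcher that maps an SQE's user_data back to its connection.

    <?php
    use Revolt\EventLoop;

    // Drain io_uring's completion queue from inside Revolt's loop:
    // Revolt drives timers/fibers, io_uring drives the actual socket I/O.
    EventLoop::repeat(0.0, function () use ($uring, $ringPtr) {
        $cqePtr = $uring->new('io_uring_cqe*');
        while ($uring->io_uring_peek_cqe($ringPtr, FFI::addr($cqePtr)) === 0) {
            $cqe = $cqePtr[0];                             // dereference to read fields
            handleCompletion($cqe->user_data, $cqe->res);  // hypothetical dispatcher
            $uring->io_uring_cqe_seen($ringPtr, $cqePtr);  // mark the CQE as consumed
        }
    });

    EventLoop::run();

Busy-polling like this keeps one core spinning, which is part of why the integration feels awkward; something like io_uring_register_eventfd() would give you an actual fd whose readability signals completions, but wiring a raw eventfd into Revolt is its own adventure.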
Next steps are integrating newer kernel features (WSL2 is currently on kernel 5.15, and most of the recent io_uring improvements landed in 5.19) and multithreading, not to mention code cleanup.
submitted by /u/RiafRuby