It looks like there are three separate optimizations, but I think the most important one is the "enable_listen_spawn" feature. Here is how they describe it:
Fastsocket creates one local listen socket table for each CPU core. With this feature, an application process can choose to handle new connections arriving on a specific CPU core. This is done by copying the original listen socket and inserting the copy into that core's local listen socket table. When a new connection arrives on a CPU core, the kernel tries to match a listen socket in that core's local listen table and, if there is a match, inserts the connection into the accept queue of the local listen socket. Later, the process can accept the connection from the local listen socket exclusively. This way each network softirq has its own local socket on which to queue new connections, and each process has its own local listen socket from which to pull new connections. When the process is bound to the specified CPU core, connections delivered to that core by the NIC are processed entirely on the same core through all stages, including hardirq, softirq, syscall and user process. As a result, connections are processed without contention across CPU cores, which achieves passive connection locality.
The kernel in its normal configuration will try to spread IRQs evenly across CPUs. So for their use case, where you have one worker per CPU handling huge numbers of short-lived TCP connections, this eliminates a bunch of locking and cache thrashing that would otherwise happen when accepting new connections and dispatching the related epoll events inside the kernel.
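To make the per-core model concrete, here is a minimal user-space sketch of the same idea. It does not use Fastsocket's kernel changes at all; instead it approximates them with the mainline SO_REUSEPORT socket option (Linux 3.9+) plus explicit CPU pinning, and the names (worker, the port number) are purely illustrative.

    /* Hypothetical sketch of the per-core accept pattern described above,
     * using mainline SO_REUSEPORT + CPU pinning rather than Fastsocket's
     * kernel-side per-CPU listen socket tables. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void worker(int cpu, unsigned short port)
    {
        /* Pin this worker to one core so softirq, syscall and user-space
         * processing for its connections can stay on that core. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set) < 0) {
            perror("sched_setaffinity");
            exit(1);
        }

        /* Each worker gets its own listen socket on the same port; with
         * SO_REUSEPORT the kernel keeps a separate accept queue per socket,
         * which is the rough mainline analogue of Fastsocket's per-CPU
         * local listen sockets. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 128) < 0) {
            perror("bind/listen");
            exit(1);
        }

        for (;;) {
            /* No accept queue shared with other workers to contend on. */
            int conn = accept(fd, NULL, NULL);
            if (conn < 0)
                continue;
            /* ... handle the connection, then ... */
            close(conn);
        }
    }

    int main(void)
    {
        int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        for (int cpu = 0; cpu < ncpus; cpu++) {
            if (fork() == 0) {
                worker(cpu, 8080);  /* port 8080 chosen arbitrarily */
                _exit(0);
            }
        }
        for (;;)
            pause();  /* parent just waits for the workers */
        return 0;
    }

Note that plain SO_REUSEPORT only hashes incoming connections across the listening sockets; unlike Fastsocket's per-CPU listen table, it does not by itself guarantee that a connection is accepted on the same core that handled its softirq (later kernels added SO_ATTACH_REUSEPORT_CBPF/EBPF for that kind of steering).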