From mboxrd@z Thu Jan  1 00:00:00 1970
From: Eliezer Tamir
Subject: [PATCH v8 net-next 5/7] net: simple poll/select low latency socket poll
Date: Mon, 03 Jun 2013 11:02:00 +0300
Message-ID: <20130603080200.18273.52073.stgit@ladj378.jer.intel.com>
References: <20130603080107.18273.34279.stgit@ladj378.jer.intel.com>
In-Reply-To: <20130603080107.18273.34279.stgit@ladj378.jer.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: David Miller
Cc: Willem de Bruijn, Or Gerlitz, e1000-devel@lists.sourceforge.net,
 netdev@vger.kernel.org, HPA, linux-kernel@vger.kernel.org, Alex Rosenbaum,
 Jesse Brandeburg, Eliezer Tamir, Andi Kleen, Ben Hutchings, Eric Dumazet,
 Eilon Greenstien
Errors-To: e1000-devel-bounces@lists.sourceforge.net
List-Id: netdev.vger.kernel.org

A very naive select/poll busy-poll support.

Add busy-polling to sock_poll().
When poll/select have nothing to report, call the low-level
sock_poll() again until we are out of time or we find something.

Right now we poll every socket once; this is suboptimal, but it
improves latency when the number of sockets polled is not large.

Signed-off-by: Alexander Duyck
Signed-off-by: Jesse Brandeburg
Tested-by: Willem de Bruijn
Signed-off-by: Eliezer Tamir
---
 fs/select.c  |    7 +++++++
 net/socket.c |   10 +++++++++-
 2 files changed, 16 insertions(+), 1 deletions(-)

diff --git a/fs/select.c b/fs/select.c
index 8c1c96c..f116bf0 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -27,6 +27,7 @@
 #include <linux/rcupdate.h>
 #include <linux/hrtimer.h>
 #include <linux/sched/rt.h>
+#include <net/ll_poll.h>
 
 #include <asm/uaccess.h>
 
@@ -400,6 +401,7 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
 	poll_table *wait;
 	int retval, i, timed_out = 0;
 	unsigned long slack = 0;
+	cycles_t ll_time = ll_end_time();
 
 	rcu_read_lock();
 	retval = max_select_fd(n, fds);
@@ -486,6 +488,8 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
 			break;
 		}
 
+		if (can_poll_ll(ll_time))
+			continue;
 		/*
 		 * If this is the first loop and we have a timeout
 		 * given, then we convert to ktime_t and set the to
@@ -750,6 +754,7 @@ static int do_poll(unsigned int nfds, struct poll_list *list,
 	ktime_t expire, *to = NULL;
 	int timed_out = 0, count = 0;
 	unsigned long slack = 0;
+	cycles_t ll_time = ll_end_time();
 
 	/* Optimise the no-wait case */
 	if (end_time && !end_time->tv_sec && !end_time->tv_nsec) {
@@ -795,6 +800,8 @@ static int do_poll(unsigned int nfds, struct poll_list *list,
 		if (count || timed_out)
 			break;
 
+		if (can_poll_ll(ll_time))
+			continue;
 		/*
 		 * If this is the first loop and we have a timeout
 		 * given, then we convert to ktime_t and set the to
diff --git a/net/socket.c b/net/socket.c
index 721f4e7..02d0e15 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -1148,13 +1148,21 @@ EXPORT_SYMBOL(sock_create_lite);
 
 /* No kernel lock held - perfect */
 static unsigned int sock_poll(struct file *file, poll_table *wait)
 {
+	unsigned int poll_result;
 	struct socket *sock;
 
 	/*
 	 *	We can't return errors to poll, so it's either yes or no.
 	 */
 	sock = file->private_data;
-	return sock->ops->poll(file, sock, wait);
+
+	poll_result = sock->ops->poll(file, sock, wait);
+
+	if (!(poll_result & (POLLRDNORM | POLLERR | POLLRDHUP | POLLHUP)) &&
+	    sk_valid_ll(sock->sk) && sk_poll_ll(sock->sk, 1))
+		poll_result = sock->ops->poll(file, sock, NULL);
+
+	return poll_result;
 }
 
 static int sock_mmap(struct file *file, struct vm_area_struct *vma)
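
Not part of the patch, just for context: a minimal userspace sketch of the kind of
receive loop this change is meant to speed up. It assumes the busy-poll sysctl added
earlier in this series is enabled (in this version that should be
/proc/sys/net/core/low_latency_poll, in microseconds; the knob comes from the earlier
patches, not from this one), and the port number and buffer size below are arbitrary.
With busy polling enabled, the select() call ends up busy-polling the socket through
sock_poll() instead of sleeping right away.

/* Illustration only -- not part of this patch.
 * A nonblocking UDP receiver driven by select(). With busy polling
 * enabled, do_select() keeps calling sock_poll() on this socket until
 * data shows up or the busy-poll time budget runs out, and only then
 * falls back to the usual sleeping path.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
	struct sockaddr_in addr;
	char buf[2048];
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(12345);	/* arbitrary port for the example */

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}
	fcntl(fd, F_SETFL, O_NONBLOCK);

	for (;;) {
		fd_set rfds;
		struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
		int ret;

		FD_ZERO(&rfds);
		FD_SET(fd, &rfds);

		/* with the series applied, this call busy-polls fd before sleeping */
		ret = select(fd + 1, &rfds, NULL, NULL, &tv);
		if (ret > 0 && FD_ISSET(fd, &rfds)) {
			ssize_t len = recv(fd, buf, sizeof(buf), 0);

			if (len > 0)
				printf("got %zd bytes\n", len);
		}
	}
	return 0;
}

Build with gcc, send UDP traffic at the receiver, and compare wakeup latency with the
sysctl set to 0 and to a nonzero value (say, a few tens of microseconds).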