From mboxrd@z Thu Jan 1 00:00:00 1970
From: Roman Penyaev
Subject: Re: [PATCH 05/15] Add io_uring IO interface
Date: Fri, 18 Jan 2019 09:23:53 +0100
Message-ID: <3189352fb5866022d782811a65022a50@suse.de>
References: <20190116175003.17880-1-axboe@kernel.dk>
 <20190116175003.17880-6-axboe@kernel.dk>
 <718b4d1fbe9f97592d6d7b76d7a4537d@suse.de>
 <02568485-cd10-182d-98e3-619077cf9bdc@kernel.dk>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII; format=flowed
Content-Transfer-Encoding: 7bit
Return-path:
In-Reply-To:
Sender: owner-linux-aio@kvack.org
To: Jeff Moyer
Cc: Jens Axboe, linux-fsdevel@vger.kernel.org, linux-aio@kvack.org,
 linux-block@vger.kernel.org, linux-arch@vger.kernel.org, hch@lst.de,
 avi@scylladb.com, linux-block-owner@vger.kernel.org
List-Id: linux-arch.vger.kernel.org

On 2019-01-17 21:50, Jeff Moyer wrote:
> Jens Axboe writes:
>
>> On 1/17/19 1:09 PM, Jens Axboe wrote:
>>> On 1/17/19 1:03 PM, Jeff Moyer wrote:
>>>> Jens Axboe writes:
>>>>
>>>>> On 1/17/19 5:48 AM, Roman Penyaev wrote:
>>>>>> On 2019-01-16 18:49, Jens Axboe wrote:
>>>>>>
>>>>>> [...]
>>>>>>
>>>>>>> +static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
>>>>>>> +				  struct io_uring_params *p)
>>>>>>> +{
>>>>>>> +	struct io_sq_ring *sq_ring;
>>>>>>> +	struct io_cq_ring *cq_ring;
>>>>>>> +	size_t size;
>>>>>>> +	int ret;
>>>>>>> +
>>>>>>> +	sq_ring = io_mem_alloc(struct_size(sq_ring, array, p->sq_entries));
>>>>>>
>>>>>> It seems that sq_entries and cq_entries are not limited at all. Can a
>>>>>> nasty app consume a lot of kernel pages by calling io_setup_uring()
>>>>>> in a loop, passing a random entries number? (Or even better: a
>>>>>> decreasing entries number, in order to consume all page orders with
>>>>>> a minimal number of loops.)
>>>>>
>>>>> Yes, that's an oversight, we should have a limit in place. I'll add
>>>>> that.
>>>>
>>>> Can we charge the ring memory to RLIMIT_MEMLOCK as well? I'd prefer
>>>> not to repeat the mistake of fs.aio-max-nr.
>>>
>>> Sure, we can do that. With the ring limited in size (it's now 4k
>>> entries at most), the amount of memory gobbled up by that is much
>>> smaller than the fixed buffers. A max sized ring is about 256k of
>>> memory.
>
> Per io_uring. Nothing prevents a user from calling io_uring_setup in a
> loop and continuing to gobble up memory.

What if we set a sane limit for a single uring instance (not for the
whole io_uring), but allocate the rings on mmap? Then a greedy / nasty
app will be killed by the OOM killer.

--
Roman

--
To unsubscribe, send a message with 'unsubscribe linux-aio' in the body
to majordomo@kvack.org. For more info on Linux AIO,
see: http://www.kvack.org/aio/
Don't email: aart@kvack.org
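[Editorial note] Jeff's suggestion above, charging the ring memory against the caller's locked-memory budget rather than relying on a global knob like fs.aio-max-nr, can be sketched in plain C. All names below are invented for illustration; this is not the actual io_uring implementation, which would need atomic accounting against the owning user's locked_vm counter:

```c
#include <errno.h>

/*
 * Hypothetical sketch of per-user ring-memory accounting (names
 * invented, not real kernel code).  Before allocating ring pages,
 * charge their count against a locked-pages counter; refuse when the
 * new total would exceed the RLIMIT_MEMLOCK-style budget.
 */
static int account_ring_pages(unsigned long *locked_vm,
			      unsigned long nr_pages,
			      unsigned long limit_pages)
{
	unsigned long new_total = *locked_vm + nr_pages;

	/* reject over-budget requests and arithmetic wraparound */
	if (new_total > limit_pages || new_total < *locked_vm)
		return -ENOMEM;

	*locked_vm = new_total;	/* charge succeeded */
	return 0;
}

static void unaccount_ring_pages(unsigned long *locked_vm,
				 unsigned long nr_pages)
{
	*locked_vm -= nr_pages;	/* release the charge on ring teardown */
}
```

With accounting along these lines, an app calling io_uring_setup() in a loop exhausts its own locked-memory budget and gets -ENOMEM, instead of silently consuming unaccounted kernel pages; allocating the rings as mmap-backed memory, as Roman suggests, similarly makes the usage visible to per-process accounting and the OOM killer.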