From: Baoquan He <bhe@redhat.com>
To: Jiri Bohac <jbohac@suse.cz>
Cc: Michal Hocko <mhocko@suse.com>, Pingfan Liu <piliu@redhat.com>,
Tao Liu <ltao@redhat.com>, Vivek Goyal <vgoyal@redhat.com>,
Dave Young <dyoung@redhat.com>,
kexec@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/4] kdump: crashkernel reservation from CMA
Date: Thu, 30 Nov 2023 12:01:36 +0800 [thread overview]
Message-ID: <ZWgJIAYwfVvA+r8h@MiWiFi-R3L-srv> (raw)
In-Reply-To: <ZWcXyeDiI2v251F_@dwarf.suse.cz>
On 11/29/23 at 11:51am, Jiri Bohac wrote:
> Hi Baoquan,
>
> thanks for your interest...
>
> On Wed, Nov 29, 2023 at 03:57:59PM +0800, Baoquan He wrote:
> > On 11/28/23 at 10:08am, Michal Hocko wrote:
> > > On Tue 28-11-23 10:11:31, Baoquan He wrote:
> > > > On 11/28/23 at 09:12am, Tao Liu wrote:
> > > [...]
> > > > Thanks for the effort to bring this up, Jiri.
> > > >
> > > > I am wondering how you will use this crashkernel=,cma parameter, i.e.
> > > > what the scenario for crashkernel=,cma is. I'm asking because I don't
> > > > know how SUSE deploys kdump in its distros. Are unneeded drivers
> > > > filtered out of the kdump kernel's initrd in SUSE distros? If so, the
> > > > in-flight DMA issue is possible, e.g. a NIC has a DMA buffer in the
> > > > CMA area but is not reset during kdump boot because the NIC driver is
> > > > not loaded to initialize it. Not sure if this is 100% certain, but
> > > > possible in theory?
>
> yes, we also only add the necessary drivers to the kdump initrd (using
> dracut --hostonly).
>
> The plan was to use this feature by default only on systems where
> we are reasonably sure it is safe and let the user experiment
> with it when we're not sure.
>
> I grepped a list of all calls to pin_user_pages*. Of the 55,
> about half use FOLL_LONGTERM, so those should be migrated
> away from the CMA area. Among the rest there are four cases that
> don't use the pages to set up DMA:
> mm/process_vm_access.c: pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
> net/rds/info.c: ret = pin_user_pages_fast(start, nr_pages, FOLL_WRITE, pages);
> drivers/vhost/vhost.c: r = pin_user_pages_fast(log, 1, FOLL_WRITE, &page);
> kernel/trace/trace_events_user.c: ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
>
> The remaining cases are potentially problematic:
> drivers/gpu/drm/i915/gem/i915_gem_userptr.c: ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
> drivers/iommu/iommufd/iova_bitmap.c: ret = pin_user_pages_fast((unsigned long)addr, npages,
> drivers/iommu/iommufd/pages.c: rc = pin_user_pages_remote(
> drivers/media/pci/ivtv/ivtv-udma.c: err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count,
> drivers/media/pci/ivtv/ivtv-yuv.c: uv_pages = pin_user_pages_unlocked(uv_dma.uaddr,
> drivers/media/pci/ivtv/ivtv-yuv.c: y_pages = pin_user_pages_unlocked(y_dma.uaddr,
> drivers/misc/genwqe/card_utils.c: rc = pin_user_pages_fast(data & PAGE_MASK, /* page aligned addr */
> drivers/misc/xilinx_sdfec.c: res = pin_user_pages_fast((unsigned long)src_ptr, nr_pages, 0, pages);
> drivers/platform/goldfish/goldfish_pipe.c: ret = pin_user_pages_fast(first_page, requested_pages,
> drivers/rapidio/devices/rio_mport_cdev.c: pinned = pin_user_pages_fast(
> drivers/sbus/char/oradax.c: ret = pin_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
> drivers/scsi/st.c: res = pin_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
> drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c: actual_pages = pin_user_pages_fast((unsigned long)ubuf & PAGE_MASK, num_pages,
> drivers/tee/tee_shm.c: rc = pin_user_pages_fast(start, num_pages, FOLL_WRITE,
> drivers/vfio/vfio_iommu_spapr_tce.c: if (pin_user_pages_fast(tce & PAGE_MASK, 1,
> drivers/video/fbdev/pvr2fb.c: ret = pin_user_pages_fast((unsigned long)buf, nr_pages, FOLL_WRITE, pages);
> drivers/xen/gntdev.c: ret = pin_user_pages_fast(addr, 1, batch->writeable ? FOLL_WRITE : 0, &page);
> drivers/xen/privcmd.c: page_count = pin_user_pages_fast(
> fs/orangefs/orangefs-bufmap.c: ret = pin_user_pages_fast((unsigned long)user_desc->ptr,
> arch/x86/kvm/svm/sev.c: npinned = pin_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
> drivers/fpga/dfl-afu-dma-region.c: pinned = pin_user_pages_fast(region->user_addr, npages, FOLL_WRITE,
> lib/iov_iter.c: res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
>
> We can easily check if some of these drivers (of which some we don't
> even ship/support) are loaded and decide this system is not safe
> for CMA crashkernel. Maybe looking at the list more thoroughly
> will show that even some of the above calls are actually safe,
> e.g. because the DMA is set up for reading only.
> lib/iov_iter.c seems like it could be the real
> problem, since it's used by the generic block layer...
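For reference, a sweep like the one above can be reproduced along the lines of the sketch below. The regex and the tree path are assumptions, and FOLL_LONGTERM may sit on a different line than the call itself, so this is only a rough first pass that needs manual review:

```shell
# Rough sketch: list pin_user_pages* call sites in a kernel source tree,
# dropping lines that mention FOLL_LONGTERM on the call line itself
# (longterm pins get migrated out of CMA, so they are not the concern).
sweep_pin_user_pages() {
  grep -rn --include='*.c' -E 'pin_user_pages(_fast|_remote|_unlocked)?\(' "$1" |
    grep -v FOLL_LONGTERM |
    sort -u
}

# Example invocation (the path is an assumption; adjust to your checkout):
# sweep_pin_user_pages /usr/src/linux
```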
Hmm, yeah. From my point of view, we need to make sure that reusing the
,cma area in the kdump kernel is safe without exception. Using it only on
systems we are 100% sure about, and letting people experiment with it
when we're not sure, doesn't seem safe. Most of the time users don't even
know how to judge whether their system is 100% safe or whether its safety
is uncertain. That's too hard.
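For what it's worth, the "check if these drivers are loaded" idea mentioned above could be sketched roughly as below. The module names are assumptions guessed from the file paths in the list, not a vetted blocklist:

```shell
# Sketch: treat the system as unsafe for crashkernel=,cma if any module
# from a hypothetical blocklist of pin_user_pages users is loaded.
# Module names are assumptions derived from the directories listed above.
UNSAFE_MODULES="i915 ivtv genwqe_card xilinx_sdfec st"

cma_safe=yes
for mod in $UNSAFE_MODULES; do
  # /proc/modules lists one loaded module per line, name first.
  if [ -r /proc/modules ] && grep -q "^${mod} " /proc/modules; then
    echo "module ${mod} is loaded; treating system as unsafe for ,cma"
    cma_safe=no
  fi
done
echo "cma_safe=${cma_safe}"
```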
> > > > The crashkernel=,cma option requires that no userspace data is
> > > > dumped. From our support engineers' feedback, customers never say
> > > > they don't need to dump userspace data. Assume a server with a huge
> > > > database deployed, and the database has often collapsed recently; the
> > > > database provider claims it's not the database's fault, so the OS
> > > > needs to prove its innocence. What will you do?
> > >
> > > Don't use CMA backed crash memory then? This is an optional feature.
>
> Right. Our kdump does not dump userspace by default and we would
> of course make sure ,cma is not used when the user wanted to turn
> on userspace dumping.
>
> > > Jiri will know better than me but for us a proper crash memory
> > > configuration has become a real nut. You do not want to reserve too much
> > > because it is effectively cutting of the usable memory and we regularly
> > > hit into "not enough memory" if we tried to be savvy. The more tight you
> > > try to configure the easier to fail that is. Even worse any in kernel
> > > memory consumer can increase its memory demand and get the overall
> > > consumption off the cliff. So this is not an easy to maintain solution.
> > > CMA backed crash memory can be much more generous while still usable.
> >
> > Hmm, Red Hat has gone a different way. We have been trying to:
> > 1) customize the initrd for the kdump kernel specifically, e.g. exclude
> > unneeded devices' drivers to save memory;
>
> ditto
>
> > 2) monitor device and kernel memory usage in case they begin to consume
> > much more memory than before. We have CI test cases to watch this. We
> > once found one NIC eating up GB-level memory; that needed to be
> > investigated and fixed.
> > With these efforts, our default crashkernel values satisfy most cases,
> > though surely not all. Only rare cases need to be handled manually by
> > increasing crashkernel.
>
> We get a lot of problems reported by partners testing kdump on
> their setups prior to release. But even if we tune the reserved
> size up, OOM is still the most common reason for kdump to fail
> when the product starts getting used in real life. It's been
> pretty frustrating for a long time.
I remember SUSE engineers once said you boot the kernel and estimate the
kdump kernel's memory usage, then set crashkernel according to the
estimation. Is OOM still triggered even when that approach is taken? Just
curious, not questioning the benefit of using ,cma to save memory.
>
> > Wondering how you will use this crashkernel=,cma syntax. On normal
> > machines and virt guests, not much memory is needed; usually 256M or a
> > little more is enough. On those high-end systems with hundreds of
> > gigabytes, even terabytes, of memory, I don't think the memory saved
> > with crashkernel=,cma makes much sense.
>
> I feel the exact opposite about VMs. Reserving hundreds of MB for
> a crash kernel on _every_ VM on a busy VM host wastes the most
> memory. VMs are often tuned to a well-defined task and can be set
> up with very little memory, so the ~256 MB can be a huge part of
> that. And while it's theoretically better to dump from the
> hypervisor, users still often prefer kdump because the hypervisor
> may not be under their control. Also, in a VM it should be much
> easier to be sure the machine is safe WRT the potential DMA
> corruption, as it has fewer HW drivers. So I actually thought the
> CMA reservation could be most useful on VMs.
Hmm, we once discussed this upstream with David Hildenbrand, who works
on the virt team. The VM problem is much easier to solve if VMs complain
that the default crashkernel value is wasteful: the shrinking interface
is for them. The crashkernel value can't be enlarged, but shrinking the
existing crashkernel memory works smoothly. They can adjust it in a
script in a very simple way.
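The shrinking interface referred to here is /sys/kernel/kexec_crash_size. A minimal sketch of such a script might look like this; the target size is an assumption picked for illustration:

```shell
# Sketch: shrink an oversized crashkernel reservation from a boot script.
# /sys/kernel/kexec_crash_size reports the reserved size in bytes and
# accepts a smaller value to release the excess back to the system
# (it can only shrink, never grow).
SYSFS=/sys/kernel/kexec_crash_size
TARGET=$((192 * 1024 * 1024))   # assumed: 192 MB is enough for this guest

if [ -w "$SYSFS" ]; then
  current=$(cat "$SYSFS")
  if [ "$current" -gt "$TARGET" ]; then
    # The freed memory is returned to the first kernel's allocator.
    echo "$TARGET" > "$SYSFS"
  fi
else
  echo "kexec_crash_size not available or not writable; skipping shrink"
fi
```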
Anyway, let's discuss and figure out any risk of ,cma. If in the end all
worries and concerns are proved unnecessary, then we'll have a great new
feature. But we can't afford the risk of the ,cma area being entangled
with the 1st kernel's ongoing activity. As we know, unlike a kexec
reboot, we only shut down CPUs and interrupts; most devices stay alive.
And many of them may not be reset and reinitialized in the kdump kernel
if the relevant driver is not included.
Earlier, we saw several cases of in-flight DMA stomping on memory during
kexec reboot because some PCI devices didn't provide a shutdown()
method. It gave people a big headache to figure out and fix. Similarly
for kdump, we absolutely don't want to see that happen with ,cma; it
would be a disaster for kdump, no matter how much memory it saves,
because you wouldn't know what happened or how to debug it until you
suspected this and turned it off.
Thread overview: 48+ messages
2023-11-24 19:54 [PATCH 0/4] kdump: crashkernel reservation from CMA Jiri Bohac
2023-11-24 19:57 ` [PATCH 1/4] kdump: add crashkernel cma suffix Jiri Bohac
2023-11-25 7:24 ` kernel test robot
2023-11-24 19:58 ` [PATCH 2/4] kdump: implement reserve_crashkernel_cma Jiri Bohac
2023-11-24 19:58 ` [PATCH 3/4] kdump, x86: implement crashkernel CMA reservation Jiri Bohac
2023-11-24 19:58 ` [PATCH 4/4] kdump, documentation: describe craskernel " Jiri Bohac
2023-11-25 1:51 ` [PATCH 0/4] kdump: crashkernel reservation from CMA Tao Liu
2023-11-25 21:22 ` Jiri Bohac
2023-11-28 1:12 ` Tao Liu
2023-11-28 2:11 ` Baoquan He
2023-11-28 9:08 ` Michal Hocko
2023-11-29 7:57 ` Baoquan He
2023-11-29 9:25 ` Michal Hocko
2023-11-30 2:42 ` Baoquan He
2023-11-29 10:51 ` Jiri Bohac
2023-11-30 4:01 ` Baoquan He [this message]
2023-12-01 12:35 ` Jiri Bohac
2023-11-29 8:10 ` Baoquan He
2023-11-29 15:03 ` Donald Dutile
2023-11-30 3:00 ` Baoquan He
2023-11-30 10:16 ` Michal Hocko
2023-11-30 12:04 ` Baoquan He
2023-11-30 12:31 ` Baoquan He
2023-11-30 13:41 ` Michal Hocko
2023-12-01 11:33 ` Philipp Rudo
2023-12-01 11:55 ` Michal Hocko
2023-12-01 15:51 ` Philipp Rudo
2023-12-01 16:59 ` Michal Hocko
2023-12-06 11:08 ` Philipp Rudo
2023-12-06 11:23 ` David Hildenbrand
2023-12-06 13:49 ` Michal Hocko
2023-12-06 15:19 ` Michal Hocko
2023-12-07 4:23 ` Baoquan He
2023-12-07 8:55 ` Michal Hocko
2023-12-07 11:13 ` Philipp Rudo
2023-12-07 11:52 ` Michal Hocko
2023-12-08 1:55 ` Baoquan He
2023-12-08 10:04 ` Michal Hocko
2023-12-08 2:10 ` Baoquan He
2023-12-07 11:13 ` Philipp Rudo
2023-11-30 13:29 ` Michal Hocko
2023-11-30 13:33 ` Pingfan Liu
2023-11-30 13:43 ` Michal Hocko
2023-12-01 0:54 ` Pingfan Liu
2023-12-01 10:37 ` Michal Hocko
2023-11-28 2:07 ` Pingfan Liu
2023-11-28 8:58 ` Michal Hocko
2023-12-01 11:34 ` Philipp Rudo