From: Peter Xu <peterx@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>,
	Yuan Liu <yuan1.liu@intel.com>,
	farosas@suse.de, leobras@redhat.com, qemu-devel@nongnu.org,
	nanhai.zou@intel.com
Subject: Re: [PATCH 0/5] Live Migration Acceleration with IAA Compression
Date: Thu, 19 Oct 2023 11:23:31 -0400
Message-ID: <ZTFJ84SnSOAcU5gY@x1n>
In-Reply-To: <ZTFCnqbbqlmsUkRC@redhat.com>

On Thu, Oct 19, 2023 at 03:52:14PM +0100, Daniel P. Berrangé wrote:
> On Thu, Oct 19, 2023 at 01:40:23PM +0200, Juan Quintela wrote:
> > Yuan Liu <yuan1.liu@intel.com> wrote:
> > > Hi,
> > >
> > > I am writing to submit a code change aimed at enhancing live migration
> > > acceleration by leveraging the compression capability of the Intel
> > > In-Memory Analytics Accelerator (IAA).
> > >
> > > Enabling compression functionality during the live migration process can
> > > enhance performance, thereby reducing downtime and network bandwidth
> > > requirements. However, this improvement comes at the cost of additional
> > > CPU resources, posing a challenge for cloud service providers in terms of
> > > resource allocation. To address this challenge, I have focused on offloading
> > > the compression overhead to the IAA hardware, resulting in performance gains.
> > >
> > > The implementation of the IAA (de)compression code is based on Intel Query
> > > Processing Library (QPL), an open-source software project designed for
> > > IAA high-level software programming.
> > >
> > > Best regards,
> > > Yuan Liu
> > 
> > After reviewing the patches:
> > 
> > - why are you doing this on top of the old compression code, which is
> >   obsolete, deprecated and buggy?
> > 
> > - why are you not doing it on top of multifd?
> > 
> > You just need to add another compression method on top of multifd.
> > See how it was done for zstd:
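
Purely as a hedged sketch of that multifd pattern (not the zstd code Juan is pointing at): a new method boils down to filling in a MultiFDMethods table and registering it under a new MultiFDCompression value. Below, the qpl_* callbacks and MULTIFD_COMPRESSION_QPL are hypothetical names, the callback bodies are omitted, and only MultiFDMethods, multifd_register_ops() and migration_init() are existing QEMU identifiers:

  /* Hypothetical adaptation of the zstd pattern to a QPL-backed method;
   * callback implementations are omitted. */
  static MultiFDMethods multifd_qpl_ops = {
      .send_setup   = qpl_send_setup,    /* allocate per-channel QPL jobs   */
      .send_cleanup = qpl_send_cleanup,
      .send_prepare = qpl_send_prepare,  /* deflate the pages of one packet */
      .recv_setup   = qpl_recv_setup,
      .recv_cleanup = qpl_recv_cleanup,
      .recv_pages   = qpl_recv_pages,    /* inflate back into guest RAM     */
  };

  static void multifd_qpl_register(void)
  {
      /* MULTIFD_COMPRESSION_QPL would be a new value in the QAPI enum. */
      multifd_register_ops(MULTIFD_COMPRESSION_QPL, &multifd_qpl_ops);
  }

  migration_init(multifd_qpl_register);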
> 
> I'm not sure that is the ideal approach.  IIUC, the IAA/QPL library
> is not defining a new compression format. Rather, it is providing a
> hardware accelerator for the 'deflate' format, which can be made
> compatible with zlib:
> 
>   https://intel.github.io/qpl/documentation/dev_guide_docs/c_use_cases/deflate/c_deflate_zlib_gzip.html#zlib-and-gzip-compatibility-reference-link
> 
> With multifd we already have a 'zlib' compression format, and so
> this IAA/QPL logic would effectively just be providing a second
> implementation of zlib.
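
As a rough, standalone illustration of that compatibility (not QEMU code; the qpl_job fields and flags are taken from the QPL documentation linked above and should be checked against the installed <qpl/qpl.h>; builds roughly with "cc demo.c -lqpl -lz"), a buffer deflated through QPL can be inflated with plain software zlib:

  #include <qpl/qpl.h>
  #include <zlib.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      static const char src[] = "the same page, the same page, the same page";
      uint8_t comp[4096], plain[4096];
      uint32_t job_size = 0;
      qpl_job *job;

      /* qpl_path_auto takes the IAA hardware path when available and
       * falls back to QPL's software deflate otherwise. */
      qpl_get_job_size(qpl_path_auto, &job_size);
      job = malloc(job_size);
      qpl_init_job(qpl_path_auto, job);

      /* One-shot deflate compression with dynamic Huffman codes. */
      job->op            = qpl_op_compress;
      job->next_in_ptr   = (uint8_t *)src;
      job->available_in  = sizeof(src);
      job->next_out_ptr  = comp;
      job->available_out = sizeof(comp);
      job->level         = qpl_default_level;
      job->flags         = QPL_FLAG_FIRST | QPL_FLAG_LAST |
                           QPL_FLAG_DYNAMIC_HUFFMAN | QPL_FLAG_OMIT_VERIFY;
      if (qpl_execute_job(job) != QPL_STS_OK) {
          fprintf(stderr, "qpl compression failed\n");
          return 1;
      }
      uint32_t comp_len = job->total_out;
      qpl_fini_job(job);
      free(job);

      /* QPL emits a raw deflate stream, hence windowBits = -15. */
      z_stream zs = { 0 };
      inflateInit2(&zs, -15);
      zs.next_in   = comp;
      zs.avail_in  = comp_len;
      zs.next_out  = plain;
      zs.avail_out = sizeof(plain);
      int ret = inflate(&zs, Z_FINISH);
      inflateEnd(&zs);

      printf("QPL deflate + zlib inflate: %s\n",
             ret == Z_STREAM_END && memcmp(plain, src, sizeof(src)) == 0
             ? "round-trip OK" : "mismatch");
      return 0;
  }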
> 
> Given the use of a standard format, I would expect to be able
> to use software zlib on the source, mixed with IAA/QPL zlib on
> the destination, or vice versa.
> 
> IOW, rather than defining a new compression format for this,
> I think we could look at a new migration parameter for
> 
> "compression-accelerator": ["auto", "none", "qpl"]
> 
> with 'auto' the default, such that we can automatically enable
> IAA/QPL when 'zlib' format is requested, if running on a suitable
> host.
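
A minimal sketch of how that 'auto' policy could behave, assuming the parameter Daniel proposes; every identifier below is invented for illustration and none of it is existing QEMU code:

  #include <stdbool.h>

  typedef enum {
      COMPRESSION_ACCELERATOR_AUTO,
      COMPRESSION_ACCELERATOR_NONE,
      COMPRESSION_ACCELERATOR_QPL,
  } CompressionAccelerator;

  /* Decide whether the 'zlib' multifd channels should be backed by IAA/QPL.
   * Because the wire format is plain deflate either way, the choice is
   * purely local to each side of the migration. */
  static bool use_qpl_for_zlib(CompressionAccelerator accel, bool qpl_usable)
  {
      switch (accel) {
      case COMPRESSION_ACCELERATOR_QPL:
          return true;        /* explicit request; report an error if unusable */
      case COMPRESSION_ACCELERATOR_NONE:
          return false;       /* force the software zlib implementation */
      case COMPRESSION_ACCELERATOR_AUTO:
      default:
          return qpl_usable;  /* accelerate opportunistically on a suitable host */
      }
  }

This keeps 'zlib' as the only user-visible compression format while letting the source and destination independently pick hardware or software implementations.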

When reading the series, I was also curious how the compression format
compares to the software ones.

Would there be a use case where one would prefer software compression even
if a hardware accelerator exists, on either the source or the destination?

I'm wondering whether we can avoid adding one more parameter and instead
always use hardware acceleration whenever possible.

Thanks,

-- 
Peter Xu




Thread overview: 25+ messages
2023-10-18 22:12 [PATCH 0/5] Live Migration Acceleration with IAA Compression Yuan Liu
2023-10-18 22:12 ` [PATCH 1/5] configure: add qpl meson option Yuan Liu
2023-10-19 11:12   ` Juan Quintela
2023-10-18 22:12 ` [PATCH 2/5] qapi/migration: Introduce compress-with-iaa migration parameter Yuan Liu
2023-10-19 11:15   ` Juan Quintela
2023-10-19 14:02   ` Peter Xu
2023-10-18 22:12 ` [PATCH 3/5] ram compress: Refactor ram compression functions Yuan Liu
2023-10-19 11:19   ` Juan Quintela
2023-10-18 22:12 ` [PATCH 4/5] migration iaa-compress: Add IAA initialization and deinitialization Yuan Liu
2023-10-19 11:27   ` Juan Quintela
2023-10-18 22:12 ` [PATCH 5/5] migration iaa-compress: Implement IAA compression Yuan Liu
2023-10-19 11:36   ` Juan Quintela
2023-10-19 11:13 ` [PATCH 0/5] Live Migration Acceleration with IAA Compression Juan Quintela
2023-10-19 11:40 ` Juan Quintela
2023-10-19 14:52   ` Daniel P. Berrangé
2023-10-19 15:23     ` Peter Xu [this message]
2023-10-19 15:31       ` Juan Quintela
2023-10-19 15:32       ` Daniel P. Berrangé
2023-10-23  8:33         ` Liu, Yuan1
2023-10-23 10:29           ` Daniel P. Berrangé
2023-10-23 10:47             ` Juan Quintela
2023-10-23 14:54               ` Liu, Yuan1
2023-10-23 14:36             ` Liu, Yuan1
2023-10-23 10:38           ` Juan Quintela
2023-10-23 16:32             ` Liu, Yuan1
