From: Jakub Narebski <jnareb@gmail.com>
To: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Cc: git@vger.kernel.org
Subject: Re: Is offloading to GPU a worthwhile feature?
Date: Wed, 11 Apr 2018 18:46:46 +0200
Message-ID: <86h8oh6689.fsf@gmail.com>
In-Reply-To: <57c33d0a-458e-f591-164d-33f8257d3972@linuxfoundation.org> (Konstantin Ryabitsev's message of "Mon, 9 Apr 2018 13:57:55 -0400")
Konstantin Ryabitsev <konstantin@linuxfoundation.org> writes:
> On 04/08/18 09:59, Jakub Narebski wrote:
>>> This is an entirely idle pondering kind of question, but I wanted to
>>> ask. I recently discovered that some edge providers are starting to
>>> offer systems with GPU cards in them -- primarily for clients that need
>>> to provide streaming video content, I guess. As someone who needs to run
>>> a distributed network of edge nodes for a fairly popular git server, I
>>> wondered if git could at all benefit from utilizing a GPU card for
>>> something like delta calculations or compression offload, or if benefits
>>> would be negligible.
>>
>> The problem is that you need to transfer the data from the main memory
>> (host memory) geared towards low-latency thanks to cache hierarchy, to
>> the GPU memory (device memory) geared towards bandwidth and parallel
>> access, and back again. So to make sense the time for copying data plus
>> the time to perform calculations on GPU (and not all kinds of
>> computations can be sped up on a GPU -- you need a fine-grained, massively
>> data-parallel task) must be less than the time to perform the calculations
>> on the CPU (with multi-threading).
>
> Would something like this be well-suited for tasks like routine fsck,
> repacking and bitmap generation? That's the kind of workloads I was
> imagining it would be most well-suited for.
All of those, I think, would need to use some graph algorithms. While
there are ready-made graph libraries for the GPU (like nVidia's
nvGRAPH), graphs are irregular structures, not that well suited to the
SIMD type of parallelism that GPUs are best at.
I also wonder if the amount of memory on a GPU would be enough (and if
not, whether it would be possible to perform the calculations in batches).
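
To put rough numbers on the trade-off from the previous message -- offloading
pays off only when transfer time plus GPU compute time beats CPU compute
time -- here is a back-of-envelope sketch. It is a toy model only: the
`offload_worthwhile` helper and all hardware figures below are made-up
illustrations, not measurements of any real git workload.

```python
# Toy model: offloading to the GPU pays off only when PCIe transfer
# time plus GPU compute time is less than CPU compute time.
# All figures are illustrative assumptions, not measurements.

def offload_worthwhile(data_bytes, cpu_gflops, gpu_gflops,
                       pcie_gbps, flops_per_byte):
    """Return True if (transfer + GPU compute) < CPU compute."""
    total_flops = data_bytes * flops_per_byte
    cpu_time = total_flops / (cpu_gflops * 1e9)
    gpu_time = total_flops / (gpu_gflops * 1e9)
    # Data has to cross the bus twice: to the device and back.
    transfer_time = 2 * data_bytes / (pcie_gbps * 1e9)
    return transfer_time + gpu_time < cpu_time

# A low-arithmetic-intensity task (say ~1 flop/byte, as delta search
# might be) over a 1 GiB pack: the transfer cost dominates.
print(offload_worthwhile(2**30, cpu_gflops=100, gpu_gflops=5000,
                         pcie_gbps=16, flops_per_byte=1))    # False

# A compute-heavy task (~100 flops/byte) on the same data can win.
print(offload_worthwhile(2**30, cpu_gflops=100, gpu_gflops=5000,
                         pcie_gbps=16, flops_per_byte=100))  # True
```

The point of the sketch is that arithmetic intensity (flops per byte
moved) decides the question, and most of git's workloads look closer to
the first case than the second.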
>> Also, you would need to keep the non-GPU and GPGPU code in sync. Some
>> parts of the code do not change much; and there are also solutions to
>> generate the dual code from one source.
>>
>> Still, it might be a good idea,
>
> I'm still totally the wrong person to be implementing this, but I do
> have access to Packet.net's edge systems which carry powerful GPUs for
> projects that might be needing these for video streaming services. It
> seems a shame to have them sitting idle if I can offload some of the
> RAM- and CPU-hungry tasks like repacking to run there.
Happily, GPGPU programming (in CUDA C mainly, which limits its use to
nVidia hardware) is one of my areas of interest...
Best regards,
--
Jakub Narębski
Thread overview: 5+ messages
2018-02-27 20:52 Is offloading to GPU a worthwhile feature? Konstantin Ryabitsev
2018-02-27 22:08 ` Stefan Beller
2018-04-08 13:59 ` Jakub Narebski
2018-04-09 17:57 ` Konstantin Ryabitsev
2018-04-11 16:46 ` Jakub Narebski [this message]