From: Chuck Lever <cel@kernel.org>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: Daniel Gomez <da.gomez@kruces.com>,
kdevops@lists.linux.dev,
Devasena Inupakutika <devasena.i@samsung.com>,
Dongjoo Seo <dongjoo.seo1@samsung.com>,
Joel Fernandes <Joelagnelf@nvidia.com>
Subject: Re: [PATCH v2 0/4] vLLM and the vLLM production stack
Date: Sat, 4 Oct 2025 13:14:00 -0400 [thread overview]
Message-ID: <e49254e9-620e-4cce-aad6-410b9281d1ae@kernel.org> (raw)
In-Reply-To: <aOFTTOG3YV_jOGxB@bombadil.infradead.org>
On 10/4/25 1:03 PM, Luis Chamberlain wrote:
>> I haven't done the follow-up work yet to integrate GPU-enabled AMIs into
>> the AWS Compute menu. That seems like it should be the top priority. I
>> need to go back and look at what you did to generate those in your
>> prototype, to close those gaps.
> Oh yes that's needed. I also need a patch to disable VPCs and enable
> public IPs.
AWS instances are created with public IPs (at least mine are, since my
buildbot master is still in my basement. :-)
Can you elaborate on what's missing here, and I will review patches or
try to help however I can.
> While at it, so that AWS won't eat my corporate expenditures,
> I added Slack cloud-bill support too. The whole "static" stuff can be
> ignored; it doesn't work. I was just trying to add static instances
> to see if I could get some larger GPU instances to work, but it didn't
> work and I gave up.
I assume by "static" you mean long-lived. I've been considering that for
things like kernel build nodes and buildbot masters. Sounds like there
are some interesting use cases to consider.
> But the rest of the changes are legit, please feel
> free to cherry pick what you see useful from here:
>
> https://github.com/linux-kdevops/kdevops/tree/ci-testing/mcgrof/20251004-cloud-bill
Great! I will have a look at those.
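For anyone following along, picking individual commits from a published
branch like the one above generally looks like the sketch below. The
repository, branch name, file, and commit message here are illustrative
stand-ins run against a throwaway repo, not the actual kdevops tree:

```shell
# Sketch: cherry-pick a selected commit from another branch.
# All names below are placeholders, not real kdevops content.
set -e
export GIT_AUTHOR_NAME=editor GIT_AUTHOR_EMAIL=editor@example.com
export GIT_COMMITTER_NAME=editor GIT_COMMITTER_EMAIL=editor@example.com

tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo       # requires git >= 2.28 for -b
cd repo
git commit -q --allow-empty -m "baseline"

# Stand-in for the contributor's published branch
git checkout -q -b contrib
echo "cloud-bill: notify Slack of AWS spend" > cloud-bill.txt
git add cloud-bill.txt
git commit -q -m "Add cloud-bill support"
pick=$(git rev-parse HEAD)

# Back on main, take only the commit you want
git checkout -q main
git cherry-pick "$pick" >/dev/null
git log --oneline
```

Against a real fork you would instead `git remote add` the contributor's
repository, `git fetch` it, browse `git log` of the fetched branch, and
cherry-pick the hashes you want.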
--
Chuck Lever
Thread overview: 13+ messages
2025-10-04 16:38 [PATCH v2 0/4] vLLM and the vLLM production stack Luis Chamberlain
2025-10-04 16:38 ` [PATCH v2 1/4] workflows: Add vLLM workflow for LLM inference and production deployment Luis Chamberlain
2025-10-04 16:38 ` [PATCH v2 2/4] vllm: Add DECLARE_HOSTS support for bare metal and existing infrastructure Luis Chamberlain
2025-10-04 16:38 ` [PATCH v2 3/4] vllm: Add GPU-enabled defconfig with compatibility documentation Luis Chamberlain
2025-10-04 16:38 ` [PATCH v2 4/4] defconfigs: Add composable fragments for Lambda Labs vLLM deployment Luis Chamberlain
2025-10-04 16:39 ` [PATCH v2 0/4] vLLM and the vLLM production stack Luis Chamberlain
2025-10-04 16:55 ` Chuck Lever
2025-10-04 17:03 ` Luis Chamberlain
2025-10-04 17:14 ` Chuck Lever [this message]
2025-10-08 17:46 ` Chuck Lever
2025-10-10 0:55 ` Luis Chamberlain
2025-10-10 12:38 ` Chuck Lever
2025-10-10 16:20 ` Chuck Lever