public inbox for kvm@vger.kernel.org
* Direct access to GPGPU do-able?
@ 2009-12-01  7:08 Carsten Aulbert
  2009-12-01  8:45 ` Weidong Han
  0 siblings, 1 reply; 4+ messages in thread
From: Carsten Aulbert @ 2009-12-01  7:08 UTC (permalink / raw)
  To: kvm

Hi,

I'll start with a one-off question here, so please cc me on the reply. 

We are running a largish cluster and are currently buying GPGPU systems (Tesla 
and soon Fermi based). We will have at least 2, possibly 4, of these cards per 
box, and we have the problem that some codes need different CUDA kernel drivers 
to run. As these boxes have 4 CPU cores, 12 GB of memory, and CPU VT support, we 
thought this might be solvable by creating (para-)virtualized guests on the 
boxes and passing one GPGPU device into a guest at a time. In there we could 
then run any kernel/driver combo necessary.

But since my virtualization experience stretches only to OpenVZ and VirtualBox 
(plus tinkering with Xen a couple of years back), I don't know if KVM is the 
right approach here. We need something we can set up automatically via the CLI, 
i.e. starting and stopping the guests needs to be fully automatic. We don't 
need a graphical environment within the guests; plain text is good enough.

What do you think, is looking at KVM the right choice for this? Can we pass a 
device directly into a guest?
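For what it's worth, a scripted qemu-kvm device-assignment invocation might 
look roughly like the sketch below. Everything in it is illustrative, not taken 
from this thread: the disk image path, guest sizing, and the PCI address of the 
card are made up, the host-side unbind steps are only shown as comments, and 
the script prints the command instead of executing it.

```shell
#!/bin/sh
# Sketch of assigning one GPGPU card to a KVM guest from the command line.
# Note: host device assignment needs an IOMMU (Intel VT-d enabled in the
# BIOS and kernel), not just CPU VT-x.

GUEST_IMG=/srv/vm/cuda-guest.img   # hypothetical guest disk image
GPU_BDF=0000:07:00.0               # hypothetical PCI address of one card

# Before starting the guest, the card must be detached from any host
# driver and bound to pci-stub (requires root and the real device, so it
# is commented out in this dry-run sketch):
# echo "$GPU_BDF" > /sys/bus/pci/devices/$GPU_BDF/driver/unbind
# echo "$GPU_BDF" > /sys/bus/pci/drivers/pci-stub/bind

# Build the qemu-kvm invocation; -device pci-assign hands the physical
# card to the guest, -nographic keeps everything on the serial console.
QEMU_CMD="qemu-kvm -m 4096 -smp 2 -nographic \
  -drive file=$GUEST_IMG,if=virtio \
  -device pci-assign,host=${GPU_BDF#0000:}"

# Dry run: print the command that would be executed.
echo "$QEMU_CMD"
```

Stopping a guest non-interactively can be done the same way, e.g. by talking 
to the QEMU monitor over a socket or simply killing the qemu-kvm process; the 
exact assignment option also depends on the qemu-kvm version in use, so treat 
the above as a starting point rather than a recipe.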

Cheers

Carsten

-- 
Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
Phone/Fax: +49 511 762-17185 / -17193
http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/3



Thread overview: 4 messages
2009-12-01  7:08 Direct access to GPGPU do-able? Carsten Aulbert
2009-12-01  8:45 ` Weidong Han
2009-12-01 18:33   ` Fede
2010-02-25 14:03     ` André Weidemann
