CEPH filesystem development
* Fwd: CEPH Multitenancy and Data Isolation
       [not found] ` <CFBB8E65.E806%vilobhmm-ZXvpkYn067l8UrSeD/g0lQ@public.gmane.org>
@ 2014-06-10  8:56   ` Vilobh Meshram
  2014-06-10 23:30     ` [ceph-users] " Josh Durgin
  0 siblings, 1 reply; 2+ messages in thread
From: Vilobh Meshram @ 2014-06-10  8:56 UTC (permalink / raw)
  To: ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
  Cc: ceph-users-Qp0mS5GaXlQ@public.gmane.org


How does Ceph guarantee data isolation for volumes that are not meant to be shared within an OpenStack tenant?

When used with OpenStack, data isolation is enforced at the OpenStack level, so that all users who are part of the same tenant can access/share the volumes created by users in that tenant. Consider a case where we have one pool named “Volumes” for all tenants, and all tenants use the same keyring to access the volumes in that pool.

  1.  How do we guarantee that one user can’t see the contents of volumes created by another user, if the volume is not meant to be shared?
  2.  If a malicious user gets access to the keyring (which we use as the authentication mechanism between the client/OpenStack and Ceph), how does Ceph guarantee that the malicious user can’t access the volumes in that pool?
  3.  Let’s say our Cinder services are running on the OpenStack API node. How does the Ceph keyring information get transferred from the API node to the hypervisor node? Is the keyring passed through the message queue? If so, could a malicious user inspect the message queue and grab the keyring? If not, how does it get from the API node to the hypervisor node?

Thanks,
Vilobh



* Re: [ceph-users] Fwd: CEPH Multitenancy and Data Isolation
  2014-06-10  8:56   ` Fwd: CEPH Multitenancy and Data Isolation Vilobh Meshram
@ 2014-06-10 23:30     ` Josh Durgin
  0 siblings, 0 replies; 2+ messages in thread
From: Josh Durgin @ 2014-06-10 23:30 UTC (permalink / raw)
  To: Vilobh Meshram, ceph-devel@vger.kernel.org; +Cc: ceph-users@ceph.com

On 06/10/2014 01:56 AM, Vilobh Meshram wrote:
>> How does Ceph guarantee data isolation for volumes that are not meant
>> to be shared within an OpenStack tenant?
>>
>> When used with OpenStack, data isolation is enforced at the OpenStack
>> level, so that all users who are part of the same tenant can
>> access/share the volumes created by users in that tenant. Consider a
>> case where we have one pool named “Volumes” for all tenants, and all
>> tenants use the same keyring to access the volumes in that pool.
>>
>>  1. How do we guarantee that one user can’t see the contents of
>>     volumes created by another user, if the volume is not meant to be
>>     shared?

OpenStack users or tenants have no access to the keyring. Cinder tracks
volume ownership and checks permissions when a volume is attached, and
QEMU prevents users from seeing anything outside of their VM, including
the keyring.
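
On top of that, the keyring itself can be scoped with cephx capabilities
so it only grants what cinder/nova actually need. A minimal sketch
(client name and pool name "volumes" are assumptions here; the capability
syntax is the standard `ceph auth` one):

```shell
# Create (or fetch) a key for cinder that can only read cluster maps
# from the monitors and read/write objects in the "volumes" pool.
ceph auth get-or-create client.cinder \
    mon 'allow r' \
    osd 'allow rwx pool=volumes'

# Inspect the capabilities attached to the key.
ceph auth get client.cinder
```

A key scoped like this cannot touch other pools even if it leaks, which
is why per-service (or per-tenant-pool) keys are preferable to a single
admin keyring.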

>>  2. If a malicious user gets access to the keyring (which we use as
>>     the authentication mechanism between the client/OpenStack and
>>     Ceph), how does Ceph guarantee that the malicious user can’t
>>     access the volumes in that pool?

The keyring gives a user access to the cluster. If someone has a valid 
keyring, Ceph treats them as a valid user, since there is no information
to say otherwise. Ceph can't tell whether the user of a keyring is
malicious.
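
If you suspect a key has leaked, the recourse on the Ceph side is to
narrow or revoke it. A sketch with the standard auth commands (the
`client.cinder` name is an assumption):

```shell
# Narrow the key's capabilities in place...
ceph auth caps client.cinder mon 'allow r' osd 'allow rwx pool=volumes'

# ...or revoke the compromised key entirely and issue a new one.
ceph auth del client.cinder
```

Anyone still holding the old keyring is then rejected at authentication
time, since cephx validates the key against the monitors on every new
session.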

>>  3. Let’s say our Cinder services are running on the OpenStack API
>>     node. How does the Ceph keyring information get transferred from
>>     the API node to the hypervisor node? Is the keyring passed through
>>     the message queue? If so, could a malicious user inspect the
>>     message queue and grab the keyring? If not, how does it get from
>>     the API node to the hypervisor node?

The keyring is static and configured by the administrator on the nodes
running cinder-volume and nova-compute. It's not sent over the network,
and is not needed by nova or cinder api nodes.
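
On the compute node, the usual setup is for the administrator to load the
key into libvirt once as a secret, so nova only ever references it by
UUID. A sketch along the lines of the Ceph/OpenStack integration docs
(the UUID below is a placeholder you would generate with uuidgen):

```shell
# Define a libvirt secret that names the cinder client key.
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml

# Attach the actual key material to the secret, out of band.
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
    --base64 "$(ceph auth get-key client.cinder)"
```

Since only the secret UUID travels through nova's configuration and RPC,
the key material itself never crosses the message queue.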

Josh

