CEPH filesystem development
* rbd after removal still 10 TB used.
@ 2014-06-10  0:46 Alphe Salas Michels
From: Alphe Salas Michels @ 2014-06-10  0:46 UTC
  To: ceph-devel

Hello,
I have an issue to report to you. It is with Ceph Emperor, version 0.72;
I don't know whether it is solved in Firefly, as I didn't see any change
related to this issue in the changelog.

First of all, Ceph is a storage ogre. Let me explain. Out of 40 TB of
raw disk overall, the real data I can store is 40 TB / 2 - 25%, which is
around 15 TB. But as I delete data and store new data, I notice that the
replicas are never overwritten. Logically, a PG has its mirror, so if a
PG is updated, the corresponding mirrored PG is updated too; or, better
said, if a PG is overwritten with new data, its related PG mirror is
overwritten with that new data as well. Real-life experience has shown
me that this is not the case: PGs are simply overwritten, and a new PG
mirror is created to hold the new data while the old PG mirror remains.
So as I delete and overwrite data on my RBD image, I see ever-growing
usage that leads to PGs forever stuck backfilling OSDs. Slowly, Ceph
stops accepting new data, as more and more OSDs reach the near_full
ratio and then the full ratio.
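
To make the capacity arithmetic concrete, here is a minimal sketch of
the estimate above; the 25% headroom is my reading of the margin needed
to stay under the near_full ratio, so treat it as an assumption rather
than an official formula:

    # Rough usable-capacity estimate for a replica = 2 pool.
    # Assumption: ~25% of post-replication capacity is kept free so
    # that no OSD crosses the near_full ratio.
    raw_tb = 40.0     # total raw disk across all OSDs
    replicas = 2      # pool replication factor (size = 2)
    headroom = 0.25   # fraction kept free to avoid near_full/full OSDs

    usable_tb = raw_tb / replicas * (1 - headroom)
    print("usable: %.1f TB" % usable_tb)   # -> usable: 15.0 TB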

Still, after I do "rbd rm myimagename", I notice that because some PGs
were stuck backfilling, I still have 10 TB locked. The only way to
reclaim that space is to completely wipe the Ceph cluster and reinstall
it. I don't think most people using Ceph can afford an ever-growing Ceph
system, because, believe it or not, new disks and new nodes have a cost,
and it is not wise for replicas to take up more than 3 times the data
they back up in a replica = 2 environment.
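
For anyone who wants to measure this, here is a minimal Python sketch
that compares pool usage before and after the removal. The pool name
"rbd" and the image name "myimagename" are placeholders, and the JSON
field names are my assumption and may differ between releases. Note
that OSDs delete objects asynchronously, so some delay is normal even
on a healthy cluster; the problem here is that the space never comes
back at all:

    # Sketch: measure how much space "rbd rm" actually frees.
    import json
    import subprocess

    def pool_bytes_used(pool_name):
        # "ceph df --format json" reports per-pool usage; the exact
        # field names ("pools", "stats", "bytes_used") are assumed.
        out = subprocess.check_output(["ceph", "df", "--format", "json"])
        for pool in json.loads(out)["pools"]:
            if pool["name"] == pool_name:
                return pool["stats"]["bytes_used"]
        raise KeyError(pool_name)

    before = pool_bytes_used("rbd")
    subprocess.check_call(["rbd", "rm", "myimagename"])
    after = pool_bytes_used("rbd")
    print("freed so far: %.1f GB" % ((before - after) / 1e9))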


At a time when Ceph is aiming to grow, it seems we are all oblivious to
its core problems: no data trimming and no data replica trimming.

Trimming data is probably resource-consuming, so we should at least have
a "let's plan and do a trim on that day" option. The other way around
would be better replica management, since that is where the problem
originates.
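
To illustrate the "plan a trim for that day" idea, here is a
hypothetical sketch: run fstrim on a filesystem backed by a mapped RBD
image during an off-peak window. The mount point /mnt/rbd is a
placeholder, and it assumes the whole stack (filesystem, krbd, OSDs)
honors discard requests, which is precisely what does not seem to
happen today:

    # Hypothetical scheduled trim for an RBD-backed filesystem.
    import subprocess
    import time
    from datetime import datetime

    OFF_PEAK_HOUR = 3  # run at 03:00, when client load is low

    while True:
        if datetime.now().hour == OFF_PEAK_HOUR:
            # fstrim tells the block layer which blocks are unused;
            # with discard support this would let the OSDs free them.
            subprocess.call(["fstrim", "-v", "/mnt/rbd"])
            time.sleep(3600)  # at most once per off-peak window
        time.sleep(60)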

Best regards

-------
Alphe Salas
