Ceph - howto, rbd, lvm, cluster

From Skytech
Revision as of 12:07, 8 December 2012 by Martin (talk | contribs)


Install ceph

wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install ceph
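A quick sanity check after the install (the exact version string will vary with the release you pulled in):

```shell
# Confirm the packages landed; this works even without a running cluster.
ceph --version
```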


Video to ceph intro

https://www.youtube.com/watch?v=UXcZ2bnnGZg
http://www.youtube.com/watch?v=BBOBHMvKfyc&feature=g-high

Rebooting node stops everything / Set number of replicas across all nodes

Make sure that the min replica count (min_size) is set to no more than nodes-1.

ceph osd pool set <poolname> min_size 1

Then the remaining node[s] will start up with just 1 node even if everything else is down.

Keep in mind this can get ugly: with min_size 1 the pool keeps serving I/O even when no replicas survive, so a single disk failure at that point means data loss.
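For a two-node cluster, a sketch of the matching size/min_size settings ("rbd" is just an example pool name, substitute your own):

```shell
# Keep 2 copies in normal operation...
ceph osd pool set rbd size 2
# ...but allow I/O to continue with only 1 copy while a node is down.
ceph osd pool set rbd min_size 1
# Confirm the settings:
ceph osd dump | grep rbd
```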

More info here: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/10481

Adjust/create pools

/etc/init.d/ceph start osd.12
ceph osd crush set 12 osd.12 0.5 pool=default rack=unknownrack host=ceph1

Check that it is in the right place with:

ceph osd tree

More info here: http://ceph.com/docs/master/rados/operations/pools/
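Creating a pool looks roughly like this ("mypool" and the pg count of 128 are example values; a common rule of thumb is about (100 * OSDs) / replicas, rounded up to a power of two):

```shell
# Create a pool with 128 placement groups.
ceph osd pool create mypool 128
# List pools to confirm it exists:
rados lspools
```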

Delete pools/OSD

To make sure you have the right OSD, run

ceph osd tree

to get an overview.

Delete an OSD

ceph osd crush remove osd.12
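The crush remove alone leaves traces behind; a fuller removal sequence (using osd.12 from the example above) usually looks like this:

```shell
# Mark it out first so data drains off before removal.
ceph osd out 12
# Stop the daemon on the host that runs it.
/etc/init.d/ceph stop osd.12
# Remove it from the crush map.
ceph osd crush remove osd.12
# Delete its authentication key.
ceph auth del osd.12
# Finally remove the OSD from the cluster.
ceph osd rm 12
```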

Going from replicating across OSDs to replicating across hosts in a ceph cluster

More info here: http://jcftang.github.com/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
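The linked article boils down to editing the crush rule so replicas land on different hosts instead of just different OSDs; a sketch of the round-trip (file names are arbitrary):

```shell
# Dump and decompile the current crush map.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# Edit crushmap.txt: in the relevant rule, change
#   step chooseleaf firstn 0 type osd
# to
#   step chooseleaf firstn 0 type host
# Then recompile and inject the new map.
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

Expect data movement after injecting the new map while placement groups rebalance onto the new layout.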