Ceph - howto, rbd, lvm, cluster
Install ceph
wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install ceph
Video intro to Ceph
https://www.youtube.com/watch?v=UXcZ2bnnGZg
http://www.youtube.com/watch?v=BBOBHMvKfyc&feature=g-high
Rebooting node stops everything / Set number of replicas across all nodes
Make sure that the minimum replica count is set to nodes-1:
ceph osd pool set <poolname> min_size 1
Then the remaining node(s) will start up with just 1 node if everything else is down.
Keep in mind this can potentially make stuff ugly, as there are no replicas at that point.
More info here: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/10481
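To verify what a pool is currently set to before and after the change, the pool's size and min_size can be queried. A minimal sketch (the pool name "rbd" is just an example):

```shell
# Show the replica count and the minimum replicas required for I/O
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# For a two-node cluster, allow I/O with a single surviving replica
ceph osd pool set rbd min_size 1
```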
Adjust/create pools
Prepare the disk as usual (partition or entire disk) and format it with a filesystem of your choosing. Add it to fstab and mount it. Add it to /etc/ceph/ceph.conf and replicate the new conf to the other nodes.
Start the OSD; I'm assuming we've added osd.12 here.
## Auth stuff to make sure that the OSD is accepted into the cluster:
mkdir /srv/ceph/osd12
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.12
## Create the OSD and start it
ceph osd create osd.12
/etc/init.d/ceph start osd.12
## Add it to the cluster and allow replication based on the CRUSH map
ceph osd crush set 12 osd.12 1.0 pool=default rack=unknownrack host=ceph1
Check that it is in the right place with:
ceph osd tree
More info here: http://ceph.com/docs/master/rados/operations/pools/
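The docs linked above cover pool management in detail. As a sketch of creating and tuning a pool (the pool name "mypool" and the PG count of 128 are example values, not recommendations):

```shell
# Create a replicated pool with 128 placement groups
ceph osd pool create mypool 128

# Keep 2 copies of each object, and allow I/O with only 1 present
ceph osd pool set mypool size 2
ceph osd pool set mypool min_size 1
```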
Delete pools/OSD
Make sure you have the right disk: run
ceph osd tree
to get an overview.
Delete an OSD
ceph osd crush remove osd.12
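Removing it from the CRUSH map alone leaves the OSD's auth key and ID behind. A fuller removal sequence, assuming osd.12 from the earlier example and the same init-script setup:

```shell
# Stop data placement to the OSD, stop the daemon, then remove all traces of it
ceph osd out 12
/etc/init.d/ceph stop osd.12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12
```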
Going from OSD-based replication to replication across hosts in a Ceph cluster
More info here: http://jcftang.github.com/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
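The gist of the linked post is changing the CRUSH rule's failure domain from osd to host. A sketch of editing the CRUSH map by hand (file names are arbitrary; the exact rule line depends on your map):

```shell
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# In crushmap.txt, change the placement step in the relevant rule from
#   step chooseleaf firstn 0 type osd
# to
#   step chooseleaf firstn 0 type host

# Recompile and inject the edited map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```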