[[Category:Linux]]

= Serversetup with AOE =

== Install vblade-persist ==

Install vblade-persist if it is not already installed.
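
On Debian-based systems it is normally available as a package; a minimal sketch, assuming the package is simply called vblade-persist in your distribution:

<pre>
# install the persistent vblade wrapper (package name assumed)
apt-get install vblade-persist
</pre>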

If you want to expose a complete LVM volume group, first do the usual setup (assuming /dev/md1 will be the exposed storage):

<pre>
pvcreate /dev/md1
vgcreate storage /dev/md1
</pre>
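
If you want the clients to find ready-made logical volumes inside that group, you can carve them out on the server as usual; a minimal sketch, where the name and size are only placeholders:

<pre>
# create an example logical volume in the "storage" group
# (name and size are placeholders)
lvcreate -L 500G -n data0 storage
</pre>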

== Setup vblade-persist ==

Then set up the appropriate shelf and slot on the network device you want, with the storage backend you want:

<pre>
/usr/sbin/vblade-persist setup 0 1 br0 /dev/md1
</pre>
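
The first two arguments are the shelf and slot, which together form the AoE address the clients will see (shelf 0, slot 1 shows up as e0.1), followed by the network interface and the backing device. A minimal sketch of a second export on the same shelf, where /dev/storage/data0 is only a placeholder device:

<pre>
# export a second device as shelf 0, slot 2 on the same interface
# (/dev/storage/data0 is a placeholder)
/usr/sbin/vblade-persist setup 0 2 br0 /dev/storage/data0
</pre>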

Then make sure it automatically starts on boot:

<pre>
/usr/sbin/vblade-persist auto all
</pre>

Lastly, start it now (or reboot):

<pre>
/usr/sbin/vblade-persist start all
</pre>

You can always check the status with:

<pre>
/usr/sbin/vblade-persist ls
#shelf slot netif source auto? stat
0 1 br0 /dev/md1 auto run
</pre>

= Client setup with AOE =

== Install tools ==

Install aoetools:

<pre>
apt-get install aoetools
</pre>

Run aoe-stat and it should find the exposed storage*:

<pre>
aoe-stat
e0.1 4000.526GB br0 up
e1.1 1500.300GB br0 up
e2.0 920.198GB br0 up
</pre>

'''*''' Keep in mind that AoE is not TCP/IP; it runs directly on Ethernet, so the clients have to be on the same Layer 2 network as the server exposing the storage. All machines on that network can then see the exported data.
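
If a freshly exported device does not show up in aoe-stat, you can ask the driver to rescan; a minimal sketch using the aoetools helpers:

<pre>
# send a new discovery broadcast, then re-read the size of one target
aoe-discover
aoe-revalidate e0.1
</pre>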

Now you should be able to run lvscan, vgs etc. on all clients on this network and see all the LVM data.
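
To actually use one of the volumes on a client, activate the volume group and mount a logical volume; a minimal sketch, where /dev/storage/data0 and the mountpoint are again placeholders:

<pre>
# activate the "storage" volume group that lives on the AoE-exported PV
vgchange -ay storage
# mount one of its logical volumes (placeholder name and mountpoint)
mkdir -p /mnt/data0
mount /dev/storage/data0 /mnt/data0
</pre>

Note that an ordinary filesystem such as ext4 must only be mounted by one client at a time; concurrent access from several clients requires a cluster filesystem.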

If you want the client to automagically activate various volume groups at boot, you could for instance change /etc/default/aoetools:

<pre>
# /etc/default/aoetools
INTERFACES="br0"
LVMGROUPS="storage nas mirrordata"
</pre>
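
For reference, this is roughly the manual equivalent of what that configuration triggers at boot (my assumption about how the aoetools init script behaves):

<pre>
# restrict the aoe driver to the listed interfaces, rediscover targets,
# then activate the listed volume groups (assumed boot-time behaviour)
aoe-interfaces br0
aoe-discover
vgchange -ay storage nas mirrordata
</pre>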