drbd + heartbeat
DRBD
Installation
Configuration
fail-over test
HEARTBEAT
Installation
Verifying the virtual interface
fail-over test
node 1 shutdown
Advantages over wackamole + spread
Disadvantages
DRBD
Installation
Download the latest version from http://oss.linbit.com
# wget http://oss.linbit.com/drbd/8.3/drbd-8.3.0.tar.gz
# tar xvfz drbd-8.3.0.tar.gz
# cd drbd-8.3.0
# cp drbd.spec.in /usr/src/redhat/SPECS/drbd.spec
# cd /usr/src/redhat/SPECS/
#
Open drbd.spec in vi and modify the following items:
--Packager:
++Packager: htkim <htkim@simplexi.com>
--test "$(scripts/get_uts_release.sh)" = %{kernelversion}
++test "2.6.18-92.el5" = %{kernelversion} # 2.6.18-92.el5 : the current kernel version
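If you are unsure of the running kernel version, uname -r prints it; on the test machines here it is the value substituted above.
# uname -r
2.6.18-92.el5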
Build the RPM
# rpmbuild -bb drbd.spec
Install the RPMs
# cd /usr/src/redhat/RPMS/x86_64
# rpm -Uvh drbd-8.3.0-3.x86_64.rpm drbd-km-2.6.18_92.el5-8.3.0-3.x86_64.rpm drbd-debuginfo-8.3.0-3.x86_64.rpm
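To double-check that the packages went in, a quick query can be used (just a sanity check; it should list the three packages installed above):
# rpm -qa | grep drbd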
Configuration
The configuration file is /etc/drbd.conf; configure it identically on node1 and node2.
drbd1 # cat /etc/drbd.conf
global { usage-count no; }
resource drbd {
protocol C;
startup { wfc-timeout 0; degr-wfc-timeout 120; }
disk { on-io-error detach; } # or panic, ...
net { cram-hmac-alg "sha1"; shared-secret "Cent0Sru!3z"; } # don't forget to choose a secret for auth !
syncer { rate 10M; }
on drbd1.cafe24test.com {
device /dev/drbd0;
disk /dev/hdb;
address 192.168.190.40:7789;
meta-disk internal;
}
on drbd2.cafe24test.com {
device /dev/drbd0;
disk /dev/hdb;
address 192.168.190.41:7789;
meta-disk internal;
}
}
Create the metadata store (node1, node2)
drbd1 # drbdadm create-md drbd
Writing meta data...
initialising activity log
NOT initialized bitmap
New drbd meta data block sucessfully created.
Start drbd (node1, node2)
# service drbd start
Starting DRBD resources: [ d(drbd) s(drbd) n(drbd) ].
Designate node1 as the primary and, from node1, synchronize the target disk of the secondary, node2.
# drbdadm -- --overwrite-data-of-peer primary drbd
Data synchronization proceeds from node1:/dev/drbd0 to node2:/dev/drbd0.
drbd2 # cat /proc/drbd
version: 8.3.0 (api:88/proto:86-89)
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@drbd1.cafe24test.com, 2009-03-20 07:18:34
0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r---
ns:0 nr:2186836 dw:2179028 dr:0 al:0 bm:120 lo:245 pe:110 ua:244 ap:0 ep:1 wo:b oos:3086064
[======>.............] sync'ed: 39.8% (3013/4996)M
finish: 0:13:11 speed: 3,864 (4,604) K/sec
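To follow the sync progress without re-running cat by hand, watch can be used:
# watch -n 1 cat /proc/drbd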
Formatting and mounting the disk: the physical disk in use is /dev/hdb, but drbd maps it to /dev/drbd0 when it starts, so /dev/drbd0 is what must be formatted. This only needs to be done on the primary node, because the state of that disk is synced to the secondary node; there is no need to format it separately on the secondary.
drbd1 # mkfs.ext3 /dev/drbd0 ; mkdir /data ; mount /dev/drbd0 /data
That completes the basic setup.
fail-over test
Create files on the primary node
[root@drbd1 data]# for a in `seq 1 100`
> do
> touch file_$a
> done
[root@drbd1 data]# ls
file_1 file_14 file_2 file_25 file_30 file_36 file_41 file_47 file_52 file_58 file_63 file_69 file_74 file_8 file_85 file_90 file_96 test.img
file_10 file_15 file_20 file_26 file_31 file_37 file_42 file_48 file_53 file_59 file_64 file_7 file_75 file_80 file_86 file_91 file_97
file_100 file_16 file_21 file_27 file_32 file_38 file_43 file_49 file_54 file_6 file_65 file_70 file_76 file_81 file_87 file_92 file_98
file_11 file_17 file_22 file_28 file_33 file_39 file_44 file_5 file_55 file_60 file_66 file_71 file_77 file_82 file_88 file_93 file_99
file_12 file_18 file_23 file_29 file_34 file_4 file_45 file_50 file_56 file_61 file_67 file_72 file_78 file_83 file_89 file_94 lost+found
file_13 file_19 file_24 file_3 file_35 file_40 file_46 file_51 file_57 file_62 file_68 file_73 file_79 file_84 file_9 file_95 postgresql
Swap the roles of the two nodes (primary <-> secondary)
[root@drbd1 data]# umount /data
[root@drbd1 data]# drbdadm secondary drbd
[root@drbd1 ~]# cat /proc/drbd
version: 8.3.0 (api:88/proto:86-89)
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@drbd1.cafe24test.com, 2009-03-20 07:18:34
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---
-----------------
ns:114872 nr:522220 dw:637092 dr:982 al:51 bm:205 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@drbd2]# drbdadm primary drbd
[root@drbd2]# mkdir /data; mount /dev/drbd0 /data
[root@drbd2 ~]# cat /proc/drbd
version: 8.3.0 (api:88/proto:86-89)
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@drbd1.cafe24test.com, 2009-03-20 07:18:34
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---
-----------------
ns:522348 nr:6172776 dw:6175352 dr:529364 al:156 bm:607 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@drbd2]# cd /data
[root@drbd2 data]# ls
file_1 file_14 file_2 file_25 file_30 file_36 file_41 file_47 file_52 file_58 file_63 file_69 file_74 file_8 file_85 file_90 file_96 test.img
file_10 file_15 file_20 file_26 file_31 file_37 file_42 file_48 file_53 file_59 file_64 file_7 file_75 file_80 file_86 file_91 file_97
file_100 file_16 file_21 file_27 file_32 file_38 file_43 file_49 file_54 file_6 file_65 file_70 file_76 file_81 file_87 file_92 file_98
file_11 file_17 file_22 file_28 file_33 file_39 file_44 file_5 file_55 file_60 file_66 file_71 file_77 file_82 file_88 file_93 file_99
file_12 file_18 file_23 file_29 file_34 file_4 file_45 file_50 file_56 file_61 file_67 file_72 file_78 file_83 file_89 file_94 lost+found
file_13 file_19 file_24 file_3 file_35 file_40 file_46 file_51 file_57 file_62 file_68 file_73 file_79 file_84 file_9 file_95 postgresql
HEARTBEAT
Switching between drbd nodes is done manually. Adding heartbeat on top lets the two nodes alive-check each other, so that when the primary node goes down the secondary is brought up automatically.
Installation
Install via yum
# yum install heartbeat heartbeat-devel
Configure /etc/ha.d/ha.cf, haresources, and authkeys. For node names, enter exactly what the uname -n command reports.
[root@drbd1 ha.d]# cat ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 10
warntime 4
initdead 60
udpport 694
#ucast eth0 192.168.190.41
bcast eth0
ping 192.168.190.41 # the peer node's IP
respawn hacluster /usr/lib64/heartbeat/ipfail
auto_failback on
node drbd1.cafe24test.com # nodes must be declared with the primary first
node drbd2.cafe24test.com
[root@drbd1 ha.d]# cat authkeys
auth 2
2 crc
[root@drbd1 ha.d]# chmod 600 authkeys # this step is required
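The crc method above only does integrity checking, not real authentication. If the heartbeat traffic crosses a network you do not fully trust, a keyed method such as sha1 is the usual choice; a minimal example (the key string is just a placeholder):
auth 1
1 sha1 ReplaceWithYourOwnKey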
[root@drbd1 ha.d]# cat haresources
drbd1.cafe24test.com drbddisk::drbd \
Filesystem::/dev/drbd0::/data::ext3 \ # the filesystem to bring up on fail-over
Delay::1::0 \
IPaddr::192.168.190.42/32/eth0:0 # the virtual IP and interface to bring up on fail-over
Start heartbeat on the primary node
[root@drbd1 ha.d]# service heartbeat start
Starting High-Availability services:
[ OK ]
Start heartbeat on the secondary node
[root@drbd2 /]# service heartbeat start
Starting High-Availability services:
[ OK ]
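If drbd and heartbeat should come back automatically after a reboot, they can be enabled on both nodes with chkconfig (assuming the stock init scripts installed by the packages above):
# chkconfig drbd on
# chkconfig heartbeat on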
Verifying the virtual interface
[root@drbd1 ha.d]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:0D:87:83
inet addr:192.168.190.40 Bcast:192.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::216:3eff:fe0d:8783/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:553957 errors:0 dropped:0 overruns:0 frame:0
TX packets:263244 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:584995856 (557.8 MiB) TX bytes:140502168 (133.9 MiB)
Interrupt:169 Base address:0xc000
eth0:0 Link encap:Ethernet HWaddr 00:16:3E:0D:87:83
inet addr:192.168.190.42 Bcast:192.168.190.42 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:169 Base address:0xc000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:142 errors:0 dropped:0 overruns:0 frame:0
TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:54049 (52.7 KiB) TX bytes:54049 (52.7 KiB)
[root@drbd2 /]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:0D:87:84
inet addr:192.168.190.41 Bcast:192.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::216:3eff:fe0d:8784/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4761439 errors:0 dropped:446 overruns:0 frame:0
TX packets:2234962 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6588163762 (6.1 GiB) TX bytes:703858554 (671.2 MiB)
Interrupt:169 Base address:0xc000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:56 errors:0 dropped:0 overruns:0 frame:0
TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:21013 (20.5 KiB) TX bytes:21013 (20.5 KiB)
[root@manticore postgresql-8.3.1]# ssh 192.168.190.42
root@192.168.190.42's password:
Last login: Tue Mar 24 18:16:51 2009 from 192.168.10.52
[root@drbd1 ~]#
fail-over test
Check the state of both nodes
[root@drbd1 ha.d]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda2 4.8G 492M 4.1G 11% /
/dev/hda7 3.7G 72M 3.4G 3% /home
/dev/hda6 2.0G 91M 1.8G 5% /var
/dev/hda5 7.6G 2.0G 5.3G 28% /usr
/dev/hda1 99M 12M 83M 12% /boot
tmpfs 250M 0 250M 0% /dev/shm
/dev/drbd0 4.9G 746M 3.9G 16% /data
[root@drbd2 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda2 4.8G 355M 4.2G 8% /
/dev/hda7 3.7G 72M 3.4G 3% /home
/dev/hda6 2.0G 91M 1.8G 5% /var
/dev/hda5 7.6G 2.0G 5.3G 28% /usr
/dev/hda1 99M 12M 83M 12% /boot
tmpfs 250M 0 250M 0% /dev/shm
[root@manticore postgresql-8.3.1]# ssh 192.168.190.42
root@192.168.190.42's password:
Last login: Tue Mar 24 18:16:51 2009 from 192.168.10.52
[root@drbd1 ~]#
node 1 shutdown
[root@drbd1 ha.d]# service heartbeat stop
Stopping High-Availability services:
[ OK ]
[root@drbd1 ha.d]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda2 4.8G 492M 4.1G 11% /
/dev/hda7 3.7G 72M 3.4G 3% /home
/dev/hda6 2.0G 92M 1.8G 5% /var
/dev/hda5 7.6G 2.0G 5.3G 28% /usr
/dev/hda1 99M 12M 83M 12% /boot
tmpfs 250M 0 250M 0% /dev/shm
[root@drbd1 ha.d]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:0D:87:83
inet addr:192.168.190.40 Bcast:192.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::216:3eff:fe0d:8783/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:556005 errors:0 dropped:0 overruns:0 frame:0
TX packets:263985 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:585179866 (558.0 MiB) TX bytes:140621259 (134.1 MiB)
Interrupt:169 Base address:0xc000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:142 errors:0 dropped:0 overruns:0 frame:0
TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:54049 (52.7 KiB) TX bytes:54049 (52.7 KiB)
[root@drbd2 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda2 4.8G 355M 4.2G 8% /
/dev/hda7 3.7G 72M 3.4G 3% /home
/dev/hda6 2.0G 91M 1.8G 5% /var
/dev/hda5 7.6G 2.0G 5.3G 28% /usr
/dev/hda1 99M 12M 83M 12% /boot
tmpfs 250M 0 250M 0% /dev/shm
/dev/drbd0 4.9G 746M 3.9G 16% /data
[root@drbd2 /]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:0D:87:84
inet addr:192.168.190.41 Bcast:192.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::216:3eff:fe0d:8784/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4763258 errors:0 dropped:446 overruns:0 frame:0
TX packets:2235618 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6588323728 (6.1 GiB) TX bytes:703970528 (671.3 MiB)
Interrupt:169 Base address:0xc000
eth0:0 Link encap:Ethernet HWaddr 00:16:3E:0D:87:84
inet addr:192.168.190.42 Bcast:192.168.190.42 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:169 Base address:0xc000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:56 errors:0 dropped:0 overruns:0 frame:0
TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:21013 (20.5 KiB) TX bytes:21013 (20.5 KiB)
[root@manticore postgresql-8.3.1]# ssh 192.168.190.42
root@192.168.190.42's password:
Last login: Tue Mar 24 19:51:54 2009 from 192.168.10.52
[root@drbd2 ~]#
Fail-over takes roughly 20 seconds.
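That 20 seconds was measured informally. A rough way to time it from a third machine is to watch how long the virtual IP stops answering while heartbeat is stopped on the primary; a minimal sketch, assuming the 192.168.190.42 VIP defined in haresources above:
#!/bin/bash
# Ping the VIP once a second and print a timestamp whenever its
# reachability changes, so the outage window can be read off the output.
VIP=192.168.190.42
prev=up
while true; do
    if ping -c 1 -W 1 $VIP > /dev/null 2>&1; then
        cur=up
    else
        cur=down
    fi
    if [ "$cur" != "$prev" ]; then
        echo "$(date '+%H:%M:%S') VIP is $cur"
        prev=$cur
    fi
    sleep 1
done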
Advantages over wackamole + spread
- The drbd + heartbeat combination offers relatively easy configuration and far better stability.
- drbd + heartbeat is well documented and updated more frequently than wackamole.
Disadvantages
- A performance penalty is unavoidable.
In terms of throughput, there is roughly a 30% drop compared to a local disk (a rough way to measure this yourself is sketched below).
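A crude way to reproduce that comparison is a direct-I/O sequential write on the DRBD-backed mount and on a plain local filesystem, then comparing the rates dd reports (a sketch; paths and sizes are only illustrative):
# 1 GiB sequential write with direct I/O on the replicated volume...
dd if=/dev/zero of=/data/ddtest.img bs=1M count=1024 oflag=direct
# ...and the same on a local, non-replicated filesystem
dd if=/dev/zero of=/home/ddtest.img bs=1M count=1024 oflag=direct
rm -f /data/ddtest.img /home/ddtest.img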
http://www.slideshare.net/jessejajti/hadrbdpostgres-postgreswest-08-presentation