

Adding SPF and DKIM to a Postfix mail server

Google has announced that 91.4% of the mail delivered to Gmail already uses the DKIM or SPF anti-forgery standards.

SPF

Add TXT records in the domain's DNS management panel. The examples below use the mail server mta.c1gstudio.com with IP 208.133.199.7.

1. Point the SPF record at the host's A record:

@ IN TXT "v=spf1 include:mta.c1gstudio.com -all"

2. Add the SPF IP address under the mta record (the mail server's public-facing IP):

mta IN TXT "v=spf1 ip4:208.133.199.7 -all"

Or, for an IP range:

mta IN TXT "v=spf1 ip4:208.133.199.0/24 -all"

Verify the record with nslookup. On Windows, open cmd and run nslookup, then set the query type to txt (SPF records are published as TXT records): set type=txt. Next, enter the domain to query; here it is 163.com (if the mail address were [email protected], the domain to query would be yy.com). In the output, the "include" keyword shows that 163.com's SPF record is contained in the TXT record of spf.163.com, so query the TXT record of spf.163.com next. In the final result, check the listed IPs and confirm they cover all of your sending IPs.
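The manual nslookup walk-through above can also be scripted. A minimal sketch of the final check, assuming a plain ip4 mechanism with no CIDR ranges and no include resolution; the record string is this post's example inlined, where a real script would fetch it (for example with dig +short TXT mta.c1gstudio.com):

```shell
#!/bin/sh
# SPF record published above, inlined for the example
spf='v=spf1 ip4:208.133.199.7 -all'
client_ip='208.133.199.7'

result=fail
for mech in $spf; do
  case $mech in
    "ip4:$client_ip") result=pass ;;  # exact ip4 match only; CIDR is not handled here
  esac
done
echo "$result"
```

For the example record this prints pass; any client IP not listed falls through to fail.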

After sending mail to Gmail, inspect the message headers; Received-SPF: pass means the check passed:

Received-SPF: neutral (google.com: 60.195.249.163 is neither permitted nor denied by domain of [email protected])
Received-SPF: pass (google.com: domain of #####@yeah.net designates 60.12.227.137 as permitted sender)

That completes the SPF setup.

DKIM

Installing OpenDKIM from the EPEL repository is quick and convenient.

For RHEL5/CentOS5:
rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
For RHEL6/CentOS6:
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Installing opendkim creates an opendkim user and group, so if the password file /etc/passwd is locked (chattr +i), remember to unlock it first.

yum clean all
yum install opendkim

Installing:
 opendkim      x86_64  2.8.4-1.el5   epel  262 k
Installing for dependencies:
 ldns          x86_64  1.6.16-1.el5  epel  507 k
 libopendkim   x86_64  2.8.4-1.el5   epel   75 k
 unbound-libs  x86_64  1.4.20-2.el5  epel  258 k

After installation, the following command generates a key pair in the current directory: default.private (the private key) and default.txt (the public key record).

1. Generate the keys:

cd /etc/opendkim/keys
opendkim-genkey -r -d mta.c1gstudio.com

default.txt will contain something like:

default._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDTEMk2ow/WZuQheCMxZrowErith/LVsaWLVhkkrONT2S4LZcuuVbgCkr2EGaoJUNjc/Ztu0Z47RPbLDkGGjtCwy6VnUeKWh7ijwu2+AHyq6mJIn33z7TGz90Io0o30PjQLSeO1E/ozRpBSljvJMikYUDYfZpWZSxcIhtOetOp8mwIDAQAB" ) ; ----- DKIM key default for mta.c1gstudio.com

2. Fix ownership of the private key: chown opendkim default.private

3. Edit the configuration file: vi /etc/opendkim.conf

Mode sv
UserID opendkim:opendkim
Socket inet:8891@localhost
Canonicalization relaxed/simple
Domain mta.c1gstudio.com
KeyFile /etc/opendkim/keys/default.private
KeyTable refile:/etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
InternalHosts refile:/etc/opendkim/TrustedHosts
SyslogSuccess No
LogWhy No

4. Trusted sending IP addresses: vi /etc/opendkim/TrustedHosts

127.0.0.1
#hostname1.example1.com
192.168.1.0/24

5. Private key table: vi /etc/opendkim/KeyTable

default._domainkey.mta.c1gstudio.com mta.c1gstudio.com:default:/etc/opendkim/keys/default.private

6. The SigningTable lets multiple domains choose which key to sign with: vi /etc/opendkim/SigningTable

*@mta.c1gstudio.com [email protected]
*@c1gstudio.com [email protected]
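For reference, a SigningTable entry normally maps a sender pattern to a key name defined in the KeyTable. A hedged sketch of what signing a hypothetical second domain could look like (example.org, its selector, and its key path are assumptions for illustration, not part of the original setup):

```
# /etc/opendkim/KeyTable
default._domainkey.mta.c1gstudio.com  mta.c1gstudio.com:default:/etc/opendkim/keys/default.private
default._domainkey.example.org        example.org:default:/etc/opendkim/keys/example.org.private

# /etc/opendkim/SigningTable
*@mta.c1gstudio.com  default._domainkey.mta.c1gstudio.com
*@example.org        default._domainkey.example.org
```

Each domain then publishes its own selector record in DNS, exactly as done for mta.c1gstudio.com below.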

7. Start OpenDKIM:

/etc/init.d/opendkim start
Starting OpenDKIM Milter: [ OK ]

Enable it at boot: chkconfig opendkim on

8. Edit the Postfix configuration and append at the end (vi /etc/postfix/main.cf):

#################dkim###################
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891
milter_default_action = accept
milter_protocol = 2

If the Postfix version is older than 2.6, milter_protocol = 2 must be set.

9. Reload Postfix: /usr/local/postfix/sbin/postfix reload

10. Configure DNS: add the public key to the TXT records of mta.c1gstudio.com:

default._domainkey IN TXT ( "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDTEMk2ow/WZuQheCMxZrowErith/LVsaWLVhkkrONT2S4LZcuuVbgCkr2EGaoJUNjc/Ztu0Z47RPbLDkGGjtCwy6VnUeKWh7ijwu2+AHyq6mJIn33z7TGz90Io0o30PjQLSeO1E/ozRpBSljvJMikYUDYfZpWZSxcIhtOetOp8mwIDAQAB" )
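Some DNS panels want the key as one continuous string rather than the multi-chunk BIND syntax above. A small sketch that joins the quoted chunks of a generated record into a single TXT value; the record is inlined here with the key shortened to the placeholder MIGfMA0...IDAQAB for brevity:

```shell
#!/bin/sh
# Two-chunk TXT record as emitted by opendkim-genkey (key shortened for the example)
record='default._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; "
  "p=MIGfMA0...IDAQAB" )'
# Pull out the quoted chunks and concatenate them into one value
txt=$(echo "$record" | grep -o '"[^"]*"' | tr -d '"' | tr -d '\n')
echo "$txt"
```

For the sample record this prints the chunks joined as a single string, ready to paste into a panel that accepts only one value per TXT record.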

A gripe: the domain panel at 商务中国 (bizcn) cannot create host names containing an underscore ("_"), so I moved DNS to dnspod.

11. Verify the domain at http://dkimcore.org/tools/ ("Check a published DKIM Core Key"): Selector: default, Domain name: mta.c1gstudio.com

12. Send a test mail and watch the log:

tail -f /var/log/maillog
Dec 12 17:36:48 c1g opendkim[30546]: 9F36E1A181C8: DKIM-Signature field added (s=default, d=mta.c1gstudio.com)

Then send mail to Gmail and Hotmail and check whether DKIM signed it. In the headers of a mail sent to Gmail, you should see spf=pass and dkim=pass:

Received-SPF: pass (google.com: domain of [email protected] designates 208.133.199.7 as permitted sender) client-ip=208.133.199.7;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 208.133.199.7 as permitted sender) [email protected]; dkim=pass [email protected]
Received: from c1g (c1g [208.133.199.7]) by mta.c1gstudio.com (Postfix) with ESMTP id C3BB21A1825C for ; Fri, 13 Dec 2013 10:47:02 +0800 (CST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=mta.c1gstudio.com; s=default; t=1386902822; bh=spJ3t+Wlf7qUYsyf9zBZsdaztFW9Vj9zn2URNGhfZ3o=; h=Date:From:To:Subject; b=R7gyGPsyKqNq3ceDTQYq958qRkdJE04xKCCKOseCb/Gi5v9rLr6jyZyUWsP5PstqX WhfTjErGippcxJEe84B4yTOE0gYDRJB//I8pEF+jux/qF7JhmIc3+Cs/yqI powsNbGdwkJaB3WlCPwroBk88/wP8W64tWda4oyGTVYZzNU=

Troubleshooting:

1. tail -f /var/log/maillog shows "opendkim: no signature data": nothing is being signed; set the mode in opendkim.conf to sign-and-verify:
Mode sv

2. Dec 13 16:15:03 localhost opendkim[3248]: can't load key from /etc/opendkim/keys/default.private: Permission denied
cd /etc/opendkim/keys/
chown opendkim default.private

References:
http://mail.163.com/hd/all/chengxin/3.htm
http://stevejenkins.com/blog/2011/08/installing-opendkim-rpm-via-yum-with-postfix-or-sendmail-for-rhel-centos-fedora/
http://www.banping.com/2011/07/19/postfix-dkim/
http://www.info110.com/mailserver/in27377-1.htm



Benchmarking server performance with UnixBench

UnixBench is a solid performance benchmarking tool for Linux.

Test machine configuration:

Dell R410: E5620 (4 cores, 2.4 GHz) x2, 8 GB x2, 15K SAS 300 GB x2 (no RAID), 7.2K SAS 2 TB x1
CentOS 5.8 64-bit
Linux LocalIm 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

wget http://byte-unixbench.googlecode.com/files/unixbench-5.1.2.tar.gz
tar zxvf unixbench-5.1.2.tar.gz
cd unixbench-5.1.2
make
./Run

UnixBench then runs its test suite; how long it takes depends on the number of CPU cores. This run took roughly an hour.

make all
make[1]: Entering directory `/root/src/toolkits/unixbench-5.1.2'
Checking distribution of files
./pgms exists; ./src exists; ./testdir exists; ./tmp exists; ./results exists
make[1]: Leaving directory `/root/src/toolkits/unixbench-5.1.2'
sh: 3dinfo: command not found

[UnixBench ASCII banner]
Version 5.1.2  Based on the Byte Magazine Unix Benchmark
Multi-CPU version  Version 5 revisions by Ian Smith, Sunnyvale, CA, USA  December 22, 2007

The suite runs each test first with 1 copy, then with one copy per CPU.

BYTE UNIX Benchmarks (Version 5.1.2)
System: Impala: GNU/Linux
OS: GNU/Linux -- 2.6.18-308.el5 -- #1 SMP Tue Feb 21 20:06:06 EST 2012
Machine: x86_64 (x86_64); Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0-15: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (approx. 4788 bogomips each)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
14:24:24 up 324 days, 22:35, 1 user, load average: 0.07, 0.02, 0.06; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Fri Nov 08 2013 14:24:24 - 14:52:38
16 CPUs in system; running 1 parallel copy of tests

System Benchmarks Index Values                BASELINE       RESULT    INDEX
Dhrystone 2 using register variables          116700.0   14762933.9   1265.0
Double-Precision Whetstone                        55.0       1509.6    274.5
Execl Throughput                                  43.0       2427.3    564.5
File Copy 1024 bufsize 2000 maxblocks           3960.0     597721.9   1509.4
File Copy 256 bufsize 500 maxblocks             1655.0     168768.0   1019.7
File Copy 4096 bufsize 8000 maxblocks           5800.0    1430654.0   2466.6
Pipe Throughput                                12440.0    1166940.5    938.1
Pipe-based Context Switching                    4000.0     105565.7    263.9
Process Creation                                 126.0       8799.5    698.4
Shell Scripts (1 concurrent)                      42.4       5298.7   1249.7
Shell Scripts (8 concurrent)                       6.0       2831.9   4719.8
System Call Overhead                           15000.0     943466.8    629.0
                                                                     ========
System Benchmarks Index Score                                          940.2

------------------------------------------------------------------------
Benchmark Run: Fri Nov 08 2013 14:52:38 - 15:21:02
16 CPUs in system; running 16 parallel copies of tests

System Benchmarks Index Values                BASELINE       RESULT    INDEX
Dhrystone 2 using register variables          116700.0  109893471.8   9416.7
Double-Precision Whetstone                        55.0      19203.2   3491.5
Execl Throughput                                  43.0      40389.3   9392.9
File Copy 1024 bufsize 2000 maxblocks           3960.0     232275.2    586.6
File Copy 256 bufsize 500 maxblocks             1655.0      66058.1    399.1
File Copy 4096 bufsize 8000 maxblocks           5800.0     701683.9   1209.8
Pipe Throughput                                12440.0   11422586.8   9182.1
Pipe-based Context Switching                    4000.0    2899346.4   7248.4
Process Creation                                 126.0     115952.7   9202.6
Shell Scripts (1 concurrent)                      42.4      42915.9  10121.7
Shell Scripts (8 concurrent)                       6.0       8344.0  13906.7
System Call Overhead                           15000.0    5982682.0   3988.5
                                                                     ========
System Benchmarks Index Score                                         4199.4

Second machine: Dell R420, E5-2420 (6 cores, 1.9 GHz) x2, 8 GB x2, 15K SAS 300 GB x2, no RAID
CentOS 5.8 64-bit
Linux loclaP 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

BYTE UNIX Benchmarks (Version 5.1.2)
System: ParkAvenue: GNU/Linux
OS: GNU/Linux -- 2.6.18-308.el5 -- #1 SMP Tue Feb 21 20:06:06 EST 2012
Machine: x86_64 (x86_64); Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0-23: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (approx. 3800 bogomips each)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
12:01:45 up 21:39, 1 user, load average: 0.07, 0.05, 0.01; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Fri Dec 06 2013 12:01:45 - 12:29:21
24 CPUs in system; running 1 parallel copy of tests

System Benchmarks Index Values                BASELINE       RESULT    INDEX
Dhrystone 2 using register variables          116700.0   13721062.3   1175.8
Double-Precision Whetstone                        55.0       2121.9    385.8
Execl Throughput                                  43.0       2052.4    477.3
File Copy 1024 bufsize 2000 maxblocks           3960.0     567410.8   1432.9
File Copy 256 bufsize 500 maxblocks             1655.0     157374.9    950.9
File Copy 4096 bufsize 8000 maxblocks           5800.0    1499112.8   2584.7
Pipe Throughput                                12440.0     966163.2    776.7
Pipe-based Context Switching                    4000.0      64456.9    161.1
Process Creation                                 126.0       6327.8    502.2
Shell Scripts (1 concurrent)                      42.4       4121.4    972.0
Shell Scripts (8 concurrent)                       6.0       2182.9   3638.1
System Call Overhead                           15000.0     801849.7    534.6
                                                                     ========
System Benchmarks Index Score                                          818.6



How to stop Postfix from sending mail to certain domains

In the mail log you can see invalid destination domains whose mail can simply be dropped:

5305 example.com[93.184.216.119]:25: Connection timed out
1265 mail.qq.co[50.23.198.74]:25: Connection refused
 394 Host not found, try again
 374 qqq.com[98.126.157.163]:25: Connection refused

Set up a check_recipient_access access table by adding check_recipient_access hash:/etc/postfix/recipient_access to the smtpd_recipient_restrictions parameter. For example:

smtpd_recipient_restrictions =
    check_recipient_access hash:/etc/postfix/recipient_access,
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_invalid_hostname,
    reject_non_fqdn_hostname,
    reject_unknown_sender_domain,
    reject_non_fqdn_sender,
    reject_non_fqdn_recipient,
    reject_unknown_recipient_domain,
    reject_unauth_pipelining,
    reject_unauth_destination

/etc/postfix/recipient_access contains:

# a specific address
[email protected] REJECT
# access control for a whole domain
example.com REJECT

Run postmap /etc/postfix/recipient_access, then postfix reload.
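For a recipient like someone@example.com, Postfix consults the access map with several keys in turn: the full address first, then the bare domain. A pure-shell sketch of that lookup order (someone@ is a hypothetical address; the real map can be queried with postmap -q example.com hash:/etc/postfix/recipient_access):

```shell
#!/bin/sh
# Simulated access map containing only the whole-domain entry from above
lookup() {
  case $1 in
    example.com) echo REJECT ;;  # whole-domain entry
    *)           echo DUNNO  ;;  # no entry: evaluation continues with later restrictions
  esac
}
lookup someone@example.com  # full-address key: no entry
lookup example.com          # domain key: matches, so the mail is rejected
```

Because the domain key matches, every recipient in example.com is rejected without needing per-address entries.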



Monitoring memcached with Nagios

The check_memcached plugin for Nagios can be downloaded from:

http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz

The script is written in Perl, so first make sure a Perl environment is available. Installation:

#wget http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz
#tar xzvf Nagios-Plugins-Memcached-0.02.tar.gz
#cd Nagios-Plugins-Memcached-0.02
#perl Makefile.PL

This step prints some prompts; pressing Enter through all of them is fine.

#make

This downloads some runtime dependencies.

#make install

By default check_memcached is installed as /usr/bin/check_memcached; symlink it into the Nagios libexec directory:

ln -s /usr/bin/check_memcached /usr/local/nagios/libexec/

Test it:

/usr/local/nagios/libexec/check_memcached -H 192.168.0.40:12111
MEMCACHED OK - OK

If it complains that Nagios::Plugin is missing, install the Perl module via CPAN:

perl -MCPAN -e shell
install File::Slurp
install YAML
install HTML::Parser
install URI
install Compress::Zlib
install Module::Runtime
install Module::Implementation
install Attribute::Handlers
install Params::Validate
install Nagios::Plugin

(To reconfigure CPAN, the command is "o conf init".)

Once the test passes, edit the Nagios commands.cfg file and add:

define command {
    command_name check_memcached_response
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ -w $ARG3$ -c $ARG4$
}
define command {
    command_name check_memcached_size
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --size-warning $ARG3$ --size-critical $ARG4$
}
define command {
    command_name check_memcached_hit
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --hit-warning $ARG3$ --hit-critical $ARG4$
}

Then, in the cfg file of the host to be monitored, add:

define service{
    use local-service,srv-pnp ; Name of service template to use
    host_name memcachedserver
    service_description Memcached_response
    check_command check_memcached_response!192.168.0.40!12111!300!500
}
define service{
    use local-service,srv-pnp ; Name of service template to use
    host_name memcachedserver
    service_description Memcached_size
    check_command check_memcached_size!192.168.0.40!12111!90!95
}
define service{
    use local-service,srv-pnp ; Name of service template to use
    host_name memcachedserver
    service_description Memcached_hit
    check_command check_memcached_hit!192.168.0.40!12111!10!5
}

After /etc/init.d/nagios reload, the new services show up. Note that this plugin does not currently support PNP graphing.

References:
http://simblog.vicp.net/?p=250
http://blog.csdn.net/deccmtd/article/details/6799647



svn Compression of svndiff data failed

svn commits were failing with the error: Compression of svndiff data failed

Investigation showed the SVN server's Apache was leaking memory; restarting it fixed the problem.



High availability with drbd + xfs + heartbeat + mysql

I. Introduction

DRBD is a block device designed for high availability (HA); it works like network RAID-1. When you write data to the local filesystem, the data is also sent to another host on the network and recorded there in the same form. The local node (primary) and the remote host (secondary) stay in sync in real time; if the local system fails, the remote host still holds an identical copy of the data, and service can continue. Heartbeat is one of the packages freely available from the Linux-HA project web site; it provides the basic functionality every HA system needs, such as starting and stopping resources, monitoring the availability of systems in the cluster, and transferring ownership of the shared IP address between cluster nodes.

OS: CentOS 5.8 64-bit, with drbd83, xfsprogs, Heartbeat 2.1.3, and Percona-Server-5.5.22-rel25.2.

Resource allocation:
192.168.0.45 mysql1
192.168.0.43 mysql2
192.168.0.46 vip
DRBD data directory: /data

MySQL runs in master-master replication with slaves attached behind, for high availability.

II. Installing XFS support

XFS's main features:

Data integrity: because the filesystem is journaled, files on disk are no longer corrupted by an unexpected crash; however much data the filesystem holds, it can use the recorded journal to restore consistency very quickly.
Performance: XFS uses optimized algorithms, so journaling has very little impact on overall file operations. Space lookup and allocation are very fast, and response times stay consistently low; in my own tests of XFS, JFS, Ext3, and ReiserFS, XFS's performance stood out.
Scalability: XFS is a fully 64-bit filesystem and can address millions of terabytes of storage. It handles both very large files and masses of small files well, and supports very large numbers of directory entries. The maximum file size is 2^63 bytes, about 9 x 10^18 bytes or 9 exabytes, and the maximum filesystem size is 18 exabytes. XFS keeps its metadata in B+ trees, guaranteeing fast searches and fast space allocation, and performance is not limited by the number of files or directories.
Bandwidth: XFS can store data at close to raw-device I/O speed; in single-filesystem tests its throughput reached up to 7 GB/s, and single-file read/write up to 4 GB/s.

XFS is not supported by a default system install, so you need to install the packages yourself.

rpm -qa | grep xfs
xorg-x11-xfs-1.0.2-5.el5_6.1
(note: xorg-x11-xfs is the X font server, not the XFS filesystem)

yum install xfsprogs kmod-xfs

Load the module and check the current layout:

modprobe xfs     # load the xfs filesystem module; if that fails, reboot the server
lsmod | grep xfs # confirm the xfs module is loaded

df -h -P -T

Filesystem                      Type  Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02 ext3   97G  3.7G   89G   5% /
/dev/mapper/VolGroup00-LogVol01 ext3  9.7G  151M  9.1G   2% /tmp
/dev/mapper/VolGroup00-LogVol04 ext3  243G  470M  230G   1% /opt
/dev/mapper/VolGroup00-LogVol03 ext3   39G  436M   37G   2% /var
/dev/sda1                       ext3   99M   13M   81M  14% /boot
tmpfs                           tmpfs 7.9G     0  7.9G   0% /dev/shm

vgdisplay

--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  6
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                5
Open LV               5
Max PV                0
Cur PV                1
Act PV                1
VG Size               1.09 TB
PE Size               32.00 MB
Total PE              35692
Alloc PE / Size       13056 / 408.00 GB
Free PE / Size        22636 / 707.38 GB
VG UUID               7nAuuH-cpyx-u2Jr-bNNJ-fGPe-TUKg-T4KmmU

Carve out a 300 GB data partition:

lvcreate -L 300G -n DRBD VolGroup00

Logical volume "DRBD" created
(the new device is /dev/VolGroup00/DRBD)

III. Installing and configuring DRBD

1. Set the hostname

vi /etc/sysconfig/network and change HOSTNAME to node1.

Edit the hosts file (vi /etc/hosts) and add:

192.168.0.45 node1
192.168.0.43 node2

Make the node1 hostname take effect immediately:

hostname node1

Set up node2 the same way.

2. Install DRBD: yum install drbd83 kmod-drbd83

Installing:
 drbd83       x86_64  8.3.15-2.el5.centos  extras  243 k
 kmod-drbd83  x86_64  8.3.15-3.el5.centos  extras

Heartbeat can be installed at the same time: yum install -y heartbeat heartbeat-ldirectord heartbeat-pils heartbeat-stonith

Installing:
 heartbeat             x86_64  2.1.3-3.el5.centos  extras  1.8 M
 heartbeat-ldirectord  x86_64  2.1.3-3.el5.centos  extras  199 k
 heartbeat-pils        x86_64  2.1.3-3.el5.centos  extras  220 k
 heartbeat-stonith     x86_64  2.1.3-3.el5.centos  extras  348 k

3. Configure DRBD. On the master, edit global_common.conf (in the /etc/drbd.d directory), then scp it to the same directory on the slave.

vi /etc/drbd.d/global_common.conf

global {
    usage-count yes;
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when choosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        wfc-timeout 15;
        degr-wfc-timeout 15;
        outdated-wfc-timeout 15;
    }
    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
        on-io-error detach;
        fencing resource-and-stonith;
    }
    net {
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        timeout 60;
        connect-int 15;
        ping-int 15;
        ping-timeout 50;
        max-buffers 8192;
        ko-count 100;
        cram-hmac-alg sha1;
        shared-secret "mypassword123";
    }
    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        rate 100M;
        al-extents 512;
        csums-alg sha1;
    }
}

4. Configure the replicated resource: vi /etc/drbd.d/share.res

resource share {
    meta-disk internal;
    # the device parameter must end in a digit (used for global minor-count),
    # otherwise drbd errors out; this is the DRBD device exposed to applications
    device /dev/drbd0;
    disk /dev/VolGroup00/DRBD;
    # note: host names in the drbd configuration are case-sensitive!
    on node1 {
    #on Locrosse {
        address 192.168.0.45:9876;
    }
    on node2 {
    #on Regal {
        address 192.168.0.43:9876;
    }
}

5. Copy the files to the slave:

scp -P 22 /etc/drbd.d/share.res /etc/drbd.d/global_common.conf [email protected]:.

6. Create the DRBD metadata on both machines with drbdadm create-md share ("share" is the resource name from share.res above; the same name is used throughout below). The first attempt fails:

drbdadm create-md share
no resources defined!

The configuration files must be included from drbd.conf (vi /etc/drbd.conf):

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

drbdadm create-md share

md_offset 322122543104
al_offset 322122510336
bm_offset 322112679936
Found xfs filesystem
 314572800 kB data area apparently used
 314563164 kB left usable by current configuration
Device size would be truncated, which would corrupt data and result in 'access beyond end of device' errors.
You need to either
 * use external meta data (recommended)
 * shrink that filesystem first
 * zero out the device (destroy the filesystem)
Operation refused.
Command 'drbdmeta 0 v08 /dev/VolGroup00/DRBD internal create-md' terminated with exit code 40
drbdadm create-md share: exited with code 40

Fix: zero out the start of the device to destroy the old filesystem signature:

dd if=/dev/zero bs=1M count=1 of=/dev/VolGroup00/DRBD

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00221 seconds, 474 MB/s

drbdadm create-md share

Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

7. Add an iptables rule:

iptables -A INPUT -p tcp -m tcp -s 192.168.0.0/24 --dport 9876 -j ACCEPT

8. Start DRBD on both machines: service drbd start or /etc/init.d/drbd start

9. Check the status with service drbd status (or /etc/init.d/drbd status), and watch sync progress with cat /proc/drbd or drbd-overview. The first sync is slow; DRBD is only usable once it completes. If drbd is not running on the peer, or iptables is blocking the port, the status looks like this:

service drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs            ro                 ds                     p  mounted  fstype
0:share  WFConnection  Secondary/Unknown  Inconsistent/DUnknown  C

Once the peers connect successfully:

/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs         ro                   ds                         p  mounted  fstype
0:share  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

10. Promote the primary node. To make node1 primary for the first time, use drbdsetup /dev/drbd0 primary -o (or drbdadm --overwrite-data-of-peer primary share); a plain drbdadm primary share will not work for the initial promotion. Then check the status again:

drbdsetup /dev/drbd0 primary -o
/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs          ro                 ds                     p  mounted  fstype
...      sync'ed: 1.9% (301468/307188)M
0:share  SyncSource  Primary/Secondary  UpToDate/Inconsistent  C

11. Format as XFS. On node1, create an xfs filesystem on the DRBD partition (/dev/drbd0) with the label drbd; if mkfs.xfs is missing, run yum install xfsprogs first.

mkfs.xfs -L drbd /dev/drbd0

meta-data=/dev/drbd0    isize=256    agcount=16, agsize=4915049 blks
         =              sectsz=512   attr=0
data     =              bsize=4096   blocks=78640784, imaxpct=25
         =              sunit=0      swidth=0 blks, unwritten=1
naming   =version 2     bsize=4096
log      =internal log  bsize=4096   blocks=32768, version=1
         =              sectsz=512   sunit=0 blks, lazy-count=0
realtime =none          extsz=4096   blocks=0, rtextents=0

Mount the DRBD partition locally (only possible on the primary node). First create the mount point (mkdir /data), then mount /dev/drbd0 /data. The DRBD partition is now ready to use.

12.测试,主从切换 首先在master节点/data文件夹中新建test文档touch /data/test,卸载DRBD分区/dev/drbd0, umount /dev/drbd0,把primary节点降为secondary节点drbdadm secondary share touch /data/test umount /dev/drbd0 drbdadm secondary share

在slave节点上,把secondary升级为paimary节点 drbdadm primary share,创建/data文件夹 mkdir /data,挂载DRBD分区/dev/drbd0到本地 mount /dev/drbd0 /data,然后查看/data文件夹如果发现test文件则DRBD安装成功。 drbdadm primary share mkdir /data mount /dev/drbd0 /data ll /data

/etc/init.d/drbd status

drbd driver loaded OK; device status: version: 8.3.15 (api:88/proto:86-97) GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26 m:res cs ro ds p mounted fstype 0:share Connected Primary/Secondary UpToDate/UpToDate C /data xfs

Switchover recap. On the master, unmount first, then demote the primary to secondary:

umount /dev/drbd0
drbdadm secondary share

On the slave:

drbdadm primary share
mount /dev/drbd0 /data

The slave is now the primary.
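The recap above can be collected into two small helper functions; this is only a sketch, assuming the resource name share, the device /dev/drbd0 and the mount point /data used throughout this article:

```shell
# Minimal DRBD switchover helpers (resource "share", mount point /data).
# Run drbd_demote on the old primary first, then drbd_promote on the new one.
drbd_demote() {
    umount /dev/drbd0           # DRBD refuses to demote a mounted device
    drbdadm secondary share
}
drbd_promote() {
    drbdadm primary share
    mount /dev/drbd0 /data
}
```

In the heartbeat setup below this ordering is exactly what the drbddisk and Filesystem resources automate.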

IV. Install and configure heartbeat

====================

1. Manual build (a failed attempt) 1.1. Install libnet from http://sourceforge.net/projects/libnet-dev/ (quote the URL so the shell does not eat the ? and &):

wget "http://downloads.sourceforge.net/project/libnet-dev/libnet-1.2-rc2.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibnet-dev%2F&ts=1368605826&use_mirror=jaist"

tar zxvf libnet-1.2-rc2.tar.gz
cd libnet
./configure
make && make install

1.2. Unlock the account files:

chattr -i /etc/passwd /etc/shadow /etc/group /etc/gshadow /etc/services

1.3. Add the group and user:

groupadd haclient
useradd -g haclient hacluster

1.4. Verify the result:

tail /etc/passwd
hacluster:x:498:496::/var/lib/heartbeat/cores/hacluster:/bin/bash

tail /etc/shadow hacluster:!!:15840:0:99999:7:::

tail /etc/gshadow haclient:!::

tail /etc/group haclient:x:496:

1.5. Install heartbeat:

cd ..
wget http://www.ultramonkey.org/download/heartbeat/2.1.3/heartbeat-2.1.3.tar.gz

tar -zxvf heartbeat-2.1.3.tar.gz
cd heartbeat-2.1.3
./ConfigureMe configure
make && make install

The build dies on this error and I could not get past it:

/usr/local/lib/libltdl.a: could not read symbols:

2. Install heartbeat with yum instead:

yum install libtool-ltdl-devel
yum install heartbeat

Three files need to be configured: ha.cf (the monitoring configuration), haresources (resource management), and authkeys (authentication for the heartbeat link).

3. Sync the clocks on both nodes: ntpdate -d cn.pool.ntp.org

4. Add an iptables rule:

iptables -A INPUT -p udp -m udp --dport 694 -s 192.168.0.0/24 -j ACCEPT

5. Configure ha.cf. IP assignments:

192.168.0.45 node1
192.168.0.43 node2
192.168.0.46 vip

vi /etc/ha.d/ha.cf

debugfile /var/log/ha-debug      # enable the debug log
keepalive 2                      # check the heartbeat link every 2 seconds
deadtime 10                      # declare the peer dead after 10 seconds without heartbeats
warntime 6                       # warning time (best kept between 2 and 10)
initdead 120                     # at startup, wait 120 seconds before starting any resources
udpport 694                      # heartbeat over UDP port 694
ucast eth0 192.168.0.43          # unicast heartbeats (each node lists the other's IP)
node node1                       # the master (the hostname from uname -n, not a domain name)
node node2                       # the backup (the hostname from uname -n, not a domain name)
auto_failback on                 # fail back automatically when the master recovers; better left off
respawn hacluster /usr/lib64/heartbeat/ipfail   # restart ipfail if it dies; /lib64 on 64-bit, /lib on 32-bit

6. Configure authkeys: vi /etc/ha.d/authkeys and write:

auth 1
1 crc

chmod 600 /etc/ha.d/authkeys

7. Configure haresources:

mkdir /usr/lib/heartbeat

vi /etc/ha.d/haresources and write:

node1 IPaddr::192.168.0.46/24/eth0 drbddisk::share Filesystem::/dev/drbd0::/data::xfs mysqld

node1 is the master's hostname; IPaddr::192.168.0.46/24/eth0 sets the virtual (public-facing) IP; drbddisk::share manages the DRBD resource named share; Filesystem::/dev/drbd0::/data::xfs performs the mount and umount. node2's configuration is essentially the same, except that in its ha.cf 192.168.0.43 becomes 192.168.0.45. mysqld is the service set up below; multiple resources are listed separated by spaces.

8. Configure the mysql service. Copy mysql.server out of the mysql build tree:

cd <mysql build directory>
cp support-files/mysql.server /etc/init.d/mysqld
chown root:root /etc/init.d/mysqld
chmod 700 /etc/init.d/mysqld

Point mysql's data directory and binary-log directory at the DRBD resource:

vi /opt/mysql/my.cnf

socket = /opt/mysql/mysql.sock
pid-file = /opt/mysql/var/mysql.pid
log = /opt/mysql/var/mysql.log
datadir = /data/mysql/var/
log-bin = /data/mysql/var/mysql-bin

9. Test automatic DRBD failover. Start heartbeat on node1 first, then on node2. Once node2 is fully up, node1 configures the virtual IP, starts drbd and promotes itself to primary, and mounts /dev/drbd0 on /data. The start command is: service heartbeat start. Running ip a now shows the extra address 192.168.0.46, which is the virtual IP; cat /proc/drbd shows the primary/secondary state; df -h shows /dev/drbd0 mounted on /data. To test automatic failover, stop heartbeat on node1 or unplug its network, wait a few seconds, and check the state on node2. Then restore node1's heartbeat service or network connection and check its state again.

Failover test:

service heartbeat standby

This moves the resources listed in haresources cleanly from node1 to node2. cat /proc/drbd on both machines shows the roles have swapped, and the mysql service has come up on the slave. The whole switchover can be followed in the heartbeat logs on both machines: cat /var/log/ha-log

Results: after heartbeat stop or standby on node1, resources move to the slave; when heartbeat starts on node1 again they automatically fail back. So to shut heartbeat down cleanly, stop it on the slave first, otherwise the resources will move over to the slave.

Error 1: /usr/lib/heartbeat/ipfail is not executable. On 64-bit systems the path is lib64.
Error 2: ERROR: Bad permissions on keyfile [/etc/ha.d/authkeys], 600 recommended. Fix with chmod 600 /etc/ha.d/authkeys

=================

A quick write-performance test: ext3 vs. xfs

SAS 300G 15k disks, four disks in RAID 0, stripe element size 256K (see http://www.wmarow.com/strcalc/). Read policy: Adaptive. Write policy: write back. BBU enabled.

Managed with LVM.

time dd if=/dev/zero of=/opt/c1gfile bs=1024 count=20000000

20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 95.6601 seconds, 214 MB/s

real 1m35.950s user 0m9.389s sys 1m19.322s

time dd if=/dev/zero of=/data/c1gfile bs=1024 count=20000000

20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 45.6391 seconds, 449 MB/s

real 0m45.652s user 0m7.663s sys 0m37.922s
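The throughput figures dd prints can be cross-checked from the totals (dd uses decimal megabytes); assuming, as the sections above imply, that /opt sits on the ext3 volume and /data on the xfs one:

```shell
# 20480000000 bytes written in 45.6391 s (xfs) and 95.6601 s (ext3), from above
awk 'BEGIN { printf "xfs:  %.0f MB/s\n", 20480000000 / 45.6391 / 1000000 }'
awk 'BEGIN { printf "ext3: %.0f MB/s\n", 20480000000 / 95.6601 / 1000000 }'
```

Note bs=1024 makes dd issue twenty million tiny writes, so this mostly measures the page cache and writeback, not raw disk speed.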

References:

http://blog.csdn.net/rzhzhz/article/details/7107115 http://www.51osos.com/a/MySQL/jiagouyouhua/jiagouyouhua/2010/1122/Heartbeat.html http://bbs.linuxtone.org/thread-7413-1-1.html http://www.centos.bz/2012/03/achieve-drbd-high-availability-with-heartbeat/ http://www.doc88.com/p-714757400456.html http://icarusli.iteye.com/blog/1751435 http://www.mysqlops.com/2012/03/10/dba%e7%9a%84%e4%ba%b2%e4%bb%ac%e5%ba%94%e8%af%a5%e7%9f%a5%e9%81%93%e7%9a%84raid%e5%8d%a1%e7%9f%a5%e8%af%86.html http://www.mysqlsystems.com/2010/11/drbd-heartbeat-make-mysql-ha.html http://www.wenzizone.cn/?p=234

Posted in 高可用/集群.



nginx 504 Connection timed out caused by an internal-network misconfiguration

The architecture: nginx_cache -> nginx pool -> php pool -> memcache pool -> mysql pool

Symptom: refreshing forum pages at every round ten minutes of the hour (xx:10, xx:20, and so on) returned 502 or 504 errors; one more refresh and the page rendered fine.

The nginx access log had 504 entries:

tail -f /var/log/nginx/bbs.c1gstudio.com.log | grep '" 504'

111.166.167.206 - - [28/Apr/2013:15:30:16 +0800] "GET /forum.php?mod=ajax&action=forumchecknew&fid=276&time=1367134175&inajax=yes HTTP/1.1" 504 578 "http://bbs.c1gstudio.com/forum-276-1.html" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" -

I then enabled the error log in nginx.conf for debugging:

error_log /var/log/nginx/nginx_error.log error;

tail -f /var/log/nginx/nginx_error.log

2013/05/03 14:05:47 [error] 14295#0: *968832 connect() failed (110: Connection timed out) while connecting to upstream, client: 220.181.125.23, server: bbs.c1gstudio.com, request: "GET /forum-739-1.html HTTP/1.1", upstream: "http://192.168.0.33:80/502.html", host: "bbs.c1gstudio.com"

110: Connection timed out. Connections to the internal address 192.168.0.33 were timing out.

Monitoring with iftop and continuous pings showed nothing. Checking crontab, php, nginx and iptables on each machine also turned up nothing. Then, while restarting the network on nginx_cache, the system complained the internal IP was already in use: a newly deployed machine had been given the same internal IP. After changing the IP, the problem was gone.
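A duplicate LAN address like this can also be caught directly with arping's duplicate-address-detection mode; a sketch, assuming the interface is eth0 (iputils arping exits 0 in -D mode when no conflicting reply arrives):

```shell
# Probe whether some other host already answers ARP for 192.168.0.33 on eth0.
arping -D -I eth0 -c 2 192.168.0.33 \
    && echo "no duplicate detected" \
    || echo "192.168.0.33 is answered by another MAC"
```

Running this from a third machine before assigning an address to a new box would have caught the conflict up front.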

Posted in Nginx.



Linux: internet access through an internal NAT gateway

When internal machines need to download updates, use an internet-connected internal machine as a NAT gateway.

Gateway machine: external NIC ip 61.88.54.23, gateway 61.88.54.1; internal NIC ip 192.168.0.39, no gateway.

Client machine: internal NIC ip 192.168.0.40

The internal NICs are connected to the same switch.

1. On the gateway, enable kernel packet forwarding, then set up forwarding and address translation:

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j SNAT --to-source 61.88.54.23
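Both steps are lost at reboot; a sketch of a persistent variant, assuming a stock CentOS 5 layout with the iptables initscript:

```shell
# Enable forwarding via sysctl.conf instead of writing /proc by hand
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p

# Re-add the rules, then save the rule set with the CentOS initscript
iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j SNAT --to-source 61.88.54.23
/etc/init.d/iptables save
```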

2. On the client, first add a cron entry that automatically removes the default route at 15:40, as a safety net in case the change locks you out:

crontab -e

40 15 * * * /sbin/route del default gw 192.168.0.39

3. Add the default route: route add default gw 192.168.0.39

4. Test external connectivity: ping 8.8.8.8

5. Apply at boot: vi /etc/rc.local

route add default gw 192.168.0.39

6. Remove the crontab entry.

Reference: http://panpan.blog.51cto.com/489034/189072

Posted in LINUX.



Installing memcached on CentOS 5.8

1. Install libevent:

yum install libevent.x86_64 libevent-devel.x86_64

Without libevent, the memcached build fails:

checking for libevent directory... configure: error: libevent is required. You can get it from http://www.monkey.org/~provos/libevent/ If it's already installed, specify its path using --with-libevent=/dir/

2.安装memcached

wget http://memcached.googlecode.com/files/memcached-1.4.15.tar.gz
tar zxvf memcached-1.4.15.tar.gz
cd memcached-1.4.15
./configure --prefix=/opt/memcached-1.4.15
make
make install
ln -s /opt/memcached-1.4.15 /opt/memcached

3. Config file: vi /opt/memcached/my.conf

PORT="11200"
IP="192.168.0.40"
USER="root"
MAXCONN="1524"
CACHESIZE="3000"
OPTIONS=""
#memcached

4. Start/stop script: vi /etc/init.d/memcached

#!/bin/bash
#
# Save me to /etc/init.d/memcached
# And add me to system start
#   chmod +x memcached
#   chkconfig --add memcached
#   chkconfig --level 35 memcached on
#
# Written by lei
#
# chkconfig: - 80 12
# description: Distributed memory caching daemon
#
# processname: memcached
# config: /usr/local/memcached/my.conf

source /etc/rc.d/init.d/functions

### Default variables
PORT="11211"
IP="192.168.0.40"
USER="root"
MAXCONN="1524"
CACHESIZE="64"
OPTIONS=""
SYSCONFIG="/opt/memcached/my.conf"

### Read configuration
[ -r "$SYSCONFIG" ] && source "$SYSCONFIG"

RETVAL=0
prog="/opt/memcached/bin/memcached"
desc="Distributed memory caching"

start() {
        echo -n $"Starting $desc ($prog): "
        daemon $prog -d -p $PORT -l $IP -u $USER -c $MAXCONN -m $CACHESIZE $OPTIONS
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/memcached
        return $RETVAL
}

stop() {
        echo -n $"Shutting down $desc ($prog): "
        killproc $prog
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/memcached
        return $RETVAL
}

restart() {
        stop
        start
}

reload() {
        echo -n $"Reloading $desc ($prog): "
        killproc $prog -HUP
        RETVAL=$?
        echo
        return $RETVAL
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        restart
        ;;
  condrestart)
        [ -e /var/lock/subsys/$prog ] && restart
        RETVAL=$?
        ;;
  reload)
        reload
        ;;
  status)
        status $prog
        RETVAL=$?
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart|condrestart|status}"
        RETVAL=1
esac

exit $RETVAL

5. Add an iptables rule allowing access from 192.168.0.0/24:

iptables -A INPUT -s 192.168.0.0/255.255.255.0 -p tcp -m tcp --dport 11200 -j ACCEPT

6. Start it: /etc/init.d/memcached start

7. Web management UI: http://www.junopen.com/memadmin/

Posted in Memcached/redis.



Installing redis on CentOS 5.8

Redis is a high-performance key-value database. It makes up for many of the shortcomings of memcached-style key/value stores and in some scenarios complements a relational database nicely. It ships Python, Ruby, Erlang and PHP clients and is easy to use. I. Install tcl first, otherwise redis's make test fails:

You need tcl 8.5 or newer in order to run the Redis test
make[1]: *** [test] Error 1

wget http://downloads.sourceforge.net/tcl/tcl8.6.0-src.tar.gz
tar zxvf tcl8.6.0-src.tar.gz
cd tcl8.6.0-src
cd unix && ./configure --prefix=/usr \
    --mandir=/usr/share/man \
    $([ $(uname -m) = x86_64 ] && echo --enable-64bit)
make &&
sed -e "s@^\(TCL_SRC_DIR='\).*@\1/usr/include'@" \
    -e "/TCL_B/s@='\(-L\)\?.*unix@='\1/usr/lib@" \
    -i tclConfig.sh
make test

Tests ended at Tue Apr 16 12:02:27 CST 2013
all.tcl: Total 116 Passed 116 Skipped 0 Failed 0

make install &&
make install-private-headers &&
ln -v -sf tclsh8.6 /usr/bin/tclsh &&
chmod -v 755 /usr/lib/libtcl8.6.so

II. redis 1. Install redis:

wget http://redis.googlecode.com/files/redis-2.6.12.tar.gz
tar xzf redis-2.6.12.tar.gz
cd redis-2.6.12
make
make test

The following error can be ignored: https://github.com/antirez/redis/issues/1034

[exception]: Executing test client: assertion:Server started even if RDB was unreadable!. assertion:Server started even if RDB was unreadable! while executing “error “assertion:$msg”” (procedure “fail” line 2) invoked from within “fail “Server started even if RDB was unreadable!”” (“uplevel” body line 2) invoked from within “uplevel 1 $elsescript” (procedure “wait_for_condition” line 7) invoked from within “wait_for_condition 50 100 { [string match {*Fatal error loading*} [exec tail -n1

When make finishes, four executables appear in the current directory:
redis-server: the Redis server daemon
redis-cli: the command-line client; you can also drive the plain-text protocol over telnet
redis-benchmark: a performance-testing tool that measures read/write performance on your system and configuration
redis-stat: a status tool that reports current state and latency

2. Create the Redis directories:

mkdir -p /opt/redis/bin
mkdir -p /opt/redis/etc
mkdir -p /opt/redis/var
cp redis.conf /opt/redis/etc/
cd src
cp redis-server redis-cli redis-benchmark redis-check-aof redis-check-dump /opt/redis/bin/
useradd redis
chown -R redis.redis /opt/redis

The dedicated directory tree just keeps Redis's files together in one place; you can also run make install to install to the default system paths.

3. Copy the init script:

cp ../utils/redis_init_script /etc/init.d/redis

4. Start redis:

cd /opt/redis/bin
./redis-server /opt/redis/etc/redis.conf

[ASCII-art Redis logo: Redis 2.6.12 (00000000/0) 64 bit, Running in stand alone mode, Port: 6379, PID: 24454]
[24454] 12 Apr 10:34:19.519 # Server started, Redis version 2.6.12
[24454] 12 Apr 10:34:19.519 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[24454] 12 Apr 10:34:19.519 * The server is now ready to accept connections on port 6379

Once installed, simply running redis-server starts Redis with its default configuration (which does not run in the background). To make Redis run the way we want, we edit the configuration file.

5. Set iptables to allow access from the 192.168.0 subnet:

iptables -A INPUT -s 192.168.0.0/24 -p tcp -m tcp --dport 6379 -j ACCEPT
/etc/init.d/iptables save

6. Configure redis: vi /opt/redis/etc/redis.conf

daemonize yes                     # run as a daemon (default no)
pidfile /var/run/redis.pid        # pid file, default /var/run/redis.pid
port 6379                         # default listen port
bind 192.168.0.41                 # bind address, default 127.0.0.1
timeout 300                       # close idle clients after this many seconds (default 300)
tcp-keepalive 0                   # TCP keepalive
loglevel notice                   # one of debug, verbose (default), notice, warning
logfile /opt/redis/var/redis.log  # log file; default stdout, use /dev/null to discard
databases 16                      # number of databases, default 16
save 900 1                        # snapshot policies: dump to disk after 900 s if at least 1 key changed
save 300 10                       # after 300 s if at least 10 keys changed
save 60 10000                     # after 60 s if at least 10000 keys changed
rdbcompression yes                # compress objects when dumping the .rdb
dbfilename dump.rdb               # database file name, default dump.rdb
dir /opt/redis/var/               # database directory, default ./
maxmemory 2G                      # memory limit
maxmemory-policy allkeys-lru      # eviction policy
7. Tune kernel parameters. If memory is on the tight side, set:
echo 1 > /proc/sys/vm/overcommit_memory

A word on what this setting means: /proc/sys/vm/overcommit_memory selects the kernel's memory-allocation policy and can be 0, 1 or 2. 0: the kernel heuristically checks whether enough memory is available before granting an allocation, and fails the request otherwise. 1: the kernel always allows the allocation, regardless of the current memory state. 2: the kernel uses strict accounting and refuses commitments beyond swap plus a configurable fraction of physical memory. When Redis dumps its data it forks a child process which, in theory, needs as much memory as the parent: an 8 GB parent would need another 8 GB committed for the child. If that commitment cannot be satisfied, the dump can take the redis server down or push IO load up and performance down. The better setting here is therefore 1 (always allow the allocation, regardless of memory state).

vi /etc/sysctl.conf

vm.overcommit_memory = 1

Then apply it:

sysctl -p

8. Adjust the redis init script: vi /etc/init.d/redis

REDISPORT=6379
EXEC=/opt/redis/bin/redis-server
CLIEXEC=/opt/redis/bin/redis-cli
PIDFILE=/var/run/redis.pid
CONF="/opt/redis/etc/redis.conf"

If redis listens on an address other than localhost, the script also needs a tweak, or it cannot stop the server:

# add a variable for the listen address
REDIHOST=192.168.0.41
# and pass -h $REDIHOST
echo "Stopping ..."
$CLIEXEC -h $REDIHOST -p $REDISPORT shutdown

9. Test:

/etc/init.d/redis start

Starting Redis server…

/opt/redis/bin/redis-cli -h 192.168.0.41

redis 127.0.0.1:6379> ping
PONG
redis 127.0.0.1:6379> set mykey c1gstduio
OK
redis 127.0.0.1:6379> get mykey
"c1gstduio"
redis 127.0.0.1:6379> exit

benchmark /opt/redis/bin/redis-benchmark -h 192.168.0.41

====== PING_INLINE ======
10000 requests completed in 0.17 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.51%
10. Stop the service: /opt/redis/bin/redis-cli shutdown. If the port differs, specify it: /opt/redis/bin/redis-cli -h 192.168.0.41 -p 6379 shutdown

11. Save / back up. Back up the data by periodically copying the dump file. Redis writes to disk asynchronously; to flush the in-memory data to disk right away, run redis-cli save, or redis-cli -p 6379 save to specify a port. Note that the deployment steps above need suitable privileges (copying files, setting kernel parameters, and so on). Running redis-benchmark also ends up writing the in-memory data to disk.
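The periodic copy could look like this; a sketch using the paths from this article, with the /backup destination an assumption. Note that SAVE blocks the whole server while it writes; on a busy instance BGSAVE, which snapshots in a background fork, is the safer choice:

```shell
# Force a synchronous snapshot, then copy it aside with a date stamp.
/opt/redis/bin/redis-cli -h 192.168.0.41 save
cp /opt/redis/var/dump.rdb "/backup/dump-$(date +%F).rdb"
```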

/opt/redis/bin/redis-cli save
OK

Check the result: ll -h var/

total 48K
-rw-r--r-- 1 root root  40K Apr 12 11:49 dump.rdb
-rw-r--r-- 1 root root 5.3K Apr 12 11:49 redis.log

12. Run at boot: vi /etc/rc.local

/etc/init.d/redis start

III. The PHP extension

There are plenty of redis client libraries; PHP alone has several. I chose phpredis here; note that Predis requires PHP >= 5.3.
Predis: mature and supported
phpredis: a client written in C as a PHP module
Rediska
RedisServer: standalone and full-featured class for Redis in PHP
Redisent
Credis: lightweight, standalone, unit-tested fork of Redisent which wraps phpredis for best performance if available

1. Install the phpredis extension:

wget https://nodeload.github.com/nicolasff/phpredis/zip/master --no-check-certificate
unzip master
cd phpredis-master/
/opt/php/bin/phpize
./configure --with-php-config=/opt/php/bin/php-config
make
make install

Next, add extension=redis.so to php.ini: vi /opt/php/etc/php.ini

extension_dir = "/opt/php/lib/php/extensions/no-debug-non-zts-20060613/"
extension=redis.so

vi test.php

<?php
phpinfo();
$redis = new Redis();
$redis->connect("127.0.0.1", 6379);
$redis->set("test", "Hello World");
echo $redis->get("test");
?>

redis
Redis Support: enabled
Redis Version: 2.2.2

IV. PHP-based web management tools

phpRedisAdmin https://github.com/ErikDubbelboer/phpRedisAdmin demo: http://dubbelboer.com/phpRedisAdmin/?overview Widely used, but after installing it I could not get it to display anything.

readmin http://readmin.org/ demo: http://demo.readmin.org/ Has to be installed in the web root.

phpredmin https://github.com/sasanrose/phpredmin#readme Works and has nice charts; uses pseudo-rewrite URLs such as /index.php/welcome/stats

wget https://nodeload.github.com/sasanrose/phpredmin/zip/master unzip phpredmin-master.zip

ln -s phpredmin/public predmin

Edit nginx to support the rewrite: nginx.conf

location ~* ^/predmin/index.php/ {
    rewrite ^/predmin/index.php/(.*) /predmin/index.php?$1 break;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fcgi.conf;
}

crontab -e

* * * * * cd /var/www/phpredmin/public && php index.php cron/index

Configure redis: vi phpredmin/config

'redis' => Array(
    'host' => '192.168.0.30',
    'port' => '6379',
    'password' => Null,
    'database' => 0

Browse to admin.xxx.com/predmin/ and the UI appears.

V. Impressions from actual use. With redis serving as the cache for Discuz X2.5, bandwidth consumption was startling, roughly ten times that of memcached, and memory use was about a quarter higher. Discuz also occasionally timed out or errored when connecting to redis. In the end I switched back to memcached.

References: http://redis.io/topics/quickstart http://hi.baidu.com/mucunzhishu/item/ead872ba3cec36db84dd798c http://www.linuxfromscratch.org/blfs/view/svn/general/tcl.html

Posted in Memcached/redis.
