

Forwarding PHP error logs to Splunk

Edit /opt/php/etc/php.ini and enable syslog logging:

; Log errors to specified file.
error_log = /opt/php/logs/php_error.log
; Log errors to syslog (Event Log on NT, not valid in Windows 95).
error_log = syslog

CentOS 6 uses rsyslog; the following applies to both. Edit /etc/rsyslog.conf (rsyslog) or /etc/syslog.conf (syslog) and add:

#php log
user.*  /opt/php/logs/php_error.log
#splunk
*.*  @192.168.0.39

192.168.0.39 is the IP of the Splunk host (see the earlier post on installing Splunk). Reload syslog and PHP, and the PHP error log will be forwarded to Splunk while a local copy is kept on this machine. No log rotation is set up here; add it yourself if you need it.
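To apply the changes, restart both daemons and optionally push a test message through syslog. A minimal sketch, assuming rsyslog on CentOS 6 and PHP running as php-fpm (the init script names are assumptions; adjust to your setup):

/etc/init.d/rsyslog restart     # or: /etc/init.d/syslog restart on older systems
/etc/init.d/php-fpm restart     # assumption: php-fpm; restart php-cgi/spawn-fcgi if that is what you run
logger -p user.warning "splunk forwarding test"   # should appear in the local file and in Splunk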

In Splunk you can see log entries like this:

01:37:13.000 Jan 14 01:37:13 192.168.0.24 php-cgi: PHP Warning: mkdir() [function.mkdir]: File exists in /opt/htdocs/c1gblog/globals/class_cache.php on line 1013
host=192.168.0.24 | sourcetype=syslog | source=tcp:1999 | process=php-cgi

Posted in Logs.



Installing Taobao's open-source web server Tengine to replace Nginx, with proxy_cache as a front-end proxy

Introduction: Tengine is a web server project started by Taobao. Building on Nginx, it adds many advanced features aimed at high-traffic sites. Its performance and stability have been proven on large sites such as Taobao and Tmall. Its goal is to be an efficient, stable, secure and easy-to-use web platform.

Current stable release [2013-11-22]: Tengine-1.5.2. Features:
Inherits all features of Nginx-1.2.9 and is 100% compatible with Nginx configuration;
Dynamic module loading (DSO), so adding a module no longer requires recompiling all of Tengine;
Streaming uploads to HTTP or FastCGI back ends, greatly reducing I/O pressure on the machine;
More powerful load balancing, including a consistent-hash module, a session-persistence module, and active health checks that take back-end servers in and out of rotation automatically based on their state;
An input filter mechanism that makes writing a web application firewall easier;
Support for the dynamic scripting language Lua, making extensions simple and efficient;
Pipe and syslog (local and remote) log output, plus log sampling;
Combining requests for multiple CSS/JavaScript files into a single request;
Automatic stripping of whitespace and comments to shrink pages;
Automatic worker process count and CPU affinity based on the number of CPUs;
Monitoring of system load and resource usage to protect the system;
Friendlier error messages for operations staff, making it easier to locate the failing machine;
A stronger anti-attack (request rate limiting) module;
More convenient command-line options, such as listing compiled modules and supported directives;
Expiry times configurable per requested file type;
...

Installing jemalloc can improve performance.

cd /root/src/toolkits/
wget http://www.canonware.com/download/jemalloc/jemalloc-3.4.1.tar.bz2
tar jxvf jemalloc-3.4.1.tar.bz2
cd jemalloc-3.4.1
./configure --prefix=/usr/local/jemalloc-3.4.1
make && make install
ldconfig

GeoIP whitelist

wget http://geolite.maxmind.com/download/geoip/api/c/GeoIP.tar.gz
tar -zxvf GeoIP.tar.gz
cd GeoIP-1.4.6
./configure
make; make install
ldconfig

Add the purge module when using proxy_cache

wget http://labs.frickle.com/files/ngx_cache_purge-2.1.tar.gz
tar zxvf ngx_cache_purge-2.1.tar.gz
--add-module=../ngx_cache_purge-2.1   (this flag is passed to ./configure below)

The back-end nginx must be compiled with --with-http_realip_module so it can obtain the real client IP, and the trusted sources must be specified:

set_real_ip_from 61.199.67.2;    # front-end proxy IP
set_real_ip_from 192.168.0.111;  # front-end proxy IP
real_ip_header X-Real-IP;

Compile and install Tengine (--with-jemalloc points at the jemalloc build directory):

wget http://tengine.taobao.org/download/tengine-1.5.1.tar.gz
tar zxvf tengine-1.5.1.tar.gz
cd tengine-1.5.1
./configure --user=www --group=website --prefix=/opt/tengine-1.5.1 \
  --add-module=../ngx_cache_purge-2.1 \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --with-http_realip_module \
  --with-http_concat_module=shared \
  --with-http_sysguard_module=shared \
  --with-http_limit_conn_module=shared \
  --with-http_limit_req_module=shared \
  --with-http_footer_filter_module=shared \
  --with-http_upstream_ip_hash_module=shared \
  --with-http_upstream_least_conn_module=shared \
  --with-http_upstream_session_sticky_module=shared \
  --with-jemalloc=/root/src/lempelf/packages/jemalloc-3.4.1
make
make install

GeoIP data

cd /opt/tengine-1.5.1/conf
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz
gunzip GeoIP.dat.gz
chgrp -R website /opt/tengine-1.5.1/conf
chmod -R 764 /opt/tengine-1.5.1/conf
chmod 774 /opt/tengine-1.5.1/conf

Copy the original nginx configuration files over to Tengine

cd /opt/nginx/conf
cp awstats.conf fcgi.conf htpasswd block.conf nginx.conf /opt/tengine-1.5.1/conf/

Test the configuration file

/opt/tengine-1.5.1/sbin/nginx -t -c /opt/tengine-1.5.1/conf/nginx.conf
nginx: [emerg] unknown directive "limit_zone" in /opt/tengine-1.5.1/conf/nginx.conf:71
nginx: [emerg] unknown directive "limit_conn" in /opt/tengine-1.5.1/conf/nginx.conf:136
If you see these errors, remove the old limit_zone/limit_conn configuration; newer versions of ngx_http_limit_conn_module use new directives.
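For reference, a sketch of the newer connection-limit syntax that replaces limit_zone (the zone name and the limit of 10 are placeholders):

limit_conn_zone $binary_remote_addr zone=addr:10m;   # in the http block
limit_conn addr 10;                                  # in a server or location block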

Add the new features (vi /opt/tengine-1.5.1/conf/nginx.conf). Automatically set the number of Tengine worker processes from the CPU count and bind them to CPUs:

worker_processes auto; worker_cpu_affinity auto;

Turn off server information

server_info off; server_tag off;

ngx_http_sysguard_module: system overload protection

sysguard on;
sysguard_load load=10.5 action=/loadlimit;
sysguard_mem swapratio=20% action=/swaplimit;
sysguard_mem free=100M action=/freelimit;
location /loadlimit { return 503; }
location /swaplimit { return 503; }
location /freelimit { return 503; }

ngx_http_limit_req_module: request rate limiting module

limit_req_zone $binary_remote_addr zone=one:3m rate=1r/s;
limit_req_zone $binary_remote_addr $uri zone=two:3m rate=1r/s;
limit_req_zone $binary_remote_addr $request_uri zone=three:3m rate=1r/s;
location / {
    limit_req zone=one burst=5;
    limit_req zone=two forbid_action=@test1;
    limit_req zone=three burst=3 forbid_action=@test2;
}
location /off {
    limit_req off;
}
location @test1 {
    rewrite ^ /test1.html;
}
location @test2 {
    rewrite ^ /test2.html;
}

Remove the old nginx symlink and create one pointing at Tengine:
rm /opt/nginx
ln -s /opt/tengine-1.5.1 /opt/nginx

A complete nginx.conf

user www website;
worker_processes auto;
worker_cpu_affinity auto;
error_log /var/log/nginx/nginx_error.log error;
pid /dev/shm/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;

dso {
    load ngx_http_footer_filter_module.so;
    load ngx_http_limit_conn_module.so;
    load ngx_http_limit_req_module.so;
    load ngx_http_sysguard_module.so;
    load ngx_http_upstream_ip_hash_module.so;
    load ngx_http_upstream_least_conn_module.so;
    load ngx_http_upstream_session_sticky_module.so;
}

events {
    use epoll;
    worker_connections 51200;
}

http {
    include mime.types;
    default_type application/octet-stream;
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';
    open_log_file_cache max=1000 inactive=20s min_uses=2 valid=1m;
    server_names_hash_bucket_size 128;
    #linux 2.4+
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    #tengine
    server_info off;
    server_tag off;
    #server_tag Apache;
    server_tokens off;
    server_name_in_redirect off;
    keepalive_timeout 60;
    client_header_buffer_size 16k;
    client_body_timeout 60;
    client_max_body_size 8m;
    large_client_header_buffers 4 32k;
    fastcgi_intercept_errors on;
    fastcgi_hide_header X-Powered-By;
    fastcgi_connect_timeout 180;
    fastcgi_send_timeout 180;
    fastcgi_read_timeout 180;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 128K;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_temp_path /dev/shm;
    #open_file_cache max=51200 inactive=20s;
    #open_file_cache_valid 30s;
    #open_file_cache_min_uses 2;
    #open_file_cache_errors off;
    gzip on;
    gzip_min_length 1k;
    gzip_comp_level 5;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_proxied any;
    limit_req_log_level error;
    limit_req_zone $binary_remote_addr $uri zone=two:30m rate=10r/s;
    # whitelist for the rate limit
    geo $white_ip {
        #ranges;
        default 0;
        127.0.0.1/32 1;
        182.55.21.28/32 1;
        192.168.0.0/16 1;
        61.199.67.0/24 1;
    }
    client_body_buffer_size 512k;
    proxy_connect_timeout 5;
    proxy_read_timeout 60;
    proxy_send_timeout 5;
    proxy_buffer_size 16k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    # Note: the paths given by proxy_temp_path and proxy_cache_path must be on the same partition
    proxy_temp_path /opt/nginx/proxy_temp_dir;
    # Cache zone named cache_www: 3000 MB of in-memory key space, entries not accessed for 1 day are purged, disk cache up to 20 GB (max_size)
    proxy_cache_path /opt/nginx/proxy_cache_www levels=1:2 keys_zone=cache_www:3000m inactive=1d max_size=20g;

    upstream www_server {
        server 192.168.0.131:80;
    }

    server {
        listen 80 default;
        server_name _;
        return 444;
        access_log off;
    }

    server {
        listen 80;
        server_name www.c1gstudio.com;
        index index.html index.htm index.php;
        root /opt/htdocs/www;
        access_log /var/log/nginx/proxy.www.c1gstudio.com.log access buffer=24k;
        if (-d $request_filename){
            rewrite ^/(.*)([^/])$ http://$host/$1$2/ permanent;
        }
        limit_req_whitelist geo_var_name=white_ip geo_var_value=1;
        limit_req zone=two burst=50 forbid_action=/visitfrequently.html;
        location @visitfrequently {
            rewrite ^ /visitfrequently.html;
        }
        location ~/\.ht {
            deny all;
        }
        # Cache purging: if a URL is http://192.168.8.42/test.txt, requesting http://192.168.8.42/purge/test.txt clears its cache entry.
        location ~ /purge(/.*) {
            # Only the listed IPs/ranges may purge URL caches.
            allow 127.0.0.1;
            allow 192.168.0.0/16;
            deny all;
            proxy_cache_purge cache_www $host$1$is_args$args;
            error_page 405 =200 /purge$1;   # handle the 405 seen when purging with squidclient
        }
        if ( $request_method = "PURGE" ) {
            rewrite ^(.*)$ /purge$1 last;
        }
        location / {
            error_page 502 504 /502.html;
            proxy_set_header Host $host;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            add_header X-Cache Cache-Skip;
        }
        location ~ 404\.html$ {
            proxy_set_header Host $host;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            add_header X-Cache Cache-Skip;
        }
        location ~ .*\.(htm|html|)?$ {
            # On 502/504/errors/timeouts from the back end, retry the request on another server in the upstream pool (failover).
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache cache_www;
            # different cache times per HTTP status code
            proxy_cache_valid 200 304 5m;
            # cache key built from host, URI and query string; nginx hashes the key and stores content in the two-level cache directory
            proxy_cache_key $host$uri$is_args$args;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            # honor expires set by the back end
            proxy_ignore_headers "Cache-Control" "Expires";
            add_header X-Cache Cache;
        }
        location ~* ^.+\.(jpg|jpeg|gif|png|rar|zip|css|js)$ {
            valid_referers none blocked *.c1gstudio.com;
            if ($invalid_referer) {
                rewrite ^/ http://leech.c1gstudio.com/leech.gif;
                return 412;
                break;
            }
            access_log off;
            # On 502/504/errors/timeouts from the back end, retry the request on another server in the upstream pool (failover).
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache cache_www;
            # different cache times per HTTP status code
            proxy_cache_valid 200 304 5m;
            # cache key built from host, URI and query string; nginx hashes the key and stores content in the two-level cache directory
            proxy_cache_key $host$uri$is_args$args;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            # honor expires set by the back end
            proxy_ignore_headers "Cache-Control" "Expires";
            add_header X-Cache Cache;
        }
    }
}

Start Tengine: /opt/nginx/sbin/nginx
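Once it is up, you can check that caching and the purge location behave as expected. A rough test (the page path is hypothetical; the purge request must come from one of the allowed IP ranges, e.g. 127.0.0.1 or 192.168.0.0/16):

curl -I http://www.c1gstudio.com/index.html          # repeat; the X-Cache header should switch to Cache once cached
curl -I http://www.c1gstudio.com/purge/index.html    # clears that URL's cache entry via the purge location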

Watching top afterwards, the load dropped noticeably.

=========== Update 2014/1/3 =============
If the load keeps swinging up and down there may be an I/O bottleneck; you can fix it by putting proxy_cache on /dev/shm (by default /dev/shm is half the size of RAM). Create the directory and add it to the boot sequence:

mkdir /dev/shm/nginx

Edit /etc/rc.local and add mkdir /dev/shm/nginx before nginx is started.

Modify nginx.conf:

proxy_temp_path /dev/shm/nginx/proxy_temp_dir;
# Cache zone cache_www: 3000 MB of in-memory key space, entries not accessed for 1 day are purged, disk cache up to 20 GB (max_size)
proxy_cache_path /dev/shm/nginx/proxy_cache_www levels=1:2 keys_zone=cache_www:3000m inactive=1d max_size=20g;

Posted in Nginx.



Adding SPF and DKIM to a Postfix mail server

Google has announced that 91.4% of mail sent to Gmail already uses the DKIM or SPF anti-forgery standards.

SPF: add TXT records in the domain's DNS management. Taking the mail server mta.c1gstudio.com (IP 208.133.199.7) as an example:
1. The SPF record for the zone apex points at the A host record:
@ IN TXT "v=spf1 include:mta.c1gstudio.com -all"

2. Under that mta record, add the SPF IP address (the mail server's public IP):
mta IN TXT "v=spf1 ip4:208.133.199.7 -all"
Or, if you need an IP range:
mta IN TXT "v=spf1 ip4:208.133.199.0/24 -all"

Verify with nslookup. On Windows, open cmd and run nslookup, set the query type to txt (SPF records are TXT records) with set type=txt, then enter the domain to query. Here the domain is 163.com; if the mail address were [email protected], the domain to query would be yy.com. From the "include" keyword in the result you can see that 163.com's SPF record is contained in the TXT record of spf.163.com, so query the TXT record of spf.163.com next to get the final result. Check the listed IPs and confirm they cover all of your sending IPs.
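A rough transcript of what such a lookup looks like (output abridged and illustrative):

nslookup
> set type=txt
> 163.com
163.com  text = "v=spf1 include:spf.163.com -all"
> spf.163.com
(the reply lists the ip4: ranges permitted to send for 163.com)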

Received-SPF: neutral (google.com: 60.195.249.163 is neither permitted nor denied by domain of [email protected])

After sending a message to Gmail, check the mail headers; Received-SPF: pass means the check passed:

Received-SPF: pass (google.com: domain of #####@yeah.net designates 60.12.227.137 as permitted sender

That completes the SPF setup.

DKIM: installing OpenDKIM from the EPEL repository is quick and convenient.
For RHEL5/CentOS5:
rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
For RHEL6/CentOS6:
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Installing opendkim adds an opendkim user and group; if the password file /etc/passwd has been locked (chattr +i), remember to unlock it first.

yum clean all
yum install opendkim

Installing:
 opendkim       x86_64  2.8.4-1.el5   epel  262 k
Installing for dependencies:
 ldns           x86_64  1.6.16-1.el5  epel  507 k
 libopendkim    x86_64  2.8.4-1.el5   epel   75 k
 unbound-libs   x86_64  1.4.20-2.el5  epel  258 k

After installation, run the following commands; they generate two files in the current directory, a private key and a public key: default.private and default.txt.

1. cd /etc/opendkim/keys
opendkim-genkey -r -d mta.c1gstudio.com
The contents of default.txt look like this:

default._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDTEMk2ow/WZuQheCMxZrowErith/LVsaWLVhkkrONT2S4LZcuuVbgCkr2EGaoJUNjc/Ztu0Z47RPbLDkGGjtCwy6VnUeKWh7ijwu2+AHyq6mJIn33z7TGz90Io0o30PjQLSeO1E/ozRpBSljvJMikYUDYfZpWZSxcIhtOetOp8mwIDAQAB" ) ; ----- DKIM key default for mta.c1gstudio.com

2. Fix the file ownership: chown opendkim default.private

3. Edit the configuration file: vi /etc/opendkim.conf

Mode                 sv
UserID               opendkim:opendkim
Socket               inet:8891@localhost
Canonicalization     relaxed/simple
Domain               mta.c1gstudio.com
KeyFile              /etc/opendkim/keys/default.private
KeyTable             refile:/etc/opendkim/KeyTable
SigningTable         refile:/etc/opendkim/SigningTable
InternalHosts        refile:/etc/opendkim/TrustedHosts
SyslogSuccess        No
LogWhy               No

4. Trusted sending IP addresses: vi /etc/opendkim/TrustedHosts

127.0.0.1
#hostname1.example1.com
192.168.1.0/24

5. Private key table: vi /etc/opendkim/KeyTable

default._domainkey.mta.c1gstudio.com mta.c1gstudio.com:default:/etc/opendkim/keys/default.private

6. Multiple domains can each specify which public key to sign with: vi /etc/opendkim/SigningTable

*@mta.c1gstudio.com [email protected]
*@c1gstudio.com [email protected]

7. Start OpenDKIM:
/etc/init.d/opendkim start
Starting OpenDKIM Milter: [ OK ]

Enable it at boot: chkconfig opendkim on

8. Edit Postfix and append at the end: vi /etc/postfix/main.cf

#################dkim###################
smtpd_milters = inet:localhost:8891
milter_default_action = accept
milter_protocol = 2
non_smtpd_milters = inet:localhost:8891

If the Postfix version is older than 2.6, you must add milter_protocol = 2.

9. Reload Postfix: /usr/local/postfix/sbin/postfix reload

10. Set up the domain DNS: add the public key as a TXT record under mta.c1gstudio.com.

default._domainkey IN TXT ( "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDTEMk2ow/WZuQheCMxZrowErith/LVsaWLVhkkrONT2S4LZcuuVbgCkr2EGaoJUNjc/Ztu0Z47RPbLDkGGjtCwy6VnUeKWh7ijwu2+AHyq6mJIn33z7TGz90Io0o30PjQLSeO1E/ozRpBSljvJMikYUDYfZpWZSxcIhtOetOp8mwIDAQAB" )

A gripe: Bizcn's domain control panel cannot add hostnames containing the "_" underscore, so I moved the zone to DNSPod.

11. Verify the domain at http://dkimcore.org/tools/ ("Check a published DKIM Core Key"): Selector: default, Domain name: mail.c1gstudio.com
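You can also query the published record straight from a shell (a quick sanity check, not a full DKIM validation):

dig +short TXT default._domainkey.mta.c1gstudio.com
# or: nslookup -type=txt default._domainkey.mta.c1gstudio.com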

12. Mail test:
tail -f /var/log/maillog
Dec 12 17:36:48 c1g opendkim[30546]: 9F36E1A181C8: DKIM-Signature field added (s=default, d=mta.c1gstudio.com)

Then send mail to Gmail and Hotmail and check whether DKIM signed it. Send a message to Gmail, view the headers, and you should see spf=pass and dkim=pass:

Received-SPF: pass (google.com: domain of [email protected] designates 208.133.199.7 as permitted sender) client-ip=208.133.199.7;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 208.133.199.7 as permitted sender) [email protected]; dkim=pass [email protected]
Received: from c1g (c1g [208.133.199.7]) by mta.c1gstudio.com (Postfix) with ESMTP id C3BB21A1825C for ; Fri, 13 Dec 2013 10:47:02 +0800 (CST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=mta.c1gstudio.com; s=default; t=1386902822; bh=spJ3t+Wlf7qUYsyf9zBZsdaztFW9Vj9zn2URNGhfZ3o=; h=Date:From:To:Subject; b=R7gyGPsyKqNq3ceDTQYq958qRkdJE04xKCCKOseCb/Gi5v9rLr6jyZyUWsP5PstqX WhfTjErGippcxJEe84B4yTOE0gYDRJB//I8pEF+jux/qF7JhmIc3+Cs/yqI powsNbGdwkJaB3WlCPwroBk88/wP8W64tWda4oyGTVYZzNU=

Troubleshooting:
1. tail -f /var/log/maillog shows "opendkim: no signature data" - no signature was added; change the mode in opendkim.conf to sv:
Mode sv

2. Dec 13 16:15:03 localhost opendkim[3248]: can't load key from /etc/opendkim/keys/default.private: Permission denied
cd /etc/opendkim/keys/
chown opendkim default.private

References: http://mail.163.com/hd/all/chengxin/3.htm http://stevejenkins.com/blog/2011/08/installing-opendkim-rpm-via-yum-with-postfix-or-sendmail-for-rhel-centos-fedora/ http://www.banping.com/2011/07/19/postfix-dkim/ http://www.info110.com/mailserver/in27377-1.htm

Posted in Mail/Postfix.



Benchmarking server performance with UnixBench

UnixBench is a decent performance benchmarking tool for Linux.

Test machine configuration

Dell R410, 2x Xeon E5620 (4 cores, 2.4 GHz), 2x 8 GB RAM, 2x 300 GB 15K SAS (no RAID), 1x 2 TB 7.2K SAS
CentOS 5.8 64bit, Linux LocalIm 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

wget http://byte-unixbench.googlecode.com/files/unixbench-5.1.2.tar.gz
tar zxvf unixbench-5.1.2.tar.gz
cd unixbench-5.1.2
make
./Run

UnixBench then starts the benchmark. The run time varies with the number of CPU cores; this run took roughly an hour.

make all
make[1]: Entering directory `/root/src/toolkits/unixbench-5.1.2'
Checking distribution of files
./pgms exists
./src exists
./testdir exists
./tmp exists
./results exists
make[1]: Leaving directory `/root/src/toolkits/unixbench-5.1.2'
sh: 3dinfo: command not found

(UnixBench ASCII banner)
Version 5.1.2  Based on the Byte Magazine Unix Benchmark
Multi-CPU version  Version 5 revisions by Ian Smith, Sunnyvale, CA, USA
December 22, 2007  johantheghost at yahoo period com

(Each test is run with 1 and then 16 parallel copies: Dhrystone 2 using register variables, Double-Precision Whetstone, Execl Throughput, File Copy 1024/256/4096, Pipe Throughput, Pipe-based Context Switching, Process Creation, System Call Overhead, Shell Scripts 1 and 8 concurrent.)

========================================================================
BYTE UNIX Benchmarks (Version 5.1.2)

System: Impala: GNU/Linux
OS: GNU/Linux -- 2.6.18-308.el5 -- #1 SMP Tue Feb 21 20:06:06 EST 2012
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0-15: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (about 4788 bogomips each)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
14:24:24 up 324 days, 22:35, 1 user, load average: 0.07, 0.02, 0.06; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Fri Nov 08 2013 14:24:24 - 14:52:38
16 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        14762933.9 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                      1509.6 MWIPS (8.7 s, 7 samples)
Execl Throughput                                2427.3 lps   (29.7 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         597721.9 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           168768.0 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        1430654.0 KBps  (30.0 s, 2 samples)
Pipe Throughput                              1166940.5 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  105565.7 lps   (10.0 s, 7 samples)
Process Creation                                8799.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                    5298.7 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    2831.9 lpm   (60.0 s, 2 samples)
System Call Overhead                          943466.8 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   14762933.9   1265.0
Double-Precision Whetstone                       55.0       1509.6    274.5
Execl Throughput                                 43.0       2427.3    564.5
File Copy 1024 bufsize 2000 maxblocks          3960.0     597721.9   1509.4
File Copy 256 bufsize 500 maxblocks            1655.0     168768.0   1019.7
File Copy 4096 bufsize 8000 maxblocks          5800.0    1430654.0   2466.6
Pipe Throughput                               12440.0    1166940.5    938.1
Pipe-based Context Switching                   4000.0     105565.7    263.9
Process Creation                                126.0       8799.5    698.4
Shell Scripts (1 concurrent)                     42.4       5298.7   1249.7
Shell Scripts (8 concurrent)                      6.0       2831.9   4719.8
System Call Overhead                          15000.0     943466.8    629.0
                                                                   ========
System Benchmarks Index Score                                         940.2

------------------------------------------------------------------------
Benchmark Run: Fri Nov 08 2013 14:52:38 - 15:21:02
16 CPUs in system; running 16 parallel copies of tests

Dhrystone 2 using register variables       109893471.8 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     19203.2 MWIPS (9.2 s, 7 samples)
Execl Throughput                               40389.3 lps   (29.7 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         232275.2 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks            66058.1 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks         701683.9 KBps  (30.0 s, 2 samples)
Pipe Throughput                             11422586.8 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 2899346.4 lps   (10.0 s, 7 samples)
Process Creation                              115952.7 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   42915.9 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    8344.0 lpm   (60.0 s, 2 samples)
System Call Overhead                         5982682.0 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0  109893471.8   9416.7
Double-Precision Whetstone                       55.0      19203.2   3491.5
Execl Throughput                                 43.0      40389.3   9392.9
File Copy 1024 bufsize 2000 maxblocks          3960.0     232275.2    586.6
File Copy 256 bufsize 500 maxblocks            1655.0      66058.1    399.1
File Copy 4096 bufsize 8000 maxblocks          5800.0     701683.9   1209.8
Pipe Throughput                               12440.0   11422586.8   9182.1
Pipe-based Context Switching                   4000.0    2899346.4   7248.4
Process Creation                                126.0     115952.7   9202.6
Shell Scripts (1 concurrent)                     42.4      42915.9  10121.7
Shell Scripts (8 concurrent)                      6.0       8344.0  13906.7
System Call Overhead                          15000.0    5982682.0   3988.5
                                                                   ========
System Benchmarks Index Score                                        4199.4

Second test machine: Dell R420, 2x Xeon E5-2420 (6 cores, 1.9 GHz), 2x 8 GB RAM, 2x 300 GB 15K SAS (no RAID)
CentOS 5.8 64bit, Linux loclaP 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

BYTE UNIX Benchmarks (Version 5.1.2)

System: ParkAvenue: GNU/Linux
OS: GNU/Linux -- 2.6.18-308.el5 -- #1 SMP Tue Feb 21 20:06:06 EST 2012
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0-23: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (about 3800 bogomips each)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
12:01:45 up 21:39, 1 user, load average: 0.07, 0.05, 0.01; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Fri Dec 06 2013 12:01:45 - 12:29:21
24 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        13721062.3 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                      2121.9 MWIPS (7.2 s, 7 samples)
Execl Throughput                                2052.4 lps   (29.8 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         567410.8 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           157374.9 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        1499112.8 KBps  (30.0 s, 2 samples)
Pipe Throughput                               966163.2 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                   64456.9 lps   (10.0 s, 7 samples)
Process Creation                                6327.8 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                    4121.4 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    2182.9 lpm   (60.0 s, 2 samples)
System Call Overhead                          801849.7 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   13721062.3   1175.8
Double-Precision Whetstone                       55.0       2121.9    385.8
Execl Throughput                                 43.0       2052.4    477.3
File Copy 1024 bufsize 2000 maxblocks          3960.0     567410.8   1432.9
File Copy 256 bufsize 500 maxblocks            1655.0     157374.9    950.9
File Copy 4096 bufsize 8000 maxblocks          5800.0    1499112.8   2584.7
Pipe Throughput                               12440.0     966163.2    776.7
Pipe-based Context Switching                   4000.0      64456.9    161.1
Process Creation                                126.0       6327.8    502.2
Shell Scripts (1 concurrent)                     42.4       4121.4    972.0
Shell Scripts (8 concurrent)                      6.0       2182.9   3638.1
System Call Overhead                          15000.0     801849.7    534.6
                                                                   ========
System Benchmarks Index Score                                         818.6

Posted in Benchmarks.



How do you stop Postfix from sending mail to a given domain?

The mail log shows mail to some invalid domains that can simply be dropped:

5305 example.com[93.184.216.119]:25: Connection timed out
1265 mail.qq.co[50.23.198.74]:25: Connection refused
 394 Host not found, try again
 374 qqq.com[98.126.157.163]:25: Connection refused

Set up a check_recipient_access access map: add check_recipient_access hash:/etc/postfix/recipient_access to the smtpd_recipient_restrictions parameter. For example:

smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/recipient_access,permit_mynetworks,permit_sasl_authenticated,reject_invalid_hostname,reject_non_fqdn_hostname,reject_unknown_sender_domain,reject_non_fqdn_sender,reject_non_fqdn_recipient,reject_unknown_recipient_domain,reject_unauth_pipelining,reject_unauth_destination

/etc/postfix/recipient_access contains:

[email protected] REJECT   # a specific address
example.com REJECT         # access control for a whole domain

Run postmap /etc/postfix/recipient_access and then postfix reload.
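Before reloading you can verify the map with postmap's query mode; a listed entry should print REJECT:

postmap -q example.com hash:/etc/postfix/recipient_access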

Posted in Mail/Postfix.



Monitoring memcached with Nagios

Download location for Nagios's check_memcached:

http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz

This plugin is written in Perl, so first make sure your machine has a Perl environment. Installation:

wget http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz
tar xzvf Nagios-Plugins-Memcached-0.02.tar.gz
cd Nagios-Plugins-Memcached-0.02
perl Makefile.PL

It will prompt you with a few questions; just press Enter through them all.

make

At this point it downloads some runtime dependencies.

make install

By default check_memcached is installed to /usr/bin/check_memcached. Create a symlink to it in the Nagios libexec directory:

ln -s /usr/bin/check_memcached /usr/local/nagios/libexec/

Test it:
/usr/local/nagios/libexec/check_memcached -H 192.168.0.40:12111
MEMCACHED OK - OK
If it complains that Nagios::Plugin is missing,

you can install the Nagios::Plugin Perl module through CPAN:

perl -MCPAN -e shell
install File::Slurp
install YAML
install HTML::Parser
install URI
install Compress::Zlib
install Module::Runtime
install Module::Implementation
install Attribute::Handlers
install Params::Validate
install Nagios::Plugin

The command to reconfigure CPAN is "o conf init".

Once the test passes, edit the Nagios commands.cfg configuration file and add:

define command {
    command_name check_memcached_response
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ -w $ARG3$ -c $ARG4$
}
define command {
    command_name check_memcached_size
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --size-warning $ARG3$ --size-critical $ARG4$
}
define command {
    command_name check_memcached_hit
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --hit-warning $ARG3$ --hit-critical $ARG4$
}

Then add the following to the cfg file of the host you want to monitor:

define service{
    use                  local-service,srv-pnp    ; Name of service template to use
    host_name            memcachedserver
    service_description  Memcached_response
    check_command        check_memcached_response!192.168.0.40!12111!300!500
}
define service{
    use                  local-service,srv-pnp    ; Name of service template to use
    host_name            memcachedserver
    service_description  Memcached_size
    check_command        check_memcached_size!192.168.0.40!12111!90!95
}
define service{
    use                  local-service,srv-pnp    ; Name of service template to use
    host_name            memcachedserver
    service_description  Memcached_hit
    check_command        check_memcached_hit!192.168.0.40!12111!10!5
}

Run /etc/init.d/nagios reload; after the reload the new services appear. At present this plugin does not support PNP graphing.

References: http://simblog.vicp.net/?p=250 http://blog.csdn.net/deccmtd/article/details/6799647

Posted in Nagios.



svn Compression of svndiff data failed

svn could not commit changes and reported the error: Compression of svndiff data failed

Investigation showed the SVN server's Apache had a memory leak; restarting Apache solved it.

Posted in Subversion.



High availability with drbd + xfs + heartbeat + mysql

I. Introduction

DRBD is a block device that can be used for high availability (HA). It works like network RAID-1: when you write data to the local file system, the data is also sent to another host on the network and recorded in a file system there in the same form. The data on the local host (primary node) and the remote host (secondary node) stays in sync in real time, so when the local system fails the remote host still holds an identical copy that can continue to be used. Heartbeat is one of the packages publicly available from the Linux-HA project website; it provides the basic functions every HA system needs, such as starting and stopping resources, monitoring the availability of systems in the cluster, and transferring ownership of a shared IP address between cluster nodes.

OS: CentOS 5.8 64bit; drbd83; xfs-1.0.2-5.el5_6.1; Heartbeat 2.1.3; Percona-Server-5.5.22-rel25.2

Resource allocation:
192.168.0.45 mysql1
192.168.0.43 mysql2
192.168.0.46 vip
DRBD data directory: /data

MySQL acts as the replication master with slaves attached behind it, giving high availability.

II. Installing XFS support

Key features of XFS:
Data integrity: with journaling enabled, files on disk are no longer corrupted by an unexpected crash; however much data the file system holds, it can restore the disk contents from the journal in a very short time.
Performance: XFS uses optimized algorithms, so journaling has very little impact on overall file operations; lookups and space allocation are very fast and response times stay consistently low. (In the author's tests of XFS, JFS, Ext3 and ReiserFS, XFS performed outstandingly.)
Scalability: XFS is a fully 64-bit file system that can support millions of terabytes of storage. It handles both very large and very small files well and supports huge numbers of directories; the maximum file size is 2^63 = 9 x 10^18 bytes = 9 exabytes, and the maximum file system size is 18 exabytes. XFS uses B+ tree structures, so searches and space allocation stay fast and performance is not limited by the number of files or directories.
Bandwidth: XFS can store data at close to raw-device I/O speed; in single file system tests throughput reached up to 7 GB/s, and up to 4 GB/s for reads and writes of a single file.

XFS is not supported by the default system install; you have to install the packages yourself.

rpm -qa |grep xfs
xorg-x11-xfs-1.0.2-5.el5_6.1

yum install xfsprogs kmod-xfs

Mounting the partition

modprobe xfs      # load the xfs file system module; if that fails, reboot the server
lsmod |grep xfs   # check whether the xfs module is loaded

df -h -P -T

Filesystem                      Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02 ext3    97G  3.7G   89G   5% /
/dev/mapper/VolGroup00-LogVol01 ext3   9.7G  151M  9.1G   2% /tmp
/dev/mapper/VolGroup00-LogVol04 ext3   243G  470M  230G   1% /opt
/dev/mapper/VolGroup00-LogVol03 ext3    39G  436M   37G   2% /var
/dev/sda1                       ext3    99M   13M   81M  14% /boot
tmpfs                           tmpfs  7.9G     0  7.9G   0% /dev/shm

vgdisplay

--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  6
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                5
Open LV               5
Max PV                0
Cur PV                1
Act PV                1
VG Size               1.09 TB
PE Size               32.00 MB
Total PE              35692
Alloc PE / Size       13056 / 408.00 GB
Free  PE / Size       22636 / 707.38 GB
VG UUID               7nAuuH-cpyx-u2Jr-bNNJ-fGPe-TUKg-T4KmmU

Carve out a 300 GB data partition:

lvcreate -L 300G -n DRBD VolGroup00

Logical volume "DRBD" created
/dev/VolGroup00/DRBD

III. Installing and configuring DRBD

1. Set the hostname

vi /etc/sysconfig/network and change HOSTNAME to node1.

Edit the hosts file: vi /etc/hosts and add:

192.168.0.45 node1
192.168.0.43 node2
To make the node1 hostname take effect immediately:

hostname node1
The setup on node2 is similar.

2. Install DRBD:
yum install drbd83 kmod-drbd83

Installing:
 drbd83        x86_64  8.3.15-2.el5.centos  extras  243 k
 kmod-drbd83   x86_64  8.3.15-3.el5.centos  extras

yum install -y heartbeat heartbeat-ldirectord heartbeat-pils heartbeat-stonith

Installing:
 heartbeat             x86_64  2.1.3-3.el5.centos  extras  1.8 M
 heartbeat-ldirectord  x86_64  2.1.3-3.el5.centos  extras  199 k
 heartbeat-pils        x86_64  2.1.3-3.el5.centos  extras  220 k
 heartbeat-stonith     x86_64  2.1.3-3.el5.centos  extras  348 k

3. Configure DRBD: on the master, edit the configuration file global_common.conf (in the /etc/drbd.d directory), then scp it to the same directory on the slave.

vi /etc/drbd.d/global_common.conf

global {
    usage-count yes;
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when chosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        wfc-timeout 15;
        degr-wfc-timeout 15;
        outdated-wfc-timeout 15;
    }
    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
        on-io-error detach;
        fencing resource-and-stonith;
    }
    net {
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        timeout 60;
        connect-int 15;
        ping-int 15;
        ping-timeout 50;
        max-buffers 8192;
        ko-count 100;
        cram-hmac-alg sha1;
        shared-secret "mypassword123";
    }
    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        rate 100M;
        al-extents 512;
        csums-alg sha1;
    }
}

4. Configure the replicated resource: vi /etc/drbd.d/share.res

resource share {
    meta-disk internal;
    device /dev/drbd0;   # the device parameter must end in a digit (used for the global minor-count), otherwise it errors; device is the DRBD application-layer device
    disk /dev/VolGroup00/DRBD;
    on node1 {           # note: hostnames in the drbd config file are case sensitive!
    #on Locrosse {
        address 192.168.0.45:9876;
    }
    on node2 {
    #on Regal {
        address 192.168.0.43:9876;
    }
}

5. Copy the files to the slave:
scp -P 22 /etc/drbd.d/share.res /etc/drbd.d/global_common.conf [email protected]:.

6. Create the DRBD metadata on both machines with drbdadm create-md share; share is the resource name matching share.res above, and the same goes for share in the commands below.
drbdadm create-md share
no resources defined!

Add the config files to drbd.conf: vi /etc/drbd.conf

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

drbdadm create-md share

md_offset 322122543104
al_offset 322122510336
bm_offset 322112679936

Found xfs filesystem
314572800 kB data area apparently used
314563164 kB left usable by current configuration

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
 * use external meta data (recommended)
 * shrink that filesystem first
 * zero out the device (destroy the filesystem)

Operation refused.
Command 'drbdmeta 0 v08 /dev/VolGroup00/DRBD internal create-md' terminated with exit code 40
drbdadm create-md share: exited with code 40

Fix: dd if=/dev/zero bs=1M count=1 of=/dev/VolGroup00/DRBD

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00221 seconds, 474 MB/s

drbdadm create-md share

Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

7. Add an iptables rule

iptables -A INPUT -p tcp -m tcp -s 192.168.0.0/24 --dport 9876 -j ACCEPT

8. Start DRBD on both machines: service drbd start or /etc/init.d/drbd start

9. Check the DRBD status with service drbd status or /etc/init.d/drbd status; use cat /proc/drbd or drbd-overview to watch the synchronization progress. The first sync is slow, and DRBD can only be used normally once the sync has finished. When drbd has not been started on the peer or the iptables port is not open:
service drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs            ro                 ds                     p  mounted  fstype
0:share  WFConnection  Secondary/Unknown  Inconsistent/DUnknown  C

Once the two sides connect successfully: /etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs         ro                   ds                         p  mounted  fstype
0:share  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

10. Set the primary node. To make node1 the primary, the first time you must use drbdsetup /dev/drbd0 primary -o or drbdadm --overwrite-data-of-peer primary share; you cannot simply run drbdadm primary share. Then check the DRBD status again:
drbdsetup /dev/drbd0 primary -o
/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs          ro                 ds                     p  mounted  fstype
...      sync'ed:    1.9%               (301468/307188)M
0:share  SyncSource  Primary/Secondary  UpToDate/Inconsistent  C

11. Format as XFS. On node1, create an xfs file system on the DRBD partition (/dev/drbd0) and give it the label drbd: mkfs.xfs -L drbd /dev/drbd0. If it complains that xfs is not installed, just run yum install xfsprogs.
mkfs.xfs -L drbd /dev/drbd0

meta-data=/dev/drbd0             isize=256    agcount=16, agsize=4915049 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=78640784, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount the DRBD partition locally (it can only be mounted on the primary node). First create the local directory with mkdir /data, then mount /dev/drbd0 /data, and the drbd partition is ready to use.

12. Test the primary/secondary switchover. First, on the master node, create a test file in /data, unmount the DRBD partition /dev/drbd0, and demote the primary node to secondary:
touch /data/test
umount /dev/drbd0
drbdadm secondary share

On the slave node, promote it from secondary to primary with drbdadm primary share, create the /data directory, mount /dev/drbd0 locally, and then check the /data directory; if the test file is there, DRBD is working.
drbdadm primary share
mkdir /data
mount /dev/drbd0 /data
ll /data

/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs         ro                 ds                 p  mounted  fstype
0:share  Connected  Primary/Secondary  UpToDate/UpToDate  C  /data    xfs

Primary/secondary switchover summary. On the master, unmount first, then demote the primary node to secondary:
umount /dev/drbd0
drbdadm secondary share

On the slave:
drbdadm primary share
mount /dev/drbd0 /data
The slave is now promoted to primary.

IV. Installing and configuring Heartbeat

====================

1. Building by hand (failed attempt)
1.1. Install libnet: http://sourceforge.net/projects/libnet-dev/
wget http://downloads.sourceforge.net/project/libnet-dev/libnet-1.2-rc2.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibnet-dev%2F&ts=1368605826&use_mirror=jaist

tar zxvf libnet-1.2-rc2.tar.gz
cd libnet
./configure
make && make install

1.2. Unlock the user files:
chattr -i /etc/passwd /etc/shadow /etc/group /etc/gshadow /etc/services

1.3. Add the user:
groupadd haclient
useradd -g haclient hacluster

1.4. Check the result:
tail /etc/passwd
hacluster:x:498:496::/var/lib/heartbeat/cores/hacluster:/bin/bash

tail /etc/shadow
hacluster:!!:15840:0:99999:7:::

tail /etc/gshadow
haclient:!::

tail /etc/group
haclient:x:496:

1.5. Install heartbeat:
cd ..
wget http://www.ultramonkey.org/download/heartbeat/2.1.3/heartbeat-2.1.3.tar.gz

tar -zxvf heartbeat-2.1.3.tar.gz
cd heartbeat-2.1.3
./ConfigureMe configure
make && make install
It got stuck on this error and would not go any further:
/usr/local/lib/libltdl.a: could not read symbols:

2. Install heartbeat with yum instead:
yum install libtool-ltdl-devel
yum install heartbeat

There are three files to configure in total:
ha.cf: monitoring configuration file
haresources: resource management file
authkeys: heartbeat link authentication file

3. Synchronize the time on both nodes:
ntpdate -d cn.pool.ntp.org

4. Add an iptables rule:
iptables -A INPUT -p udp -m udp --dport 694 -s 192.168.0.0/24 -j ACCEPT

5. Configure ha.cf. IP allocation:
192.168.0.45 node1
192.168.0.43 node2
192.168.0.46 vip

vi /etc/ha.d/ha.cf

debugfile /var/log/ha-debug      # enable the debug/error log
keepalive 2                      # check the heartbeat link every 2 seconds
deadtime 10                      # declare the primary dead after 10 seconds without a heartbeat
warntime 6                       # warning time (best kept between 2 and 10)
initdead 120                     # at startup, treat 120 seconds without a connection as normal; i.e. heartbeat waits 120 seconds before starting any resources
udpport 694                      # connect over UDP port 694
ucast eth0 192.168.0.43          # unicast connection (on each node, put the other node's IP here)
node node1                       # declare the primary (use the hostname from uname -n, not a domain name)
node node2                       # declare the standby (use the hostname from uname -n, not a domain name)
auto_failback on                 # automatic failback (switch back automatically once the primary recovers); the author advises not enabling this
respawn hacluster /usr/lib64/heartbeat/ipfail   # watch the ipfail process and restart it if it dies; use /lib64 on 64-bit systems, /lib on 32-bit

6. Configure authkeys: vi /etc/ha.d/authkeys and write:

auth 1
1 crc

chmod 600 /etc/ha.d/authkeys
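If the heartbeat link runs over a network you do not fully trust, authkeys also supports sha1 with a shared secret instead of crc; a sketch (the secret string is a placeholder):

auth 1
1 sha1 SomeSharedSecret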

7. Configure haresources:
mkdir /usr/lib/heartbeat

vi /etc/ha.d/haresources and write:

node1 IPaddr::192.168.0.46/24/eth0 drbddisk::share Filesystem::/dev/drbd0::/data::xfs mysqld

node1: hostname of the master. IPaddr::192.168.0.46/24/eth0: sets up the virtual IP (the externally used IP). drbddisk::r0: manages the resource r0. Filesystem::/dev/drbd1::/data::ext3: performs the mount and umount operations. The node2 configuration is basically the same, except that 192.168.0.43 in ha.cf becomes 192.168.0.45. mysqld is the service started afterwards; multiple resources can be listed, separated by spaces.

8. Configure the mysql service. Copy mysql.server out of the mysql build directory:
cd <mysql build directory>
cp support-files/mysql.server /etc/init.d/mysqld
chown root:root /etc/init.d/mysqld
chmod 700 /etc/init.d/mysqld

Point the mysql data directory and binary log directory at the drbd resource: vi /opt/mysql/my.cnf

socket = /opt/mysql/mysql.sock
pid-file = /opt/mysql/var/mysql.pid
log = /opt/mysql/var/mysql.log
datadir = /data/mysql/var/
log-bin = /data/mysql/var/mysql-bin
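If MySQL already has data under its old datadir, it has to be copied onto the DRBD-backed /data once, on the primary node only. A rough sketch, assuming the old datadir was /opt/mysql/var (the paths are assumptions; adjust to your install):

/etc/init.d/mysqld stop
mkdir -p /data/mysql/var
cp -a /opt/mysql/var/. /data/mysql/var/   # copy the existing data files, preserving ownership and permissions
chown -R mysql:mysql /data/mysql
/etc/init.d/mysqld start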

9. DRBD automatic failover test. Start heartbeat on node1 first, then on node2. Once node2 is fully up, node1 brings up the virtual IP, starts drbd and becomes primary, and mounts /dev/drbd0 on /data. The start command is:
service heartbeat start
Now run ip a and you will see an extra IP, 192.168.0.46, which is the virtual IP; cat /proc/drbd shows the Primary/Secondary state, and df -h shows /dev/drbd0 mounted on /data.
Then test automatic failover: stop the heartbeat service on node1 or disconnect its network, and a few seconds later check the state on node2. Next, restore node1's heartbeat service or network connection and check its state again.

Failover test:
service heartbeat standby
Running the command above cleanly moves the resources listed in haresources from node1 to node2. Checking cat /proc/drbd on both machines shows the roles have changed, and the mysql service has started successfully on the standby. The whole switchover can be followed clearly in the heartbeat logs on both machines:
cat /var/log/ha-log

Result: after heartbeat stop or standby on node1, service switches to the slave; when heartbeat is started on node1 again, it automatically switches back. So when shutting down the heartbeat service, stop the slave first, otherwise the resources will fail over to it.

Error 1: /usr/lib/heartbeat/ipfail is not executable (on 64-bit systems the path should be lib64).
Error 2: ERROR: Bad permissions on keyfile [/etc/ha.d/authkeys], 600 recommended.
chmod 600 /etc/ha.d/authkeys

=================

A simple ext3 vs xfs write performance test

Four 300 GB 15K SAS disks in RAID 0, Stripe Element Size: 256K (http://www.wmarow.com/strcalc/), read policy: Adaptive, write policy: write back, battery (BBU) enabled

Managed with LVM

time dd if=/dev/zero of=/opt/c1gfile bs=1024 count=20000000

20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 95.6601 seconds, 214 MB/s

real 1m35.950s
user 0m9.389s
sys 1m19.322s

time dd if=/dev/zero of=/data/c1gfile bs=1024 count=20000000

20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 45.6391 seconds, 449 MB/s

real 0m45.652s
user 0m7.663s
sys 0m37.922s
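dd through the page cache flatters both file systems; a variant that bypasses the cache gives a more disk-bound comparison (the output file name is arbitrary):

time dd if=/dev/zero of=/data/c1gfile_direct bs=1M count=20000 oflag=direct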

References:

http://blog.csdn.net/rzhzhz/article/details/7107115 http://www.51osos.com/a/MySQL/jiagouyouhua/jiagouyouhua/2010/1122/Heartbeat.html http://bbs.linuxtone.org/thread-7413-1-1.html http://www.centos.bz/2012/03/achieve-drbd-high-availability-with-heartbeat/ http://www.doc88.com/p-714757400456.html http://icarusli.iteye.com/blog/1751435 http://www.mysqlops.com/2012/03/10/dba%e7%9a%84%e4%ba%b2%e4%bb%ac%e5%ba%94%e8%af%a5%e7%9f%a5%e9%81%93%e7%9a%84raid%e5%8d%a1%e7%9f%a5%e8%af%86.html http://www.mysqlsystems.com/2010/11/drbd-heartbeat-make-mysql-ha.html http://www.wenzizone.cn/?p=234

Posted in HA/Clustering.



nginx 504 Connection timed out caused by an internal network misconfiguration

The architecture is nginx_cache -> nginx pool -> php pool -> memcache pool -> mysql pool.

The symptom: when users refresh forum pages at each ten-minute mark (10 past, 20 past the hour, and so on) they get 502/504 errors; refreshing once more displays the page.

The nginx access log has 504 entries:
tail -f /var/log/nginx/bbs.c1gstudio.com.log | grep '" 504'

111.166.167.206 - - [28/Apr/2013:15:30:16 +0800] "GET /forum.php?mod=ajax&action=forumchecknew&fid=276&time=1367134175&inajax=yes HTTP/1.1" 504 578 "http://bbs.c1gstudio.com/forum-276-1.html" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" -

Turn on the error log in nginx.conf to debug:

error_log /var/log/nginx/nginx_error.log error;

tail -f /var/log/nginx/nginx_error.log

2013/05/03 14:05:47 [error] 14295#0: *968832 connect() failed (110: Connection timed out) while connecting to upstream, client: 220.181.125.23, server: bbs.c1gstudio.com, request: "GET /forum-739-1.html HTTP/1.1", upstream: "http://192.168.0.33:80/502.html", host: "bbs.c1gstudio.com"

110: Connection timed out, meaning the connection to the internal host 192.168.0.33 timed out...

Monitoring with iftop and continuous ping showed nothing wrong. Checking each machine's crontab, php, nginx and iptables also turned up nothing. When the network on nginx_cache was restarted, it warned that the internal IP was already in use, and only then did we discover that a newly deployed machine had been assigned the same internal IP. After changing the IP the problem went away.
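One way to catch this kind of conflict early is to probe the address from another host on the same segment; replies coming from two different MAC addresses mean a duplicate IP (the interface name is an assumption):

arping -I eth0 -c 3 192.168.0.33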

Posted in Nginx.



Linux internal NAT gateway for internet access

When internal machines need to update or download software, use an internal machine that has internet access as a NAT gateway to the outside.

Gateway machine:
External NIC IP: 61.88.54.23, gateway: 61.88.54.1
Internal NIC IP: 192.168.0.39, gateway: none

Client machine:
Internal NIC IP: 192.168.0.40

The internal NICs are connected to the same switch.

1. On the gateway machine:
# enable packet forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward
# set up IP forwarding and source NAT

iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j SNAT --to-source 61.88.54.23
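Writing to /proc only lasts until reboot; to make forwarding persistent you can also set it in /etc/sysctl.conf. A sketch for CentOS 5, assuming the default "net.ipv4.ip_forward = 0" line is present (otherwise add the setting by hand):

sed -i 's/^net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
sysctl -p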

2. On the client machine, first add a cron job that automatically removes the route at 15:40, as a safety net in case the change goes wrong:
crontab -e

40 15 * * * /sbin/route del default gw 192.168.0.39

3. Add the route:
route add default gw 192.168.0.39

4. Test the external connection:
ping 8.8.8.8

5. Run at boot: vi /etc/rc.local

route add default gw 192.168.0.39

6. Remove the crontab entry.

References: http://panpan.blog.51cto.com/489034/189072

Posted in LINUX.
