

Deploying a Snort + BASE Intrusion Detection System

[Overview] Snort is a lightweight network intrusion detection system capable of real-time traffic analysis and IP packet logging. It performs protocol analysis and content searching/matching, and can detect a wide variety of attacks and probes, such as buffer overflows, stealth port scans, CGI attacks, SMB probes, and OS fingerprinting attempts.

Snort requires libpcap and DAQ. As of Snort 2.9.0, Snort requires DAQ and a libpcap version greater than 1.0. Unfortunately, for people using RHEL 5 (and below), CentOS 5.5 (and below), and Fedora Core 11 (and below), there is no official RPM for libpcap 1.0.

Sourcefire will not repackage libpcap and distribute it with Snort as part of an RPM, as doing so may cause other problems and would not be officially supported by Red Hat.

Install the dependencies

yum install libpcap libpcap-devel
wget http://www.tcpdump.org/release/libpcap-1.4.0.tar.gz
tar zxvf libpcap-1.4.0.tar.gz
cd libpcap-1.4.0
./configure
make
make install
cd ..
# libdnet project page: http://code.google.com/p/libdnet/
wget https://libdnet.googlecode.com/files/libdnet-1.12.tgz
tar zxvf libdnet-1.12.tgz
cd libdnet-1.12
./configure
make && make install
cd ..
wget http://www.snort.org/downloads/2778
tar zxvf daq-2.0.2.tar.gz
cd daq-2.0.2
./configure --with-libpcap-libraries=/usr/local/lib
make
make install

Add the snort user

groupadd snort
useradd -g snort snort -s /sbin/nologin

Install Snort

cd ..
wget http://www.snort.org/downloads/2787
tar zxvf snort-2.9.6.0.tar.gz
cd snort-2.9.6.0
./configure --prefix=/usr/local/snort-2.9.6.0 --with-dnet-libraries=/usr/local/lib/
make
make install
cd /usr/local
ln -s snort-2.9.6.0 snort
cd bin
./snort -v

Error

/usr/local/snort/bin/snort: error while loading shared libraries: libdnet.1: cannot open shared object file: No such file or directory

Fix

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
cp libdnet libdnet.so
cp libdnet.1 libdnet.1.so
ldconfig

Error

configure: WARNING: unrecognized options: --with-mysql

Starting with Snort 2.9.3, MySQL output is no longer supported; use the Barnyard2 plugin instead.

Snort rule download sources:
1. The community snortrules-snapshot can be downloaded for free from http://www.snort.org/; the official rules require a paid subscription.
2. A third-party rules file, rules.tar.gz, is available at http://www.bleedingthreats.net/cgi-bin/viewcvs.cgi/rules/ and is updated fairly frequently; my snortrules-snapshot-2.8.tar.gz came from 51cto.
3. BASE can be obtained from http://sourceforge.net/projects/secureideas/. Alternatively, SnortCenter (http://users.telenet.be/larc/download/) is a web-based Snort sensor and rule management system for remotely editing sensor configuration, starting and stopping sensors, and editing and distributing Snort signature rules.
4. ADOdb can be downloaded from http://sourceforge.net/projects/adodb/. ADOdb (Active Data Objects Data Base) is a database abstraction library for PHP.

mkdir /usr/local/snort/etc
cd /usr/local/snort/etc/
tar zxvf snortrules-snapshot-2956.tar.gz
mv etc/* .
rm snortrules-snapshot-2956.tar.gz
chown -R root:root .
vi /usr/local/snort/etc/snort.conf

Edit snort.conf

var RULE_PATH /usr/local/snort/etc/rules
var SO_RULE_PATH /usr/local/snort/etc/so_rules
var PREPROC_RULE_PATH /usr/local/snort/etc/preproc_rules
var WHITE_LIST_PATH /usr/local/snort/etc/rules
var BLACK_LIST_PATH /usr/local/snort/etc/rules
dynamicpreprocessor directory /usr/local/snort/lib/snort_dynamicpreprocessor/
dynamicengine /usr/local/snort/lib/snort_dynamicengine/libsf_engine.so
dynamicdetection directory /usr/local/snort/lib/snort_dynamicrules
output unified2: filename /var/log/snort/snort.u2, limit 128

mkdir /usr/local/snort/lib/snort_dynamicrules
mkdir /var/log/snort
chown snort:snort /var/log/snort
touch /usr/local/snort/etc/rules/white_list.rules
touch /usr/local/snort/etc/rules/black_list.rules

Start Snort

/usr/local/snort/bin/snort -d -u snort -g snort -l /var/log/snort -c /usr/local/snort/etc/snort.conf

--== Initialization Complete ==--
,,_ -*> Snort!

The database output plugins are considered deprecated as of Snort 2.9.2 and will be removed in Snort 2.9.3.

Barnyard2 is a well-known open-source IDS log tool with fast response and excellent database write performance; it is an indispensable plugin for building a custom intrusion detection system. http://www.securixlive.com/barnyard2/download.php

Install Barnyard2. This assumes MySQL is already installed, here under /opt/mysql.

wget http://www.securixlive.com/download/barnyard2/barnyard2-1.9.tar.gz
tar zxvf barnyard2-1.9.tar.gz
cd barnyard2-1.9
./configure --with-mysql=/opt/mysql
make
make install
cp etc/barnyard2.conf /usr/local/snort/etc/
mkdir /var/log/barnyard2
touch /var/log/snort/barnyard2.waldo
vi /usr/local/snort/etc/barnyard2.conf

config reference_file: /usr/local/snort/etc/reference.config
config classification_file: /usr/local/snort/etc/classification.config
config gen_file: /usr/local/snort/etc/gen-msg.map
config sid_file: /usr/local/snort/etc/sid-msg.map
config hostname: localhost
config interface: eth0
output database: log, mysql, user=snort password=snort dbname=snort host=localhost

In the output database line, fill in your own database address and password.

The database schema is in schemas/create_mysql under the build directory; import it with mysql.

CREATE USER 'snort'@'localhost' IDENTIFIED BY '***';
GRANT USAGE ON *.* TO 'snort'@'localhost' IDENTIFIED BY '***' WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON `snort`.* TO 'snort'@'localhost';

Install BASE and ADOdb

wget http://jaist.dl.sourceforge.net/project/secureideas/BASE/base-1.4.5/base-1.4.5.tar.gz
tar zxvf base-1.4.5.tar.gz
chown -R www:website base-1.4.5
mv base-1.4.5 /opt/htdocs/www/
ln -s base-1.4.5 base
wget http://jaist.dl.sourceforge.net/project/adodb/adodb-php5-only/adodb-518-for-php5/adodb518a.zip
unzip adodb518a.zip
chown -R www:website adodb5
mv adodb5 /opt/htdocs/www/base/adodb5

Update the PHP PEAR packages

cd /opt/php/bin
./pear install Image_Graph-alpha Image_Canvas-alpha Image_Color Numbers_Roman Mail_Mime Mail

Visit the setup URL and run the web installer; it is just a matter of configuration: http://localhost:80/base/setup/index.php

Test Snort

/usr/local/snort/bin/snort -vd -i eth1

Snort also has a test option (-T) that makes it easy to validate configuration changes. Run snort -c /etc/snort/snort.conf -T and inspect the output to confirm that the changed configuration works.
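This check is easy to wrap in a small pre-restart guard. A minimal sketch (`check_conf` is a hypothetical helper that runs whatever validator command it is given; the Snort paths below are the ones assumed in this install):

```shell
# check_conf: run a validation command and report its outcome.
check_conf() {
  if "$@" >/dev/null 2>&1; then
    echo "config OK"
  else
    echo "config BROKEN"
    return 1
  fi
}

# Intended usage on the Snort box (assumed paths):
#   check_conf /usr/local/snort/bin/snort -c /usr/local/snort/etc/snort.conf -T
```

Calling it before a restart means a broken snort.conf is caught while the old daemon is still running.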

Run Snort, monitoring eth1 for intrusions and logging to MySQL:

/usr/local/snort/bin/snort -D -c /usr/local/snort/etc/snort.conf -i eth1
barnyard2 -c /usr/local/snort/etc/barnyard2.conf -d /var/log/snort -f snort.u2 -D -w /var/log/snort/barnyard2.waldo

Check traffic: iftop -i eth1

If there is an intrusion, the record shows up in BASE.

To monitor the traffic of an entire switch, configure port mirroring on the switch to copy traffic to the port connected to the Snort machine's NIC. My Snort box has four NICs: one each monitoring the China Telecom, China Netcom, and internal-network traffic, and one left for management and data transfer.

vi /usr/local/snort/etc/barnyard2.conf: drop the absolute path and timestamp from the unified2 filename

output unified2: filename snort.log, limit 128

mkdir /var/log/snort0 /var/log/snort1 /var/log/snort2
chown snort:snort /var/log/snort0 /var/log/snort1 /var/log/snort2
touch /var/log/snort0/barnyard.waldo
touch /var/log/snort1/barnyard.waldo
touch /var/log/snort2/barnyard.waldo

Run

/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth1 -l /var/log/snort1
barnyard2 -c /usr/local/snort/etc/barnyard2.conf -i eth1 -d /var/log/snort1 -f snort.log -D -w /var/log/snort1/barnyard.waldo
/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth0 -l /var/log/snort0
barnyard2 -c /usr/local/snort/etc/barnyard2.conf -i eth0 -d /var/log/snort0 -f snort.log -D -w /var/log/snort0/barnyard.waldo
/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth2 -l /var/log/snort2
barnyard2 -c /usr/local/snort/etc/barnyard2.conf -i eth2 -d /var/log/snort2 -f snort.log -D -w /var/log/snort2/barnyard.waldo
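The three sensor/Barnyard2 pairs differ only in the interface number, so they can be generated from a loop. A sketch (dry run: `gen_cmds` only prints the command pairs, using the paths configured above):

```shell
# gen_cmds: print one snort + barnyard2 command pair per monitored interface.
gen_cmds() {
  for n in 0 1 2; do
    echo "/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth$n -l /var/log/snort$n"
    echo "barnyard2 -c /usr/local/snort/etc/barnyard2.conf -i eth$n -d /var/log/snort$n -f snort.log -D -w /var/log/snort$n/barnyard.waldo"
  done
}

gen_cmds
```

Piping the output to `sh` (or replacing `echo` with direct execution) would start the pairs.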

At this point, tcpdump or iftop shows the traffic of the other machines on the same switch.

To keep the Snort box itself from being attacked, remove the IP addresses from the monitoring NICs so the sensor runs stealthily. Strip eth0, eth1, and eth2 in turn, leaving only the internal interface eth3:

ifdown eth1
vi /etc/sysconfig/network-scripts/ifcfg-eth1

#NETMASK=255.255.255.192 #IPADDR=66.84.77.8

ifup eth1

Start at boot: vi /etc/rc.local

/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth1 -l /var/log/snort1
barnyard2 -c /usr/local/snort/etc/barnyard2.conf -i eth1 -d /var/log/snort1 -f snort.log -D -w /var/log/snort1/barnyard.waldo
/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth0 -l /var/log/snort0
barnyard2 -c /usr/local/snort/etc/barnyard2.conf -i eth0 -d /var/log/snort0 -f snort.log -D -w /var/log/snort0/barnyard.waldo
/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth2 -l /var/log/snort2
barnyard2 -c /usr/local/snort/etc/barnyard2.conf -i eth2 -d /var/log/snort2 -f snort.log -D -w /var/log/snort2/barnyard.waldo

Example errors:

ERROR! dnet header not found, go get it from http://code.google.com/p/libdnet/ or use the --with-dnet-*

Fix: install libdnet

wget 'http://downloads.sourceforge.net/project/libdnet/libdnet/libdnet-1.11/libdnet-1.11.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibdnet%2Ffiles%2Flibdnet%2Flibdnet-1.11%2F&ts=1392967212&use_mirror=jaist'
tar zxvf libdnet-1.11.tar.gz
cd libdnet-1.11
./configure
make && make install

====================

/usr/local/lib/libz.a: could not read symbols: Bad value
collect2: ld returned 1 exit status

Fix: install zlib

wget http://nchc.dl.sourceforge.net/project/libpng/zlib/1.2.3/zlib-1.2.3.tar.gz
tar zxvf zlib-1.2.3.tar.gz
cd zlib-1.2.3
./configure
vi Makefile   # find the CFLAGS=xxxxx line and append -fPIC (passing CFLAGS="-O3 -fPIC" at configure time did not work)
make
make install
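The Makefile edit can be scripted instead of done by hand. A sketch (`add_fpic` is a hypothetical helper, demonstrated on a sample fragment since the real zlib Makefile layout may differ slightly):

```shell
# add_fpic: append -fPIC to any line that starts with "CFLAGS=".
add_fpic() {
  sed 's/^CFLAGS=.*/& -fPIC/'
}

printf 'CFLAGS=-O3\n' | add_fpic   # prints: CFLAGS=-O3 -fPIC

# Real usage inside the zlib source tree would be something like:
#   sed -i 's/^CFLAGS=.*/& -fPIC/' Makefile
```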

=======================

May 15 15:22:37 c1gstudio snort[29521]: S5: Pruned 35 sessions from cache for memcap. 5881 ssns remain. memcap: 8362032/8388608
May 15 15:22:38 c1gstudio snort[29521]: S5: Pruned 10 sessions from cache for memcap. 6038 ssns remain. memcap: 8388229/8388608
May 15 15:22:38 c1gstudio snort[29521]: S5: Pruned 5 sessions from cache for memcap. 6033 ssns remain. memcap: 8377128/8388608
May 15 15:22:38 c1gstudio snort[29521]: S5: Pruned 5 sessions from cache for memcap. 6029 ssns remain. memcap: 8362875/8388608
May 15 15:22:38 c1gstudio snort[29521]: S5: Pruned 10 sessions from cache for memcap. 6022 ssns remain. memcap: 8388607/8388608
May 15 15:22:38 c1gstudio snort[29521]: S5: Pruned 20 sessions from cache for memcap. 6002 ssns remain. memcap: 8379709/8388608

vi /usr/local/snort/etc/snort.conf

Raise memcap to 134217728 (128 MB):

# Target-Based stateful inspection/stream reassembly. For more information, see README.stream5
preprocessor stream5_global: track_tcp yes, \
    track_udp yes, \
    track_icmp no, \
    memcap 134217728, \
    max_tcp 262144, \
    max_udp 131072, \
    max_active_responses 2, \
    min_response_seconds 5
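The memcap value is given in bytes, so 128 MB is 128 * 1024 * 1024. Shell arithmetic confirms the number used above:

```shell
# 128 MB expressed in bytes, as required by stream5_global memcap:
echo $((128 * 1024 * 1024))   # prints: 134217728
```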

=====================

WARNING: /usr/local/snort/etc/snort.conf(512) => Keyword priority for whitelist is not applied when white action is unblack.
May 15 17:01:08 c1gstudio snort[12460]: Processing whitelist file /usr/local/snort/etc/rules/white_list.rules
May 15 17:01:08 c1gstudio snort[12460]: Reputation entries loaded: 1, invalid: 0, re-defined: 0 (from file /usr/local/snort/etc/rules/white_list.rules)
May 15 17:01:08 c1gstudio snort[12460]: Processing blacklist file /usr/local/snort/etc/rules/black_list.rules
May 15 17:01:08 c1gstudio snort[12460]: Reputation entries loaded: 0, invalid: 0, re-defined: 0 (from file /usr/local/snort/etc/rules/black_list.rules)
May 15 17:01:08 c1gstudio snort[12460]: Reputation total memory usage: 529052 bytes

Use an absolute path for WHITE_LIST_PATH: vi /usr/local/snort/etc/snort.conf

var WHITE_LIST_PATH /usr/local/snort/etc/rules
var BLACK_LIST_PATH /usr/local/snort/etc/rules

Black/white list example (though it did not work in my tests):

preprocessor reputation: \
    nested_ip both, \
    blacklist /etc/snort/default.blacklist, \
    whitelist /etc/snort/default.whitelist white trust

In file "default.blacklist":
# These two entries will match all ipv4 addresses
1.0.0.0/1
128.0.0.0/1

In file "default.whitelist":
68.177.102.22 # sourcefire.com
74.125.93.104 # google.com

================

May 15 23:29:32 c1gstudio snort[20203]: S5: Session exceeded configured max bytes to queue 1048576 using 1049895 bytes (server queue). 36.250.86.52 5917 -> 61.147.125.16 80 (0) : LWstate 0x1 LWFlags 0x2001
May 15 23:32:42 c1gstudio snort[20203]: S5: Pruned session from cache that was using 1108276 bytes (stale/timeout). 36.250.86.52 5917 -> 61.147.125.16 80 (0) : LWstate 0x1 LWFlags 0x212001
May 16 05:01:49 c1gstudio snort[20203]: S5: Session exceeded configured max bytes to queue 1048576 using 1049688 bytes (client queue). 69.196.253.30 3734 -> 61.147.125.16 80 (0) : LWstate 0x1 LWFlags 0x402003

max_queued_bytes defaults to 1048576 (1 MB); raise it to 10 MB:

preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \
    max_queued_bytes 10485760, \
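As with memcap, max_queued_bytes is in bytes; 10 MB works out to the value above:

```shell
# 10 MB in bytes for stream5_tcp max_queued_bytes:
echo $((10 * 1024 * 1024))   # prints: 10485760
```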

References:
http://www.ibm.com/developerworks/cn/web/wa-snort1/
http://www.ibm.com/developerworks/cn/web/wa-snort2/
http://www.snort.org/snort-downloads?
http://man.chinaunix.net/network/snort/Snortman.htm
http://blog.chinaunix.net/uid-286494-id-2134474.html
http://blog.chinaunix.net/uid-522598-id-1764389.html
http://sourceforge.net/p/snort/mailman/snort-users/thread/433A1D25-D6EE-4257-8CE6-3743395D05D0%40auckland.ac.nz/#msg26465706
http://manual.snort.org/



Using Nginx to Add a Header That Prevents Framing

You can add the X-Frame-Options header via PHP, Nginx, and others to control framing. X-Frame-Options takes three values:

DENY: the browser refuses to render the page inside any frame

SAMEORIGIN: the page may only be framed by pages from the same origin

ALLOW-FROM: specifies the origin that is allowed to frame the page

PHP code:

header('X-Frame-Options: Deny');

Nginx configuration:

add_header X-Frame-Options SAMEORIGIN;

It can also be placed inside a location block:
location / {
    add_header X-Frame-Options SAMEORIGIN;
}
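After reloading, the header is easy to verify. A sketch (`has_xfo` is a hypothetical helper that scans response headers on stdin; shown here against a canned response, with the real curl invocation commented):

```shell
# has_xfo: report whether the headers on stdin include X-Frame-Options.
has_xfo() {
  if grep -qi '^x-frame-options:'; then echo present; else echo missing; fi
}

printf 'HTTP/1.1 200 OK\r\nX-Frame-Options: SAMEORIGIN\r\n\r\n' | has_xfo   # prints: present

# Real usage against a live server:
#   curl -sI http://localhost/ | has_xfo
```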

Apache configuration:

Header always append X-Frame-Options SAMEORIGIN

Once enabled, pages that are not allowed to be framed render as a blank frame.

Reference:
https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options?redirectlocale=en-US&redirectslug=The_X-FRAME-OPTIONS_response_header



The OpenSSL "Heartbleed" Vulnerability

The OpenSSL "Heartbleed" vulnerability is very serious: it lets an attacker read up to 64 KB of memory per request. Security researchers report that OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable, so anyone running an HTTPS service should upgrade as soon as possible. The official site has released OpenSSL 1.0.1g: http://www.openssl.org/

OpenSSL 1.0.0 and OpenSSL 0.9.8 are not affected.
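A quick way to triage a host is to compare its version string against the affected range. A minimal sketch (pure string matching on the 1.0.1 through 1.0.1f series; `is_vulnerable` is a hypothetical helper):

```shell
# is_vulnerable: classify an OpenSSL version string against Heartbleed's
# affected range, 1.0.1 through 1.0.1f inclusive.
is_vulnerable() {
  case $1 in
    1.0.1|1.0.1[a-f]) echo vulnerable ;;
    *) echo "not affected" ;;
  esac
}

# Real usage:
#   is_vulnerable "$(openssl version | awk '{print $2}')"
```

This only inspects the version string; distributions that backport the fix without bumping the version would be misclassified, so treat it as a first pass.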

Reference: http://drops.wooyun.org/papers/1381



Forwarding PHP Error Logs to Splunk

vi /opt/php/etc/php.ini and switch error logging to syslog:

; Log errors to specified file.
;error_log = /opt/php/logs/php_error.log
; Log errors to syslog (Event Log on NT, not valid in Windows 95).
error_log = syslog

On CentOS 6 the daemon is rsyslog; the steps below are for syslog:
vi /etc/rsyslog.conf
vi /etc/syslog.conf

#php log
user.* /opt/php/logs/php_error.log
#splunk
*.* @192.168.0.39

192.168.0.39 is the Splunk host (see the earlier post on installing Splunk). Reload syslog and PHP, and PHP errors will be forwarded to Splunk while a local copy is kept on this machine. Log rotation is not set up here; add it yourself if you need it.

In Splunk you will see entries like:

01:37:13.000 Jan 14 01:37:13 192.168.0.24 php-cgi: PHP Warning: mkdir() [function.mkdir]: File exists in /opt/htdocs/c1gblog/globals/class_cache.php on line 1013
host=192.168.0.24 | sourcetype=syslog | source=tcp:1999 | process=php-cgi



Installing Taobao's Open-Source Web Server Tengine to Replace Nginx, with proxy_cache as a Front-End Proxy

Overview: Tengine is a web server project started by Taobao. Building on Nginx, it adds many advanced features aimed at high-traffic sites. Its performance and stability have been proven on large sites such as Taobao and Tmall. Its goal is an efficient, stable, secure, and easy-to-use web platform.

Current stable release [2013-11-22]: Tengine-1.5.2. Features:
- Inherits all features of Nginx 1.2.9 and is 100% compatible with Nginx configuration
- Dynamic module loading (DSO); adding a module no longer requires recompiling all of Tengine
- Streams uploads to HTTP or FastCGI backends, greatly reducing machine I/O pressure
- Stronger load balancing, including consistent-hash and session-persistence modules, plus active health checks that take backends online and offline automatically
- Input filter mechanism, which makes writing a web application firewall easier
- Lua scripting support for simple, efficient extension
- Pipe and syslog (local and remote) logging, plus log sampling
- Combines requests for multiple CSS/JavaScript files into a single request
- Strips whitespace and comments automatically to shrink pages
- Sets the worker count from the CPU count automatically and binds CPU affinity
- Monitors system load and resource usage to protect the system
- Friendlier error messages for operators, making it easier to locate the failing machine
- Stronger anti-attack (rate limiting) modules
- More convenient command-line options, such as listing compiled modules and supported directives
- Expiry times configurable per file type
- …

Installing jemalloc can improve performance

cd /root/src/toolkits/
wget http://www.canonware.com/download/jemalloc/jemalloc-3.4.1.tar.bz2
tar jxvf jemalloc-3.4.1.tar.bz2
cd jemalloc-3.4.1
./configure --prefix=/usr/local/jemalloc-3.4.1
make && make install
ldconfig

GeoIP whitelist

wget http://geolite.maxmind.com/download/geoip/api/c/GeoIP.tar.gz
tar -zxvf GeoIP.tar.gz
cd GeoIP-1.4.6
./configure
make; make install
ldconfig

Add the purge module when using proxy_cache

wget http://labs.frickle.com/files/ngx_cache_purge-2.1.tar.gz
tar zxvf ngx_cache_purge-2.1.tar.gz
# passed to Tengine's configure later as: --add-module=../ngx_cache_purge-2.1

The backend nginx must be compiled with --with-http_realip_module to recover the real client IP, and must list the trusted sources:

set_real_ip_from 61.199.67.2;   # front-end IP
set_real_ip_from 192.168.0.111; # front-end IP
real_ip_header X-Real-IP;

Build and install Tengine (the jemalloc path is the build path):

wget http://tengine.taobao.org/download/tengine-1.5.1.tar.gz
tar zxvf tengine-1.5.1.tar.gz
cd tengine-1.5.1
./configure --user=www --group=website --prefix=/opt/tengine-1.5.1 \
  --add-module=../ngx_cache_purge-2.1 \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --with-http_realip_module \
  --with-http_concat_module=shared \
  --with-http_sysguard_module=shared \
  --with-http_limit_conn_module=shared \
  --with-http_limit_req_module=shared \
  --with-http_footer_filter_module=shared \
  --with-http_upstream_ip_hash_module=shared \
  --with-http_upstream_least_conn_module=shared \
  --with-http_upstream_session_sticky_module=shared \
  --with-jemalloc=/root/src/lempelf/packages/jemalloc-3.4.1
make
make install

GeoIP data

cd /opt/tengine-1.5.1/conf
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz
gunzip GeoIP.dat.gz
chgrp -R website /opt/tengine-1.5.1/conf
chmod -R 764 /opt/tengine-1.5.1/conf
chmod 774 /opt/tengine-1.5.1/conf

Copy the original nginx configuration files over to Tengine

cd /opt/nginx/conf
cp awstats.conf fcgi.conf htpasswd block.conf nginx.conf /opt/tengine-1.5.1/conf/

Check the configuration file

/opt/tengine-1.5.1/sbin/nginx -t -c /opt/tengine-1.5.1/conf/nginx.conf
nginx: [emerg] unknown directive "limit_zone" in /opt/tengine-1.5.1/conf/nginx.conf:71
nginx: [emerg] unknown directive "limit_conn" in /opt/tengine-1.5.1/conf/nginx.conf:136

If you see these errors, remove the old limit_zone/limit_conn configuration; ngx_http_limit_conn_module uses new directives in this version.

Enable the new features: vi /opt/tengine-1.5.1/conf/nginx.conf. Set the worker count automatically from the CPU count and bind CPU affinity:

worker_processes auto;
worker_cpu_affinity auto;

Hide server information

server_info off;
server_tag off;

ngx_http_sysguard_module: system overload protection

sysguard on;
sysguard_load load=10.5 action=/loadlimit;
sysguard_mem swapratio=20% action=/swaplimit;
sysguard_mem free=100M action=/freelimit;
location /loadlimit { return 503; }
location /swaplimit { return 503; }
location /freelimit { return 503; }

ngx_http_limit_req_module: request rate limiting

limit_req_zone $binary_remote_addr zone=one:3m rate=1r/s;
limit_req_zone $binary_remote_addr $uri zone=two:3m rate=1r/s;
limit_req_zone $binary_remote_addr $request_uri zone=three:3m rate=1r/s;
location / {
    limit_req zone=one burst=5;
    limit_req zone=two forbid_action=@test1;
    limit_req zone=three burst=3 forbid_action=@test2;
}
location /off {
    limit_req off;
}
location @test1 {
    rewrite ^ /test1.html;
}
location @test2 {
    rewrite ^ /test2.html;
}

Remove the old nginx symlink and point a new one at Tengine:

rm /opt/nginx
ln -s /opt/tengine-1.5.1 /opt/nginx

A complete nginx.conf

user www website;
worker_processes auto;
worker_cpu_affinity auto;
error_log /var/log/nginx/nginx_error.log error;
pid /dev/shm/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
dso {
    load ngx_http_footer_filter_module.so;
    load ngx_http_limit_conn_module.so;
    load ngx_http_limit_req_module.so;
    load ngx_http_sysguard_module.so;
    load ngx_http_upstream_ip_hash_module.so;
    load ngx_http_upstream_least_conn_module.so;
    load ngx_http_upstream_session_sticky_module.so;
}
events {
    use epoll;
    worker_connections 51200;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';
    open_log_file_cache max=1000 inactive=20s min_uses=2 valid=1m;
    server_names_hash_bucket_size 128;
    #linux 2.4+
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    #tengine
    server_info off;
    server_tag off;
    #server_tag Apache;
    server_tokens off;
    server_name_in_redirect off;
    keepalive_timeout 60;
    client_header_buffer_size 16k;
    client_body_timeout 60;
    client_max_body_size 8m;
    large_client_header_buffers 4 32k;
    fastcgi_intercept_errors on;
    fastcgi_hide_header X-Powered-By;
    fastcgi_connect_timeout 180;
    fastcgi_send_timeout 180;
    fastcgi_read_timeout 180;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 128K;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_temp_path /dev/shm;
    #open_file_cache max=51200 inactive=20s;
    #open_file_cache_valid 30s;
    #open_file_cache_min_uses 2;
    #open_file_cache_errors off;
    gzip on;
    gzip_min_length 1k;
    gzip_comp_level 5;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_proxied any;
    limit_req_log_level error;
    limit_req_zone $binary_remote_addr $uri zone=two:30m rate=10r/s;
    # rate-limit whitelist
    geo $white_ip {
        #ranges;
        default 0;
        127.0.0.1/32 1;
        182.55.21.28/32 1;
        192.168.0.0/16 1;
        61.199.67.0/24 1;
    }
    client_body_buffer_size 512k;
    proxy_connect_timeout 5;
    proxy_read_timeout 60;
    proxy_send_timeout 5;
    proxy_buffer_size 16k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    # Note: the paths given to proxy_temp_path and proxy_cache_path must be on the same partition
    proxy_temp_path /opt/nginx/proxy_temp_dir;
    # Cache zone cache_www: 3000 MB of shared memory for keys; entries idle for a day are purged; up to 20 GB cached on disk.
    proxy_cache_path /opt/nginx/proxy_cache_www levels=1:2 keys_zone=cache_www:3000m inactive=1d max_size=20g;
    upstream www_server {
        server 192.168.0.131:80;
    }
    server {
        listen 80 default;
        server_name _;
        return 444;
        access_log off;
    }
    server {
        listen 80;
        server_name www.c1gstudio.com;
        index index.html index.htm index.php;
        root /opt/htdocs/www;
        access_log /var/log/nginx/proxy.www.c1gstudio.com.log access buffer=24k;
        if (-d $request_filename) {
            rewrite ^/(.*)([^/])$ http://$host/$1$2/ permanent;
        }
        limit_req_whitelist geo_var_name=white_ip geo_var_value=1;
        limit_req zone=two burst=50 forbid_action=/visitfrequently.html;
        location @visitfrequently {
            rewrite ^ /visitfrequently.html;
        }
        location ~ /\.ht {
            deny all;
        }
        # Cache purging: for a URL such as http://192.168.8.42/test.txt, requesting http://192.168.8.42/purge/test.txt clears that URL's cache entry.
        location ~ /purge(/.*) {
            # Only the listed IPs or networks may purge URL caches.
            allow 127.0.0.1;
            allow 192.168.0.0/16;
            deny all;
            proxy_cache_purge cache_www $host$1$is_args$args;
            error_page 405 =200 /purge$1; # handle the 405 raised by squidclient purge
        }
        if ($request_method = "PURGE") {
            rewrite ^(.*)$ /purge$1 last;
        }
        location / {
            error_page 502 504 /502.html;
            proxy_set_header Host $host;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            add_header X-Cache Cache-Skip;
        }
        location ~ 404\.html$ {
            proxy_set_header Host $host;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            add_header X-Cache Cache-Skip;
        }
        location ~ .*\.(htm|html|)?$ {
            # On backend 502/504/timeout errors, fail over to another server in the upstream pool.
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache cache_www;
            # Different cache lifetimes per HTTP status code
            proxy_cache_valid 200 304 5m;
            # The cache key combines host, URI, and args; nginx hashes the key and stores entries in the two-level cache directory
            proxy_cache_key $host$uri$is_args$args;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            # honor backend expires
            proxy_ignore_headers "Cache-Control" "Expires";
            add_header X-Cache Cache;
        }
        location ~* ^.+\.(jpg|jpeg|gif|png|rar|zip|css|js)$ {
            valid_referers none blocked *.c1gstudio.com;
            if ($invalid_referer) {
                rewrite ^/ http://leech.c1gstudio.com/leech.gif;
                return 412;
                break;
            }
            access_log off;
            # On backend 502/504/timeout errors, fail over to another server in the upstream pool.
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache cache_www;
            # Different cache lifetimes per HTTP status code
            proxy_cache_valid 200 304 5m;
            # The cache key combines host, URI, and args
            proxy_cache_key $host$uri$is_args$args;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            #proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://www_server;
            # honor backend expires
            proxy_ignore_headers "Cache-Control" "Expires";
            add_header X-Cache Cache;
        }
    }
}

Start Tengine: /opt/nginx/sbin/nginx

Watching top afterwards, the load dropped considerably.

=========== Update 2014/1/3 ============= If the load swings between high and low there may be an I/O bottleneck; moving proxy_cache into /dev/shm can fix it (/dev/shm defaults to half of RAM). Create the directory and add it to the boot sequence:

mkdir /dev/shm/nginx

vi /etc/rc.local
Before the nginx start line, add: mkdir /dev/shm/nginx

Edit nginx.conf

proxy_temp_path /dev/shm/nginx/proxy_temp_dir;
# Cache zone cache_www: 3000 MB of shared memory for keys; entries idle for a day are purged; up to 20 GB cached.
proxy_cache_path /dev/shm/nginx/proxy_cache_www levels=1:2 keys_zone=cache_www:3000m inactive=1d max_size=20g;



Adding SPF and DKIM to a Postfix Mail Server

Google announced that 91.4% of the mail delivered to Gmail already uses the DKIM or SPF anti-forgery standards.

SPF: add TXT records in the domain's DNS. Using the mail server mta.c1gstudio.com, IP 208.133.199.7, as an example:
1. Point the SPF record at the host's A record:
@ IN TXT "v=spf1 include:mta.c1gstudio.com -all"

2. Add the SPF IP under the mta record (the IP is the mail server's public address):
mta IN TXT "v=spf1 ip4:208.133.199.7 -all"
For an IP range:
mta IN TXT "v=spf1 ip4:208.133.199.0/24 -all"

Verify with nslookup. On Windows: cmd -> nslookup. Set the query type to txt (SPF records are TXT records): set type=txt. Enter the domain whose SPF record you want; here that is 163.com (for an address like [email protected] you would query yy.com). The "include" keyword in the result shows that 163.com's SPF record is contained in the TXT record of spf.163.com, so query the TXT record of spf.163.com next. The final result lists the IPs; check that all of your sending IPs are included.
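The include-chasing step in that walkthrough is simple string processing. A sketch (`spf_includes` is a hypothetical helper that pulls the include: targets out of an SPF TXT value):

```shell
# spf_includes: print each include: target of an SPF record, one per line.
spf_includes() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^include://p'
}

spf_includes 'v=spf1 include:spf.163.com -all'   # prints: spf.163.com
```

Each printed domain would then be queried for its own TXT record, exactly as the nslookup walkthrough does by hand.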

A neutral result looks like:
Received-SPF: neutral (google.com: 60.195.249.163 is neither permitted nor denied by domain of [email protected])
After sending a message to Gmail, check the headers; SPF: pass means the check succeeded:
Received-SPF: pass (google.com: domain of #####@yeah.net designates 60.12.227.137 as permitted sender

That completes SPF.

DKIM: installing OpenDKIM from the EPEL repository is quick and convenient.
For RHEL 5 / CentOS 5:
rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
For RHEL 6 / CentOS 6:
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Installing opendkim adds an opendkim user and group; if the password file /etc/passwd is locked, remember to unlock it first.

yum clean all
yum install opendkim

Installing:
  opendkim x86_64 2.8.4-1.el5 epel 262 k
Installing for dependencies:
  ldns x86_64 1.6.16-1.el5 epel 507 k
  libopendkim x86_64 2.8.4-1.el5 epel 75 k
  unbound-libs x86_64 1.4.20-2.el5 epel 258 k

After installation, the following commands generate a key pair in the current directory as two files: default.private and default.txt.

1. Generate the keys:
cd /etc/opendkim/keys
opendkim-genkey -r -d mta.c1gstudio.com
default.txt contains something like:

default._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDTEMk2ow/WZuQheCMxZrowErith/LVsaWLVhkkrONT2S4LZcuuVbgCkr2EGaoJUNjc/Ztu0Z47RPbLDkGGjtCwy6VnUeKWh7ijwu2+AHyq6mJIn33z7TGz90Io0o30PjQLSeO1E/ozRpBSljvJMikYUDYfZpWZSxcIhtOetOp8mwIDAQAB" ) ; ----- DKIM key default for mta.c1gstudio.com
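DNS panels usually want the key as one continuous string, while default.txt splits it into quoted chunks. A sketch that joins the quoted pieces (`join_txt` is a hypothetical helper, shown here on a shortened sample key):

```shell
# join_txt: concatenate the "..."-quoted chunks of a BIND-style TXT record.
join_txt() {
  tr -d '\n' | awk -F'"' '{ for (i = 2; i <= NF; i += 2) printf "%s", $i }'
}

printf '( "v=DKIM1; k=rsa; " "p=MIGfMA0G" ) ;\n' | join_txt   # prints: v=DKIM1; k=rsa; p=MIGfMA0G

# Real usage:
#   join_txt < /etc/opendkim/keys/default.txt
```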

2. Fix the file ownership:
chown opendkim default.private

3. Edit the configuration file:
vi /etc/opendkim.conf

Mode sv
UserID opendkim:opendkim
Socket inet:8891@localhost
Canonicalization relaxed/simple
Domain mta.c1gstudio.com
KeyFile /etc/opendkim/keys/default.private
KeyTable refile:/etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
InternalHosts refile:/etc/opendkim/TrustedHosts
SyslogSuccess No
LogWhy No

4. Trusted sending IP addresses:
vi /etc/opendkim/TrustedHosts

127.0.0.1
#hostname1.example1.com
192.168.1.0/24

5. Private key table:
vi /etc/opendkim/KeyTable

default._domainkey.mta.c1gstudio.com mta.c1gstudio.com:default:/etc/opendkim/keys/default.private

6. With multiple domains you can specify which public key each signs with:
vi /etc/opendkim/SigningTable

*@mta.c1gstudio.com [email protected]
*@c1gstudio.com [email protected]

7. Start opendkim:
/etc/init.d/opendkim start
Starting OpenDKIM Milter: [ OK ]

Enable it at boot: chkconfig opendkim on

8. Edit Postfix, appending at the end:
vi /etc/postfix/main.cf

#################dkim###################
smtpd_milters = inet:localhost:8891
milter_default_action = accept
milter_protocol = 2
non_smtpd_milters = inet:localhost:8891

If the Postfix version is older than 2.6, milter_protocol = 2 must be set.

9. Reload Postfix:
/usr/local/postfix/sbin/postfix reload

10. Configure DNS: add the public key to a TXT record for mta.c1gstudio.com:

default._domainkey IN TXT ( "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDTEMk2ow/WZuQheCMxZrowErith/LVsaWLVhkkrONT2S4LZcuuVbgCkr2EGaoJUNjc/Ztu0Z47RPbLDkGGjtCwy6VnUeKWh7ijwu2+AHyq6mJIn33z7TGz90Io0o30PjQLSeO1E/ozRpBSljvJMikYUDYfZpWZSxcIhtOetOp8mwIDAQAB" )

A gripe: Bizcn's domain panel cannot create host names containing an underscore ("_"), so I switched to DNSPod.

11. Verify the domain at http://dkimcore.org/tools/ (Check a published DKIM Core Key) with Selector: default and Domain name: mail.c1gstudio.com

12. Mail test:
tail -f /var/log/maillog
Dec 12 17:36:48 c1g opendkim[30546]: 9F36E1A181C8: DKIM-Signature field added (s=default, d=mta.c1gstudio.com)

Then send messages to Gmail and Hotmail and check whether DKIM signed them. For Gmail, inspect the headers and look for spf=pass and dkim=pass:

Received-SPF: pass (google.com: domain of [email protected] designates 208.133.199.7 as permitted sender) client-ip=208.133.199.7;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 208.133.199.7 as permitted sender) [email protected]; dkim=pass [email protected]
Received: from c1g (c1g [208.133.199.7]) by mta.c1gstudio.com (Postfix) with ESMTP id C3BB21A1825C for ; Fri, 13 Dec 2013 10:47:02 +0800 (CST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=mta.c1gstudio.com; s=default; t=1386902822; bh=spJ3t+Wlf7qUYsyf9zBZsdaztFW9Vj9zn2URNGhfZ3o=; h=Date:From:To:Subject; b=R7gyGPsyKqNq3ceDTQYq958qRkdJE04xKCCKOseCb/Gi5v9rLr6jyZyUWsP5PstqX WhfTjErGippcxJEe84B4yTOE0gYDRJB//I8pEF+jux/qF7JhmIc3+Cs/yqI powsNbGdwkJaB3WlCPwroBk88/wP8W64tWda4oyGTVYZzNU=

Troubleshooting:
1. tail -f /var/log/maillog shows "opendkim: no signature data": mail is not being signed; set the mode in opendkim.conf to sv:
Mode sv

2. Dec 13 16:15:03 localhost opendkim[3248]: can't load key from /etc/opendkim/keys/default.private: Permission denied
cd /etc/opendkim/keys/
chown opendkim default.private

References:
http://mail.163.com/hd/all/chengxin/3.htm
http://stevejenkins.com/blog/2011/08/installing-opendkim-rpm-via-yum-with-postfix-or-sendmail-for-rhel-centos-fedora/
http://www.banping.com/2011/07/19/postfix-dkim/
http://www.info110.com/mailserver/in27377-1.htm



Testing Server Performance with UnixBench

UnixBench is a solid performance benchmarking tool for Linux.

Test machine configuration

Dell R410: 5620 (4 cores, 2.4 GHz) *2, 8G *2, SAS 15K 300G *2 (no RAID), 7.2K SAS 2T *1
CentOS 5.8 64bit
Linux LocalIm 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

wget http://byte-unixbench.googlecode.com/files/unixbench-5.1.2.tar.gz
tar zxvf unixbench-5.1.2.tar.gz
cd unixbench-5.1.2
make
./Run

UnixBench then starts the benchmark. The runtime varies with the number of CPU cores; this run took about an hour.

make all
make[1]: Entering directory `/root/src/toolkits/unixbench-5.1.2'
Checking distribution of files
./pgms exists
./src exists
./testdir exists
./tmp exists
./results exists
make[1]: Leaving directory `/root/src/toolkits/unixbench-5.1.2'
sh: 3dinfo: command not found

[UnixBench ASCII-art banner]

Version 5.1.2
Based on the Byte Magazine Unix Benchmark
Multi-CPU version
Version 5 revisions by Ian Smith, Sunnyvale, CA, USA
December 22, 2007 johantheghost at yahoo period com

1 x Dhrystone 2 using register variables 1 2 3 4 5 6 7 8 9 10
1 x Double-Precision Whetstone 1 2 3 4 5 6 7 8 9 10
1 x Execl Throughput 1 2 3
1 x File Copy 1024 bufsize 2000 maxblocks 1 2 3
1 x File Copy 256 bufsize 500 maxblocks 1 2 3
1 x File Copy 4096 bufsize 8000 maxblocks 1 2 3
1 x Pipe Throughput 1 2 3 4 5 6 7 8 9 10
1 x Pipe-based Context Switching 1 2 3 4 5 6 7 8 9 10
1 x Process Creation 1 2 3
1 x System Call Overhead 1 2 3 4 5 6 7 8 9 10
1 x Shell Scripts (1 concurrent) 1 2 3
1 x Shell Scripts (8 concurrent) 1 2 3
16 x Dhrystone 2 using register variables 1 2 3 4 5 6 7 8 9 10
16 x Double-Precision Whetstone 1 2 3 4 5 6 7 8 9 10
16 x Execl Throughput 1 2 3
16 x File Copy 1024 bufsize 2000 maxblocks 1 2 3
16 x File Copy 256 bufsize 500 maxblocks 1 2 3
16 x File Copy 4096 bufsize 8000 maxblocks 1 2 3
16 x Pipe Throughput 1 2 3 4 5 6 7 8 9 10
16 x Pipe-based Context Switching 1 2 3 4 5 6 7 8 9 10
16 x Process Creation 1 2 3
16 x System Call Overhead 1 2 3 4 5 6 7 8 9 10
16 x Shell Scripts (1 concurrent) 1 2 3
16 x Shell Scripts (8 concurrent) 1 2 3

========================================================================
BYTE UNIX Benchmarks (Version 5.1.2)

System: Impala: GNU/Linux
OS: GNU/Linux -- 2.6.18-308.el5 -- #1 SMP Tue Feb 21 20:06:06 EST 2012
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (4788.1 bogomips)
CPUs 1-15: identical Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (~4788.0 bogomips each)
All CPUs: Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
14:24:24 up 324 days, 22:35, 1 user, load average: 0.07, 0.02, 0.06; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Fri Nov 08 2013 14:24:24 - 14:52:38
16 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        14762933.9 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                      1509.6 MWIPS (8.7 s, 7 samples)
Execl Throughput                                2427.3 lps   (29.7 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         597721.9 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           168768.0 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        1430654.0 KBps  (30.0 s, 2 samples)
Pipe Throughput                              1166940.5 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                  105565.7 lps   (10.0 s, 7 samples)
Process Creation                                8799.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                    5298.7 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    2831.9 lpm   (60.0 s, 2 samples)
System Call Overhead                          943466.8 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   14762933.9   1265.0
Double-Precision Whetstone                       55.0       1509.6    274.5
Execl Throughput                                 43.0       2427.3    564.5
File Copy 1024 bufsize 2000 maxblocks          3960.0     597721.9   1509.4
File Copy 256 bufsize 500 maxblocks            1655.0     168768.0   1019.7
File Copy 4096 bufsize 8000 maxblocks          5800.0    1430654.0   2466.6
Pipe Throughput                               12440.0    1166940.5    938.1
Pipe-based Context Switching                   4000.0     105565.7    263.9
Process Creation                                126.0       8799.5    698.4
Shell Scripts (1 concurrent)                     42.4       5298.7   1249.7
Shell Scripts (8 concurrent)                      6.0       2831.9   4719.8
System Call Overhead                          15000.0     943466.8    629.0
========
System Benchmarks Index Score                                         940.2

------------------------------------------------------------------------
Benchmark Run: Fri Nov 08 2013 14:52:38 - 15:21:02
16 CPUs in system; running 16 parallel copies of tests

Dhrystone 2 using register variables       109893471.8 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     19203.2 MWIPS (9.2 s, 7 samples)
Execl Throughput                               40389.3 lps   (29.7 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         232275.2 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks            66058.1 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks         701683.9 KBps  (30.0 s, 2 samples)
Pipe Throughput                             11422586.8 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 2899346.4 lps   (10.0 s, 7 samples)
Process Creation                              115952.7 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   42915.9 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    8344.0 lpm   (60.0 s, 2 samples)
System Call Overhead                         5982682.0 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0  109893471.8   9416.7
Double-Precision Whetstone                       55.0      19203.2   3491.5
Execl Throughput                                 43.0      40389.3   9392.9
File Copy 1024 bufsize 2000 maxblocks          3960.0     232275.2    586.6
File Copy 256 bufsize 500 maxblocks            1655.0      66058.1    399.1
File Copy 4096 bufsize 8000 maxblocks          5800.0     701683.9   1209.8
Pipe Throughput                               12440.0   11422586.8   9182.1
Pipe-based Context Switching                   4000.0    2899346.4   7248.4
Process Creation                                126.0     115952.7   9202.6
Shell Scripts (1 concurrent)                     42.4      42915.9  10121.7
Shell Scripts (8 concurrent)                      6.0       8344.0  13906.7
System Call Overhead                          15000.0    5982682.0   3988.5
========
System Benchmarks Index Score                                        4199.4

Dell R420: 2420 (6 cores, 1.9 GHz) *2, 8G *2, SAS 15K 300G *2
noraid centos 5.8 64bit Linux loclaP 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux BYTE UNIX Benchmarks (Version 5.1.2) System: ParkAvenue: GNU/Linux OS: GNU/Linux — 2.6.18-308.el5 — #1 SMP Tue Feb 21 20:06:06 EST 2012 Machine: x86_64 (x86_64) Language: en_US.utf8 (charmap=”UTF-8″, collate=”UTF-8″) CPU 0: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 1: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 2: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 3: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 4: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.2 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 5: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 6: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 7: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 8: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 9: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical 
Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 10: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 11: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 12: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 13: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 14: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 15: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 16: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 17: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 18: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 19: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 20: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 21: Intel(R) 
Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.0 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 22: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization CPU 23: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (3800.1 bogomips) Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization 12:01:45 up 21:39, 1 user, load average: 0.07, 0.05, 0.01; runlevel 3 ———————————————————————— Benchmark Run: Fri Dec 06 2013 12:01:45 – 12:29:21 24 CPUs in system; running 1 parallel copy of tests Dhrystone 2 using register variables 13721062.3 lps (10.0 s, 7 samples) Double-Precision Whetstone 2121.9 MWIPS (7.2 s, 7 samples) Execl Throughput 2052.4 lps (29.8 s, 2 samples) File Copy 1024 bufsize 2000 maxblocks 567410.8 KBps (30.0 s, 2 samples) File Copy 256 bufsize 500 maxblocks 157374.9 KBps (30.0 s, 2 samples) File Copy 4096 bufsize 8000 maxblocks 1499112.8 KBps (30.0 s, 2 samples) Pipe Throughput 966163.2 lps (10.0 s, 7 samples) Pipe-based Context Switching 64456.9 lps (10.0 s, 7 samples) Process Creation 6327.8 lps (30.0 s, 2 samples) Shell Scripts (1 concurrent) 4121.4 lpm (60.0 s, 2 samples) Shell Scripts (8 concurrent) 2182.9 lpm (60.0 s, 2 samples) System Call Overhead 801849.7 lps (10.0 s, 7 samples) System Benchmarks Index Values BASELINE RESULT INDEX Dhrystone 2 using register variables 116700.0 13721062.3 1175.8 Double-Precision Whetstone 55.0 2121.9 385.8 Execl Throughput 43.0 2052.4 477.3 File Copy 1024 bufsize 2000 maxblocks 3960.0 567410.8 1432.9 File Copy 256 bufsize 500 maxblocks 1655.0 157374.9 950.9 File Copy 4096 bufsize 8000 maxblocks 5800.0 1499112.8 2584.7 Pipe Throughput 12440.0 966163.2 776.7 Pipe-based Context Switching 4000.0 64456.9 161.1 Process Creation 126.0 6327.8 502.2 Shell Scripts (1 concurrent) 42.4 4121.4 972.0 Shell Scripts (8 
concurrent) 6.0 2182.9 3638.1 System Call Overhead 15000.0 801849.7 534.6 ======== System Benchmarks Index Score 818.6
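For reference, UnixBench derives the overall score as the geometric mean of the twelve per-test indices, where each index is result / baseline × 10. You can verify this against the first run's indices with a one-liner (indices copied from the table above):

```shell
# Geometric mean = exp(mean of logs); the result should be within rounding
# of the reported 940.2 for the 1-copy run on the E5620 machine.
echo "1265.0 274.5 564.5 1509.4 1019.7 2466.6 938.1 263.9 698.4 1249.7 4719.8 629.0" |
awk '{ s = 0; for (i = 1; i <= NF; i++) s += log($i); printf "%.1f\n", exp(s / NF) }'
```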

Posted in Testing.



How do you stop Postfix from sending mail to particular domains?

The mail log shows messages destined for invalid domains that might as well be discarded outright:

5305 example.com[93.184.216.119]:25: Connection timed out
1265 mail.qq.co[50.23.198.74]:25: Connection refused
 394 Host not found, try again
 374 qqq.com[98.126.157.163]:25: Connection refused
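A summary like the one above can be produced from the Postfix log itself. This is a sketch: the log path varies by distribution (often /var/log/maillog), and the sample lines below stand in for real smtp-client log entries in the typical format.

```shell
# Count deferred-delivery reasons, most frequent first.
# /tmp/maillog.sample stands in for the real mail log.
cat > /tmp/maillog.sample <<'EOF'
Nov  8 12:00:01 mx postfix/smtp[1201]: 0AB12: to=<user1@qqq.com>, relay=none, status=deferred (connect to qqq.com[98.126.157.163]:25: Connection refused)
Nov  8 12:00:05 mx postfix/smtp[1202]: 0AB13: to=<user2@qqq.com>, relay=none, status=deferred (connect to qqq.com[98.126.157.163]:25: Connection refused)
Nov  8 12:01:09 mx postfix/smtp[1203]: 0AB14: to=<user3@example.com>, relay=none, status=deferred (connect to example.com[93.184.216.119]:25: Connection timed out)
EOF
# Split on parentheses: field 2 is the deferral reason.
awk -F'[()]' '/status=deferred/ { print $2 }' /tmp/maillog.sample | sort | uniq -c | sort -rn
```

Against a real log, replace /tmp/maillog.sample with your mail log path; the domains that dominate the output are the candidates for the access table below.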

Set up a check_recipient_access lookup table by adding check_recipient_access hash:/etc/postfix/recipient_access to the smtpd_recipient_restrictions parameter. For example:

smtpd_recipient_restrictions =
    check_recipient_access hash:/etc/postfix/recipient_access,
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_invalid_hostname,
    reject_non_fqdn_hostname,
    reject_unknown_sender_domain,
    reject_non_fqdn_sender,
    reject_non_fqdn_recipient,
    reject_unknown_recipient_domain,
    reject_unauth_pipelining,
    reject_unauth_destination

The contents of /etc/postfix/recipient_access:

# reject a specific address
[email protected]   REJECT
# reject an entire domain
example.com          REJECT

Run postmap /etc/postfix/recipient_access, then postfix reload.

Posted in Mail/Postfix.



Monitoring memcached with Nagios

Download link for the Nagios check_memcached plugin:

http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz

The plugin is written in Perl, so first make sure a Perl environment is available on the machine. Installation:

#wget http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz
#tar xzvf Nagios-Plugins-Memcached-0.02.tar.gz
#cd Nagios-Plugins-Memcached-0.02
#perl Makefile.PL

Makefile.PL asks a few questions; pressing Enter through them accepts the defaults.

#make

This step downloads the modules needed at run time.

#make install

By default the check_memcached script is installed as /usr/bin/check_memcached; create a symlink to it in the Nagios libexec directory:

ln -s /usr/bin/check_memcached /usr/local/nagios/libexec/

Test it:

/usr/local/nagios/libexec/check_memcached -H 192.168.0.40:12111
MEMCACHED OK - OK

If it complains that Nagios::Plugin is missing,

you can install the Nagios::Plugin Perl module (and its dependencies) through CPAN:

perl -MCPAN -e shell
install File::Slurp
install YAML
install HTML::Parser
install URI
install Compress::Zlib
install Module::Runtime
install Module::Implementation
install Attribute::Handlers
install Params::Validate
install Nagios::Plugin

To re-run CPAN's initial configuration, use "o conf init" at the cpan prompt.

Once the test passes, add the following to the Nagios commands.cfg configuration file:

define command {
    command_name check_memcached_response
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ -w $ARG3$ -c $ARG4$
}
define command {
    command_name check_memcached_size
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --size-warning $ARG3$ --size-critical $ARG4$
}
define command {
    command_name check_memcached_hit
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --hit-warning $ARG3$ --hit-critical $ARG4$
}

Then add the following to the cfg file of the host being monitored:

define service{
    use                 local-service,srv-pnp   ; Name of service template to use
    host_name           memcachedserver
    service_description Memcached_response
    check_command       check_memcached_response!192.168.0.40!12111!300!500
}
define service{
    use                 local-service,srv-pnp   ; Name of service template to use
    host_name           memcachedserver
    service_description Memcached_size
    check_command       check_memcached_size!192.168.0.40!12111!90!95
}
define service{
    use                 local-service,srv-pnp   ; Name of service template to use
    host_name           memcachedserver
    service_description Memcached_hit
    check_command       check_memcached_hit!192.168.0.40!12111!10!5
}
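The hit thresholds (10 and 5 above) are, as I understand the plugin, percentages of the cache hit ratio derived from memcached's get_hits/get_misses counters. The arithmetic can be sketched like this (the counter values here are made-up examples, not from a live server):

```shell
# hit ratio (%) = get_hits / (get_hits + get_misses) * 100
hits=9000; misses=1000          # example values from memcached's `stats` output
ratio=$(( hits * 100 / (hits + misses) ))
echo "hit ratio: ${ratio}%"     # prints: hit ratio: 90%
```

With the service above, a ratio below 10% would raise a warning and below 5% a critical alert.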

After running /etc/init.d/nagios reload the new services show up. Note that this plugin currently does not support PNP graphing.

References:
http://simblog.vicp.net/?p=250
http://blog.csdn.net/deccmtd/article/details/6799647

Posted in Nagios.



svn Compression of svndiff data failed

svn refuses to commit changes, reporting the error: Compression of svndiff data failed

Investigation showed that Apache on the SVN server was leaking memory; restarting Apache fixed it.
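A quick way to spot such a leak before it bites is to total the resident memory of the Apache workers. This is a sketch assuming the processes are named httpd (as on CentOS; on Debian-family systems they are apache2):

```shell
# Sum resident set size (RSS) of all httpd workers, in KB.
# Prints "0 KB" if no httpd processes are running.
ps -o rss= -C httpd | awk '{ s += $1 } END { print s + 0, "KB" }'
```

Watching this number climb steadily between restarts is the telltale sign of the leak.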

Posted in Subversion.
