

How do I block Postfix from sending mail to certain domains?

The mail log shows repeated delivery attempts to invalid domains whose mail can simply be dropped:

5305 example.com[93.184.216.119]:25: Connection timed out
1265 mail.qq.co[50.23.198.74]:25: Connection refused
394 Host not found, try again
374 qqq.com[98.126.157.163]:25: Connection refused
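Counts like the ones above can be pulled straight out of the Postfix log with a sort | uniq pipeline. A minimal sketch; the real log normally lives at /var/log/maillog, and the sample lines here are made up:

```shell
# Build a tiny synthetic log so the pipeline is self-contained.
cat > /tmp/maillog <<'EOF'
May  1 10:00:01 mx postfix/smtp[111]: connect to example.com[93.184.216.119]:25: Connection timed out
May  1 10:00:05 mx postfix/smtp[112]: connect to example.com[93.184.216.119]:25: Connection timed out
May  1 10:01:07 mx postfix/smtp[113]: connect to qqq.com[98.126.157.163]:25: Connection refused
EOF

# Keep only failed deliveries, strip the per-message prefix,
# then count identical failure lines, most frequent first.
grep -E 'Connection (timed out|refused)' /tmp/maillog \
  | sed 's/.*connect to //' \
  | sort | uniq -c | sort -rn
```

Run against the real log, the first column gives failure counts like those shown above.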

Set up a check_recipient_access access table.
Add check_recipient_access hash:/etc/postfix/recipient_access to the smtpd_recipient_restrictions parameter.
Example:

smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/recipient_access,permit_mynetworks,permit_sasl_authenticated,reject_invalid_hostname,reject_non_fqdn_hostname,reject_unknown_sender_domain,reject_non_fqdn_sender,reject_non_fqdn_recipient,reject_unknown_recipient_domain,reject_unauth_pipelining,reject_unauth_destination

Contents of /etc/postfix/recipient_access:

# a specific address
[email protected] REJECT
# access control for an entire domain
example.com REJECT

postmap /etc/postfix/recipient_access
Then run postfix reload.
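On a real server the compiled map can be queried with postmap -q key hash:/etc/postfix/recipient_access. The sketch below only emulates that exact-key lookup with awk, against a throwaway copy of the table, so the behavior can be seen without a Postfix install:

```shell
# Throwaway copy of the access table (path is illustrative).
cat > /tmp/recipient_access <<'EOF'
# an entire domain
example.com REJECT
EOF

# Emulate "postmap -q": print the action for an exact first-column match.
lookup() {
  awk -v key="$1" '!/^#/ && $1 == key { print $2; exit }' /tmp/recipient_access
}

lookup example.com   # prints REJECT
lookup good.example  # prints nothing: key not in the table
```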

Posted in Mail/Postfix.



Monitoring memcached with Nagios

The check_memcached plugin for Nagios
Download:

http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz

The plugin is written in Perl, so first make sure Perl is available on the machine.
Installation:

wget http://search.cpan.org/CPAN/authors/id/Z/ZI/ZIGOROU/Nagios-Plugins-Memcached-0.02.tar.gz
tar xzvf Nagios-Plugins-Memcached-0.02.tar.gz
cd Nagios-Plugins-Memcached-0.02
perl Makefile.PL

This prompts for a few options; pressing Enter through all of them is fine.

make

make downloads the runtime dependencies it needs.

make install

By default check_memcached is installed to /usr/bin/check_memcached.
Create a symlink to it in the Nagios libexec directory:

ln -s /usr/bin/check_memcached /usr/local/nagios/libexec/

Test:
/usr/local/nagios/libexec/check_memcached -H192.168.0.40:12111
MEMCACHED OK - OK
If it complains that Nagios::Plugin is missing, install it and its dependencies from CPAN:

perl -MCPAN -e shell
install File::Slurp
install YAML
install HTML::Parser
install URI
install Compress::Zlib
install Module::Runtime
install Module::Implementation
install Attribute::Handlers
install Params::Validate
install Nagios::Plugin

To reconfigure CPAN, run "o conf init" in the cpan shell.

Once the test passes, add these definitions to the Nagios commands.cfg file:

define command {
    command_name check_memcached_response
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ -w $ARG3$ -c $ARG4$
}

define command {
    command_name check_memcached_size
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --size-warning $ARG3$ --size-critical $ARG4$
}

define command {
    command_name check_memcached_hit
    command_line $USER1$/check_memcached -H $ARG1$:$ARG2$ --hit-warning $ARG3$ --hit-critical $ARG4$
}
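For reference, this is how Nagios turns a check_command of the form check_memcached_response!host!port!w!c into the command_line above: the !-separated fields become $ARG1$ through $ARG4$, and $USER1$ comes from resource.cfg (the libexec path below is an assumption). A shell sketch of that expansion:

```shell
# Simulate Nagios macro expansion for check_memcached_response.
# $USER1$ is assumed to be /usr/local/nagios/libexec (set in resource.cfg).
expand() {
  IFS='!' read -r name arg1 arg2 arg3 arg4 <<EOF
$1
EOF
  echo "/usr/local/nagios/libexec/check_memcached -H $arg1:$arg2 -w $arg3 -c $arg4"
}

expand 'check_memcached_response!192.168.0.40!12111!300!500'
# -> /usr/local/nagios/libexec/check_memcached -H 192.168.0.40:12111 -w 300 -c 500
```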

Then add the following to the cfg file of the host to monitor:

define service{
    use                   local-service,srv-pnp    ; Name of service template to use
    host_name             memcachedserver
    service_description   Memcached_response
    check_command         check_memcached_response!192.168.0.40!12111!300!500
}

define service{
    use                   local-service,srv-pnp    ; Name of service template to use
    host_name             memcachedserver
    service_description   Memcached_size
    check_command         check_memcached_size!192.168.0.40!12111!90!95
}

define service{
    use                   local-service,srv-pnp    ; Name of service template to use
    host_name             memcachedserver
    service_description   Memcached_hit
    check_command         check_memcached_hit!192.168.0.40!12111!10!5
}

/etc/init.d/nagios reload
After the reload the new services appear. Note that this plugin currently does not support PNP graphing.

References:
http://simblog.vicp.net/?p=250
http://blog.csdn.net/deccmtd/article/details/6799647

Posted in Nagios.


svn Compression of svndiff data failed

svn refused to commit changes, reporting:
Compression of svndiff data failed

The cause turned out to be an Apache memory leak on the SVN server; restarting Apache fixed it.

Posted in Subversion.


High availability with drbd + xfs + heartbeat + mysql

I. Introduction

DRBD is a block device designed for high-availability (HA) clusters. It works like a network RAID-1: when data is written to the local filesystem, it is also sent to another host on the network and written there in the same form, so the local (primary) node and the remote (secondary) node stay in sync in real time. If the local system fails, the remote host still holds an identical copy of the data and can carry on serving it.
Heartbeat is one of the packages publicly available from the Linux-HA project's web site. It provides the basic functions every HA system needs, such as starting and stopping resources, monitoring the availability of systems in the cluster, and transferring ownership of a shared IP address between cluster nodes.

OS: CentOS 5.8, 64-bit
drbd83
xfs-1.0.2-5.el5_6.1
Heartbeat 2.1.3
Percona-Server-5.5.22-rel25.2

Resource allocation:
192.168.0.45 mysql1
192.168.0.43 mysql2
192.168.0.46 vip
DRBD data directory: /data

MySQL runs as a replication master with slaves attached behind it, completing the HA setup.

II. Installing XFS support

Key XFS features:
Data integrity
Because XFS journals its metadata, an unexpected crash no longer corrupts the files on disk. However much data the filesystem holds, it can replay the journal and restore the disk contents very quickly.
Throughput
XFS uses optimized algorithms, so journaling has very little impact on overall file operations. Lookups and space allocation are very fast, and the filesystem delivers consistently quick response times. In the author's tests of XFS, JFS, ext3, and ReiserFS, XFS performed outstandingly.
Scalability
XFS is a fully 64-bit filesystem able to address millions of terabytes of storage. It handles both very large and very small files well, and supports huge numbers of directory entries. The maximum file size is 2^63 ≈ 9 × 10^18 bytes (9 exabytes), and the maximum filesystem size is 18 exabytes.
XFS uses efficient B+ tree structures, which guarantee fast searches and fast space allocation. It sustains high-speed operation, and performance is not limited by the number of files and directories held in a directory.
Bandwidth
XFS can store data at close to raw-device I/O speed. In single-filesystem tests its throughput reached up to 7 GB/s, and up to 4 GB/s for reads and writes of a single file.
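The 2^63 figure is easy to sanity-check:

```shell
# 2^63 bytes expressed in units of 10^18 bytes (decimal exabytes).
awk 'BEGIN { printf "%.1f\n", 2^63 / 10^18 }'
# -> 9.2
```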

XFS is not supported by the default system install; the packages have to be added by hand.

rpm -qa |grep xfs
xorg-x11-xfs-1.0.2-5.el5_6.1

yum install xfsprogs kmod-xfs

Load the module:

modprobe xfs      # load the xfs filesystem module; if this fails, the server may need a reboot
lsmod |grep xfs   # confirm the xfs module is loaded

df -h -P -T

Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02 ext3 97G 3.7G 89G 5% /
/dev/mapper/VolGroup00-LogVol01 ext3 9.7G 151M 9.1G 2% /tmp
/dev/mapper/VolGroup00-LogVol04 ext3 243G 470M 230G 1% /opt
/dev/mapper/VolGroup00-LogVol03 ext3 39G 436M 37G 2% /var
/dev/sda1 ext3 99M 13M 81M 14% /boot
tmpfs tmpfs 7.9G 0 7.9G 0% /dev/shm

vgdisplay

--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 5
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.09 TB
PE Size 32.00 MB
Total PE 35692
Alloc PE / Size 13056 / 408.00 GB
Free PE / Size 22636 / 707.38 GB
VG UUID 7nAuuH-cpyx-u2Jr-bNNJ-fGPe-TUKg-T4KmmU

Carve out a 300 GB data partition:

lvcreate -L 300G -n DRBD VolGroup00

Logical volume "DRBD" created

/dev/VolGroup00/DRBD

III. Installing and configuring DRBD

1. Set the hostname

vi /etc/sysconfig/network
Change HOSTNAME to node1.

Edit the hosts file:
vi /etc/hosts
Add:

192.168.0.45 node1
192.168.0.43 node2

Make the node1 hostname take effect immediately:

hostname node1

Set up node2 the same way.

2. Install drbd
yum install drbd83 kmod-drbd83

Installing:
drbd83 x86_64 8.3.15-2.el5.centos extras 243 k
kmod-drbd83 x86_64 8.3.15-3.el5.centos extras

yum install -y heartbeat heartbeat-ldirectord heartbeat-pils heartbeat-stonith

Installing:
heartbeat x86_64 2.1.3-3.el5.centos extras 1.8 M
heartbeat-ldirectord x86_64 2.1.3-3.el5.centos extras 199 k
heartbeat-pils x86_64 2.1.3-3.el5.centos extras 220 k
heartbeat-stonith x86_64 2.1.3-3.el5.centos extras 348 k

3. Configure drbd
Edit global_common.conf (under /etc/drbd.d) on the master, then scp it to the same directory on the slave.

vi /etc/drbd.d/global_common.conf

global {
    usage-count yes;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    protocol C;

    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when choosing your poison.

        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        wfc-timeout 15;
        degr-wfc-timeout 15;
        outdated-wfc-timeout 15;
    }

    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
        on-io-error detach;
        fencing resource-and-stonith;
    }

    net {
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        timeout 60;
        connect-int 15;
        ping-int 15;
        ping-timeout 50;
        max-buffers 8192;
        ko-count 100;
        cram-hmac-alg sha1;
        shared-secret "mypassword123";
    }

    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        rate 100M;
        al-extents 512;
        csums-alg sha1;
    }
}

4. Configure the sync resource
vi /etc/drbd.d/share.res

resource share {
    meta-disk internal;
    device /dev/drbd0;    # the device name must end in a digit (it is used for global's minor-count) or drbd reports an error; this is the drbd application-layer device
    disk /dev/VolGroup00/DRBD;

    on node1 {    # note: hostnames in the drbd config are case-sensitive!
    #on Locrosse {
        address 192.168.0.45:9876;
    }
    on node2 {
    #on Regal {
        address 192.168.0.43:9876;
    }
}

5. Copy the files to the slave
scp -P 22 /etc/drbd.d/share.res /etc/drbd.d/global_common.conf [email protected]:.

6. Create the DRBD metadata on both machines
drbdadm create-md share ("share" is the resource name matching share.res above; the same name is used throughout below)
drbdadm create-md share
no resources defined!

Add the config files to drbd.conf:
vi /etc/drbd.conf

include “drbd.d/global_common.conf”;
include “drbd.d/*.res”;

drbdadm create-md share

md_offset 322122543104
al_offset 322122510336
bm_offset 322112679936

Found xfs filesystem
314572800 kB data area apparently used
314563164 kB left usable by current configuration

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
* use external meta data (recommended)
* shrink that filesystem first
* zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta 0 v08 /dev/VolGroup00/DRBD internal create-md' terminated with exit code 40
drbdadm create-md share: exited with code 40

Fix: zero out the start of the device (destroying the existing filesystem):
dd if=/dev/zero bs=1M count=1 of=/dev/VolGroup00/DRBD

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00221 seconds, 474 MB/s

drbdadm create-md share

Writing meta data…
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

7. Add an iptables rule

iptables -A INPUT -p tcp -m tcp -s 192.168.0.0/24 --dport 9876 -j ACCEPT

8. Start DRBD on both machines
service drbd start (or /etc/init.d/drbd start)

9. Check DRBD status
Use service drbd status (or /etc/init.d/drbd status), cat /proc/drbd, or drbd-overview to watch the sync state. The first synchronization is slow, and DRBD is only usable once it finishes.
Status when drbd is not running on the peer or iptables is blocking it:
service drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res cs ro ds p mounted fstype
0:share WFConnection Secondary/Unknown Inconsistent/DUnknown C

Once the peers connect successfully:
/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res cs ro ds p mounted fstype
0:share Connected Secondary/Secondary Inconsistent/Inconsistent C

10. Set the primary node
Promote node1 to primary. The first time, this must be done with drbdsetup /dev/drbd0 primary -o (or drbdadm --overwrite-data-of-peer primary share); a plain drbdadm primary share will not work yet. Then check the DRBD status again.
drbdsetup /dev/drbd0 primary -o
/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res cs ro ds p mounted fstype
... sync'ed: 1.9% (301468/307188)M
0:share SyncSource Primary/Secondary UpToDate/Inconsistent C

11. Format as XFS
On node1, create an XFS filesystem on the DRBD device (/dev/drbd0) with the label drbd: mkfs.xfs -L drbd /dev/drbd0. If it complains that XFS is missing, install it with yum install xfsprogs.
mkfs.xfs -L drbd /dev/drbd0

meta-data=/dev/drbd0 isize=256 agcount=16, agsize=4915049 blks
= sectsz=512 attr=0
data = bsize=4096 blocks=78640784, imaxpct=25
= sunit=0 swidth=0 blks, unwritten=1
naming =version 2 bsize=4096
log =internal log bsize=4096 blocks=32768, version=1
= sectsz=512 sunit=0 blks, lazy-count=0
realtime =none extsz=4096 blocks=0, rtextents=0

Mount the DRBD partition locally (only possible on the primary node). First create the mount point:
mkdir /data
then
mount /dev/drbd0 /data
The DRBD partition is now ready for use.

12. Test the primary/secondary switchover
On the master, create a test file in /data (touch /data/test), unmount the DRBD partition (umount /dev/drbd0), and demote the primary to secondary (drbdadm secondary share):
touch /data/test
umount /dev/drbd0
drbdadm secondary share

On the slave, promote it from secondary to primary (drbdadm primary share), create the /data directory (mkdir /data), and mount /dev/drbd0 there (mount /dev/drbd0 /data). If the test file appears in /data, DRBD is working.
drbdadm primary share
mkdir /data
mount /dev/drbd0 /data
ll /data

/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res cs ro ds p mounted fstype
0:share Connected Primary/Secondary UpToDate/UpToDate C /data xfs

Switchover summary
On the master, unmount first:
umount /dev/drbd0
then demote the primary to secondary:
drbdadm secondary share

On the slave:
drbdadm primary share
mount /dev/drbd0 /data
The slave is now the primary.

IV. Installing and configuring Heartbeat

====================
1. Building from source (failed attempt)
1.1. Install libnet
http://sourceforge.net/projects/libnet-dev/
wget 'http://downloads.sourceforge.net/project/libnet-dev/libnet-1.2-rc2.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibnet-dev%2F&ts=1368605826&use_mirror=jaist'

tar zxvf libnet-1.2-rc2.tar.gz
cd libnet
./configure
make && make install

1.2. Unlock the account files
chattr -i /etc/passwd /etc/shadow /etc/group /etc/gshadow /etc/services

1.3. Add the user and group
groupadd haclient
useradd -g haclient hacluster

1.4. Verify the result
tail /etc/passwd
hacluster:x:498:496::/var/lib/heartbeat/cores/hacluster:/bin/bash

tail /etc/shadow
hacluster:!!:15840:0:99999:7:::

tail /etc/gshadow
haclient:!::

tail /etc/group
haclient:x:496:

1.5. Install heartbeat
cd ..
wget http://www.ultramonkey.org/download/heartbeat/2.1.3/heartbeat-2.1.3.tar.gz

tar -zxvf heartbeat-2.1.3.tar.gz
cd heartbeat-2.1.3
./ConfigureMe configure
make && make install
The build died on this error and could not proceed:
/usr/local/lib/libltdl.a: could not read symbols:

2. Install Heartbeat with yum
yum install libtool-ltdl-devel
yum install heartbeat

Three files need to be configured:
ha.cf: the monitoring configuration
haresources: resource management
authkeys: authentication for the heartbeat link

3. Synchronize the clocks on both nodes
ntpdate -d cn.pool.ntp.org

4. Add an iptables rule
iptables -A INPUT -p udp -m udp --dport 694 -s 192.168.0.0/24 -j ACCEPT

5. Configure ha.cf
IP allocation:
192.168.0.45 node1
192.168.0.43 node2
192.168.0.46 vip

vi /etc/ha.d/ha.cf

debugfile /var/log/ha-debug    # enable the debug log
keepalive 2                    # check the heartbeat link every 2 seconds
deadtime 10                    # declare the peer dead after 10 seconds without a heartbeat
warntime 6                     # warning time (best kept between 2 and 10)
initdead 120                   # treat 120 seconds without contact at initial startup as normal;
                               # heartbeat waits 120 seconds before starting any resources

udpport 694                    # heartbeat over UDP port 694
ucast eth0 192.168.0.43        # unicast link (each node lists the other node's IP)
node node1                     # primary node (the uname -n hostname, not a domain name)
node node2                     # standby node (the uname -n hostname, not a domain name)
auto_failback on               # fail back automatically when the primary recovers (the author advises against enabling this)
respawn hacluster /usr/lib64/heartbeat/ipfail    # restart ipfail if it dies; /usr/lib64 on 64-bit, /usr/lib on 32-bit

6. Configure authkeys
vi /etc/ha.d/authkeys
Add:

auth 1
1 crc

chmod 600 /etc/ha.d/authkeys

7. Configure haresources
mkdir /usr/lib/heartbeat

vi /etc/ha.d/haresources
Add:

node1 IPaddr::192.168.0.46/24/eth0 drbddisk::share Filesystem::/dev/drbd0::/data::xfs mysqld

node1: the master's hostname
IPaddr::192.168.0.46/24/eth0: sets the virtual (public-facing) IP
drbddisk::share: manages the DRBD resource share
Filesystem::/dev/drbd0::/data::xfs: performs the mount and umount
node2's configuration is almost identical, except that 192.168.0.43 in ha.cf becomes 192.168.0.45.
mysqld is the service started afterwards; several services can be listed, separated by spaces.
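To make the line above concrete, here is a small shell sketch of how Heartbeat reads it: the first field is the preferred node, the remaining fields are resources started left to right (and stopped in reverse), and :: separates a resource script from its arguments:

```shell
line='node1 IPaddr::192.168.0.46/24/eth0 drbddisk::share Filesystem::/dev/drbd0::/data::xfs mysqld'

set -- $line
node="$1"; shift
echo "preferred node: $node"
for res in "$@"; do
  script=${res%%::*}                                  # text before the first ::
  args=$(echo "$res" | sed 's/^[^:]*:://; s/::/ /g')  # the rest, :: -> spaces
  [ "$script" = "$res" ] && args=''                   # resource with no arguments
  echo "start $script $args"
done
```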

8. Configure the MySQL service
Copy mysql.server out of the MySQL build tree:
cd <mysql build directory>
cp support-files/mysql.server /etc/init.d/mysqld
chown root:root /etc/init.d/mysqld
chmod 700 /etc/init.d/mysqld

Put the MySQL data directory and binary logs on the DRBD resource:
vi /opt/mysql/my.cnf

socket = /opt/mysql/mysql.sock
pid-file = /opt/mysql/var/mysql.pid
log = /opt/mysql/var/mysql.log
datadir = /data/mysql/var/

log-bin=/data/mysql/var/mysql-bin

9. Testing automatic DRBD failover
Start heartbeat on node1 first, then on node2. Once node2 is fully up, node1 in turn configures the virtual IP, starts DRBD and promotes itself to primary, and mounts /dev/drbd0 on /data. The start command is:
service heartbeat start
Now ip a shows an extra address, 192.168.0.46: the virtual IP. cat /proc/drbd shows the primary/secondary state, and df -h shows /dev/drbd0 mounted on /data.
To test automatic failover, stop the heartbeat service on node1 or disconnect its network, and check the state on node2 a few seconds later.
Then restore node1's heartbeat service or network connection and check the state again.

Failover test
service heartbeat standby
This command moves the resources listed in haresources cleanly from node1 to node2. cat /proc/drbd on both machines shows that the roles have switched, and the MySQL service has started successfully on the other server. The heartbeat logs on both machines show the whole switchover:
cat /var/log/ha-log

Results
After heartbeat stop or standby on node1, resources move to the slave; once heartbeat is started on node1 again, they automatically move back.
So when shutting heartbeat down, stop the slave first, or the resources will fail over to it.

Error 1:
/usr/lib/heartbeat/ipfail is not executable
On 64-bit systems the path uses lib64.
Error 2:
ERROR: Bad permissions on keyfile [/etc/ha.d/authkeys], 600 recommended.
chmod 600 /etc/ha.d/authkeys

=================
A quick ext3 vs. xfs write test

Four 300 GB 15k SAS disks in RAID 0
Stripe element size: 256K (http://www.wmarow.com/strcalc/)
Read policy: Adaptive
Write policy: write back
Battery (BBU) enabled

Managed with LVM

ext3 (/opt):
# time dd if=/dev/zero of=/opt/c1gfile bs=1024 count=20000000

20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 95.6601 seconds, 214 MB/s

real 1m35.950s
user 0m9.389s
sys 1m19.322s

xfs (/data):
# time dd if=/dev/zero of=/data/c1gfile bs=1024 count=20000000
20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 45.6391 seconds, 449 MB/s

real 0m45.652s
user 0m7.663s
sys 0m37.922s
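The rates dd reports can be double-checked from the byte counts and elapsed times (dd's MB is 10^6 bytes): XFS came out roughly twice as fast as ext3 on this sequential write.

```shell
# bytes / seconds / 10^6 = MB/s, matching dd's own figures
awk 'BEGIN { printf "ext3: %.0f MB/s\n", 20480000000 / 95.6601 / 1000000 }'
awk 'BEGIN { printf "xfs:  %.0f MB/s\n", 20480000000 / 45.6391 / 1000000 }'
# -> ext3: 214 MB/s
# -> xfs:  449 MB/s
```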

References:

http://blog.csdn.net/rzhzhz/article/details/7107115
http://www.51osos.com/a/MySQL/jiagouyouhua/jiagouyouhua/2010/1122/Heartbeat.html
http://bbs.linuxtone.org/thread-7413-1-1.html
http://www.centos.bz/2012/03/achieve-drbd-high-availability-with-heartbeat/
http://www.doc88.com/p-714757400456.html
http://icarusli.iteye.com/blog/1751435
http://www.mysqlops.com/2012/03/10/dba%e7%9a%84%e4%ba%b2%e4%bb%ac%e5%ba%94%e8%af%a5%e7%9f%a5%e9%81%93%e7%9a%84raid%e5%8d%a1%e7%9f%a5%e8%af%86.html
http://www.mysqlsystems.com/2010/11/drbd-heartbeat-make-mysql-ha.html
http://www.wenzizone.cn/?p=234

Posted in 高可用/集群.


nginx 504 Connection timed out caused by an internal network misconfiguration

The stack is nginx_cache -> nginx pool -> PHP pool -> memcache pool -> MySQL pool.

Symptom: at every round ten minutes of the hour (xx:10, xx:20, and so on), refreshing a forum page returned 502 or 504 errors; refreshing again rendered the page normally.

The nginx access log contains 504 entries:
tail -f /var/log/nginx/bbs.c1gstudio.com.log |grep '" 504'

111.166.167.206 - - [28/Apr/2013:15:30:16 +0800] "GET /forum.php?mod=ajax&action=forumchecknew&fid=276&time=1367134175&inajax=yes HTTP/1.1" 504 578 "http://bbs.c1gstudio.com/forum-276-1.html" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" -

Enable error logging in nginx.conf to debug:

error_log /var/log/nginx/nginx_error.log error;

tail -f /var/log/nginx/nginx_error.log

2013/05/03 14:05:47 [error] 14295#0: *968832 connect() failed (110: Connection timed out) while connecting to upstream, client: 220.181.125.23, server: bbs.c1gstudio.com, request: "GET /forum-739-1.html HTTP/1.1", upstream: "http://192.168.0.33:80/502.html", host: "bbs.c1gstudio.com"

110: Connection timed out
Connections to the internal host 192.168.0.33 were timing out.

Watching with iftop and pinging continuously showed nothing wrong.
Checking crontabs, PHP, nginx, and iptables on each machine also turned up nothing.
Only when restarting the network on nginx_cache did an "internal IP in use" warning reveal that a newly deployed machine had been assigned the same internal IP.
After changing that machine's IP, the problem disappeared.

Posted in Nginx.



Internet access for internal hosts through a Linux NAT gateway

When internal machines need to download software updates, an internal machine that can reach the Internet can serve as a NAT gateway.

Gateway
External NIC: 61.88.54.23, gateway 61.88.54.1
Internal NIC: 192.168.0.39, no gateway

Client
Internal NIC: 192.168.0.40

The internal NICs are plugged into the same switch.

1. On the gateway
# enable kernel packet forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# set up forwarding and the source-NAT mapping

iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j SNAT --to-source 61.88.54.23

2. On the client
First add a cron entry that removes the default route at 15:40 automatically, as a safety net in case the change cuts you off:
crontab -e

40 15 * * * /sbin/route del default gw 192.168.0.39

3. Add the route
route add default gw 192.168.0.39

4. Test the external connection
ping 8.8.8.8

5. Run at boot
vi /etc/rc.local

route add default gw 192.168.0.39

6. Remove the crontab entry once everything works

References:
http://panpan.blog.51cto.com/489034/189072

Posted in LINUX.



Installing memcached on CentOS 5.8

1. Install libevent
yum install libevent.x86_64 libevent-devel.x86_64
Without libevent, the memcached build fails with:

checking for libevent directory... configure: error: libevent is required. You can get it from http://www.monkey.org/~provos/libevent/
If it's already installed, specify its path using --with-libevent=/dir/

2. Install memcached

wget http://memcached.googlecode.com/files/memcached-1.4.15.tar.gz
tar zxvf memcached-1.4.15.tar.gz
cd memcached-1.4.15
./configure --prefix=/opt/memcached-1.4.15
make
make install

ln -s /opt/memcached-1.4.15 /opt/memcached

3. Config file
vi /opt/memcached/my.conf

PORT="11200"
IP="192.168.0.40"
USER="root"
MAXCONN="1524"
CACHESIZE="3000"
OPTIONS=""
#memcached
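The init script in the next step keeps built-in defaults and then sources this file, so values set here win. A self-contained sketch of that default-then-override pattern (using a throwaway path):

```shell
# Write a throwaway config in the same style as my.conf above.
SYSCONFIG=/tmp/memcached-my.conf
cat > "$SYSCONFIG" <<'EOF'
PORT="11200"
CACHESIZE="3000"
EOF

# Defaults first, then let the config file override them.
PORT="11211"
CACHESIZE="64"
MAXCONN="1524"
[ -r "$SYSCONFIG" ] && . "$SYSCONFIG"

echo "PORT=$PORT CACHESIZE=$CACHESIZE MAXCONN=$MAXCONN"
# -> PORT=11200 CACHESIZE=3000 MAXCONN=1524
```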

4. Start/stop script
vi /etc/init.d/memcached

#!/bin/bash
#
# Save me to /etc/init.d/memcached
# And add me to system start
# chmod +x memcached
# chkconfig --add memcached
# chkconfig --level 35 memcached on
#
# Written by lei
#
# chkconfig: - 80 12
# description: Distributed memory caching daemon
#
# processname: memcached
# config: /opt/memcached/my.conf

source /etc/rc.d/init.d/functions

### Default variables
PORT="11211"
IP="192.168.0.40"
USER="root"
MAXCONN="1524"
CACHESIZE="64"
OPTIONS=""
SYSCONFIG="/opt/memcached/my.conf"

### Read configuration
[ -r "$SYSCONFIG" ] && source "$SYSCONFIG"

RETVAL=0
prog="/opt/memcached/bin/memcached"
desc="Distributed memory caching"

start() {
    echo -n $"Starting $desc ($prog): "
    daemon $prog -d -p $PORT -l $IP -u $USER -c $MAXCONN -m $CACHESIZE $OPTIONS
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/memcached
    return $RETVAL
}

stop() {
    echo -n $"Shutting down $desc ($prog): "
    killproc $prog
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/memcached
    return $RETVAL
}

restart() {
    stop
    start
}

reload() {
    echo -n $"Reloading $desc ($prog): "
    killproc $prog -HUP
    RETVAL=$?
    echo
    return $RETVAL
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    restart
    ;;
  condrestart)
    [ -e /var/lock/subsys/$prog ] && restart
    RETVAL=$?
    ;;
  reload)
    reload
    ;;
  status)
    status $prog
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart|condrestart|status}"
    RETVAL=1
esac

exit $RETVAL

5. Add an iptables rule allowing access from 192.168.0.0/24

iptables -A INPUT -s 192.168.0.0/255.255.255.0 -p tcp -m tcp --dport 11200 -j ACCEPT
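As a reminder of what -s 192.168.0.0/255.255.255.0 (a /24) matches: any address whose first three octets equal the network's. A quick illustrative check in shell:

```shell
# True when the ip ($1) falls in the /24 of the network ($2),
# i.e. the first three octets match.
in_24() {
  [ "${1%.*}" = "${2%.*}" ]
}

in_24 192.168.0.77 192.168.0.0 && echo "192.168.0.77 is matched"
in_24 192.168.1.9  192.168.0.0 || echo "192.168.1.9 is not matched"
```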

6. Start it
/etc/init.d/memcached start

7. Web management interface
http://www.junopen.com/memadmin/

Posted in Memcached/redis.



Installing Redis on CentOS 5.8

Redis is a high-performance key-value database. It makes up for many of the shortcomings of key/value stores like memcached and can complement a relational database well in some scenarios. It provides Python, Ruby, Erlang, and PHP clients and is easy to use.
I. Install Tcl
Without it, redis's make test fails with:

You need tcl 8.5 or newer in order to run the Redis test
make[1]: *** [test] Error 1


wget http://downloads.sourceforge.net/tcl/tcl8.6.0-src.tar.gz
tar zxvf tcl8.6.0-src.tar.gz
cd tcl8.6.0
cd unix &&
./configure --prefix=/usr \
    --mandir=/usr/share/man \
    $([ $(uname -m) = x86_64 ] && echo --enable-64bit)
make &&
sed -e "s@^\(TCL_SRC_DIR='\).*@\1/usr/include'@" \
    -e "/TCL_B/s@='\(-L\)\?.*unix@='\1/usr/lib@" \
    -i tclConfig.sh

make test


Tests ended at Tue Apr 16 12:02:27 CST 2013
all.tcl: Total 116 Passed 116 Skipped 0 Failed 0


make install &&
make install-private-headers &&
ln -v -sf tclsh8.6 /usr/bin/tclsh &&
chmod -v 755 /usr/lib/libtcl8.6.so

II. Redis
1. Install Redis

wget http://redis.googlecode.com/files/redis-2.6.12.tar.gz
tar xzf redis-2.6.12.tar.gz
cd redis-2.6.12
make
make test

The following test failure can be ignored:
https://github.com/antirez/redis/issues/1034

[exception]: Executing test client: assertion:Server started even if RDB was unreadable!.
assertion:Server started even if RDB was unreadable!
    while executing
"error "assertion:$msg""
    (procedure "fail" line 2)
    invoked from within
"fail "Server started even if RDB was unreadable!""
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 $elsescript"
    (procedure "wait_for_condition" line 7)
    invoked from within
"wait_for_condition 50 100 {
    [string match {*Fatal error loading*} [exec tail -n1 < [dict get $srv stdout]]]
} else {
    fail "Server..."
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 $code"
    (procedure "start_server_and_kill_it" line 5)
    invoked from within
"start_server_and_kill_it [list "dir" $server_path] {
    wait_for_condition 50 100 {
        [string match {*Fatal error loading*} \
            [exec..."
    (file "tests/integration/rdb.tcl" line 57)
    invoked from within
"source $path"
    (procedure "execute_tests" line 4)
    invoked from within
"execute_tests $data"
    (procedure "test_client_main" line 9)
    invoked from within
"test_client_main $::test_server_port "
make[1]: *** [test] Error 1

make produces four executables in the current directory: redis-server, redis-cli, redis-benchmark, and redis-stat. Their roles:
redis-server: the Redis server daemon
redis-cli: the Redis command-line client (you can also drive the server over telnet, since its protocol is plain text)
redis-benchmark: the Redis performance-testing tool; measures read/write performance on your system and configuration
redis-stat: the Redis status tool; reports current state and latency
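Since the protocol is plain text, it is easy to see exactly what a client sends. This sketch prints the RESP encoding of a command, an array of bulk strings, without needing a running server:

```shell
# Print the RESP encoding of a Redis command (what redis-cli sends
# over the socket): *<argc>, then $<len> and the bytes of each argument.
encode() {
  printf '*%d\r\n' $#
  for arg in "$@"; do
    printf '$%d\r\n%s\r\n' ${#arg} "$arg"
  done
}

encode SET mykey c1gstudio
# prints, CRLF-terminated:
# *3
# $3
# SET
# $5
# mykey
# $9
# c1gstudio
```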

2. Create the Redis directory tree

mkdir -p /opt/redis/bin
mkdir -p /opt/redis/etc
mkdir -p /opt/redis/var

cp redis.conf /opt/redis/etc/
cd src
cp redis-server redis-cli redis-benchmark redis-check-aof redis-check-dump /opt/redis/bin/

useradd redis
chown -R redis.redis /opt/redis

The dedicated directory tree just keeps all the Redis files in one place for easier management. Alternatively, run
make install
to install into the system default locations.

3. Copy the init script

cp ../utils/redis_init_script /etc/init.d/redis

4. Start Redis

cd /opt/redis/bin
./redis-server /opt/redis/etc/redis.conf


                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.6.12 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 24454
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

[24454] 12 Apr 10:34:19.519 # Server started, Redis version 2.6.12
[24454] 12 Apr 10:34:19.519 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[24454] 12 Apr 10:34:19.519 * The server is now ready to accept connections on port 6379

Once Redis is installed, running redis-server with no arguments starts it with the default configuration (which does not run in the background). To make it behave the way we want, the configuration file needs editing.

5. Set iptables to allow access from the 192.168.0 internal network

iptables -A INPUT -s 192.168.0.0/24 -p tcp -m tcp --dport 6379 -j ACCEPT
/etc/init.d/iptables save

6. Configure redis
vi /opt/redis/etc/redis.conf

daemonize yes
# run as a daemon (default: no)
pidfile /var/run/redis.pid
# pid file, default /var/run/redis.pid
port 6379
# default Redis listening port
bind 192.168.0.41
# host IP to bind, default 127.0.0.1
timeout 300
# close idle client connections after this many seconds, default 300
tcp-keepalive 0
# TCP keepalive
loglevel notice
# log level; one of debug, verbose (default), notice, warning
logfile /opt/redis/var/redis.log
# log file, default stdout; set to /dev/null to discard logs
databases 16
# number of databases, default 16
save 900 1
# snapshot-to-disk policy:
# flush to disk after 900 seconds if at least 1 key changed
save 300 10
# flush to disk after 300 seconds if at least 10 keys changed
save 60 10000
# flush to disk after 60 seconds if at least 10000 keys changed
rdbcompression yes
# whether to compress objects when dumping the .rdb database
dbfilename dump.rdb
# local database filename, default dump.rdb
dir /opt/redis/var/
# local database directory, default ./
maxmemory 2G
# memory limit
maxmemory-policy allkeys-lru
# eviction policy

7. Tune kernel parameters
If memory is tight, set this kernel parameter:

echo 1 > /proc/sys/vm/overcommit_memory

What this means: /proc/sys/vm/overcommit_memory controls the kernel's memory-allocation policy and takes the value 0, 1, or 2.
0: the kernel checks whether enough free memory is available; if so, the allocation succeeds, otherwise it fails and the error is returned to the process.
1: the kernel allows allocating all of physical memory, regardless of the current memory state.
2: the kernel allows committing more memory than the total of physical memory and swap.
When Redis dumps its data it forks a child process, and in theory the child occupies as much memory as the parent: if the parent uses 8 GB, nominally another 8 GB has to be allocated for the child. If memory cannot cover that, the Redis server often goes down or I/O load spikes and performance drops. So the better allocation policy here is 1 (allow all allocations regardless of the current memory state).

vi /etc/sysctl.conf

vm.overcommit_memory = 1

Then apply it:

sysctl -p

8. Adjust the Redis init script
vi /etc/init.d/redis

REDISPORT=6379
EXEC=/opt/redis/bin/redis-server
CLIEXEC=/opt/redis/bin/redis-cli

PIDFILE=/var/run/redis.pid
CONF="/opt/redis/etc/redis.conf"

If Redis listens on an address other than localhost, the script needs one more change, or it will be unable to shut the server down:

# add a variable for the listen address
REDIHOST=192.168.0.41

# and add -h $REDIHOST to the stop branch
echo "Stopping ..."
$CLIEXEC -h $REDIHOST -p $REDISPORT shutdown

9. Test

/etc/init.d/redis start

Starting Redis server…

/opt/redis/bin/redis-cli -h 192.168.0.41

redis 127.0.0.1:6379> ping
PONG
redis 127.0.0.1:6379> set mykey c1gstduio
OK
redis 127.0.0.1:6379> get mykey
“c1gstduio”
redis 127.0.0.1:6379> exit

benchmark
/opt/redis/bin/redis-benchmark -h 192.168.0.41

All tests: 10000 requests, 50 parallel clients, 3-byte payload, keep alive: 1; nearly all requests completed within a few milliseconds.

====== PING_INLINE ======                          0.17 seconds   60240.96 requests per second
====== PING_BULK ======                            0.17 seconds   60240.96 requests per second
====== SET ======                                  0.16 seconds   61728.39 requests per second
====== GET ======                                  0.17 seconds   60606.06 requests per second
====== INCR ======                                 0.16 seconds   64516.13 requests per second
====== LPUSH ======                                0.15 seconds   65789.48 requests per second
====== LPOP ======                                 0.15 seconds   66666.66 requests per second
====== SADD ======                                 0.16 seconds   61728.39 requests per second
====== SPOP ======                                 0.17 seconds   57471.27 requests per second
====== LPUSH (needed to benchmark LRANGE) ======   0.17 seconds   58139.53 requests per second
====== LRANGE_100 (first 100 elements) ======      0.25 seconds   40322.58 requests per second
====== LRANGE_300 (first 300 elements) ======      0.56 seconds   17921.15 requests per second
====== LRANGE_500 (first 450 elements) ======      0.96 seconds   10362.69 requests per second
====== LRANGE_600 (first 600 elements) ======      1.00 seconds   10010.01 requests per second
====== MSET (10 keys) ======                       0.26 seconds   38167.94 requests per second

10. Shutting down
/opt/redis/bin/redis-cli shutdown
If the port differs, specify it:
/opt/redis/bin/redis-cli -h 192.168.0.41 -p 6379 shutdown

11. Saving / backups
Data can be backed up by periodically copying the dump file.
Redis writes to disk asynchronously; to flush the in-memory data to disk immediately, run:
redis-cli save, or redis-cli -p 6379 save to specify a port.
Note that these deployment steps need appropriate privileges, for example to copy files and set kernel parameters.
Running redis-benchmark also causes the in-memory data to be written to disk.

/opt/redis/bin/redis-cli save
OK

Check the result:
ll -h var/

total 48K
-rw-r--r-- 1 root root  40K Apr 12 11:49 dump.rdb
-rw-r--r-- 1 root root 5.3K Apr 12 11:49 redis.log

12. Run at boot
vi /etc/rc.local

/etc/init.d/redis start

III. PHP extension

There are a great many Redis client libraries, several for PHP alone. phpredis is used here; Predis requires PHP >= 5.3.
Predis (JoL1hAHN): mature and supported
phpredis (yowgi): a client written in C as a PHP module
Rediska (shumkov)
RedisServer (OZ): standalone, full-featured Redis class for PHP
Redisent (justinpoliey)
Credis (colinmollenhour): lightweight, standalone, unit-tested fork of Redisent which wraps phpredis for best performance if available

1. Install the phpredis extension

wget https://nodeload.github.com/nicolasff/phpredis/zip/master --no-check-certificate
unzip master
cd phpredis-master/
/opt/php/bin/phpize
./configure --with-php-config=/opt/php/bin/php-config
make
make install

Then add extension=redis.so to php.ini:
vi /opt/php/etc/php.ini

extension_dir = "/opt/php/lib/php/extensions/no-debug-non-zts-20060613/"
extension=redis.so

vi test.php

<?php
$redis = new Redis();
$redis->connect("127.0.0.1", 6379);
$redis->set("test", "Hello World");
echo $redis->get("test");
?>

phpinfo() now shows:
redis
Redis Support enabled
Redis Version 2.2.2

四.基于php的web管理工具

phpRedisAdmin
https://github.com/ErikDubbelboer/phpRedisAdmin
Demo: http://dubbelboer.com/phpRedisAdmin/?overview
Widely used, but after installing it I could not get it to display anything.

readmin
http://readmin.org/
Demo: http://demo.readmin.org/
Has to be installed at the web root.

phpredmin
https://github.com/sasanrose/phpredmin#readme
Works, and has pretty charts.
Uses pseudo-rewrite URLs such as /index.php/welcome/stats.

wget https://nodeload.github.com/sasanrose/phpredmin/zip/master -O phpredmin-master.zip
unzip phpredmin-master.zip
mv phpredmin-master phpredmin

ln -s phpredmin/public predmin

Edit the nginx config to support the rewrite.
nginx.conf

location ~* ^/predmin/index.php/
{
rewrite ^/predmin/index.php/(.*) /predmin/index.php?$1 break;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}

crontab -e

* * * * * cd /var/www/phpredmin/public && php index.php cron/index

(A per-user crontab has no user column; a root field like that is only valid in /etc/crontab.)

Configure the redis connection:
vi phpredmin/config

'redis' => Array(
    'host' => '192.168.0.30',
    'port' => '6379',
    'password' => Null,
    'database' => 0
),

Browse to admin.xxx.com/predmin/ and the interface should come up.

V. Impressions from real use
After the install I used redis as the cache for Discuz! X2.5.
Its bandwidth consumption was staggering, roughly ten times that of memcached.
Memory usage was also about a quarter higher.
Discuz occasionally timed out or errored when connecting to redis.
In the end I switched back to memcached.

References:
http://redis.io/topics/quickstart
http://hi.baidu.com/mucunzhishu/item/ead872ba3cec36db84dd798c
http://www.linuxfromscratch.org/blfs/view/svn/general/tcl.html

Posted in Memcached/redis.

Tagged with .


Installing OpenVPN on CentOS 5.8 Linux

1. Download

wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
wget https://nodeload.github.com/OpenVPN/openvpn/zip/release/2.3
wget http://swupdate.openvpn.org/community/releases/openvpn-2.3.0.tar.gz

2. Install LZO

tar -xvzf lzo-2.06.tar.gz
cd lzo-2.06
./configure --prefix=/usr/local/lzo-2.06
make && make install

3. Install OpenVPN

tar zxvf openvpn-2.3.0.tar.gz
cd openvpn-2.3.0
./configure --prefix=/usr/local/openvpn-2.3.0 --with-lzo-headers=/usr/local/lzo-2.06/include --with-lzo-lib=/usr/local/lzo-2.06/lib --with-ssl-headers=/usr/include/openssl/ --with-ssl-lib=/usr/lib/openssl/

If you hit this error:
openvpn error: lzo enabled but missing
try the following:

ldconfig
CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" ./configure --prefix=/usr/local/openvpn-2.3.0

make && make install

After installation it prints:

(1) make device node: mknod /dev/net/tun c 10 200
(2a) add to /etc/modules.conf: alias char-major-10-200 tun
(2b) load driver: modprobe tun
(3) enable routing: echo 1 > /proc/sys/net/ipv4/ip_forward

4. Create the tun device

mknod /dev/net/tun c 10 200
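The installer's remaining hints can be applied at the same time; a sketch of those steps as commands (the modules.conf alias is the CentOS 5 autoload convention the installer suggests):

```shell
modprobe tun                                            # load the tun driver now
echo "alias char-major-10-200 tun" >> /etc/modules.conf # autoload it at boot
echo 1 > /proc/sys/net/ipv4/ip_forward                  # enable routing for VPN clients
ls -l /dev/net/tun                                      # confirm the device node exists
```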

5. Copy the sample server config

mkdir /etc/openvpn
cp sample/sample-config-files/server.conf /etc/openvpn/

6. Download easy-rsa

wget https://nodeload.github.com/OpenVPN/easy-rsa/zip/master
unzip master
cd easy-rsa-master
cp -R easy-rsa/ /etc/openvpn/

7. Create the certificates
cd /etc/openvpn/easy-rsa/2.0/
A quick rundown of the files in this directory:
vars: sets the environment variables needed by the other scripts
clean-all: (re)creates the files and directories needed to generate the CA certificate and keys
build-ca: generates the CA certificate (interactive)
build-dh: generates the Diffie-Hellman file (interactive)
build-key-server: generates the server key (interactive)
build-key: generates a client key (interactive)
pkitool: generates certificates directly from the vars settings (non-interactive)

a. Initialize the keys directory

. ./vars (note: two dots, with a space between them)
./clean-all
./build-ca (pressing Enter at every prompt is fine)

b. Generate the Diffie-Hellman file

./build-dh

c. Generate the VPN server certificate

./build-key-server server

Then copy the newly generated certificates and keys to /etc/openvpn/:

cd keys
cp ca.crt ca.key server.crt server.key dh2048.pem /etc/openvpn/

d. Generate the client certificate and key

./build-key client

Pack up the client certificates for the client to use (strictly, the client only needs ca.crt, client.crt and client.key; ca.key is the CA's private key and should normally never leave the server):

tar zcvf userkeys.tar.gz ca.crt ca.key client.crt client.key client.csr

8. Edit the config file
vi /etc/openvpn/openvpn.conf

port 8099
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key # This file should be kept secret
dh /etc/openvpn/dh2048.pem
server 172.16.1.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 8.8.8.8"
client-to-client
duplicate-cn
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status /var/log/openvpn-status.log
log /var/log/openvpn.log
log-append /var/log/openvpn.log
verb 3

9. Start and check OpenVPN

ln -s /usr/local/openvpn-2.3.0 /usr/local/openvpn
/usr/local/openvpn/sbin/openvpn --daemon --config /etc/openvpn/openvpn.conf
netstat -tunlp

10. Set up iptables

iptables -t nat -A POSTROUTING -o eth0 -s 172.16.2.0/24 -j SNAT --to-source 100.100.100.100
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 8099 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 8099 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -s 172.16.1.0/24 -j SNAT --to-source 100.100.100.100

100.100.100.100 is the address of eth0, the VPN server's external interface; this is what lets clients reach the Internet through the tunnel. You can also do it this way:

iptables -t nat -A POSTROUTING -o eth0 -s 172.16.2.0/24 -j MASQUERADE

This is the more general approach, and suits dynamic public addresses such as ADSL dial-up.
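To avoid hard-coding the --to-source address, you can read it off the interface. A small sketch (ext_ip is a hypothetical helper name, not part of iptables); it prints the first IPv4 address bound to the given interface:

```shell
# Print the first IPv4 address bound to an interface (hypothetical helper).
ext_ip() {
  ip -4 addr show "$1" 2>/dev/null | awk '/inet /{sub(/\/.*/,"",$2); print $2; exit}'
}

# The VPN server's external address, for use as --to-source:
ext_ip eth0
```

The SNAT rule then becomes: iptables -t nat -A POSTROUTING -o eth0 -s 172.16.1.0/24 -j SNAT --to-source $(ext_ip eth0)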

11. Client installation and configuration
My client is Windows XP. Download the latest client from the OpenVPN website and install it; clicking Next through the wizard is all there is to it.
Afterwards, copy the three files ca.crt, client.crt and client.key from /etc/openvpn/keys/ on the VPN server into the "C:\Program Files\openvpn\config\keys" folder on the client.
Then connect.

http://swupdate.openvpn.org/community/releases/openvpn-install-2.3.0-I004-i686.exe

P.S. OpenVPN requires installing a client, and multiple users cannot connect at the same time.

References:
http://lxsym.blog.51cto.com/1364623/772075
http://blog.jiechic.com/archives/budgetvm-install-openvpn-vpn-vps-server
http://www.itdhz.com/post-287.html
http://www.kdolphin.com/1120
http://blog.creke.net/748.html
http://luxiaok.blog.51cto.com/2177896/1078375
http://docs.linuxtone.org/ebooks/VPN/openvpn%E9%9B%86%E5%90%88.pdf

Posted in VPN.

Tagged with , .


Installing an L2TP/IPSec VPN on CentOS 5.8 Linux

The Layer 2 Tunneling Protocol (L2TP) is an industry-standard Internet tunneling protocol that communicates over UDP port 1701. L2TP itself provides no encryption, but L2TP packets can be encrypted with IPSec. An L2TP VPN is somewhat more involved to set up than a PPTP VPN.
IPSec handles encryption and authentication with a pre-shared key (PSK), L2TP does the encapsulation, and PPP performs the actual user authentication.
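Concretely, that split means the finished server listens on three UDP ports. As a checklist once everything below is running (the port numbers come from this setup; the netstat check is a generic sketch):

```shell
# UDP 500  : IKE key exchange (pluto)
# UDP 4500 : IPSec NAT traversal, NAT-T (pluto)
# UDP 1701 : L2TP itself (xl2tpd)
netstat -lun | egrep ':(500|4500|1701) '
```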
Part 1: Deploy IPSec and install Openswan
1. Install the dependency packages

yum install make gcc gmp-devel bison flex

2. Build and install
Openswan provides the IPSec implementation.

wget http://ftp.openswan.org/openswan/openswan-2.6.38.tar.gz
tar zxvf openswan-2.6.38.tar.gz
cd openswan-2.6.38
make programs install

3. Configure ipsec
vi /etc/ipsec.conf

config setup
nat_traversal=yes
virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
oe=off
protostack=netkey

conn L2TP-PSK-NAT
rightsubnet=vhost:%priv
also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
authby=secret
pfs=no
auto=add
keyingtries=3
rekey=no
ikelifetime=8h
keylife=1h
type=transport
left=YOUR.SERVER.IP
leftprotoport=17/1701
right=%any
rightprotoport=17/%any

YOUR.SERVER.IP is the VPN server's public IP.
Note that the conn option lines must be indented with a tab, otherwise you may get the following error:

failed to start openswan IKE daemon - the following error occured:
can not load config '/etc/ipsec.conf': /etc/ipsec.conf:58: syntax error, unexpected KEYWORD, expecting $end [rightsubnet]

4. Set the shared key

vi /etc/ipsec.secrets

YOUR.SERVER.IP %any: PSK "YourSharedSecret"

YOUR.SERVER.IP is the VPN server's public IP.
YourSharedSecret is the pre-shared key.

5. Adjust packet-forwarding settings

for each in /proc/sys/net/ipv4/conf/*
do
echo 0 > $each/accept_redirects
echo 0 > $each/send_redirects
done

echo 1 >/proc/sys/net/core/xfrm_larval_drop
echo 1 >/proc/sys/net/ipv4/ip_forward

sed -i '/net.ipv4.ip_forward / {s/0/1/g} ' /etc/sysctl.conf
sed -i '/net.ipv4.conf.default.rp_filter / {s/1/0/g} ' /etc/sysctl.conf


touch /var/lock/subsys/local

6. Restart IPSec and test

/etc/init.d/ipsec restart

ipsec_setup: Stopping Openswan IPsec...
ipsec_setup: stop ordered, but IPsec appears to be already stopped!
ipsec_setup: doing cleanup anyway...
ipsec_setup: Starting Openswan IPsec U2.6.38/K2.6.18-308.el5..

ipsec verify
As long as nothing reports [FAILED], you are fine.


Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path [OK]
Linux Openswan U2.6.38/K2.6.18-308.el5 (netkey)
Checking for IPsec support in kernel [OK]
SAref kernel support [N/A]
NETKEY: Testing XFRM related proc values [OK]
[OK]
[OK]
Checking that pluto is running [OK]
Pluto listening for IKE on udp 500 [OK]
Pluto listening for NAT-T on udp 4500 [OK]
Two or more interfaces found, checking IP forwarding [FAILED]
Checking NAT and MASQUERADEing [OK]
Checking for 'ip' command [OK]
Checking /bin/sh is not /bin/dash [OK]
Checking for 'iptables' command [OK]
Opportunistic Encryption Support [DISABLED]

Error 1:
SAref kernel support [N/A]
In /etc/xl2tpd/xl2tpd.conf, set:

[global]
ipsec saref = no

Linux Openswan U2.6.38/K2.6.18-308.el5 (netkey)
Running on the netkey stack does not support multiple NATed clients on the same LAN;
with SAref kernel support enabled, running on klips does.

Error 2:
Two or more interfaces found, checking IP forwarding
Fix ip_forward; as long as cat /proc/sys/net/ipv4/ip_forward returns 1 you are fine:
echo 1 >/proc/sys/net/ipv4/ip_forward

Error 3:
Please enable /proc/sys/net/core/xfrm_larval_drop
echo 1 > /proc/sys/net/core/xfrm_larval_drop

Part 2: Install L2TP
1. Dependencies

yum install libpcap-devel ppp

2. Build and install

wget http://jaist.dl.sourceforge.net/project/rp-l2tp/rp-l2tp/0.4/rp-l2tp-0.4.tar.gz
tar -zxvf rp-l2tp-0.4.tar.gz
cd rp-l2tp-0.4
./configure
make
cp handlers/l2tp-control /usr/local/sbin/
mkdir /var/run/xl2tpd/
ln -s /usr/local/sbin/l2tp-control /var/run/xl2tpd/l2tp-control

wget http://www.xelerance.com/wp-content/uploads/software/xl2tpd/xl2tpd-1.3.0.tar.gz
tar -zxvf xl2tpd-1.3.0.tar.gz
cd xl2tpd-1.3.0
make
make install

The install prints:

install -d -m 0755 /usr/local/sbin
install -m 0755 xl2tpd /usr/local/sbin/xl2tpd
install -d -m 0755 /usr/local/share/man/man5
install -d -m 0755 /usr/local/share/man/man8
install -m 0644 doc/xl2tpd.8 /usr/local/share/man/man8/
install -m 0644 doc/xl2tpd.conf.5 doc/l2tp-secrets.5 \
/usr/local/share/man/man5/
# pfc
install -d -m 0755 /usr/local/bin
install -m 0755 pfc /usr/local/bin/pfc
install -d -m 0755 /usr/local/share/man/man1
install -m 0644 contrib/pfc.1 /usr/local/share/man/man1/
# control exec
install -d -m 0755 /usr/local/sbin
install -m 0755 xl2tpd-control /usr/local/sbin/xl2tpd-control

3. Configure

mkdir /etc/xl2tpd
vi /etc/xl2tpd/xl2tpd.conf


[global]
ipsec saref = yes

[lns default]
ip range = 192.168.81.2-192.168.81.254
; local ip is your internal-side address
local ip = 192.168.81.1
refuse chap = yes
refuse pap = yes
require authentication = yes
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

4. Edit the PPP options

vi /etc/ppp/options.xl2tpd

require-mschap-v2
ms-dns 8.8.8.8
ms-dns 8.8.4.4
asyncmap 0
auth
crtscts
lock
hide-password
modem
debug
name l2tpd
proxyarp
lcp-echo-interval 30
lcp-echo-failure 4

5. Add a username/password

vi /etc/ppp/chap-secrets

# user server password ip
vpnuser l2tpd userpass *

6. Start xl2tpd

iptables -t nat -A POSTROUTING -s 192.168.81.0/24 -o eth0 -j MASQUERADE

iptables -A INPUT -p udp -m state --state NEW -m udp --dport 1701 -j ACCEPT
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 4500 -j ACCEPT

iptables -I FORWARD -s 192.168.81.0/24 -j ACCEPT
iptables -I FORWARD -d 192.168.81.0/24 -j ACCEPT

/usr/local/sbin/xl2tpd

Errors

Feb 20 15:20:38 localc1g ipsec__plutorun: /usr/local/lib/ipsec/_plutorun: line 250: 7859 Aborted (core dumped) /usr/local/libexec/ipsec/pluto --nofork --secretsfile /etc/ipsec.secrets --ipsecdir /etc/ipsec.d --use-netkey --uniqueids --nat_traversal --virtual_private %v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
Feb 20 15:20:38 localc1g ipsec__plutorun: !pluto failure!: exited with error status 134 (signal 6)
Feb 20 15:20:38 localc1g ipsec__plutorun: restarting IPsec after pause...


Feb 20 16:58:47 localc1g pppd[13553]: The remote system is required to authenticate itself
Feb 20 16:58:47 localc1g pppd[13553]: but I couldn't find any suitable secret (password) for it to use to do so.

Check that the server field in the chap-secrets file is correct (it must match the name set in options.xl2tpd, l2tpd here).


Feb 21 11:30:52 localc1g pluto[16897]: “L2TP-PSK-NAT”[11] 122.221.55.121 #11: probable authentication failure (mismatch of preshared secrets?): malformed payload in packet
Feb 21 11:30:52 localc1g pluto[16897]: | payload malformed after IV

Check that the PSK configured on the client is correct.

7. Run at boot
Put the following in /etc/rc.local:

touch /var/lock/subsys/local
for each in /proc/sys/net/ipv4/conf/*
do
echo 0 > $each/accept_redirects
echo 0 > $each/send_redirects
done
echo 1 >/proc/sys/net/core/xfrm_larval_drop
echo 1 >/proc/sys/net/ipv4/ip_forward
/etc/init.d/ipsec restart
/usr/local/sbin/xl2tpd

References:
http://www.myvm.net/archives/554
http://amumy.blog.163.com/blog/static/17312970201210282323568/
http://www.vpsyou.com/2010/08/10/centos-install-l2tpipsec-and-simple-troubleshooting.html
http://www.esojourn.org/blog/post/setup-l2tp-vpn-server-with-ipsec-in-centos6.php
https://www.dls-yan.com/2012/10/04/783.html
http://blog.csdn.net/rosetta/article/details/7794826

http://book.51cto.com/art/201204/331170.htm
http://blog.csdn.net/cumtmimi/article/details/1814073

1. The "IPSEC Services" service is not running

Do the following in order:

Computer Management -> Services and Applications -> Services; find IPSEC Services, double-click it, and set its startup type to Automatic.

Reboot, then set up the policy again.

2. How to start IPSEC Services

Addendum: if clicking Start produces the message
Could not start the IPSEC Services service on Local Computer.
Error 1747: The authentication service is unknown.
then the startup type is already Automatic but the service simply will not start, and it still refuses to start even after installing the network client.

Fix:
Code:
Start > Run, type CMD, then in the window run: netsh winsock reset

3. Edit the registry
By default, the Windows XP L2TP policy does not allow L2TP traffic that is not encrypted with IPSec. This default behavior can be disabled by editing the Windows XP registry.
Manual steps:
1) Go to Start -> Run in Windows XP, type "Regedt32" to open the Registry Editor, and locate the key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\RasMan\Parameters.
2) Add the following value under that key:
Value name: ProhibitIpSec
Data type: REG_DWORD
Value: 1
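The same registry change can be made from a command prompt instead of Regedt32; a sketch using the stock Windows reg command (run it as an administrator, and reboot afterwards so RasMan picks up the change):

```shell
reg add HKLM\SYSTEM\CurrentControlSet\Services\RasMan\Parameters /v ProhibitIpSec /t REG_DWORD /d 1 /f
```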

Posted in VPN.

Tagged with , , .