

High availability with drbd + xfs + heartbeat + mysql

I. Introduction

DRBD is a block device that can be used for high availability (HA). It works like RAID-1 over the network: when you write data to the local filesystem, the data is also sent to another host on the network and recorded there in the same form. The data on the local host (primary node) and the remote host (secondary node) stays synchronized in real time, so when the local system fails, the remote host still holds an identical copy that can go on being used. Heartbeat is one of the packages publicly available from the Linux-HA project web site. It provides the basic functions every HA system needs, such as starting and stopping resources, monitoring the availability of systems in the cluster, and transferring ownership of a shared IP address between cluster nodes.

OS: CentOS 5.8 64-bit; drbd83; xfsprogs/kmod-xfs; Heartbeat 2.1.3; Percona-Server-5.5.22-rel25.2

Resource allocation:
192.168.0.45  mysql1
192.168.0.43  mysql2
192.168.0.46  vip
DRBD data directory: /data

MySQL runs as a replication master with slaves attached behind it, giving high availability.

II. Installing XFS support

XFS's main features include:
Data integrity: because the filesystem journals its operations, an unexpected crash no longer damages the files on disk; however much data the filesystem holds, it can restore consistency from the log very quickly.
Throughput: XFS uses optimized algorithms, so journaling has very little impact on overall file operations. Lookups and space allocation are very fast, and response times stay consistently low. I have benchmarked XFS, JFS, Ext3 and ReiserFS, and XFS's performance stood out.
Scalability: XFS is a fully 64-bit filesystem that can support millions of terabytes of storage. It handles both very large and very small files well, and supports enormous numbers of directory entries. The maximum file size is 2^63 bytes = 9 x 10^18 bytes = 9 exabytes, and the maximum filesystem size is 18 exabytes. XFS uses efficient B+ tree structures, guaranteeing fast lookup and fast space allocation, and performance is not limited by the number of files or subdirectories in a directory.
Bandwidth: XFS can store data at close to raw-device I/O speed. In single-filesystem tests its aggregate throughput reached 7 GB per second, and reads or writes of a single file reached 4 GB per second.
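The 2^63 figure above can be sanity-checked quickly (my arithmetic, not part of the original notes):

```shell
# 2^63 bytes expressed in decimal exabytes (1 EB = 10^18 bytes)
awk 'BEGIN { printf "%.1f EB\n", 2^63 / 1e18 }'
```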

XFS is not supported by the default system install, so the packages have to be installed by hand. Note that the only hit on a stock system is the X font server, which has nothing to do with the XFS filesystem:

rpm -qa |grep xfs
xorg-x11-xfs-1.0.2-5.el5_6.1

yum install xfsprogs kmod-xfs

Load the xfs module

modprobe xfs     # load the xfs filesystem module; if it fails, the server may need a reboot
lsmod |grep xfs  # check that the xfs module is loaded

df -h -P -T

Filesystem                      Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02 ext3    97G  3.7G   89G   5% /
/dev/mapper/VolGroup00-LogVol01 ext3   9.7G  151M  9.1G   2% /tmp
/dev/mapper/VolGroup00-LogVol04 ext3   243G  470M  230G   1% /opt
/dev/mapper/VolGroup00-LogVol03 ext3    39G  436M   37G   2% /var
/dev/sda1                       ext3    99M   13M   81M  14% /boot
tmpfs                           tmpfs  7.9G     0  7.9G   0% /dev/shm

vgdisplay

--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  6
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                5
Open LV               5
Max PV                0
Cur PV                1
Act PV                1
VG Size               1.09 TB
PE Size               32.00 MB
Total PE              35692
Alloc PE / Size       13056 / 408.00 GB
Free  PE / Size       22636 / 707.38 GB
VG UUID               7nAuuH-cpyx-u2Jr-bNNJ-fGPe-TUKg-T4KmmU

Carve out a 300 GB data partition

lvcreate -L 300G -n DRBD VolGroup00

Logical volume "DRBD" created
/dev/VolGroup00/DRBD
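As a quick cross-check of the numbers above (my arithmetic, not from the original write-up): at the 32 MB PE size vgdisplay reports, a 300 GB LV needs 9600 extents, comfortably inside the 22636 free extents shown.

```shell
# 300 GB divided by the 32 MB physical-extent size = extents consumed
awk 'BEGIN { print 300 * 1024 / 32 }'
```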

III. Installing and configuring DRBD

1. Set the hostname

vi /etc/sysconfig/network and change HOSTNAME to node1.

Edit the hosts file (vi /etc/hosts) and add:

192.168.0.45 node1
192.168.0.43 node2

Make the node1 hostname take effect immediately:

hostname node1

node2 is set up the same way.

2. Install drbd

yum install drbd83 kmod-drbd83

Installing:
 drbd83                x86_64  8.3.15-2.el5.centos  extras  243 k
 kmod-drbd83           x86_64  8.3.15-3.el5.centos  extras

yum install -y heartbeat heartbeat-ldirectord heartbeat-pils heartbeat-stonith

Installing:
 heartbeat             x86_64  2.1.3-3.el5.centos   extras  1.8 M
 heartbeat-ldirectord  x86_64  2.1.3-3.el5.centos   extras  199 k
 heartbeat-pils        x86_64  2.1.3-3.el5.centos   extras  220 k
 heartbeat-stonith     x86_64  2.1.3-3.el5.centos   extras  348 k

3. Configure drbd. Edit the config file global_common.conf on the master (in the /etc/drbd.d directory), then scp it to the same directory on the slave.

vi /etc/drbd.d/global_common.conf

global {
    usage-count yes;
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when choosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        wfc-timeout 15;
        degr-wfc-timeout 15;
        outdated-wfc-timeout 15;
    }
    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
        on-io-error detach;
        fencing resource-and-stonith;
    }
    net {
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        timeout 60;
        connect-int 15;
        ping-int 15;
        ping-timeout 50;
        max-buffers 8192;
        ko-count 100;
        cram-hmac-alg sha1;
        shared-secret "mypassword123";
    }
    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        rate 100M;
        al-extents 512;
        csums-alg sha1;
    }
}

4. Configure the replicated resource

vi /etc/drbd.d/share.res

resource share {
    meta-disk internal;
    # the device parameter must end in a digit (it is used for global's
    # minor-count), otherwise drbd complains; this is the drbd device
    # that applications see
    device /dev/drbd0;
    disk /dev/VolGroup00/DRBD;
    # note: host names in the drbd config are case-sensitive!
    on node1 {    # on Locrosse {
        address 192.168.0.45:9876;
    }
    on node2 {    # on Regal {
        address 192.168.0.43:9876;
    }
}

5. Copy the files to the slave

scp -P 22 /etc/drbd.d/share.res /etc/drbd.d/global_common.conf [email protected]:.

6. Create the DRBD metadata on both machines with drbdadm create-md share (share is the resource name from share.res above; later uses of share all refer to it):

drbdadm create-md share
no resources defined!

Add the config files to drbd.conf:

vi /etc/drbd.conf

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

drbdadm create-md share

md_offset 322122543104
al_offset 322122510336
bm_offset 322112679936

Found xfs filesystem
    314572800 kB data area apparently used
    314563164 kB left usable by current configuration

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
   * use external meta data (recommended)
   * shrink that filesystem first
   * zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta 0 v08 /dev/VolGroup00/DRBD internal create-md' terminated with exit code 40
drbdadm create-md share: exited with code 40

Fix: wipe the start of the device so the old filesystem signature is gone:

dd if=/dev/zero bs=1M count=1 of=/dev/VolGroup00/DRBD

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00221 seconds, 474 MB/s

drbdadm create-md share

Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

7. Add an iptables rule

iptables -A INPUT -p tcp -m tcp -s 192.168.0.0/24 --dport 9876 -j ACCEPT

8. Start DRBD on both machines: service drbd start, or /etc/init.d/drbd start

9. Check DRBD's state with service drbd status or /etc/init.d/drbd status, and watch synchronization with cat /proc/drbd or drbd-overview. The first full sync is slow; DRBD is only usable once it completes. With drbd not started on the peer, or the iptables port still closed, service drbd status shows:

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs            ro                 ds                     p  mounted  fstype
0:share  WFConnection  Secondary/Unknown  Inconsistent/DUnknown  C

Once the nodes connect successfully:

/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs         ro                   ds                         p  mounted  fstype
0:share  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

10. Set the primary node. To make node1 primary for the first time, use drbdsetup /dev/drbd0 primary -o or drbdadm --overwrite-data-of-peer primary share; a plain drbdadm primary share will not work for the initial promotion. Then check the DRBD status again:

drbdsetup /dev/drbd0 primary -o
/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs          ro                 ds                     p  mounted  fstype
...      sync'ed:    1.9%               (301468/307188)M
0:share  SyncSource  Primary/Secondary  UpToDate/Inconsistent  C

11. Format as xfs. On node1, create an xfs filesystem on the DRBD device (/dev/drbd0), labelled drbd: mkfs.xfs -L drbd /dev/drbd0. If it complains that xfs is missing, just yum install xfsprogs.

mkfs.xfs -L drbd /dev/drbd0

meta-data=/dev/drbd0             isize=256    agcount=16, agsize=4915049 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=78640784, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount the DRBD device locally (this works only on the primary node). First create the mount point with mkdir /data, then mount /dev/drbd0 /data, and the drbd partition is ready to use.

12. Test the primary/secondary switchover. On the master, create a test file in /data (touch /data/test), unmount the DRBD device (umount /dev/drbd0), and demote the primary to secondary (drbdadm secondary share):

touch /data/test
umount /dev/drbd0
drbdadm secondary share

On the slave, promote the secondary to primary with drbdadm primary share, create the /data directory (mkdir /data), and mount /dev/drbd0 on it (mount /dev/drbd0 /data). If the test file shows up in /data, DRBD is working.

drbdadm primary share
mkdir /data
mount /dev/drbd0 /data
ll /data

/etc/init.d/drbd status

drbd driver loaded OK; device status:
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26
m:res    cs         ro                 ds                 p  mounted  fstype
0:share  Connected  Primary/Secondary  UpToDate/UpToDate  C  /data    xfs

Switchover recap. On the master, unmount first, then demote the primary to secondary:

umount /dev/drbd0
drbdadm secondary share

On the slave:

drbdadm primary share
mount /dev/drbd0 /data

The slave is now the primary.
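If you script checks around this switchover, the local role can be read back from /proc/drbd. A minimal sketch; the status line is embedded here as sample data (its shape is taken from the status output above), so on a live node you would read /proc/drbd instead:

```shell
# Pull the local role out of a DRBD status line: the "ro:" field is
# LocalRole/PeerRole, so the part before the slash is this node's role.
line='0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----'
role=$(printf '%s\n' "$line" | sed -n 's/.*ro:\([A-Za-z]*\)\/.*/\1/p')
echo "$role"
```

Only mount /data when this prints Primary.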

IV. Installing and configuring heartbeat

====================

1. Manual build (the failed attempt)
1.1. Install libnet (http://sourceforge.net/projects/libnet-dev/)

wget http://downloads.sourceforge.net/project/libnet-dev/libnet-1.2-rc2.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibnet-dev%2F&ts=1368605826&use_mirror=jaist

tar zxvf libnet-1.2-rc2.tar.gz
cd libnet
./configure
make && make install

1.2. Unlock the account files

chattr -i /etc/passwd /etc/shadow /etc/group /etc/gshadow /etc/services

1.3. Add the user and group

groupadd haclient
useradd -g haclient hacluster

1.4. Check the result

tail /etc/passwd
hacluster:x:498:496::/var/lib/heartbeat/cores/hacluster:/bin/bash

tail /etc/shadow
hacluster:!!:15840:0:99999:7:::

tail /etc/gshadow
haclient:!::

tail /etc/group
haclient:x:496:

1.5. Install heartbeat

cd ..
wget http://www.ultramonkey.org/download/heartbeat/2.1.3/heartbeat-2.1.3.tar.gz

tar -zxvf heartbeat-2.1.3.tar.gz
cd heartbeat-2.1.3
./ConfigureMe configure
make && make install

The build dies here and I could not get past it:

/usr/local/lib/libltdl.a: could not read symbols:

2. Install heartbeat with yum

yum install libtool-ltdl-devel
yum install heartbeat

Three files need to be configured in total:
ha.cf        the monitoring configuration
haresources  the resource management file
authkeys     heartbeat link authentication

3. Synchronize the clocks on both nodes

ntpdate -d cn.pool.ntp.org

4. Add an iptables rule

iptables -A INPUT -p udp -m udp --dport 694 -s 192.168.0.0/24 -j ACCEPT

5. Configure ha.cf. IP assignment:

192.168.0.45 node1
192.168.0.43 node2
192.168.0.46 vip

vi /etc/ha.d/ha.cf

debugfile /var/log/ha-debug    # enable the debug log
keepalive 2                    # check the heartbeat link every 2 seconds
deadtime 10                    # declare the peer dead after 10 seconds without a heartbeat
warntime 6                     # warning threshold (best kept between 2 and 10)
initdead 120                   # treat 120 seconds without contact at startup as normal; heartbeat waits 120 seconds before starting any resources
udpport 694                    # use udp port 694
ucast eth0 192.168.0.43        # unicast heartbeat (each node lists the other node's ip)
node node1                     # declare the master (the hostname from uname -n, not a domain name)
node node2                     # declare the backup (the hostname from uname -n, not a domain name)
auto_failback on               # automatic failback (switch back once the master recovers); better not to enable this
respawn hacluster /usr/lib64/heartbeat/ipfail    # watch ipfail and restart it if it dies; /lib64 on 64-bit, /lib on 32-bit

6. Configure authkeys. vi /etc/ha.d/authkeys and write:

auth 1
1 crc

chmod 600 /etc/ha.d/authkeys

7. Configure haresources

mkdir /usr/lib/heartbeat

vi /etc/ha.d/haresources and write:

node1 IPaddr::192.168.0.46/24/eth0 drbddisk::share Filesystem::/dev/drbd0::/data::xfs mysqld

node1: the master's hostname.
IPaddr::192.168.0.46/24/eth0: sets the virtual (externally used) IP.
drbddisk::share: manages the share resource.
Filesystem::/dev/drbd0::/data::xfs: performs the mount and umount.
node2's configuration is basically the same, except that in ha.cf 192.168.0.43 becomes 192.168.0.45.
mysqld is the service set up next; multiple resources are written separated by spaces.

8. Set up the mysql service. Copy mysql.server out of the mysql build tree:

cd <the mysql build directory>
cp support-files/mysql.server /etc/init.d/mysqld
chown root:root /etc/init.d/mysqld
chmod 700 /etc/init.d/mysqld

Point the mysql data directory and binary-log directory at the drbd resource:

vi /opt/mysql/my.cnf

socket   = /opt/mysql/mysql.sock
pid-file = /opt/mysql/var/mysql.pid
log      = /opt/mysql/var/mysql.log
datadir  = /data/mysql/var/
log-bin  = /data/mysql/var/mysql-bin

9. Automatic DRBD failover test. Start heartbeat on node1 first, then on node2. Once node2 is fully up, node1 proceeds to claim the virtual IP, start drbd and promote itself to primary, and mount /dev/drbd0 on /data. The start command is service heartbeat start. Now ip a shows an extra address, 192.168.0.46, which is the virtual IP; cat /proc/drbd shows the primary/secondary state; and df -h shows /dev/drbd0 mounted on /data. To test automatic failover, stop heartbeat on node1 or disconnect its network, wait a few seconds, and check the state on node2. Then restore node1's heartbeat service or network connection and check its state again.

Failover test:

service heartbeat standby

This command moves the resources listed in haresources cleanly from node1 to node2. cat /proc/drbd on both machines shows the roles have swapped, and mysql has started successfully on the standby. The heartbeat logs on both machines show the whole resource switch step by step:

cat /var/log/ha-log

Observed behavior: after heartbeat stop or standby on node1, resources move to the slave; when heartbeat is started on node1 again, they automatically move back. So when shutting heartbeat down, stop the slave first, otherwise resources will fail over to it.

Error 1: /usr/lib/heartbeat/ipfail is not executable. On 64-bit systems the path is lib64.
Error 2: ERROR: Bad permissions on keyfile [/etc/ha.d/authkeys], 600 recommended.
chmod 600 /etc/ha.d/authkeys

=================

A quick ext3 vs xfs write-performance test

sas 300G 15k disks, four-disk RAID 0, stripe element size 256K (http://www.wmarow.com/strcalc/), read policy: Adaptive, write policy: write back, BBU enabled.

Managed with LVM.

time dd if=/dev/zero of=/opt/c1gfile bs=1024 count=20000000

20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 95.6601 seconds, 214 MB/s

real 1m35.950s
user 0m9.389s
sys  1m19.322s

time dd if=/dev/zero of=/data/c1gfile bs=1024 count=20000000

20000000+0 records in
20000000+0 records out
20480000000 bytes (20 GB) copied, 45.6391 seconds, 449 MB/s

real 0m45.652s
user 0m7.663s
sys  0m37.922s
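One detail worth knowing when reading these numbers: dd's MB/s figure uses decimal megabytes (10^6 bytes). Re-deriving the xfs run's rate from the byte count and elapsed time (a sanity check only):

```shell
# 20480000000 bytes in 45.6391 s, in dd's decimal MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 20480000000 / 45.6391 / 1000000 }'
```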

References:

http://blog.csdn.net/rzhzhz/article/details/7107115
http://www.51osos.com/a/MySQL/jiagouyouhua/jiagouyouhua/2010/1122/Heartbeat.html
http://bbs.linuxtone.org/thread-7413-1-1.html
http://www.centos.bz/2012/03/achieve-drbd-high-availability-with-heartbeat/
http://www.doc88.com/p-714757400456.html
http://icarusli.iteye.com/blog/1751435
http://www.mysqlops.com/2012/03/10/dba%e7%9a%84%e4%ba%b2%e4%bb%ac%e5%ba%94%e8%af%a5%e7%9f%a5%e9%81%93%e7%9a%84raid%e5%8d%a1%e7%9f%a5%e8%af%86.html
http://www.mysqlsystems.com/2010/11/drbd-heartbeat-make-mysql-ha.html
http://www.wenzizone.cn/?p=234

Posted in HA/Clustering.



nginx 504 Connection timed out caused by an internal-network misconfiguration

The stack is nginx_cache -> nginx pool -> php pool -> memcache pool -> mysql pool.

Symptom: at every round ten minutes of the hour (xx:10, xx:20, and so on), users refreshing forum pages got 502 and 504 errors; refreshing again rendered the page fine.

The nginx access log contains 504 records:

tail -f /var/log/nginx/bbs.c1gstudio.com.log |grep '" 504'

111.166.167.206 - - [28/Apr/2013:15:30:16 +0800] "GET /forum.php?mod=ajax&action=forumchecknew&fid=276&time=1367134175&inajax=yes HTTP/1.1" 504 578 "http://bbs.c1gstudio.com/forum-276-1.html" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" -

Turn on error logging in nginx.conf to debug:

error_log /var/log/nginx/nginx_error.log error;

tail -f /var/log/nginx/nginx_error.log

2013/05/03 14:05:47 [error] 14295#0: *968832 connect() failed (110: Connection timed out) while connecting to upstream, client: 220.181.125.23, server: bbs.c1gstudio.com, request: "GET /forum-739-1.html HTTP/1.1", upstream: "http://192.168.0.33:80/502.html", host: "bbs.c1gstudio.com"

110: Connection timed out: the connection to internal host 192.168.0.33 was timing out...

Watching with iftop while pinging continuously showed nothing wrong. Checking crontab, php, nginx and iptables on every machine turned up nothing either. Only when restarting the network on nginx_cache did it warn that the internal IP was already in use: a newly provisioned machine had been assigned the same internal IP. After changing that IP, the problem was gone.
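When hunting a fault like this, it helps to summarize which upstream the timeouts point at. A small sketch; the error-log line is embedded here as sample data (against the live file you would grep /var/log/nginx/nginx_error.log instead):

```shell
# Count "Connection timed out" errors per upstream IP from nginx error-log lines.
log='2013/05/03 14:05:47 [error] 14295#0: *968832 connect() failed (110: Connection timed out) while connecting to upstream, client: 220.181.125.23, server: bbs.c1gstudio.com, request: "GET /forum-739-1.html HTTP/1.1", upstream: "http://192.168.0.33:80/502.html", host: "bbs.c1gstudio.com"'
printf '%s\n' "$log" | grep -o 'upstream: "http://[0-9.]*' | cut -d/ -f3 | sort | uniq -c
```

The output ranks upstream addresses by error count, which points straight at the problem host.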

Posted in Nginx.



Getting online through a linux internal NAT gateway

When internal machines need to update or download software, use an internal machine that has internet access as a NAT gateway to the outside.

Gateway:
external NIC ip: 61.88.54.23, gateway: 61.88.54.1
internal NIC ip: 192.168.0.39, gateway: none

Client:
internal NIC ip: 192.168.0.40

The internal NICs are plugged into the same switch.

1. On the gateway

# enable packet forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward
# set up forwarding and the address mapping

iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j SNAT --to-source 61.88.54.23

2. On the client, first add a cron job that removes the route automatically at 15:40, as a safety net in case something goes wrong:

crontab -e

40 15 * * * /sbin/route del default gw 192.168.0.39

3. Add the route

route add default gw 192.168.0.39

4. Test external connectivity

ping 8.8.8.8

5. Apply at boot

vi /etc/rc.local

route add default gw 192.168.0.39

6. Remove the crontab entry once everything works

Reference:
http://panpan.blog.51cto.com/489034/189072

Posted in LINUX.



Installing memcached on linux centos5.8

1. Install libevent

yum install libevent.x86_64 libevent-devel.x86_64

Without libevent, the memcached build errors out:

checking for libevent directory... configure: error: libevent is required. You can get it from http://www.monkey.org/~provos/libevent/ If it's already installed, specify its path using --with-libevent=/dir/

2. Install memcached

wget http://memcached.googlecode.com/files/memcached-1.4.15.tar.gz
tar zxvf memcached-1.4.15.tar.gz
cd memcached-1.4.15
./configure --prefix=/opt/memcached-1.4.15
make
make install
ln -s /opt/memcached-1.4.15 /opt/memcached

3. Config file

vi /opt/memcached/my.conf

PORT="11200"
IP="192.168.0.40"
USER="root"
MAXCONN="1524"
CACHESIZE="3000"
OPTIONS=""

4. Start/stop script

vi /etc/init.d/memcached

#!/bin/bash
#
# Save me to /etc/init.d/memcached
# And add me to system start
#   chmod +x memcached
#   chkconfig --add memcached
#   chkconfig --level 35 memcached on
#
# Written by lei
#
# chkconfig: - 80 12
# description: Distributed memory caching daemon
#
# processname: memcached
# config: /usr/local/memcached/my.conf

source /etc/rc.d/init.d/functions

### Default variables
PORT="11211"
IP="192.168.0.40"
USER="root"
MAXCONN="1524"
CACHESIZE="64"
OPTIONS=""
SYSCONFIG="/opt/memcached/my.conf"

### Read configuration
[ -r "$SYSCONFIG" ] && source "$SYSCONFIG"

RETVAL=0
prog="/opt/memcached/bin/memcached"
desc="Distributed memory caching"

start() {
    echo -n $"Starting $desc ($prog): "
    daemon $prog -d -p $PORT -l $IP -u $USER -c $MAXCONN -m $CACHESIZE $OPTIONS
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/memcached
    return $RETVAL
}

stop() {
    echo -n $"Shutting down $desc ($prog): "
    killproc $prog
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/memcached
    return $RETVAL
}

restart() {
    stop
    start
}

reload() {
    echo -n $"Reloading $desc ($prog): "
    killproc $prog -HUP
    RETVAL=$?
    echo
    return $RETVAL
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    restart
    ;;
  condrestart)
    [ -e /var/lock/subsys/$prog ] && restart
    RETVAL=$?
    ;;
  reload)
    reload
    ;;
  status)
    status $prog
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart|condrestart|status}"
    RETVAL=1
esac

exit $RETVAL

5. Add an iptables rule allowing access from 192.168.0.0/24

iptables -A INPUT -s 192.168.0.0/255.255.255.0 -p tcp -m tcp --dport 11200 -j ACCEPT

6. Start it

/etc/init.d/memcached start

7. Web admin interface

http://www.junopen.com/memadmin/

Posted in Memcached/redis.



Installing redis on linux centos5.8

Redis is a high-performance key-value database. It largely makes up for the shortcomings of key/value stores like memcached, and in some scenarios it complements a relational database nicely. Clients are available for Python, Ruby, Erlang and PHP, and it is easy to use.

I. Install tcl; otherwise redis's make test errors out

You need tcl 8.5 or newer in order to run the Redis test
make[1]: *** [test] Error 1

wget http://downloads.sourceforge.net/tcl/tcl8.6.0-src.tar.gz
tar zxvf tcl8.6.0-src.tar.gz
cd tcl8.6.0-src
cd unix &&
./configure --prefix=/usr \
            --mandir=/usr/share/man \
            $([ $(uname -m) = x86_64 ] && echo --enable-64bit)
make &&
sed -e "s@^\(TCL_SRC_DIR='\).*@\1/usr/include'@" \
    -e "/TCL_B/s@='\(-L\)\?.*unix@='\1/usr/lib@" \
    -i tclConfig.sh
make test

Tests ended at Tue Apr 16 12:02:27 CST 2013
all.tcl:  Total 116  Passed 116  Skipped 0  Failed 0

make install &&
make install-private-headers &&
ln -v -sf tclsh8.6 /usr/bin/tclsh &&
chmod -v 755 /usr/lib/libtcl8.6.so

II. redis

1. Install redis

wget http://redis.googlecode.com/files/redis-2.6.12.tar.gz tar xzf redis-2.6.12.tar.gz cd redis-2.6.12 make make test

The following error can be ignored (https://github.com/antirez/redis/issues/1034):

[exception]: Executing test client: assertion:Server started even if RDB was unreadable!.
assertion:Server started even if RDB was unreadable!
    while executing
"error "assertion:$msg""
    (procedure "fail" line 2)
    invoked from within
"fail "Server started even if RDB was unreadable!""
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 $elsescript"
    (procedure "wait_for_condition" line 7)
    invoked from within
"wait_for_condition 50 100 {
    [string match {*Fatal error loading*} [exec tail -n1

After make finishes, four executables are left in the current directory:
redis-server:     the Redis server daemon
redis-cli:        the Redis command-line client (you can also drive the plain-text protocol over telnet)
redis-benchmark:  a tool for measuring Redis read/write performance on your system and configuration
redis-stat:       a tool for checking Redis's current status parameters and latency

2. Create the Redis directories

mkdir -p /opt/redis/bin
mkdir -p /opt/redis/etc
mkdir -p /opt/redis/var
cp redis.conf /opt/redis/etc/
cd src
cp redis-server redis-cli redis-benchmark redis-check-aof redis-check-dump /opt/redis/bin/
useradd redis
chown -R redis.redis /opt/redis

The directory layout just keeps the Redis files managed in one place; you could equally run make install and use the default system locations.

3. Copy the init script

cp ../utils/redis_init_script /etc/init.d/redis

4. Start redis

cd /opt/redis/bin
./redis-server /opt/redis/etc/redis.conf

[Redis ASCII-art startup banner: Redis 2.6.12 (00000000/0) 64 bit, running in stand alone mode, Port: 6379, PID: 24454, http://redis.io]

[24454] 12 Apr 10:34:19.519 # Server started, Redis version 2.6.12
[24454] 12 Apr 10:34:19.519 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[24454] 12 Apr 10:34:19.519 * The server is now ready to accept connections on port 6379

Once installed, running redis-server on its own starts Redis with the default configuration (which does not run in the background). To make Redis behave the way we want, the config file has to be edited.

5. Set iptables to allow access from the 192.168.0 internal subnet

iptables -A INPUT -s 192.168.0.0/24 -p tcp -m tcp --dport 6379 -j ACCEPT
/etc/init.d/iptables save

6. Configure redis

vi /opt/redis/etc/redis.conf

daemonize yes                     # run as a daemon (default no)
pidfile /var/run/redis.pid        # pid file, default /var/run/redis.pid
port 6379                         # Redis's default listening port
bind 192.168.0.41                 # address to bind, default 127.0.0.1
timeout 300                       # close idle client connections after this many seconds (default 300)
tcp-keepalive 0                   # tcp keepalive
loglevel notice                   # log level: debug, verbose (the default), notice, warning
logfile /opt/redis/var/redis.log  # log file; default stdout, set /dev/null to discard
databases 16                      # number of databases, default 16
save 900 1                        # dump-to-disk policy: after 900 s if at least 1 key changed
save 300 10                       # after 300 s if at least 10 keys changed
save 60 10000                     # after 60 s if at least 10000 keys changed
rdbcompression yes                # compress objects when dumping the .rdb database
dbfilename dump.rdb               # local database file name, default dump.rdb
dir /opt/redis/var/               # local database directory, default ./
maxmemory 2G                      # memory limit
maxmemory-policy allkeys-lru      # eviction policy
7. Adjust the kernel parameters. If memory is tight, set:
echo 1 > /proc/sys/vm/overcommit_memory

What this setting means: /proc/sys/vm/overcommit_memory selects the kernel's memory-allocation policy; its value can be 0, 1 or 2.
0: the kernel checks whether enough free memory is available for the process; if so, the allocation succeeds, otherwise it fails and an error is returned to the process.
1: the kernel allows allocating all of physical memory, regardless of the current memory state.
2: the kernel allows allocations exceeding physical memory plus swap combined.
When Redis dumps its data it forks a child process; in theory the child occupies as much memory as the parent, so a parent using 8G notionally needs another 8G allocated for the child. If memory cannot cover that, the redis server tends to go down, or IO load spikes and throughput drops. So the better allocation policy here is 1 (allow the allocation regardless of the current memory state).

vi /etc/sysctl.conf

vm.overcommit_memory = 1

Then apply it:

sysctl -p

8. Adjust the redis init script

vi /etc/init.d/redis

REDISPORT=6379
EXEC=/opt/redis/bin/redis-server
CLIEXEC=/opt/redis/bin/redis-cli
PIDFILE=/var/run/redis.pid
CONF="/opt/redis/etc/redis.conf"

If redis listens on an address other than localhost, the script needs one more tweak, or it will not be able to stop the server:

# add a variable for the listen address
REDIHOST=192.168.0.41
# and pass -h $REDIHOST when stopping
echo "Stopping ..."
$CLIEXEC -h $REDIHOST -p $REDISPORT shutdown

9. Test

/etc/init.d/redis start

Starting Redis server...

/opt/redis/bin/redis-cli -h 192.168.0.41

redis 127.0.0.1:6379> ping
PONG
redis 127.0.0.1:6379> set mykey c1gstduio
OK
redis 127.0.0.1:6379> get mykey
"c1gstduio"
redis 127.0.0.1:6379> exit

Benchmark:

/opt/redis/bin/redis-benchmark -h 192.168.0.41

====== PING_INLINE ======
  10000 requests completed in 0.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
99.51%
10. Stop the service: /opt/redis/bin/redis-cli shutdown. If the port differs, it can be specified: /opt/redis/bin/redis-cli -h 192.168.0.41 -p 6379 shutdown

11. Save / backup. Back up the data by periodically copying the dump file. Redis writes to disk asynchronously; to flush the in-memory data to disk right away, run redis-cli save, or redis-cli -p 6379 save to specify a port. Note that the deployment steps above require adequate privileges (copying files, setting kernel parameters, and so on). Running redis-benchmark also causes the in-memory data to be written to disk.

/opt/redis/bin/redis-cli save
OK

Check the result:

ll -h var/

total 48K
-rw-r--r-- 1 root root  40K Apr 12 11:49 dump.rdb
-rw-r--r-- 1 root root 5.3K Apr 12 11:49 redis.log

12. Run at boot

vi /etc/rc.local

/etc/init.d/redis start

III. The PHP extension

There are a great many redis client libraries, several for PHP alone; here I picked phpredis. Predis requires PHP >= 5.3. The candidates:

Predis       ★  Repository           JoL1hAHN         Mature and supported
phpredis     ★  Repository           yowgi            This is a client written in C as a PHP module.
Rediska         Repository Homepage  shumkov
RedisServer     Repository           OZ               Standalone and full-featured class for Redis in PHP
Redisent        Repository           justinpoliey
Credis          Repository           colinmollenhour  Lightweight, standalone, unit-tested fork of Redisent which wraps phpredis for best performance if available.

1. Install the phpredis extension

wget https://nodeload.github.com/nicolasff/phpredis/zip/master --no-check-certificate
unzip master
cd phpredis-master/
/opt/php/bin/phpize
./configure --with-php-config=/opt/php/bin/php-config
make
make install

Next add extension=redis.so to php.ini:

vi /opt/php/etc/php.ini

extension_dir = "/opt/php/lib/php/extensions/no-debug-non-zts-20060613/"
extension=redis.so

vi test.php

<?php
phpinfo();
$redis = new Redis();
$redis->connect("127.0.0.1", 6379);
$redis->set("test", "Hello World");
echo $redis->get("test");
?>

redis
Redis Support  enabled
Redis Version  2.2.2

IV. PHP-based web management tools

phpRedisAdmin
https://github.com/ErikDubbelboer/phpRedisAdmin
Demo: http://dubbelboer.com/phpRedisAdmin/?overview
This one has the most users, but after installing it I got nothing to display.

readmin
http://readmin.org/
Demo: http://demo.readmin.org/
Has to be installed in the document root.

phpredmin
https://github.com/sasanrose/phpredmin#readme
Displays fine and has pretty graphs; uses pseudo-rewrite URLs such as /index.php/welcome/stats.

wget https://nodeload.github.com/sasanrose/phpredmin/zip/master
unzip phpredmin-master.zip

ln -s phpredmin/public predmin

Edit nginx to support the rewrite (nginx.conf):

location ~* ^/predmin/index.php/ {
    rewrite ^/predmin/index.php/(.*) /predmin/index.php?$1 break;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fcgi.conf;
}

crontab -e

* * * * * cd /var/www/phpredmin/public && php index.php cron/index

Configure redis:

vi phpredmin/config

'redis' => Array(
    'host' => '192.168.0.30',
    'port' => '6379',
    'password' => Null,
    'database' => 0

Browse to admin.xxx.com/predmin/ and the interface appears.

V. Impressions from actual use: with redis serving as the cache for Discuz X2.5, bandwidth consumption was staggering, roughly ten times that of memcached; memory use was also up by about a quarter; and Discuz occasionally timed out or errored when connecting to redis. In the end I switched back to memcached.

References:
http://redis.io/topics/quickstart
http://hi.baidu.com/mucunzhishu/item/ead872ba3cec36db84dd798c
http://www.linuxfromscratch.org/blfs/view/svn/general/tcl.html

Posted in Memcached/redis.



Installing openvpn on centos5.8 LINUX

1. Download

wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
wget https://nodeload.github.com/OpenVPN/openvpn/zip/release/2.3
wget http://swupdate.openvpn.org/community/releases/openvpn-2.3.0.tar.gz

2. Install LZO

tar -xvzf lzo-2.06.tar.gz
cd lzo-2.06
./configure --prefix=/usr/local/lzo-2.06
make && make install

3. Install openvpn

tar zxvf openvpn-2.3.0.tar.gz
cd openvpn-2.3.0
./configure --prefix=/usr/local/openvpn-2.3.0 --with-lzo-headers=/usr/local/lzo/include/lzo-2.06 --with-lzo-lib=/usr/local/lzo-2.06/lib --with-ssl-headers=/usr/include/openssl/ --with-ssl-lib=/usr/lib/openssl/

If you hit the error openvpn error: lzo enabled but missing, try the following instead:

ldconfig
CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" ./configure --prefix=/usr/local/openvpn-2.3.0
make && make install

The install finishes with these reminders:

(1)  make device node: mknod /dev/net/tun c 10 200
(2a) add to /etc/modules.conf: alias char-major-10-200 tun
(2b) load driver: modprobe tun
(3)  enable routing: echo 1 > /proc/sys/net/ipv4/ip_forward

4. Create the tun device node

mknod /dev/net/tun c 10 200

5. Copy the sample server config

mkdir /etc/openvpn
cp sample/sample-config-files/server.conf /etc/openvpn/

6. Download easy-rsa

wget https://nodeload.github.com/OpenVPN/easy-rsa/zip/master
unzip master
cd easy-rsa-master
cp -R easy-rsa/ /etc/openvpn/

7. Create the certificates. cd /etc/openvpn/easy-rsa/2.0/. A quick tour of the files in there:
vars              sets up the environment variables the other scripts need
clean-all         creates the files and directories needed for generating the CA certificate and keys
build-ca          generates the CA certificate (interactive)
build-dh          generates the Diffie-Hellman file (interactive)
build-key-server  generates the server key (interactive)
build-key         generates a client key (interactive)
pkitool           generates certificates straight from the vars settings (non-interactive)

a. Initialize the keys directory

. ./vars     (note: two dots with a space between them)
./clean-all
./build-ca   (just press Enter all the way through)

b. Generate the Diffie-Hellman file

./build-dh

c. Generate the VPN server certificate

./build-key-server server

Then copy the freshly generated CA certificate and keys into /etc/openvpn/:

cd keys
cp ca.crt ca.key server.crt server.key dh2048.pem /etc/openvpn/

d. Generate the client CA certificate and key

./build-key client

Pack the client certificates up for use on the clients:

tar zcvf userkeys.tar.gz ca.crt ca.key client.crt client.key client.csr

8. Edit the config file

vi /etc/openvpn/openvpn.conf

port 8099
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key  # This file should be kept secret
dh /etc/openvpn/dh2048.pem
server 172.16.1.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 8.8.8.8"
client-to-client
duplicate-cn
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status /var/log/openvpn-status.log
log /var/log/openvpn.log
log-append /var/log/openvpn.log
verb 3

9. Start openvpn and check it

ln -s /usr/local/openvpn-2.3.0 /usr/local/openvpn
/usr/local/openvpn/sbin/openvpn --daemon --config /etc/openvpn/openvpn.conf
netstat -tunlp

10. Open iptables

iptables -t nat -A POSTROUTING -o eth0 -s 172.16.2.0/24 -j SNAT --to-source 100.100.100.100
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 8099 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 8099 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -s 172.16.1.0/24 -j SNAT --to-source 100.100.100.100

100.100.100.100 is the address of the vpn server's external NIC eth0; this is what lets clients reach the internet through the tunnel. It can also be set up this way:

iptables -t nat -A POSTROUTING -o eth0 -s 172.16.2.0/24 -j MASQUERADE

This is the more general approach, suited to dynamic public addresses such as ADSL dial-up.

11. Client install and configuration. My client is Windows XP. Download the latest client from the openvpn website and install it, clicking Next all the way through. Afterwards, copy the three files ca.crt, client.crt and client.key from /etc/openvpn/keys/ on the VPN server into the C:\Program Files\openvpn\config\keys folder on the client, then connect.
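The post does not show the client-side profile itself. For reference, a hypothetical minimal client.ovpn matching the server settings above (udp, port 8099, comp-lzo); YOUR.SERVER.IP and the key paths are placeholders to adjust:

```
# client.ovpn -- hypothetical example, not from the original post
client
dev tun
proto udp
remote YOUR.SERVER.IP 8099
resolv-retry infinite
nobind
persist-key
persist-tun
ca keys/ca.crt
cert keys/client.crt
key keys/client.key
comp-lzo
verb 3
```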

http://swupdate.openvpn.org/community/releases/openvpn-install-2.3.0-I004-i686.exe

PS: openvpn requires a client install, and multiple users cannot connect at the same time.

References:
http://lxsym.blog.51cto.com/1364623/772075
http://blog.jiechic.com/archives/budgetvm-install-openvpn-vpn-vps-server
http://www.itdhz.com/post-287.html
http://www.kdolphin.com/1120
http://blog.creke.net/748.html
http://luxiaok.blog.51cto.com/2177896/1078375
http://docs.linuxtone.org/ebooks/VPN/openvpn%E9%9B%86%E5%90%88.pdf

Posted in VPN.



Installing an L2TP/IPSec VPN on centos5.8 LINUX

Layer 2 Tunneling Protocol (L2TP) is an industry-standard Internet tunneling protocol that communicates over UDP port 1701. L2TP itself does no encryption at all, but IPSec can be used to encrypt the L2TP packets. An L2TP VPN is somewhat more complex to set up than a PPTP VPN. IPSec handles encryption and authentication using a pre-shared key (PSK), L2TP does the encapsulation, and PPP performs the actual user authentication.

I. Deploying IPSEC: installing openswan

1. Install the dependencies

yum install make gcc gmp-devel bison flex

2. Build and install. Openswan provides the IPSec implementation.

wget http://ftp.openswan.org/openswan/openswan-2.6.38.tar.gz
tar zxvf openswan-2.6.38.tar.gz
cd openswan-2.6.38
make programs install

3. Configure ipsec

vi /etc/ipsec.conf

config setup
	nat_traversal=yes
	virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
	oe=off
	protostack=netkey

conn L2TP-PSK-NAT
	rightsubnet=vhost:%priv
	also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
	authby=secret
	pfs=no
	auto=add
	keyingtries=3
	rekey=no
	ikelifetime=8h
	keylife=1h
	type=transport
	left=YOUR.SERVER.IP
	leftprotoport=17/1701
	right=%any
	rightprotoport=17/%any

YOUR.SERVER.IP is the vpn server's public ip. Note the leading tab indentation on the option lines; without it you may get the following error:

failed to start openswan IKE daemon - the following error occured:
can not load config '/etc/ipsec.conf': /etc/ipsec.conf:58: syntax error, unexpected KEYWORD, expecting $end [rightsubnet]

4. Set the shared key

vi /etc/ipsec.secrets

YOUR.SERVER.IP %any: PSK "YourSharedSecret"

YOUR.SERVER.IP is the vpn server's public ip, and YourSharedSecret is the pre-shared key.

5. Adjust the packet-forwarding settings

for each in /proc/sys/net/ipv4/conf/*
do
    echo 0 > $each/accept_redirects
    echo 0 > $each/send_redirects
done
echo 1 > /proc/sys/net/core/xfrm_larval_drop
echo 1 > /proc/sys/net/ipv4/ip_forward
sed -i '/net.ipv4.ip_forward / {s/0/1/g} ' /etc/sysctl.conf
sed -i '/net.ipv4.conf.default.rp_filter / {s/1/0/g} ' /etc/sysctl.conf
touch /var/lock/subsys/local

6. Restart IPSec and test it

/etc/init.d/ipsec restart

ipsec_setup: Stopping Openswan IPsec...
ipsec_setup: stop ordered, but IPsec appears to be already stopped!
ipsec_setup: doing cleanup anyway...
ipsec_setup: Starting Openswan IPsec U2.6.38/K2.6.18-308.el5...

Run ipsec verify; if nothing reports [FAILED], you are fine.

Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path                       [OK]
Linux Openswan U2.6.38/K2.6.18-308.el5 (netkey)
Checking for IPsec support in kernel                  [OK]
 SAref kernel support                                 [N/A]
NETKEY: Testing XFRM related proc values              [OK]
                                                      [OK]
                                                      [OK]
Checking that pluto is running                        [OK]
 Pluto listening for IKE on udp 500                   [OK]
 Pluto listening for NAT-T on udp 4500                [OK]
Two or more interfaces found, checking IP forwarding  [FAILED]
Checking NAT and MASQUERADEing                        [OK]
Checking for 'ip' command                             [OK]
Checking /bin/sh is not /bin/dash                     [OK]
Checking for 'iptables' command                       [OK]
Opportunistic Encryption Support                      [DISABLED]

Error 1: SAref kernel support [N/A]. In /etc/xl2tpd/xl2tpd.conf set:

[global]
ipsec saref = no

Linux Openswan U2.6.38/K2.6.18-308.el5 (netkey): running in netkey mode does not support multiple NATed clients behind the same LAN; enabling SAref kernel support and running in klips mode does.

Error 2: Two or more interfaces found, checking IP forwarding. Fix ip_forward; as long as cat /proc/sys/net/ipv4/ip_forward returns 1 you are fine: echo 1 > /proc/sys/net/ipv4/ip_forward

Error 3: Please enable /proc/sys/net/core/xfrm_larval_drop: echo 1 > /proc/sys/net/core/xfrm_larval_drop

II. Installing L2TP

1. Dependencies

yum install libpcap-devel ppp

2. Build and install

wget http://jaist.dl.sourceforge.net/project/rp-l2tp/rp-l2tp/0.4/rp-l2tp-0.4.tar.gz
tar -zxvf rp-l2tp-0.4.tar.gz
cd rp-l2tp-0.4
./configure
make
cp handlers/l2tp-control /usr/local/sbin/
mkdir /var/run/xl2tpd/
ln -s /usr/local/sbin/l2tp-control /var/run/xl2tpd/l2tp-control
wget http://www.xelerance.com/wp-content/uploads/software/xl2tpd/xl2tpd-1.3.0.tar.gz
tar -zxvf xl2tpd-1.3.0.tar.gz
cd xl2tpd-1.3.0
make
make install

make install prints:

install -d -m 0755 /usr/local/sbin
install -m 0755 xl2tpd /usr/local/sbin/xl2tpd
install -d -m 0755 /usr/local/share/man/man5
install -d -m 0755 /usr/local/share/man/man8
install -m 0644 doc/xl2tpd.8 /usr/local/share/man/man8/
install -m 0644 doc/xl2tpd.conf.5 doc/l2tp-secrets.5 \
    /usr/local/share/man/man5/
# pfc
install -d -m 0755 /usr/local/bin
install -m 0755 pfc /usr/local/bin/pfc
install -d -m 0755 /usr/local/share/man/man1
install -m 0644 contrib/pfc.1 /usr/local/share/man/man1/
# control exec
install -d -m 0755 /usr/local/sbin
install -m 0755 xl2tpd-control /usr/local/sbin/xl2tpd-control

3. Configuration

mkdir /etc/xl2tpd
vi /etc/xl2tpd/xl2tpd.conf

[global]
ipsec saref = yes

[lns default]
ip range = 192.168.81.2-192.168.81.254
; local ip is your LAN-side address
local ip = 192.168.81.1
refuse chap = yes
refuse pap = yes
require authentication = yes
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

4. Edit the PPP options

vi /etc/ppp/options.xl2tpd

require-mschap-v2
ms-dns 8.8.8.8
ms-dns 8.8.4.4
asyncmap 0
auth
crtscts
lock
hide-password
modem
debug
name l2tpd
proxyarp
lcp-echo-interval 30
lcp-echo-failure 4

5. Add a username and password

vi /etc/ppp/chap-secrets

# user      server    password    ip
vpnuser     l2tpd     userpass    *
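A secret line can also be appended from the shell. The sketch below writes to a temporary file rather than /etc/ppp/chap-secrets, and the user name and password are made-up examples; the server column must match the "name l2tpd" set in options.xl2tpd.

```shell
# Sketch only: append an L2TP user to a chap-secrets file.
# Writes to a temp copy instead of /etc/ppp/chap-secrets;
# vpnuser2/anotherpass are hypothetical values.
SECRETS=$(mktemp)

# Format: client  server  secret  IP-addresses
printf '%s\t%s\t%s\t*\n' vpnuser2 l2tpd anotherpass >> "$SECRETS"

cat "$SECRETS"
```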

8. Open the firewall and start xl2tpd

iptables -t nat -A POSTROUTING -s 192.168.81.0/24 -o eth0 -j MASQUERADE
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 1701 -j ACCEPT
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 4500 -j ACCEPT
iptables -I FORWARD -s 192.168.81.0/24 -j ACCEPT
iptables -I FORWARD -d 192.168.81.0/24 -j ACCEPT

/usr/local/sbin/xl2tpd

Errors

Feb 20 15:20:38 localc1g ipsec__plutorun: /usr/local/lib/ipsec/_plutorun: line 250: 7859 Aborted (core dumped) /usr/local/libexec/ipsec/pluto --nofork --secretsfile /etc/ipsec.secrets --ipsecdir /etc/ipsec.d --use-netkey --uniqueids --nat_traversal --virtual_private %v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
Feb 20 15:20:38 localc1g ipsec__plutorun: !pluto failure!: exited with error status 134 (signal 6)
Feb 20 15:20:38 localc1g ipsec__plutorun: restarting IPsec after pause...
Feb 20 16:58:47 localc1g pppd[13553]: The remote system is required to authenticate itself
Feb 20 16:58:47 localc1g pppd[13553]: but I couldn't find any suitable secret (password) for it to use to do so.

Check that the server field in the chap-secrets file is correct.

Feb 21 11:30:52 localc1g pluto[16897]: "L2TP-PSK-NAT"[11] 122.221.55.121 #11: probable authentication failure (mismatch of preshared secrets?): malformed payload in packet
Feb 21 11:30:52 localc1g pluto[16897]: | payload malformed after IV

Check that the client's PSK is correct.

9. Run at boot: add the following to /etc/rc.local

touch /var/lock/subsys/local
for each in /proc/sys/net/ipv4/conf/*
do
    echo 0 > $each/accept_redirects
    echo 0 > $each/send_redirects
done
echo 1 > /proc/sys/net/core/xfrm_larval_drop
echo 1 > /proc/sys/net/ipv4/ip_forward
/etc/init.d/ipsec restart
/usr/local/sbin/xl2tpd

References:
http://www.myvm.net/archives/554
http://amumy.blog.163.com/blog/static/17312970201210282323568/
http://www.vpsyou.com/2010/08/10/centos-install-l2tpipsec-and-simple-troubleshooting.html
http://www.esojourn.org/blog/post/setup-l2tp-vpn-server-with-ipsec-in-centos6.php
https://www.dls-yan.com/2012/10/04/783.html
http://blog.csdn.net/rosetta/article/details/7794826

http://book.51cto.com/art/201204/331170.htm
http://blog.csdn.net/cumtmimi/article/details/1814073

1. The "IPSEC Services" service is not running

Do the following:

Computer Management -> Services and Applications -> Services, find IPSEC Services, double-click it, and set the startup type to Automatic.

Reboot, then configure the policy again.

2. How to start IPSEC Services

Addendum: if starting it gives the error "Could not start the IPSEC Services service on Local Computer. Error 1747: The authentication service is unknown", the startup type is already Automatic but the service still fails to start, even after installing the network client.

Fix: open Start > Run, type CMD, then in the command window run: netsh winsock reset

3. Edit the registry. The default Windows XP L2TP policy does not allow L2TP traffic without IPsec encryption. This default behavior can be disabled by editing the Windows XP registry:
1) Open "Start" -> "Run", type "Regedt32" to open the Registry Editor, and locate the key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\RasMan\Parameters.
2) Under that key add the following value:
   Value name: ProhibitIpSec
   Data type: REG_DWORD
   Value: 1
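The same change can be imported from a .reg file; this sketch simply mirrors the manual steps above:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\Parameters]
"ProhibitIpSec"=dword:00000001
```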

Posted in VPN.



Compiling and installing ntp

After an ssh upgrade pulled in a new OpenSSL, ntpd would no longer start. A yum upgrade of ntp did not help, so I decided to build it from source.

# ntpd --version

ntpd – NTP daemon program – Ver. 4.2.4p8

# tail /var/log/messages

ntpd: OpenSSL version mismatch. Built against 10000003, you have 1000103f
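The two numbers are OpenSSL's hex-encoded versions (format 0xMNNFFPPS: Major, miNor, Fix, Patch, Status). A quick sketch decoding them shows ntpd was built against 1.0.0 while the system now has 1.0.1 patch level 3 (i.e. 1.0.1c):

```shell
# Sketch: decode OpenSSL's hex version number to see why ntpd refuses
# to start. The last (status) nibble, not decoded here, distinguishes
# beta from release builds.
decode() {
    v=$1
    major=$(( (v >> 28) & 0xf ))
    minor=$(( (v >> 20) & 0xff ))
    fix=$((   (v >> 12) & 0xff ))
    patch=$(( (v >> 4)  & 0xff ))
    echo "$major.$minor.$fix patch=$patch"
}
decode 0x10000003   # what ntpd was built against -> 1.0.0 patch=0
decode 0x1000103f   # what the system has now     -> 1.0.1 patch=3
```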

Remove the old ntp

yum remove ntp

Build ntp

wget http://www.eecis.udel.edu/~ntp/ntp_spool/ntp4/ntp-4.2/ntp-4.2.6p5.tar.gz
tar zxvf ntp-4.2.6p5.tar.gz
cd ntp-4.2.6p5
./configure --prefix=/usr/local/ntp-4.2.6p5 --enable-all-clocks --enable-parse-clocks
make && make install

Create a symlink: ln -s /usr/local/ntp-4.2.6p5 /usr/local/ntp

Edit the configuration file: vi /etc/ntp.conf

driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server 0.centos.pool.ntp.org
server 1.centos.pool.ntp.org
server 2.centos.pool.ntp.org
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys

Run ntpd

/usr/local/ntp/bin/ntpd -c /etc/ntp.conf

Check synchronization status

watch /usr/local/ntp/bin/ntpq -p
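If you want to script the check instead of watching it, the offset of the currently selected peer (the line ntpq marks with "*") can be pulled from the `ntpq -p` output. The sample below uses made-up peer names and values:

```shell
# Sketch: extract the offset (ms) of the selected peer from sample
# `ntpq -p` output. Peer names and numbers here are hypothetical.
sample='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp1.example.com 192.0.2.10       2 u   33   64  377   12.345   -0.512   0.133
+ntp2.example.com 198.51.100.7     3 u   30   64  377   20.110    1.204   0.412'

# The offset is the 9th column on the "*" line.
offset=$(echo "$sample" | awk '/^\*/ { print $9 }')
echo "$offset"   # prints -0.512
```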

Run at boot

echo '/usr/local/ntp/bin/ntpd -c /etc/ntp.conf' >> /etc/rc.local

Reference: http://vbird.dic.ksu.edu.tw/linux_server/0440ntp/0440ntp-centos4.php

Posted in LINUX.



Centralized Linux log management with syslog-ng + splunk

syslog-ng can be thought of simply as an enterprise-grade replacement for syslog as a log server. The open-source edition in use today is the direct descendant of the syslog-ng project started over a decade ago. syslog-ng can run in both server and agent mode, and supports UDP, reliable TCP, and encrypted TLS transports. It can be used to build flexible, reliable log servers in mixed, complex environments.

Other features of the open-source edition of syslog-ng:

  1. Support for SSL/TLS
  2. Support for writing logs into databases: MySQL, Microsoft SQL (MSSQL), Oracle, PostgreSQL, and SQLite
  3. Support for the standard syslog protocol
  4. Support for filter, parse, and rewrite
  5. Support for more platforms
  6. Higher throughput

syslog-ng is optimized for performance and can handle huge volumes of data. On ordinary hardware, correctly configured, it can process 75,000 messages per second in real time, which is more than 24 GB of raw logs per hour.

Preface: standard Linux ships with syslog, whose rules are usually written in the form

facility.priority    action

By default the system predefines 12+8 facilities (mail, news, auth, etc.) and eight priority levels (emerg down to debug); usually that is all you have to work with. The notes at the end explain them in detail.

syslog-ng works differently and is far more powerful: you only need to define a source and a destination, and possibly a filter in between. The general form is:

log { source(...); filter(...); destination(...); };

Each of these parts is explained in detail below; they must all be defined in syslog-ng.conf.
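As a concrete sketch of that three-part statement (the names s_local, f_kern, and d_kern are illustrative, not from the original configuration):

```
source s_local { unix-stream("/dev/log"); internal(); };
filter f_kern  { facility(kern); };
destination d_kern { file("/var/log/kern.log"); };
log { source(s_local); filter(f_kern); destination(d_kern); };
```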

I. Installing syslog-ng

Directly with yum:

yum install syslog-ng

Or from source: http://www.balabit.com/downloads/files?path=/libol

wget http://www.balabit.com/downloads/files?path=/libol/0.3/libol-0.3.18.tar.gz
wget http://www.balabit.com/downloads/files?path=/syslog-ng/open-source-edition/3.3.7/source/eventlog_0.2.12.tar.gz
wget http://www.balabit.com/downloads/files?path=/syslog-ng/open-source-edition/3.3.7/source/syslog-ng_3.3.7.tar.gz

1. Install eventlog

tar -zxvf eventlog_0.2.12.tar.gz
cd eventlog-0.2.12
./configure --prefix=/usr/local/eventlog
make && make install

2. Install libol

mv files\?path\=%2Flibol%2F0.3%2Flibol-0.3.18.tar.gz libol-0.3.18.tar.gz
tar -zxvf libol-0.3.18.tar.gz
cd libol-0.3.18
./configure --prefix=/usr/local/libol
make && make install

3. Install syslog-ng

yum install pcre
# set the pkg-config path
export PKG_CONFIG_PATH=/usr/local/eventlog/lib/pkgconfig/
tar -zxvf syslog-ng_3.3.7.tar.gz
cd syslog-ng-3.3.7
./configure --prefix=/usr/local/syslog-ng --with-libol=/usr/local/libol/
make && make install

II. Configuring syslog-ng

1. Open the receiving port in iptables, here restricted to the internal network

iptables -A INPUT -p tcp -m tcp -s 192.168.0.0/16 --dport 514 -j ACCEPT
iptables -A INPUT -p udp -m udp -s 192.168.0.0/16 --dport 514 -j ACCEPT

2. Global configuration, done in /etc/syslog-ng/syslog-ng.conf. In newer versions sync has become flush_lines and long_hostnames has become chain_hostnames; using the old keywords produces warnings:

Your configuration file uses an obsoleted keyword, please update your configuration; keyword='sync', change='flush_lines'
Your configuration file uses an obsoleted keyword, please update your configuration; keyword='long_hostnames', change='chain_hostnames'
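The keyword migration can be scripted; the sketch below runs the substitutions on a scratch copy (apply this to /etc/syslog-ng/syslog-ng.conf only after making a backup):

```shell
# Sketch: rewrite the obsoleted syslog-ng keywords in a scratch copy
# of the config file. CONF here is a temp file, not the real config.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
options { sync(1); long_hostnames(off); };
EOF

sed -i -e 's/sync(/flush_lines(/' \
       -e 's/long_hostnames(/chain_hostnames(/' "$CONF"

cat "$CONF"
```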

Receive remote logs and collect them into one file per year.month and host. Example:

options {
    keep_hostname(off);
    chain_hostnames(off);
    flush_lines(1);
    log_fifo_size(1024);
    create_dirs(yes);   # if a dir does not exist, create it
    owner(root);        # owner of created files
    group(root);        # group of created files
    perm(0600);         # permissions of created files
    dir_perm(0700);     # permissions of created dirs
};

source s_local {
    system();
    unix-stream("/dev/log");   # local system logs
    file("/proc/kmsg");        # local kernel logs
    internal();
};

source s_all {
    udp(ip(0.0.0.0) port(514));   # remote logs arriving at 514/udp
};

destination d_local_file {
    file("/var/syslog/$YEAR.$MONTH/$HOST/log-$DAY.log"
        owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
};

destination d_net_file {
    file("/var/syslog/$YEAR.$MONTH/$HOST/log-$DAY.log"
        owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
};

log { source(s_local); destination(d_local_file); };
log { source(s_all); destination(d_net_file); };

Global options (options):
chain_hostnames - whether to record long hostnames, i.e. fully qualified domain names.
flush_lines - how many message lines to send to the destination at once; 0 means send each message as soon as it arrives.
sync_freq - number of log message lines that may be buffered before being written to file.
use_dns - whether to use DNS: yes, no, or persist_only. With persist_only, only the /etc/hosts file is consulted to resolve hostnames, without depending on a DNS server.
stats_freq - interval in seconds between STATS messages (statistics about dropped log messages); 0 disables them.
normalize_hostnames - whether to convert hostnames to lowercase.
keep_hostname - when logs are forwarded or relayed through an intermediate server, keeps the original hostname so it arrives at the central server intact instead of being re-resolved via DNS (or /etc/hosts).

Sources (source):
internal - messages generated internally by syslog-ng
unix-stream - receive messages on the named unix socket in SOCK_STREAM mode
unix-dgram - receive messages on the named unix socket in SOCK_DGRAM mode
file - read messages from the named file
pipe, fifo - read messages from the named pipe or FIFO device
tcp - receive messages on the given TCP port
udp - receive messages on the given UDP port
program - messages from a program
syslog - syslog-format messages from the network

Destinations (destination):
file() - one of the most important destination drivers in syslog-ng; directs log messages into files.
logstore() - stores messages in a binary format that can be encrypted and compressed.
pipe() - sends messages to a named pipe such as /dev/xconsole.
program() - forks a process, runs the given program with the given arguments, and feeds messages to its standard input.
sql() - stores messages in a database such as MySQL, Oracle, or MSSQL.
syslog() - forwards messages to a remote log server.
unix-stream() and unix-dgram() - send messages to a UNIX socket in SOCK_STREAM or SOCK_DGRAM mode.
udp() and tcp() - send messages over TCP or UDP to another host on the local network or the Internet.
usertty() - sends messages to the terminal of a logged-in user.

3. Start: /usr/local/syslog-ng/sbin/syslog-ng

Stop: pkill syslog-ng

4. Testing. On the other Linux nodes, configure forwarding in syslog.conf or rsyslog.conf: vi /etc/syslog.conf (or vi /etc/rsyslog.conf)

*.* @<syslog-ng-server-ip>

/etc/init.d/syslog restart

Use logger to test:

logger -p local3.info hello

The message should now appear on the syslog-ng server.
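For reference, the priority number that `logger -p local3.info` encodes into the raw syslog packet is facility*8 + severity (local3 = 19, info = 6), so the message arrives tagged <158>:

```shell
# Sketch: compute the syslog PRI value (RFC 3164) that logger encodes
# for a given facility/severity pair.
pri() {
    facility=$1 severity=$2
    echo $(( facility * 8 + severity ))
}
pri 19 6   # local3=19, info=6 -> 158
```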

III. Splunk

Splunk looks stronger than LogZilla (php-syslog-ng). Splunk is a top-tier log analysis product: if you routinely analyze logs with grep, awk, sed, sort, uniq, tail, and head, you want Splunk. It handles common log formats such as apache, squid, system logs, and mail.log. All logs are indexed first, after which you can run cross-searches with a rich query language, and the results are presented visually. Logs can be shipped to the Splunk server as files, streamed over the network in real time, or collected in a distributed fashion; many collection methods are supported.

There is a free edition and an enterprise edition. The enterprise edition costs upwards of 30,000 USD, but the free edition is already quite capable.

The download is an enterprise trial that can be converted to the free edition. The difference between free and paid: The Free license includes 500 MB/day of indexing volume, is free (as in beer), and has no expiration date.

The following features that are available with the Enterprise license are disabled in Splunk Free:

Multiple user accounts and role-based access controls
Distributed search
Forwarding in TCP/HTTP formats (you can forward data to other Splunk instances, but not to non-Splunk instances)
Deployment management (including for clients)
Alerting/monitoring 

1. Install Splunk. Register an account at http://www.splunk.com (careless fake details may be rejected), download the splunk tarball (latest at the time of writing: splunk-5.0.1), unpack it, and move it to /usr/local/splunk.

2. Open iptables. Splunk listens on port 8000 by default; allow access from the permitted IPs:

iptables -A INPUT -p tcp -m tcp -s 192.168.0.39 --dport 8000 -j ACCEPT

3. Start

/usr/local/splunk/bin/splunk start

The first run walks through some setup; answer "y" to agree.

4. Point it at the logs. Open http://localhost:8000 (the password must be reset on first login), then click "Add data", choose local files, and add a TCP port to receive logs forwarded by syslog-ng.

5. Adjust the syslog-ng configuration to filter different logs into different files and forward them to Splunk:

options {
    use_dns(no);
    use_fqdn(no);
    chain_hostnames(off);
    keep_hostname(off);
    flush_lines(0);
    stats_freq(43200);
    create_dirs(yes);
};

source s_internal { internal(); };
destination d_syslognglog { file("/var/log/syslog-ng.log"); };
log { source(s_internal); destination(d_syslognglog); };

source s_sys { system(); file("/proc/kmsg"); unix-stream("/dev/log"); };

destination d_cons { file("/dev/console"); };
destination d_mesg { file("/var/log/messages"); };
destination d_auth { file("/var/log/secure"); };
destination d_mail { file("/var/log/maillog"); };
destination d_spol { file("/var/log/spooler"); };
destination d_boot { file("/var/log/boot.log"); };
destination d_cron { file("/var/log/cron"); };
destination d_rsync { file("/var/log/rsync"); };
destination d_mlal { usertty("*"); };

filter f_filter1 { facility(kern); };
filter f_filter2 { level(info) and not (facility(mail) or facility(authpriv) or facility(cron)); };
filter f_filter3 { facility(authpriv); };
filter f_filter4 { facility(mail); };
filter f_filter5 { level(emerg); };
filter f_filter6 { facility(uucp) or (facility(news) and level(crit)); };
filter f_filter7 { facility(local7); };
filter f_filter8 { facility(cron); };
filter f_filter9 { facility(daemon); };
filter f_filter10 { facility(local6); };

#log { source(s_sys); filter(f_filter1); destination(d_cons); };
log { source(s_sys); filter(f_filter2); destination(d_mesg); };
log { source(s_sys); filter(f_filter3); destination(d_auth); };
log { source(s_sys); filter(f_filter4); destination(d_mail); };
log { source(s_sys); filter(f_filter5); destination(d_mlal); };
log { source(s_sys); filter(f_filter6); destination(d_spol); };
log { source(s_sys); filter(f_filter7); destination(d_boot); };
log { source(s_sys); filter(f_filter8); destination(d_cron); };

# Remote logging
source s_remote { udp(ip(192.168.0.39) port(514)); };

destination r_mesg { file("/var/log/syslog-ng/$YEAR.$MONTH/$HOST/messages" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };
destination r_auth { file("/var/log/syslog-ng/$YEAR.$MONTH/$HOST/secure" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };
destination r_mail { file("/var/log/syslog-ng/$YEAR.$MONTH/$HOST/maillog" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };
destination r_spol { file("/var/log/syslog-ng/$YEAR.$MONTH/$HOST/spooler" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };
destination r_boot { file("/var/log/syslog-ng/$YEAR.$MONTH/$HOST/boot.log" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };
destination r_cron { file("/var/log/syslog-ng/$YEAR.$MONTH/$HOST/cron" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };
destination r_daemon { file("/var/log/syslog-ng/$YEAR.$MONTH/$HOST/daemon" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };
destination r_local6 { file("/var/log/syslog-ng/$YEAR.$MONTH/network/messages" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes)); };

#destination d_separatedbyhosts {
#    file("/var/log/syslog-ng/$HOST/messages" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes));
#};
#log { source(s_remote); destination(d_separatedbyhosts); };

log { source(s_remote); filter(f_filter2); destination(r_mesg); };
log { source(s_remote); filter(f_filter3); destination(r_auth); };
log { source(s_remote); filter(f_filter4); destination(r_mail); };
log { source(s_remote); filter(f_filter6); destination(r_spol); };
log { source(s_remote); filter(f_filter7); destination(r_boot); };
log { source(s_remote); filter(f_filter8); destination(r_cron); };
log { source(s_remote); filter(f_filter9); destination(r_daemon); };
log { source(s_remote); filter(f_filter10); destination(r_local6); };

# splunk listens on port 1999
destination d_tcp { tcp("localhost" port(1999) localport(999)); };
log { source(s_remote); destination(d_tcp); };

6. Run at boot:

echo '/usr/local/syslog-ng/sbin/syslog-ng' >> /etc/rc.local
echo '/usr/local/splunk/bin/splunk start' >> /etc/rc.local

References:
http://www.php-oa.com/2012/01/13/linux-syslog-ng.html
http://blog.163.com/dingding_jacky/blog/static/1669127872011113011048416/
http://andyxu.blog.51cto.com/2050315/888583
http://bbs.linuxtone.org/thread-2082-1-3.html
http://www.phpwebgo.com/2012/05/14/318.html
http://www.balabit.com/sites/default/files/documents/syslog-ng-v3.0-guide-admin-en.html/bk01-toc.html
http://docs.splunk.com/Documentation/Splunk
http://www.syslog.org/syslog-ng

Posted in Tech, Logs.



Apache Tomcat FORM authentication security bypass vulnerability

Published: 2012-12-04 (GMT+0800). Affected versions:

Apache Group Tomcat 7.0.0 - 7.0.29
Apache Group Tomcat 6.0.0 - 6.0.35

Description:

BUGTRAQ ID: 56812
CVE(CAN) ID: CVE-2012-3546

Apache Tomcat is a popular open-source JSP application server.

Tomcat versions before 7.0.30 and 6.0.36 have a security flaw in the FORM authentication implementation. When FORM authentication is used, if another component (such as Single-Sign-On) calls request.setUserPrincipal() before FormAuthenticator#authenticate() is invoked, an attacker can bypass FORM authentication by appending "/j_security_check" to the end of a URL (see http://seclists.org/fulldisclosure/2012/Dec/73).

Remediation:

Vendor patch:

Apache Group

The vendor has released upgrades that fix this issue; download 7.0.30 / 6.0.36 or later from the vendor's site.

References:
http://tomcat.apache.org/security.html
http://tomcat.apache.org/security-7.html
http://tomcat.apache.org/security-6.html

Posted in Tomcat, Security Advisories.
