Iperf
Iperf is a tool to measure the bandwidth and the quality of a network link.
By default, the Iperf client connects to the Iperf server on the TCP port 5001 and the bandwidth displayed by Iperf is the bandwidth from the client to the server.
client side
[root@client]#iperf -c 192.168.2.131
------------------------------------------------------------
Client connecting to 192.168.2.131, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.110 port 46327 connected with 192.168.2.131 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   113 MBytes  94.9 Mbits/sec
server side
[root@server]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.2.131 port 5001 connected with 192.168.2.110 port 46327
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec   113 MBytes  94.1 Mbits/sec
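For scripted or repeated runs it can be handy to pull just the throughput figure out of the iperf summary. A minimal sketch (the `iperf_bandwidth` helper name and the log path are my own, not part of iperf):

```shell
# Hypothetical helper: extract the last Mbits/sec figure from saved
# iperf output, e.g. to feed into a monitoring system.
iperf_bandwidth() {
  # the summary line looks like:
  # [  3]  0.0-10.0 sec   113 MBytes  94.9 Mbits/sec
  awk '/Mbits\/sec/ { bw = $(NF-1) } END { print bw }' "$1"
}

# in a real run you would capture the output first:
#   iperf -c 192.168.2.131 > /tmp/iperf.log
cat > /tmp/iperf.log <<'EOF'
[  3]  0.0-10.0 sec   113 MBytes  94.9 Mbits/sec
EOF
iperf_bandwidth /tmp/iperf.log   # prints 94.9
```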
Lighttpd mod_expire
If you have checked your page with Google PageSpeed and got a warning about
Leverage browser caching, and you're using Lighttpd instead of Apache, here is how to
enable it with mod_expire.
We have to edit the Lighttpd config /etc/lighttpd/lighttpd.conf. First we enable mod_expire
in the server.modules section, then we add some lines specifying which file types we want to cache longer to the mod_expire section. And at the end we enable ETags.
server.modules = (
...
#  "mod_usertrack",
   "mod_expire",
#  "mod_rrdtool",
   "mod_accesslog" )
...
#### mod_expire
$HTTP["url"] =~ "\.(png|js|jpg|gif|ico|css)$" {
    expire.url = ( "" => "access 21 days" )
}
...
#### etag
etag.use-inode = "enable"
etag.use-mtime = "enable"
etag.use-size = "enable"
static-file.etags = "enable"
Now we can restart Lighttpd with service lighttpd restart and check whether mod_expire works correctly.
#curl -I http://www.mypage.test/test/light.jpg
HTTP/1.1 200 OK
Expires: Sun, 19 Feb 2012 21:56:11 GMT
Cache-Control: max-age=3456000
Content-Type: image/jpeg
Accept-Ranges: bytes
ETag: "-649924182"
Last-Modified: Wed, 04 Jan 2012 15:27:27 GMT
Content-Length: 3538
Date: Tue, 10 Jan 2012 21:56:11 GMT
Server: lighttpd/1.4.26
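To check several static files at once, the header test can be wrapped in a small function. This is just a sketch (`has_expires` is a made-up name); it reads headers on stdin so curl output can be piped into it:

```shell
# Hypothetical helper: report whether HTTP response headers on stdin
# contain an Expires header (as set by mod_expire).
has_expires() {
  if grep -qi '^Expires:'; then echo "OK"; else echo "MISSING"; fi
}

# against a live server the usage would be:
#   curl -sI http://www.mypage.test/test/light.jpg | has_expires
printf 'HTTP/1.1 200 OK\r\nExpires: Sun, 19 Feb 2012 21:56:11 GMT\r\n' | has_expires   # prints OK
```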
Monitor HW Raid with Dell OpenManage
At work I have some Dell servers and I was interested in how to monitor the HW RAID, so here is a short
how-to.
Installation
First we add the Dell repo to our system.
#wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash
Now we can install srvadmin, Dell's utility suite for managing Dell servers.
#yum install srvadmin-all
If we get an error about an unsupported system, we can use this command.
#touch /opt/dell/srvadmin/lib/openmanage/IGNORE_GENERATION
We start the OpenManage services.
#/opt/dell/srvadmin/sbin/srvadmin-services.sh start
Starting Systems Management Device Drivers:
Starting dell_rbu:                                         [  OK  ]
Starting ipmi driver:                                      [  OK  ]
Starting Systems Management Device Drivers:
Starting dell_rbu: Already started                         [  OK  ]
Starting ipmi driver:                                      [  OK  ]
Starting Systems Management Data Engine:
Starting dsm_sa_datamgrd:                                  [  OK  ]
Starting dsm_sa_eventmgrd:                                 [  OK  ]
Starting DSM SA Shared Services:                           [  OK  ]
Starting DSM SA Connection Service:                        [  OK  ]
Monitoring
We create a symlink to omreport, the RAID reporting utility.
ln -s /opt/dell/srvadmin/bin/omreport /sbin/omreport
We set the setuid bit so regular system users can run it.
#chmod +s /opt/dell/srvadmin/bin/omreport
And now we can look at our HW Raid status.
$/sbin/omreport storage vdisk
Controller PERC 3/Di (Embedded)
ID                        : 0
Status                    : Ok
Name                      : RAID5 _
State                     : Ready
HotSpare Policy violated  : Not Applicable
Virtual Disk Bad Blocks   : Not Applicable
Secured                   : Not Applicable
Progress                  : Not Applicable
Layout                    : RAID-5
Size                      : 273.43 GB (293595512832 bytes)
Device Name               : /dev/sda
Bus Protocol              : SCSI
Media                     : HDD
Read Policy               : Read Cache Enabled
Write Policy              : Write Cache Enabled
Protected Cache Policy    : Not Applicable
Stripe Element Size       : 64 KB
Disk Cache Policy         : Not Applicable
If we want more detailed output, we can use this command.
$omreport storage pdisk controller=0
List of Physical Disks on Controller PERC 3/Di (Embedded)

Controller PERC 3/Di (Embedded)
ID                        : 0:0
Status                    : Ok
Name                      : Physical Disk 0:0
State                     : Online
Failure Predicted         : No
Certified                 : Not Applicable
Encryption Capable        : No
Secured                   : Not Applicable
Progress                  : Not Applicable
Bus Protocol              : SCSI
Media                     : HDD
Capacity                  : 33.90 GB (36398759936 bytes)
Used RAID Disk Space      : 33.90 GB (36398759936 bytes)
Available RAID Disk Space : 0.00 GB (0 bytes)
Hot Spare                 : No
Vendor ID                 : MAXTOR
Product ID                : ATLAS10K4_36SCA
Revision                  : DFM0
Serial No.                : Not Available
Part Number               : Not Available
Negotiated Speed          : Not Available
Capable Speed             : Not Available
Manufacture Day           : Not Available
Manufacture Week          : Not Available
Manufacture Year          : Not Available
SAS Address               : Not Available

(the same block repeats for Physical Disks 0:1 through 0:4, all Online with identical values)
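The omreport output above lends itself to a simple cron check. A sketch, assuming the "Status : Ok" field format shown above (`raid_status` is a made-up helper name, not part of OpenManage):

```shell
# Hypothetical check: print CRITICAL if any "Status" line in omreport
# output is not "Ok", otherwise print OK. Reads the report on stdin.
raid_status() {
  awk -F' *: *' '/^Status/ && $2 != "Ok" { bad = 1 }
                 END { print (bad ? "CRITICAL" : "OK") }'
}

# intended usage from cron:
#   /sbin/omreport storage vdisk | raid_status
printf 'ID : 0\nStatus : Ok\nState : Ready\n' | raid_status   # prints OK
```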
MySQL monitoring with Zabbix
If you want to monitor MySQL with Zabbix and do not want to expose a privileged MySQL password in zabbix_agent.conf,
mysqladmin ping, uptime, threads, questions and qps can be used with the lowest privilege in MySQL: USAGE.
The USAGE privilege allows the user to connect and ... that's it. No access to any database. But you can still get status and configuration variables with SHOW (GLOBAL STATUS | VARIABLES) LIKE '...', which is interesting for monitoring.
mysql> GRANT USAGE ON *.* TO 'zabbix'@'localhost' IDENTIFIED BY 'mypassword';
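On the agent side the checks can then be exposed as UserParameter items. A hypothetical zabbix_agentd.conf fragment (the item keys and the .my.cnf path are my own choices, not from the original setup; the point is that the password lives in a .my.cnf readable only by the zabbix user, not in the agent config):

```
# /etc/zabbix/zabbix_agentd.conf (fragment) -- hypothetical item keys;
# mysqladmin reads credentials from /var/lib/zabbix/.my.cnf via $HOME
UserParameter=mysql.ping,HOME=/var/lib/zabbix mysqladmin ping | grep -c alive
UserParameter=mysql.uptime,HOME=/var/lib/zabbix mysqladmin status | awk '{print $2}'
UserParameter=mysql.threads,HOME=/var/lib/zabbix mysqladmin status | awk '{print $4}'
```

The awk field positions assume the single-line "Uptime: N Threads: N Questions: ..." format that mysqladmin status prints.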
Piranha Load Balancing
Red Hat adapted the Piranha load balancing software to allow for transparent load balancing and failover between servers. The application being balanced does not require any special configuration; instead, a Red Hat Enterprise Linux server with the load balancer configured intercepts and routes traffic based on metrics/rules set on the load balancer.
lba1.virtual.net 10.10.50.11/24
lba2.virtual.net 10.10.50.12/24
web1.virtual.net 10.10.50.21/24
web2.virtual.net 10.10.50.22/24
virtual ip 10.10.50.100/24
Load Balancers
We install piranha and ipvsadm packages on both load balancers.
# yum install piranha ipvsadm -y
Now we create an empty /etc/sysconfig/ipvsadm file and enable IPv4 forwarding.
#touch /etc/sysconfig/ipvsadm
#sed -i 's/net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
Make IPv4 forwarding active.
#sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 268435456
We create the configuration file lvs.cf on both load balancers; lba1 will be primary and
lba2 will be secondary. In case of lba1 failure, lba2 will take over the virtual IP.
lba1
[lba1]#cat /etc/sysconfig/ha/lvs.cf
serial_no = 34
primary = 10.10.50.11
service = lvs
rsh_command = rsh
backup_active = 1
backup = 10.10.50.12
heartbeat = 1
heartbeat_port = 539
keepalive = 3
deadtime = 25
network = direct
reservation_conflict_action = preempt
debug_level = NONE
virtual HTTP {
     active = 1
     address = 10.10.50.100 eth0:1
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.1\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     scheduler = rr
     protocol = tcp
     timeout = 4
     reentry = 4
     quiesce_server = 1
     server web1 {
         address = 10.10.50.21
         active = 1
         weight = 1
     }
     server web2 {
         address = 10.10.50.22
         active = 1
         weight = 1
     }
}
lba2
[lba2]#cat /etc/sysconfig/ha/lvs.cf
serial_no = 34
primary = 10.10.50.11
service = lvs
rsh_command = rsh
backup_active = 1
backup = 10.10.50.12
heartbeat = 1
heartbeat_port = 539
keepalive = 3
deadtime = 25
network = direct
reservation_conflict_action = preempt
debug_level = NONE
virtual HTTP {
     active = 1
     address = 10.10.50.100 eth0:1
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.1\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     scheduler = rr
     protocol = tcp
     timeout = 4
     reentry = 4
     quiesce_server = 1
     server web1 {
         address = 10.10.50.21
         active = 1
         weight = 1
     }
     server web2 {
         address = 10.10.50.22
         active = 1
         weight = 1
     }
}
Web servers
Now we install httpd on the web servers and make sure the httpd service starts at boot time.
#yum install httpd
#chkconfig httpd on
Since the web servers are clustered with direct routing, we have to stop them from answering ARP requests for the virtual IP.
We can use iptables or arptables_jf; I recommend arptables.
#yum install arptables_jf -y #chkconfig arptables_jf on
We create arptables rules.
web1
[web1]#arptables -I IN -d 10.10.50.100 -j DROP
[web1]#arptables -A OUT -d 10.10.50.100 -j mangle --mangle-ip-s 10.10.50.21
[web1]#service arptables_jf save
[web1]#service arptables_jf start
web2
[web2]#arptables -I IN -d 10.10.50.100 -j DROP
[web2]#arptables -A OUT -d 10.10.50.100 -j mangle --mangle-ip-s 10.10.50.22
[web2]#service arptables_jf save
[web2]#service arptables_jf start
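An alternative to arptables that is often used for LVS direct routing is the kernel's own ARP tuning. A sketch for /etc/sysctl.conf on the real servers (this is my own suggestion, not part of the original setup):

```
# /etc/sysctl.conf (fragment) -- do not answer ARP requests for
# addresses configured only on lo (the virtual IP), and always use
# the primary interface address as the ARP source
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

Apply with sysctl -p; this achieves the same goal as the arptables rules above without an extra service.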
Finally we create a second loopback interface as an alias for the virtual IP address, so the
web servers accept traffic addressed to it. Do the same on both web servers.
# vi /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=10.10.50.100
NETMASK=255.255.255.255
NETWORK=10.10.50.0
BROADCAST=10.10.50.255
ONBOOT=yes
Let's start the pulse service and check functionality.
#service pulse start
#chkconfig pulse on
We check the virtual IP on the primary balancer.
[lba1]#ip address show eth0
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:db:1c:a5 brd ff:ff:ff:ff:ff:ff
    inet 10.10.50.11/24 brd 10.0.2.255 scope global eth0
    inet 10.10.50.100/24 brd 10.0.2.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fedb:1ca5/64 scope link
       valid_lft forever preferred_lft forever
For troubleshooting we can use the following command.
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.2.100:80 rr
  -> 10.0.2.22:80                 Route   1      0          0
  -> 10.0.2.21:80                 Route   1      0          0
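For a quick scripted health check, the ipvsadm listing can be parsed for the configured real servers. A sketch (`real_servers` is a made-up helper; it reads `ipvsadm -L -n` output on stdin):

```shell
# Hypothetical helper: count real servers behind the virtual service,
# i.e. the "Route" forwarding lines in ipvsadm -L -n output.
real_servers() {
  grep -c 'Route'
}

# intended usage:
#   ipvsadm -L -n | real_servers
printf -- '-> 10.0.2.22:80 Route 1 0 0\n-> 10.0.2.21:80 Route 1 0 0\n' | real_servers   # prints 2
```

If the count drops below the expected number, the nanny health check has removed a failed real server.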