Saturday, February 14, 2015

Monitor RDS with Nagios

This plugin is written in Python and uses the boto module (a Python interface to Amazon Web Services) to fetch various RDS metrics from CloudWatch and compare them against the given thresholds.

Install the package: yum install python-boto or apt-get install python-boto

Create a config /etc/boto.cfg or ~nagios/.boto with your AWS API credentials. See http://code.google.com/p/boto/wiki/BotoConfig

The plugin is supposed to be run by Nagios, i.e. under the nagios user, so that user needs permission to read the config /etc/boto.cfg or ~nagios/.boto.

Example:

[root@centos6 ~]# cat /etc/boto.cfg
[Credentials]
aws_access_key_id = THISISATESTKEY
aws_secret_access_key = thisisatestawssecretaccesskey

If you do not share this config with other tools such as the Cacti script, you can secure the file the following way:

[root@centos6 ~]# chown nagios /etc/boto.cfg
[root@centos6 ~]# chmod 600 /etc/boto.cfg
DESCRIPTION

The plugin provides 4 checks and some options to list and print RDS details:

RDS Status
RDS Load Average
RDS Free Storage
RDS Free Memory
To get the list of all RDS instances under AWS account:

# ./aws-rds-nagios-check.py -l
To get the detailed status of RDS instance identified as blackbox:

# ./aws-rds-nagios-check.py -i blackbox -p
Nagios check for the overall status. Useful if you want to make the rest of the checks dependent on this one:

# ./aws-rds-nagios-check.py -i blackbox -m status
OK mysql 5.1.63. Status: available
Nagios check for CPU utilization; specify warning and critical thresholds as percentages for the 1-min., 5-min., and 15-min. averages respectively:

# ./aws-rds-nagios-check.py -i blackbox -m load -w 90,85,80 -c 98,95,90
OK Load average: 18.36%, 18.51%, 15.95% | load1=18.36;90.0;98.0;0;100 load5=18.51;85.0;95.0;0;100 load15=15.95;80.0;90.0;0;100
Nagios check for the free memory; specify thresholds as a percentage (default) or in GB with -u GB:

# ./aws-rds-nagios-check.py -i blackbox -m memory -w 5 -c 2
OK Free memory: 5.90 GB (9%) of 68 GB | free_memory=8.68;5.0;2.0;0;100
# ./aws-rds-nagios-check.py -i blackbox -m memory -u GB -w 4 -c 2
OK Free memory: 5.90 GB (9%) of 68 GB | free_memory=5.9;4.0;2.0;0;68
Nagios check for the free storage space; specify thresholds as a percentage or in GB:

# ./aws-rds-nagios-check.py -i blackbox -m storage -w 10 -c 5
OK Free storage: 162.55 GB (33%) of 500.0 GB | free_storage=32.51;10.0;5.0;0;100
# ./aws-rds-nagios-check.py -i blackbox -m storage -u GB -w 10 -c 5
OK Free storage: 162.55 GB (33%) of 500.0 GB | free_storage=162.55;10.0;5.0;0;500.0
CONFIGURATION

Here is an excerpt of a possible Nagios config:

define servicedependency{
      hostgroup_name                  mysql-servers
      service_description             RDS Status
      dependent_service_description   RDS Load Average, RDS Free Storage, RDS Free Memory
      execution_failure_criteria      w,c,u,p
      notification_failure_criteria   w,c,u,p
      }

define service{
      use                             active-service
      hostgroup_name                  mysql-servers
      service_description             RDS Status
      check_command                   check_rds!status!0!0
      }

define service{
      use                             active-service
      hostgroup_name                  mysql-servers
      service_description             RDS Load Average
      check_command                   check_rds!load!90,85,80!98,95,90
      }

define service{
      use                             active-service
      hostgroup_name                  mysql-servers
      service_description             RDS Free Storage
      check_command                   check_rds!storage!10!5
      }

define service{
      use                             active-service
      hostgroup_name                  mysql-servers
      service_description             RDS Free Memory
      check_command                   check_rds!memory!5!2
      }

define command{
      command_name    check_rds
      command_line    $USER1$/pmp-check-aws-rds.py -i $HOSTALIAS$ -m $ARG1$ -w $ARG2$ -c $ARG3$
      }
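The check_rds command above passes $HOSTALIAS$ to the plugin as the RDS instance identifier, so each monitored instance needs a Nagios host whose alias is set to that identifier. A minimal sketch of such a host definition (the template name, host name and endpoint address here are only examples):

define host{
      use                             generic-host
      host_name                       blackbox
      alias                           blackbox
      address                         blackbox.xxxxxxxxxx.us-east-1.rds.amazonaws.com
      hostgroups                      mysql-servers
      }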

Saturday, February 7, 2015

MongoDB on Amazon Ec2

EC2 instances can be configured with either ephemeral storage or persistent storage using the Elastic Block Store (EBS). Ephemeral storage is lost when instances are terminated, so it is generally not recommended unless you are comfortable with the data-loss implications.
For almost all deployments EBS will be the better choice. For production systems we recommend using:
  • EBS-optimized EC2 instances
  • Provisioned IOPS (PIOPS) EBS volumes
Storage configurations can vary from one deployment to the next, but for the best performance we recommend one volume each for the data directory, the journal, and the log. Each of these has a different write behaviour, and using a separate volume for each reduces IO contention. RAID levels such as RAID0, RAID1, or RAID10 can also be used to provide volume-level redundancy or additional capacity. Different storage configurations have different cost implications, especially when combined with PIOPS EBS volumes.
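If you do opt for RAID instead of single volumes, the array is assembled with mdadm before a filesystem is created on it. A minimal RAID10 sketch, assuming four additional EBS volumes attached as /dev/xvdj through /dev/xvdm (device names chosen purely for illustration):

$ sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/xvdj /dev/xvdk /dev/xvdl /dev/xvdm
$ sudo mkfs.ext4 /dev/md0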

Deploy from the AWS Marketplace

There are three officially maintained MongoDB AMIs on the AWS Marketplace. Each AMI comes pre-configured with individual PIOPS EBS volumes for data, journal, and the log.
  • MongoDB 2.4 with 1000 IOPS - data: 200 GB @ 1000 IOPS, journal: 25 GB @ 250 IOPS, log: 10 GB @ 100 IOPS
  • MongoDB 2.4 with 2000 IOPS - data: 200 GB @ 2000 IOPS, journal: 25 GB @ 250 IOPS, log: 15 GB @ 150 IOPS
  • MongoDB 2.4 with 4000 IOPS - data: 400 GB @ 4000 IOPS, journal: 25 GB @ 250 IOPS, log: 20 GB @ 200 IOPS
For specific information about how each instance was configured, refer to Deploy MongoDB on EC2.

Deploy MongoDB on EC2

The following steps can be used to deploy MongoDB on EC2. The instances will be configured with the following characteristics:
  • Amazon Linux
  • MongoDB 2.4.x installed via Yum
  • Individual PIOPS EBS volumes for data (1000 IOPS), journal (250 IOPS), and log (100 IOPS)
  • Updated read-ahead values for each block device
  • Updated ulimit settings
Before continuing, be sure to have done the following:
  • Installed the EC2 command line tools
  • Generated an EC2 key pair for connecting to the instance via SSH
  • Created a security group that allows SSH connections
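If you use the newer AWS CLI rather than the legacy EC2 API tools (an assumption; any equivalent tooling works), the key pair and security group could be created roughly as follows, with the names picked purely for illustration:

$ aws ec2 create-key-pair --key-name mongodb-key --query 'KeyMaterial' --output text > mongodb-key.pem
$ chmod 400 mongodb-key.pem
$ aws ec2 create-security-group --group-name mongodb-ssh --description "SSH access to MongoDB host"
$ aws ec2 authorize-security-group-ingress --group-name mongodb-ssh --protocol tcp --port 22 --cidr 0.0.0.0/0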
Create the instance using the key pair and security group previously created; also include the --ebs-optimized flag and specify individual PIOPS EBS volumes (/dev/xvdf for data, /dev/xvdg for journal, /dev/xvdh for log).
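A rough sketch of such a launch with the AWS CLI, where the AMI ID, instance type and volume sizes are placeholders rather than the exact values used by the official AMIs:

$ aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m3.xlarge \
    --key-name mongodb-key --security-groups mongodb-ssh --ebs-optimized \
    --block-device-mappings '[
      {"DeviceName": "/dev/xvdf", "Ebs": {"VolumeSize": 200, "VolumeType": "io1", "Iops": 1000}},
      {"DeviceName": "/dev/xvdg", "Ebs": {"VolumeSize": 25, "VolumeType": "io1", "Iops": 250}},
      {"DeviceName": "/dev/xvdh", "Ebs": {"VolumeSize": 10, "VolumeType": "io1", "Iops": 100}}]'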
After logging in, update the installed packages, add the MongoDB yum repo, and install MongoDB:
$ sudo yum -y update
$ echo "[MongoDB]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1" | sudo tee -a /etc/yum.repos.d/mongodb.repo
$ sudo yum install -y mongodb-org-server mongodb-org-shell mongodb-org-tools
Next, create/configure the mount points, mount each volume, set ownership (MongoDB runs under the mongod user/group), and create the /data/journal symlink:
$ sudo mkdir /data /log /journal
$ sudo mkfs.ext4 /dev/xvdf
$ sudo mkfs.ext4 /dev/xvdg
$ sudo mkfs.ext4 /dev/xvdh
$ echo '/dev/xvdf /data ext4 defaults,auto,noatime,noexec 0 0
/dev/xvdg /journal ext4 defaults,auto,noatime,noexec 0 0
/dev/xvdh /log ext4 defaults,auto,noatime,noexec 0 0' | sudo tee -a /etc/fstab
$ sudo mount /data
$ sudo mount /journal
$ sudo mount /log
$ sudo chown mongod:mongod /data /journal /log
$ sudo ln -s /journal /data/journal
Now configure the following MongoDB parameters by editing the configuration file /etc/mongod.conf:
dbpath = /data
logpath = /log/mongod.log
By default Amazon Linux uses ulimit settings that are not appropriate for MongoDB. To set up ulimit to match the documented ulimit settings, use the following steps:
$ sudo nano /etc/security/limits.conf
* soft nofile 64000
* hard nofile 64000
* soft nproc 32000
* hard nproc 32000
$ sudo nano /etc/security/limits.d/90-nproc.conf
* soft nproc 32000
* hard nproc 32000
Finally, set the read-ahead value on the data volume:
$ sudo blockdev --setra 32 /dev/xvdf
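The instance characteristics listed above call for updated read-ahead values on each block device, so presumably the same setting should be applied to the journal and log volumes as well (the value 32 is simply carried over from the data volume):

$ sudo blockdev --setra 32 /dev/xvdg
$ sudo blockdev --setra 32 /dev/xvdh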
To start MongoDB, issue the following command:
$ sudo service mongod start
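If you also want mongod to start automatically at boot (a step not covered above), Amazon Linux uses chkconfig for that:

$ sudo chkconfig mongod on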

Monday, January 26, 2015

How to add a Custom NAT instance in AWS VPC?

This tutorial assumes that you are already running a VPC in AWS.

NAT Instances

Instances that you launch into a private subnet in a virtual private cloud (VPC) can't communicate with the Internet. You can optionally use a network address translation (NAT) instance in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.


To launch a NAT instance in AWS, search for "NAT" in the Community AMIs section; AWS provides plenty of NAT AMIs.

On the Choose an Instance Type page, select the instance type, then click Next: Configure Instance Details.

On the Configure Instance Details page, select the VPC you created from the Network list, and select your public subnet from the Subnet list.

Once the NAT instance has launched, disable the SrcDestCheck attribute for it (in the EC2 console this is the Change Source/Dest. Check action) and confirm by clicking "Yes, Disable".
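The same can be done from the AWS CLI, assuming it is installed and configured (the instance ID below is a placeholder):

aws ec2 modify-instance-attribute --instance-id i-xxxxxxxx --no-source-dest-check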
Connect to the NAT instance using terminal emulation software (e.g. PuTTY), and enable IP forwarding on it:

vi /etc/sysctl.conf

Uncomment the following line:

net.ipv4.ip_forward=1
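To apply the change without rebooting, reload the kernel settings:

sysctl -p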

Issue the iptables command for MASQUERADE (replace 10.0.0.0/16 with the CIDR of your VPC or private subnet):

iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
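This rule will not survive a reboot by itself; assuming the NAT AMI is based on Amazon Linux or another RHEL-like distribution, it can be persisted with:

service iptables save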

Modify the NAT instance security group to allow all or the desired inbound traffic from the private subnet (in my case, 10.100.20.0/24) or from specific servers.

Finally, create a custom route table, associate your private subnet(s) with it, and add a default route that uses the NAT instance as the gateway:
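With the AWS CLI this looks roughly as follows, with placeholder VPC, route table, instance and subnet IDs:

aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --instance-id i-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx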