Cobertura report not showing CSS properly

Recently I set up Cobertura coverage reports in my Jenkins, generated from gocov, but I noticed that the report wasn't showing the CSS properly and everything was in plain old text. When I investigated using the developer console, I found out that the browser was blocking the inline CSS due to content security policies. I was seeing the following error:

Blocked script execution in '<https://jenkins_url_here>' because the document's frame is sandboxed and the 'allow-scripts' permission is not set.

After some searching I found the Configuring Content Security Policy article, which talks about Jenkins' content security policy settings.

As I was using a Debian box to run Jenkins, I had to edit the /etc/default/jenkins file and add the following to the JAVA_ARGS variable, then restart Jenkins for the changes to take effect.

vi /etc/default/jenkins

-Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self'; style-src 'self' 'unsafe-inline';" 

systemctl restart jenkins
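
For reference, the edit amounts to appending that property to the existing JAVA_ARGS line in /etc/default/jenkins, roughly like below (the headless flag is just the Debian default, and quoting a value that contains spaces can be fiddly depending on how the init script expands JAVA_ARGS, so verify the running process after the restart):

JAVA_ARGS="-Djava.awt.headless=true -Dhudson.model.DirectoryBrowserSupport.CSP=\"default-src 'self'; style-src 'self' 'unsafe-inline';\""

ps aux | grep DirectoryBrowserSupport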

And my coverage reports were loading properly after this, with all the CSS intact.

– Sandeep Sidhu

Rackspace Cloud DNS management and API usage

This article is about using Rackspace Cloud DNS (beta) and managing your domains and entries using the API.

Okay.. some theory stuff: there are many long and detailed explanations of the DNS system, but let's see if I can put a simple one here just for the completeness of the article. DNS is a mechanism for resolving domain names to IP addresses, e.g. example.com would resolve to an IP address. Domain name entries exist so that people don't have to remember the IP address of your server and can instead remember a string like www.google.com. So, when you enter www.google.com into your browser and press Enter, your computer resolves the IP address behind www.google.com, contacts that IP address, and retrieves the index.html page from it.
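
You can watch that resolution step happen from any shell with a DNS lookup tool like dig (the answer you get back will vary, of course):

$ dig +short www.google.com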

– Sandeep Sidhu

Interactive Command line Interface to Rackspace Cloud Files

This is a Python-based command line interface for Rackspace Cloud Files. It provides a nice interactive interface to manage your Cloud Files from Linux or Windows machines using Python. I have had this code on GitHub for quite some time now, but found out that not many people knew about it, so I decided to write this quick post so that it shows up in Google searches and people can use it.

You can grab the code from here: https://github.com/sandeep-sidhu/python-cloudfiles-cmd

I know we can always use the Python API or curl to list things, but this tool might be helpful for customers who have no programming knowledge, and it also lets them upload or download files directly from a cron job, etc.

Currently implemented functions:

  • list containers
  • create container
  • delete container (empty containers only)
  • container info - gives information about a container, like total size, publish status, CDN URL
  • list files
  • delete files
  • upload files
  • download files

Usage example

root@ssidhu-dv6:/python_cf# python cf.py
Login with credentials from ~/.pycflogin file? [yes/no]yes
-Info- Logging in as sandeepsidhu
Interactive command line interface to cloud files
CF>>list
.CDN_ACCESS_LOGS
cloudservers
images
sand
CF>>list sand
New folder/New Text Document.txt
cloudfuse
cloudfuse-0.1.tar.gz
cloudfuse.tar.gz
cloudfuse_1.0-1_amd64.deb
file1
https_webpage_status.png
CF>>quit
root@ssidhu-dv6:/python_cf#

I have written it in a modular way, so each of those functions can be called individually without running the interactive interpreter. If somebody wants to just grab a list of files in a container, they can call the individual function file directly, which helps if somebody wants to write their own scripts.

root@ssidhu-dv6:/python_cf# python cf_list_containers.py
-Info- Logging in as sandeepsidhu
.CDN_ACCESS_LOGS
cloudservers
images
sand
root@ssidhu-dv6:/python_cf# python cf_list_files.py sand
-Info- Logging in as sandeepsidhu
New folder/New Text Document.txt
cloudfuse
cloudfuse-0.1.tar.gz
cloudfuse.tar.gz
cloudfuse_1.0-1_amd64.deb
file1
https_webpage_status.png
root@ssidhu-dv6:/python_cf#
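
For example, one of these standalone scripts could be wired into cron to dump a container listing every night; the paths and log file below are just placeholders for illustration:

# hypothetical crontab entry - list the 'sand' container at 02:00 every day
0 2 * * * cd /python_cf && python cf_list_files.py sand >> /var/log/cf_listing.log 2>&1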

Some ideas for the next features to add:

  • delete a container along with all its files (non-empty containers)
  • publish/unpublish containers
  • file info - will provide all the info about a file: size, CDN URL, metadata, etc.
  • purge from CDN
  • upload multiple directories, handling of pseudo directories
  • use of the ServiceNet option

If anybody wants to add any of the above functions then please do and send me a pull request so that we can share it with other people.

– Sandeep Sidhu

Rackspace Cloud Files and PHP

This article explains how to use the Rackspace Cloud Files API with PHP using the php-cloudfiles bindings. All this information is available online, spread across Rackspace KB articles, GitHub, and the API docs, but for somebody who is just starting with Cloud Files it's hard to gather it all without wasting a few days here and there. So, the objective of this article is to provide:

  • All the available links to Cloud Files documentation, API, and PHP bindings.
  • PHP Cloud Files binding installation
  • PHP Cloud Files binding coding examples
Cloud Files documentation links: In addition to all of the above, Rackspace also provides different language bindings which make it easy to interact with the API. In this article we will concentrate on using the php-cloudfiles bindings.

PHP Cloud Files bindings on github.com https://github.com/rackspace/php-cloudfiles

PHP Cloud Files binding installation

Download a package from there to your cloud server using the Download link, and then extract it.

[root@web01 ~]# ls -la | grep cloudfiles
-rw-r--r--  1 root   root   496476 Oct  1 16:09 rackspace-php-cloudfiles-v1.7.9-0-gb5e5481.zip
[root@web01 ~]#
[root@web01 ~]# unzip rackspace-php-cloudfiles-v1.7.9-0-gb5e5481.zip
[root@web01 ~]#
[root@web01 ~]# ls -la | grep cloudfiles
drwxr-xr-x  6 root   root     4096 Oct  1 16:36 rackspace-php-cloudfiles-b5e5481
-rw-r--r--  1 root   root   496476 Oct  1 16:09 rackspace-php-cloudfiles-v1.7.9-0-gb5e5481.zip
[root@web01 ~]#

Once extracted, the folder should look something like this:

[root@web01 rackspace-php-cloudfiles-b5e5481]# ls
AUTHORS    cloudfiles_exceptions.php  cloudfiles.php  debian  phpdoc.ini   README  tests
Changelog  cloudfiles_http.php        COPYING         docs    phpunit.xml  share
[root@web01 rackspace-php-cloudfiles-b5e5481]#

Now we need to copy the cloudfiles binding files to a place where they can be included in your PHP files, meaning the source code files of the binding should be in the PHP include path.

As per the README file, the following are the requirements for using the binding source files:

Requirements
;; ------------------------------------------------------------------------
;;   [mandatory] PHP version 5.x (developed against 5.2.0)
;;   [mandatory] PHP's cURL module
;;   [mandatory] PHP enabled with mbstring (multi-byte string) support
;;   [suggested] PEAR FileInfo module (for Content-Type detection)
;;

You can check all these with the following commands:


[root@web01 ~]# php -v
PHP 5.1.6 (cli) (built: Nov 29 2010 16:47:46)
Copyright (c) 1997-2006 The PHP Group
Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
[root@web01 ~]#

Let’s check if curl is installed:

[root@web01 ~]# php -m | grep curl
curl
[root@web01 ~]#

Now let's check mbstring:

[root@web01 ~]# php -m | grep mbstring
[root@web01 ~]#

It looks like we don’t have mbstring installed, so let’s install it as well.

[root@web01 ~]# yum install php-mbstring

Similarly for FileInfo:

[root@web01 ~]# php -m | grep file
[root@web01 ~]#
[root@web01 ~]# yum install php-pecl-Fileinfo
[root@web01 ~]# php -m | grep fileinfo
fileinfo
[root@web01 ~]#
[root@web01 ~]# cd rackspace-php-cloudfiles-b5e5481/
[root@web01 rackspace-php-cloudfiles-b5e5481]#

Remember, you might have to change the folder name to reflect your own copy of the bindings.

Now that we are in the source directory of php-cloudfiles, let's copy the required files to the right place. Run the following commands to copy them to /usr/share/php, which is part of the PHP include path.

[root@web01 rackspace-php-cloudfiles-b5e5481]# mkdir /usr/share/php
[root@web01 rackspace-php-cloudfiles-b5e5481]# cp cloudfiles* /usr/share/php/

After copying the php-cloudfiles binding files into the PHP include path, let's check to make sure everything is set up correctly. We will create a simple PHP file and include the php-cloudfiles binding in it.

[root@web01 rackspace-php-cloudfiles-b5e5481]# cd /var/www/html/
[root@web01 html]# touch cfcheck.php

Open your favorite text editor, e.g. vi, and type the following into the cfcheck.php file:

<?php
    require('cloudfiles.php'); 
?>

Save the file cfcheck.php and run this command from the prompt:

[root@web01 html]# php cfcheck.php
[root@web01 html]#

Check the exit status of the last command. It should be 0.

[root@web01 html]# echo $?
0
[root@web01 html]#

If you are returned to the prompt with no errors, the PHP bindings are installed correctly.

If you get an error like this:

PHP Fatal error:  require(): Failed opening required 'cloudfiles.php'
(include_path='.;C:\php5\pear') in cfcheck.php on line 1

Then you do not have the files located in the right place. The error prints out the include_path, so make sure you have the files located in one of those directories. Alternatively, you can change the include path in php.ini, but that is beyond the scope of this tutorial.
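
A quick way to see where PHP is actually looking is to print the include path from the command line and confirm that the directory you copied the files into (here /usr/share/php) is listed:

[root@web01 html]# php -r 'echo get_include_path() . "\n";'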

PHP Cloud Files binding coding examples

Okay.. I hope by now you have the bindings installed. If yes, then let's move on to actually trying some PHP Cloud Files code. If you are having problems installing the bindings, you should troubleshoot the above first before moving on.

For easy access to the php-cloudfiles documentation, run the following commands:

[root@web01 html]# cd rackspace-php-cloudfiles-b5e5481/
[root@web01 rackspace-php-cloudfiles-b5e5481]# cp -R docs/ /var/www/html/
[root@web01 rackspace-php-cloudfiles-b5e5481]# service httpd start

After this you can access the documentation at http://<your server IP>/docs.

If you are a developer, you can easily work from the documentation directly, but let me put some examples here for non-developers and newbies.

Basically, whenever you put

require('cloudfiles.php');

line in any of your PHP files, the following classes become available to your PHP code.

Classes:

  • CF_Authentication: Class for handling Cloud Files authentication; call its authenticate() method to obtain authorized service URLs and an authentication token.
  • CF_Connection: Class for establishing connections to the Cloud Files storage system.
  • CF_Container: Container operations
  • CF_Object: Object operations

Each class comes with its own methods and properties. Methods can be called on objects of those classes, and their properties can be set and queried. For example, the CF_Authentication class provides methods for authenticating against Cloud Files using the username and API key of your Cloud Files account. The CF_Connection class provides a connection object, which has methods like create_container and delete_container. Let's look at the following code example.

Continuing on our previous example of cfcheck.php file, let’s add some more code into it.

[root@web01 html]# vi cfcheck.php

<?php         
       require('cloudfiles.php');
       $username='your cloud user name';
       $api_key="your cloud api key";
       $auth = new CF_Authentication($username, $api_key);
       $auth->authenticate();

        if ( $auth->authenticated() )
                echo "CF Authentication successful \n";
        else
                echo "Authentication faile \n";
?>
[root@web01 html]# php cfcheck.php
CF Authentication successful
[root@web01 html]#

If the above command runs without producing any errors, it means you have successfully authenticated against Cloud Files using the above PHP code. In the code, we first created a CF_Authentication object named $auth by passing it the username and API key as variables. Then we ran the authenticate() method of the CF_Authentication class, which actually authenticates against Cloud Files. Similarly, the authenticated() method returns a boolean value (true or false) depending on the state of the auth object. I cannot be more explicit than this; if you are having problems understanding it, then you should probably not be doing this yourself and should get a developer. But if you are still following so far and it's working fine, let's move forward and add some more lines to our code.

<?php         
       require('cloudfiles.php');
       $username='your cloud user name';
       $api_key="your cloud API key";
       $auth = new CF_Authentication($username, $api_key);
       $auth->authenticate();

       if ( $auth->authenticated() )
                echo "CF Authentication successful \n";
       else
                echo "Authentication faile \n";
       $conn = new CF_Connection($auth);
       $container_list = $conn->list_containers();
       print_r($container_list);

?>
[root@web01 html]# php cfcheck.php
CF Authentication successful
Array
(
    [0] => .CDN_ACCESS_LOGS
    [1] => cloudservers
    [2] => images
    [3] => sand
)
[root@web01 html]#

The output will be different depending on the containers you have in your Cloud Files account and their names; the above are the containers in my account. It might not list any containers if you don't have any yet, but that's okay. As you can see, we added the following lines to the code:


        $conn = new CF_Connection($auth);
        $container_list = $conn->list_containers();
        print_r($container_list);

We created a new CF_Connection object named $conn, and using this object we got a list of all the containers with the list_containers() method. list_containers() returns an array containing the names of all the containers in your Cloud Files storage; in this example I have used the print_r function to print the array in a human-readable format.

Well, I could add more code for creating a container, deleting a container, getting details of a container like its public URI, ServiceNet URI, etc., but that would be going too far and seems redundant, as similar examples are self-explanatory in the API documentation.

But I do hope this article gives you a head start and helps you find the correct documentation and resources for your Cloud Files API and PHP coding. Do let me know how you liked it, and if there were any errors or typos I would like to hear back and improve it.

– Sandeep Sidhu

Recovering deleted files from Linux ext3 filesystem on Cloud Servers

Every now and then you hit Enter on your keyboard and the next second you realize your mistake: you just deleted some files by accident. Immediately your mind starts thinking about your backups, the changes you have made since the last available backup, and all that..

Well, today I faced the challenge of recovering some accidentally deleted files from one of my cloud servers. As we all know, data never actually gets deleted from the hard disk; it gets unlinked from the file system table, and those blocks eventually get overwritten by other data. So, as soon as I realized that I had accidentally deleted the files, I booted the Cloud Server into rescue mode. I did this because the original disk of the cloud server is then not mounted read-write and is not in use. Then I downloaded my good old friend TestDisk and compiled it, but bummer, it doesn't recognize the HDD in cloud servers. Don't ask me why, but it doesn't; maybe it doesn't have drivers for handling virtual HDDs. So, what next?

With some googling I found this link: http://carlo17.home.xs4all.nl/howto/undelete_ext3.html. I really recommend that you give it a good read; the developer explains very well how the program actually works. The developer of the ext3grep program wrote it to recover his own deleted files, and in his own words, "don't expect it to work out-of-the-box".

Anyway, back to the topic. I downloaded the code from http://code.google.com/p/ext3grep/. There were a few other obstacles in compiling it, using it, and finally recovering the deleted files, so I'll explain the steps and the additional packages you will need to compile ext3grep and run it successfully.

Download the latest ext3grep code:

# wget http://ext3grep.googlecode.com/files/ext3grep-0.10.2.tar.gz

Extract the contents of the tar.gz file

# tar -xzf ext3grep-0.10.2.tar.gz

Install the following dependency packages; they are needed to compile the ext3grep code.

# yum install gcc cpp
# yum install gcc-c++ libstdc++
# yum install e2fsprogs-devel

Now go into the extracted ext3grep directory and run the following commands:

# cd ext3grep-0.10.2
# ./configure

The ./configure command should finish without any errors, with the following lines at the bottom of the output:

[...]
configure: creating ./config.status
config.status: creating src/timestamp-sys.h
config.status: sys.h is unchanged
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating config.h
config.status: config.h is unchanged
config.status: executing depfiles commands

If the ./configure command runs without any errors, that means all the required dependencies are met. But if it does show some problem, just look at the error and improvise: check what it's looking for and install those dependencies.

Next, please run the following commands

# make
# make install

This will compile and install the ext3grep program, which you can use to recover the files.

Now, let's talk about a few things you need to know before starting to use the program. I have read through the long article and will try to spell out a few main points about the program.

The program assumes a spike of recently deleted files (shortly before the last unmount). It does NOT deal with a corrupted file system, only with accidentally deleted files.

Also, the program is beta: the development of the program was done as a hack, solely aimed at recovering the developer's own data. He fixed a few of the bugs that didn't show up in his case, but overall the program isn't as robust as it could be. Therefore, it is likely that things might not work entirely out-of-the-box for you. The program is stuffed with asserts, which makes it likely that if something doesn't work for you the program will abort instead of trying to gracefully recover. In that case you will have to dig deeper, and finish the program yourself, so to say.

The program only needs read access to the file system with the deleted files: it does not attempt to recover the files in place. Instead, it allows you to make a copy of deleted files, writing them to a newly created directory tree in the current directory (which obviously should be on a different file system). All paths are relative to the root of the partition. Thus, if you are analysing a partition /dev/md5 which was mounted under /home, then /home is unknown to the partition and to the program and therefore not part of the path(s). Instead, a path will be something like "carlo/c++/foo.cc", without a leading slash. The partition root (/home in the example) is the empty string, not '/'.

Just one more thing before you attempt to recover your files: the ext3grep program expects to find a lost+found directory in the file system, and since our /dev/sda1 is not mounted in rescue mode, there is no lost+found directory. The workaround is to mount /dev/sda1, create a new directory inside it named lost+found, and unmount it again.
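
A sketch of that workaround from inside the rescue environment (the temporary mount point is arbitrary; /mnt is used here):

# mount /dev/sda1 /mnt
# mkdir /mnt/lost+found
# umount /mnt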

Now, for recovering files: ext3grep provides many command line options. It allows you to look for the superblock and read the superblock information. It can show you which inode resides in which block group. It can even print the whole contents of a single inode, like all the blocks linked to that inode and other data about the inode. You can read more in the original ext3grep article mentioned at the start of this article.

Enough about theory, let’s try some commands.

Running ext3grep:

# IMAGE=/dev/sda1
# ext3grep $IMAGE --superblock | grep 'size:'
Block size: 4096
Fragment size: 4096

Using the options --ls --inode $N, ext3grep lists the contents of each directory block of inode $N. For example, to list the root directory of a partition:

# ext3grep $IMAGE --ls --inode 2

It is also possible to apply filters to the output of --ls. An overview of the available filters is given in the output of the --help option:

# ext3grep $IMAGE --help
[...]
Filters:
  --group grp            Only process group 'grp'.
  --directory            Only process directory inodes.
  --after dtime          Only entries deleted on or after 'dtime'.
  --before dtime         Only entries deleted before 'dtime'.
  --deleted              Only show/process deleted entries.
  --allocated            Only show/process allocated inodes/blocks.
  --unallocated          Only show/process unallocated inodes/blocks.
  --reallocated          Do not suppress entries with reallocated inodes.
                         Inodes are considered 'reallocated' if the entry
                         is deleted but the inode is allocated, but also when
                         the file type in the dir entry and the inode are
                         different.
  --zeroed-inodes        Do not suppress entries with zeroed inodes. Linked
                         entries are always shown, regardless of this option.
  --depth depth          Process directories recursively up till a depth
                         of 'depth'.
[...]
$ ext3grep $IMAGE --restore-file [path/of/the/deleted/file]

The above command will create a RESTORED_FILES directory in your current directory and store the recovered file(s) there.
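
If I remember correctly, ext3grep also has a --restore-all mode, which tries to restore everything it can find into the same RESTORED_FILES directory; it is handy when you are not sure of the exact path of what you lost, at the cost of pulling back a lot more data:

$ ext3grep $IMAGE --restore-all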

Well, there are a lot of other commands and options for manually recovering some small but important files, but I have to cut this article short, as putting all of that here would just duplicate things. The above instructions should be good enough for recovering one or two directories or some important files. Do let me know if I have missed some important bits and I'll update it.

– Sandeep Sidhu

Cloud Servers - High Availability with heartbeat on CentOS 5.5

This article is about setting up high availability on CentOS using the heartbeat software. We will have two web servers (named web01 and web02 in this article), and both of these servers will share an IP (virtual IP) between them; this virtual IP will be active on only one web server at any time. So, it is an active/passive high-availability setup, where one web server is active (has the virtual IP) and the other host is in passive mode (waiting for the first server to fail). All your web requests are directed to the virtual IP address through DNS configuration. Both servers have the heartbeat package installed and configured on them; this heartbeat service is used by both servers to check whether the other box is alive or has failed. So, let's get on with it. I'm going to use Rackspace cloud servers to configure it.

Creating the cloud servers and shared IP:

Log in to your Rackspace Cloud Control Panel at https://manage.rackspacecloud.com and create two CentOS 5.5 Cloud Servers. Choose the configuration which suits your resource requirements and give them descriptive names so that you can easily identify them, e.g. web01 and web02. Once you have your two Cloud Servers created, you will have to create a support ticket to get a shared IP for your cloud servers, as mentioned on this link: cloudservers.rackspacecloud.com/index.php/Frequently_Asked_Questions#Can_I_buy_extra_IPs.3F

Installing heartbeat software:

Note: All the following commands need to be run on both cloud servers (web01 and web02).

You will have to install the heartbeat packages to set up heartbeat between both cloud servers for monitoring.

[root@ha01 /]# yum update
[root@ha01 /]# yum install heartbeat-pils heartbeat-stonith  heartbeat

Once all the above packages are installed, you can confirm them by running the following command:

[root@ha01 /]# rpm -qa | grep heartbeat
heartbeat-pils-2.1.3-3.el5.centos
heartbeat-stonith-2.1.3-3.el5.centos
heartbeat-2.1.3-3.el5.centos
[root@ha01 /]# 

Configuring heartbeat:

First, we need to copy the sample configuration files from the /usr/share/doc/heartbeat-2.1.3 directory to the /etc/ha.d directory:

[root@ha01 ha.d]# cd /usr/share/doc/heartbeat-2.1.3/
[root@ha01 heartbeat-2.1.3]# cp ha.cf authkeys haresources /etc/ha.d
[root@ha01 heartbeat-2.1.3]# cd /etc/ha.d/
[root@ha01 ha.d]# ls
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs
[root@ha01 ha.d]# 

Next, we need to populate the authkeys file with an MD5 key. You can generate the key with the following command:

[root@ha01 ha.d]#  dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl md5
ea6cdc1133c424e432aed155dd48a49d

Now we need to enter the key into the authkeys file, so it looks like the following:

[root@ha01 ha.d]# cat authkeys 
auth 1
1 md5 a77030a32d0cc2b6cac31f9cddfe4b09
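
Heartbeat also refuses to start if the authkeys file is readable by anyone other than root, so tighten its permissions as well:

[root@ha01 ha.d]# chmod 600 /etc/ha.d/authkeys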

Next, we need to configure ha.cf and add/update the following parameters in it with appropriate values. You will need to change the node entries to match your own cloud server hostnames.

On web01:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
udpport 694
bcast eth1
ucast eth1 <private IP address of web02>
auto_failback on
node web01
node web02

On web02:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
udpport 694
bcast eth1
ucast eth1 <private IP address of web01>
auto_failback on
node web01
node web02

Next, we need to configure haresources and add resources to it. The haresources file contains a list of resources that move from machine to machine as nodes go down and come up in the cluster. Do not include any fixed IP addresses in this file.

Note: The haresources file MUST BE IDENTICAL on all nodes of the cluster.

The node name listed in front of the resource group information is the name of the preferred node to run the service. It is not necessarily the name of the current machine. In the example below, I have chosen web01 as the preferred node to run the httpd service, but if web01 is not available then the httpd service will be started on web02. Whether the service moves back to web01 again once it becomes available is controlled by the auto_failback on setting in the ha.cf file.

So, add the following line into the haresources file on both servers:

web01 <shared IP address>/24/eth0 httpd

Starting the heartbeat service:

Now, let's start the heartbeat service on both nodes using the following commands:

[root@web01 /]# chkconfig heartbeat on
[root@web01 /]# service heartbeat start
Starting High-Availability services: 
2011/04/27_08:16:04 INFO:  Resource is stopped
                                                           [  OK  ]

Now if you check the httpd service status, it should be running, and your shared IP address should be up on the web01 node.

[root@web01 ~]# service httpd status
httpd (pid  23938) is running...

The ifconfig -a command will show you all the configured IP addresses. By running ifconfig -a on web01 you can confirm that it has the virtual IP address up and accessible.

Testing the failover using the heartbeat service

Let's test the high availability. Shut down the web01 node using the halt command. The virtual IP address and httpd service should automatically fail over to web02.
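
A minimal test run might look like this (the first command on web01, the rest on web02 once the 10 second deadtime has passed); the shared IP should now show up in the ifconfig output on web02:

[root@web01 ~]# halt
[root@web02 ~]# service httpd status
[root@web02 ~]# ifconfig -a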

MySQL master-master replication on Debian 5 (Lenny)

This article is about setting up MySQL master-master replication between two cloud servers. The operating system I'm going to use is the Debian 5 (Lenny) Rackspace Cloud base image.

Setup outline: We will have two cloud servers, named debian501 and debian502 for this exercise. Both servers have two IP addresses (one public, one private). We will configure the replication to run over the private IP interface so that we don't incur any bandwidth charges.

Creating the cloud servers: You need to create two Linux cloud servers using the Debian 5 image. You can create them by logging into your cloud control panel and spinning up two cloud servers. Choose the RAM configuration depending on the requirements of your database. Name the servers so that you can easily identify them; for my exercise I have named them debian501 and debian502. All the commands below I'm running as the root user.

Installing MySQL

We need to install MySQL on both Debian cloud servers. Before installing MySQL we need to run a few extra commands needed on any freshly installed Debian.

Update the package database:
#aptitude update
Install locales:
#aptitude install locales
#dpkg-reconfigure locales

The dpkg-reconfigure locales command will bring up a locale settings window where you can choose the locales for your system depending on your country and region. In my case I chose "en_GB.UTF-8".

Now, you can run the following command to get MySQL installed:

#aptitude install mysql-server mysql-client libmysqlclient15-dev

Enabling the replication

After this we need to make configuration changes on each of the servers to enable replication.

debian501 server

We need to create the database which we will set up for replication, and we also need to create a replication username and password. Run the following commands to set it up, changing all the values as per your needs.

Log in to your MySQL with the password you set up during the MySQL installation:

#mysql -u root -p
mysql>
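
For example, the replication user mentioned above could be created like this on each server (the username, password, and host restriction are placeholders; tighten the host to your private network if you prefer):

mysql> grant replication slave on *.* to 'replication_user'@'%' identified by 'replication_password';
mysql> flush privileges;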

Open the file /etc/mysql/my.cnf and create/update the following entries:

bind-address = 0.0.0.0
server-id = 1
log-bin = /usr/local/mysql/var/bin.log
log-slave-updates
log-bin-index = /usr/local/mysql/var/log-bin.index
log-error = /usr/local/mysql/var/error.log
relay-log = /usr/local/mysql/var/relay.log
relay-log-info-file = /usr/local/mysql/var/relay-log.info
relay-log-index = /usr/local/mysql/var/relay-log.index
auto_increment_increment = 10
auto_increment_offset = 1
master-host = [private IP address of debian502]
master-user = [replication username]
master-password = [replication password]
replicate-do-db = [database name to be replicated]

debian502 server

Open the file /etc/mysql/my.cnf and create/update the following entries:

bind-address = 0.0.0.0
server-id = 2
log-bin = /usr/local/mysql/var/bin.log
log-slave-updates
log-bin-index = /usr/local/mysql/var/log-bin.index
log-error = /usr/local/mysql/var/error.log
relay-log = /usr/local/mysql/var/relay.log
relay-log-info-file = /usr/local/mysql/var/relay-log.info
relay-log-index = /usr/local/mysql/var/relay-log.index
auto_increment_increment = 10
auto_increment_offset = 2
master-host = [ private IP address of debian501 ]
master-user = [ replication username ]
master-password = [ replication password ]
replicate-do-db = [ database name to be replicated ]

Now, restart MySQL on both servers. If the service restart fails on either server, check the /var/log/mysql/error.log file for errors and fix the configuration, checking for any typos, etc.
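
On Debian the MySQL init script is called mysql, so the restart on each node should simply be:

#/etc/init.d/mysql restart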

Testing the scenarios

For the purpose of testing our replication setup, we can create the database specified in the configuration section above (replicate-do-db), as well as a test table, on one of the nodes and watch the log files in the /var/log/mysql directory. Any and all changes to that database should be replicated to the other server immediately.

mysql> create database [your-db-name];
mysql> use [your-db-name]
mysql> create table foo (id int not null, username varchar(30) not null);
mysql> insert into foo values (1, 'bar');

An additional test is to stop the MySQL service on debian502, make database changes on debian501, and then start the service on debian502 again. The debian502 MySQL service should sync up all the new changes automatically.
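
You can also check the replication state directly on either node; both the IO and SQL threads should report Yes in the output:

mysql> show slave status\G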

You should also consider changing the default binary log rotation values (expire_logs_days and max_binlog_size) in the /etc/mysql/my.cnf file, as by default all binary logs are kept for 10 days. If you have a high transaction count in your database and application, this can cause significant hard disk usage for logs. So, I would recommend changing those values as per your server backup policies. For example, if you have daily backups set up for your MySQL nodes, then it makes no sense to keep 10 days' worth of binary logs.

– Sandeep Sidhu

Installing XenServer using USB - mboot.c32: not a COM32R image

Today I was installing XenServer on one of my boxes using USB as the installation media. As usual, I downloaded the XenServer ISO file and used the latest version of unetbootin (unetbootin-linux-549) to create a bootable USB. But it wouldn't boot as expected, and threw me an error saying "kernel image not found".

As it turns out, some extra steps need to be done for booting the XenServer installation from USB. I followed this article here and made the following changes on the USB:

  • Copied "client_install" and "packages.main" to the USB
  • Renamed the syslinux.cfg file to syslinux_cfg.old
  • Renamed the boot/isolinux directory to boot/syslinux
  • Renamed the boot/syslinux/isolinux.cfg file to syslinux.cfg

As per the article, at this point you're done: stick the USB stick into the server where you'd like to install XenServer.

But now I got the following error:
 mboot.c32: not a COM32R image
After lots of debugging and googling, I figured out that unetbootin was using a buggy version of syslinux and it was messing things up for me. So, I thought: let's replace mboot.c32 with an older version and see if it works.

I mounted my USB disk, replaced the mboot.c32 inside it, and ran the syslinux command on it:

cp -r /usr/lib/syslinux/mboot.c32 /media/disk/boot/syslinux/
 syslinux /dev/sdb1
It worked for me afterwards!

– Sandeep Sidhu

Mounting Rackspace Cloud Files using cloudfuse on Ubuntu 10.10 v2

This article shows how to mount Cloud Files using the cloudfuse software on Ubuntu 10.10 as a directory, so you can access your Cloud Files containers and data inside your Linux server just like any other folders and files. One heads-up: this gives you easy access to your Cloud Files data, but in no way does it mean you can run a database or application directly from it; that would be darn slow. So, why would one want it then? Well, there are plenty of uses for having system-level access to your Cloud Files. For instance, if you have scripts which create your MySQL or website backups, those scripts can write the backups directly to Cloud Files without you needing to copy them yourself. So, let's get on with it..

Note: The following commands were tried and tested on Ubuntu 10.10, but they should be easily applicable to other versions of Ubuntu or Debian. As long as you install the required packages, you should be able to compile the cloudfuse code and use it.

Installing cloudfuse: First download the cloudfuse code, then extract it and compile.

ssidhu@ssidhu:~$ tar -xzvf cloudfuse-0.1.tar.gz

Once you have extracted the .tar.gz file, you should have the following files under the cloudfuse-0.1 directory:

root@ubuntu-test:~/cloudfuse-0.1# ls -la
total 280
drwxr-xr-x 3 root root   4096 Feb 21 21:47 .
drwx------ 4 root root   4096 Feb 21 21:47 ..
drwxr-xr-x 8 root root   4096 Feb 21 21:47 .git
-rw-r--r-- 1 root root   1059 Feb 21 21:47 LICENSE
-rw-r--r-- 1 root root   1024 Feb 21 21:47 Makefile.in
-rw-r--r-- 1 root root   2332 Feb 21 21:47 README
-rw-r--r-- 1 root root  12014 Feb 21 21:47 cloudfsapi.c
-rw-r--r-- 1 root root   1043 Feb 21 21:47 cloudfsapi.h
-rw-r--r-- 1 root root  11240 Feb 21 21:47 cloudfuse.c
-rw-r--r-- 1 root root   4335 Feb 21 21:47 config.h.in
-rwxr-xr-x 1 root root 198521 Feb 21 21:47 configure
-rw-r--r-- 1 root root   1324 Feb 21 21:47 configure.in
-rwxr-xr-x 1 root root  13184 Feb 21 21:47 install-sh
root@ubuntu-test:~/cloudfuse-0.1#

Now it's time to compile and install it. You'll need libcurl, libfuse, and libxml2 and their dev packages installed to build it.

Cloudfuse is built and installed like any other autoconf-configured code. Normally:

./configure
make
sudo make install

But first you need to install the required packages, otherwise the ./configure command will fail and throw errors:

apt-get update
apt-get install gcc
apt-get install libcurl4-openssl-dev
apt-get install libxml2 libxml2-dev
apt-get install libfuse-dev

Now run the following commands in the cloudfuse directory:

root@ubuntu-test:~/cloudfuse-0.1# ./configure
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for a BSD-compatible install... /usr/bin/install -c
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for XML... yes
checking for CURL... yes
checking for FUSE... yes
.........
............
configure: creating ./config.status
config.status: creating Makefile
config.status: creating config.h
root@ubuntu-test:~/cloudfuse-0.1# make
gcc -g -O2 -I/usr/include/libxml2     -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse   -o cloudfuse cloudfsapi.c cloudfuse.c -lxml2   -lcurl   -pthread -lfuse -lrt -ldl
root@ubuntu-test:~/cloudfuse-0.1# make install
/usr/bin/install -c cloudfuse /usr/local/bin/cloudfuse
root@ubuntu-test:~/cloudfuse-0.1#

If everything went fine, you should have cloudfuse installed properly. Confirm this by running the which command; it should show the location of the cloudfuse binary.

root@ubuntu-test:~/cloudfuse-0.1# which cloudfuse
/usr/local/bin/cloudfuse
root@ubuntu-test:~/cloudfuse-0.1#

Mounting Cloud Files: Let's now use cloudfuse and mount our Cloud Files.

You'll have to create a configuration file for cloudfuse in your home directory and put your Rackspace Cloud Files username and API key in it, like below:

$HOME/.cloudfuse
    username=[username]
    api_key=[api key]
    authurl=[auth URL]

Auth URLs:
US Cloud Files accounts: https://auth.api.rackspacecloud.com/v1.0
UK Cloud Files accounts: https://lon.auth.api.rackspacecloud.com/v1.0

The following entries are optional; you can also define these values in the .cloudfuse file:

     use_snet=[True to use snet for connections]
     cache_timeout=[seconds for directory caching, default 600]

After creating the above configuration file, you run the cloudfuse command as follows. The syntax is as simple as:

cloudfuse [mount point]

So, you should be able to mount your Cloud Files like this:

root@ubuntu-test:/# mkdir cloudfiles
root@ubuntu-test:/# cloudfuse /cloudfiles

If you run ls -la inside the /cloudfiles directory, you should see your Cloud Files containers.

If you are not the root user of the system, your username will need to be part of the "fuse" group. This can probably be accomplished with:

sudo usermod -a -G fuse [username]

If you are unable to see any containers inside the mount point, then probably some of the above steps didn't complete properly. Go back and make sure all of the above steps completed correctly.
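
As a quick illustration of the backup use case mentioned at the start, once the mount is working a script can copy files straight into a container directory; the container and database names below are made up, and the container is assumed to already exist:

# dump a database and drop the dump into an existing container called 'backups'
mysqldump -u root -p mydatabase > /tmp/mydatabase.sql
cp /tmp/mydatabase.sql /cloudfiles/backups/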

UPDATE: 30/9/2011

Here is some extra info for CentOS on how to mount cloudfuse as another user, i.e. apache:

$ yum install fuse
$ usermod -a -G fuse apache
$ mkdir /mnt/cloudfiles
$ chown apache:apache /mnt/cloudfiles
$ sudo -u apache cloudfuse /mnt/cloudfiles -o username=myuser,api_key=mykey,use_snet=true,authurl="https://lon.auth.api.rackspacecloud.com/v1.0"

Play around with it and adjust it how you like, but I think it will be useful. Courtesy of my friend Anh.

Let me know if there are any errors in these instructions or if you faced some difficulty understanding them, and I will update them accordingly. Any comments are highly appreciated.

Note: Please don't bug Rackspace Support for help with this; cloudfuse isn't supported by them, hence this article. :)

Good luck!

– Sandeep Sidhu

Installing Apache, MySQL, and PHP on Fedora 14 on Rackspace cloud servers.

Following are the steps to install Apache, MySQL, and PHP on Fedora 14 on Rackspace cloud servers:

SSH into your cloud server. If you use Windows, you can use PuTTY to log in to your cloud server; you can download PuTTY from here. You can log in as root using the password sent to you via email when you first created your cloud server.

After logging into your cloud server run the following commands:

#yum update
Once it has finished updating run the following:
#yum install httpd mysql mysql-server php php-devel php-mysql
Once this has finished installing run the following:
 #/etc/init.d/mysqld start
 #/usr/bin/mysql_secure_installation
You will be asked for the current password; just press Enter. Next, set a root password (make it a secure one), remove anonymous users, disallow remote root login unless you want to manage your MySQL remotely, and remove the test database. Finally, reload the privilege tables.

To make sure MySQL always starts on boot, run the following:

 #chkconfig mysqld on
We also want to do the same for Apache, so run the following:
 #/etc/init.d/httpd start
 #chkconfig --levels 235 httpd on
This will start the web server. However, if you go to the IP address of the server in your browser it will say there is a problem loading the page. What is happening is that the Rackspace images start with a very restrictive firewall, and we need to allow some traffic through. To do this, run the following commands:
 #iptables -I INPUT 1 -p tcp --dport http -j ACCEPT
 #iptables -I INPUT 1 -p tcp --dport mysql -j ACCEPT
 #service iptables save
 #service iptables restart
Once this is done you will be able to see the Apache welcome page at your server's IP address.
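
If the page still doesn't come up, it's worth double-checking that the rules were actually saved:

 #iptables -L INPUT -n --line-numbers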

– Sandeep Sidhu

Rackspace cloud servers - linux rescue mode

This post is about rescue mode on Rackspace cloud servers, particularly for Linux systems. Normally, we can find everything in the Rackspace cloud servers knowledge base, but it seems there is no article available on the Linux rescue environment there, and for somebody who hasn't worked on Linux much, the rescue environment might seem daunting. So, I decided to write one here.

When do I need the rescue environment?

First of all, you only need the rescue environment when your system has become non-bootable, meaning something is terribly wrong with the server: it could be file system corruption, boot file corruption, or configuration errors. There are lots of cases where a Linux system can become non-bootable. Most of the time, if a Linux system encounters a problem during boot, it will drop you into a maintenance mode environment where you can log in with your root password and check for errors. The problems with maintenance mode are: a) your system is read-only; b) most services are not running, like SSH; c) you can't copy your data over the network; d) you have to work on the console, which is slower than an SSH login.

So, in such cases you can always bring your server up in the rescue environment, debug the issues over an SSH login from your desktop, and copy any files off the server for recovery purposes as well.

What is Rescue mode?

Rescue mode grants you full access to your non-bootable server's filesystem. You can use it to fix problems in configuration files or to use scp to copy data from the server to a remote location.

For somebody familiar with Linux, rescue mode is similar to booting into single-user mode with networking enabled.

Getting your server into Rescue mode.

You have to log in to your cloud servers account at https://manage.rackspacecloud.com. Click on Hosting -> Cloud Servers, then click on the server for which you want to enable rescue mode. Once you click on your server, you will see a screen similar to the image below, where you can click the Rescue button.

As you can see, Rackspace has done well by putting everything in the above message. It pretty much explains the whole rescue process there, but as always people are too quick to hit Enter and never read it. So, I have put the screenshot here so that you will give it a read while you are at it. :)

Notice that the rescue environment is limited to 90 minutes only, so you only have an hour and a half to fix your server or copy the data. Of course, you can go back into rescue mode as many times as needed. I guess this time limit is there only to prevent misuse of the rescue environment.

Once the rescue mode build process is complete, your screen should look similar to the above, and your system is in rescue mode. You will receive an e-mail from Rackspace cloud support with a new password to log in to rescue mode.

Once you have received the new password, you can SSH to your public IP and use the new password to log in to the rescue environment.

As you can see, the fdisk -l command in the rescue environment shows three partitions:

  • /dev/sdb1 - the rescue disk
  • /dev/sda1 - our server's disk, 10.2 GB here (my current server's disk size; it could be different if your server has a larger disk)
  • /dev/sda2 - the swap partition, as you can see from its size of 536 MB

Now, what we need to do is mount /dev/sda1 to a directory in the rescue environment. Once /dev/sda1 is mounted to, for example, /mnt, you can access all your files under /mnt.
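
For example, using /mnt as the mount point:

# mount /dev/sda1 /mnt
# ls /mnt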

Likewise, if you made some wrong entries in your /etc/fstab and your system is not booting because of them, in the rescue environment you can edit your fstab at /mnt/etc/fstab and make the required corrections.

That's it! Once you are done editing/fixing, you can exit rescue mode by clicking the "Exit Rescue Mode" link on your server details page.

Please post a comment if this helped you in any way, and also if you see any issues or anything that needs to be changed.

– Sandeep Sidhu

Extract pages from a PDF file in Ubuntu 10.10

Yesterday I got an odd task at hand. One of the senior members of my team, and a really amazing person I must say, e-mailed me a few PDFs of Linux Journal from past months and asked if I could extract the troubleshooting articles from them and compile them into one single PDF, which we can keep for future reference. Plus, this was needed as he had promised the other team to do this in return for the PDFs he gets from their subscription. :)

So, then I began my search for tools and ways to do this in Ubuntu 10.10. Yes, for the past 3-4 weeks Ubuntu has been my main operating system, unlike earlier, when I always kept one Windows machine with me. This time I thought let's move completely to Linux without taking any help whatsoever from Windows, and let me tell you it's been going really great, even better than Windows. But more on this later.

A quick Google search revealed that there are a number of ways to extract a range of pages from PDF files. The main article, which explained three methods and a handy little script, was on Linux Journal, and it prompted me to look at all three methods in detail.

There are PDF-related toolkits for extracting pages from a PDF, you can use Ghostscript directly from the command line, and there are graphical applications as well. So I decided to put them all together here.

First: Use of poppler-utils and psutils. One can extract a range of pages from a larger PDF file using these tools. For example, if you want to extract pages 18-22 of the PDF file one_big_file.pdf, you could use the following command:

$ pdftops one_big_file.pdf - | psselect -p18-22 | ps2pdf - new_file_name.pdf
The pdftops command converts the PDF file to PostScript, the psselect command selects the relevant pages from the PostScript, and the ps2pdf command converts the selected PostScript into a new PDF file.

Second: Using the pdftk toolkit. For example, to extract pages 18-22 from a big PDF file.

Splitting pages from one big file:

 $ pdftk A=one_big_file.pdf cat A18-22 output new_file_name.pdf
Joining pages into one big file:
$ pdftk file1.pdf file2.pdf cat output single_big_file.pdf
For more options, like attaching files, filling forms, etc., check this link.

Third: Using Ghostscript. Use of Ghostscript, which unlike pdftk is installed nearly everywhere (and which you have been using in the first method anyway, via ps2pdf), goes like the following:


 $ gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dSAFER \
    -dFirstPage=18 -dLastPage=22 \
    -sOutputFile=new_file_name.pdf one_big_file.pdf
Merging files with Ghostscript

 $ gs -q -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dSAFER \
    -sOutputFile=one_big_file.pdf file1.pdf file2.pdf file3.pdf
When using Ghostscript to combine PDF files, you can add any PDF-related option to the command line. For example, you can compress the file, target it to an eBook reader, or encrypt it. See the Ghostscript documentation for more information.

Conclusion: Regarding the speed and efficiency of the processing, and more importantly the quality of the output file, the first method above is for sure the worst of the three. The conversion of the original PDF to PostScript and back to PDF (known as "refrying") is very unlikely to completely preserve advanced PDF features (such as transparency information, font hinting, overprinting information, color profiles, trapping instructions, etc.).

The third method uses Ghostscript only (which the first one uses anyway, because ps2pdf is nothing more than a wrapper script around a more or less complicated Ghostscript command line). The third method also preserves all the important PDF objects on your pages as they are, without any "roundtrip" conversions.

A little extra: The only drawback of the third method is that it's a longer and more complicated command line to type. But you can overcome that drawback if you save it as a bash function. Just put these lines in your ~/.bashrc file:

 function pdf-extract()
 {
 # this function uses 3 arguments:
 # $1 is the first page of the range to extract
 # $2 is the last page of the range to extract
 # $3 is the input file
 # the output file will be named "inputfile_pageXX_to_pageYY.pdf"
 gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dSAFER \
    -dFirstPage=${1} \
    -dLastPage=${2} \
    -sOutputFile=${3%.pdf}_page${1}_to_page${2}.pdf \
    ${3}
 }
Now you only need to type the following (after starting a new copy of bash or sourcing ~/.bashrc): $ pdf-extract 22 36 inputfile.pdf, which will result in the file inputfile_page22_to_page36.pdf in the same directory as the input file.

For a graphical option, see http://sourceforge.net/projects/pdfshuffler/

– Sandeep Sidhu

Java runtime environment on Fedora 14

The standard installation of Fedora comes with OpenJDK instead of Sun Java. If it is not installed, it can be installed using yum:

yum install java-1.6.0-openjdk java-1.6.0-openjdk-plugin

If you installed OpenJDK, all the Java applications and web applets should automatically work. Unfortunately, some applets may not run properly and OpenJDK might have some limitations, but the majority of users should find OpenJDK perfect for everyday use.

Installing Sun Java: Download java from here: http://www.java.com/en/download/manual.jsp?locale=en&host=www.java.com

Always use the latest update. Select the latest Java JRE 6 update (the JDK is for developers). On the next page, for Platform select "Linux" for 32-bit users or "Linux x64" for 64-bit users, and for Language select "Multi-language". Also accept the license agreement, and hit "Continue".

On the next page, select the RPM option: 64-bit - jre-6u22-linux-x64-rpm.bin, or 32-bit - jre-6u22-linux-i586-rpm.bin.

Installation:
For 32-bit: #sh jre-6u22-linux-i586-rpm.bin
For 64-bit: #sh jre-6u22-linux-x64-rpm.bin

You will need to hit 'space' till it reaches the end of the license, then type 'yes'. It should install the RPM files automatically, but if it doesn't, you can install them manually using the rpm -ivh command on the extracted .rpm file.

Now, if you run the command #java -version, you will see that by default it is running the Java from OpenJDK. In order to use Sun Java, use the alternatives command.

In Fedora you can use alternatives to have multiple versions of Java installed on your system and change between them as and when required.

Run the following command (for both 32-bit and 64-bit users):

# /usr/sbin/alternatives --install /usr/bin/java java /usr/java/default/bin/java 20000

Explanation: alternatives --install <link> <name> <path> <priority>

  • name: the generic name for the master link.
  • link: the name of its symlink. (This link will be created by alternatives, so don't try to look up the path; it won't be there unless you have run the alternatives command earlier.)
  • path: the alternative being introduced for the master link.
  • priority: the priority of the alternatives group. Higher priorities take precedence if no alternative is manually selected.

Set up the Mozilla/Firefox browser plugin.

For 32-bit users:
# /usr/sbin/alternatives --install /usr/lib/mozilla/plugins/libjavaplugin.so libjavaplugin.so /usr/java/default/lib/i386/libnpjp2.so 20000

For 64-bit users:
# /usr/sbin/alternatives --install /usr/lib64/mozilla/plugins/libjavaplugin.so libjavaplugin.so.x86_64 /usr/java/default/lib/amd64/libnpjp2.so 20000

Note: don't try to locate the file libjavaplugin.so in the mozilla/plugins directory; it won't be there by default. alternatives will create a symbolic link with that name in the mozilla/plugins directory, pointing to the configured Java plugin. So just type in the above command as it is and hit Enter.

You need to restart firefox to see the plugin take effect.

You can run the following commands to change between different versions of Java as per your needs:

/usr/sbin/alternatives --config java
/usr/sbin/alternatives --config libjavaplugin.so (or, for 64-bit, /usr/sbin/alternatives --config libjavaplugin.so.x86_64)

The above commands will show all the different versions of Java available under alternatives control, and which one is the default; you can change the default by entering the number displayed on screen.

You can use the following command to see the current settings for java: /usr/sbin/alternatives --display java (and similarly for the others). It will display the current default Java in use.

– Sandeep Sidhu