Thursday, July 24, 2014

Port Forwarding using your Windows Server or Laptop

Sometimes we just need to forward a port from one host to another. A few years back I built a Java NIO-based port forwarder to learn about Java NIO socket communication. That source code has since been lost, and now there are two new needs for port forwarding. The first is a server failing to connect to services on another server; I suspect the server's IP might be translated to a different IP when it connects to the other network, so the port forwarder needs to be able to log the IPs of incoming connections. The second is to temporarily work around strange network problems that prevent the first server from connecting to the other server.

Step 1 - Decide which incoming port to use

First we need to decide which port will accept connections on the first host. Check that the port is not already in use on that host (for example, try opening http://localhost:xx in your favorite browser, where xx is the chosen port).

Step 2 - Open windows firewall for the port

For this step we need to go to windows firewall settings and allow connections to the chosen port.
In my Windows 8.1 laptop, the steps are :

  • Windows-S, type 'firewall'
  • click on the shown Windows Firewall icon
  • click on Advanced Settings (left menu)
  • click Inbound Rules (left tree)
  • click New Rule (right Actions menu)
  • choose Port, click Next
  • choose TCP, insert port number in specific local ports, click Next
  • choose Allow connection (don't change the default), click Next
  • check all Domain, Private, and Public boxes, click Next
  • type name and description, Finish
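For reference, the same inbound rule can also be created from an elevated command prompt with netsh; the rule name and port 8080 below are placeholders, not values from this post:

```shell
:: create an inbound allow rule for a TCP port on all three profiles
:: (run in an elevated Command Prompt; the rule name is arbitrary)
netsh advfirewall firewall add rule name="PortForwardIn" dir=in action=allow protocol=TCP localport=8080 profile=domain,private,public
```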

Step 3 - Enable IPv6 protocol in the network adapter

We need to enable IPv6 on the adapter, because Windows's port proxy service needs the IPv6 library even though we only forward IPv4 ports.
The steps:
  • Click triangle 'show hidden icons' in Windows Taskbar near the clock
  • Right click on the connected network icon
  • Click open Network and sharing center
  • Click on the active Connection where we want to enable the port forwarding
  • Click Properties
  • Ensure the TCP/IP v6 checkbox is checked. If there is no TCP/IP v6 entry, click Install, choose Protocols, then TCP/IP v6.

Step 4 - Enable port forwarding using command line

  • Windows-S, type 'cmd'
  • Right click on Command Prompt, click Run as Administrator
  • In the command console, type :

netsh interface portproxy add v4tov4 listenport=80 connectport=81 connectaddress= 

This example forwards local port 80 to port 81 on the host given in connectaddress. Change the IP and port numbers as needed.
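Netsh can also list and remove these proxies, which helps when testing; these are standard portproxy subcommands:

```shell
:: show every configured port proxy
netsh interface portproxy show all
:: delete the forwarding rule for listen port 80 when no longer needed
netsh interface portproxy delete v4tov4 listenport=80
```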

Step 5 - Enable incoming connection logging 

This step is optional; I need it because I have to record the IP addresses that connect to the server/laptop running the port forwarding service.

  • Windows-S, type 'firewall'
  • click on the shown Windows Firewall icon
  • click on Advanced Settings (left menu)
  • Ensure Windows Firewall in left menu is selected
  • Click Windows Firewall properties in the middle window
  • Click on Public Profile (or other profile, depending your active profile)
  • Click on Customize.. on the Logging fieldset
  • Change Log successful connections from No to Yes, click OK
  • click Apply
The log will be written to the path specified in the Logging customization screen. To read the log, use an Administrator command console, because the location is not accessible to a normal user, or to an Admin without privilege escalation.
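On a default installation the Logging dialog points at %systemroot%\system32\LogFiles\Firewall\pfirewall.log, so reading it from an elevated prompt looks like:

```shell
:: view the firewall log from an elevated Command Prompt
:: (the path is the Windows default shown in the Logging dialog)
more %systemroot%\system32\LogFiles\Firewall\pfirewall.log
```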

That's all.


Sunday, July 13, 2014

Compiling PDO_OCI in CentOS / RHEL


Similar to the previous post, my Yii-framework-based PHP application needs to access Oracle database tables. Yii requires the PDO_OCI PHP extension in order to access an Oracle database. I will describe the steps I took to compile the PDO_OCI extension from the php source package (SRPM).


In CentOS, we need to create /etc/yum.repos.d/source.repo because CentOS doesn't come with one. It defines the base and updates SRPMS repositories:

name=CentOS-$releasever – Base SRPMS

name=CentOS-$releasever – Updates SRPMS

We also need yum-utils and rpm-build packages
yum install yum-utils rpm-build

Then, download the source package file with yumdownloader :
[root@essdev ~]# yumdownloader --source php
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base:
 * extras:
 * updates:
base-SRPMS-6.5                                           | 1.9 kB     00:00
base-SRPMS-6.5/primary_db                                | 672 kB     00:03
updates-SRPMS-6.5                                        | 2.9 kB     00:00
updates-SRPMS-6.5/primary_db                             | 104 kB     00:00
php-5.3.3-27.el6_5.src.rpm                               |  10 MB     00:49

I want to use a non-root user to do the compile, so prepare the directories:
[esscuti@essdev ~]$ cd ~/src/rpm
[esscuti@essdev rpm]$ mkdir BUILD RPMS SOURCES SPECS SRPMS
[esscuti@essdev rpm]$ mkdir RPMS/{i386,i486,i586,i686,noarch,athlon}
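rpmbuild looks for these directories under %_topdir, which defaults to ~/rpmbuild, so it has to be pointed at ~/src/rpm. The post doesn't show this step, but a minimal ~/.rpmmacros would be:

```shell
# tell rpmbuild to use ~/src/rpm as its working tree
echo '%_topdir %(echo $HOME)/src/rpm' > ~/.rpmmacros
```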

Move or copy the src.rpm to the non-root user's home directory.
[root@essdev ~]# mv php-5.3.3-27.el6_5.src.rpm  /home/esscuti/

Do a test build

First we should be able to build the unchanged php source code.

Install the src rpm onto the src directories
[esscuti@essdev ~]$ rpm -ivh php-5.3.3-27.el6_5.src.rpm
   1:php                    warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root

As Red Hat says in Bug #206277, the mockbuild warnings are benign; ignore them.

Install build dependencies :
sudo yum-builddep php-5.3.3-27.el6_5.src.rpm
Do the build as the non-root user
[esscuti@essdev ~]$ rpmbuild -ba src/rpm/SPECS/php.spec

Oracle Instant Client Installation

1. We need Oracle Instant Client for the OCI library; I have had good experiences with Instant Client 10.2. You need to download the Instant Client Basic and SDK packages from OTN (Oracle Technology Network).
2. Extract the Instant Client files, move them to /opt/instantclient_10_2, and create symbolic links.
ln -s /opt/instantclient_10_2/ /opt/instantclient_10_2/
ln -s /opt/instantclient_10_2/ /opt/instantclient_10_2/
ln -s /opt/instantclient_10_2/lib /opt/instantclient_10_2

Rebuild with pdo-oci enabled

Insert a --with-pdo-oci=shared line in src/rpm/SPECS/php.spec to enable the pdo_oci extension:
      --enable-pdo=shared \
      --with-pdo-odbc=shared,unixODBC,%{_prefix} \
      --with-pdo-oci=shared,instantclient,/opt/instantclient_10_2, \
      --with-pdo-mysql=shared,%{mysql_config} \
      --with-pdo-pgsql=shared,%{_prefix} \
      --with-pdo-sqlite=shared,%{_prefix} \
      --with-sqlite3=shared,%{_prefix} \

Do the build:
rpmbuild -ba src/rpm/SPECS/php.spec

Copy the extension library to the PHP extension directory:
[root@essdev esscuti]# cp src/rpm/BUILDROOT/php-5.3.3-27.el6.x86_64/usr/lib64/php/modules/ /usr/lib64/php/modules/

Create an ini file to load the extension:
[root@essdev esscuti]# cat > /etc/php.d/pdo_oci.ini
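The ini content is not shown above; assuming the module file was copied as pdo_oci.so, the file typically contains a single extension directive:

```shell
# write the one-line ini that loads the module (the file name pdo_oci.so is assumed)
mkdir -p /etc/php.d
cat > /etc/php.d/pdo_oci.ini <<'EOF'
extension=pdo_oci.so
EOF
```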

Make the dynamic linker refer to the instantclient directory for the shared objects/libraries:
[root@essdev modules]# cat > /etc/

[root@essdev modules]# ldconfig 
[root@essdev modules]# ldconfig -v
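Written out, this step amounts to something like the following; the conf file name under /etc/ld.so.conf.d/ is my own choice, since the original path is truncated:

```shell
# add the instant client directory to the dynamic linker search path
echo '/opt/instantclient_10_2' > /etc/ld.so.conf.d/oracle-instantclient.conf
# rebuild the linker cache
ldconfig
```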

Change the SELinux labels on the instantclient files:
[root@essdev instantclient_10_2]# restorecon -Fv *.so
restorecon reset /opt/instantclient_10_2/ context unconfined_u:object_r:admin_home_t:s0->system_u:object_r:lib_t:s0
restorecon reset /opt/instantclient_10_2/ context unconfined_u:object_r:lib_t:s0->system_u:object_r:lib_t:s0
restorecon reset /opt/instantclient_10_2/ context unconfined_u:object_r:lib_t:s0->system_u:object_r:lib_t:s0
restorecon reset /opt/instantclient_10_2/ context unconfined_u:object_r:admin_home_t:s0->system_u:object_r:lib_t:s0
restorecon reset /opt/instantclient_10_2/ context unconfined_u:object_r:lib_t:s0->system_u:object_r:lib_t:s0

Change some other SELinux contexts:
[root@essdev instantclient_10_2]# chcon -v system_u:object_r:lib_t:s0 /opt/instantclient_10_2
changing security context of `/opt/instantclient_10_2'
[root@essdev instantclient_10_2]# chcon -v system_u:object_r:textrel_shlib_t:s0
changing security context of `'
[root@essdev instantclient_10_2]# chcon -v system_u:object_r:textrel_shlib_t:s0
changing security context of `'

Restart apache :
service httpd restart

Check using the command line and info.php:
php -r "phpinfo();"
cat > /var/www/html/info.php
<?php phpinfo(); ?>

Friday, July 11, 2014

How To Build PDO_OCI in Ubuntu 12.04

Building the PDO_OCI extension in Ubuntu 12.04 is a little difficult. The reasons:
a. the pdo extension is included in the php5 package
b. PDO_OCI in pecl requires the pdo extension source, not the pdo extension binary
c. pdo from pecl cannot compile under PHP 5.3
d. the tgz resulting from 'pecl download PDO_OCI' is malformed (well, as of today, 11-07-2014, it is)
Why do I need PDO_OCI? Well, I use the Yii framework and need to access an Oracle database.

Yesterday I tried this strategy to obtain the pdo_oci extension:
1. downloaded Instant Client Basic and SDK from OTN (Oracle Technology Network)
2. extracted the instant client files, moved them to /opt/instantclient_10_2, and created 3 symbolic links
3. downloaded the php5 source package, and tried to rebuild the php5 debs using debuild. This would ensure the php extensions were built.
apt-get install dpkg-dev
apt-get source php5
apt-get build-dep php5
apt-get install devscripts
debuild -us -uc

4. after the php5 debs were created, changed the debian/rules file to enable pdo_oci compilation by inserting a with-pdo-oci line:
                --with-pdo-odbc=shared,unixODBC,/usr \
                --with-pdo-pgsql=shared,/usr/bin/pg_config \
                --with-pdo-oci=shared,instantclient,/opt/instantclient_10_2, \
                --with-pdo-sqlite=shared,/usr \
Note that there are 4 parameters for with-pdo-oci that need to be specified. The first parameter builds pdo_oci as a shared extension instead of statically compiling it. The second states that we will use the Oracle Instant Client driver. The third gives the location of the instant client files. The fourth explicitly states the instant client version we have, because the autodetection doesn't work very well.
5. Rebuilt using debuild, and copied the resulting .so to the modules folder.
cp debian/libapache2-mod-php5/usr/lib/php5/20090626+lfs/ /usr/lib/php5/20090626+lfs/
That's as far as I can recall. If something was inadvertently omitted, I will update this post.

Tips :

1. The linker will skip an instant client shared library (.so) if the architecture doesn't match. For example, it will skip the instant client .so if the downloaded instant client is for a 64-bit architecture and we are on 32-bit Linux.
2. Locale errors can be fixed by installing language-pack-de and executing locale-gen as root.
apt-get install language-pack-de
locale-gen

3. Three symbolic links should be created for instantclient :
ln -s /opt/instantclient_10_2/ /opt/instantclient_10_2/
ln -s /opt/instantclient_10_2/ /opt/instantclient_10_2/
ln -s /opt/instantclient_10_2/lib /opt/instantclient_10_2
4. Enable yii schema caching to reduce long delays caused by inefficient data dictionary queries in the Oracle database

Tuesday, July 1, 2014

How to dump stacktrace in running Ruby process


Murphy's law says that if something can break, it will break at the worst possible time. Or something like that. Anyway, more often than not, our software doesn't behave as it should, and I often get web apps that wait endlessly for something. It makes us curious what on earth causes the app to wait. In this case, the app is the OpenShift Origin console. Being Ruby based, there should be a way to dump stack traces from its running threads.


At first I tried to borrow the thread-dump method from the OpenShift Ruby cartridge. Upon reverse engineering the cartridge (ok, I just snooped in some files such as this one), I was surprised to find that all the Ruby cartridge does is send signal ABRT to the process whose title has the prefix 'Rack: '. I tried applying the same procedure to the running openshift-console process, and the result was a killed process and some confusion.
Another reference, the Phusion Passenger user's guide, tells me that a Ruby or Python process that receives an ABRT should print a backtrace and then abort. The fact that the backtrace was missing from all of the known log files after sending ABRT made me skeptical about the usability of this technique. The user's guide also states that signal QUIT can be sent to Ruby processes, and is supposed to have the same result without killing the process in cold blood. But sending QUIT to openshift-console's Rack process had no solid result either.

kill -s QUIT <pid>
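To hit every matching process at once, the kill can be wrapped in a small loop; the 'Rack: ' title prefix comes from the cartridge code mentioned above, and whether your processes carry it is worth verifying first:

```shell
# send SIGQUIT to every process whose command line matches "Rack: ";
# under Passenger this should dump a backtrace without killing the process
for pid in $(pgrep -f 'Rack: ' || true); do
  kill -s QUIT "$pid"
done
```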

Further lead

Knowing that openshift-console's processes should contact openshift-broker's API, I started sending QUIT to openshift-broker's Rack process. This time, reviewing the log files, I got several clues about the cause of the frozen openshift-console.
Another valuable reference for debugging a frozen process is a New Relic blog entry.


Unfortunately this post must end before the story is quite finished. But the technique of sending QUIT to the Rack process (or is it the httpd process? I forget) can shed some light on which region of Ruby code is currently executing.

Learning Openshift Origin

This post is about my experience installing OpenShift Origin on my company's servers.


I find the comprehensive deployment guide on the OpenShift site very useful, but not without flaws. My OS is RHEL, but I think my experiences would also apply to CentOS systems.
The first glitch found was that the yum update broke just after step 1.1 (repository configuration). The problem was a complex dependency between packages; in short, it was fixed after I ran "yum erase libart_lgpl-devel". That package is not needed for correct system operation.
I noted that the mcollective installation prerequisite (Chapter 5) doesn't mention that on RHEL we need a different package than on Fedora, namely:

yum install -y ruby193-mcollective-client

OpenShift installation liberally uses SCL (Software Collections) packages, in which a different root is used to install newer versions of certain software packages. For example, the ruby193 SCL is installed under /opt/rh/ruby193/root/. The old version of Ruby (1.8.7) is still installed as /usr/bin/ruby, and when we need Ruby 1.9.3 we need to add /opt/rh/ruby193/root/usr/bin to PATH and the corresponding library path to LD_LIBRARY_PATH. This can be done with the command:
scl enable ruby193 bash
But the OpenShift installation does this globally by creating a script in /etc/profile.d/, making the SCL paths available in the global profile.
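The profile.d script name is cut off above, but its effect boils down to exporting the SCL paths, roughly:

```shell
# roughly what the generated /etc/profile.d/ script does for the ruby193 SCL
export PATH=/opt/rh/ruby193/root/usr/bin:$PATH
export LD_LIBRARY_PATH=/opt/rh/ruby193/root/usr/lib64:$LD_LIBRARY_PATH
```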

For most of the deployment guide, top to bottom is the correct sequence to apply. I say "most" because the missing ruby193-mcollective-client left my system without /opt/rh/ruby193/root/etc/mcollective/client.cfg to configure, so I had to do that configuration in a later step, once the file was available.

Complications arose because the newer version of the v8 engine, previously packaged as ruby193-v8, is now packaged as a different SCL altogether (v8314-v8 and/or v8314-runtime). The new SCL folder must be integrated into the OpenShift applications, such as the admin console and developer console. If no adjustments are made, the developer console shows blank in the browser and a missing-javascript-runtime error shows in the admin console. In my system this is done by changing /var/www/openshift/console/script/console_ruby and /var/www/openshift/broker/script to include the additional library path from the v8314 SCL:

export LD_LIBRARY_PATH=/opt/rh/ruby193/root/usr/local/lib64:/opt/rh/ruby193/root/usr/lib64:/opt/rh/v8314/root/usr/lib64

Installation packages

In summary, here are the packages that I installed on the broker host:
bind.x86_64                                32:9.8.2-0.23.rc1.el6_5.1     @updates
bind-libs.x86_64                           32:9.8.2-0.23.rc1.el6_5.1     @anaconda-RedHatEnterpriseLinux-201009221801.x86_64/6.3
bind-utils.x86_64                          32:9.8.2-0.23.rc1.el6_5.1     @anaconda-RedHatEnterpriseLinux-201009221801.x86_64/6.3

libmongodb.x86_64                          2.4.6-2.el6oso                @openshift-origin-deps

mongodb.x86_64                             2.4.6-2.el6oso                @openshift-origin-deps
openshift-origin-broker.noarch                   @openshift-origin
openshift-origin-broker-util.noarch                @openshift-origin
openshift-origin-console.noarch                  @openshift-origin
openshift-origin-msg-common.noarch                @openshift-origin
openshift-origin-util-scl.noarch                 @openshift-origin
ruby193-mcollective-client.noarch          2.2.3-2.el6oso                @openshift-origin-deps
ruby193-mcollective-common.noarch          2.2.3-2.el6oso                @openshift-origin-deps

Configuration files

The installation procedure also involves creating various configuration files :

Reference to openshift rpms from Red Hat's openshift repository
Environment variables changes to activate ruby193
Name server cloud domain configuration 
Name server upstream configuration
Name server and key to be used by broker
Console authentication configuration
Broker authentication configuration

Some other files must be changed according to the guide :
DNS settings
Hostname setting
Name server configuration to refer cloud domain config
Mcollective configuration to use activemq service
Gear size, cloud domain, and mongodb service to be used by broker

Time keeping

To ensure the best timekeeping behavior on my company's VM-based infrastructure, I applied VMware KB 1006427, in essence:
  • add tinker panic 0 at the top of /etc/ntp.conf
  • comment out the Undisciplined Local Clock lines in /etc/ntp.conf, preventing NTP from preferring the local clock over an external network time source.
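On a stock RHEL ntp.conf the result looks roughly like this; the commented lines are the standard local-clock entries:

```
tinker panic 0
# local clock lines commented out so NTP cannot fall back to them
#server
#fudge stratum 10
```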

Node installation

The packages that I installed on the node host are:
rubygem-openshift-origin-node.noarch                @openshift-origin
rubygem-passenger-native.x86_64            1:3.0.21-11.el6oso            @openshift-origin-deps
openshift-origin-port-proxy.noarch                 @openshift-origin
openshift-origin-node-util.noarch                @openshift-origin
openshift-origin-cartridge-cron.noarch                @openshift-origin
openshift-origin-cartridge-haproxy.noarch                @openshift-origin
openshift-origin-cartridge-mongodb.noarch                @openshift-origin
openshift-origin-cartridge-mysql.noarch                @openshift-origin
openshift-origin-cartridge-php.noarch                @openshift-origin

Network Troubles in Node

Several services will not start if your LAN device is not eth1, because most of the openshift shell scripts assume eth1 for network connectivity. Or it might have occurred because I didn't specify EXTERNAL_ETH_DEV in /etc/openshift/node.conf. Either way, the cure is to set EXTERNAL_ETH_DEV in /etc/openshift/node.conf.
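In node.conf this is a single setting; eth0 below is just an example value for your actual LAN device:

```
# /etc/openshift/node.conf
EXTERNAL_ETH_DEV="eth0"
```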


Installing OpenShift Origin on multiple hosts is a priceless experience. I got to know some of the mechanisms that OpenShift relies on.