Posts

Showing posts from October, 2014

Configuring Openshift Origin with S3-based persistent shared storage

This post describes the steps I took to provide shared storage for an OpenShift Origin M4 installation. There were some difficulties that had to be solved by non-standard methods. Requirement When hosting applications on the OpenShift Origin platform, we are confronted with a bitter truth: writing applications for cloud platforms requires us to avoid writing to local filesystems. There is no support for storage shared between gears. But we still need to support multiple PHP applications that store their attachments in the local filesystem with minimal code changes. So we need a way to quickly implement shared storage between gears of the same application, and perhaps loosen the application isolation requirement just for the shared storage. Basic Idea The idea is to mount an S3 API-based storage on all nodes; each gear can then refer to its application's folder inside the shared storage to store and retrieve file attachments. My implementation uses an EMC ViPR shared
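The excerpt cuts off here and does not show the actual mount commands. As a minimal sketch of the idea it describes, assuming s3fs-fuse and hypothetical bucket name, endpoint URL, and paths (none of these are from the post):

# Credentials for s3fs-fuse (ACCESS_KEY:SECRET_KEY); values are placeholders
echo 'AKIAEXAMPLE:secretexample' > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# On every node: mount an S3 API-compatible bucket (e.g. one exposed by EMC ViPR)
mkdir -p /mnt/shared-storage
s3fs app-attachments /mnt/shared-storage \
    -o url=https://vipr.example.com \
    -o use_path_request_style \
    -o passwd_file=/etc/passwd-s3fs \
    -o allow_other

# Each gear then reads and writes a per-application folder, for example
#   /mnt/shared-storage/<app-uuid>/uploads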

Debugging Ruby code - Mcollective server

In this post I record the steps I took to debug some Ruby code. The code in question was the Ruby MCollective server installed as part of an OpenShift Origin Node; the bug is that the server consistently fails to respond to client queries in my configuration. I documented the steps taken even though I hadn't nailed the bug yet. First things first First we need to identify the entry point. These commands do the trick: [root@broker ~]# service ruby193-mcollective status mcollectived (pid  1069) is running... [root@broker ~]# ps afxw | grep 1069  1069 ?        Sl     0:03 ruby /opt/rh/ruby193/root/usr/sbin/mcollectived --pid=/opt/rh/ruby193/root/var/run/mcollectived.pid --config=/opt/rh/ruby193/root/etc/mcollective/server.cfg 12428 pts/0    S+     0:00          \_ grep 1069 We found that the service is: running with pid 1069, running with configuration file /opt/rh/ruby193/root/etc/mcollective/server.cfg, and that its source code is at /opt/rh/rub
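The excerpt is truncated here. A plausible next step, not shown above, is to check the configuration file we just found for its log location and raise the verbosity; loglevel, logfile, and libdir are standard MCollective settings, while the exact values depend on the installation:

# Inspect logging and plugin settings in the config file found above
grep -E 'loglevel|logfile|libdir' /opt/rh/ruby193/root/etc/mcollective/server.cfg

# Set loglevel = debug in server.cfg, restart the service, and watch
# whatever file the logfile setting points to
service ruby193-mcollective restart
tail -f "$(awk -F'= *' '/^logfile/ {print $2}' /opt/rh/ruby193/root/etc/mcollective/server.cfg)"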

How to move an EC2 Instance to another region

In this post I describe the process of moving an EC2 instance to another region. The background I have a server in one of the EC2 regions that is a bit pricier than the rest, and it seemed that moving it to another region would save me some bucks. Well, it turns out that I made a few blunders that may have made the savings negligible. The initial plan I read that snapshots can be copied to other regions. So the original plan was to create snapshots of the volumes backing the instance (I have one instance with three EBS volumes), copy them to another region, and create a new instance in the new region. The mistake My mistake was assuming that creating a new instance is a simple matter of selecting the platform (i386 or x86_64) and the root EBS volume. Actually, it is not. First, we create an AMI (Amazon Machine Image) from an EBS snapshot, not an EBS volume. Then we can launch a new instance based on the AMI. As shown below, when we are trying to create a new AMI
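The excerpt (and the screenshot it refers to) is cut off here. As a command-line sketch of the same flow, assuming the AWS CLI and placeholder volume, snapshot, and AMI IDs and regions:

# 1. Snapshot each EBS volume backing the instance (repeat per volume)
aws ec2 create-snapshot --region us-west-1 --volume-id vol-11111111 --description "root volume"

# 2. Copy each snapshot to the destination region
aws ec2 copy-snapshot --source-region us-west-1 --source-snapshot-id snap-22222222 --region us-east-1

# 3. Register an AMI from the copied root snapshot (HVM shown; a paravirtual
#    AMI would also need a kernel image ID valid in the destination region)
aws ec2 register-image --region us-east-1 --name moved-server \
    --architecture x86_64 --virtualization-type hvm \
    --root-device-name /dev/sda1 \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-33333333"}}]'

# 4. Launch a new instance from the AMI in the destination region
aws ec2 run-instances --region us-east-1 --image-id ami-44444444 --instance-type m3.medium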

How to Peek inside your ActiveMQ Server

This post describes steps sysadmins can take to peek inside an ActiveMQ server. We assume root capability; otherwise we need a user that has access to the ActiveMQ configuration files. Step 1. Determine the running ActiveMQ process ps auxw | grep activemq We get a java process running ActiveMQ: [root@broker ~]# ps auxw | grep activemq activemq  1236  0.1  0.0  19124   696 ?        Sl   07:00   0:02 /usr/lib/activemq/linux/wrapper /etc/activemq/wrapper.conf wrapper.syslog.ident=ActiveMQ wrapper.pidfile=/var/run/activemq//ActiveMQ.pid wrapper.daemonize=TRUE wrapper.lockfile=/var/lock/subsys/ActiveMQ activemq  1243  3.2 12.2 2016568 125264 ?      Sl   07:00   1:06 java -Dactivemq.home=/usr/share/activemq -Dactivemq.base=/usr/share/activemq -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.trustStorePassword=password -Djavax.net.ssl.keyStore=/usr/share/activemq/conf/broker.ks -Djavax.net.ssl.trustStore=/usr/share/activemq/conf/broker.ts -Dcom.sun.management.jmxre
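The excerpt cuts off in the middle of the JMX system properties. Two follow-up checks one might run after finding the process, shown as a hedged sketch (the admin-tool path and the default port numbers are assumptions, not from the post):

# Which ports does the broker process (pid 1243 above) listen on?
# Defaults: 61613 = STOMP, 61616 = OpenWire, 8161 = web console
netstat -tlnp | grep 1243

# Query broker and destination statistics over JMX with the bundled tool
/usr/share/activemq/bin/activemq-admin query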

Verification of Node installation in Openshift Origin M4

The OpenShift Origin Comprehensive Deployment Guide (http://openshift.github.io/documentation/oo_deployment_guide_comprehensive.html) states that there are several things that can be done to ensure a Node is ready for integration into the OpenShift cluster: run the built-in node check script, oo-accept-node; check that facter runs properly via /etc/cron.minutely/openshift-facts; and check that MCollective communication works by running oo-mco ping on the broker. What I found is that this is not enough. For example, openshift-facts shows blanks even when there is an error in the facter functionality, so check facter directly with: facter. And oo-mco ping works fine even when something is wrong with the RPC channel, so I would suggest running these on the broker: oo-mco facts kernel and oo-mco inventory (see the sketch below). In one of our OpenShift Origin M4 clusters, I have these lines in /opt/rh/ruby193/root/etc/mcollective/server.cfg: main_collective = mcollective coll
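A minimal sketch of the extra verification suggested above, using only the commands named in the post (the comments describe the expected results, which is my own reading):

# On the broker: check that every node answers real RPC queries, not just pings
oo-mco ping
oo-mco facts kernel      # each node should report a kernel fact such as "Linux"
oo-mco inventory         # lists agents, facts and collectives per node

# On the node: run facter directly instead of trusting the cron wrapper
facter | head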