Posts

Showing posts from 2016

How to create LVM volume with thin provisioning

This post shows how to create an LVM volume with thin provisioning, meaning that only the ranges of the volume actually in use get allocated.

Check volume groups
First, check the LVM volume groups to find out which VG has space for our thin volume pool: vgdisplay. Choose one of the volume groups with sufficient space; because we are using thin provisioning, we can get by with less space than normal provisioning would require. Second, check the existing logical volumes as well: lvs.

Creating the thin volume pool
Next, we create a thin volume pool in the chosen volume group (for example, vgdata): lvcreate -L 50G --thinpool globalthinpool vgdata. Print the resulting volumes using lvs: we see that globalthinpool has been created with a logical size of 50 gigabytes.

Creating a thinly provisioned volume
Now we create a thinly provisioned volume using the previously created pool: lvcreate -V100G -T vgdata/globalthinpool -n dockerpool. The command creates a 100 GB logical volume using thin provisioning.
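Put together, the steps above might look like the following sketch, assuming a volume group named vgdata with enough free extents; the pool and volume names are the post's own examples:

    # inspect volume groups and existing logical volumes
    vgdisplay
    lvs

    # create a 50 GB thin pool inside the vgdata volume group
    lvcreate -L 50G --thinpool globalthinpool vgdata

    # create a 100 GB thin volume backed by that pool;
    # only blocks actually written will consume pool space
    lvcreate -V100G -T vgdata/globalthinpool -n dockerpool

    # verify: the Data% column shows how much of the pool is really used
    lvs vgdata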

How to Run X Windows Server inside Docker Container

Background
Sometimes I need to run X Windows-based applications inside Docker containers, and running the X server locally is impractical, either because of latency or because the work laptop has no X Windows server. First I tried to create a VirtualBox-based VNC server, and it worked fine, albeit a little slowly, but Docker containers seem to have a better memory and disk footprint. So I tried to create a VNC server running X Windows inside a Docker container. I had already tried suchja/x11server (ref), but it has a strange problem of ignoring the cursor keys of my MacBook on WebKit pages (such as Pentaho Data Integration's Formula page).

Starting point
Many of my Docker images are based on Debian Jessie, so I started from the instructions in this DigitalOcean article: https://www.digitalocean.com/community/tutorials/how-to-set-up-vnc-server-on-debian-8. This VNC server is based on the XFCE desktop environment. The steps are basically to install: xfce4, xfce4-goodies, gnome-icon-theme…
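A minimal sketch of the installation step from the linked guide, as it might run inside a Debian Jessie container; the package list beyond gnome-icon-theme is cut off in the excerpt, so tightvncserver (the server the guide uses) is an assumption here:

    # inside a Debian Jessie-based container, as the post's images use
    apt-get update
    # XFCE desktop plus a VNC server; tightvncserver is assumed from the guide
    apt-get install -y xfce4 xfce4-goodies gnome-icon-theme tightvncserver

    # start the VNC server as a non-root user; the first run asks for a password
    vncserver :1 -geometry 1280x800 -depth 24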

Docker Basic 101

Background
This post describes notes resulting from my initial exploration of Docker. Docker could be described as a thin VM. Essentially, Docker runs processes on a Linux host in a semi-isolated environment. It is a brilliant technical accomplishment that exploits several characteristics of running applications on a Linux-based OS. First, the result of package installation is the distribution of package files into certain directories, plus changes to certain files. Second, an executable file from one Linux distribution can run on another Linux distribution, provided that all the required shared libraries and configuration files are in their places.

Basic characteristics of Docker images
Docker images are essentially similar to zip archives, organized as layer upon layer. Each additional layer provides new or changed files. A Docker image should be portable, meaning it can be used for different instances of an application on different hosts. Docker images are…
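A quick way to see the layer-upon-layer structure described above (a sketch; debian:jessie is chosen only because the other posts use that distribution):

    # pull a base image and list the layers it is built from
    docker pull debian:jessie
    docker history debian:jessie

    # each Dockerfile instruction adds one layer of new or changed files;
    # rebuilding reuses cached layers, much like appending to an archive
    docker images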

Running X11 Apps inside Docker on Remote Server

Background
Docker is a fast-growing trend that I could no longer ignore, so I tried Docker running on a Linux server machine. Running a server app inside Docker is a breeze, but I needed to run Pentaho Data Integration on the server, which uses an X11 display. There are several references about forwarding an X11 connection to a Docker container, but none works for my setup, which has the Quartz X server running on a Mac OS X laptop and the Docker service running on a remote Linux server.

The usual way
The steps to run X windowed applications in Docker containers can be read in Running GUI Apps with Docker and Alternatives to SSH X11 Forwarding for Docker Containers, and are essentially as follows: forward the DISPLAY environment variable to the container, and forward the /tmp/.X11-unix directory to the container. I had already tried these steps with no result, because I needed to add another step before those two: forwarding the X11 connection through an ssh connection to the server (not the container).
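The post's exact recipe is cut off in this excerpt, so the following is one workable variant under its assumptions (XQuartz on the laptop, Docker on the remote host; the image name and command are placeholders):

    # 1. on the laptop (XQuartz running): connect with X11 forwarding
    ssh -Y user@remote-server

    # 2. on the server, DISPLAY is now something like localhost:10.0, backed
    #    by an sshd TCP listener rather than the /tmp/.X11-unix socket, so
    #    share the host network and the X authority file with the container
    docker run -it --net=host \
      -e DISPLAY=$DISPLAY \
      -v $HOME/.Xauthority:/root/.Xauthority \
      my-pdi-image spoon.sh    # hypothetical image and command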

SAP System Copy Lessons Learned

Background
Earlier this year I was part of a team that did a system copy for a 20-terabyte-plus SAP ERP RM-CA system. Just now I was involved in doing two system copies in just over one week, for a much smaller amount of data. I thought I would note some lessons learned from the experience in this blog. For the record, we were migrating from HP-UX and AIX to the Linux x86 platform.

Things that go wrong
First, following the System Copy guide carefully is quite a lot of work, mainly because some important material is hidden in references within the guide. And reading a SAP Note that is referenced in another SAP Note, which is referenced in the Installation Guide, is a bit too much. Let me describe what went wrong.

VM time drift
The Oracle RAC cluster had a time-drift problem, killing one instance when the other was shutting down. The cure for our VMware-based Linux database server is hidden in SAP Note 989963 "Linux VMWARE Timing", which is basically to add tinker panic 0 to the…
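The fix, as commonly documented for VMware Linux guests, goes into the NTP configuration; a sketch (the note itself may specify more than what survives in this excerpt, and the server name is a placeholder):

    # /etc/ntp.conf on the VMware-based Linux database server
    # tell ntpd never to exit after a large clock jump, instead of
    # panicking when the guest's clock drifts too far
    tinker panic 0

    server ntp.example.com iburst    # placeholder NTP server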

'Cached' memory in Linux Kernel

It was my understanding that free memory in the Linux operating system can be seen by checking the second line in the output of "free -m": the first line shows memory that is really, truly free, while the second line shows free memory combined with buffers and cache. The reason, I was told, is that buffer and cache memory can be converted to free memory whenever there is a need; the cache memory is filled by the Linux filesystem cache. The problem is, I was wrong. There are several cases where I found that cache memory is not reduced when an application needs more memory. Instead, part of the application's memory is sent to swap, increasing swap usage and causing pauses in the system (while the memory pages are written to disk). In one case an Oracle database instance restarted, and the team thinks it was because the memory demand was too high (I think this is a bug). The cache memory is supposed to be reduced when we…
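A few commands for observing the behavior described above (a sketch; the swappiness check is my addition rather than the post's conclusion):

    # line 2, "-/+ buffers/cache", counts cache and buffers as reclaimable
    free -m

    # the si/so columns reveal swap-in/swap-out activity while the
    # application is asking for memory
    vmstat 1 5

    # higher values bias the kernel toward swapping application pages
    # instead of dropping page cache
    cat /proc/sys/vm/swappiness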

Nostalgic Programming in Pascal

A writer once said that in the new world, programmers would be free to choose any programming language to do their job. They would be able to use the most productive language for themselves and for the task at hand. On this occasion I am feeling a bit nostalgic, and found that there is an open-source compiler called Free Pascal. In the past I learned programming using Pascal as my second language, specifically Turbo Pascal.

Background
I needed to write a simple program to verify that a CSV file has the specified number of columns. It could be done using awk and the like, but I needed the program to be fast, because the file is large and the number of rows is in the order of hundreds of thousands.

The program
Explanation
At first the field counter returned strange results, very different from the expected ones. I was baffled until I remembered that Pascal Strings are at most 255 characters wide (the files have more characters in each line). So it turns out to…
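The awk alternative the post mentions in passing might look like this one-liner (a sketch; the expected column count of 30 and the file name data.csv are hypothetical):

    # report lines whose comma-separated field count differs from 30;
    # adjust -F for other delimiters
    awk -F',' 'NF != 30 { print FILENAME ": line " NR " has " NF " fields" }' data.csv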

Deploying Yii Application in Openshift Origin

This post describes how we deploy a Yii application to Openshift Origin. It should also work on Openshift Enterprise and Openshift Online.

Challenges
A PHP-based application that runs in a gear must not write to the application directory, because it is not writable. Openshift provides a data directory for each gear, which we can use for writing assets and the application runtime log. In the case of a load-balanced application, error messages written to the application log are also spread across multiple gears, making troubleshooting more complex than it would otherwise be.

Solution
Use a deploy action hook to create directories in the data directory and symbolic links back into the application. Change the deploy script to be executable; on Windows systems without TortoiseGit we need to do some git magic. Create this file as .openshift/action_hooks/deploy in the application source code. If your application is hosted in the 'php' directory of the source code: …
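A sketch of such a deploy hook, assuming the app lives under the gear's php/ directory and that assets/ and protected/runtime/ are the writable Yii directories (the directory names are illustrative; OPENSHIFT_DATA_DIR and OPENSHIFT_REPO_DIR are the standard per-gear environment variables):

    #!/bin/bash
    # .openshift/action_hooks/deploy -- runs on every deployment

    # create writable directories in the per-gear data directory
    mkdir -p "$OPENSHIFT_DATA_DIR/assets" "$OPENSHIFT_DATA_DIR/runtime"

    # symlink them into the (read-only) application directory
    ln -sfn "$OPENSHIFT_DATA_DIR/assets"  "$OPENSHIFT_REPO_DIR/php/assets"
    ln -sfn "$OPENSHIFT_DATA_DIR/runtime" "$OPENSHIFT_REPO_DIR/php/protected/runtime"

On Windows, the "git magic" for making the script executable is presumably along the lines of git update-index --chmod=+x .openshift/action_hooks/deploy before committing.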

Adapting Openshift Origin for High load

Openshift Origin is a Platform-as-a-Service software platform that enables us to scale applications horizontally and to manage multiple applications in one cluster. One Openshift node can contain many applications; the default settings allow for 100 gears (which could be 100 different applications, or maybe only 4 applications with 25 gears each). Each gear contains a separate Apache instance. This post describes adjustments that I have made on an Openshift M4 cluster deployed using the definitive guide. Maybe I really should upgrade the cluster to a newer version, but we are currently running production load on this cluster.

The node architecture
Load balancing in an Openshift application is done by haproxy. The general application architecture is shown below (replace Java with PHP for PHP-based applications) (ref: Openshift Blog: How haproxy scales apps). The gears shown running code, for PHP applications, each consist of one Apache HTTPD instance…
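For context, the gear count of such a scaled application is adjusted through the rhc client, and the node-side capacity through the node's resource limits; a sketch, with the application name and bounds as placeholders:

    # scale the web cartridge of a PHP app between 4 and 25 gears
    rhc cartridge-scale php -a myapp --min 4 --max 25

    # on the node itself, the 100-gear default mentioned above is governed
    # by max_active_gears in /etc/openshift/resource_limits.conf
    grep max_active_gears /etc/openshift/resource_limits.conf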

Long running process in Linux using PHP

Background
To get things done, I usually create web-based applications written in PHP. Sometimes we need to run something that takes a long time, far longer than the 10-second psychological limit for web pages. A bit of googling on Stack Overflow found us this: http://stackoverflow.com/questions/2212635/best-way-to-manage-long-running-php-script, but I will tell a similar story with a different solution. One of the long-running tasks that needs to be run is a Pentaho Data Integration transformation.

Difficulties in long-running PHP scripts
I encountered some problems when trying to make PHP do long-running tasks. First, PHP script timeout: this can be solved by calling set_time_limit(0); before the long-running task. Second, memory leaks: the framework I normally use has some memory issues, which can be solved either by patching the framework (OK, it is a bit difficult to do, but I did something similar in the past) or by splitting the data to process into several batches. And if you…
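One common shape for this, sketched in shell; the post's own solution is cut off in this excerpt, so the script and file names are placeholders:

    # launch the long-running task outside the web request, detached
    # from the terminal, with output captured for later inspection
    nohup php run_transformation.php > /tmp/transformation.log 2>&1 &

    # remember the PID so progress can be checked later
    echo $! > /tmp/transformation.pid

    # inside run_transformation.php the post's first fix applies:
    #   set_time_limit(0);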

Hack : Monitoring CPU usage from Your Mobile

Background
Sometimes I need to run a long-running background process on the server, and I need to know when the CPU usage returns to (almost) zero, indicating that the process has finished. I know there are other options, like sending myself an email when the process finishes, but currently I am satisfied with monitoring the CPU usage.

The old way
I have an Android cellphone, which allows me to: launch ConnectBot, type the ssh username and password, connect to the server, type top, and watch the result.

The new way
Because I am more familiar with PHP than with anything else right now (OK, there are times I am more familiar with C Sharp, but that is another story), I did a quick Google search for 'php cpu usage' and found http://stackoverflow.com/questions/13131003/get-cpu-percent-usage-in-php. Using stix's solution I created this simple JSON web service using PHP. For displaying the CPU as a graph, another Google search pointed me to Flot, a JavaScript l…
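The underlying measurement, independent of PHP, can be sketched in shell; this one-liner is an assumption in the same spirit as the linked answer, not the post's exact code:

    # one-second CPU utilisation sample: vmstat's 15th column is idle %,
    # so usage is 100 minus idle; tail -1 takes the measured (second) row
    vmstat 1 2 | tail -1 | awk '{ print 100 - $15 "%" }'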

Installing MariaDB and TokuDB in Ubuntu Trusty

Background
In this post I tell the story of installing MariaDB on Ubuntu Trusty, and the process I went through to enable the TokuDB engine. I needed to experiment with the engine as an alternative to the Archive engine for storing compressed table rows. It has better performance than InnoDB tables (row_format=compressed), and it was recommended in some blog posts (this post and this).

Packages for Ubuntu Trusty
To be able to use TokuDB, I sought out the documentation and found that Ubuntu 12.10 and newer on the 64-bit platform require the mariadb-tokudb-engine-5.5 package. Despite the existence of mariadb-5.5 packages, I found no package containing the tokudb keyword in the official Ubuntu Trusty repositories. The MariaDB 5.5 server package also doesn't contain ha_tokudb.so (see the file list). The solution is to use the repository from this online wizard. Installing mariadb-server-10.1, we have many storage engines available, tokudb and cassandra…
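A condensed sketch of the resulting setup, assuming the MariaDB repository has already been added via the online wizard (the mirror URL is omitted here):

    # install MariaDB 10.1 from the already-configured MariaDB repository
    sudo apt-get update
    sudo apt-get install -y mariadb-server-10.1

    # TokuDB refuses to start while transparent hugepages are enabled
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

    # load the plugin, then confirm TokuDB appears among the engines
    sudo mysql -e "INSTALL SONAME 'ha_tokudb';"
    sudo mysql -e "SHOW ENGINES;"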