Posts

Showing posts from 2010

Installing oci8 on CentOS

Today I have the inclination to resume one of my postponed to-dos: installing the Oracle Instant Client and the PHP oci8 extension on two production servers.

References:
http://shamuntoha.wordpress.com/2010/04/12/centos-oracle-php-database-connect-oci-instantclient-oci8/
http://ubuntuforums.org/showthread.php?t=92528
http://www.jagsiacs.co.uk/node/77

Overview: the basic steps are to download and extract the Basic and SDK Instant Client packages, then run pecl install oci8 to download and compile the oci8 extension. I encountered a few issues while installing the extension:
- pear tried to connect directly to the PEAR server. I had to set http_proxy using the pear config-set command.
- Missing links. Because I downloaded the zip files rather than the RPMs, a few symlinks were missing when linking oci8 against the Instant Client. The solution is to create them manually.
- Intermittent compilation problem. One out of two compiles resulted in an error, even with exactly the same arguments and environment.
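The proxy and missing-symlink fixes above can be sketched roughly as follows. This is a sketch, not a transcript: the proxy URL and the instantclient_10_2 path/version are placeholders for whatever your environment uses.

```shell
# Point pear/pecl at the proxy, otherwise pecl tries to reach
# pear.php.net directly and times out behind the firewall:
pear config-set http_proxy http://proxy.example.com:8080

# Build the extension (it will ask where the Instant Client lives):
pecl install oci8

# The zip distribution ships libclntsh.so.10.1 but not the bare
# libclntsh.so symlink that the linker looks for — create it manually
# in the Instant Client directory (adjust path and version to taste):
cd /opt/oracle/instantclient_10_2
ln -s libclntsh.so.10.1 libclntsh.so
```

If the link step is skipped, the oci8 configure/compile stage typically fails at link time complaining it cannot find -lclntsh.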

Solving time drift problem on Ubuntu VMWare Guest

I have an Ubuntu VMware guest and have been having trouble with time drift. After a bit of twiddling with NTP synchronization, still getting unacceptable time drift, I finally read a post at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=471784 :

From: Kazuhiro NISHIYAMA
To: 471784@bugs.debian.org
Subject: Please recommends open-vm-source
Date: Fri, 28 Mar 2008 21:13:22 +0900

I had a same problem, and resolved following commands:
* sudo aptitude purge open-vm-tools
* sudo reboot
* sudo aptitude install open-vm-source
* pager /usr/share/doc/open-vm-source/README.Debian
* sudo module-assistant prepare open-vm
* sudo module-assistant auto-install open-vm
* sudo aptitude install open-vm-tools
* sudo reboot

I did the steps above, and it seems to work perfectly. Well, I skipped the reboot steps and the pager one.

EDIT: It seems that's not enough. I have an AMD Phenom X4 CPU, which has some anomalies with regard to CPU clock speed. The problem is that the CentOS 4.6 host detected that the CPU hav
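Before and after a fix like the one above, the drift can be measured from inside the guest without changing anything. ntpdate's query mode only reports the offset, it does not step the clock; pool.ntp.org is just an example server.

```shell
# Show the offset between the guest clock and an NTP server,
# without actually adjusting the clock:
ntpdate -q pool.ntp.org

# Compare the hardware clock against the system clock:
sudo hwclock --show ; date
```

Running the query a few minutes apart gives a rough drift rate, which makes it easy to tell whether a change actually helped.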

Implementing Multimaster Replication

Today I tried to implement multimaster replication on two Red Hat Enterprise Linux 5 servers. I studied the top three Google results for 'multimaster replication mysql':

http://onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
http://capttofu.livejournal.com/1752.html

I found these three resources complementary. At first I had no idea about the replication process. I initially followed the instructions from the dev.mysql.com site to set up one-way replication, then read the LiveJournal blog for a better one-page summary of the process. From the ONLamp article I understood the syntax to stop replication and to order MySQL to skip one SQL statement when resuming replication. I'm setting up replication with existing data in the database, so the steps I used are a little different from the three web resources above. And because I'm using an out-of-the-box RHEL setup, there was an issue with the firewall blockin
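A minimal sketch of the two servers' setup, assuming MySQL 5.0 with server B reachable at 192.168.0.2 (server IDs, hosts, credentials, and log coordinates are placeholders, not taken from my actual servers). Each server is both a master and a slave of the other; the auto_increment settings keep the two masters from generating colliding auto-increment keys:

```ini
# my.cnf on server A (mirror it on B with server-id = 2
# and auto_increment_offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1
```

Then each server is pointed at the other, and when one bad statement blocks replication it can be skipped, as described in the ONLamp article:

```sql
-- On server A, replicate from B (run the mirror-image statement on B):
CHANGE MASTER TO MASTER_HOST='192.168.0.2',
  MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
START SLAVE;

-- Skip one offending statement and resume:
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
```

On a stock RHEL box, the firewall issue is usually the default iptables rules blocking the MySQL port (TCP 3306) between the two servers.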

Feedback Cycles - key to timely process delivery

I've recently noticed an analogy in some events that occurred lately. It's all about feedback. Back when I was doing my undergraduate study in Electronics Engineering, we learned that feedback is an important element of any control system. And now, it seems, it is still an important element if we want our processes to deliver in a timely fashion. Allow me to describe the two events where the analogy appears. First, some people were asked to upload some data. But they weren't the ones to upload it to the system; they only had to provide the data file to the uploader. And that's where I fit in: uploading the data. Believing it was not in my best interest to upload the files myself, I delegated the task. After the delegate finished with the task, I notified some people that the task was done. Days passed. The software where the uploads were done has a report to validate the data, but somehow the report returned far too much invalid data. It took us several more days to find out that some of the up

Packet Too Large

While importing (restoring) a relatively large MySQL backup file, this error occurred: packet too large. After googling for a while I found this page. So the cure is to increase max_allowed_packet in my.ini.
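A minimal sketch of the fix, assuming a Windows-style my.ini (on Linux the file is usually my.cnf); the 64M value is only an example, size it to the largest single statement in your dump:

```ini
[mysqld]
# Allow client packets (and therefore single INSERT statements in the
# dump) of up to 64 MB; the default in older MySQL versions is much smaller.
max_allowed_packet=64M
```

The mysql client has its own copy of the limit, so for a one-off restore it can also be passed on the command line: `mysql --max_allowed_packet=64M db_name < backup.sql`. The server setting requires a restart to take effect when changed in the config file.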

Don't ever put the online redo log and its mirror on one drive

Everything was fine for the last year or so, even though the server crashed several times (maybe a bad power line...). Thanks to the RAID 5 mechanism (which I must re-add with manual commands each time a drive gets kicked out of the array), no data was lost in any of the past crashes. On 25 February 2010, my server's system crashed again. It seems that I had overlooked the fact that I placed mirrlog and origlog in one partition. The mirrlogA directory contains a member of redo log group 1 and one member of redo log group 3, and the origlogA directory contains the other member of the same redo log groups. The file in origlogA is mirrored in mirrlogA, and the thing is, I symlinked them both to the same partition (different directories, of course). The better practice is to make those two directories (mirrlogA and origlogA) reside on different physical hard drives. If they reside on the same drive, the probability of both copies of an online redo log being corrupted becomes significant. And that's exa
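A quick way to audit where each redo log member actually lives is to query the standard V$LOG and V$LOGFILE dynamic views (run as a privileged user); the query itself is generic, the interpretation — following symlinks down to physical drives — still has to be done by hand:

```sql
-- List every redo log member with its group, so you can spot
-- groups whose members end up on the same disk or partition.
SELECT l.group#, l.status, f.member
FROM   v$log     l
JOIN   v$logfile f ON f.group# = l.group#
ORDER BY l.group#, f.member;
```

If two members of the same group resolve (through symlinks) to the same physical drive, the mirroring buys nothing when that drive fails.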

Oracle 9.0.1.1 exp bug

I tried to run an intranet app from home, with a database connection from my laptop tunneled to the Telkom HQ Data Center. It seems the application issues too many queries and the data volume is a bit too much (OK, maybe I should profile the data volume and query counts...); it took me an hour of trial and error to realize: it is impossible to run the app over the tunneled connection without timing out. OK, then I proceeded to think: if this were my usual application, what would I do? I would fire up MySQL Administrator, back up the externally located database to my system, then restore it to my local MySQL database. For an Oracle database, the tool alternatives were:
- use TOAD to move the database (which, I think, would be tedious, and also I have no TOAD installed on my system)
- use Oracle Data Pump, which we cannot use because the database is 9.0.1.1, where such technology didn't exist yet, and also... I have no experience at this moment with Data Pump.
- migrate to MySQL,
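For a pre-Data-Pump database like 9.0.1.1, the remaining option is the classic exp/imp pair. A minimal sketch — the scott/tiger credentials, TNS aliases, and file names are placeholders:

```shell
# On a machine that can reach the remote database:
# export one schema to a dump file with the classic exp utility.
exp scott/tiger@remotedb FILE=scott.dmp OWNER=scott LOG=exp.log

# On the local machine, load that dump into a local database.
imp scott/tiger@localdb FILE=scott.dmp FULL=Y LOG=imp.log
```

exp/imp are version-sensitive: the general rule is to export with a client matching the lower of the two database versions, which matters when the target is newer than 9.0.1.1.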

saprfc installation on CentOS 5

After hesitating for a while, I struggled to bring saprfc to life on a CentOS 5 installation. OK, first, the people I'm helping had already installed the RFC SDK in /usr/sap/rfcsdk. I am using the RFC SDK extracted from RFC_45-20000055.SAR, which I downloaded from the SAP Service Marketplace support links (http://service.sap.com/). That SAR file is suitable for the Linux i386 architecture; you might want to use RFC_45-10003377.SAR for the Linux x86_64 architecture. Now I need to compile the saprfc extension. I used saprfc-1.4.1, downloaded from http://saprfc.sourceforge.net/ . But there was no phpize in the path, so I had to install php-devel first:
> yum install php-devel
But because the proxy setup was not done (our server lives outside the DMZ), I had to edit yum.conf:
> vi /etc/yum.conf
and add one line:
> proxy=http://....
Then I was ready to try php-devel again:
> yum install php-devel
OK. Now to compile saprfc. I read the INSTALL file in a flash and tried these steps (part b in the INSTALL file)
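The compile itself follows the usual out-of-tree PHP extension dance; a sketch, assuming the RFC SDK sits in /usr/sap/rfcsdk where the saprfc build expects it (if configure cannot find the SDK, the INSTALL file documents the relevant option, so check your copy rather than trusting this outline):

```shell
cd saprfc-1.4.1
phpize               # regenerate the configure script for this PHP install
./configure          # picks up the RFC SDK from /usr/sap/rfcsdk
make
make install         # as root; copies saprfc.so into PHP's extension dir

# Enable the extension (path of php.ini varies by distro):
echo 'extension=saprfc.so' >> /etc/php.ini
```

After restarting Apache (or the PHP process), `php -m | grep saprfc` should list the module if everything linked correctly.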