Posts

Showing posts with the label oracle

Copying Big Oracle Tables into Iceberg

During my piloting of the Trino query engine (formerly PrestoSQL), I tried several data warehouse destination options. The first option is Trino's Hive connector, with the data stored in MinIO storage accessed through the S3 API. The MinIO services ran on IBM hardware (ppc64le architecture), but that's another story for another blog post. The metadata were served from a Hive metastore, which takes some effort because at some point the metastore needs to access the S3 storage itself (which I don't fully understand why) and thus needs the proper Hadoop AWS jars. The second option is Trino's Iceberg connector, storing the data in the same MinIO storage and Hive metastore but with the Iceberg table format. For reference's sake, I will note the versions of the software used in this experiment: Trino version 442, deployed on OpenShift OKD 4.13 using Pulumi with the Trino Helm template as a starting point, using the pristine Trino image taken from Docker Hub (docker.io...
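The Iceberg-over-MinIO setup described above boils down to a Trino catalog properties file. This is a minimal sketch, not the original deployment's config: the metastore URI, MinIO endpoint, and credentials below are all illustrative placeholders.

```ini
# etc/catalog/iceberg.properties (sketch; all endpoints/credentials are placeholders)
connector.name=iceberg
hive.metastore.uri=thrift://hive-metastore:9083
hive.s3.endpoint=http://minio:9000
hive.s3.aws-access-key=minioadmin
hive.s3.aws-secret-key=minioadmin
# MinIO serves buckets under the path, not as subdomains
hive.s3.path-style-access=true
```

The `hive.s3.path-style-access=true` line is the one that usually trips people up with MinIO, since AWS-style virtual-host bucket addressing does not work against a single-host MinIO endpoint.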

SAP System Copy Lessons Learned

Background: Earlier this year I was part of a team that performed a system copy for a 20-terabyte-plus SAP ERP RM-CA system. Just now I was involved in doing two system copies in just over one week, for a much smaller amount of data. I want to note some lessons learned from the experience in this blog. For the record, we were migrating from HP-UX and AIX to the Linux x86 platform. Things that went wrong: First, following the System Copy guide carefully is quite a lot of work, mainly because some important material is hidden in references within the guide. And reading an SAP Note that is referenced in another SAP Note, which is referenced in the Installation Guide... is a bit too much. Let me describe what went wrong. VM time drift: The Oracle RAC cluster had a time-drift problem, killing one instance when the other was shutting down. The cure for our VMware-based Linux database servers is hidden in SAP Note 989963 "Linux VMWARE Timing", which is basically to add a tinker panic 0 in the ...
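The time-drift fix referenced above amounts to a one-line NTP configuration change. A minimal sketch of /etc/ntp.conf, assuming a standard ntpd setup — the server address is a placeholder:

```ini
# /etc/ntp.conf (excerpt; "tinker panic 0" should be the first directive)
tinker panic 0                 # never step or abort on large offsets inside a VM
server 0.pool.ntp.org iburst   # placeholder; use your site's NTP servers
driftfile /var/lib/ntp/drift
```

The `tinker panic 0` directive tells ntpd not to give up when a VM resumes with a large clock offset, which is exactly the situation that was killing the RAC instances.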

'Cached' memory in Linux Kernel

It was my understanding that the free memory in a Linux operating system can be seen in the second line of the output of "free -m": the first line shows memory that is really, really free, and the second line shows free memory combined with buffers and cache. The reason, I was told, is that buffer and cache memory can be converted to free memory whenever there is a need. The cache memory is filled with the Linux kernel's filesystem cache. The problem is, I was wrong. There are several cases where I found that cache memory was not reduced when an application needed more memory. Instead, part of the application's memory was sent to swap, increasing swap usage and causing pauses in the system (while the memory pages were written to disk). In one case an Oracle database instance restarted, and the team thinks it was because the memory demand was too high (I think this is a bug). The cache memory is supposed to be reduced when we...
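To see where "free" and "cached" actually come from, you can read the kernel's counters in /proc/meminfo directly instead of relying on free's summary line. A minimal sketch (Linux only; the field names are standard /proc/meminfo keys):

```shell
# Report free vs. buffer/cache memory straight from the kernel (values in kB).
# Note: not all of Cached is reclaimable (tmpfs pages count there too),
# which is part of why the "buffers+cache is free" rule of thumb fails.
awk '/^MemFree:/ {free=$2}
     /^Buffers:/ {buffers=$2}
     /^Cached:/  {cached=$2}
     END {printf "free=%d buffers=%d cached=%d naive-reclaimable=%d\n",
          free, buffers, cached, free+buffers+cached}' /proc/meminfo
```

Newer kernels expose a MemAvailable field in the same file, which is a much better estimate of memory actually obtainable by applications than the naive free+buffers+cached sum.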

Compiling PDO_OCI in CentOS / RHEL

Background: Similar to the previous post, my Yii-framework-based PHP application needs to access Oracle database tables. Yii requires the PDO_OCI PHP extension in order to access an Oracle database. I will describe the steps I took to compile the PDO_OCI extension from the PHP source package (SRPMS). Preparation: In CentOS, we need to create /etc/yum.repos.d/source.repo, because CentOS doesn't come with one:

[base-SRPMS-6.5]
name=CentOS-$releasever - Base SRPMS
baseurl=http://vault.centos.org/6.5/os/Source
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
priority=1
enabled=1

[updates-SRPMS-6.5]
name=CentOS-$releasever - Updates SRPMS
baseurl=http://vault.centos.org/6.5/updates/Source
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
priority=1
enabled=1

We also need the yum-utils and rpm-build packages: yum install yum-utils rpm-build. Then, download the source package file with yumdownloader: [root@essdev ~]# yumdownloa...

How To Build PDO_OCI in Ubuntu 12.04

Building the PDO_OCI extension on Ubuntu 12.04 is a little difficult. The reasons:
a. the pdo extension is included in the php5 package
b. PDO_OCI from pecl requires the pdo extension source, not the pdo extension binary
c. pdo from pecl cannot compile under PHP 5.3
d. the tgz resulting from 'pecl download PDO_OCI' is malformed (well, as of today, 11-07-2014, it is)
Why do I need PDO_OCI? Well, I use the Yii framework and need to access an Oracle database. Yesterday I tried this strategy to obtain the pdo_oci extension:
1. downloaded Instant Client 10.2.0.4 basic and sdk from OTN (Oracle Technology Network)
2. extracted the Instant Client files, moved them to /opt/instantclient_10_2, and created 3 symbolic links
3. downloaded the php5 source package and tried to rebuild the php5 debs using debuild, which would ensure the PHP extensions get built:
apt-get install dpkg-dev
apt-get source php5
apt-get build-dep php5
apt-get install devscripts
debuild -us -uc
4. after the php5 deb is created, change the debian/rules file to en...
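The symbolic-link step (2) usually looks like the following. This is a sketch, not the post's exact commands: the post doesn't spell out which three links it created, so these are the two unversioned .so names that Instant Client builds commonly require, and the directory is switched to /tmp here so the sketch runs without root.

```shell
# Create the unversioned .so links that configure scripts look for.
IC=/tmp/instantclient_10_2   # on a real server: /opt/instantclient_10_2
mkdir -p "$IC" && cd "$IC"
# The versioned files come from the Instant Client zip; in this demo the
# link targets may dangle, which is fine for illustration.
ln -sf libclntsh.so.10.1 libclntsh.so
ln -sf libocci.so.10.1  libocci.so
ls -l "$IC"
```

Without these links the PDO_OCI configure step typically fails to find the Oracle client libraries even though the versioned files are present.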

Database connection pooling problems

We had some problems running an Oracle database and an ASP.NET application in different network segments. Similar to Garreth Hooper's problem described in his blog, we encountered an ORA-12571 TNS: packet writer failure during periods of server inactivity. This all stems from the best practice of connection pooling, which is, in my opinion, a complex matter. The original problems: On one side we have application servers, hosting applications written in ASP.NET, Java, or PHP, and on the other side we have database servers. The first problem is that opening a connection from the application to the db server takes time. Even though this is not a certainty (a lot of factors come into play, such as database server configuration, firewall setup, etc.), this fact seems to encourage application developers to introduce a connection pooling system into the architecture. The second problem comes from the fact that an open connection is an open session on the database server, and it almost always takes precious resources, so...
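One common mitigation for ORA-12571 on idle pooled connections — not necessarily the fix this post settles on — is Oracle's Dead Connection Detection, which sends periodic keepalive probes so that a firewall between the network segments doesn't silently drop idle sessions. A sketch of the server-side sqlnet.ora entry:

```ini
# $ORACLE_HOME/network/admin/sqlnet.ora (server side)
SQLNET.EXPIRE_TIME = 10   # probe idle sessions every 10 minutes
```

The probe traffic keeps the firewall's connection-tracking entry alive, so a pooled connection that sat idle overnight is still usable when the application picks it up again.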

Nginx and Linux Kernel Update Blues

Background: We have a three-server landscape that serves our internal applications, all running nginx on RHEL Linux, installed from EPEL repository packages. Two days ago I updated the nginx on one server (let's say server A) in order to repackage it after enhancing it with HttpSubsModule; during the compilation process it complained about missing kernel functionality. For the compile to succeed, I updated the kernel-devel and kernel packages (on server A) to the latest version. After the compilation succeeded, I copied the resulting rpm to server B and installed it. The problem: After reinstalling the rpm package on server B, I didn't remember to restart the nginx server. After installing the new kernel on server A, I hadn't restarted it either. But the next day, because of one thing and another, server A got restarted, and the application on server A no longer worked. Server B's nginx server also got restarted, and afterwards was unable to serve any web pages...

Recovering from Deleted Row Disaster in MySQL

This post is dedicated to the accidents that every so often result in deleted rows of data. Of course everyone should have backups. But things do happen: the backups are nowhere to be found, or are not taken often enough, or the data lives in a forgotten database server. In the event of such an incident, deactivate your apps. Prevent unnecessary writes to the db that could overwrite your data. In the case of an Oracle database, you can try a flashback query, which assumes the transaction's before-image is still available in the undo data. In the case of a MySQL database, copy the whole data directory to a safe location, where we can try to dump the deleted rows using the Percona InnoDB recovery tool. But such methods fail miserably when we find out about the accident much too late. I think every person should be responsible for their actions, even if the action is deleting data. But the consequence of mass-delete actions is very different from single-row deletes. That is why a mass-delete function must be ...
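The "copy the whole data directory" step can be scripted defensively. A minimal sketch — the DATADIR default here is a demo path so the snippet is self-contained; point it at your real datadir (commonly /var/lib/mysql) after stopping mysqld:

```shell
# Snapshot the data directory before any recovery attempt.
DATADIR="${DATADIR:-/tmp/demo-datadir}"       # real systems: /var/lib/mysql
DEST="/tmp/mysql-datadir-backup-$(date +%F)"
mkdir -p "$DATADIR"                           # demo only: makes the sketch runnable
cp -a "$DATADIR" "$DEST"                      # -a preserves ownership/permissions/timestamps
echo "copied $DATADIR -> $DEST"
```

Working on a copy matters because tools like the Percona recovery toolkit scan raw InnoDB pages, and any further writes to the live datadir can permanently overwrite the deleted rows you are trying to carve out.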

Installing oci8 on Centos

Today I have the inclination to resume one of my postponed to-dos: installing the Oracle Instant Client and the PHP oci8 extension on two production servers. References: http://shamuntoha.wordpress.com/2010/04/12/centos-oracle-php-database-connect-oci-instantclient-oci8/ http://ubuntuforums.org/showthread.php?t=92528 http://www.jagsiacs.co.uk/node/77 Overview: The basic steps are to download and extract the basic Instant Client and the SDK Instant Client, then do a pecl install oci8 to download and compile the oci8 extension. There are a few issues I encountered when installing the php oci8 extension:
- pear tried to connect directly to the pear server; I had to set http_proxy using the pear config-set command
- missing links: because I downloaded zip files, not rpms, a few symbolic links were missing when trying to link oci8 with the Instant Client; the solution is to create them manually
- intermittent compilation problem: one of two compiles results in an error, even with exactly the same arguments and environment conditions. ...

Oracle 10.2 on Linux x86_64

How do we install Oracle Database Server on Linux? If you say "insert the DVD/CD, run the installer, click next-next-next", you are very wrong. The major issue: the Oracle 10g installer for Linux is not aware of all the system packages required before an Oracle Database installation. The result: the prerequisite checks all pass OK, but the installer can still stall midway with some error. Guideline: Oracle Metalink Note 169706.1 (I actually learned of this code from SAP Notes :) ). On Red Hat Enterprise Linux 4 or CentOS 4, install the following packages before installation (note there are both x86_64 and i386 packages): # binutils-2.15.92.0.2-13.0.0.0.2.x86_64 # compat-db-4.1.25-9.x86_64.rpm # compat-gcc-32-c++ # compat-libstdc++-33-3.2.3-47.3.x86_64.rpm # compat-libstdc++-33-3.2.3-47.3.i386.rpm # control-center-2.8.0-12.x86_64.rpm # gcc-3.4.3-47.x86_64.rpm # gcc-c++-3.4.3-47.x86_64.rpm # glibc-2.3.4-2.9.x86_64.rpm # glibc-2.3.4-2.9.i386.rpm # glibc-common-2.3.4-2.9.x86_64.rpm # g...
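The prerequisite check above can be automated rather than trusting the installer's own checks. A sketch, with the package list abbreviated to a few base names — see Metalink Note 169706.1 for the full set of versions and architectures:

```shell
# Report any missing prerequisite packages (abbreviated, illustrative list).
for p in binutils compat-db compat-libstdc++-33 gcc gcc-c++ glibc; do
    rpm -q "$p" >/dev/null 2>&1 || echo "missing: $p"
done
echo "prerequisite check complete"
```

Running a loop like this before launching the installer surfaces the missing packages up front, instead of letting the installation stall midway through.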

Beware: Your App Server is NOT plug-and-play

Recently, I tried to build a simple application with a somewhat curious architecture: a web application forwards requests from users to a message queue, and another application reads from the message queue, executing an insert SQL statement for each message. NetBeans 6.5 beta comes with the Glassfish v2 application server, which is essentially Sun Java Application Server 9.1-something with its JMS message queue implementation and database connection pool. With its default configuration, I created a JDBC connection to an Oracle Express 10g database, along with its connection pool. The first application is a JSF application that sends a JMS message to the message queue. The second is an EJB application consisting of a single message-driven bean; naturally the app server pools this bean too. During manual application testing, all worked well. During automated testing using ab (ApacheBench), things started to get strange after more than 10 concurrent connections. With 50 concurrent con...