Saturday, November 21, 2009

IWSS - Java developer's nightmare

InterScan Web Security Suite, abbreviated IWSS, is said to be a comprehensive solution tailored for large corporate deployments. A Trend Micro product, IWSS scans every HTTP access from a corporate intranet out to the great internet. Unfortunately, it assumes that every .jar file fetched from the intranet is a Java applet, so it naively applies bytecode manipulation to it, making a dialog box pop up every time a class in the jar accesses something in the host OS. Like accessing a file in Eclipse's plug-ins folder. Like accessing a file in the Local Settings folder.

Why does it have to be like this? Since when is an antivirus company allowed to perform manipulations that, in the past, only viruses did? Let me explain. A jar file is a JVM executable. I have downloaded tens if not over a hundred jar files from the internet, because that is exactly how Eclipse's update mechanism works: by downloading jar files from the internet. IWSS tampered with them, modified them... it modified the executables so I can no longer expect their original behaviour. It has ruined tens if not hundreds of jar files on my two laptops, which I must now clean up. Viruses of the past did something similar: exe files were modified, their headers altered to call the virus body appended to the end of the file before jumping to the executable's original entry point.
Please. I don't think there is any reason for an antivirus company to behave like a virus. For the damage already done, I think Trend Micro must provide its users a cleanup tool: one that scans a hard disk for altered jar files (those with the com.iwss package inside) and restores them to normal, undoing the bytecode manipulation described above. That would be just like a virus cleanup tool, no?
I don't think Trend Micro has done everything it could to detect whether a jar file is an applet or not. Oh, I see: it seems they are simply UNABLE to do that. I wonder how a company UNABLE to do that was nevertheless able to create the bytecode transformation I described in the previous paragraph.
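Until such a cleanup tool exists, a rough way to find the affected files is to look inside each jar for the injected package. A minimal sketch, assuming the injected classes live under com/iwss (the package name mentioned above) and using Python's zipfile module to list entries, so it does not depend on unzip being installed:

```shell
# List every .jar under a directory that contains classes from the
# com/iwss package -- the marker IWSS leaves behind.
find_tampered_jars() {
    find "$1" -name '*.jar' -print0 |
    while IFS= read -r -d '' jar; do
        # "python3 -m zipfile -l" prints the archive's file listing.
        if python3 -m zipfile -l "$jar" 2>/dev/null | grep -q 'com/iwss'; then
            echo "$jar"
        fi
    done
}

# Example: scan an Eclipse installation's plug-ins folder
# find_tampered_jars ~/eclipse/plugins
```

Any jar this prints would have to be re-downloaded from a clean connection, since the original bytes are gone.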

Saturday, November 14, 2009

Installing Mule ESB on CentOS

I have one configuration that downloads SAP data using Mule 1.4.4. Why not Mule 2? The SAP transport for Mule 2 did not exist yet when I built the system. It was implemented on a Windows sandbox system, which unfortunately has a few shortcomings: it is not backed by tech-support staff, it is virus-prone, it must be started up manually after power outages, and it shares system load with an SAP ERP sandbox and an Ubuntu OS loaded in VMware (whew, that's a lot...).
So I thought it would be better to migrate the Mule system to a CentOS-powered virtual machine located in the data center (where power outages are rare and, of course, we have no worm/virus problems).
I extracted mule-1.4.4.tar.gz into /opt/mule-1.4.4, put the IBM JDK 1.5 bin directory onto the user's PATH by editing the user's .bashrc, and extracted SAP JCo 2.1.8 into /opt/sapjco2.
I tried to run Mule for the first time, but it wanted to download mail-1.4.jar. I copied the missing file from the previous system to lib/user, along with activation-1.1.jar. Mule then ran but complained that it was unable to write pid files and log folders, so I chown-ed the entire Mule tree to the mule user I had created for the purpose of running the system.
Now, for the SAP integration part: I extracted it in the home folder of the mule user, then copied mule-transport-sap.jar and mule-transport-sap-examples.jar from the target folders of the previous build (on a WinXP system, I think) into lib/user.
I tried to run the config file from the mule-sap-transport system. Unfortunately it choked on missing commons-dbcp and ojdbc.jar (I use an Oracle JDBC endpoint). I copied the missing jars from the previous system into lib/opt (I don't know whether it would make any difference if I used lib/user), and symlinked sapjco.jar into lib/user.
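With jars accumulating in lib/user across several steps, a quick inventory check helps. A small sketch, assuming the jar names from the steps above (commons-dbcp and ojdbc.jar went to lib/opt, so they are not listed here):

```shell
# Report which of the jars copied above are present under lib/user.
# The jar list is the one from this post; adjust names/versions as needed.
check_user_jars() {
    for jar in mail-1.4.jar activation-1.1.jar \
               mule-transport-sap.jar mule-transport-sap-examples.jar \
               sapjco.jar; do
        # -e follows symlinks, so the sapjco.jar symlink counts as present
        # only if its target actually exists.
        if [ -e "$1/lib/user/$jar" ]; then
            echo "ok:      $jar"
        else
            echo "missing: $jar"
        fi
    done
}

# Example: check_user_jars /opt/mule-1.4.4
```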
At long last, Mule ran. But when I triggered the SAP download (an RFC call), exceptions spewed into the console (or mule.log, depending on whether I used mule start -config ... or just mule -config ...). It could not find the native library, even though the library was sitting right there in the same folder. It must be the wrapper mechanism (Mule uses the wrapper from Tanuki Software) that causes this strangeness. Poking through my .bashrc again, I found that LD_LIBRARY_PATH was not being exported... silly me. I added LD_LIBRARY_PATH to the export clause, exited the console, logged back in, and now it works.
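For reference, the .bashrc lines end up looking roughly like this (the /opt/sapjco2 path is the one from this setup; the key point is the export itself, since merely assigning the variable does not pass it on to child processes such as the Tanuki wrapper and the JVM it launches):

```shell
# SAP JCo's native library lives here in this setup.
LD_LIBRARY_PATH=/opt/sapjco2${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
# Without the export, child processes never see the variable --
# exactly the "cannot find native library" failure described above.
export LD_LIBRARY_PATH
```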

RAID5 Failure, Again

This time I got (another) annoying RAID5 failure. The CentOS 4.7 server would not boot because it was unable to start the RAID5 array. Yes, this is the second time I have stumbled on this problem (see this Indonesian-written post). I burned a new CentOS 4.7 DVD (using a new REAL server's DVD writer, no less), booted from it, typed linux rescue at the boot prompt, and tried to follow exactly the same steps I had done and written up in this blog, but with no success: the system complained that the superblock doesn't match.
It seems I had forgotten this server's new RAID5 configuration. I forgot that I had reinstalled this server with SAP ERP Netweaver, creating two software RAID5 arrays in the process, and of course with different partitions.
The partitions were sda3, sdb3, sdc1, and sdd2. Together the four made up a 215-megablock (that's about 100 GB, I think) md1 device. Here's the chemistry:
- The kernel would not add the non-fresh member (sdd2) into the array; it kicked it out of the RAID assembly.
- The remaining three-partition assembly could not be started. The cause, which I found out after forcing the array to run, is that the event counter in sda3 does not match the others'. But the kernel said nothing about this in the dmesg log; it just said 'unable to start degraded array ...'.
- I forced the assembly to run. This must be done while the md device is stopped. So: mdadm -S /dev/md1, then mdadm -A --force --run /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc1 /dev/sdd2. It did run, writing error messages about sda3.
- But sdd2 was still kicked out of the array, so I had to add it back manually: mdadm -a /dev/md1 /dev/sdd2.
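Putting the steps above together, the whole recovery sequence is short but destructive, so here is a sketch with a dry-run guard: it only prints the commands unless CONFIRM=yes is set. The device and member names are the ones from this incident; adjust them for your own array.

```shell
# RAID5 recovery sequence from the steps above. By default this only
# prints what it would do; set CONFIRM=yes to actually run the commands.
MD=${MD:-/dev/md1}
MEMBERS=${MEMBERS:-"/dev/sda3 /dev/sdb3 /dev/sdc1 /dev/sdd2"}
NONFRESH=${NONFRESH:-/dev/sdd2}

run() {
    if [ "$CONFIRM" = "yes" ]; then "$@"; else echo "would run: $*"; fi
}

run mdadm -S "$MD"                          # stop the half-assembled array first
run mdadm -A --force --run "$MD" $MEMBERS   # force-assemble despite the event-counter mismatch
run mdadm -a "$MD" "$NONFRESH"              # re-add the kicked-out member; resync starts
run cat /proc/mdstat                        # recovery progress shows up here
```

$MEMBERS is deliberately left unquoted so it splits into the four device arguments.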
Now I am just waiting for the recovery to finish (recovery status can be read in /proc/mdstat), so I can boot this system up with confidence. I hope nothing else goes wrong.