Sunday, February 23, 2014

Tips on Creating PDF using mPDF PHP library

In this post I will describe several ways to format HTML documents that will be converted into a PDF file using the mPDF PHP library.

1. Change Page Orientation

Use the CSS page property together with mPDF's custom sheet-size @page attribute to change the page orientation.

<style type="text/css">
    .landscape {
        page: a4landscape;
    }
    .portrait {
        page: a4portrait;
    }

    @page a4portrait {
        sheet-size: A4;
    }
    @page a4landscape {
        sheet-size: A4-L;
    }
</style>

<div class="portrait"> Portrait page </div>
<div class="landscape"> Landscape page </div>



 2. Change page margins

To change the page margins, you need to pass additional parameters to mPDF's constructor. The page margins are in millimeters. Note that the example uses the Yii PDF extension instead of calling mPDF directly.

                $mPDF1 = Yii::app()->ePdf->mpdf('', 'A4', 0, '', 5, 5); // left and right margins = 5 mm

3. Fit a table into one page

To fit a table in its entirety onto a single page, we could use the CSS property page-break-inside: avoid on the table.

<table style="page-break-inside: avoid;"> ... </table>
...

And you might want to enlarge the maximum shrink factor (the larger the value, the more mPDF is allowed to shrink tables) :
                $mPDF1->shrink_tables_to_fit = 10;
 

4. Fit contents of a div into one page

We might need to fit not just one table but one DIV containing many tables and other tags. There should be only one such DIV, placed directly under the HTML BODY.
<body>
<div style="bottom: 0; left: 0; position: fixed; right: 0; top: 0;">
...
</div> </body>

5. Replace the center tag with another tag using a center class

The center tag is deprecated and is not supported by the mPDF library.
<center> This is a title </center>

should be changed to
<div class="center"> This is another title</div>

<style type="text/css">

.center {

   text-align: center;

}

</style>


6. Accelerate page rendering

Sometimes mPDF takes a long time to render pages because it iteratively tries to fit elements. You might want to change this parameter to speed things up (the default value is 10, meaning mPDF iterates in 1/10 increments):
                $mPDF1->incrementFPR1 = 4;

Tuesday, February 18, 2014

Exporting and importing data using Oracle 11

Background

In the era of Oracle 9-10, I usually exported and imported data using the exp and imp utilities. With Oracle 11g, things aren't as smooth as they used to be.

Problem A - Exp doesn't export empty tables

Exp no longer exports empty tables. The cause is that the exp utility processes data segments when creating the export dump, and Oracle 11g has a new feature called 'deferred segment creation': tables with no data occupy no segments, so those tables will be missing from the exp result. The symptom of this problem is simple: tables whose row count in the source system is zero are missing in the destination system.

Solution A-1 - Allocate segment for empty tables

ALTER TABLE table_name ALLOCATE EXTENT;

This is not ideal: we must execute the SQL above for each empty table before running the exp command.
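Rather than typing the statement by hand for every table, the statements can be generated from a list of empty table names. A minimal sketch (gen_allocate_sql is a hypothetical helper; in practice the table list could come from a query such as SELECT table_name FROM user_tables WHERE segment_created = 'NO' run through sqlplus):

```shell
#!/bin/sh
# Hypothetical helper: read table names from stdin, one per line,
# and emit an ALTER TABLE ... ALLOCATE EXTENT statement for each.
gen_allocate_sql() {
  while IFS= read -r tbl; do
    printf 'ALTER TABLE %s ALLOCATE EXTENT;\n' "$tbl"
  done
}

# Example: feed it a couple of table names and save the result.
printf 'EMP\nDEPT\n' | gen_allocate_sql > allocate.sql
```

The resulting allocate.sql can then be run in SQL*Plus before invoking exp.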

Solution A-2 - Disable deferred segment creation

Alter system set DEFERRED_SEGMENT_CREATION=FALSE;

This needs to be done before we even create our empty tables.

Solution A-3 - Don't use exp/imp, use expdp/impdp instead

If we have administrator-like, OS-level access to the source and destination database servers' filesystems, Oracle Data Pump (expdp/impdp) is a good solution. Expdp and impdp are faster than exp and imp. But they are not without problems, as we shall see.
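A typical Data Pump round trip looks like the sketch below. The schema name, directory object, and dump file name are placeholders, and the DIRECTORY object must already exist in both databases; the commands are wrapped in hypothetical helper functions and only echoed here (a dry run):

```shell
#!/bin/sh
# Dry-run sketch of a Data Pump export/import pair.
# Arguments: $1 = schema name, $2 = dump file name (both placeholders).
datapump_export_cmd() {
  echo "expdp system SCHEMAS=$1 DIRECTORY=dpump_dir1 DUMPFILE=$2 LOGFILE=exp_$1.log"
}
datapump_import_cmd() {
  echo "impdp system SCHEMAS=$1 DIRECTORY=dpump_dir1 DUMPFILE=$2 LOGFILE=imp_$1.log"
}

# Run the export on the source server, copy the dump file into the
# destination server's directory, then run the import there:
datapump_export_cmd sh sh.dmp
datapump_import_cmd sh sh.dmp
```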

Problem B - Expdp in Oracle 11 is buggy

Some installed Oracle 11 instances still have buggy expdp routines. The symptom is an ORA-14102 error when doing impdp. It seems a bug in expdp causes the generated import SQL to contain the clause 'NOLOGGING NOCOMPRESS LOGGING', which cannot be correct. Makes me wonder what all the Oracle QA guys are doing.

Solution B-1 Install Oracle Patch 8795792

You need to log on to Oracle Metalink and download patch 8795792. After shutting down your Oracle database, install the patch, start the database, and execute the post-install steps. I forgot the post-install steps (missed them in the readme file), and it took us three export-import cycles to realize the patch was not yet effective. Of course, you need an active Oracle support contract; IMHO, if you don't have one, you would be better off running MariaDB or MongoDB or another open source database.

Problem C - Error importing partitioned data 

One of the destination databases is Oracle Express 11g, which doesn't have Enterprise features, but the source database is Oracle 11g Enterprise. When partitioned tables were imported, we got an ORA-00439 feature not enabled : Partitioning error. Fortunately, Oracle 11g's impdp has a merge partition option.

Solution C-1 Use merge partition option when importing

Just add partition_options=merge to impdp's command line. Beware that the partitions will be gone from the tables' DDL in the destination system.

impdp system TABLES=sh.sales PARTITION_OPTIONS=merge 
DIRECTORY=dpump_dir1 DUMPFILE=sales.dmp



Wednesday, February 5, 2014

Database connection pooling problems

We had some problems running an Oracle database and an ASP.NET application in different network segments. Similar to Garreth Hooper's problem described in his blog, we encountered an ORA-12571 TNS: packet writer failure during periods of server inactivity.
This all stems from the best practice of connection pooling, which in my opinion is a complex matter.

The Original Problems

On one side we have application servers, hosting applications written in ASP.NET, Java, or PHP, and on the other side we have database servers.
The first problem is that opening a connection from the application to the db server takes time. Even though this is not a certainty (a lot of factors come into play, such as database server configuration, firewall setup, etc.), it seems to encourage application developers to introduce a connection pooling system into the architecture.
The second problem comes from the fact that an open connection is an open session in the database server, and it almost always takes precious resources, so databases naturally want to ensure that idle connections don't linger too long.
The third problem: network firewalls want to police the network, so instead of the database server, the network firewall does the disconnecting, in the name of network performance management.

Connection Pool

A connection pool is a list of connections kept open to the database server. Even if we know nothing about it, the standard ODP.NET connection uses connection pools, and C3P0 / DBCP connection pools are the norm in Java Spring Framework applications. So we must live with them. The thing is, connection pools prevent the application from actually closing database connections; instead of closing a connection, the close operation returns it to the pool.
If, for example, the application requests a new connection 30 minutes after the close operation, the connection pool might return the same connection. That connection might already have been killed by the database server, or by the network firewall, which causes similar problems with both Oracle and MySQL database servers.
The configuration points that we might need to adjust are :
1) the database server parameters
2) db connection string in the application
3) connection pool parameters in the application (this is the same as 2 on .NET platforms)

Solution A : Ensure connections are not killed

So, we could try to ensure that connections are not killed at all. That includes configuring the database server so there is no idle disconnect, and configuring the firewall to make special cases for database ports. Oh, the last one is impossible, I forgot. So, for Oracle databases, we could enter this one-liner in NETWORK/admin/sqlnet.ora and restart the TNS listener, so the database would initiate ping/network activity on the connection after 10 minutes of idle time :
SQLNET.EXPIRE_TIME= 10
This would ensure the network firewall stays happy and prevent it from killing our connections.
Then we also need to ensure the Oracle user's profile has unlimited IDLE_TIME, so the database would not kill our connections (check the current limit with the queries below; raise it with ALTER PROFILE ... LIMIT IDLE_TIME UNLIMITED if needed) :
SELECT PROFILE, LIMIT FROM DBA_PROFILES WHERE RESOURCE_NAME = 'IDLE_TIME';
SELECT PROFILE FROM DBA_USERS WHERE USERNAME = 'username';
 
Or, for MySQL databases, set the wait_timeout to a large value :
#The number of seconds the server waits for activity on a connection before closing it
wait_timeout=28800
#The number of seconds the server waits for activity on an interactive connection before closing it.
interactive_timeout=28800

For MySQL, the configuration you might want to set is the keepalive setting in the client connection parameters. For example, in a MySQL Connector/NET connection string :
Server=127.0.0.1;Uid=root;Pwd=12345;Database=test;Keepalive=120

For MySQL/Java connections :
jdbc:mysql://localhost:3306/sakila?tcpKeepAlive=true&autoReconnectForPools=true

Solution B - Ensure connections are valid, reconnect if not

In Java's DBCP connection pooling, we could set these parameters to ensure that each connection is valid before it is passed to the application code (the driver, URL, and credentials below are placeholders) :

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@localhost:1521:orcl"/>
    <property name="username" value="scott"/>
    <property name="password" value="tiger"/>
    <property name="testOnBorrow" value="true"/>
    <property name="validationQuery" value="SELECT 1 FROM DUAL"/>
</bean>
So all connections would be tested before use, preventing users from seeing connection errors.
For Oracle connections on the .NET platform, we could set this in the connection string as follows :
User Id=scott;Password=tiger;Data Source=oracle;Validate Connection=true
For MySQL connections on the .NET platform, the validation is already hard-coded; there is no need to set any option (see the source).

Disclaimer

Please test the suggestions in this article in a controlled test environment before applying them in production, because my environment is definitely different from yours :).

Tuesday, February 4, 2014

How to compile using Visual Web Developer Express in lieu of Visual Studio Professional

When we build an ASP.NET application, we usually use Visual Studio Professional. But when several members of the team want to write some code, it is beneficial to be able to develop using free software only; that means less hassle with licenses.
The missing features of the free Microsoft Visual Web Developer Express are :
- No Crystal Reports designer
- No plugins - means Oracle Developer Tools for Visual Studio will not be available
- Unable to connect using ODP.NET in the Database Explorer
- No Microsoft Report Viewer control

When some of the components above are referenced in an ASP.NET project, opening the project file results in missing (yellow exclamation mark) references. To fulfill those dependencies, some runtime software should be installed :
- SAP Crystal Reports runtime engine, see this SAP download page.
- Microsoft Report Viewer Redistributable package, see this Microsoft download page.

Of course the report design tools are not available, but the project will compile just fine. Let's just hope none of the reports needs to be changed for a while.

Sunday, February 2, 2014

CORS - Cross Origin Resource Sharing using CouchDB

Recently I tried to build a simple application that creates simple math questions and records the time taken by the student to finish one page of questions. The short story is that I finally succeeded in implementing the question generation using Javascript without any server-side programming. But what about recording the time taken? We need a database on some server for that.
I want to use a database, but I don't want to write server-side code. So I guess CouchDB is a good choice, given that it supports a REST-based API and all. It turned out that calling an HTTP REST service from the browser is much more complicated than calling the same service from a server.
To make a long story short, here is what I found :
  1. Javascript running in a page in a web browser is not allowed to call URLs outside the domain of the page.
  2. Except, when the server serving the URL specifically allowed this by giving Cross Origin Resource Sharing header.
  3. Credentials passing is doubly difficult, because a separate Allow-Credentials header is required, and because of the involvement of preflight requests: an HTTP request with the OPTIONS method that is issued before the actual request. The facts are :
    1. Credentials will not be passed when the browser is doing a preflight request
    2. Requests with a custom Authorization header (containing the said credentials) will be preceded by a preflight request because of the existence of that header
    3. Some CouchDB servers will reject the OPTIONS request on some URLs with a not-authorized HTTP status.
    4. Debugging CORS in Firebug/Firefox is a nightmare. Use Chrome for better sanity.
  4. The Cloudant CouchDB server will not give CORS headers for the _uuids URL. This might be true for other servers. CORS configuration must be done using curl (see this gist)
  5. The Couchappy CouchDB server keeps requesting authentication even for OPTIONS requests, although Couchappy has a CORS section in its configuration.
  6. My Iriscouch CouchDB server has intermittent availability problems, making it unreliable for development use. Something might have gone wrong during my registration/service provisioning. Iriscouch keeps showing no CORS headers even after I enabled CORS in the Futon configuration.
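A quick way to see whether a server actually sends CORS headers is to simulate the preflight with curl and check the response headers. A minimal sketch (the URL is a placeholder and has_cors_header is a hypothetical helper) :

```shell
#!/bin/sh
# Hypothetical helper: succeed if the HTTP response headers piped in
# contain the Access-Control-Allow-Origin header.
has_cors_header() {
  grep -qi '^Access-Control-Allow-Origin:'
}

# Simulated preflight against a CouchDB server (placeholder URL):
#   curl -is -X OPTIONS -H "Origin: https://myapp.example" \
#     "https://username.cloudant.com/dbname" | has_cors_header \
#     && echo "CORS enabled" || echo "no CORS header"
```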
My first success was obtained by :
  1. Using the Cloudant server, configure CORS using curl with allow_credentials false and origins * :
    curl -k -i -u username  -X PUT https://username.cloudant.com/_api/v2/user/config/cors -H "Content-Type: application/json" -d '{"enable_cors":true,"allow_credentials":false,"allow_methods":["GET","PUT","POST","DELETE","OPTIONS"],"origins":{"*":{}}}'
  2. Configure the database using Cloudant's Dashboard UI so Everyone has write and read access
  3. Call the CouchDB API using jQuery :
        docdata = '{"level" : "' + currentExercise.level + '", "sheet" : ' + currentExercise.sheet
            + ', "msecs" : ' + s.diff + ', "acc" : ' + s.score + ', "d" : "' + datetime + '"}';
        $.ajax({
            url: baseUrl + "/sheet_submission",
            data: docdata,
            type: 'POST',
            dataType: 'json',
            contentType: 'application/json',
            error: function() {
                alert('document save failed :' + docdata);
            },
            success: function (d1, st1, jq1) {
                alert('doc save ok : ' + docdata);
                recordCallback();
            }
        });