Very strange behaviour - advice needed

Hi all,

I have a PHP system that generates some quite large (200K-800K, 4-18
page) .pdf files. It has been working fine for months, on Ubuntu/nginx
0.7.67 and Windows Apache/2.2.11 (Win32) with PHP/5.3.0.

Then a recent change involved making some of the images in the .pdf
files slightly larger, and also involved moving the production server
(from Ubuntu 10.04 with nginx to Apache 2.2 under Ubuntu 10.10).

I immediately started getting reports of black screens when showing the
PDF files. Investigation showed that:

- The log showed the correct .pdf URL.
- The time taken was consistent with the .pdf being generated (4-8
  seconds).
- The content was displayed as a PDF (so the headers sent were PDF
  headers).
- The content was actually the initial log-in screen, as if the
  session data was not found.

The PDF is generated in response to a JavaScript button that
executes code like this:

function dopopup(what, act) {
    var url = 'http://sopsystem.anake.hcs/' + act + '/' + what + '350.pdf';
    var win = window.open(url, '_blank');
    win.focus();
    window.setTimeout('location.reload(true)', 8000);
}
This is then converted using

location ~ \.pdf$ {
    rewrite ^/(email|print)/(quote|ack|amend)(\d+)\.pdf$
            /index.php?view=$2&act=$1&id=$3;
}
and it is served in the new window.
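Roughly, index.php then picks the rewritten parameters up like this (a
simplified sketch, not the real code - the session key and the login
include are illustrative):

<?php
session_start();
$view = isset($_GET['view']) ? $_GET['view'] : ''; // quote | ack | amend
$act  = isset($_GET['act'])  ? $_GET['act']  : ''; // email | print
$id   = isset($_GET['id'])   ? (int)$_GET['id'] : 0;

if (!isset($_SESSION['user'])) {
    // No session data: show the login screen instead - which is
    // exactly what the broken PDF windows display.
    include 'login.php'; // placeholder
    exit;
}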

Further investigation has shown that the session is indeed empty when
this happens.

I am left wondering why the session sometimes cannot be found when
opening a new window, but usually works just fine.
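A sketch of the kind of logging that should pin this down - record the
session id and login state for every PDF request, so failures can be
matched against the access log (the log path and session key are
placeholders):

<?php
session_start();
error_log(sprintf("[pdf] %s sid=%s logged_in=%s uri=%s\n",
    date('c'),
    session_id(),
    isset($_SESSION['user']) ? 'yes' : 'no',
    $_SERVER['REQUEST_URI']
), 3, '/tmp/pdf_session.log'); // message_type 3 appends to the given file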

Having never had a problem with nginx before or on the test setup, I
swapped Apache 2.2 out and put nginx in, and the level of reports has
dropped, but not gone away.

I have two development systems: a Windows Apache 2.2 system for
development and an Ubuntu nginx VM for staging before moving to
production.

The Apache 2.2 system is a standard WAMP install. Here some documents
always fail as above, and some always succeed. A large one failed - and
appeared to take Apache down with it. So I stopped all processes (which
took some considerable time) and started them again. Then the one that
had failed worked! More puzzlement!

So I tried to test the nginx side.

Then a large PDF (17 pages) failed. So I opened the base record to see
if there was any obvious data error. There wasn't, but I saved it
anyway. Now it works. I can create PDF files from all the other records
on the nginx side now, including the one that failed a moment ago.

So I rebooted the nginx machine and tried again; this would ensure
everything was fresh. This time I was told the .pdf file was corrupted
and could not be repaired. Looking at the source, it appeared to be a
valid .pdf file. So I hit refresh in the browser, and the PDF displayed
properly.

Closed the window and triggered it again from the link - it works fine.
Others also work fine.

Rebooted the VM server and clicked the link again - this time the new
window displayed the login screen ALTHOUGH THE URL WAS THE .PDF. This is
what I would expect if the session no longer existed. Conclusion -
sessions are wiped when FastCGI PHP starts. Reasonable.
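If PHP is using file-based sessions, it is worth checking where they are
kept - a save path on a directory that is cleared at boot would explain
this on its own. A quick check, assuming the default file handler:

<?php
// Where do session files live, and would they survive a reboot?
echo ini_get('session.save_handler'), "\n"; // usually "files"
echo ini_get('session.save_path'), "\n";    // e.g. /var/lib/php5 - a guess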

So I closed the windows, went back to the main screen, logged in and
called up the PDF again. It worked this time.

So I rebooted the server, logged in and requested one of the ones that
had never failed. It failed: "PDF corrupt".
So I hit refresh and it appeared - all 18 pages of it.
Going back to the first one that had failed - it now works at the first
attempt.

Has anyone any idea what might be going wrong, or any suggestions as to
how to debug it further?

The only thing that makes sense with what I have observed is that
something is getting messed up and crashing on the first call for a PDF
file, and the recovery processing is actually recovering.
But is it the PDF library, the PHP FastCGI setup, or something else? And
how can I detect which?
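One way to separate the suspects: dump the generated bytes to disk
immediately before sending them. If the saved copy opens cleanly while
the browser copy is corrupt, the fault is in delivery (server or
FastCGI), not in the PDF library. A sketch, assuming the finished PDF
ends up in a string $pdfData (the variable name and dump path are
placeholders):

<?php
// Keep a server-side copy of exactly what was sent to the browser.
file_put_contents('/tmp/pdf_debug_' . time() . '.pdf', $pdfData);

header('Content-Type: application/pdf');
header('Content-Length: ' . strlen($pdfData));
echo $pdfData;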

All input gratefully received.

Ian

I've Googled and found nothing helpful. Maybe my Google foo is low
today.

Has anyone any idea what might be happening?

All input gratefully received.

What browsers did you use for testing?

There is kind of a known bug/problem (
http://support.microsoft.com/kb/812935 ) with IE (older versions at
least): the browser doesn't like Cache-Control: No Store / No Cache
headers (for certain types of documents), which obviously are sent by
default if you are using PHP sessions.

One of the suggested fixes is to add the following header lines to the
code before sending the file:

<?php
// fix for IE caching or PHP bug issue
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
?>

Similar case/sample (
http://blog.globalfolders.com/2010/10/pdf-downloads-in-ie-over-ssl-using-nginx/
) just using the nginx X-Accel-Redirect.
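For reference, a minimal sketch of that approach - PHP does the session
check, then hands the actual file delivery back to nginx (the location
name, session key and file path are assumptions, and nginx needs a
matching location marked internal):

<?php
session_start();
if (!isset($_SESSION['user'])) {    // placeholder login check
    header('Location: /login');
    exit;
}
// Let nginx stream the already-generated file from an internal location.
header('Content-Type: application/pdf');
header('X-Accel-Redirect: /protected_pdfs/quote350.pdf'); // assumed path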

See if that helps.

If not, you probably need to enable some kind of PHP error logging
(logging to file) to catch the problem - it might as well be just the
PHP script exhausting 'memory_limit', which in some L(W)AMP
installations is pretty low.
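A sketch of what that might look like at the top of the script (the log
path is a placeholder):

<?php
// Send all errors to a file instead of mixing them into the PDF output.
error_reporting(E_ALL);
ini_set('display_errors', '0');
ini_set('log_errors', '1');
ini_set('error_log', '/var/log/php_errors.log'); // placeholder path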

rr

Hi Reinis,

On 28/03/2011 16:30, Reinis R. wrote:

All input gratefully received.

What browsers did you use for testing?

I use Firefox, but my users mostly use IE.

<?php
// fix for IE caching or PHP bug issue
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
?>

Similar case/sample (
http://blog.globalfolders.com/2010/10/pdf-downloads-in-ie-over-ssl-using-nginx/
) just using the nginx X-Accel-Redirect.

See if that helps.
Thanks for the heads-up!

If not, you probably need to enable some kind of PHP error logging
(logging to file) to catch the problem - it might as well be just the
PHP script exhausting 'memory_limit', which in some L(W)AMP
installations is pretty low.

I’ve raised it from 128MB to 256MB, but it appeared to make no
difference.

I'll try turning up the error logging and sending it to a file.

Thanks again.

Ian
