Hi,
I am having some trouble with the dump feature.
Has anybody tested it with tables larger than 15 MB on a slow server (350 MHz PII)? It is impossible to get a dump of a big table with the current version, but the 'old' phpMyAdmin 2.2.0 handles these tables easily.
The previous versions of phpMyAdmin built the dump while the client was receiving it. That method saves memory and time, because with the current method the script first builds the complete dump in memory and only then sends it to the client.
If the PHP settings for memory usage (--enable-memory-limit, memory_limit <= 8M) and maximum execution time (max_execution_time < 30) are too small, phpMyAdmin _must_ fail even on fairly small tables of just over 5 MB, because the PHP thread runs out of resources.
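Just to illustrate what I mean, here is a rough sketch of the "build while sending" approach (not the real phpMyAdmin code; the table name and connection details are made up): fetch one row at a time, echo it, and flush, so the peak memory use stays at one row instead of the whole dump.

<?php
// Rough sketch of the "build while sending" approach (hypothetical
// table name and connection details, not the real phpMyAdmin code).
$link = mysql_connect('localhost', 'user', 'password');
mysql_select_db('test', $link);

header('Content-Type: text/plain');
header('Content-Disposition: attachment; filename="dump.sql"');

$result = mysql_query('SELECT * FROM big_table', $link);
while ($row = mysql_fetch_row($result)) {
    // Build and send one INSERT statement at a time instead of
    // collecting the whole dump in one string first.
    $values = array();
    foreach ($row as $value) {
        $values[] = "'" . addslashes($value) . "'";
    }
    echo 'INSERT INTO big_table VALUES (' . implode(', ', $values) . ");\n";
    flush();    // push the line to the client right away
}
mysql_free_result($result);
?>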
I think we should turn back to the old method.
What do you think about it?
On Fri, 28 Sep 2001, Steve Alberty wrote:
Hi,
I am having some trouble with the dump feature.
Has anybody tested it with tables larger than 15 MB on a slow server (350 MHz PII)? It is impossible to get a dump of a big table with the current version, but the 'old' phpMyAdmin 2.2.0 handles these tables easily.
Yes, I have tested it. Have a look at Bug #448223, "Dump Hangs". The limitations are that PHP times out, that PHP runs out of memory, or that PHP takes so long that the browser times out.
We will not be able to escape the browser timing out, though.
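For the first two limits, the export script can at least try to relax them itself before starting (a sketch only; set_time_limit() is ignored under safe_mode, and the '16M' value here is just an example):

<?php
// Try to give the export more room before starting it (sketch only).
// set_time_limit() is ignored when safe_mode is on, and memory_limit
// only exists when PHP was compiled with --enable-memory-limit.
@set_time_limit(0);                   // no script time limit
@ini_set('memory_limit', '16M');      // raise the memory ceiling a bit

echo 'max_execution_time: ' . ini_get('max_execution_time') . "\n";
echo 'memory_limit:       ' . ini_get('memory_limit') . "\n";
?>

The browser timeout is the one thing this cannot help with.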
I think we should turn back to the old method. What do you think about it?
As Loic mentioned in his reply to this message, we need the new method for the compressed output and the output buffering. If we can figure out a way to make those send packets of data while we are building the dump, then we can eliminate this problem, but I do not know if that is possible. I think it could be done with gzip (due to the nature of gzip) and output buffering (ob_flush?), but I am not certain about bzip2 compression (which never seemed to work on my PHP anyway, something with the libraries).
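For the gzip case, something like this might work (an untested sketch, with a made-up table name): compress the dump in chunks with gzencode() and flush each chunk as soon as it is ready. Concatenated gzip members are still a valid gzip file, so gunzip should decompress the whole thing, although I am not sure every tool copes with multi-member files, and it needs PHP built with zlib support.

<?php
// Untested idea: send the dump as a series of small gzip members
// instead of compressing one huge string at the end.
header('Content-Type: application/x-gzip');
header('Content-Disposition: attachment; filename="dump.sql.gz"');

$result = mysql_query('SELECT * FROM big_table');   // hypothetical table
$buffer = '';
$rows   = 0;
while ($row = mysql_fetch_row($result)) {
    $buffer .= 'INSERT INTO big_table VALUES (...);' . "\n";  // real values left out here
    if (++$rows % 500 == 0) {
        echo gzencode($buffer);   // compress and send this packet now
        flush();                  // (plus ob_flush() if output buffering is on)
        $buffer = '';
    }
}
if ($buffer != '') {
    echo gzencode($buffer);       // last partial packet
    flush();
}
?>

Whether bzip2 can be streamed the same way I do not know; the bz* functions on my PHP never worked, so I cannot test it.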