Hi,
I'm having some trouble with the dump feature.
Has anybody tested it with tables larger than 15 MB on a
slow server (350 MHz PII)? It is impossible to get a dump
of a big table with the current version, while the 'old'
phpMyAdmin 2.2.0 handled these tables easily.
The previous versions of phpMyAdmin built the dump while the
client was receiving it. That method saves memory and time,
because with the current method the script first builds the
complete dump in memory and only then sends it to the client.
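
To make sure we are talking about the same thing, here is a
rough sketch of the two approaches in PHP (the table name and
the build_insert_statement() helper are only placeholders, not
the real phpMyAdmin code; an open MySQL connection is assumed):

  // Old, streaming approach: send every row as soon as it is
  // read, so peak memory stays at roughly one row.
  $result = mysql_query('SELECT * FROM big_table');
  while ($row = mysql_fetch_row($result)) {
      echo build_insert_statement($row);  // placeholder helper
      flush();                            // push it to the client now
  }

  // Current, buffering approach: the whole dump is collected in
  // a string first, so a 15 MB table needs >= 15 MB of script memory.
  $dump   = '';
  $result = mysql_query('SELECT * FROM big_table');
  while ($row = mysql_fetch_row($result)) {
      $dump .= build_insert_statement($row);
  }
  echo $dump;

With the first loop the peak memory usage is one row plus some
overhead; with the second it is the whole dump.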
If the PHP settings for memory usage
(--enable-memory-limit, memory_limit <= 8M)
and for the maximum execution time (max_execution_time < 30)
are too small, phpMyAdmin _must_ fail even on fairly small
tables of just over 5 MB, because the PHP thread has no more
resources.
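
For reference, these are the kind of php.ini values I mean; as
far as I know they are the stock defaults on many installations
(shown only as an example of such a 'tight' setup):

  memory_limit = 8M         ; only enforced when PHP was compiled
                            ; with --enable-memory-limit
  max_execution_time = 30   ; seconds a request may run

An 8M ceiling can never hold a 15 MB dump string, while the
streaming approach never needs to.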
I think we should switch back to the old method.
What do you think about it?
--
Steve