[Phpmyadmin-devel] Plans for the status page

Tyron Madlener tyronx at gmail.com
Thu Jun 23 20:39:43 CEST 2011


A "proof of concept" or v0.1 is now pushed to my repos. There is a
new tab on the status page, "Live charting", which contains 3 default
charts, all updating through the same HTTP request. So I'm done with
building the basic structure for the one-request-for-many-charts
system.
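The core of that structure is just a dispatcher: each chart registers the status variables it wants to plot, the client requests the union of them in one go, and the response is fanned back out to the individual charts. A minimal sketch of the idea (chart ids and the response shape are made up for illustration):

```javascript
// Each chart declares which server status variables it wants to plot.
var charts = [
    { id: 'chart-queries',     vars: ['Questions'] },
    { id: 'chart-traffic',     vars: ['Bytes_sent', 'Bytes_received'] },
    { id: 'chart-connections', vars: ['Connections', 'Threads_running'] }
];

// Collect the union of all requested variables for one combined request.
function requestedVars(charts) {
    var seen = {}, list = [];
    charts.forEach(function (chart) {
        chart.vars.forEach(function (name) {
            if (!seen[name]) { seen[name] = true; list.push(name); }
        });
    });
    return list;
}

// Fan a single response object out to the charts that asked for each value.
function dispatch(charts, response) {
    return charts.map(function (chart) {
        var point = {};
        chart.vars.forEach(function (name) { point[name] = response[name]; });
        return { id: chart.id, point: point };
    });
}
```

One ajax call per refresh interval then fetches `requestedVars(charts)` and hands the result to `dispatch()`, no matter how many charts are on the page.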

Thanks to the freaking awesomeness of jQuery, jQuery UI and just a few
lines of code, you can reorder the 3 charts with Drag & Drop.
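The Drag & Drop part really is that short. A sketch of the wiring, assuming the charts live in a container with id `chartGrid` (the container id and the serialization helper are my own invention here, only the jQuery UI `sortable` calls are real API):

```javascript
// Pure helper: turn the chart order into a serializable config string.
function serializeChartOrder(order) {
    return order.join(',');
}

// Browser-only wiring: jQuery UI makes the chart containers sortable
// and reads back the new order after every drop.
if (typeof $ !== 'undefined') {
    $('#chartGrid').sortable({
        update: function () {
            var order = $(this).sortable('toArray'); // ids of the chart divs
            // this string could later be stored as a user preference
            serializeChartOrder(order);
        }
    });
}
```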

Next step is a UI to add/remove charts.

On Tue, Jun 21, 2011 at 9:26 PM, Tyron Madlener <tyronx at gmail.com> wrote:
> Ok, I think I know now where I should be heading. I've just put the
> latest dev version of pma onto my own dedicated server that runs a few
> websites, and to my amazement I see that my mysql server creates
> huge amounts of traffic (4.3 GiB total in 2 days 6 hrs runtime), with
> bursts of up to 0.5 MB on the traffic live chart at a 2 sec refresh
> rate. Looking over the list of alert values in the status
> variables, I see them being very high too.
> I caught myself tabbing between the traffic and query live charts,
> wondering which queries create these big amounts of traffic.
>
> So, for pma to be a useful query optimization tool, the following could be done:
>
> - Allow users to make every numeric status variable chart-able.
> There are many more variables worth watching than just queries, traffic
> and connections/processes.
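Picking the chart-able variables out of a SHOW GLOBAL STATUS snapshot could be as simple as keeping the rows whose values are numeric. A sketch (the sample variable names are from a real status output):

```javascript
// SHOW GLOBAL STATUS returns every value as a string; a variable is
// only worth charting if that string is numeric.
function isChartable(value) {
    return /^[0-9]+(\.[0-9]+)?$/.test(value);
}

// Keep only the numeric rows of a status snapshot, parsed as numbers.
function chartableVars(status) {
    var result = {};
    for (var name in status) {
        if (isChartable(status[name])) {
            result[name] = parseFloat(status[name]);
        }
    }
    return result;
}
```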
>
> - Build an interface that allows you to create your own reporting page
> containing the charts you want, a little like the custom reporting
> feature of Google Analytics. In my case, for example, I would want to
> see the queries, connections/processes and traffic charts all on one page.
> This configuration could later be saved in the pma_config db
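A reporting page would then just be an ordered list of chart definitions, and serialized as JSON it can go into (and come back out of) whatever storage table we end up with. The exact schema is still open; this only shows the round trip, with invented chart types:

```javascript
// A custom reporting page as data: a title plus an ordered list of
// chart definitions (types and fields are illustrative only).
var reportConfig = {
    title: 'My server overview',
    charts: [
        { type: 'queries' },
        { type: 'connections' },
        { type: 'traffic', vars: ['Bytes_sent', 'Bytes_received'] }
    ]
};

// Serialize for storage, restore on page load.
function saveConfig(config) { return JSON.stringify(config); }
function loadConfig(json)   { return JSON.parse(json); }
```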
>
> - On this custom reporting page, I not only want to have variable
> charts, but (on demand) also a constantly refreshing list of the top 5
> slowest queries, and maybe also something about queries per
> session, extracted from the general log.
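Once the log entries are parsed, the "top 5 slowest" list is a one-liner to refresh. A sketch (the entry shape with `sql` and `time` fields is my assumption):

```javascript
// Given parsed log entries, keep the n slowest queries for display in
// the constantly refreshing list. The input array is left untouched.
function slowestQueries(entries, n) {
    return entries
        .slice() // copy, so the original order survives
        .sort(function (a, b) { return b.time - a.time; })
        .slice(0, n);
}
```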
>
> - If the host is 'localhost':
>  - Read the slow_query_log from file, giving us microsecond accuracy
> instead of seconds.
>  - Allow additional charts for the reporting page, such as CPU usage
> and memory usage.
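The file-based slow log stores its timings with microsecond precision in metadata lines of the form `# Query_time: 0.000255  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0`, so pulling them out is a small regex job:

```javascript
// Extract query and lock time (in seconds, with microsecond precision)
// from one slow-log metadata line; returns null for non-matching lines.
function parseSlowLogTimes(line) {
    var m = /# Query_time: ([0-9.]+)\s+Lock_time: ([0-9.]+)/.exec(line);
    if (!m) { return null; }
    return { queryTime: parseFloat(m[1]), lockTime: parseFloat(m[2]) };
}
```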
>
> - More importantly, I guess I will also have to build a feature that
> allows recording all this data, so that you can precisely track
> down which queries are being slow. I'm not quite sure how to do this
> yet (automatically building a custom table for each configuration?).
> This will also require some time-selection functionality for the
> reporting page. This feature will probably take longer
> to implement.
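Whatever the storage ends up being, the client side of it boils down to appending timestamped snapshots and filtering them by the selected time range. A toy sketch of that, independent of the storage question:

```javascript
// Recording: append one timestamped snapshot of values to a series.
function record(series, timestamp, values) {
    series.push({ t: timestamp, v: values });
    return series;
}

// Time selection for the reporting page: keep samples inside [from, to].
function selectRange(series, from, to) {
    return series.filter(function (s) { return s.t >= from && s.t <= to; });
}
```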
>
> One advantage of grouped charts is that I can retrieve all live data within
> one http request. I only need some intelligent server<=>client
> communication about what data should be sent.
> And the ultimate ownage would be feeding all these channeled-together
> data sources into a JavaScript rules engine that can infer
> concrete suggestions on how to optimize the queries :D
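A toy version of such a rules engine: each rule inspects a combined status snapshot and may emit a suggestion. The status variables below are real MySQL ones, but the thresholds are invented for illustration, not actual tuning advice:

```javascript
// Each rule: a predicate over a status snapshot plus the advice to
// emit when it fires. Thresholds are purely illustrative.
var rules = [
    {
        check: function (s) {
            return s.Created_tmp_disk_tables / s.Created_tmp_tables > 0.25;
        },
        advise: 'Many temporary tables go to disk; consider raising tmp_table_size.'
    },
    {
        check: function (s) { return s.Slow_queries > 100; },
        advise: 'High number of slow queries; inspect the slow query log.'
    }
];

// Run every rule against a snapshot and collect the advice that applies.
function suggestions(snapshot) {
    return rules
        .filter(function (r) { return r.check(snapshot); })
        .map(function (r) { return r.advise; });
}
```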
>
> Anyway, I just thought it's probably a good idea to write my plans down here
> so I can get some suggestions and verify that I'm on the right path.
> Thanks for taking the time to read it all ;-)
>



