Original message

Author: Cameron Brunner <cameron.brunner@gmail.com>
Date:   2007-07-14 01:40:29 PDT

On 7/14/07, Oliver Schonrock <oliver at realtsp dot com> wrote:
> Cameron Brunner wrote:
> > A - We need to store the entire query in memory if we are to do
> > duplicate query detection.
>
> True, however (connected with what you say in D) the
> PDOStatement->queryString property only gives us the prepared-statement
> SQL, not the final, "all params substituted" SQL. This is a limitation
> in PDO, and probably for good reason, since the param substitution is
> supposed to happen in the DB server for some backends (not MySQL, due
> to query cache limitations).
>
> This means that we do not, and will not for some time, have access to
> the final SQL string. Does "duplicate detection" make sense in this
> case? Is it a valid feature if we are only looking at prepared
> statements? There will be many "duplicated" queries for
> retrieveByPk(1|2|3), for example....

Good point; time to tear the prepared statement parser out of Creole,
then. Duplicate query detection is a big thing in my books.
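
A rough sketch of what that could look like (not the Creole parser, and
not an existing Propel class; the class name and getInterpolatedSql()
below are made up for illustration) would be a statement subclass that
records its own bound values and substitutes them back into queryString
for debugging only:

<?php
// Hypothetical debug statement class; it could be registered on the
// connection with:
//   $pdo->setAttribute(PDO::ATTR_STATEMENT_CLASS, array('DebugStatement'));
class DebugStatement extends PDOStatement
{
    private $boundValues = array();

    public function bindValue($param, $value, $type = PDO::PARAM_STR)
    {
        // Remember each bound value so the SQL can be rebuilt for logging.
        $this->boundValues[$param] = $value;
        return parent::bindValue($param, $value, $type);
    }

    // Substitute the recorded values back into the prepared SQL. This is
    // only for logging/duplicate detection; never execute the result.
    public function getInterpolatedSql()
    {
        $sql = $this->queryString;
        foreach ($this->boundValues as $param => $value) {
            $quoted = is_numeric($value) ? $value : "'" . addslashes($value) . "'";
            if (is_int($param)) {
                // Positional placeholder: fill in the next remaining '?'.
                $pos = strpos($sql, '?');
                if ($pos !== false) {
                    $sql = substr_replace($sql, $quoted, $pos, 1);
                }
            } else {
                // Named placeholder such as :id (naive substring match).
                $sql = str_replace($param, $quoted, $sql);
            }
        }
        return $sql;
    }
}

Values passed straight to execute(array(...)) would need the same
treatment, and hashing the interpolated string would give a simple
duplicate counter.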

> > B - If we don't, we incur quite an overhead from func_get_args and
> > call_user_func etc., so we pay a penalty even in production; hence
> > the way I did it.
>
> Hmm.. good point. I did some simple tests over 1000 calls:
>
> plain call a(1,2,3,4): 1.5ms
> plain call $a->a(1,2,3,4): 1.6ms
> call_user_func_array('a', array(1,2,3,4)): 3.6ms
> call_user_func_array(array($a, 'a'), array(1,2,3,4)): 4.6ms
>
> A big percentage increase, but still low given that every db query
> takes ~1ms no matter how trivial. Worth considering, though, when we
> think about how to implement these calls. I will revisit this and
> rewrite it as a native call.
>
> > C - True, especially if you have unbuffered queries enabled; it's
> > just a bit of a guide on what returns a lot of rows. We could also
> > add a memory usage check in the dtor, which would fire at about the
> > point when all rows are hydrated... we could also pass the true flag
> > to the memory usage functions, though I don't know what difference
> > it makes.
>
> I am not sure the memory usage thing is very useful given these
> limitations. It should perhaps be dealt with at a profiling level
> further out than PDO, or even outside Propel.

Agreed that it's better to do it with real profiling, but a quick
snapshot while a system is still in development, without having to jump
through hoops, is certainly a bonus in my books. To get a better memory
usage figure we can add a memory check in the dtor for the statement;
that should kick in once the objects are done with hydration... ?
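
Sketch of what that dtor check could look like (the class name and the
error_log() destination are illustrative only, not a patch). For the
record, passing true to the memory functions reports what PHP's
allocator has grabbed from the OS rather than what the script is
actually using:

<?php
// Sketch only: the same hypothetical DebugStatement idea, with a dtor that
// logs memory usage when the statement is garbage collected.
class DebugStatement extends PDOStatement
{
    public function __destruct()
    {
        // By the time the statement is destroyed the rows have usually been
        // fetched and hydrated, so this gives a rough per-query upper bound.
        $used = memory_get_usage();       // bytes the script is using now
        $peak = memory_get_peak_usage();  // high-water mark for the request
        // Passing true to either function reports what the allocator has
        // grabbed from the OS instead of what the script actually uses.
        error_log(sprintf('[propel] stmt freed (%s): %d bytes in use, %d peak',
            $this->queryString, $used, $peak));
    }
}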

> > D - We can still add these into PropelPDO, and since DebugPropelPDO
> > extends it they will still run automatically... nice and seamless
> > and easy... I have had Propel's name cursed at work since we
> > couldn't easily see the final SQL that was being run (having to
> > replace the ?'s by hand).
>
> I am trying to clarify what features we can realistically hope to
> achieve and where they should sit. In my opinion, while duplicate query
> detection and memory consumption are desirable, they are not really
> possible given where we are measuring. That then leaves query timing and
> counting.
>
> As far as I can see these can easily be added to the existing
> PropelPDO.php while adding a PropelPDOStatement.php, making sure that
> we use efficient call mechanisms for the extensions.
>
> Counting could be driven by a $debug switch (although I would prefer a
> setter rather than a public property).

Granted, a static method would be preferable, but it was a quick patch
(half an hour before work).
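
For reference, a minimal sketch of counting behind a setter rather than
a public property might look like this (the class and method names,
setDebug() and getQueryCount(), are illustrative, not the actual patch):

<?php
// Minimal sketch of query counting behind a setter instead of a public
// property; class and method names are illustrative, not the actual patch.
class DebugPropelPDO extends PDO
{
    private $debug = false;
    private $queryCount = 0;

    public function setDebug($flag)
    {
        $this->debug = (bool) $flag;
    }

    public function getQueryCount()
    {
        return $this->queryCount;
    }

    public function prepare($sql, $options = array())
    {
        // Counting at prepare() time here; the thread above discusses
        // whether exec time would be the better place.
        if ($this->debug) {
            $this->queryCount++;
        }
        return parent::prepare($sql, $options);
    }
}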

> Timing could be switched on if logging is enabled and $debug=true.
> The time would be written to the log with the SQL string, since these
> belong together and there doesn't seem to be a case for keeping the
> SQL in memory.
>
> Moving logging to exec time rather than prepare poses several problems
> with backward compat and persistent connections. I will prepare a new
> patch which considers these options and suggests a solution, bearing
> performance in mind.

Persistent connections are something that I personally discourage in
any situation. Are there any databases that really need them these days
to get optimal performance? The connection overhead for MySQL and
Postgres is quite minimal... SQLite is the only real DB that I know of
that gains something from persistence.
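
Back on the timing point quoted above, a sketch of measuring at
execute() time and writing the time together with the SQL string to the
log could look roughly like this (the $logger property and its debug()
method are hypothetical wiring, not an existing API):

<?php
// Sketch of timing at execute() time; the $logger property and its debug()
// method are hypothetical wiring, injected by the connection class.
class DebugStatement extends PDOStatement
{
    public $logger = null;

    public function execute($params = null)
    {
        $start  = microtime(true);
        $result = parent::execute($params);
        $millis = (microtime(true) - $start) * 1000;

        if ($this->logger !== null) {
            // The time and the SQL string belong together, so they go to the
            // log as one line and the SQL is not kept in memory afterwards.
            $this->logger->debug(sprintf('%.3f ms: %s', $millis, $this->queryString));
        }
        return $result;
    }
}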

> > Always plenty of options. I have been wanting a way to
> > automatically profile an application in development and, when a
> > 'slow' or 'expensive' (memory-wise) query is run, to display it. I
> > was considering a Smarty-debug-style popup that could be enabled...
>
> We have profiling information automatically at the bottom of every
> non-production page, just in the HTML. It works well.
>
> hope we are getting closer.
>
> Oliver

You're not the only one who hopes we are getting closer to the right
solution here. I'm bored at the moment, so I'm about to play around
with this, and maybe do more work on the PropelDateTime stuff.

--
Cameron Brunner

Want a better web browser?
http://www.spreadfirefox.com/?q=affiliates&id=182780&t=1