

Original message

Author: David Zülke <dz@bitxtender.com>
Full name: David Zülke <dz@bitxtender.com>
Date: 2006-10-04 11:20:35 PDT
Message: ???

Isn't the first method nonsense? It's not any different from what
we're doing right now.

I think you're focusing too much on the performance of hydration.
I'll try the direct PDO hydration idea later; I bet it will be
incredibly fast. The only point where caching results makes sense is
when we want to avoid querying the database again - that's the slow
part.

foreach ($result as $row) {
    // Each $row is hydrated on demand and NOT stored anywhere. Under
    // ideal circumstances, this would mean that memory use never
    // exceeds the amount needed for the largest row in the result set.
}
This is what iterators would be about!
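A minimal sketch of that non-caching, iterator-based approach (the `hydrate()` stand-in and the array-backed row source are assumptions for illustration, not Propel code):

```php
<?php
// Hypothetical sketch (not Propel API): each row is hydrated on
// demand inside the generator and never stored, so peak memory stays
// near the size of a single hydrated row.
class OnDemandResultSet implements IteratorAggregate
{
    private $rows; // stands in for a PDO statement / raw result rows

    public function __construct(array $rows)
    {
        $this->rows = $rows;
    }

    // Hydration stand-in: turns a raw row into a fresh object.
    private function hydrate(array $row)
    {
        return (object) $row;
    }

    public function getIterator(): Traversable
    {
        foreach ($this->rows as $i => $row) {
            yield $i => $this->hydrate($row); // new instance per row
        }
    }
}
```

Because nothing holds a reference to the hydrated objects, each one becomes garbage-collectable as soon as the loop body moves on, and a second iteration hydrates fresh instances.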


On 04.10.2006 at 20:11, Sven Tietje wrote:

> hi alan,
> I don't want any war with you - I'm a friend of clean OO design, too.
> Perhaps I didn't make myself clear. Of course, I prefer a single
> instance of each object. I'd like to have a PropelResultset:
>
> class PropelResultset implements IteratorAggregate, ArrayAccess {
>     // the objects are generated on the fly and on demand,
>     // then cached:
>     private $cache = array(
>         0 => BaseObject,
>         1 => BaseObject,
>         .....
>     );
> }
>
> foreach ($resultset as $row) {
>     // ...
> }
>
> Fetching a specific element afterwards will not generate a new
> instance - it will return the object generated during the first
> iteration. With $blah = $resultset[0], offset [0] is already
> generated, so you get the cached object back. Or if you iterate the
> resultset again, it's not necessary to hydrate the objects again -
> the iteration just walks the internal cached array containing the
> object instances. I think this variant is the normal one, and it's
> compatible with all our applications.
> The second variant is a non-caching variant -> it does not use much
> memory -> e.g. for overviews, output-only pages, etc.
> Of course, I could use an array, but I like OO style for fetching
> data. That's why I proposed to use FETCH_INTO - but we can do it
> another way. $object->getRelatedObject() is queried on the fly,
> too; afterwards, the data is gone.
> The first variant should still be the preferred and default variant
> of handling data.
> greets sven
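The cached variant Sven describes could be sketched like this (the `hydrate()` helper and the array row source are stand-ins, not Propel API):

```php
<?php
// Sketch of the caching result set: objects are built on first
// access, stored in $cache, and handed back on every later access,
// whether via iteration or via array-style indexing.
class PropelResultset implements IteratorAggregate, ArrayAccess
{
    private $rows;
    private $cache = array(); // offset => hydrated object

    public function __construct(array $rows)
    {
        $this->rows = $rows;
    }

    // Hydration stand-in; returns the cached instance if present.
    private function hydrate($offset)
    {
        if (!isset($this->cache[$offset])) {
            $this->cache[$offset] = (object) $this->rows[$offset];
        }
        return $this->cache[$offset];
    }

    public function getIterator(): Traversable
    {
        foreach (array_keys($this->rows) as $i) {
            yield $i => $this->hydrate($i); // cached after first pass
        }
    }

    public function offsetExists($offset): bool
    {
        return isset($this->rows[$offset]);
    }

    #[\ReturnTypeWillChange]
    public function offsetGet($offset)
    {
        return $this->hydrate($offset);
    }

    public function offsetSet($offset, $value): void
    {
        $this->cache[$offset] = $value;
    }

    public function offsetUnset($offset): void
    {
        unset($this->cache[$offset], $this->rows[$offset]);
    }
}
```

With this shape, `$resultset[0]` after a full iteration returns the very same instance the loop produced, which is the identity guarantee Sven is after.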
> Alan Pinstein wrote:
>> Why? Because re-using objects is:
>> 1. BAD OO design... one instance == 1 LOGICAL instance. If you re-
>> use it, you are pretty much breaking black-box. "Clients" of your
>> objects don't expect this behavior, because it's bad OO design.
>> What if in the course of using FETCH_INTO, you have some
>> relationships:
>> foreach (FETCH_INTO loop) {
>>     $myObj->setRelatedObject($Y);
>> }
>> Well this is going to end up an awful mess, with no referential
>> integrity in your object model.
>> 2. There is no meaningful benefit to FETCH_INTO. Seriously, name
>> one potential benefit of doing this. Are you thinking performance?
>> Lots of things in OO would be faster if you break the OO... but it
>> shouldn't be done.
>> 3. With your example:
>>> Now, I want to publish a list of Person objects with their full
>>> names. I don't want to change data or anything -> I just want to
>>> print a table containing the full name and some columns of
>>> additional information. I'll implement a method getFullname in my
>>> object class:
>> Ok great! That's a completely reasonable thing to want to do.
>> HOWEVER, I don't see why getting back DISTINCT instances for each
>> iteration prevents this from happening. It doesn't use more memory
>> and it isn't much slower (only real speed difference is calling
>> the constructor).
>> However, I can promise you that getting back the SAME instance
>> each time, but with different data in *some* columns, will
>> definitely cause you hours of painful debugging, followed by "Why
>> am I using FETCH_INTO?"
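Alan's aliasing point can be demonstrated in a few lines of plain PHP (this is an illustration, not Propel code): when one instance is reused FETCH_INTO-style, every reference a client keeps ends up pointing at the same object, which holds only the last row's data.

```php
<?php
// The single reused instance, standing in for a FETCH_INTO target.
$rows = array(
    array('name' => 'alice'),
    array('name' => 'bob'),
);

$shared = new stdClass(); // one object, "hydrated" over and over
$kept = array();          // what naive client code saves per row

foreach ($rows as $row) {
    $shared->name = $row['name']; // overwrite the same object's data
    $kept[] = $shared;            // stores a handle, not a copy
}

// Both saved entries now alias one object reporting the last row
// ('bob') - exactly the referential-integrity mess Alan describes.
```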