Thursday, February 12, 2015

Tree Join Tables: preventing cycles

Searching Google, I was surprised to find that there were few solutions published for a common issue: preventing users from creating a cycle in a self-join table.  So here's one solution, which will be "good enough" for most people, but which has some caveats (see below).

First, the setup: we have a table of items.  Items can be in one or more collections.  Each item can itself be a collection, allowing users to create collections of collections.  So the first thing we need is a self-join table on the "adjacency list" model:

    create table collections (
        collection_id int not null references items(id) on delete cascade,
        item_id int not null references items(id) on delete cascade,
        constraint collections_pk primary key ( collection_id, item_id )
    );
    create index on collections(item_id);
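
For reference, the examples here assume an items table something like the following (it isn't shown above; any table with an integer primary key would do):

    create table items (
        id serial primary key,
        name text
    );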

So the first part of preventing cycles is to prevent the simplest cycle, where a collection collects itself.  That can be done with a constraint:

    alter table collections add constraint
        no_self_join check ( collection_id <> item_id );
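
With that in place, the trivial self-cycle is rejected immediately (hypothetical id, assuming item 1 exists):

    insert into collections values (1, 1);
    -- ERROR:  new row for relation "collections" violates check constraint "no_self_join"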

Now comes the tough part: preventing cycles of two, three, or N collections in a chain.  This requires us to look down the chain of possible collections and make sure that each inserted tuple doesn't complete a loop.  Fortunately, WITH RECURSIVE works for this, provided we do it in a BEFORE trigger.  If we did it in an AFTER trigger, the trigger itself would cycle, which would be no good.

    CREATE OR REPLACE FUNCTION collections_prevent_cycle ()
    returns trigger
    language plpgsql
    as $f$
    BEGIN
        -- select recursively, looking for all child items of the newly
        -- collected item, and making sure they don't include the new collection
        IF EXISTS ( WITH recursive colitem as (
                select collection_id, item_id
                from collections
                where collection_id = NEW.item_id
                UNION ALL
                select colitem.collection_id, collections.item_id
                from collections
                join colitem on colitem.item_id = collections.collection_id
            )
            SELECT collection_id from colitem
            WHERE item_id = NEW.collection_id
            LIMIT 1 ) THEN
                RAISE EXCEPTION 'You may not create a cycle of collections.';
        END IF;
       
        RETURN NEW;
    END; $f$;

    CREATE TRIGGER collections_prevent_cycle
    BEFORE INSERT OR UPDATE ON collections
    FOR EACH ROW EXECUTE PROCEDURE collections_prevent_cycle();
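
A quick (hypothetical) demonstration, assuming items 1 through 3 exist:

    insert into collections values (1, 2);  -- collection 1 collects item 2: OK
    insert into collections values (2, 3);  -- collection 2 collects item 3: OK
    insert into collections values (3, 1);  -- would close the loop 1 -> 2 -> 3 -> 1
    -- ERROR:  You may not create a cycle of collections.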

As I said, this solution will be "good enough" for a variety of uses.  However, it has some defects:

Concurrency: It is vulnerable to concurrency failures.  That is, if two users simultaneously insert "A collects B" and "B collects A", this trigger will not prevent the resulting cycle, because neither transaction can see the other's uncommitted row.  The alternative is locking the entire table on each insert, which is also problematic.
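
To make that race concrete, here is the problem interleaving under the default READ COMMITTED isolation (hypothetical ids):

    -- session 1: BEGIN; insert into collections values (1, 2);
    --   (the trigger sees no path from item 2 back to collection 1)
    -- session 2: BEGIN; insert into collections values (2, 1);
    --   (the trigger sees no path from item 1 back to collection 2 either,
    --    because session 1's row is uncommitted and invisible to it)
    -- both sessions: COMMIT;
    -- result: both rows are stored, and 1 -> 2 -> 1 is now a cycle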

Cost: we're running a pretty expensive recursive query with every insert.  For applications where the tree table is write-heavy, this will decrease throughput significantly.
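
You can see that overhead directly, since EXPLAIN ANALYZE on an insert reports the trigger's execution time (output abbreviated, timings will vary):

    explain analyze insert into collections values (2, 3);
    -- ...
    -- Trigger collections_prevent_cycle: time=... calls=1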

So my challenge to you is this: come up with a better solution, one which solves either the concurrency or the cost problem without making the other problem worse.

P.S.: this blog has reached half a million views.  Thanks, readers!

22 comments:

  1. For the concurrency issue: if you're not inserting many rows per transaction, would it be any use to grab transaction-level advisory locks on both of the new ids before the EXISTS check? It's still locking, not very elegant, and potentially more problematic, but does it hold any advantage over a full table lock?
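
    A minimal sketch of that idea (hypothetical; as the reply below points out, this alone doesn't cover longer chains):

        -- at the top of the trigger function, before the EXISTS check;
        -- taking the locks in sorted order avoids deadlocks between
        -- sessions inserting the reverse pair
        PERFORM pg_advisory_xact_lock(least(NEW.collection_id, NEW.item_id));
        PERFORM pg_advisory_xact_lock(greatest(NEW.collection_id, NEW.item_id));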

    1. What ID would you lock, exactly? You're trying to prevent a cycle, not prevent a duplicate. That means you'd have to lock all potential IDs which might create cycles, which would be a tall order.

  2. Short-sighted, yes: I was thinking of a starting and an ending id, but you're right, you'd have to traverse the chain and lock every id.

  3. This comment has been removed by the author.

    1. Yeah, the problem with this solution is that it doesn't work for an undirected graph like items/collections. Your solution only works if an item can be a member of only one collection, whereas in the example case, an item can be a member of many collections.

      Unless I'm misunderstanding your example?

  4. This comment has been removed by the author.

    1. No, I get this one. Thing is, I'm dynamically materializing the path in the trigger. I don't think maintaining a materialized view of materialized paths to check for cycles would really help execution speed, and it doesn't help the concurrency issue at all (which is what I'm more worried about).

    2. This comment has been removed by the author.

    3. You're still misunderstanding an essential part of the problem statement. Items/Collections is not a hierarchy. Any given document can belong to *multiple* collections. The solutions you're presenting only work for strict hierarchies, where each node belongs to one and only one parent.

    4. BTW, the solution you linked to looks truly horrible. I'm glad I don't have to use SQL Server anymore.

  5. This comment has been removed by the author.

    1. Except then as I read it, it's no longer preventing cycles. No?

      Anyway, I'm really unclear on which problem you're trying to solve here: the concurrency problem or the cost problem?

    2. This comment has been removed by the author.

    3. Postgres has inverted (GIN) indexes, so we can materialize the path as an indexed array, which would search quite fast. I'm more worried about it being slow to update. But if it actually solves the concurrency problem, then it's worth contemplating.

      However, I'm still not seeing what the structure would look like for an undirected graph. The actual code examples you give are just for single-parent trees, whereas the real data structure is more of a "baobab forest" with multiple branches and multiple roots and trunks. For example, in that structure there would be no concept of a "tree_id", because there are no distinct trees; it's even conceivable, though very unlikely, for every item in the system to be connected via collections of collections in a horizontal matrix.

      So if you believe in this solution, I'd like to see the SQL for it.

  6. Well, you can always add a "cache" of all parents, grandparents, and so on for every node.

    Assuming the tree is not too deep, it shouldn't be a big problem (space-wise).

    Of course, this makes for some "fun" when adding/removing a parent from a node, but it should be rather manageable.
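
    A sketch of what such a cache might look like (hypothetical table name; the maintenance triggers are the "fun" part and are omitted):

        create table collection_ancestors (
            ancestor_id   int not null references items(id) on delete cascade,
            descendant_id int not null references items(id) on delete cascade,
            primary key (ancestor_id, descendant_id)
        );
        -- the cycle check then becomes a single-row lookup: reject inserting
        -- (NEW.collection_id, NEW.item_id) if a row exists with
        -- ancestor_id = NEW.item_id and descendant_id = NEW.collection_id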

  7. A solution I use, when it's appropriate, is to have a slightly different constraint. In your case it would be:

    CHECK (collection_id > item_id);

    This is efficient to enforce and describes a directed acyclic graph.

    The downside is that placing an item/collection into an existing collection A can mean reordering A to have an id greater than its new children. This doesn't usually require much computational effort, but it is annoying if you're depending on having stable ids; for instance, when you use that id in another table without declaring the proper FOREIGN KEY!

    (I have chosen here to order parents after their children. Doing the opposite can result in much more reordering. For example, take entries in id order A, b, c, d, E, with (b, c, d) in collection A: placing A into collection E requires moving A to the end of the list -- i.e., giving it a higher id -- which in turn means moving b, c, and d.)

    This scheme also ties into versioning quite nicely: when an entry is changed, a new tree of collections is built over it, reusing existing unchanged entries where it can.

    (Apologies if this post is duplicated; it got lost the first time I submitted...)

    1. As a solution, that works. However, it effectively amounts to "locking the entire table whenever you add a collection", which we could also do with my solution with a simple LOCK TABLE. That's the problem I have with nested set trees, for that matter; way too much of the time you have to lock the whole table.

    2. I'm not sure about the locking. Best case is you are adding a collection whose children are already in the table, in which case you simply use the primary key sequence. Worst case is all the reordering/renumbering. Luckily this only has to be done with collections (and only in proportion to the height of the tree for each addition to a collection). As long as such renumbering is done in a transaction, does it matter if other sessions are also modifying other parts of the table?

      Still, I'm not going to try to force the solution where it obviously won't work but I would like to make sure we are on the same page. :-)

    3. No, it's an interesting idea, and it moves the concurrency problem around -- that is, instead of inconsistent data, we have potential lock-blocking. Right now I'm discussing with you whether that's an improvement overall or not.

      You could do this opportunistically. The drawback to that is possible deadlock failures when two users try to add collections to other collections at the same time. If you're prepared to deal with that, then that's one way to do it.

      If we were to pursue a > strategy, one way to do it without huge cascading reassignments would be to use fractional IDs (e.g. ID = 15.875). This has been implemented, for example, as a more advanced version of nested sets for trees. Using fractional IDs would, in a lot of cases, allow you to preserve the mandated ordering without needing to push the entire tree downwards.

      Thoughts?

    4. First of all, I acknowledge that I didn't even try to understand the concurrency failure you describe in depth. ;) But can you demonstrate it at the SERIALIZABLE isolation level (and if you can, why didn't you file a bug report)?

    5. Yaroslav: I don't think SERIALIZABLE would fail the concurrent insert because it's driven by a trigger, and transaction isolation levels don't care about procedural code. There's no constraint violated.

    6. Sorry, I didn't see your answer until today.
      Now, I did try to understand the concurrency failure you describe. ;)
      SERIALIZABLE should prevent it (and it does, so no bug here).

      You can test it yourself by adding:
      PERFORM pg_sleep(5);
      before
      RETURN NEW;
      into your trigger function and trying to insert conflicting data from two sessions with "SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE" concurrently. One of them will fail with:

      ERROR: could not serialize access due to read/write dependencies among transactions
      DETAIL: Reason code: Canceled on identification as a pivot, during write.

      As you correctly said, the SERIALIZABLE isolation level doesn't care about procedural code; it should just prevent _all_ possible non-serializable executions (all concurrency failures).
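
      For the record, the test described above looks something like this (hypothetical ids):

          -- in both sessions first:
          SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;
          -- session 1:
          BEGIN; insert into collections values (1, 2);  -- trigger now sleeps 5 seconds
          -- session 2, within those 5 seconds:
          BEGIN; insert into collections values (2, 1);
          -- commit both; one of the two fails with the serialization error quoted above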