library: libCore
#include "TBtree.h"

TBtreeIter



class TBtreeIter: public TIterator


Function Members (Methods)

public:
   TBtreeIter(const TBtreeIter& iter)
   TBtreeIter(const TBtree* t, Bool_t dir = kIterForward)
   ~TBtreeIter()
   static TClass*              Class()
   virtual const TCollection* GetCollection() const
   virtual Option_t*           TIterator::GetOption() const
   virtual TClass*             IsA() const
   virtual TObject*            Next()
   TObject*                    TIterator::operator()()
   virtual TIterator&          operator=(const TIterator& rhs)
   TBtreeIter&                 operator=(const TBtreeIter& rhs)
   virtual void                Reset()
   virtual void                ShowMembers(TMemberInspector& insp, char* parent)
   virtual void                Streamer(TBuffer& b)
   void                        StreamerNVirtual(TBuffer& b)
private:
   TBtreeIter()

Data Members

private:
   const TBtree*   fTree        btree being iterated
   Int_t           fCursor      current position in btree
   Bool_t          fDirection   iteration direction

Class Description

                                                                      
 TBtree                                                               
                                                                      
 B-tree class. TBtree inherits from the TSeqCollection ABC.           
                                                                      

B-tree Implementation notes

This implements B-trees with several refinements. Most of them can be found in Knuth Vol 3, but some were developed to adapt to restrictions imposed by C++. First, a restatement of Knuth's properties that a B-tree must satisfy, assuming we make the enhancement he suggests in the paragraph at the bottom of page 476. Instead of storing null pointers to non-existent nodes (which Knuth calls the leaves) we utilize the space to store keys. Therefore, what Knuth calls level (l-1) is the bottom of our tree, and we call the nodes at this level LeafNodes. Other nodes are called InnerNodes. The other enhancement we have adopted is in the paragraph at the bottom of page 477: overflow control.

The following are modifications of Knuth's properties on page 478:

  1. Every InnerNode has at most Order keys, and at most Order+1 sub-trees.
  2. Every LeafNode has at most 2*(Order+1) keys.
  3. An InnerNode with k keys has k+1 sub-trees.
  4. Every InnerNode that is not the root has at least InnerLowWaterMark keys.
  5. Every LeafNode that is not the root has at least LeafLowWaterMark keys.
  6. If the root is a LeafNode, it has at least one key.
  7. If the root is an InnerNode, it has at least one key and two sub-trees.
  8. All LeafNodes are the same distance from the root as all the other LeafNodes.
  9. For an InnerNode n with key n[i].key, sub-tree n[i-1].tree contains all keys < n[i].key, and sub-tree n[i].tree contains all keys >= n[i].key.
  10. Order is at least 3.

The values of InnerLowWaterMark and LeafLowWaterMark may actually be set by the user when the tree is initialized, but currently they are set automatically to:

        InnerLowWaterMark = ceiling(Order/2)
        LeafLowWaterMark  = Order - 1
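
For example, a tree of the default Order 3 gets InnerLowWaterMark = ceiling(3/2) = 2 and LeafLowWaterMark = 3 - 1 = 2.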

If the tree is only filled (elements are inserted but never removed), then all the nodes will be at least 2/3 full. They will almost all be exactly 2/3 full if the elements are added to the tree in order (either increasing or decreasing). [Knuth says McCreight's experiments showed almost 100% memory utilization. I don't see how that can be, given the algorithms that Knuth gives. McCreight must have used a different scheme for balancing. [No, he used a different scheme for splitting: he did a two-way split instead of the three-way split we do here. Which means that McCreight does better on insertion of ordered data, but we should do better on insertion of random data.]]

It must also be noted that B-trees were designed for DISK access algorithms, not for in-memory sorting as we intend to use them here. However, if the order is kept small (< 6?), any inefficiency is negligible for in-memory sorting. Knuth points out that balanced trees are actually preferable for memory sorting. I'm not sure that I believe this, but it's interesting. Also, deleting elements from balanced binary trees, being beyond the scope of Knuth's book (p. 465), is beyond my scope. B-trees are good enough.

A B-tree is declared to be of a certain ORDER (3 by default). This number determines the number of keys contained in any interior node of the tree. Each interior node will contain ORDER keys, and therefore ORDER+1 pointers to sub-trees. The keys are numbered and indexed 1 to ORDER while the pointers are numbered and indexed 0 to ORDER. The 0th ptr points to the sub-tree of all elements that are less than key[1]. Ptr[1] points to the sub-tree that contains all the elements greater than key[1] and less than key[2], etc. The array of pointers and keys is allocated as ORDER+1 pairs of keys and nodes, meaning that one key field (key[0]) is not used and therefore wasted. Given that the number of interior nodes is small, that this waste allows fewer special cases in the code, and that it is useful in some of the methods, it was felt to be a worthwhile waste.

The size of the exterior nodes (leaf nodes) does not need to be related to the size of the interior nodes at all. Since leaf nodes contain only keys, they may be as large or small as we like independent of the size of the interior nodes. For no particular reason other than it seems like a good idea, we will allocate 2*(ORDER+1) keys in each leaf node, and they will be numbered and indexed from 0 to 2*ORDER+1. It does have the advantage of keeping the size of the leaf and interior arrays the same, so that if we find allocation and de-allocation of these arrays expensive, we can modify their allocation to use a garbage ring, or something.

Both of these numbers will be run-time constants associated with each tree (each tree at run-time can be of a different order). The variable "order" is the order of the tree, and the inclusive upper limit on the indices of the keys in the interior nodes. The variable "order2" is the inclusive upper limit on the indices of the leaf nodes, and is designed

    (1) to keep the sizes of the two kinds of nodes the same;
    (2) to keep the expressions involving the arrays of keys looking
        somewhat the same:   lower limit        upper limit
          for inner nodes:        1                order
          for leaf  nodes:        0                order2
        Remember that index 0 of the inner nodes is special.

Currently, order2 = 2*(order+1).
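
To make this layout concrete, the following is a minimal C++ sketch of how the two node kinds could be declared. The names (kOrder, BtNode, BtItem, BtInnerNode, BtLeafNode) are illustrative assumptions for this example only, not the actual ROOT node classes.

    // Illustrative sketch only -- not the actual ROOT node classes.
    class TObject;

    const int kOrder  = 3;                 // "order"  (assumed default)
    const int kOrder2 = 2 * (kOrder + 1);  // "order2"

    struct BtNode { };                     // common base, so a sub-tree pointer
                                           // can refer to either node kind

    struct BtItem {                        // one key/sub-tree pair of an inner node
       TObject *fKey;                      // key[i]; key[0] is allocated but unused
       BtNode  *fTree;                     // tree[i]: sub-tree with the keys >= key[i]
    };

    struct BtInnerNode : BtNode {
       BtItem fItem[kOrder + 1];           // pairs indexed 0..order
    };

    struct BtLeafNode : BtNode {
       TObject *fKey[kOrder2];             // 2*(order+1) key slots, indexed from 0
    };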

 Picture: (also see Knuth Vol 3 pg 478)

           +--+--+--+--+--+--...
           |  |  |  |  |  |
 parent--->|  |     |     |
           |  |     |     |
           +*-+*-+*-+--+--+--...
            |  |  |
       +----+  |  +-----+
       |       +-----+  |
       V             |  V
       +----------+  |  +----------+
       |          |  |  |          |
 this->|          |  |  |          |<--sib
       +----------+  |  +----------+
                     V
                    data

It is conceptually VERY convenient to think of the data as being the very first element of the sib node. Any primitive that tells sib to perform some action on n elements should include this 'hidden' element. For InnerNodes, the hidden element has (physical) index 0 in the array, and in LeafNodes, the hidden element has (virtual) index -1 in the array. Therefore, there are two 'size' primitives for nodes:

Psize       - the physical size: how many elements are contained in the
              array in the node.
Vsize       - the 'virtual' size; if the node is pointed to by
              element 0 of the parent node, then Vsize == Psize;
              otherwise the element in the parent item that points to this
              node 'belongs' to this node, and Vsize == Psize+1;

Parent nodes are always InnerNodes.
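
As a hedged illustration of the two size primitives, here is an independent sketch; the members (fParent, fParentIdx, fNofKeys) are assumed for the example only, and BtNode is re-declared here with just the fields this sketch needs.

    // Illustrative sketch of the Psize/Vsize distinction (assumed members).
    struct BtInnerNode;

    struct BtNode {
       BtInnerNode *fParent;       // parent inner node (0 for the root)
       int          fParentIdx;    // index of the parent item that points here
       int          fNofKeys;      // keys physically stored in this node's array
    };

    int Psize(const BtNode *n)     // physical size
    {
       return n->fNofKeys;
    }

    int Vsize(const BtNode *n)     // virtual size
    {
       // The root, or a node hanging off item 0 of its parent, has no hidden
       // element; otherwise the parent item's key 'belongs' to this node.
       if (n->fParent == 0 || n->fParentIdx == 0)
          return Psize(n);
       return Psize(n) + 1;
    }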

These are the primitive operations on Nodes:

Append(elt)     - adds an element to the end of the array of elements in a
                  node.  It must never be called where appending the element
                  would fill the node.
Split()         - divide a node in two, and create two new nodes.
SplitWith(sib)  - create a third node between this node and the sib node,
                  divvying up the elements of their arrays.
PushLeft(n)     - move n elements into the left sibling
PushRight(n)    - move n elements into the right sibling
BalanceWithRight() - even up the number of elements in the two nodes.
BalanceWithLeft()  - ditto

To allow this implementation of btrees to also be an implementation of sorted arrays/lists, the overhead is included to allow O(log n) access of elements by their rank (`give me the 5th largest element'). Therefore, each Item keeps track of the number of keys in and below it in the tree (remember, each item's tree is all keys to the RIGHT of the item's own key).

[ [ < 0 1 2 3 > 4 < 5 6 7 > 8 < 9 10 11 12 > ] 13 [ < 14 15 16 > 17 < 18 19 20 > ] ]
   4  1 1 1 1   4   1 1 1   5   1  1  1  1      7  3   1  1  1    4    1  1  1
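
As an illustration of the rank lookup this bookkeeping enables, here is a hypothetical sketch. It assumes the count stored with item i covers the item's own key (for i >= 1) plus all keys in its sub-tree, and that 0 <= rank < total number of keys; the structs and names are invented for the example and are not the real ROOT code.

    // Hypothetical sketch of rank lookup (element of a given rank in sorted order).
    class TObject;

    struct RankItem;

    struct RankNode {
       bool       fIsLeaf;
       RankItem  *fItem;       // inner nodes: items indexed from 0
       TObject  **fLeafKey;    // leaf nodes: keys indexed from 0
    };

    struct RankItem {
       TObject  *fKey;         // key[i]; unused for i == 0
       RankNode *fTree;        // sub-tree holding the keys >= fKey
       int       fNofKeys;     // keys "in and below" this item (assumption)
    };

    TObject *ByRank(RankNode *node, int rank)        // rank counted from 0
    {
       while (!node->fIsLeaf) {
          int i = 0;
          while (rank >= node->fItem[i].fNofKeys) {  // skip whole items to the left
             rank -= node->fItem[i].fNofKeys;
             ++i;
          }
          if (i > 0) {
             if (rank == 0)
                return node->fItem[i].fKey;          // the item's own key is the hit
             --rank;                                 // step past the item's own key
          }
          node = node->fItem[i].fTree;               // descend into that sub-tree
       }
       return node->fLeafKey[rank];                  // leaves are indexed directly
    }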

TBtreeIter(const TBtree *t, Bool_t dir)
 Create a B-tree iterator.
TBtreeIter(const TBtreeIter &iter)
 Copy ctor.
TIterator & operator=(const TIterator &rhs)
 Overridden assignment operator.
TBtreeIter & operator=(const TBtreeIter &rhs)
 Overloaded assignment operator.
void Reset()
 Reset the B-tree iterator.
TObject * Next()
 Get next object from the B-tree. Returns 0 when there are no more objects in the tree.
TBtreeIter()
{ }
~TBtreeIter()
{ }
const TCollection * GetCollection() const
{ return fTree; }
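
A minimal usage sketch: fill a TBtree with sortable objects and walk it with a TBtreeIter. The TObjString payload and the printing are just one possible choice; any sortable TObject (one that implements Compare() and IsSortable()) will do.

    // Fill a TBtree and iterate over it in sorted order.
    #include "TBtree.h"
    #include "TObjString.h"
    #include <cstdio>

    void btreeIterDemo()
    {
       TBtree btree;                        // default order is 3
       btree.Add(new TObjString("pear"));
       btree.Add(new TObjString("apple"));
       btree.Add(new TObjString("orange"));

       TBtreeIter next(&btree);             // forward iteration (kIterForward)
       while (TObject *obj = next()) {      // operator() calls Next(); 0 ends the loop
          printf("%s\n", obj->GetName());   // visited in sorted order: apple, orange, pear
       }

       btree.Delete();                      // the tree owns the heap objects in this sketch
    }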

Author: Fons Rademakers 10/10/95
Last update: root/cont:$Name: $:$Id: TBtree.cxx,v 1.10 2006/04/19 08:22:22 rdm Exp $
Copyright (C) 1995-2000, Rene Brun and Fons Rademakers.

