better Leaf and Ensemble structs #90

Open · Eight1911 opened this issue Oct 6, 2018 · 5 comments

Comments

@Eight1911 (Contributor)

I've implemented GradientBoost and a more efficient version of AdaBoost that interface with treeregressor and treeclassifier, and I'm now looking to port them to the lightweight front-facing structs we use, i.e., LeafOrNode and Ensemble.

Trying to do so, I realized that one thing I'm somewhat against is the storage of every label in the Leaf struct. So I want to discuss whether we could change it to something more lightweight. These values are usually unnecessary, take up too much space, and can be recomputed if needed. For many purposes, a single confidence parameter should suffice (e.g., the impurity of that leaf). If that is not enough, we might add, for example, a function that takes a set of samples and returns an array containing the indices of the leaves they end up in. This would generalize the current implementation to any dataset, not just the training set, and from that we could easily rewrite apply_tree_proba.
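A minimal sketch of such a helper, assuming the current left/right split convention (features[featid] < featval goes left); apply_tree_indices, leaf_index, and n_leaves are hypothetical names, and the struct definitions are stand-ins for the existing types:

# stand-ins for the existing types
struct Leaf{T}
    label :: T
end

struct Node{S, T}
    featid  :: Int
    featval :: S
    left    :: Union{Leaf{T}, Node{S, T}}
    right   :: Union{Leaf{T}, Node{S, T}}
end

# number of leaves in a subtree
n_leaves(t::Leaf) = 1
n_leaves(t::Node) = n_leaves(t.left) + n_leaves(t.right)

# index (in left-to-right order) of the leaf that sample x falls into
function leaf_index(node, x, offset = 0)
    node isa Leaf && return offset + 1
    if x[node.featid] < node.featval
        return leaf_index(node.left, x, offset)
    else
        return leaf_index(node.right, x, offset + n_leaves(node.left))
    end
end

# one leaf index per row of `features`
apply_tree_indices(tree, features::AbstractMatrix) =
    [leaf_index(tree, view(features, i, :)) for i in 1:size(features, 1)]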

This may feel like an inconvenience, but for boosting and random forest ensembles, which usually use shallow trees, most of the memory cost lies in this storage of the samples, and the real inconvenience is dealing with models that are 4GB in size, or not being able to train your model at all because you ran out of memory.

One other thing is that the current Ensemble struct doesn't add much over the current implementation because it's just a list of trees. This means we have to deal with something like AdaBoost returning both an Ensemble and a list of coefficients that should have been part of that ensemble in the first place. So I'm also proposing that we encode these coefficients into the model.

tl;dr: I'm proposing that we use the following structs instead:

struct Leaf{T}
    label    :: T
    impurity :: Float64
end

struct Ensemble{S, T}
    trees  :: Vector{Node{S, T}}
    coeffs :: Union{Nothing, Vector{Float64}}
    method :: String
end
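To show how the bundled coefficients would be consumed, here is a sketch of weighted-vote prediction over the proposed Ensemble (apply_ensemble is a hypothetical name; it assumes the library's existing apply_tree for single-tree prediction):

function apply_ensemble(ensemble::Ensemble{S, T}, x) where {S, T}
    # unweighted methods (e.g., random forests) can set coeffs to nothing
    coeffs = ensemble.coeffs === nothing ?
             fill(1.0, length(ensemble.trees)) : ensemble.coeffs
    votes = Dict{T, Float64}()
    for (tree, c) in zip(ensemble.trees, coeffs)
        label = apply_tree(tree, x)
        votes[label] = get(votes, label, 0.0) + c
    end
    return argmax(votes)   # key with the largest weighted vote (Julia >= 1.7)
end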
@bensadeghi (Member)

@Eight1911
Great to hear about your implementations of GradientBoost and AdaBoost!
And yes, totally agree with you that the current Leaf struct is unnecessarily bloated and wasteful when it comes to generating ensembles.

For a new Leaf struct, it would be great to maintain the same functionality for apply_X_proba (for multi-class tree, forest, stumps, ...) and print_tree (where leaf majority matches are presented). We could potentially store the label counts in an array or tuple.
We could also consider having new and completely independent Leaf and Node types for classification and regression purposes, optimized for their corresponding tasks.
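One possible shape for the label-counts idea, as a sketch rather than a settled design (ClassLeaf, majority, and probabilities are hypothetical names):

struct ClassLeaf{T}
    classes :: Vector{T}     # distinct labels seen in this leaf
    counts  :: Vector{Int}   # matching counts, same order as `classes`
end

# the majority label, as print_tree would present it
majority(leaf::ClassLeaf) = leaf.classes[argmax(leaf.counts)]

# class-membership probabilities, as apply_*_proba would return them
probabilities(leaf::ClassLeaf) = leaf.counts ./ sum(leaf.counts)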

For a new Ensemble struct, it would be great to have the coeffs bundled in. We just need to make sure not to break the existing API for build_adaboost_stumps (perhaps by redundantly outputting the coeffs, as is currently done?).

@Eight1911 (Contributor, Author)

How about this? We change the structure so that storing which leaf each label ends up in is optional. The advantage is that users then have a choice, without the disadvantage of awkward APIs for fit_X_proba, or of having to change most of the user-facing APIs at all. I'm thinking of something like this:

struct Leaf{T}
    label     :: T

    # the location in the tree.labels
    # where the labels in this leaf are stored
    region    :: UnitRange{Int}

    # one or two summary statistics for fast retrieval
    # - probability of inaccuracy for classification
    # - variance of leaf for regression
    deviation :: Float64
end

struct Node{S, T}
    featid  :: Int
    featval :: S
    left    :: Union{Leaf{T}, Node{S, T}}
    right   :: Union{Leaf{T}, Node{S, T}}
end

struct Tree{S, T}
    root    :: Union{Leaf{T}, Node{S, T}}
    # nothing if we choose not to store
    # otherwise, a vector of labels
    labels  :: Union{Nothing, Vector{T}}
    
    # ... other helpful metadata like how 
    # the tree was made, etc.
end
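Under this layout, apply_tree_proba can be recovered whenever the labels are stored; a sketch with hypothetical helper names (leaf_labels, leaf_proba):

# the raw labels of a leaf are a view into tree.labels, selected by region
leaf_labels(tree::Tree, leaf::Leaf) =
    tree.labels === nothing ? nothing : view(tree.labels, leaf.region)

# per-class probabilities for one leaf, given an ordered class list
function leaf_proba(tree::Tree, leaf::Leaf, classes)
    labels = leaf_labels(tree, leaf)
    labels === nothing && error("tree was built without stored labels")
    return [count(==(c), labels) / length(labels) for c in classes]
end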

A little unrelated, but if you want something even more adventurous, you could also change how nodes refer to their children: instead of linking them directly, have the tree store the nodes in an array and refer to children by index. That is,

struct Node{S, T}
    featid  :: Int
    featval :: S
    left    :: Int # index of the left child in `tree.nodes`
    right   :: Int # index of the right child in `tree.nodes`
end

struct Tree{S, T}
    nodes   :: Vector{Union{Leaf{T}, Node{S, T}}}
    # nothing if we choose not to store
    # otherwise, a vector of labels
    labels  :: Union{Nothing, Vector{T}}
    
    # ... other helpful metadata like how 
    # the tree was made, the purity criterion used
    # etc.
end

The advantage is that this makes it easier to store the tree in a linear format, which is helpful for porting to other languages. Also, length(tree) is now O(1), as it should be (though length(node) is still O(n)).
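For illustration, traversal under this layout follows integer links instead of pointers; a sketch, assuming the definitions just above and that nodes[1] is the root (apply is a hypothetical name):

function apply(tree::Tree{S, T}, x) where {S, T}
    node = tree.nodes[1]             # assumed root position
    while node isa Node
        child = x[node.featid] < node.featval ? node.left : node.right
        node = tree.nodes[child]
    end
    return node.label                # node is now a Leaf
end

Base.length(tree::Tree) = length(tree.nodes)   # O(1), as noted above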

On another note, how important is preserving backwards compatibility for build_adaboost_stumps? I think it's unintuitive for it to return something that is not used at all, especially when we will be adding another boosting algorithm that will use the same API.

@bensadeghi (Member)

I'm open to both approaches for the new structs. It'd be good to experiment a bit and see which is more efficient in terms of model training execution time and memory allocation, and also how well the models write to disk.
Regarding build_adaboost_stumps, it'll be fine to break the API for the sake of having a consistent API across the boosting algorithms.

@ValdarT commented Dec 8, 2018

Hi. Any updates on this? Would love to see the structs get more lightweight and efficient...

@harryscholes

Bump. Any updates on these changes?
