
Commit

Link two terms from glossary. (#19)
* Link two terms to glossary.

* Link to glossary terms in parallelism docs.

* Unqualify references.

* Update docs/background.rst

* Update docs/parallelism.rst
kklein authored Jun 24, 2024
1 parent 67725ae commit 049fbea
Showing 4 changed files with 14 additions and 11 deletions.
3 changes: 2 additions & 1 deletion docs/background.rst
@@ -236,7 +236,8 @@ the opposite holds true.
We would like to learn such policies to apply them to previously
unseen data. In order to learn the policy, we can use data from an experiment.
We can distinguish two cases when it comes to experiment data:
observational or RCT data.
:term:`observational<Observational data>` or
:term:`RCT<Randomized Control Trial (RCT)>` data.

Importantly, MetaLearners for CATE estimation can, in principle, be
used for both observational and RCT data. Yet, the following conditions
8 changes: 4 additions & 4 deletions docs/examples/example_basic.ipynb
@@ -73,7 +73,7 @@
"\n",
"Now that the data has been loaded, we can get to actually using\n",
"MetaLearners. Let's start with the\n",
"{class}`metalearners.TLearner`.\n",
"{class}`~metalearners.TLearner`.\n",
"Investigating its documentation, we realize that only three initialization parameters\n",
"are necessary in case we do not want to reuse nuisance models: ``nuisance_model_factory``, ``is_classification`` and\n",
"``n_variants``. Given that our outcome is a scalar, we want to set\n",
@@ -115,7 +115,7 @@
"* We need to specify the observed treatment assignment ``w`` in the call to the\n",
" ``fit`` method.\n",
"* We need to specify whether we want in-sample or out-of-sample\n",
" estimates in the ``predict`` call via ``is_oos``."
" estimates in the {meth}`~metalearners.TLearner.predict` call via ``is_oos``."
]
},
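The fit/predict pattern described in those bullets can be sketched with a from-scratch T-Learner analogue built on plain scikit-learn. This is a conceptual illustration of the two-model construction on simulated data, not the `metalearners` API itself; all names here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
w = rng.integers(0, 2, size=n)  # observed treatment assignment
# Outcome with a heterogeneous treatment effect of 1 + X[:, 1].
y = X[:, 0] + w * (1.0 + X[:, 1]) + rng.normal(scale=0.1, size=n)

# T-Learner idea: fit one outcome model per treatment variant.
models = {}
for variant in (0, 1):
    mask = w == variant
    models[variant] = LinearRegression().fit(X[mask], y[mask])

# "In-sample" CATE estimate: difference of the two outcome predictions.
cate_hat = models[1].predict(X) - models[0].predict(X)
print(cate_hat.shape)  # (500,)
```

Here the observed assignment `w` selects which rows train which model, mirroring the role of `w` in the `fit` call above.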
{
@@ -149,7 +149,7 @@
"cater to a general case, where there are more than two variants and/or\n",
"classification problems with many class probabilities. Given that we\n",
"care about the simple case of binary variant regression, we can make use of\n",
"{func}`metalearners.utils.simplify_output` to simplify this shape as such:"
"{func}`~metalearners.utils.simplify_output` to simplify this shape as such:"
]
},
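Conceptually, the simplification collapses the general `(n_obs, n_variants - 1, n_outputs)` output down to a vector when both trailing axes have length one. A numpy illustration of that shape change (not the library code itself):

```python
import numpy as np

# CATE output in the general case has shape
# (n_obs, n_variants - 1, n_outputs). For binary-variant
# regression, both trailing axes have length one.
raw = np.arange(4.0).reshape(4, 1, 1)

# Collapse the two singleton axes to obtain a plain vector.
simplified = raw.squeeze(axis=(1, 2))
print(simplified)  # [0. 1. 2. 3.]
```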
{
@@ -177,7 +177,7 @@
"-----------------------------------\n",
"\n",
"Instead of using a T-Learner, we can of course also use some other\n",
"MetaLearner, such as the {class}`metalearners.RLearner`.\n",
"MetaLearner, such as the {class}`~metalearners.RLearner`.\n",
"The R-Learner's documentation tells us that two more instantiation\n",
"parameters are necessary: ``propensity_model_factory`` and\n",
"``treatment_model_factory``. Hence we can instantiate an R-Learner as follows"
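The residual-on-residual idea behind the R-Learner can be illustrated from scratch. This is a conceptual sketch on simulated data with plain scikit-learn, not the `metalearners` implementation; the variable names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
e = 1 / (1 + np.exp(-X[:, 0]))   # true propensity
w = rng.binomial(1, e)
tau = 1.0 + X[:, 1]              # true CATE
y = X[:, 0] + tau * w + rng.normal(scale=0.1, size=n)

# Nuisance stage with cross-fitting: outcome model m(X) ~ E[Y|X]
# and propensity model e(X) ~ P(W=1|X), both out-of-fold.
m_hat = cross_val_predict(LinearRegression(), X, y, cv=5)
e_hat = cross_val_predict(
    LogisticRegression(), X, w, cv=5, method="predict_proba"
)[:, 1]

# Treatment stage: regress outcome residuals on treatment residuals,
# i.e. fit tau(X) minimizing sum((y_res - tau(X) * w_res)^2).
y_res, w_res = y - m_hat, w - e_hat
tau_model = LinearRegression().fit(X, y_res / w_res, sample_weight=w_res**2)
print(tau_model.coef_.round(2))
```

The two extra nuisance roles map onto the parameters named above: the propensity model corresponds to `propensity_model_factory`, the final residual regression to `treatment_model_factory`.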
2 changes: 1 addition & 1 deletion docs/examples/example_lime.ipynb
@@ -214,7 +214,7 @@
"source": [
"### Generating lime plots\n",
"\n",
"``lime`` will an expect a function which takes in an ``X`` and returns\n",
"``lime`` will expect a function which consumes an ``X`` and returns\n",
"a one-dimensional vector of the same length as ``X``. We'll have to\n",
"adapt the {meth}`~metalearners.rlearner.RLearner.predict` method of\n",
"our {class}`~metalearners.rlearner.RLearner` in two ways:\n",
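Generically, the adaptation boils down to wrapping the predict method so that lime sees a function from `X` to a one-dimensional vector. A hypothetical stand-in (the `predict` below is a placeholder with the general output shape, not a fitted learner):

```python
import numpy as np

# Placeholder mimicking the general MetaLearner output shape
# (n_obs, n_variants - 1, n_outputs); a fitted learner's
# predict method would take its place.
def predict(X, is_oos):
    return np.zeros((len(X), 1, 1))

def predict_for_lime(X):
    # Fix is_oos and collapse the trailing singleton axes to get
    # the one-dimensional vector lime expects.
    return predict(X, is_oos=True).squeeze(axis=(1, 2))

X = np.ones((5, 3))
print(predict_for_lime(X).shape)  # (5,)
```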
12 changes: 7 additions & 5 deletions docs/parallelism.rst
@@ -4,10 +4,10 @@ What about parallelism?
************************

In the context of the topic outlined in :ref:`Motivation_multiprocessing`, one of the factors
motivating the implementation of this library is the introduction of parallelism in metalearners.
motivating the implementation of this library is the introduction of parallelism in ``metalearners``.
We've discovered three potential levels for executing parallelism:

#. **Base model level**: Certain base models implement the option to use multiple threads
#. **Base model level**: Certain :term:`base models<Base model>` implement the option to use multiple threads
during their training. Examples of these models include
`LightGBM <https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html#lightgbm.LGBMRegressor>`_
or `RandomForest from sklearn <https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html>`_.
@@ -25,13 +25,15 @@ We've discovered three potential levels for executing parallelism:
To use parallelism at this level one can use the ``n_jobs_cross_fitting`` parameter of the
:py:meth:`~metalearners.metalearner.MetaLearner.fit` method of the metalearner.

#. **Stage level**: A majority of MetaLearners entails multiple nuisance and/or treatment models.
#. **Stage level**: A majority of MetaLearners entails multiple
:term:`nuisance<Nuisance model>` and/or :term:`treatment models<Treatment effect model>`.
Within an individual stage, these models are independent of each other, an example of
this would be one propensity model and an outcome model for each treatment variant.
this would be one :term:`propensity model<Propensity model>` and
an :term:`outcome model<Outcome model>` for each treatment variant.
This independence translates into another possibility for parallelism.

To use parallelism at this level one can use the ``n_jobs_base_learners`` parameter of the
:py:meth:`~metalearners.metalearner.MetaLearner.fit` method of the metalearner.
:py:meth:`~metalearners.metalearner.MetaLearner.fit` method of the MetaLearner.
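The stage-level idea can be sketched outside of ``metalearners`` with joblib: a toy example in which one outcome model per treatment variant is fitted in parallel (the variant count and the choice of base model are illustrative):

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.normal(size=200)
w = rng.integers(0, 3, size=200)  # three treatment variants

def fit_outcome_model(variant):
    # Each variant's outcome model is independent of the others,
    # so the fits can run concurrently (stage-level parallelism).
    mask = w == variant
    # n_jobs on the model itself would be base-model-level
    # parallelism; kept at 1 here to isolate the stage level.
    model = RandomForestRegressor(n_estimators=10, n_jobs=1, random_state=0)
    return model.fit(X[mask], y[mask])

# One job per base learner, analogous to n_jobs_base_learners.
models = Parallel(n_jobs=3)(delayed(fit_outcome_model)(v) for v in range(3))
print(len(models))  # 3
```

Since the two levels multiply, combining a large `n_jobs` at the base-model level with stage-level jobs can oversubscribe cores, which is one reason no single setting wins everywhere.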

Our experiments leveraging parallelism at various levels reveal that there is not a
'one-size-fits-all' setting; the optimal configuration varies significantly based on factors

