Commit

Docs.
felixleopoldo committed Dec 6, 2023
1 parent ba7a1e2 commit 97701d7
Showing 11 changed files with 200 additions and 170 deletions.
6 changes: 3 additions & 3 deletions docs/source/fig1_demo.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -659,7 +659,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You main also plot the minimal CSI relations as below. See the paper for definition of a minimal CSI."
"You may also plot the minimal CSI relations as below. See the paper for the definition of a minimal CSI."
]
},
{
Expand Down Expand Up @@ -939,7 +939,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Data sampled from a CStree is stored as a Pandas dataframe, with labels inherited from the CStree level labels. The second row shows the cardinalities of variables."
"Data sampled from a CStree is stored as a Pandas dataframe, with labels inherited from the CStree level labels. The second row contains the cardinalities of variables."
]
},
{
Expand Down Expand Up @@ -1281,7 +1281,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You may easily read a CStree from a Pandas dataframe. Norte that CStrees can thus be saved to file as a usual Pandas dataframe."
"You may easily read a CStree from a Pandas dataframe. A CStree can thus be saved to file as a usual Pandas dataframe."
]
},
{
Expand Down
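The dataframe format described in this notebook (column labels taken from the CStree level labels, with the first data row holding the variable cardinalities) can be sketched with plain pandas. The layout below is an assumption inferred from the notebook text, not the exact cstrees API; it only illustrates why such data round-trips through an ordinary CSV file.

```python
import io
import pandas as pd

# Hypothetical layout, inferred from the notebook text: columns are the
# CStree level labels, the first row stores each variable's cardinality,
# and the remaining rows are sampled outcomes.
df = pd.DataFrame(
    [[2, 2, 2, 2],   # cardinalities of a, b, c, d
     [0, 1, 1, 0],   # first sample
     [1, 0, 1, 1]],  # second sample
    columns=["a", "b", "c", "d"],
)

# Because it is an ordinary dataframe, it round-trips through CSV.
buf = io.StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
restored = pd.read_csv(buf)
assert restored.equals(df)
```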
8 changes: 4 additions & 4 deletions docs/source/learn_demo.ipynb
Expand Up @@ -12,7 +12,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook shows how to learn a CStree from observational data using exhaustive search. "
"This notebook shows how to learn a CStree from observational data using an exhaustive search procedure."
]
},
{
Expand Down Expand Up @@ -442,7 +442,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Both the structure learning (learning of CStree) and the parameter estimation, use pre-calculated scores. We restrict the number of variables in each context to be at most 2 and use the BDeu score with pseudo count 1. By letting poss_cvars=None, the possible context variables for a variable are unrestricted."
"The structure learning (learning of CStree) uses pre-calculated scores. We restrict the number of variables in each context to be at most 2 and use the BDeu score with pseudo count 1. By letting poss_cvars=None, the possible context variables for a variable are unrestricted."
]
},
{
Expand Down Expand Up @@ -470,7 +470,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Th dict score_table contains the order score tables and context_scores, the scores for every context of every variable. "
"The dict score_table contains the order score tables and context_scores, the scores for every context of every variable. "
]
},
{
Expand Down Expand Up @@ -799,7 +799,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Estimate the stage parameters using the BDeu prior with total pseudo count 1."
"Estimate the stage parameters using the BDeu prior with a total pseudo count of 1 per level."
]
},
{
Expand Down
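The BDeu scoring mentioned in the notebook can be sketched for a single variable in a single context. The function name and data layout below are illustrative only, not the cstrees scoring API; it computes the standard BDeu-style marginal log-likelihood of a child variable's counts under a Dirichlet prior with total pseudo count `alpha`.

```python
from collections import Counter
from math import lgamma

def bdeu_local_score(data, child, context, r_child, alpha=1.0):
    """BDeu-style marginal log-likelihood of `child` in a fixed context.

    `data` is a list of dicts mapping variable name -> value, `context`
    maps context variables to fixed values, `r_child` is the child's
    cardinality, and `alpha` is the total pseudo count. Illustrative
    sketch only; the actual cstrees scoring code may differ.
    """
    rows = [row for row in data
            if all(row[v] == val for v, val in context.items())]
    counts = Counter(row[child] for row in rows)
    n = len(rows)
    a_k = alpha / r_child  # pseudo count per child state
    score = lgamma(alpha) - lgamma(alpha + n)
    for k in range(r_child):
        score += lgamma(a_k + counts.get(k, 0)) - lgamma(a_k)
    return score

# Toy data: score binary variable b in the context a = 0, alpha = 1.
data = [{"a": 0, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 1}]
s = bdeu_local_score(data, child="b", context={"a": 0}, r_child=2)
# Here exp(s) = 1/8: two observations, one per state, under a
# symmetric Dirichlet(1/2, 1/2) prior.
```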
23 changes: 8 additions & 15 deletions docs/source/learn_demo_gibbs.ipynb
Expand Up @@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook shows how to learn a CStree from observational data using McMC (Gibbs) sampling from the posterior order distribution. "
"This notebook shows how to learn a CStree from observational data using MCMC (Gibbs) sampling to approximate the posterior order distribution."
]
},
{
Expand Down Expand Up @@ -1170,28 +1170,21 @@
},
{
"cell_type": "code",
"execution_count": 80,
"execution_count": 94,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Depth=1, working on node 3: 100%|██████████| 4/4 [00:00<00:00, 4277.72it/s]"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Depth=1, working on node 3: 100%|██████████| 4/4 [00:00<00:00, 1642.57it/s]"
"Depth=1, working on node 3: 100%|██████████| 4/4 [00:00<00:00, 1812.38it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Possible context vararibles per node: {'a': [], 'b': ['d'], 'c': ['d'], 'd': ['b', 'c']}\n"
"Possible context variables per node: {'a': [], 'b': ['d'], 'c': ['d'], 'd': ['b', 'c']}\n"
]
},
{
Expand All @@ -1205,7 +1198,7 @@
"source": [
"pcgraph = pc(x[1:].values, 0.05, \"chisq\", node_names=x.columns)\n",
"poss_cvars = ctl.causallearn_graph_to_posscvars(pcgraph, labels=x.columns)\n",
"print(\"Possible context vararibles per node:\", poss_cvars)"
"print(\"Possible context variables per node:\", poss_cvars)"
]
},
{
Expand Down Expand Up @@ -1280,7 +1273,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We plot the first 200 interations of the order score trajectory."
"We plot the first 200 iterations of the order score trajectory."
]
},
{
Expand Down Expand Up @@ -1314,7 +1307,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To further investigate the McMC trajectory we find the individual variables positions/levels in the orders and plot these. This gives an overview of the mixing properties of the sampler and if the trjaectory seems to have converged."
"To further investigate the MCMC trajectory, we find the individual variables' positions/levels in the orders and plot these. This gives an overview of the mixing properties of the sampler and whether the trajectory seems to have converged."
]
},
{
Expand All @@ -1332,7 +1325,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Assuming that the majority of praobability mass will center around the order corresponding to the true CStree (though there could be many or them), the a-line should converge to 0 (i.e. position 0), the b-line should converge to 1, and so on."
"Assuming that the majority of probability mass is centered around the order corresponding to the true CStree (though there could be many of them), the a-line should converge to 0 (i.e. position 0), the b-line should converge to 1, and so on."
]
},
{
Expand Down
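The position-trajectory diagnostic described in this notebook can be sketched in a few lines: given the sampler's trace of orders, record each variable's position at every iteration. The orders below are made up for illustration; in the notebook they would come from the Gibbs sampler's trace, and the resulting per-variable series are what gets plotted.

```python
def position_trajectories(orders):
    """Map each variable label to its position in every sampled order.

    `orders` is a list of permutations of the same labels, one per
    MCMC iteration. Illustrative sketch, not the cstrees API.
    """
    labels = orders[0]
    return {v: [order.index(v) for order in orders] for v in labels}

# Toy trace of three sampled orders over variables a, b, c, d.
orders = [
    ["b", "a", "c", "d"],
    ["a", "b", "c", "d"],
    ["a", "b", "c", "d"],
]
traj = position_trajectories(orders)
# traj["a"] == [1, 0, 0]: variable a settles at position 0, which is
# the kind of convergence the notebook's plots are checking for.
```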
6 changes: 3 additions & 3 deletions notebooks/fig1_demo.ipynb
Expand Up @@ -659,7 +659,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You main also plot the minimal CSI relations as below. See the paper for definition of a minimal CSI."
"You may also plot the minimal CSI relations as below. See the paper for the definition of a minimal CSI."
]
},
{
Expand Down Expand Up @@ -939,7 +939,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Data sampled from a CStree is stored as a Pandas dataframe, with labels inherited from the CStree level labels. The second row shows the cardinalities of variables."
"Data sampled from a CStree is stored as a Pandas dataframe, with labels inherited from the CStree level labels. The second row contains the cardinalities of variables."
]
},
{
Expand Down Expand Up @@ -1281,7 +1281,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You may easily read a CStree from a Pandas dataframe. Norte that CStrees can thus be saved to file as a usual Pandas dataframe."
"You may easily read a CStree from a Pandas dataframe. A CStree can thus be saved to file as a usual Pandas dataframe."
]
},
{
Expand Down
8 changes: 4 additions & 4 deletions notebooks/learn_demo.ipynb
Expand Up @@ -12,7 +12,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook shows how to learn a CStree from observational data using exhaustive search. "
"This notebook shows how to learn a CStree from observational data using an exhaustive search procedure."
]
},
{
Expand Down Expand Up @@ -442,7 +442,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Both the structure learning (learning of CStree) and the parameter estimation, use pre-calculated scores. We restrict the number of variables in each context to be at most 2 and use the BDeu score with pseudo count 1. By letting poss_cvars=None, the possible context variables for a variable are unrestricted."
"The structure learning (learning of CStree) uses pre-calculated scores. We restrict the number of variables in each context to be at most 2 and use the BDeu score with pseudo count 1. By letting poss_cvars=None, the possible context variables for a variable are unrestricted."
]
},
{
Expand Down Expand Up @@ -470,7 +470,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Th dict score_table contains the order score tables and context_scores, the scores for every context of every variable. "
"The dict score_table contains the order score tables and context_scores, the scores for every context of every variable. "
]
},
{
Expand Down Expand Up @@ -799,7 +799,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Estimate the stage parameters using the BDeu prior with total pseudo count 1."
"Estimate the stage parameters using the BDeu prior with a total pseudo count of 1 per level."
]
},
{
Expand Down
23 changes: 8 additions & 15 deletions notebooks/learn_demo_gibbs.ipynb
Expand Up @@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook shows how to learn a CStree from observational data using McMC (Gibbs) sampling from the posterior order distribution. "
"This notebook shows how to learn a CStree from observational data using MCMC (Gibbs) sampling to approximate the posterior order distribution."
]
},
{
Expand Down Expand Up @@ -1170,28 +1170,21 @@
},
{
"cell_type": "code",
"execution_count": 80,
"execution_count": 94,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Depth=1, working on node 3: 100%|██████████| 4/4 [00:00<00:00, 4277.72it/s]"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Depth=1, working on node 3: 100%|██████████| 4/4 [00:00<00:00, 1642.57it/s]"
"Depth=1, working on node 3: 100%|██████████| 4/4 [00:00<00:00, 1812.38it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Possible context vararibles per node: {'a': [], 'b': ['d'], 'c': ['d'], 'd': ['b', 'c']}\n"
"Possible context variables per node: {'a': [], 'b': ['d'], 'c': ['d'], 'd': ['b', 'c']}\n"
]
},
{
Expand All @@ -1205,7 +1198,7 @@
"source": [
"pcgraph = pc(x[1:].values, 0.05, \"chisq\", node_names=x.columns)\n",
"poss_cvars = ctl.causallearn_graph_to_posscvars(pcgraph, labels=x.columns)\n",
"print(\"Possible context vararibles per node:\", poss_cvars)"
"print(\"Possible context variables per node:\", poss_cvars)"
]
},
{
Expand Down Expand Up @@ -1280,7 +1273,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We plot the first 200 interations of the order score trajectory."
"We plot the first 200 iterations of the order score trajectory."
]
},
{
Expand Down Expand Up @@ -1314,7 +1307,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To further investigate the McMC trajectory we find the individual variables positions/levels in the orders and plot these. This gives an overview of the mixing properties of the sampler and if the trjaectory seems to have converged."
"To further investigate the MCMC trajectory, we find the individual variables' positions/levels in the orders and plot these. This gives an overview of the mixing properties of the sampler and whether the trajectory seems to have converged."
]
},
{
Expand All @@ -1332,7 +1325,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Assuming that the majority of praobability mass will center around the order corresponding to the true CStree (though there could be many or them), the a-line should converge to 0 (i.e. position 0), the b-line should converge to 1, and so on."
"Assuming that the majority of probability mass is centered around the order corresponding to the true CStree (though there could be many of them), the a-line should converge to 0 (i.e. position 0), the b-line should converge to 1, and so on."
]
},
{
Expand Down