Commit 4bdcb44

Update README.md
1 parent 9edfd90 commit 4bdcb44

File tree

1 file changed: +6 −6 lines changed


README.md (+6 −6)
@@ -52,15 +52,15 @@ The following formula shows that the 1-H(X) formula, where H() is entropy and X
 of containing an accurate label, can be expressed as the Bayesian information gain, or KL divergence
 between the posterior P and uniform prior Q:
 
-$$D_{\text{KL}}(P \parallel Q) = p \log_2\left(\frac{p}{q}\right) + (1-p) \log_2\left(\frac{1-p}{1-q}\right) = $$
-$$p \log_2\left(\frac{p}{0.5}\right) + (1-p) \log_2\left(\frac{1-p}{0.5}\right) =$$
-$$p \left(\log_2(p) - \log_2(0.5)\right) + (1-p) \left(\log_2(1-p) - \log_2(0.5)\right) =$$
-$$p \log_2(p) + p + (1-p) \log_2(1-p) + (1-p) =$$
-$$p \log_2(p) + (1-p) \log_2(1-p) + 1 = $$
+$$D_{\text{KL}}(P \parallel Q) = p * \log_2\left(\frac{p}{q}\right) + (1-p) * \log_2\left(\frac{1-p}{1-q}\right) =$$
+$$p * \log_2\left(\frac{p}{0.5}\right) + (1-p) * \log_2\left(\frac{1-p}{0.5}\right) =$$
+$$p * \left(\log_2(p) - \log_2(0.5)\right) + (1-p) * \left(\log_2(1-p) - \log_2(0.5)\right) =$$
+$$p * \log_2(p) + p + (1-p) * \log_2(1-p) + (1-p) =$$
+$$p * \log_2(p) + (1-p) * \log_2(1-p) + 1 = $$
 $$ 1 - H(X) $$
 
 To derive a label's probability X of containing a positive label, we can plug the labels' TPR
-p(b|a) and FPR p(b|\neg a) into Bayes's rule, again using the uniform prior assumption.
+$p(b|a)$ and FPR $p(b|\neg a)$ into Bayes's rule, again using the uniform prior assumption.
 
 $$X = p(a|b) = \frac{p(b|a)*p(a)}{p(b)} = $$
 $$\frac{p(b|a)*p(a)}{p(b|a)*p(a) + p(b|\neg a)*p(\neg a)} = $$
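As a quick numerical check of the two derivations in this hunk, the short Python sketch below confirms that $D_{\text{KL}}(P \parallel Q)$ with a uniform prior $q = 0.5$ equals $1 - H(X)$ for a binary variable, and evaluates the Bayes-rule posterior $X = p(a|b)$ from a TPR and FPR. It is illustrative only: the function names and the example TPR/FPR values are assumptions, not taken from the repository.

```python
import math

def info_gain_vs_entropy(p: float, q: float = 0.5) -> None:
    """Compare D_KL(P || Q) against 1 - H(X) for a binary variable with uniform prior q."""
    d_kl = p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    print(f"p = {p:.2f}   D_KL = {d_kl:.4f}   1 - H(X) = {1 - h:.4f}")

def bayes_posterior(tpr: float, fpr: float, prior: float = 0.5) -> float:
    """X = p(a|b) = p(b|a) p(a) / (p(b|a) p(a) + p(b|~a) p(~a)), uniform prior by default."""
    return (tpr * prior) / (tpr * prior + fpr * (1 - prior))

if __name__ == "__main__":
    for p in (0.6, 0.8, 0.95):
        info_gain_vs_entropy(p)  # the two printed columns should agree
    # Hypothetical labeler with TPR = 0.9 and FPR = 0.2 (illustrative values only)
    print(f"X = {bayes_posterior(0.9, 0.2):.4f}")  # -> 0.8182
```

Both checks use base-2 logarithms, so the entropy is in bits, matching the $\log_2$ used throughout the README.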
