Commit fb16ea2

* changes in ML track
* changes
* Update soa/tracks/ml/8.md

Co-authored-by: Arjoonn <[email protected]>

1 parent 73a3e2c
Showing 8 changed files with 161 additions and 107 deletions.
<a href='https://t.me/ml_code_for_100_days'><button>Discuss on telegram</button></a>

# ML Track

Welcome to the ML track. We hope you're really excited for this.
For starters we'll brush up your Python skills. This includes your understanding of the following:

- [Numpy](https://numpy.org/)
- [Pandas](https://pandas.pydata.org/)
- [Matplotlib](https://matplotlib.org/)
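
For a quick warm-up before opening the notebook, here's a minimal sketch (with a small made-up DataFrame; the names and values are illustrative, not from the course material) that touches all three libraries:

```python
# Warm-up: NumPy arrays, a Pandas DataFrame, and a Matplotlib plot.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# A tiny DataFrame built from NumPy arrays (values made up for illustration)
df = pd.DataFrame({
    'height_cm': np.array([150, 160, 170, 180]),
    'weight_kg': np.array([50, 60, 70, 80]),
})

print(df.describe())   # summary statistics for each column

df.plot(kind='scatter', x='height_cm', y='weight_kg')  # Pandas plots via Matplotlib
plt.show()
```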

**Click [here](https://github.com/kabirnagpal/SoA-ML-14/blob/master/week%201.ipynb) to view the Jupyter-Notebook.**
If you don't have any Python environment, you can also try the code in [Google Colab](https://colab.research.google.com/).

We hope you've gone through the code and other resources provided along. Let's wind it up with a quick question.
How do you get the mean of each column in a DataFrame named `df`?
Please write the full command. (The answer is case sensitive.)

<form method='POST'>
<input name='answer'>
<input type='submit' value='Submit'>
<code class='code_checker'>
def answer(s):
    return s == 'df.mean()'
</code>
</form>
<a href='https://t.me/ml_code_for_100_days'><button>Discuss on telegram</button></a>

# ML Track - Week 3

Congratulations for making it up to here!
Now that we've completed the preprocessing methods, we can start with Machine Learning algorithms.
We'll start with **Regression**.
Regression analysis is a supervised method used to predict a **continuous** dependent variable from one or more independent variables.
This week will require you to have prior knowledge of linear, quadratic and polynomial equations.

This week we'll learn about:

1. Linear Regression
2. Multiple Linear Regression
3. Polynomial Regression

**Click [here](https://github.com/kabirnagpal/SoA-ML-14/blob/master/week%203.ipynb) to view the Jupyter-Notebook.**
If you don't have any Python environment, you can also try the code in [Google Colab](https://colab.research.google.com/).
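
If you'd like to try the core idea right away, here's a minimal sketch (toy data generated for illustration) that fits a linear regression and scores it with the mean squared error:

```python
# Fit a straight line to noisy toy data and measure the fit with MSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Toy data: y is roughly 3x + 2 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 * X.ravel() + 2 + rng.normal(scale=0.5, size=50)

model = LinearRegression().fit(X, y)   # learns slope and intercept
predictions = model.predict(X)

print(model.coef_, model.intercept_)       # close to [3] and 2
print(mean_squared_error(y, predictions))  # lower is better
```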

We hope you've gone through the code and other resources provided along. Let's wind it up with a quick question.
`mean_squared_error` is a function from which module in Sklearn?

<form method='POST'>
<input name='answer'>
<input type='submit' value='Submit'>
<code class='code_checker'>
def answer(s):
    return s.lower() == 'metrics'
</code>
</form>
<a href='https://t.me/ml_code_for_100_days'><button>Discuss on telegram</button></a>

# ML Track - Week 5

Congratulations! You have come midway!
Now, let's learn how well or how badly our model is performing, and why.

Topics covered in this week:

1. Underfitting
2. Overfitting
3. Bias-Variance Trade-off
4. Regularization
5. Support Vector Machine

**Click [here](https://github.com/kabirnagpal/SoA-ML-14/blob/master/week%205.ipynb) to view the Jupyter-Notebook.**
If you don't have any Python environment, you can also try the code in [Google Colab](https://colab.research.google.com/).
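
To see over- and underfitting concretely, here's a minimal sketch (toy data and alpha values chosen for illustration) in which the regularization strength of Ridge regression moves a flexible model between the two extremes:

```python
# Vary Ridge's alpha on high-degree polynomial features:
# tiny alpha -> overfitting, huge alpha -> underfitting.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in (1e-4, 1.0, 1e4):
    model = make_pipeline(PolynomialFeatures(degree=10), Ridge(alpha=alpha))
    model.fit(X_train, y_train)
    # A large gap between train and test scores signals overfitting
    print(alpha, model.score(X_train, y_train), model.score(X_test, y_test))
```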

We hope you've gone through the code and other resources provided along. Let's wind it up with a quick question.
In terms of the bias-variance trade-off, which of the following is/are substantially more harmful to the test error than the training error? (Input the correct option number.)

1. Bias
2. Loss
3. Variance
4. Risk

<form method='POST'>
<input name='answer'>
<input type='submit' value='Submit'>
<code class='code_checker'>
def answer(s):
    return s == '3'
</code>
</form>
<a href='https://t.me/ml_code_for_100_days'><button>Discuss on telegram</button></a>

# ML Track - Week 7

Congratulations for making it up to here!

This week will introduce you to dimensionality reduction techniques and model selection strategies like k-fold cross validation, grid search and stacking.

Dimensionality reduction means reducing the number of features (columns) in a given dataset. Imagine working with a dataset with nearly 20000 features. Having so many features makes it problematic to draw insights from the data. It's not feasible to analyze each and every variable at a microscopic level. Hence, we use dimensionality reduction techniques.
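
For instance, here's a minimal sketch (random data standing in for a wide dataset) that compresses 20 features down to 2 with PCA, plus the kernelized variant you'll meet in the notebook:

```python
# Reduce a 20-feature dataset to 2 components, linearly and with a kernel.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))   # 100 samples, 20 features

X_pca = PCA(n_components=2).fit_transform(X)
X_kpca = KernelPCA(n_components=2, kernel='rbf').fit_transform(X)
print(X_pca.shape, X_kpca.shape)  # both (100, 2)
```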

Model selection, on the other hand, is the process of selecting one final machine learning model from among a collection of candidate machine learning models for a training dataset.
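
A minimal sketch of that process (iris data and a small assumed parameter grid) combining k-fold cross validation with grid search:

```python
# Pick an SVM's hyperparameters by 5-fold cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    SVC(),
    param_grid={'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']},
    cv=5,  # 5-fold cross validation
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```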

Let's start then, shall we?

**Click [here](https://github.com/kabirnagpal/SoA-ML-14/blob/master/week%207.ipynb) to view the Jupyter-Notebook.**
If you don't have any Python environment, you can also try the code in [Google Colab](https://colab.research.google.com/).

We hope you've gone through the code and other resources provided along. Let's wind it up with a quick question.
Kernel PCA cannot be used for non-linear data. (True / False)

<form method='POST'>
<input name='answer'>
<input type='submit' value='Submit'>
<code class='code_checker'>
def answer(s):
    return s.lower() == 'false'
</code>
</form>
<a href='https://t.me/ml_code_for_100_days'><button>Discuss on telegram</button></a>

# ML Track - Week 8

Now, let us boost our learning.
This week will introduce you to algorithms that can boost the accuracy of your model.
Boosting considers many models placed sequentially, where each model tries to minimize the error obtained from the previous model.

We'll learn about:

1. Gradient Boosting Algorithm
2. Extreme Gradient Boosting (XGBoost) Algorithm
3. AdaBoost Algorithm
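
Here's a minimal sketch (iris data, mostly default settings) comparing two of these boosters as implemented in scikit-learn:

```python
# Cross-validate AdaBoost and Gradient Boosting on a small dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

for clf in (AdaBoostClassifier(n_estimators=100),
            GradientBoostingClassifier(n_estimators=100)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean().round(3))
```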

**Click [here](https://github.com/kabirnagpal/SoA-ML-14/blob/master/week%208.ipynb) to view the Jupyter-Notebook.**
If you don't have any Python environment, you can also try the code in [Google Colab](https://colab.research.google.com/).

We hope you've gone through the code and other resources provided along. Let's wind it up with a quick question.
Which of the following algorithms is not an example of an ensemble learning algorithm?

1. Random Forest
2. Adaboost
3. Extra Trees
4. Gradient Boosting
5. Decision Trees

<form method='POST'>
<input name='answer'>
<input type='submit' value='Submit'>
<code class='code_checker'>
def answer(s):
    return s == '5'
</code>
</form>