diff --git a/feed.xml b/feed.xml
index 89574db..913a9aa 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-Jekyll2025-01-09T15:40:54-07:00http://flow.byu.edu/feed.xmlFLOW LabFlight, Optimization, and Wind LaboratoryAndrew NingOptimization Book Available2021-10-15T00:00:00-06:002021-10-15T00:00:00-06:00http://flow.byu.edu/posts/opt-book]]>Andrew NingBEM Paper2021-07-30T00:00:00-06:002021-07-30T00:00:00-06:00http://flow.byu.edu/posts/bem-paper]]>Andrew NingEduardo Research Update2021-03-01T00:00:00-07:002021-03-01T00:00:00-07:00http://flow.byu.edu/posts/eduardo-latest]]>Eduardo AlvarezReformulated VPM2021-02-01T00:00:00-07:002021-02-01T00:00:00-07:00http://flow.byu.edu/posts/reformulated-vpm]]>Eduardo AlvarezOptimization Book Announcement2021-01-20T00:00:00-07:002021-01-20T00:00:00-07:00http://flow.byu.edu/posts/optimization-book]]>Andrew NingPJ Defense2020-09-30T00:00:00-06:002020-09-30T00:00:00-06:00http://flow.byu.edu/posts/pj-defense]]>Andrew NingVPM Paper2020-08-30T00:00:00-06:002020-08-30T00:00:00-06:00http://flow.byu.edu/posts/vpm-paper]]>Eduardo AlvarezAirborne Wind with Vortex Particle Method2020-08-16T00:00:00-06:002020-08-16T00:00:00-06:00http://flow.byu.edu/posts/wind-harvesting]]>Judd Mehr and Eduardo AlvarezEduardo’s Past Three Years2020-08-10T00:00:00-06:002020-08-10T00:00:00-06:00http://flow.byu.edu/posts/eduardo-three-years]]>Eduardo AlvarezFLOWUnsteady in Google Drive2020-08-02T00:00:00-06:002020-08-02T00:00:00-06:00http://flow.byu.edu/posts/google-drive-vpm]]>Eduardo Alvarez
\ No newline at end of file
+Jekyll2025-01-11T15:43:50-07:00http://flow.byu.edu/feed.xmlFLOW LabFlight, Optimization, and Wind LaboratoryAndrew NingOptimization Book Available2021-10-15T00:00:00-06:002021-10-15T00:00:00-06:00http://flow.byu.edu/posts/opt-book]]>Andrew NingBEM Paper2021-07-30T00:00:00-06:002021-07-30T00:00:00-06:00http://flow.byu.edu/posts/bem-paper]]>Andrew NingEduardo Research Update2021-03-01T00:00:00-07:002021-03-01T00:00:00-07:00http://flow.byu.edu/posts/eduardo-latest]]>Eduardo AlvarezReformulated VPM2021-02-01T00:00:00-07:002021-02-01T00:00:00-07:00http://flow.byu.edu/posts/reformulated-vpm]]>Eduardo AlvarezOptimization Book Announcement2021-01-20T00:00:00-07:002021-01-20T00:00:00-07:00http://flow.byu.edu/posts/optimization-book]]>Andrew NingPJ Defense2020-09-30T00:00:00-06:002020-09-30T00:00:00-06:00http://flow.byu.edu/posts/pj-defense]]>Andrew NingVPM Paper2020-08-30T00:00:00-06:002020-08-30T00:00:00-06:00http://flow.byu.edu/posts/vpm-paper]]>Eduardo AlvarezAirborne Wind with Vortex Particle Method2020-08-16T00:00:00-06:002020-08-16T00:00:00-06:00http://flow.byu.edu/posts/wind-harvesting]]>Judd Mehr and Eduardo AlvarezEduardo’s Past Three Years2020-08-10T00:00:00-06:002020-08-10T00:00:00-06:00http://flow.byu.edu/posts/eduardo-three-years]]>Eduardo AlvarezFLOWUnsteady in Google Drive2020-08-02T00:00:00-06:002020-08-02T00:00:00-06:00http://flow.byu.edu/posts/google-drive-vpm]]>Eduardo Alvarez
\ No newline at end of file
diff --git a/me595r/schedule/hw1/index.html b/me595r/schedule/hw1/index.html
index 2dd6845..71faf80 100644
--- a/me595r/schedule/hw1/index.html
+++ b/me595r/schedule/hw1/index.html
@@ -97,19 +97,27 @@
HW 1: Vanilla NN
Download the Auto MPG dataset, specifically the file auto-mpg.data. The auto-mpg.names file describes each of the 9 columns. Our goal will be to use this data to predict an automobile’s mpg as a function of the other parameters (except for “car name”, which we won’t need).
-
First, some data preparation. Some of the rows have missing values —eliminate those rows from your dataset. Next, randomly separate the data into a training set and a testing set (with an 80/20 train/test split). Finally, normalize the inputs of each column using a standard normal distribution:
+
First, some data preparation. Some of the rows have missing values; eliminate those rows from your dataset (you can do that beforehand or just in a loop when you read in the file). Next, normalize the inputs of each column using a standard normal distribution:
where \(\mu\) and \(\sigma\) are the mean and standard deviation respectively of the column. It is generally desirable for the input data to be zero-centered (especially with zero-centered activation functions) and also helps avoid biasing weights towards a particular sign. It is also usually helpful to normalize so that we don’t bias the influence of some parameters over others just because of their magnitude (i.e., a unit choice). It is often also helpful to normalize the targets (mpg), though if you do, remember to unnormalize them when plotting/printing results to get back the actual mpg values.
+
\[ \hat{x} = \frac{x - \mu}{\sigma} \]
where \(\mu\) and \(\sigma\) are the mean and standard deviation, respectively, of the column.
It is generally desirable for the input data to be zero-centered, which also helps avoid biasing the weights towards a particular sign. It is also usually helpful to normalize so that we don’t bias the influence of some parameters over others just because of their magnitude (i.e., a unit choice). It is often also helpful to normalize the targets (mpg), though if you do, remember to unnormalize them when plotting/printing results to get back the actual mpg values.
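As a minimal sketch of this preparation step, assuming Python with pandas and numpy (the assignment doesn't prescribe a toolset, and all variable names here are mine; per auto-mpg.names, missing values are marked with "?"):

import numpy as np
import pandas as pd

# column names from auto-mpg.names (the car name is discarded below)
cols = ["mpg", "cylinders", "displacement", "horsepower", "weight",
        "acceleration", "model_year", "origin"]
# the quoted car name follows a tab, so comment="\t" drops it;
# na_values="?" converts the missing-value markers to NaN
df = pd.read_csv("auto-mpg.data", names=cols, na_values="?",
                 sep=" ", comment="\t", skipinitialspace=True)
df = df.dropna()  # eliminate rows with missing values

X = df[cols[1:]].to_numpy(dtype=np.float64)  # inputs (7 columns)
y = df["mpg"].to_numpy(dtype=np.float64)     # targets

# z-score normalization, column by column
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma
ymu, ysigma = y.mean(), y.std()
yn = (y - ymu) / ysigma  # optional target normalization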
-
Setup a neural net with two hidden layers. You’ll need to experiment some with different layer widths, activation functions, batch sizes, learning rates, and number of epochs. You should be able to get down to an average absolute error of about 2 mpg on the test set. You’ll see some scatter in the results for sure, but that’s still pretty decent predictive capability given how little data we have (and no physics).
+
Finally, randomly separate the data into a training set and a testing set (with an 80/20 train/test split).
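One possible way to do the split, continuing the sketch above (the seed and variable names are arbitrary):

rng = np.random.default_rng(0)   # fixed seed, for reproducibility
idx = rng.permutation(len(Xn))   # shuffle the row indices
ntrain = int(0.8 * len(Xn))      # 80/20 train/test split
Xtrain, Xtest = Xn[idx[:ntrain]], Xn[idx[ntrain:]]
ytrain, ytest = yn[idx[:ntrain]], yn[idx[ntrain:]]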
-
Plot the following:
+
Set up a neural net with two layers (just one hidden layer). You’ll need to experiment some with different layer widths, activation functions, batch sizes, learning rates, and number of epochs. You should be able to get an average absolute error of under 2 mpg on the test set. You’ll see some scatter in the results for sure, but that’s still pretty decent predictive capability given how little data we have (and no physics).
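A minimal sketch of such a network, assuming PyTorch (any framework works); the width, activation, learning rate, batch size, and epoch count shown are starting guesses to tune, not prescribed values:

import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(7, 32),   # 7 inputs -> hidden layer (width is a guess)
    nn.Tanh(),          # activation: experiment with alternatives
    nn.Linear(32, 1),   # hidden -> predicted (normalized) mpg
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

Xt = torch.tensor(Xtrain, dtype=torch.float32)
yt = torch.tensor(ytrain, dtype=torch.float32).reshape(-1, 1)

for epoch in range(500):                 # number of epochs: tune
    perm = torch.randperm(len(Xt))       # reshuffle each epoch
    for i in range(0, len(Xt), 32):      # batch size of 32: tune
        batch = perm[i:i + 32]
        loss = loss_fn(model(Xt[batch]), yt[batch])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()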
+
+
To turn in:
+
+
Plot the objective (mean squared error) at each iteration for both the training and testing sets on the same plot.
+
Once training is complete, report your average absolute error on the test set in mpg (again you should be able to get under 2; we’ll say under 2.25 to give some buffer). A sketch of this calculation follows the list.
+
+
+
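A sketch of the error calculation, continuing the assumptions above (the per-epoch MSE histories for the plot can be accumulated inside the training loop by evaluating the loss on the full training and test sets each epoch):

Xs = torch.tensor(Xtest, dtype=torch.float32)
with torch.no_grad():
    pred = model(Xs).numpy().squeeze() * ysigma + ymu  # unnormalize to mpg
actual = ytest * ysigma + ymu                          # unnormalize targets
mae = np.abs(pred - actual).mean()
print(f"test-set average absolute error: {mae:.2f} mpg")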
Not required stuff:
-
The objective (mean squared error) across each iteration for both the training and testing set on the same plot.
-
The average absolute error of the test set across each iteration.
-
Plot the actual mpg and the model predictions, for the test set, against each other. Plot also a straight diagonal line for reference (the line of perfect predictions).
+
Note that while MSE is a reasonable objective, it is not always the most intuitive to interpret, so it can be helpful to plot other quantities (in this case, mean absolute error).
+
One way to look at predictive capability is to plot the actual mpg and the predictions, both from the test set, against each other. Also plot a straight diagonal line for reference (the line of perfect predictions).
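For instance, a sketch assuming matplotlib and the variables defined above:

import matplotlib.pyplot as plt

plt.scatter(actual, pred, alpha=0.7)   # predicted vs. actual mpg
lims = [actual.min(), actual.max()]
plt.plot(lims, lims, "k--")            # line of perfect predictions
plt.xlabel("actual mpg")
plt.ylabel("predicted mpg")
plt.show()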