diff --git a/atom.xml b/atom.xml
index 8ad1724..69299ad 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
 FLOW Lab
-2025-01-09T15:40:54-07:00
+2025-01-11T15:43:50-07:00
 http://flow.byu.edu
 Andrew Ning
diff --git a/feed.xml b/feed.xml
index 89574db..913a9aa 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-Jekyll2025-01-09T15:40:54-07:00http://flow.byu.edu/feed.xmlFLOW LabFlight, Optimization, and Wind LaboratoryAndrew NingOptimization Book Available2021-10-15T00:00:00-06:002021-10-15T00:00:00-06:00http://flow.byu.edu/posts/opt-book]]>Andrew NingBEM Paper2021-07-30T00:00:00-06:002021-07-30T00:00:00-06:00http://flow.byu.edu/posts/bem-paper]]>Andrew NingEduardo Research Update2021-03-01T00:00:00-07:002021-03-01T00:00:00-07:00http://flow.byu.edu/posts/eduardo-latest]]>Eduardo AlvarezReformulated VPM2021-02-01T00:00:00-07:002021-02-01T00:00:00-07:00http://flow.byu.edu/posts/reformulated-vpm]]>Eduardo AlvarezOptimization Book Announcement2021-01-20T00:00:00-07:002021-01-20T00:00:00-07:00http://flow.byu.edu/posts/optimization-book]]>Andrew NingPJ Defense2020-09-30T00:00:00-06:002020-09-30T00:00:00-06:00http://flow.byu.edu/posts/pj-defense]]>Andrew NingVPM Paper2020-08-30T00:00:00-06:002020-08-30T00:00:00-06:00http://flow.byu.edu/posts/vpm-paper]]>Eduardo AlvarezAirborne Wind with Vortex Particle Method2020-08-16T00:00:00-06:002020-08-16T00:00:00-06:00http://flow.byu.edu/posts/wind-harvesting]]>Judd Mehr and Eduardo AlvarezEduardo’s Past Three Years2020-08-10T00:00:00-06:002020-08-10T00:00:00-06:00http://flow.byu.edu/posts/eduardo-three-years]]>Eduardo AlvarezFLOWUnsteady in Google Drive2020-08-02T00:00:00-06:002020-08-02T00:00:00-06:00http://flow.byu.edu/posts/google-drive-vpm]]>Eduardo Alvarez
\ No newline at end of file
+Jekyll2025-01-11T15:43:50-07:00http://flow.byu.edu/feed.xmlFLOW LabFlight, Optimization, and Wind LaboratoryAndrew NingOptimization Book Available2021-10-15T00:00:00-06:002021-10-15T00:00:00-06:00http://flow.byu.edu/posts/opt-book]]>Andrew NingBEM Paper2021-07-30T00:00:00-06:002021-07-30T00:00:00-06:00http://flow.byu.edu/posts/bem-paper]]>Andrew NingEduardo Research Update2021-03-01T00:00:00-07:002021-03-01T00:00:00-07:00http://flow.byu.edu/posts/eduardo-latest]]>Eduardo AlvarezReformulated VPM2021-02-01T00:00:00-07:002021-02-01T00:00:00-07:00http://flow.byu.edu/posts/reformulated-vpm]]>Eduardo AlvarezOptimization Book Announcement2021-01-20T00:00:00-07:002021-01-20T00:00:00-07:00http://flow.byu.edu/posts/optimization-book]]>Andrew NingPJ Defense2020-09-30T00:00:00-06:002020-09-30T00:00:00-06:00http://flow.byu.edu/posts/pj-defense]]>Andrew NingVPM Paper2020-08-30T00:00:00-06:002020-08-30T00:00:00-06:00http://flow.byu.edu/posts/vpm-paper]]>Eduardo AlvarezAirborne Wind with Vortex Particle Method2020-08-16T00:00:00-06:002020-08-16T00:00:00-06:00http://flow.byu.edu/posts/wind-harvesting]]>Judd Mehr and Eduardo AlvarezEduardo’s Past Three Years2020-08-10T00:00:00-06:002020-08-10T00:00:00-06:00http://flow.byu.edu/posts/eduardo-three-years]]>Eduardo AlvarezFLOWUnsteady in Google Drive2020-08-02T00:00:00-06:002020-08-02T00:00:00-06:00http://flow.byu.edu/posts/google-drive-vpm]]>Eduardo Alvarez
\ No newline at end of file
diff --git a/me595r/schedule/hw1/index.html b/me595r/schedule/hw1/index.html
index 2dd6845..71faf80 100644
--- a/me595r/schedule/hw1/index.html
+++ b/me595r/schedule/hw1/index.html
@@ -97,19 +97,27 @@

HW 1: Vanilla NN

Download the Auto MPG dataset, specifically the file auto-mpg.data. The auto-mpg.names file describes each of the 9 columns. Our goal will be to use this data to predict an automobile’s mpg as a function of the other parameters (except for “car name”, which we won’t need).

-First, some data preparation. Some of the rows have missing values —eliminate those rows from your dataset. Next, randomly separate the data into a training set and a testing set (with an 80/20 train/test split). Finally, normalize the inputs of each column using a standard normal distribution:
+First, some data preparation. Some of the rows have missing values; eliminate those rows from your dataset (you can do that beforehand or just in a loop when you read in the file). Next, normalize the inputs of each column using a standard normal distribution:

\[\hat{x_i} = \frac{x_i - \mu_{x_i}}{\sigma_{x_i}}\]

-where \(\mu\) and \(\sigma\) are the mean and standard deviation respectively of the column. It is generally desirable for the input data to be zero-centered (especially with zero-centered activation functions) and also helps avoid biasing weights towards a particular sign. It is also usually helpful to normalize so that we don’t bias the influence of some parameters over others just because of their magnitude (i.e., a unit choice). It is often also helpful to normalize the targets (mpg), though if you do, remember to unnormalize them when plotting/printing results to get back the actual mpg values.
+where \(\mu\) and \(\sigma\) are the mean and standard deviation, respectively, of the column.
+It is generally desirable for the input data to be zero-centered, which also helps avoid biasing the weights towards a particular sign. It is also usually helpful to normalize so that we don’t bias the influence of some parameters over others just because of their magnitude (i.e., a unit choice). It is often also helpful to normalize the targets (mpg), though if you do, remember to unnormalize them when plotting/printing results to get back the actual mpg values.
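
For concreteness, a minimal data-preparation sketch in Python/NumPy. The 9-column layout and the use of ? for missing horsepower values come from auto-mpg.names; the variable names and reading approach are just one illustrative option:

import numpy as np

# Read auto-mpg.data, skipping rows with missing values (marked "?").
# Columns: mpg, cylinders, displacement, horsepower, weight,
# acceleration, model year, origin, car name (ignored).
rows = []
with open("auto-mpg.data") as f:
    for line in f:
        fields = line.split()[:8]  # drop the quoted car name at the end
        if "?" in fields:
            continue  # skip rows with missing values
        rows.append([float(v) for v in fields])

data = np.array(rows)
y = data[:, 0]   # target: mpg
X = data[:, 1:]  # inputs: the remaining 7 numeric columns

# Normalize each input column: x_hat = (x - mean) / std
Xmu, Xsig = X.mean(axis=0), X.std(axis=0)
X = (X - Xmu) / Xsig

# Optionally normalize the targets too; keep mu/sigma to unnormalize later.
ymu, ysig = y.mean(), y.std()
y = (y - ymu) / ysig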

-Setup a neural net with two hidden layers. You’ll need to experiment some with different layer widths, activation functions, batch sizes, learning rates, and number of epochs. You should be able to get down to an average absolute error of about 2 mpg on the test set. You’ll see some scatter in the results for sure, but that’s still pretty decent predictive capability given how little data we have (and no physics).
+Finally, randomly separate the data into a training set and a testing set (with an 80/20 train/test split).
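
One way to do the random 80/20 split, continuing the NumPy sketch above (the seed is arbitrary):

# Random 80/20 train/test split.
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
ntrain = int(0.8 * len(X))
Xtrain, Xtest = X[idx[:ntrain]], X[idx[ntrain:]]
ytrain, ytest = y[idx[:ntrain]], y[idx[ntrain:]]

Strictly speaking, computing the normalization statistics from all of the data before splitting leaks a little information into the test set; computing them from the training set alone is the more careful approach, though it makes little difference on a dataset this small.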

-Plot the following:
+Set up a neural net with two layers (just one hidden layer). You’ll need to experiment some with different layer widths, activation functions, batch sizes, learning rates, and number of epochs. You should be able to get an average absolute error of under 2 mpg on the test set. You’ll see some scatter in the results for sure, but that’s still pretty decent predictive capability given how little data we have (and no physics).
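
A minimal training sketch, assuming PyTorch and continuing the arrays above. The hidden width (64), ReLU activation, Adam with lr=1e-3, batch size of 32, and 500 epochs are illustrative starting points, not tuned values:

import torch
from torch import nn

# One hidden layer; all hyperparameters here are starting guesses.
model = nn.Sequential(nn.Linear(7, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lossfn = nn.MSELoss()

Xtr = torch.tensor(Xtrain, dtype=torch.float32)
ytr = torch.tensor(ytrain, dtype=torch.float32).reshape(-1, 1)
Xte = torch.tensor(Xtest, dtype=torch.float32)

for epoch in range(500):
    perm = torch.randperm(len(Xtr))
    for i in range(0, len(Xtr), 32):  # mini-batches of 32
        batch = perm[i:i + 32]
        opt.zero_grad()
        loss = lossfn(model(Xtr[batch]), ytr[batch])
        loss.backward()
        opt.step()

# Evaluate: unnormalize predictions so the error is in actual mpg.
with torch.no_grad():
    pred = model(Xte).numpy().ravel() * ysig + ymu
mae = np.abs(pred - (ytest * ysig + ymu)).mean()
print(f"test MAE: {mae:.2f} mpg")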

+
+To turn in:

+
+
+Not required stuff: