Locally connected layer #201

Merged: 30 commits, Mar 14, 2025
Commits (changes shown from all commits):
b5e7f74  changing reshape layer (ricor07, Feb 16, 2025)
cf2caf6  added tests; to note that they don't work (ricor07, Feb 16, 2025)
a4d5cc8  Merge remote-tracking branch 'upstream/main' (ricor07, Feb 16, 2025)
eb4079d  Now reshape2d works, maxpool still not (ricor07, Feb 17, 2025)
6842f3a  Saving changes before rebasing (ricor07, Feb 22, 2025)
c7e682b  Merge remote-tracking branch 'upstream/main' (ricor07, Feb 22, 2025)
b942637  Resolved merge conflicts (ricor07, Feb 22, 2025)
9a4f710  Merging (ricor07, Feb 22, 2025)
5d62b13  Bug fixed; Added conv1d; Conv1d and maxpool backward still not working (ricor07, Feb 22, 2025)
a08fba0  Bug fixes; now everything works (ricor07, Feb 23, 2025)
52f958f  Updated the comments (ricor07, Feb 23, 2025)
b64038a  Implemented locally connected 1d (ricor07, Feb 23, 2025)
9082db8  Bug fix (ricor07, Feb 23, 2025)
d1cffae  Bug fix (ricor07, Feb 24, 2025)
a055b20  New bugs (ricor07, Feb 25, 2025)
c6b4d87  Bug fix (ricor07, Feb 25, 2025)
2e64151  Definitive bug fixes (ricor07, Feb 25, 2025)
2b7c548  Adding jvdp1's review (ricor07, Feb 25, 2025)
be5bb76  Implemented OneAdder's suggestions (ricor07, Feb 25, 2025)
33a6549  Deleting useless variables (ricor07, Feb 25, 2025)
b69ba9a  again (ricor07, Feb 25, 2025)
d4a87e2  Update src/nf/nf_conv1d_layer_submodule.f90 (ricor07, Feb 27, 2025)
8e923a0  Resolve conflicts with main (milancurcic, Mar 5, 2025)
95c4a2a  locally_connected_1d -> locally_connected1d (milancurcic, Mar 13, 2025)
f3daf43  Update features table (milancurcic, Mar 13, 2025)
7819b55  Fix CMakeLists (milancurcic, Mar 13, 2025)
5b46efb  Fix CmakeListst (milancurcic, Mar 13, 2025)
473fcf3  Another one (milancurcic, Mar 13, 2025)
174a421  Tidy up (milancurcic, Mar 13, 2025)
a486648  Acknowledge contributors (milancurcic, Mar 14, 2025)
8 changes: 8 additions & 0 deletions CMakeLists.txt
@@ -18,6 +18,8 @@ add_library(neural-fortran
src/nf.f90
src/nf/nf_activation.f90
src/nf/nf_base_layer.f90
+src/nf/nf_conv1d_layer.f90
+src/nf/nf_conv1d_layer_submodule.f90
src/nf/nf_conv2d_layer.f90
src/nf/nf_conv2d_layer_submodule.f90
src/nf/nf_cross_attention_layer.f90
@@ -41,12 +43,16 @@ add_library(neural-fortran
src/nf/nf_layernorm_submodule.f90
src/nf/nf_layer.f90
src/nf/nf_layer_submodule.f90
+src/nf/nf_locally_connected1d_layer_submodule.f90
+src/nf/nf_locally_connected1d_layer.f90
src/nf/nf_linear2d_layer.f90
src/nf/nf_linear2d_layer_submodule.f90
src/nf/nf_embedding_layer.f90
src/nf/nf_embedding_layer_submodule.f90
src/nf/nf_loss.f90
src/nf/nf_loss_submodule.f90
+src/nf/nf_maxpool1d_layer.f90
+src/nf/nf_maxpool1d_layer_submodule.f90
src/nf/nf_maxpool2d_layer.f90
src/nf/nf_maxpool2d_layer_submodule.f90
src/nf/nf_metrics.f90
@@ -60,6 +66,8 @@ add_library(neural-fortran
src/nf/nf_random.f90
src/nf/nf_reshape_layer.f90
src/nf/nf_reshape_layer_submodule.f90
+src/nf/nf_reshape2d_layer.f90
+src/nf/nf_reshape2d_layer_submodule.f90
src/nf/nf_self_attention_layer.f90
src/nf/io/nf_io_binary.f90
src/nf/io/nf_io_binary_submodule.f90
10 changes: 7 additions & 3 deletions README.md
@@ -33,16 +33,18 @@ Read the paper [here](https://arxiv.org/abs/1902.06714).
| Embedding | `embedding` | n/a | 2 | ✅ | ✅ |
| Dense (fully-connected) | `dense` | `input1d`, `dense`, `dropout`, `flatten` | 1 | ✅ | ✅ |
| Dropout | `dropout` | `dense`, `flatten`, `input1d` | 1 | ✅ | ✅ |
-| Convolutional (2-d) | `conv2d` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 3 | ✅ | ✅(*) |
+| Locally connected (1-d) | `locally_connected1d` | `input2d`, `locally_connected1d`, `conv1d`, `maxpool1d`, `reshape2d` | 2 | ✅ | ✅ |
+| Convolutional (1-d) | `conv1d` | `input2d`, `conv1d`, `maxpool1d`, `reshape2d` | 2 | ✅ | ✅ |
+| Convolutional (2-d) | `conv2d` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 3 | ✅ | ✅ |
+| Max-pooling (1-d) | `maxpool1d` | `input2d`, `conv1d`, `maxpool1d`, `reshape2d` | 2 | ✅ | ✅ |
| Max-pooling (2-d) | `maxpool2d` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 3 | ✅ | ✅ |
| Linear (2-d) | `linear2d` | `input2d`, `layernorm`, `linear2d`, `self_attention` | 2 | ✅ | ✅ |
| Self-attention | `self_attention` | `input2d`, `layernorm`, `linear2d`, `self_attention` | 2 | ✅ | ✅ |
| Layer Normalization | `layernorm` | `linear2d`, `self_attention` | 2 | ✅ | ✅ |
| Flatten | `flatten` | `input2d`, `input3d`, `conv2d`, `maxpool2d`, `reshape` | 1 | ✅ | ✅ |
+| Reshape (1-d to 2-d) | `reshape2d` | `input2d`, `conv1d`, `locally_connected1d`, `maxpool1d` | 2 | ✅ | ✅ |
| Reshape (1-d to 3-d) | `reshape` | `input1d`, `dense`, `flatten` | 3 | ✅ | ✅ |

-(*) See Issue [#145](https://github.com/modern-fortran/neural-fortran/issues/145) regarding non-converging CNN training on the MNIST dataset.

## Getting started

Get the code:
@@ -267,7 +269,9 @@ Thanks to all open-source contributors to neural-fortran:
[jvdp1](https://github.com/jvdp1),
[jvo203](https://github.com/jvo203),
[milancurcic](https://github.com/milancurcic),
+[OneAdder](https://github.com/OneAdder),
[pirpyn](https://github.com/pirpyn),
+[ricor07](https://github.com/ricor07),
[rouson](https://github.com/rouson),
[rweed](https://github.com/rweed),
[Spnetic-5](https://github.com/Spnetic-5),
1 change: 1 addition & 0 deletions example/CMakeLists.txt
@@ -1,5 +1,6 @@
foreach(execid
cnn_mnist
+cnn_mnist_1d
dense_mnist
get_set_network_params
network_parameters
6 changes: 3 additions & 3 deletions example/cnn_mnist.f90
@@ -12,7 +12,7 @@ program cnn_mnist
real, allocatable :: validation_images(:,:), validation_labels(:)
real, allocatable :: testing_images(:,:), testing_labels(:)
integer :: n
-integer, parameter :: num_epochs = 10
+integer, parameter :: num_epochs = 250

call load_mnist(training_images, training_labels, &
validation_images, validation_labels, &
@@ -35,9 +35,9 @@ program cnn_mnist
call net % train( &
training_images, &
label_digits(training_labels), &
-batch_size=128, &
+batch_size=16, &
epochs=1, &
-optimizer=sgd(learning_rate=3.) &
+optimizer=sgd(learning_rate=0.001) &
)

print '(a,i2,a,f5.2,a)', 'Epoch ', n, ' done, Accuracy: ', accuracy( &
67 changes: 67 additions & 0 deletions example/cnn_mnist_1d.f90
@@ -0,0 +1,67 @@
program cnn_mnist_1d

use nf, only: network, sgd, &
input, conv1d, maxpool1d, flatten, dense, reshape, reshape2d, locally_connected1d, &
load_mnist, label_digits, softmax, relu

implicit none

type(network) :: net

real, allocatable :: training_images(:,:), training_labels(:)
real, allocatable :: validation_images(:,:), validation_labels(:)
real, allocatable :: testing_images(:,:), testing_labels(:)
integer :: n
integer, parameter :: num_epochs = 250

call load_mnist(training_images, training_labels, &
validation_images, validation_labels, &
testing_images, testing_labels)

net = network([ &
input(784), &
reshape2d([28, 28]), &
locally_connected1d(filters=8, kernel_size=3, activation=relu()), &
maxpool1d(pool_size=2), &
locally_connected1d(filters=16, kernel_size=3, activation=relu()), &
maxpool1d(pool_size=2), &
dense(10, activation=softmax()) &
])

call net % print_info()

epochs: do n = 1, num_epochs

call net % train( &
training_images, &
label_digits(training_labels), &
batch_size=16, &
epochs=1, &
optimizer=sgd(learning_rate=0.01) &
)

print '(a,i2,a,f5.2,a)', 'Epoch ', n, ' done, Accuracy: ', accuracy( &
net, validation_images, label_digits(validation_labels)) * 100, ' %'

end do epochs

print '(a,f5.2,a)', 'Testing accuracy: ', &
accuracy(net, testing_images, label_digits(testing_labels)) * 100, '%'

contains

real function accuracy(net, x, y)
type(network), intent(in out) :: net
real, intent(in) :: x(:,:), y(:,:)
integer :: i, good
good = 0
do i = 1, size(x, dim=2)
if (all(maxloc(net % predict(x(:,i))) == maxloc(y(:,i)))) then
good = good + 1
end if
end do
accuracy = real(good) / size(x, dim=2)
end function accuracy

end program cnn_mnist_1d
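
As background on the layer this PR adds: a locally connected 1-d layer applies a kernel-size window at each output position, just like `conv1d`, but each position has its own weights rather than one kernel shared along the length. The standalone sketch below is only an illustration of that forward pass, not the library's implementation; the array layout and names are assumptions.

program locally_connected1d_sketch
  ! Minimal sketch of a locally connected 1-d forward pass.
  ! Unlike conv1d, the weights w carry an extra output-position index,
  ! so no kernel is reused across positions.
  implicit none
  integer, parameter :: in_channels = 2, filters = 3
  integer, parameter :: input_len = 8, kernel_size = 3
  integer, parameter :: output_len = input_len - kernel_size + 1
  real :: x(in_channels, input_len)
  real :: w(filters, in_channels, kernel_size, output_len)
  real :: b(filters, output_len)
  real :: y(filters, output_len)
  integer :: i, k

  call random_number(x)
  call random_number(w)
  b = 0.

  do i = 1, output_len
    do k = 1, filters
      ! w(k, :, :, i) is specific to output position i; a conv1d would
      ! apply the same w(k, :, :) at every i.
      y(k, i) = sum(w(k, :, :, i) * x(:, i:i+kernel_size-1)) + b(k, i)
    end do
  end do

  print '(a, *(f8.3))', 'y(1,:) = ', y(1, :)
end program locally_connected1d_sketch

Because the weights are unshared, the parameter count scales with the output length, which is why the layer tends to be used on short inputs like the reshaped MNIST rows in the example above.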

4 changes: 4 additions & 0 deletions src/nf.f90
@@ -3,6 +3,7 @@ module nf
use nf_datasets_mnist, only: label_digits, load_mnist
use nf_layer, only: layer
use nf_layer_constructors, only: &
+conv1d, &
conv2d, &
dense, &
dropout, &
@@ -11,8 +12,11 @@ module nf
input, &
layernorm, &
linear2d, &
+locally_connected1d, &
+maxpool1d, &
maxpool2d, &
reshape, &
+reshape2d, &
self_attention
use nf_loss, only: mse, quadratic
use nf_metrics, only: corr, maxabs
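
Usage note (not part of this diff): the newly exported constructors compose with the existing ones in the usual way. The sketch below builds a small conv1d variant of the example above; it assumes a build of the library from this PR and that `conv1d(filters, kernel_size, activation)` mirrors the `locally_connected1d` constructor used in cnn_mnist_1d.f90, with illustrative layer sizes.

program conv1d_usage_sketch
  ! Hypothetical minimal network following the 1-d path from the features
  ! table and the structure of cnn_mnist_1d.f90:
  ! input -> reshape2d -> conv1d -> maxpool1d -> dense.
  use nf, only: network, dense, input, reshape2d, conv1d, maxpool1d, relu, softmax
  implicit none
  type(network) :: net

  net = network([ &
    input(784), &
    reshape2d([28, 28]), &
    conv1d(filters=8, kernel_size=3, activation=relu()), &
    maxpool1d(pool_size=2), &
    dense(10, activation=softmax()) &
  ])

  call net % print_info()
end program conv1d_usage_sketch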