365 implement inference execution with pytest #384

Merged: 54 commits, Nov 19, 2024
Commits
6969839
fix(nn): fixed fxp_mac reset. This modules reset is now working at an…
LeoBuron Aug 13, 2024
75c1cbe
refactor(vhdl): better ghdl simulation class
LeoBuron Aug 13, 2024
d8ca219
feat(nn): added conv1d. Simulation works. End-to-end system test is s…
LeoBuron Aug 13, 2024
b422ccb
refactor(vhdl): made the name a property so the name is already set c…
LeoBuron Aug 13, 2024
70f9b5e
wip(nn): added simulation for linear layer. Still not finished
LeoBuron Aug 13, 2024
a2cd0a0
feat(nn): added simulation for linear layer
LeoBuron Aug 13, 2024
054b8ab
fix(nn): fixed fxp_mac reset. This modules reset is now working at an…
LeoBuron Aug 13, 2024
873fd42
refactor(vhdl): better ghdl simulation class
LeoBuron Aug 13, 2024
93e5ecd
feat(nn): added conv1d. Simulation works. End-to-end system test is s…
LeoBuron Aug 13, 2024
593682b
refactor(vhdl): made the name a property so the name is already set c…
LeoBuron Aug 13, 2024
67382df
wip(nn): added simulation for linear layer. Still not finished
LeoBuron Aug 13, 2024
aac395b
feat(nn): added simulation for linear layer
LeoBuron Aug 13, 2024
cec8f25
Merge remote-tracking branch 'origin/360-add-the-ghdl-simulation-test…
LeoBuron Aug 13, 2024
7f5d30c
fix(nn): linear layer uses signals now. So simulation works
LeoBuron Aug 21, 2024
3089341
feat(nn): added enV5 usb library to development pyproject.toml. This …
LeoBuron Aug 21, 2024
2c3faf5
feat(vhdl): added a generator for echo server with skeleton #378
LeoBuron Aug 23, 2024
948af39
fix(vhdl): fixed error in template
LeoBuron Aug 26, 2024
3f46780
feat(vhdl): added an example for the echoserver with skeleton v2
LeoBuron Aug 26, 2024
dc2ee96
fix(vhdl): fixed an error in the skeleton v2 template
LeoBuron Aug 29, 2024
cfaf318
fix(dependencies): fixed the dependency for the runtime utils
LeoBuron Aug 29, 2024
31868ca
fix(dependencies): fixed the poetry lock file
LeoBuron Aug 29, 2024
75ef96a
fix(vhdl): fixed the test for the firmware with skeleton v2
LeoBuron Aug 29, 2024
ce091a1
Merge remote-tracking branch 'refs/remotes/origin/363-integrate-the-u…
LeoBuron Aug 29, 2024
d94e2ad
wip(tests): added system test for linear layer. Still work in progress
LeoBuron Sep 2, 2024
f11b606
fix(nn): revert changes in linear.tpl.vhd
LeoBuron Sep 2, 2024
83ecfcb
Merge branch 'refs/heads/develop' into 365-implement-inference-execut…
LeoBuron Sep 2, 2024
f1ccff8
Merge remote-tracking branch 'refs/remotes/origin/378-add-echoserver-…
LeoBuron Sep 2, 2024
a673890
wip(nn): echoserver does not work. Linear also not
LeoBuron Sep 2, 2024
a4359a0
feat(tests): echo server works now
LeoBuron Sep 4, 2024
04221ec
refactor(vhdl): changing the wake_up signal to best practice method
LeoBuron Sep 4, 2024
6c778bc
refactor(tests): making the test a bit more convenient
LeoBuron Sep 4, 2024
f8a7a0f
chore(pipeline): added package.json to fix the version of commitlint
LeoBuron Sep 5, 2024
f100e1f
Merge branch '382-fix-commitlint-version' into 365-implement-inferenc…
LeoBuron Sep 5, 2024
2d7f140
fix(vhdl): fixed error in test
LeoBuron Sep 6, 2024
cd4c180
wip(tests): still trying to fix linear layer
LeoBuron Sep 6, 2024
390656c
fix(nn): fixed error in convolution
LeoBuron Aug 21, 2024
7943b3b
wip(nn): fixed error in linear layer
LeoBuron Sep 6, 2024
9316e3d
wip(tests): added conv1d and fixed stuff
LeoBuron Oct 1, 2024
082b8fd
refactor(nn): add new replacement variable in log2 calculation of lin…
AErbsloeh Oct 9, 2024
44ef433
wip(tests): changed workflow for generating data and checking data fr…
AErbsloeh Oct 9, 2024
238964a
feat(tests): linear layer system test with elastic node works now
LeoBuron Nov 5, 2024
25cdf90
docs(nn): removed unnecessary comments
LeoBuron Nov 14, 2024
6b1256f
docs(nn): removed unnecessary comments
LeoBuron Nov 14, 2024
55c9f4d
docs(nn): added comments to parsing functions in testbenches
LeoBuron Nov 14, 2024
01dd3c5
refactor(vhdl): changed sensitivity list to clock only
LeoBuron Nov 14, 2024
088bc1f
fix(vhdl): fixed test for changes in sensitivity list and for rising/…
LeoBuron Nov 14, 2024
7f8f445
fix(nn): fixed code generation test for linear layer
LeoBuron Nov 14, 2024
8431bc7
refactor(tests): moved the opening of the serial port to context manager
LeoBuron Nov 14, 2024
4925d67
refactor(nn): moved mac operators to vhdl shared design
LeoBuron Nov 15, 2024
3b927b6
refactor(nn): moved simulated layer. MAC operator design simulations …
LeoBuron Nov 15, 2024
a5ac0df
chore(dependency): fixed dependency of elasticai-runtime-env5 from de…
LeoBuron Nov 15, 2024
70c8b4b
docs(nn): added more context for the parse reported content functions
LeoBuron Nov 15, 2024
c3615bd
refactor(vhdl): removed unnecessary print statements and added type hint
LeoBuron Nov 15, 2024
3e6502c
wip(vhdl): mac operator simulations do not work
LeoBuron Nov 15, 2024
2 changes: 1 addition & 1 deletion .gitignore
@@ -6,7 +6,7 @@ simulation/*
node_modules
.mypy_cache
package-lock.json
package.json

.coverage
coverage.xml
*.onnx
43 changes: 0 additions & 43 deletions elasticai/creator/nn/binary/mac/_mac_test.py

This file was deleted.

54 changes: 0 additions & 54 deletions elasticai/creator/nn/binary/mac/mactestbench.py

This file was deleted.

56 changes: 0 additions & 56 deletions elasticai/creator/nn/binary/mac/testbench.tpl.vhd

This file was deleted.

141 changes: 141 additions & 0 deletions elasticai/creator/nn/fixed_point/conv1d/_testbench_test.py
@@ -0,0 +1,141 @@
from typing import Any, Callable

import pytest
import torch

from elasticai.creator.vhdl.auto_wire_protocols.port_definitions import create_port
from elasticai.creator.vhdl.design.ports import Port

from ..number_converter import FXPParams, NumberConverter
from .testbench import Conv1dTestbench
from .design import Conv1dDesign


class DummyConv1d:
    def __init__(self, fxp_params: FXPParams, in_channels: int, out_channels: int):
        self.name: str = "conv1d"
        self.kernel_size: int = 1
        self.input_signal_length = 1
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.port: Port = create_port(
            y_width=fxp_params.total_bits,
            x_width=fxp_params.total_bits,
            x_count=1,
            y_count=2,
        )


def parameters_for_reported_content_parsing(fxp_params, input_expected_pairs):
    # Builds (fxp_params, reported_lines, expected_values) triples for
    # pytest.mark.parametrize: every expected bit string is prefixed with
    # "result: <batch index as 8-bit value>," the way the testbench reports it.
    def add_expected_prefix_to_pairs(pairs):
        _converter_for_batch = NumberConverter(
            FXPParams(8, 0)
        )  # max for 255 lines of inputs
        pairs_with_prefix = list()
        for i, (pairs_text, pairs_number) in enumerate(pairs):
            pairs_with_prefix.append(list())
            pairs_with_prefix[i].append(list())
            pairs_with_prefix[i].append(pairs_number)
            for batch_number, batch_channel_text in enumerate(pairs_text):
                for out_channel_text in batch_channel_text:
                    for value_text in out_channel_text:
                        pairs_with_prefix[i][0].append(
                            f"result: {_converter_for_batch.integer_to_bits(batch_number)},"
                            f" {value_text}"
                        )
        return pairs_with_prefix

    pairs_with_prefix = [
        (fxp_params, a, b)
        for a, b in add_expected_prefix_to_pairs(input_expected_pairs)
    ]
    return pairs_with_prefix


@pytest.fixture
def create_uut() -> Callable[[FXPParams, int, int], Conv1dDesign]:
    def create(fxp_params, in_channels: int, out_channels: int) -> Conv1dDesign:
        return DummyConv1d(fxp_params, in_channels=in_channels, out_channels=out_channels)

    return create


@pytest.mark.parametrize(
    "fxp_params, reported, y", (
        parameters_for_reported_content_parsing(
            fxp_params=FXPParams(total_bits=3, frac_bits=0),
            input_expected_pairs=[
                ([[["010"]]], [[[2.0]]]),
                ([[["001", "010"]]], [[[1.0, 2.0]]]),
                ([[["111", "001"]]], [[[-1.0, 1.0]]]),
            ]
        ) +
        parameters_for_reported_content_parsing(
            fxp_params=FXPParams(total_bits=4, frac_bits=1),
            input_expected_pairs=[
                ([[["0001", "1111"]]], [[[0.5, -0.5]]]),
                ([[["0001", "0011"]], [["1000", "1111"]]], [[[0.5, 1.5]], [[-4.0, -0.5]]]),
            ]
        )
    )
)
def test_parse_reported_content_one_out_channel(fxp_params, reported, y, create_uut):
    in_channels = None
    out_channels = 1
    bench = Conv1dTestbench(
        name="conv1d_testbench", fxp_params=fxp_params, uut=create_uut(fxp_params, in_channels, out_channels)
    )
    print(reported)
    assert y == bench.parse_reported_content(reported)


@pytest.mark.parametrize(
    "fxp_params, reported, y", (
        parameters_for_reported_content_parsing(
            fxp_params=FXPParams(total_bits=3, frac_bits=0),
            input_expected_pairs=[
                ([[["010"], ["010"]]], [[[2.0], [2.0]]]),
                ([[["001", "010"], ["001", "010"]]], [[[1.0, 2.0], [1.0, 2.0]]]),
                ([[["111", "001"], ["111", "001"]]], [[[-1.0, 1.0], [-1.0, 1.0]]]),
            ]
        ) +
        parameters_for_reported_content_parsing(
            fxp_params=FXPParams(total_bits=4, frac_bits=1),
            input_expected_pairs=[
                ([[["0001", "1111"], ["0001", "1111"]]], [[[0.5, -0.5], [0.5, -0.5]]]),
                ([[["0001", "0011"], ["0001", "0011"]], [["1000", "1111"], ["1000", "1111"]]],
                 [[[0.5, 1.5], [0.5, 1.5]], [[-4.0, -0.5], [-4.0, -0.5]]]),
            ]
        )
    )
)
def test_parse_reported_content_two_out_channel(fxp_params, reported, y, create_uut):
    in_channels = None
    out_channels = 2
    bench = Conv1dTestbench(
        name="conv1d_testbench", fxp_params=fxp_params, uut=create_uut(fxp_params, in_channels, out_channels)
    )
    print(reported)
    assert y == bench.parse_reported_content(reported)

def test_input_preparation_with_one_in_channel(create_uut):
    fxp_params = FXPParams(total_bits=3, frac_bits=0)
    in_channels = 1
    out_channels = None
    bench = Conv1dTestbench(
        name="bench_name", fxp_params=fxp_params, uut=create_uut(fxp_params, in_channels, out_channels),
    )
    input = torch.Tensor([[[1.0, 1.0]]])
    expected = [{"x_0_0": "001", "x_0_1": "001"}]
    assert expected == bench.prepare_inputs(input.tolist())

def test_input_preparation_with_two_in_channel(create_uut):
    fxp_params = FXPParams(total_bits=3, frac_bits=0)
    in_channels = 1
    out_channels = None
    bench = Conv1dTestbench(
        name="bench_name", fxp_params=fxp_params, uut=create_uut(fxp_params, in_channels, out_channels),
    )
    input = torch.Tensor([[[1.0, 1.0], [1.0, 2.0]]])
    expected = [{"x_0_0": "001", "x_0_1": "001", "x_1_0": "001", "x_1_1": "010"}]
    assert expected == bench.prepare_inputs(input.tolist())
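
For reference, a standalone sketch (not the library's NumberConverter API) of how the bit strings in the expected values above decode to floats: interpret the bits as a two's complement integer and scale by 2^-frac_bits.

# Illustrative helper only; the project itself uses NumberConverter for this.
def bits_to_float(bits: str, frac_bits: int) -> float:
    value = int(bits, 2)
    if bits[0] == "1":  # sign bit set: two's complement negative value
        value -= 1 << len(bits)
    return value / (1 << frac_bits)

# Values taken from the parametrized cases above:
assert bits_to_float("010", 0) == 2.0    # FXPParams(total_bits=3, frac_bits=0)
assert bits_to_float("111", 0) == -1.0
assert bits_to_float("0001", 1) == 0.5   # FXPParams(total_bits=4, frac_bits=1)
assert bits_to_float("1000", 1) == -4.0
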
61 changes: 55 additions & 6 deletions elasticai/creator/nn/fixed_point/conv1d/conv1d.tpl.vhd
@@ -1,6 +1,55 @@
-- Dummy File for testing implementation of conv1d Design
${total_bits}
${frac_bits}
${in_channels}
${out_channels}
${kernel_size}
library ieee;
use ieee.std_logic_1164.all;

entity ${name} is
    port (
        enable : in std_logic;
        clock : in std_logic;
        x_address : out std_logic_vector(${x_address_width}-1 downto 0);
        y_address : in std_logic_vector(${y_address_width}-1 downto 0);

        x : in std_logic_vector(${x_width}-1 downto 0);
        y : out std_logic_vector(${y_width}-1 downto 0);

        done : out std_logic
    );
end;

architecture rtl of ${name} is
    constant TOTAL_WIDTH : natural := ${x_width};
    constant FRAC_WIDTH : natural := ${frac_width};
    constant VECTOR_WIDTH : natural := ${vector_width};
    constant KERNEL_SIZE : natural := ${kernel_size};
    constant IN_CHANNELS : natural := ${in_channels};
    constant OUT_CHANNELS : natural := ${out_channels};
    constant X_ADDRESS_WIDTH : natural := ${x_address_width};
    constant Y_ADDRESS_WIDTH : natural := ${y_address_width};

    signal reset : std_logic;

begin

    reset <= not enable;

    ${name}_conv1d : entity work.conv1d_fxp_MAC_RoundToZero
        generic map(
            TOTAL_WIDTH => TOTAL_WIDTH,
            FRAC_WIDTH => FRAC_WIDTH,
            VECTOR_WIDTH => VECTOR_WIDTH,
            KERNEL_SIZE => KERNEL_SIZE,
            IN_CHANNELS => IN_CHANNELS,
            OUT_CHANNELS => OUT_CHANNELS,
            X_ADDRESS_WIDTH => X_ADDRESS_WIDTH,
            Y_ADDRESS_WIDTH => Y_ADDRESS_WIDTH
        )
        port map (
            clock => clock,
            enable => enable,
            reset => reset,
            x => x,
            x_address => x_address,
            y => y,
            y_address => y_address,
            done => done
        );
end rtl;
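
Since the template above uses ${...} placeholders, here is a minimal sketch of filling it with Python's standard string.Template; the parameter values and file path are hypothetical, and the project's own code-generation classes are not shown.

# Minimal sketch, assuming ${...} placeholders compatible with string.Template.
from pathlib import Path
from string import Template

params = {
    "name": "conv1d_0",           # hypothetical instance name
    "x_width": 4, "y_width": 4,   # total bits of the fixed-point format
    "frac_width": 1,              # fractional bits
    "vector_width": 2,            # input signal length
    "kernel_size": 1,
    "in_channels": 1, "out_channels": 1,
    "x_address_width": 1, "y_address_width": 1,
}
vhdl_code = Template(Path("conv1d.tpl.vhd").read_text()).substitute(params)
print(vhdl_code)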