elasticai.creator.nn.fixed_point.conv1d.testbench#

Module Contents#

Classes#

API#

class elasticai.creator.nn.fixed_point.conv1d.testbench.Conv1dDesignProtocol[source]#

Bases: typing.Protocol

abstract property name: str#
abstract property input_signal_length: int#
abstract property port: elasticai.creator.vhdl.design.ports.Port#
abstract property kernel_size: int#
abstract property in_channels: int#
abstract property out_channels: int#
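The protocol above only prescribes six read-only properties, so any design object exposing them can be passed to the testbench. The following is a minimal sketch of a conforming class; `MyConv1dDesign` and all of its values are hypothetical, and the real `Port` class from `elasticai.creator.vhdl.design.ports` is replaced by a stand-in stub so the example is self-contained.

```python
from typing import Protocol, runtime_checkable


class Port:
    """Stand-in stub for elasticai.creator.vhdl.design.ports.Port."""


@runtime_checkable
class Conv1dDesignProtocol(Protocol):
    """Local replica of the protocol's property interface."""

    @property
    def name(self) -> str: ...
    @property
    def input_signal_length(self) -> int: ...
    @property
    def port(self) -> Port: ...
    @property
    def kernel_size(self) -> int: ...
    @property
    def in_channels(self) -> int: ...
    @property
    def out_channels(self) -> int: ...


class MyConv1dDesign:
    """Hypothetical design; satisfies the protocol structurally."""

    @property
    def name(self) -> str:
        return "conv1d_0"

    @property
    def input_signal_length(self) -> int:
        return 16

    @property
    def port(self) -> Port:
        return Port()

    @property
    def kernel_size(self) -> int:
        return 3

    @property
    def in_channels(self) -> int:
        return 1

    @property
    def out_channels(self) -> int:
        return 2
```

Because the protocol is structural, `MyConv1dDesign` needs no explicit inheritance; it conforms simply by providing the six properties.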
class elasticai.creator.nn.fixed_point.conv1d.testbench.Conv1dTestbench(name: str, uut: elasticai.creator.nn.fixed_point.conv1d.testbench.Conv1dDesignProtocol, fxp_params: elasticai.creator.nn.fixed_point.number_converter.FXPParams)[source]#

Bases: elasticai.creator.vhdl.simulated_layer.Testbench

Initialization

save_to(destination: elasticai.creator.file_generation.savable.Path)[source]#
property name: str#
prepare_inputs(*inputs) list[dict][source]#
parse_reported_content(content: list[str]) list[list[list[float]]][source]#

Parses the reported content, which is a list of strings. All lines starting with 'output_text:' are considered testbench results, and these results are stacked per batch, yielding a list[list[list[float]]] that corresponds to batch[out_channels[output_neurons[float]]]. For each reported item, the parser checks whether the string starts with 'result: '; if so, the remainder is split at ','. The first number identifies the batch and the second is the result value. The channel of each value is guessed greedily: the first X values are assigned to the first channel, where X is the number of values per channel; once a channel has received enough values, the channel index is incremented. If fewer values than expected are reported per channel, the last channel will appear to be missing values.
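The greedy channel assignment described above can be sketched as follows. This is a simplified stand-in, not the library's actual implementation: the function name `parse_sketch` and the explicit `out_channels` / `values_per_channel` parameters are assumptions (the real method derives these from the design and reported data), and only the 'result: ' prefix step is modeled.

```python
def parse_sketch(
    content: list[str], out_channels: int, values_per_channel: int
) -> list[list[list[float]]]:
    """Greedily assign reported values to batches and channels.

    Returns batch[out_channels[output_neurons[float]]].
    """
    batches: dict[int, list[list[float]]] = {}
    for line in content:
        if not line.startswith("result: "):
            continue  # only reported result lines are parsed
        batch_str, value_str = line[len("result: "):].split(",")
        batch = int(batch_str)
        channels = batches.setdefault(
            batch, [[] for _ in range(out_channels)]
        )
        # greedy guess: fill the current channel until it holds
        # values_per_channel entries, then move to the next one
        for channel in channels:
            if len(channel) < values_per_channel:
                channel.append(float(value_str))
                break
    return [batches[b] for b in sorted(batches)]
```

Note the failure mode described above: if a batch reports too few values, the shortfall always shows up in the last channel, because earlier channels are filled first.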