elasticai.creator.testing#

Package Contents#

Classes#

GHDLSimulator

Run a simulation tool for a given top_design and save whatever is written to stdout for subsequent inspection.

Testbench

SimulatedLayer

Functions#

run_cocotb_sim

Run a Verilog/VHDL simulation using the cocotb environment.

run_cocotb_sim_for_src_dir

Run a Verilog/VHDL simulation using the cocotb environment.

check_cocotb_test_result

build_report_folder_and_testdata

Build the test/simulation folder containing the test data and hardware design for testing in cocotb.

read_testdata

Read the data as a test pattern in the cocotb testbench.

parse_report

cocotb_test_fixture

Yields the set-up CocotbTestFixture and performs the necessary cleanup after the test run.

eai_testbench

Intended usage:

API#

elasticai.creator.testing.run_cocotb_sim(src_files: collections.abc.Iterable[str] | collections.abc.Iterable[pathlib.Path], top_module_name: str, cocotb_test_module: str, defines: dict[str, Any] | collections.abc.Callable[[], dict[str, Any]] = lambda: ..., params: dict[str, Any] | collections.abc.Callable[[], dict[str, Any]] = lambda: ..., timescale: tuple[str, str] = ('1ps', '1fs'), en_debug_mode: bool = True, waveform_save_dst: str = '', build_sim_dir: str | pathlib.Path | None = None) pathlib.Path[source]#

Run a Verilog/VHDL simulation using the cocotb environment.

Parameters:
  • src_files – List of the Verilog/VHDL source files used in the design

  • top_module_name – Name of the top module (as defined in the source file)

  • cocotb_test_module – Fully qualified name of the Python module containing the cocotb testbench

  • defines – Dictionary of defines to pass to the module [key: value, …]; usable only in Verilog

  • params – Dictionary of parameters to pass to the module [key: value, …]; the value will be ignored

  • timescale – Tuple with the simulation timescale (step, accuracy)

  • en_debug_mode – Enable debug mode

  • waveform_save_dst – Path to the destination folder for saving the waveform file

Returns:

Path to the folder containing the waveform file [default: the simulation output folder]
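
A minimal usage sketch; the file names, module path, and define values below are illustrative assumptions, not part of the API:

from pathlib import Path

from elasticai.creator.testing import run_cocotb_sim

# hypothetical design and testbench names
waveform_dir = run_cocotb_sim(
  src_files=[Path("build/input_buffer.v")],
  top_module_name="input_buffer",
  cocotb_test_module="tests.tb_input_buffer",
  defines={"WIDTH": 8},  # Verilog-only macro defines (assumed values)
  timescale=("1ns", "1ps"),
)
print(waveform_dir)  # folder containing the waveform file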

elasticai.creator.testing.run_cocotb_sim_for_src_dir(src_files: collections.abc.Iterable[str] | collections.abc.Iterable[pathlib.Path], top_module_name: str, cocotb_test_module: str, path2src: str = '', defines: dict | collections.abc.Callable[[], dict] = lambda: ..., params: dict | collections.abc.Callable[[], dict] = lambda: ..., timescale: tuple[str, str] = ('1ps', '1fs'), en_debug_mode: bool = True, waveform_save_dst: str = '') pathlib.Path[source]#

Run a Verilog/VHDL simulation using the cocotb environment.

Parameters:
  • src_files – List of the Verilog/VHDL source files used in the design

  • top_module_name – Name of the top module (as defined in the source file)

  • cocotb_test_module – Fully qualified name of the Python module containing the cocotb testbench

  • path2src – Path to the folder in which all source files for testing are located

  • defines – Dictionary of defines to pass to the module [key: value, …]; usable only in Verilog

  • params – Dictionary of parameters to pass to the module [key: value, …]; the value will be ignored

  • timescale – Tuple with the simulation timescale (step, accuracy)

  • en_debug_mode – Enable debug mode

  • waveform_save_dst – Path to the destination folder for saving the waveform file

Returns:

Path to the folder containing the waveform file [default: the simulation output folder]
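
A sketch of the src-dir variant; the sources are presumably resolved relative to path2src, and the paths and names below are assumptions:

from elasticai.creator.testing import run_cocotb_sim_for_src_dir

run_cocotb_sim_for_src_dir(
  src_files=["input_buffer.v", "fifo.v"],  # file names inside path2src (assumed)
  top_module_name="input_buffer",
  cocotb_test_module="tests.tb_input_buffer",
  path2src="build/hdl",  # assumed source directory
)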

elasticai.creator.testing.check_cocotb_test_result(result_folder_cocotb: str = 'build_sim') bool[source]#
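
A minimal sketch, assuming a previous simulation wrote its results to the default build_sim folder and that a True return value indicates a passing run:

from elasticai.creator.testing import check_cocotb_test_result

assert check_cocotb_test_result("build_sim"), "cocotb reported failing tests"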
elasticai.creator.testing.build_report_folder_and_testdata(dut_name: str, testdata: dict) pathlib.Path[source]#

Build the test/simulation folder containing the test data and hardware design for testing in cocotb.

Parameters:
  • dut_name – The name of the top module

  • testdata – Dictionary with the test data/parameter data

Returns:

Path to the report folder containing the hardware design and test pattern data
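
An illustrative sketch; the DUT name and the testdata keys are assumptions:

from elasticai.creator.testing import build_report_folder_and_testdata

report_dir = build_report_folder_and_testdata(
  dut_name="input_buffer",
  testdata={"x": [1, 2, 3], "expected": [2, 4, 6]},  # hypothetical test pattern
)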

elasticai.creator.testing.read_testdata(dut_name: str) dict[source]#

Read the data as a test pattern in the cocotb testbench.

Parameters:

dut_name – The name of the top module DUT in the cocotb testbench (obtained via dut._name)

Returns:

Dictionary with the test pattern for testing the DUT
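
A sketch of reading the test pattern from inside a cocotb testbench; it assumes the report folder was built beforehand (e.g. via build_report_folder_and_testdata) for the same DUT:

import cocotb

from elasticai.creator.testing import read_testdata

@cocotb.test()
async def my_testbench(dut):
  testdata = read_testdata(dut._name)  # e.g. {"x": [...], "expected": [...]}
  ...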

class elasticai.creator.testing.GHDLSimulator(workdir, top_design_name)[source]#

Run a simulation tool for a given top_design and save whatever is written to stdout for subsequent inspection.

This runner uses the GHDL tool. The parsed content has the following keys: ("source", "line", "column", "time", "type", "content")

Raises a SimulationError if any of the calls to ghdl in the initialize or run steps fails.

Parameters:
  • workdir – typically the path to your build root; this is where we will look for vhd files

Initialization

add_generic(**kwargs)[source]#
initialize()[source]#

Call this function once before calling run() and on every file change.

run()[source]#

Runs the simulation and saves whatever the tool wrote to stdout. Call initialize() once before calling run().

getReportedContent() list[str][source]#

Strips any information that the simulation tool added automatically to the output to return only the information that was printed to stdout via VHDL/Verilog statements.

getFullReport() list[dict][source]#

Parses the output from the simulation tool to provide a more structured representation. The exact content depends on the simulation tool.

getRawResult() str[source]#

Returns the raw stdout output as written by the simulation tool.
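
A minimal sketch of the full GHDLSimulator cycle, assuming a VHDL testbench entity named my_testbench with its source files under the build directory (names and generics are illustrative):

from elasticai.creator.testing import GHDLSimulator

sim = GHDLSimulator(workdir="build", top_design_name="my_testbench")
sim.add_generic(DATA_WIDTH=8)  # hypothetical generic
sim.initialize()  # once, and again after every file change
sim.run()
for line in sim.getReportedContent():  # only output printed from VHDL/Verilog
  print(line)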

elasticai.creator.testing.parse_report(text: str)[source]#
class elasticai.creator.testing.Testbench[source]#
abstract property name: str#
abstractmethod save_to(destination: elasticai.creator.file_generation.savable.Path) None[source]#
abstractmethod prepare_inputs(*args: Any, **kwargs: Any) Any[source]#
abstractmethod parse_reported_content(*args, **kwargs: Any) Any[source]#
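
A sketch of a concrete Testbench; the class name and the method bodies are assumptions meant only to show the shape of the interface:

from typing import Any

from elasticai.creator.testing import Testbench

class AdderTestbench(Testbench):
  @property
  def name(self) -> str:
    return "adder_tb"

  def save_to(self, destination) -> None:
    ...  # write the generated testbench files to destination

  def prepare_inputs(self, *args: Any, **kwargs: Any) -> Any:
    ...  # convert numeric inputs into the representation the testbench expects

  def parse_reported_content(self, *args: Any, **kwargs: Any) -> Any:
    ...  # turn lines reported by the simulator back into numbers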
class elasticai.creator.testing.SimulatedLayer(testbench: elasticai.creator.testing.simulated_layer.Testbench, simulator_constructor, working_dir: str | pathlib.Path)[source]#

Initialization

__call__(inputs: Any) Any[source]#
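
A sketch wiring the pieces together, reusing the hypothetical AdderTestbench from above:

from elasticai.creator.testing import GHDLSimulator, SimulatedLayer

layer = SimulatedLayer(
  testbench=AdderTestbench(),
  simulator_constructor=GHDLSimulator,
  working_dir="build",
)
outputs = layer(inputs=[1, 2, 3])  # runs the simulation and parses the report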
elasticai.creator.testing.cocotb_test_fixture(request) collections.abc.Iterator[elasticai.creator.testing.cocotb_pytest.CocotbTestFixture][source]#

Yields the set-up CocotbTestFixture and performs the necessary cleanup after the test run.

To use the fixture, add the line

pytest_plugins = "elasticai.creator.testing.cocotb_pytest"

to either a conftest.py in the test directory tree or to the test module.

elasticai.creator.testing.eai_testbench(fn)[source]#

Intended usage:

@cocotb.test()
@eai_testbench
async def my_testbench_for_input_buffer(dut, x, input_data):
  dut.d_in.value = x

and

@pytest.mark.parametrize("x", [1, 2, 3])
def test_input_buffer(cocotb_test_fixture, x):
  cocotb_test_fixture.write({"input_data": "hello world"})
  cocotb_test_fixture.run()

The example assumes your top-level module is "input_buffer" and its source file lives in a sibling folder of the test folder that contains the pytest test function.

It will create a unique subdirectory under build_test that matches the path to the module containing the testbench definition and the pytest test function (these need to be the same). This prevents test A from overwriting the artifacts of test B. The name of the subdirectory is derived from the parameters passed via the parametrize pytest marker and the top module name. In this example, that results in the folders input_buffer_test_input_buffer_x_1, input_buffer_test_input_buffer_x_2, and input_buffer_test_input_buffer_x_3.