Testing API
Modules:

- cocotb_prepare
- cocotb_pytest
- cocotb_runner
- cocotb_stream
- ghdl_simulation
- hw_tester
Classes:

- AIAccelerator – Hardware accelerator interface for executing inference on FPGA hardware.
- CocotbTestFixture – Run cocotb via pytest, injecting parameters to be available before and during test execution.
- GHDLSimulator – Run a simulation tool for a given top_design and save whatever is written to stdout.
- HWTester – HW-in-the-loop testing framework for hardware implementations.
- RemoteControl – Remote control interface for hardware device management.
- ResetControl – Automatic synchronous reset for simulated hw components.
- StreamInterface – Automatic IO on pipelined hw components.
Functions:

- build_report_folder_and_testdata – Build the test/simulation folder which contains the test data and hardware design for testing in cocotb.
- cocotb_test_fixture – Yields the set-up CocotbTestFixture and performs the necessary cleanup after the test run.
- eai_testbench – Decorator for cocotb testbenches; see the intended usage below.
- read_testdata – Read the data as a test pattern in the cocotb testbench.
- run_cocotb_sim – Run a Verilog/VHDL simulation using the cocotb environment.
- run_cocotb_sim_for_src_dir – Run a Verilog/VHDL simulation using the cocotb environment, with sources resolved relative to a source directory.
AIAccelerator
Bases: Protocol
Hardware accelerator interface for executing inference on FPGA hardware.
This protocol defines the standard interface provided by the testing framework for executing hardware-accelerated inference. The HWTester returns objects implementing this interface through the prepare_hw_function() method.
The interface provides:

- Input data as bytes for hardware processing
- Execution of inference on the hardware device
- Result data of specified size as bytes
- Hardware-agnostic operation for consistent usage
Clients use this interface to execute inference without needing to know the underlying hardware implementation details.
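To illustrate how client code can stay hardware-agnostic, here is a minimal sketch. The call signature `(input_data, result_size)` is an assumption inferred from the HWTester usage example below (`my_fn(b"...", 1)`); `AIAcceleratorLike`, `EchoAccelerator`, and `run_inference` are hypothetical names, not part of the library.

```python
from typing import Protocol


class AIAcceleratorLike(Protocol):
    """Structural stand-in for the AIAccelerator protocol (assumed signature)."""

    def __call__(self, input_data: bytes, result_size: int) -> bytes: ...


class EchoAccelerator:
    """Hypothetical fake that 'runs inference' by echoing a slice of the input."""

    def __call__(self, input_data: bytes, result_size: int) -> bytes:
        # A real implementation would send input_data to the FPGA and read
        # result_size bytes back; here we just echo a prefix.
        return input_data[:result_size]


def run_inference(accelerator: AIAcceleratorLike, payload: bytes, n: int) -> bytes:
    # Client code depends only on the protocol, not on hardware details,
    # so a fake can substitute for real hardware in unit tests.
    return accelerator(payload, n)
```

Because the interface is a Protocol, any object with a matching call signature satisfies it; no inheritance is required.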
CocotbTestFixture
Run cocotb via pytest, injecting parameters to be available before and during test execution.
The fixture will inspect the requesting test function to assume some default values and perform a little bit of setup. Namely:

- It uses the test function name to determine the dut top module name and the name of its containing source file. These can be overridden inside the test function using .set_top_module_name() and .set_srcs(). The default name will be derived by stripping the test_ prefix from the test function name. The implementation will try to find a VHDL or Verilog file under ../{vhdl, verilog}/<name>.{vhd, v}. VHDL takes precedence. If no file is found, the initial srcs list will be left empty without raising an exception.
- It will create a folder to contain test artifacts, including waveforms, the XML result, the testdata JSON, and compiled simulation object files. To avoid collisions, the name of the folder will be derived from the fully qualified test function name (replacing . by /) and the parameter list provided via pytest parametrization.
- The fixture assumes the test resolves package resources itself (e.g. via get_file_from_package). It will keep open the files whose paths are passed to set_srcs()/add_srcs() while run executes, but you must keep a surrounding with in the test so the package helper holds the resource alive during the simulation.
- If you need to generate HDL sources prior to running testbenches and want to store them with the rest of the testing artifacts, you can retrieve the automatically determined folder via the .get_artifact_dir() method. This allows you to store the sources there and pass the resulting paths to the fixture using e.g. .add_srcs().
This is not intended to be used directly. Request cocotb_test_fixture as a pytest fixture instead.
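The default name and source lookup described above can be sketched in plain Python. This is an illustration of the documented rules only, not the library's actual implementation; the helper names are hypothetical.

```python
from pathlib import Path


def derive_top_module_name(test_fn_name: str) -> str:
    # Mirrors the documented default: strip the test_ prefix from the
    # pytest function name to obtain the dut top module name.
    return test_fn_name.removeprefix("test_")


def candidate_sources(test_dir: Path, name: str) -> list[Path]:
    # Mirrors the documented lookup order: VHDL takes precedence over Verilog,
    # searched under ../{vhdl, verilog}/<name>.{vhd, v} relative to the tests.
    candidates = [
        test_dir / ".." / "vhdl" / f"{name}.vhd",
        test_dir / ".." / "verilog" / f"{name}.v",
    ]
    # If no file is found, the srcs list stays empty; no exception is raised.
    return [p for p in candidates if p.exists()]
```

So a test named `test_input_buffer` would default to the top module name `input_buffer`, overridable via `.set_top_module_name()`.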
GHDLSimulator

Run a simulation tool for a given top_design and save whatever is written to stdout for subsequent inspection.

This runner uses the GHDL tool. The parsed content has the following keys: ("source", "line", "column", "time", "type", "content").

Will raise a SimulationError in case any of the calls to ghdl in the initialize or run steps fails.

Args:
    workdir: typically the path to your build root; this is where we will look for vhd files
Methods:

- getFullReport – Parses the output from the simulation tool to provide a more structured representation.
- getRawResult – Returns the raw stdout output as written by the simulation tool.
- getReportedContent – Strips any information that the simulation tool added automatically to the output.
- initialize – Call this function once before calling run() and on every file change.
- run – Runs the simulation and saves whatever the tool wrote to stdout.
getFullReport

Parses the output from the simulation tool to provide a more structured representation. The exact content depends on the simulation tool.
getRawResult
getRawResult() -> str
Returns the raw stdout output as written by the simulation tool.
getReportedContent
Strips any information that the simulation tool added automatically to the output to return only the information that was printed to stdout via VHDL/Verilog statements.
run

Runs the simulation and saves whatever the tool wrote to stdout.

Call initialize() once before calling run().
HWTester
HWTester(synth_fn: _SynthesisFn, device: AbstractContextManager[RemoteControl])
HW-in-the-Loop testing framework for hardware implementations.
The HWTester class manages hardware testing, including device connections, bitstream uploads, and hardware function execution. It enables testing of hardware implementations by synthesizing VHDL designs and running them on actual hardware devices.
Usage example:

import elasticai.experiment_framework as eaixp
import elasticai.creator.testing as crt
from pathlib import Path
import pytest


@pytest.fixture
def hw_tester():
    synthesis = eaixp.synthesis.CachedVivadoSynthesis()

    def synthesize(src_dir: Path) -> Path:
        return synthesis.synthesize(src_dir) / "results/impl/env5_top_reconfig.bin"

    device = eaixp.remote_control.probe_for_devices()[0]
    ctx = HWTester(
        synthesize, eaixp.remote_control.connect_remote_control(device)
    )
    return ctx


def test_my_fn(hw_tester, tmp_dir):
    build_dir = tmp_dir / "build"
    expected_id = generate_srcs(build_dir)
    with hw_tester.prepare_hw_function(build_dir, expected_id) as my_fn:
        result = my_fn(b"\xde\xbe\xef", 1)
        assert result == b""
Parameters:

- synth_fn (_SynthesisFn) – Function that synthesizes VHDL source into a bitstream. Takes a source directory path and returns the bitstream file path.
- device (AbstractContextManager[RemoteControl]) – Context manager providing a remote control interface to the hardware device. Should implement the RemoteControl protocol when entered.

The constructor sets up the testing framework but does not immediately connect to hardware or perform synthesis. Actual hardware interaction happens when prepare_hw_function() is called.
Methods:

- prepare_hw_function – Prepare and upload a hardware function for testing.

prepare_hw_function
Prepare and upload a hardware function for testing.
This method handles the complete workflow for hardware testing:

1. Synthesizes VHDL source files into a bitstream
2. Connects to the hardware device
3. Uploads the bitstream (if not already loaded)
4. Yields an AIAccelerator instance for inference execution
5. Cleans up the hardware connection when done
Parameters:

- src_dir (Path) – Path to the VHDL source directory containing the hardware design
- id (bytes | None, default: None) – Optional hardware function ID. If provided, the framework checks if this function is already loaded on the device. If None, the bitstream is always uploaded without checking.

Returns:

- Generator[AIAccelerator] – Context manager that yields an AIAccelerator instance for executing inference on the hardware device.

Note
- The hardware function ID should uniquely identify the hardware design
- If ID is None, bitstream upload is forced (useful for development)
- The context manager ensures proper hardware cleanup
RemoteControl
Bases: Protocol
Remote control interface for hardware device management.
This protocol defines the interface that clients must implement to provide hardware device control to the HWTester. The framework uses this interface to interact with hardware devices during testing.
Clients typically provide implementations using hardware-specific libraries or the Elastic-AI Experiment Framework.
Methods:

- fpga_power_off – Power off the FPGA device.
- fpga_power_on – Power on the FPGA device.
- predict – Execute inference on the hardware device.
- read_skeleton_id – Read the currently loaded hardware function identifier.
- upload_bitstream – Upload bitstream to the hardware device.
read_skeleton_id

upload_bitstream
upload_bitstream(flash_sector: int, path_to_bitstream: str) -> None
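Since RemoteControl is a protocol, tests can substitute an in-memory fake. The sketch below is hypothetical: only upload_bitstream's signature is documented above; the signatures assumed for predict and read_skeleton_id are guesses for illustration.

```python
class FakeRemoteControl:
    """In-memory stand-in for a RemoteControl implementation (illustrative)."""

    def __init__(self) -> None:
        self.powered = False
        self.flash: dict[int, str] = {}

    def fpga_power_on(self) -> None:
        self.powered = True

    def fpga_power_off(self) -> None:
        self.powered = False

    def upload_bitstream(self, flash_sector: int, path_to_bitstream: str) -> None:
        # Documented signature; record the upload instead of flashing hardware.
        self.flash[flash_sector] = path_to_bitstream

    def read_skeleton_id(self) -> bytes:
        # Assumed return type: the loaded hardware function identifier.
        return b"\x00"

    def predict(self, data: bytes, result_size: int) -> bytes:
        # Assumed signature: echo a prefix instead of running real inference.
        return data[:result_size]
```

Real implementations typically come from hardware-specific libraries or the Elastic-AI Experiment Framework, as noted above.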
ResetControl
Automatic synchronous reset for simulated hw components.
StreamInterface
StreamInterface(
clk: LogicObject,
data_in: LogicArrayObject,
valid_in: LogicObject,
data_out: LogicArrayObject,
valid_out: LogicObject,
ready_in: LogicObject,
*,
input_to_logic_array: Callable[[TInput], LogicArray],
output_from_value: Callable[[LogicArray], TOutput],
)
Automatic IO on pipelined hw components.
Components under test have to follow the interface specified in :doc:`/creator/pipelined_hw_components`.

Use the coroutine .drive_chunks() to write data to the dut, and start the coroutine .collect_chunks() with cocotb.start_soon() to read data asynchronously from the dut.

In most cases, creating a new StreamInterface object should be done by calling the .from_dut() function.

By default the data is provided and collected as strings, but you can use a custom data type by injecting functions for conversion to/from cocotb LogicArrays.
Example:
cocotb.start_soon(Clock(dut.clk, 10, "ns").start())
stream = StreamInterface.from_dut(dut)
reset = ResetControl.from_dut(dut)
dut.src_valid.value = 0
dut.dst_ready.value = 0
await RisingEdge(dut.clk)
await reset.reset_active_high()
dut.en.value = 1
collect_task = cocotb.start_soon(
    stream.collect_chunks(expected_count=1, max_cycles=10)
)
await stream.drive_chunks([input])
observed = await collect_task
assert observed == [expected]
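For the custom-data-type injection mentioned above, converters map between your type and bit strings that a cocotb LogicArray can be built from. The functions below are a hypothetical sketch (the fixed width of 8 and the names are assumptions); in a testbench they would be wrapped into the input_to_logic_array/output_from_value parameters.

```python
def int_to_logic_bits(value: int, width: int = 8) -> str:
    # Binary string representation suitable for constructing a LogicArray,
    # e.g. cocotb.types.LogicArray("00000101"); width 8 is an assumption.
    return format(value, f"0{width}b")


def logic_bits_to_int(bits: str) -> int:
    # Inverse conversion: interpret the collected bit string as an integer.
    return int(bits, 2)
```

With these, an integer-based interface could be created roughly as `StreamInterface(..., input_to_logic_array=lambda x: LogicArray(int_to_logic_bits(x)), output_from_value=lambda la: logic_bits_to_int(str(la)))`.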
build_report_folder_and_testdata

Build the test/simulation folder which contains the test data and hardware design for testing in cocotb.

:param dut_name: The name of the Top Module
:param testdata: Dictionary with test data/params
:return: Path to the report folder containing hardware design and testpattern data
cocotb_test_fixture

cocotb_test_fixture(request) -> Iterator[CocotbTestFixture]

Yields the set-up CocotbTestFixture and performs the necessary cleanup after the test run.

To use the fixture, add the line importing it either to a conftest.py in the test directory tree or to the test module. For more information see the documentation of CocotbTestFixture.
eai_testbench

Intended usage:

@cocotb.test()
@eai_testbench
async def my_testbench_for_input_buffer(dut, x, input_data):
    dut.d_in = x

and

@pytest.mark.parametrize("x", [1, 2, 3])
def test_input_buffer(cocotb_test_fixture, x):
    cocotb_test_fixture.write({"input_data": "hello world"})
    cocotb_test_fixture.run()

The example will assume your top-level module is "input_buffer" and its source file lives in a sibling folder of the test folder that contains the pytest test function.

It will create a unique subdirectory under build_test that matches the path to the module containing the testbench definition and the pytest test function (both need to live in the same module). This prevents test A from overriding the artifacts of test B.

The name of the subdirectory will be derived from the parameters passed via the parametrize pytest marker and the top module name. In this example this results in the folders input_buffer_test_input_buffer_x_1, input_buffer_test_input_buffer_x_2, and input_buffer_test_input_buffer_x_3.
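The naming scheme above can be sketched as a small helper. This mirrors only the documented pattern (top module name, test name, then key_value pairs); `artifact_folder_name` is a hypothetical name, not the library's actual function.

```python
def artifact_folder_name(top_module: str, test_name: str, params: dict) -> str:
    """Derive a unique artifact subdirectory name, per the documented scheme.

    Combines the top module name, the pytest test name, and each
    parametrize key/value pair, joined with underscores.
    """
    parts = [top_module, test_name]
    for key, value in params.items():
        parts.append(f"{key}_{value}")
    return "_".join(parts)
```

This makes it clear why parametrized runs of the same test never collide: each parameter combination yields a distinct folder.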
read_testdata

Read the data as a test pattern in the cocotb testbench.

:param dut_name: The name of the Top Module DUT in the cocotb testbench (using dut._name)
:return: Dictionary with the test pattern for testing the DUT
run_cocotb_sim
run_cocotb_sim(
src_files: Iterable[str] | Iterable[Path],
top_module_name: str,
cocotb_test_module: str,
defines: dict[str, Any] | Callable[[], dict[str, Any]] = lambda: {},
params: dict[str, Any] | Callable[[], dict[str, Any]] = lambda: {},
timescale: tuple[str, str] = ("1ps", "1fs"),
en_debug_mode: bool = True,
waveform_save_dst: str = "",
build_sim_dir: str | Path | None = None,
) -> Path
Function for running a Verilog/VHDL simulation using the cocotb environment.

:param src_files: List with the source files of each used Verilog/VHDL file
:param top_module_name: Name of the top module (from file)
:param cocotb_test_module: Fully qualified name of the Python module containing the cocotb testbench
:param defines: Dictionary of parameters to pass to the module [key: value, ...] - usable only in Verilog
:param params: Dictionary of parameters to pass to the module [key: value, ...] - Verilog parameters or VHDL generics
:param timescale: Tuple with the timescale values for the simulation (step, accuracy)
:param en_debug_mode: Enable debug mode
:param waveform_save_dst: Path to the destination folder for saving the waveform file
:return: Path to the folder which includes the waveform file [Default: simulation output folder]
run_cocotb_sim_for_src_dir
run_cocotb_sim_for_src_dir(
src_files: Iterable[str] | Iterable[Path],
top_module_name: str,
cocotb_test_module: str,
path2src: str = "",
defines: dict | Callable[[], dict] = lambda: {},
params: dict | Callable[[], dict] = lambda: {},
timescale: tuple[str, str] = ("1ps", "1fs"),
en_debug_mode: bool = True,
waveform_save_dst: str = "",
) -> Path
Function for running a Verilog/VHDL simulation using the cocotb environment.

:param src_files: List with the source files of each used Verilog/VHDL file
:param top_module_name: Name of the top module (from file)
:param cocotb_test_module: Fully qualified name of the Python module containing the cocotb testbench
:param path2src: Path to the folder in which all src files are available for testing
:param defines: Dictionary of parameters to pass to the module [key: value, ...] - usable only in Verilog
:param params: Dictionary of parameters to pass to the module [key: value, ...] - value will be ignored
:param timescale: Tuple with the timescale values for the simulation (step, accuracy)
:param en_debug_mode: Enable debug mode
:param waveform_save_dst: Path to the destination folder for saving the waveform file
:return: Path to the folder which includes the waveform file [Default: simulation output folder]