elasticai.creator.nn.quantized_grads.base_modules.conv1d

Module Contents

Classes
| Conv1d | A 1d convolution. The weights and bias are fake-quantized during initialization. |
API
- class elasticai.creator.nn.quantized_grads.base_modules.conv1d.Conv1d(math_ops: torch.nn.Module, weight_quantization: torch.nn.Module, in_channels: int, out_channels: int, kernel_size: int | tuple[int], stride: int | tuple[int] = 1, padding: int | tuple[int] | str = 0, dilation: int | tuple[int] = 1, groups: int = 1, bias: bool = True, bias_quantization: torch.nn.Module = None, device: Any = None, dtype: Any = None)[source]

Bases: torch.nn.Conv1d
A 1d convolution. The weights and bias are fake-quantized during initialization. Make sure that math_ops is a module that holds every tensor it needs, so that those tensors can be moved to the same device as the layer. Make sure that weight_quantization and bias_quantization are modules that implement the forward function. If you want to quantize during initialization or apply only quantized updates, use a quantized optimizer and implement the right_inverse method for your quantization modules.
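For illustration, here is a minimal sketch of a quantization module that satisfies the requirements above: a callable torch module with a forward method, plus a right_inverse method for use with a quantized optimizer. The RoundToFixedPoint name, the frac_bits parameter, and the round-to-fixed-point scheme are hypothetical, not part of this library:

```python
import torch


class RoundToFixedPoint(torch.nn.Module):
    """Hypothetical fake quantizer: rounds values onto a fixed-point grid."""

    def __init__(self, frac_bits: int = 8) -> None:
        super().__init__()
        self.scale = 2.0**frac_bits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Map full-precision values to the nearest representable grid value.
        return torch.round(x * self.scale) / self.scale

    def right_inverse(self, x: torch.Tensor) -> torch.Tensor:
        # Follows the torch parametrization convention: map an already
        # quantized tensor back to its stored representation. For plain
        # rounding, the identity is a valid right inverse, since
        # forward(x) == x for any x already on the grid.
        return x
```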
Initialization
Initialize internal Module state, shared by both nn.Module and ScriptModule.
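Putting it together, a minimal usage sketch against the signature above. torch.nn.Identity() stands in for a real math-ops module and is only a placeholder assumption, as are the tensor shapes; RoundToFixedPoint is the hypothetical quantizer sketched earlier:

```python
import torch

from elasticai.creator.nn.quantized_grads.base_modules.conv1d import Conv1d

quantize = RoundToFixedPoint(frac_bits=8)  # hypothetical quantizer from the sketch above

conv = Conv1d(
    math_ops=torch.nn.Identity(),  # placeholder; substitute the library's math-ops module
    weight_quantization=quantize,
    in_channels=3,
    out_channels=16,
    kernel_size=5,
    bias=True,
    bias_quantization=quantize,
)

x = torch.randn(8, 3, 64)  # (batch, in_channels, length)
y = conv(x)                # (8, 16, 60): length 64 - (5 - 1) with default stride and padding
```

Because math_ops, weight_quantization, and bias_quantization are registered as submodules, calling conv.to(device) moves any tensors they own along with the layer's own parameters, which is why the docstring asks that math_ops hold every tensor it needs.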