elasticai.creator.nn.quantized_grads.base_modules.batchnorm2d

Module Contents

Classes

BatchNorm2d

A BatchNorm2d whose output is fake quantized. The weights and bias are fake quantized during initialization. Make sure that math_ops is a module that owns every tensor it needs, so that all of them can be moved to the same device. Make sure that weight_quantization and bias_quantization are modules that implement the forward function. If you want to quantize during initialization, or to apply only quantized updates, use a quantized optimizer and implement the right_inverse method for your module.

API

class elasticai.creator.nn.quantized_grads.base_modules.batchnorm2d.BatchNorm2d(math_ops: torch.nn.Module, weight_quantization: torch.nn.Module, bias_quantization: torch.nn.Module, num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True, device: Any = None, dtype: Any = None)

Bases: torch.nn.BatchNorm2d

A BatchNorm2d whose output is fake quantized. The weights and bias are fake quantized during initialization. Make sure that math_ops is a module that owns every tensor it needs, so that all of them can be moved to the same device. Make sure that weight_quantization and bias_quantization are modules that implement the forward function. If you want to quantize during initialization, or to apply only quantized updates, use a quantized optimizer and implement the right_inverse method for your module.
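The docstring requires quantization modules that implement forward (the fake quantization) and, when a quantized optimizer is used, right_inverse. Below is a minimal sketch of such a module; the fixed-point rounding scheme and the name FixedPointQuantization are illustrative assumptions, not part of the library's published API.

import torch


class FixedPointQuantization(torch.nn.Module):
    """Hypothetical fake quantizer that rounds values to a fixed-point grid."""

    def __init__(self, frac_bits: int = 8) -> None:
        super().__init__()
        self.scale = 2.0**frac_bits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fake quantization: snap values to the grid but keep float dtype.
        return torch.round(x * self.scale) / self.scale

    def right_inverse(self, x: torch.Tensor) -> torch.Tensor:
        # Maps an already-quantized tensor back to parameter space;
        # for plain rounding the identity is a valid right inverse.
        return x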

Initialization

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x: torch.Tensor) → torch.Tensor
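A minimal usage sketch, assuming the FixedPointQuantization module sketched above. torch.nn.Identity() is used here only as a stand-in for math_ops; the exact operations a real math_ops module must provide are defined elsewhere in the library and not documented on this page.

import torch

from elasticai.creator.nn.quantized_grads.base_modules.batchnorm2d import BatchNorm2d

bn = BatchNorm2d(
    math_ops=torch.nn.Identity(),  # stand-in only; substitute the library's math ops module
    weight_quantization=FixedPointQuantization(frac_bits=8),
    bias_quantization=FixedPointQuantization(frac_bits=8),
    num_features=16,
)

x = torch.randn(4, 16, 8, 8)  # (batch, channels, height, width)
y = bn(x)  # the output of the batchnorm is fake quantized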