braincog.base.connection package
Submodules
braincog.base.connection.CustomLinear module
braincog.base.connection.layer module
- class braincog.base.connection.layer.LIPool(child=None)
Bases: Module
An exact replacement for the max-pooling layer in conversion methods. LIPooling introduces a lateral-inhibition mechanism to guarantee that the maximum output of the converted SNN matches the expected value. A usage sketch follows this entry.
Reference: https://arxiv.org/abs/2204.13271
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- reset()
- training: bool
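A minimal usage sketch, not taken from the source: it assumes that child is the original ANN pooling layer being replaced, that one spike map of shape [B, C, H, W] is fed per time step, and that reset() clears accumulated state between samples.

```python
import torch
import torch.nn as nn
from braincog.base.connection.layer import LIPool

# Assumption: `child` is the original max-pooling layer of the converted ANN.
pool = LIPool(child=nn.MaxPool2d(kernel_size=2, stride=2))

for _ in range(8):                                    # 8 simulation time steps
    spikes = (torch.rand(1, 16, 8, 8) > 0.5).float()  # spike map for this step
    out = pool(spikes)                                # pooled spikes, [1, 16, 4, 4]

pool.reset()  # clear accumulated state before the next input sample
```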
- class braincog.base.connection.layer.LayerNorm(normalized_shape, eps=1e-06, data_format='channels_last')
Bases: Module
LayerNorm that supports two data formats, channels_last (default) and channels_first, which describe the ordering of the dimensions in the inputs: channels_last corresponds to inputs with shape (batch_size, height, width, channels), while channels_first corresponds to inputs with shape (batch_size, channels, height, width). A usage sketch follows this entry.
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
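A short sketch of the two data formats described above; the tensor sizes are illustrative only.

```python
import torch
from braincog.base.connection.layer import LayerNorm

x = torch.rand(2, 64, 8, 8)                 # (batch_size, channels, height, width)

norm_cf = LayerNorm(64, data_format='channels_first')
y = norm_cf(x)                              # normalized over the 64 channels

norm_cl = LayerNorm(64)                     # channels_last is the default
z = norm_cl(x.permute(0, 2, 3, 1))          # (batch_size, height, width, channels)
```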
- class braincog.base.connection.layer.NDropout(p)
Bases: Module
Same functionality as Dropout, but guarantees that the same mask is used for a given sample at every time step. A usage sketch follows this entry.
- create_mask(x)
Generate a new mask. :param x: input Tensor; a mask of the same shape is generated. :return:
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- n_reset()
Reset so that a new mask can be generated. :return:
- training: bool
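A sketch of the time-step behaviour described above, assuming the mask is drawn lazily on the first forward call after n_reset() and then reused:

```python
import torch
from braincog.base.connection.layer import NDropout

drop = NDropout(p=0.5)
drop.train()                     # dropout is only expected to act in training mode

x = torch.ones(2, 100)
drop.n_reset()                   # start a new sample: a fresh mask will be drawn
y_t0 = drop(x)                   # time step 0
y_t1 = drop(x)                   # time step 1 reuses the same mask
print(torch.equal(y_t0, y_t1))   # expected: True
```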
- class braincog.base.connection.layer.SMaxPool(child)
Bases: Module
A conventional replacement for the max-pooling layer in conversion methods. The spikes of the neuron with the highest firing rate are allowed to pass, which satisfies the needs of a general max-pooling layer. A usage sketch follows this entry.
Reference: https://arxiv.org/abs/1612.04052
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- reset()
- training: bool
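A usage sketch analogous to the LIPool example above, again assuming child is the original nn.MaxPool2d and that one spike map is fed per time step:

```python
import torch
import torch.nn as nn
from braincog.base.connection.layer import SMaxPool

pool = SMaxPool(nn.MaxPool2d(kernel_size=2, stride=2))

for _ in range(8):                                    # 8 simulation time steps
    spikes = (torch.rand(1, 16, 8, 8) > 0.7).float()
    out = pool(spikes)          # passes spikes of the most active neurons, [1, 16, 4, 4]

pool.reset()  # clear accumulated firing-rate statistics between samples
```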
- class braincog.base.connection.layer.TEBN(num_features, step, eps=1e-05, momentum=0.1, **kwargs)
Bases: Module
- forward(input)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class braincog.base.connection.layer.ThresholdDependentBatchNorm2d(num_features, alpha: float, threshold: float = 0.5, layer_by_layer: bool = True, affine: bool = True, **kwargs)
Bases: _BatchNorm
tdBN (threshold-dependent batch normalization). A usage sketch follows this entry.
Reference: https://ojs.aaai.org/index.php/AAAI/article/view/17320
- affine: bool
- eps: float
- forward(input)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- momentum: float
- num_features: int
- track_running_stats: bool
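A sketch of tdBN used as a drop-in replacement for nn.BatchNorm2d in a convolutional block. The values alpha=1.0 and threshold=0.5 (matching the firing threshold of the following neuron) and the input layout [T * B, C, H, W] with layer_by_layer=True are assumptions, not prescribed by this page.

```python
import torch
import torch.nn as nn
from braincog.base.connection.layer import ThresholdDependentBatchNorm2d

block = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1, bias=False),
    ThresholdDependentBatchNorm2d(16, alpha=1.0, threshold=0.5),
)

T, B = 4, 2
x = torch.rand(T * B, 2, 32, 32)   # time steps folded into the batch dimension (assumed)
out = block(x)                     # [8, 16, 32, 32]
```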
- class braincog.base.connection.layer.VotingLayer(voter_num: int)
Bases: Module
Output layer for SNNs; several neurons vote for the final class. A usage sketch follows this entry. :param voter_num: number of voting neurons per class, e.g. voter_num = 10 means the outputs of these 10 neurons are averaged.
- forward(x: Tensor)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
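A sketch of the voting behaviour: with voter_num = 10, every group of 10 consecutive output neurons is averaged into one class score. The input layout [batch, num_classes * voter_num] is an assumption.

```python
import torch
from braincog.base.connection.layer import VotingLayer

vote = VotingLayer(voter_num=10)
rates = torch.rand(4, 100)    # e.g. accumulated firing rates of 100 output neurons
scores = vote(rates)          # [4, 10]: one averaged score per class
```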
- class braincog.base.connection.layer.WTALayer(k=1)
Bases: Module
Winner-take-all layer, applied after each layer of an SNN; one or more outputs are randomly selected to pass. A usage sketch follows this entry. :param k: number of selected outputs; defaults to 1.
- forward(x: Tensor)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
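A sketch of winner-take-all applied to the output of a convolutional layer. The input layout [B, C, H, W] and the competition being resolved along the channel dimension with random selection among active neurons are assumptions based on the description above.

```python
import torch
from braincog.base.connection.layer import WTALayer

wta = WTALayer(k=1)
spikes = (torch.rand(2, 8, 4, 4) > 0.5).float()   # spike map from the previous layer
out = wta(spikes)                                 # at most k winners pass per competition
```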
Module contents
- class braincog.base.connection.CustomLinear(weight, mask=None)
Bases: Module
User-defined connection, typically used for STDP computation. A usage sketch follows this entry.
- forward(x: Tensor)
:param x: input, x.shape = [N]
- training: bool
- update(dw)
:param dw: weight update amount
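A minimal sketch of a user-defined connection for STDP-style learning. The weight layout [n_pre, n_post], the role of mask as marking which synapses may be updated, and the way dw is obtained are assumptions for illustration only.

```python
import torch
from braincog.base.connection import CustomLinear

weight = torch.rand(4, 3)        # assumed layout: [n_pre, n_post]
mask = torch.ones(4, 3)
mask[0, 0] = 0.                  # assumption: masked synapses are excluded from updates
fc = CustomLinear(weight, mask)

x = torch.rand(4)                # pre-synaptic activity, x.shape = [N]
out = fc(x)                      # post-synaptic drive

dw = 0.01 * torch.ones(4, 3)     # weight change, e.g. computed by an STDP rule
fc.update(dw)                    # apply the update to the connection weights
```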
- class braincog.base.connection.LIPool(child=None)
Bases: Module
An exact replacement for the max-pooling layer in conversion methods. LIPooling introduces a lateral-inhibition mechanism to guarantee that the maximum output of the converted SNN matches the expected value.
Reference: https://arxiv.org/abs/2204.13271
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- reset()
- training: bool
- class braincog.base.connection.LayerNorm(normalized_shape, eps=1e-06, data_format='channels_last')
Bases: Module
LayerNorm that supports two data formats, channels_last (default) and channels_first, which describe the ordering of the dimensions in the inputs: channels_last corresponds to inputs with shape (batch_size, height, width, channels), while channels_first corresponds to inputs with shape (batch_size, channels, height, width).
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class braincog.base.connection.NDropout(p)
Bases: Module
Same functionality as Dropout, but guarantees that the same mask is used for a given sample at every time step.
- create_mask(x)
Generate a new mask. :param x: input Tensor; a mask of the same shape is generated. :return:
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- n_reset()
Reset so that a new mask can be generated. :return:
- training: bool
- class braincog.base.connection.SMaxPool(child)
Bases: Module
A conventional replacement for the max-pooling layer in conversion methods. The spikes of the neuron with the highest firing rate are allowed to pass, which satisfies the needs of a general max-pooling layer.
Reference: https://arxiv.org/abs/1612.04052
- forward(x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- reset()
- training: bool
- class braincog.base.connection.ThresholdDependentBatchNorm2d(num_features, alpha: float, threshold: float = 0.5, layer_by_layer: bool = True, affine: bool = True, **kwargs)
Bases: _BatchNorm
tdBN (threshold-dependent batch normalization). Reference: https://ojs.aaai.org/index.php/AAAI/article/view/17320
- affine: bool
- eps: float
- forward(input)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- momentum: float
- num_batches_tracked: Optional[Tensor]
- num_features: int
- running_mean: Optional[Tensor]
- running_var: Optional[Tensor]
- track_running_stats: bool
- training: bool
- class braincog.base.connection.VotingLayer(voter_num: int)
Bases: Module
Output layer for SNNs; several neurons vote for the final class. :param voter_num: number of voting neurons per class, e.g. voter_num = 10 means the outputs of these 10 neurons are averaged.
- forward(x: Tensor)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class braincog.base.connection.WTALayer(k=1)
Bases: Module
Winner-take-all layer, applied after each layer of an SNN; one or more outputs are randomly selected to pass. :param k: number of selected outputs; defaults to 1.
- forward(x: Tensor)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool