braincog.base.conversion package

Submodules

braincog.base.conversion.convertor module

class braincog.base.conversion.convertor.Convertor(dataloader, device=None, p=0.9995, channelnorm=False, lipool=True, gamma=1, soft_mode=True, merge=True, batch_num=1)

Bases: Module

ANN2SNN converter.

Converts a complete PyTorch model. A subset of the data from dataloader is used to estimate activation maxima; p selects the p-th percentile of the activations as the maximum.

channelnorm: https://arxiv.org/abs/1903.06530 With channelnorm, the maximum is computed per channel and the weights are normalized channel-wise.

gamma: https://arxiv.org/abs/2204.13271 gamma sets the number of burst spikes; burst spikes increase a neuron's firing capacity and reduce residual information.

lipool: https://arxiv.org/abs/2204.13271 With lipool, max pooling is performed via a lateral-inhibition mechanism; LIPooling converts the max-pooling layers of an SNN effectively.

soft_mode: https://arxiv.org/abs/1612.04052 soft_mode enables the so-called soft reset, which reduces the information a neuron loses when it resets and noticeably improves conversion performance.

merge: whether to fuse adjacent convolution and BN layers in the network. batch_num: how many batches from dataloader are used.
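A minimal usage sketch, assuming forward(model) returns the converted network (the toy ann and the synthetic loader below are placeholders, not part of the library):

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    from braincog.base.conversion.convertor import Convertor

    # toy ANN and synthetic calibration data, for illustration only
    ann = nn.Sequential(
        nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
    loader = DataLoader(data, batch_size=32)

    convertor = Convertor(loader, device='cpu', p=0.9995,
                          lipool=True, soft_mode=True, gamma=1, batch_num=1)
    snn = convertor(ann)  # fuse Conv+BN, calibrate percentiles, swap ReLU for spiking nodes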

forward(model)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

static get_percentile(model, dataloader, device, batch_num=1)

This function must be used with a network that has HookScale layers attached.

static register_hook(model, p=0.99, channelnorm=False, gamma=0.999)

Reference: https://github.com/fangwei123456/spikingjelly

Registers a HookScale module after every layer of the network. In simulation this is equivalent to normalizing the weights, and it extends easily to networks of arbitrary structure.
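A sketch of what this registration could look like; the recursion and the Sequential wrapping are illustrative assumptions, not the library's exact implementation (per the HookScale docstring below, the recording matters after each ReLU):

    import torch.nn as nn
    from braincog.base.conversion.convertor import HookScale

    def register_hook(model, p=0.99, channelnorm=False, gamma=0.999):
        # append a HookScale after each ReLU so activation percentiles
        # are recorded during the calibration forward passes
        for name, child in model.named_children():
            if isinstance(child, nn.ReLU):
                setattr(model, name,
                        nn.Sequential(child, HookScale(p, channelnorm, gamma)))
            else:
                register_hook(child, p, channelnorm, gamma)
        return model

get_percentile presumably then feeds batch_num batches through the hooked network so the HookScale modules can observe the activations.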

static replace_for_spike(model, lipool=True, soft_mode=True, gamma=1)

Converts a defined ANN model into an SNN model. ReLU units are replaced with spiking neurons; if the model uses max pooling, lipool selects between the conventional method and LIPooling.
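A sketch of the replacement pass under the same assumptions (the LIPooling branch is elided, and SNode construction follows its signature below):

    import torch.nn as nn
    from braincog.base.conversion.convertor import SNode

    def replace_for_spike(model, lipool=True, soft_mode=True, gamma=1):
        # swap every ReLU for a spiking neuron; handling of max pooling
        # (conventional vs. LIPooling when lipool=True) is omitted here
        for name, child in model.named_children():
            if isinstance(child, nn.ReLU):
                setattr(model, name, SNode(soft_mode=soft_mode, gamma=gamma))
            else:
                replace_for_spike(child, lipool, soft_mode, gamma)
        return model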

training: bool
class braincog.base.conversion.convertor.HookScale(p: float = 0.9995, channelnorm: bool = False, gamma: float = 0.999)

Bases: Module

Records the percentile maximum of the output after each ReLU layer.

For channelnorm: torch.quantile is used to obtain the maximum. For layernorm: the values are sorted and the percentile is taken manually, because torch.quantile caps the input size when computed over a single channel and is error-prone with large batches.
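A sketch of the two percentile computations described above (the function names are hypothetical; torch.quantile rejects very large inputs, which is why the sort-based path exists):

    import torch

    # channelnorm: per-channel percentile via torch.quantile
    def channel_percentile(x: torch.Tensor, p: float) -> torch.Tensor:
        xc = x.transpose(0, 1).reshape(x.size(1), -1)  # (C, N*H*W)
        return torch.quantile(xc, p, dim=1)

    # layer-wise: sort and index manually to stay safe on large batches
    def layer_percentile(x: torch.Tensor, p: float) -> torch.Tensor:
        v, _ = x.flatten().sort()
        return v[int(p * (v.numel() - 1))]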

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class braincog.base.conversion.convertor.Hookoutput(module)

Bases: Module

Wraps ReLU and ClipQuan during pseudo-conversion so that their outputs can be monitored.

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class braincog.base.conversion.convertor.SNode(soft_mode=False, gamma=5)

Bases: Module

Neuron model for the converted SNN. gamma=1 gives the IF neuron model; any gamma greater than 1 gives the burst neuron model. soft_mode selects the neuron's reset method; soft reset greatly reduces the information a neuron loses during the reset.
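A minimal sketch of these dynamics; the attribute names (mem, threshold, spike) and the exact update rule are assumptions, not the library's actual forward:

    import torch

    def forward(self, x):
        # integrate the input current into the membrane potential
        self.mem = self.mem + x
        # emit up to gamma spikes per step: gamma=1 is a plain IF neuron,
        # gamma>1 is a burst neuron that clears residual charge faster
        self.spike = torch.clamp(
            torch.div(self.mem, self.threshold, rounding_mode='floor'),
            min=0, max=self.gamma)
        if self.soft_mode:
            self.soft_reset()
        else:
            self.hard_reset()
        # the output carries the fired charge (spike count times threshold)
        return self.spike * self.threshold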

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

hard_reset()

After a hard reset, the neuron's membrane potential is set to 0.

reset()

soft_reset()

After a soft reset, the neuron's membrane potential equals its current membrane potential minus the threshold.
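A sketch of the two reset rules, consistent with the dynamics sketched above (attribute names are assumptions):

    def hard_reset(self):
        # clear the membrane potential wherever the neuron fired
        self.mem = self.mem * (self.spike == 0).float()

    def soft_reset(self):
        # subtract the fired charge, keeping the residual potential
        self.mem = self.mem - self.spike * self.threshold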

training: bool
class braincog.base.conversion.convertor.Scale(scale: float = 1.0)

Bases: Module

Scales values during the forward pass.

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
braincog.base.conversion.convertor.reset(self)

The converted network comes from an ANN, so the newly attached spiking modules need to be reset. Each module's name is checked and the corresponding node's reset method is called.
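A sketch of that traversal (checking concrete module types rather than name strings is a simplifying assumption):

    from braincog.base.conversion.convertor import SNode

    def reset(self):
        # walk all submodules and reset the newly attached spiking nodes
        for m in self.modules():
            if isinstance(m, SNode):
                m.reset()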

braincog.base.conversion.merge module

braincog.base.conversion.merge.merge(conv, bn)

Fuses a convolution layer with its following BN layer.

conv: a convolution layer instance
bn: a BN layer instance
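The standard Conv-BN fusion folds the BN statistics into the convolution's weight and bias; a sketch of the arithmetic (fuse_conv_bn is a hypothetical name, not necessarily the library's implementation):

    import torch
    import torch.nn as nn

    def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
        fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                          conv.stride, conv.padding, conv.dilation,
                          conv.groups, bias=True)
        std = torch.sqrt(bn.running_var + bn.eps)
        # w' = w * gamma / std;  b' = (b - mean) * gamma / std + beta
        fused.weight.data = conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1)
        b = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.data = (b - bn.running_mean) * bn.weight / std + bn.bias
        return fused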

braincog.base.conversion.merge.mergeConvBN(m)

Merges adjacent convolution and BN layers within a network module.
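Hypothetical usage, assuming model is a trained torch.nn.Module:

    from braincog.base.conversion.merge import mergeConvBN

    model = mergeConvBN(model.eval())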

Module contents

The package re-exports Convertor, HookScale, Hookoutput, SNode, and Scale from braincog.base.conversion.convertor, and merge and mergeConvBN from braincog.base.conversion.merge. Their documentation is identical to the submodule entries above.