braincog.base.encoder package

Submodules

braincog.base.encoder.encoder module

class braincog.base.encoder.encoder.AutoEncoder(step, spike_output=True)

Bases: Module

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class braincog.base.encoder.encoder.Encoder(step, encode_type='ttfs', *args, **kwargs)

Bases: Module

Encode a static image into a spike train.

Parameters
  • step – number of simulation time steps
  • encode_type – encoding scheme, one of direct, ttfs, rate, phase
  • temporal_flatten – concatenate the temporal dimension directly onto the channel dimension
  • layer_by_layer – whether to run inference layer by layer, computing all outputs of each layer at once

Output shape: (step, batch_size, ...)
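A minimal usage sketch, assuming BrainCog and PyTorch are installed; the tensor shapes follow the descriptions above and the keyword names mirror the constructor signature, but details may differ from the released API:

   import torch
   from braincog.base.encoder import Encoder

   step = 8
   encoder = Encoder(step=step, encode_type='rate')   # rate coding over 8 time steps

   images = torch.rand(4, 3, 32, 32)    # (batch, channel, height, width)
   spikes = encoder(images)             # expected: spike train covering all time steps
   print(spikes.shape)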

auto(inputs)
delete(inputs, prob)

Randomly delete spikes during encoding.

Parameters
  • inputs – encoded data
  • prob – probability of deleting each spike

Returns: data with spikes randomly removed
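The operation described above amounts to masking spikes with a Bernoulli draw; a minimal, framework-agnostic sketch of the idea (illustrative only, not the BrainCog implementation, and the helper name is hypothetical):

   import torch

   def delete_spikes(spikes: torch.Tensor, prob: float) -> torch.Tensor:
       # Each existing spike survives with probability 1 - prob.
       keep_mask = (torch.rand_like(spikes) > prob).float()
       return spikes * keep_mask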

direct(inputs)

Direct encoding.

Parameters
  • inputs – tensor of shape (b, c, w, h)

Returns: tensor of shape (t, b, c, w, h)
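Direct encoding presents the same analog input at every time step; a short sketch of the idea (illustrative, the helper name is hypothetical):

   import torch

   def direct_encode(x: torch.Tensor, step: int) -> torch.Tensor:
       # Repeat the static input (b, c, w, h) along a new time axis -> (t, b, c, w, h).
       return x.unsqueeze(0).repeat(step, 1, 1, 1, 1)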

forward(inputs, deletion_prob=None, shift_var=None)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

phase(inputs)

Phase coding.

Parameters
  • inputs – static data

Returns: encoded data
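Phase coding is commonly implemented by expanding each pixel's 8-bit value into binary digits, one bit per time step, so that a spike's significance depends on its phase. The sketch below follows this common weighted-bit formulation and is an assumption, not necessarily BrainCog's exact implementation:

   import torch

   def phase_encode(x: torch.Tensor, step: int, bits: int = 8) -> torch.Tensor:
       # Map intensities in [0, 1] to 8-bit integers, then emit one bit per step,
       # most significant bit first, cycling every `bits` steps.
       x = (x.clamp(0, 1) * (2 ** bits - 1)).long()
       out = []
       for t in range(step):
           bit = (x >> (bits - 1 - t % bits)) & 1
           out.append(bit.float())
       return torch.stack(out, dim=0)    # (t, b, c, w, h)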

rate(inputs)

Rate coding.

Parameters
  • inputs – static data

Returns: encoded data
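Rate coding is typically realised by treating the normalised intensity as a per-step firing probability (Bernoulli sampling). A minimal sketch of that idea, assuming inputs in [0, 1]; not necessarily the exact library code:

   import torch

   def rate_encode(x: torch.Tensor, step: int) -> torch.Tensor:
       # Sample an independent spike at every time step with probability x.
       probs = x.clamp(0, 1).unsqueeze(0).expand(step, *x.shape)
       return torch.bernoulli(probs.contiguous())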

shift(inputs, var)

Randomly shift the data in time, adding noise.

Parameters
  • inputs – encoded data
  • var – variance of the random shift

Returns: shifted data
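One plausible reading of this operation is a random circular shift of the spike train along the time axis, with the offset drawn from a zero-mean normal distribution of variance var. The sketch below illustrates that reading only; the helper name and the exact noise model are assumptions:

   import torch

   def shift_spikes(spikes: torch.Tensor, var: float) -> torch.Tensor:
       # Roll a (t, b, c, w, h) spike tensor along time by a random offset.
       offset = int(torch.randn(1).item() * var ** 0.5)
       return torch.roll(spikes, shifts=offset, dims=0)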

training: bool
ttfs(inputs)

Time-to-first-spike encoding.

Parameters
  • inputs – static data

Returns: encoded data
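In time-to-first-spike coding, stronger inputs fire earlier: each pixel emits a single spike at a latency that decreases with intensity. A minimal sketch of the scheme, assuming inputs in [0, 1] and a linear intensity-to-latency mapping (the library's exact mapping may differ):

   import torch

   def ttfs_encode(x: torch.Tensor, step: int) -> torch.Tensor:
       # One spike per pixel at time t = round((1 - intensity) * (step - 1)).
       x = x.clamp(0, 1)
       spike_time = ((1.0 - x) * (step - 1)).round().long()   # bright pixels spike first
       out = torch.zeros(step, *x.shape)
       out.scatter_(0, spike_time.unsqueeze(0), 1.0)
       return out    # (t, b, c, w, h)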

braincog.base.encoder.population_coding module

class braincog.base.encoder.population_coding.PEncoder(step, encode_type)

Bases: Module

Population coding.

Parameters
  • step – number of time steps
  • encode_type – encoder type (str)

forward(inputs, num_popneurons, *args, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

population_time(inputs, m)

Each feature is encoded by a population of Gaussian neurons.

The center of the i-th neuron is

\[\mu_i = I_{min} + \frac{2i - 3}{2}\,\frac{I_{max} - I_{min}}{m - 2}\]

and the width of the i-th neuron is

\[\sigma_i = \frac{1}{1.5}\,\frac{I_{max} - I_{min}}{m - 2}\]

Parameters
  • inputs – (N_num, N_feature) array
  • m – the number of Gaussian neurons

Here i indexes the Gaussian neurons, 1.5 is an empirical constant, I_min = min(inputs), and I_max = max(inputs). The spike time of each population neuron is given by its Gaussian response to the input.

Returns: (step, num_gauss_neuron)
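The centres and widths above can be evaluated directly. The sketch below computes the Gaussian responses of m neurons to a 1-D input array as an illustration of the formulas; it is not the library's code path, and how responses are converted into spike times is only hinted at in the comment:

   import numpy as np

   def gaussian_population_response(inputs: np.ndarray, m: int) -> np.ndarray:
       # Centres and width follow the docstring formulas (i = 1..m, 1.5 is empirical).
       i_min, i_max = inputs.min(), inputs.max()
       i = np.arange(1, m + 1)
       mu = i_min + (2 * i - 3) / 2 * (i_max - i_min) / (m - 2)
       sigma = (1 / 1.5) * (i_max - i_min) / (m - 2)
       # Gaussian response of every neuron to every input value;
       # a larger response is typically mapped to an earlier spike time.
       return np.exp(-((inputs[..., None] - mu) ** 2) / (2 * sigma ** 2))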

population_voltage(inputs, m, VTH)

The more similar the input is to a neuron's mean, the more sensitive that neuron is to the input. The mean can be changed.

Parameters
  • inputs – (N_num, N_feature) array
  • m – the number of Gaussian neurons
  • VTH – threshold voltage

Each feature is encoded by a population of Gaussian neurons. The center of the i-th neuron is

\[\mu_i = I_{min} + \frac{2i - 3}{2}\,\frac{I_{max} - I_{min}}{m - 2}\]

and the width of the i-th neuron is

\[\sigma_i = \frac{1}{1.5}\,\frac{I_{max} - I_{min}}{m - 2}\]

Here i indexes the Gaussian neurons, 1.5 is an empirical constant, I_min = min(inputs), and I_max = max(inputs). The membrane voltage of each population neuron follows its Gaussian response to the input.

Returns: (step, num_gauss_neuron, dim_inputs)

training: bool

braincog.base.encoder.qs_coding module

class braincog.base.encoder.qs_coding.QSEncoder(lambda_max, steps, sig_len, shift=False, noise=None, noise_rate=None, eps=1e-06)

Bases: object

QS encoding.

Parameters
  • lambda_max – maximum firing rate
  • steps – length of the spike emission period T
  • sig_len – length of the spike emission window
  • shift – whether to invert the background
  • noise – whether to add noise
  • noise_rate – noise ratio
  • eps – small constant to prevent overflow
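Based only on the constructor signature above, instantiation looks like the following; the argument values are arbitrary placeholders and the encoding itself is performed by the methods documented below:

   from braincog.base.encoder import QSEncoder

   qs_encoder = QSEncoder(
       lambda_max=0.9,    # maximum firing rate
       steps=16,          # spike emission period T
       sig_len=8,         # spike emission window
       shift=True,        # invert the background
       noise=None,
       noise_rate=None,
   )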

noise_trans(image, image_ori, image_ori_delta)

Convert a noisy image into a spike train.

Parameters
  • image – background-inverted image
  • image_ori – original image
  • image_ori_delta – original perturbed image

reverse_pixels(image, image_delta, noise_rate, flip_bits=None)

Invert image pixels.

shift_trans(image, image_delta, image_ori, image_ori_delta)

Convert a background-inverted image into a spike train.

Parameters
  • image – background-inverted image
  • image_delta – perturbed image, used to compute the phase
  • image_ori – original image
  • image_ori_delta – original perturbed image

Module contents

The braincog.base.encoder package re-exports Encoder, PEncoder, and QSEncoder from the submodules documented above.