braincog.base.strategy package

Submodules

braincog.base.strategy.LateralInhibition module

class braincog.base.strategy.LateralInhibition.LateralInhibition(node, inh, mode='constant')

Bases: Module

Lateral inhibition: neurons that fire spikes inhibit the other neurons in the same layer. The inhibition acts on the membrane potential.

forward(x: Tensor, xori=None)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
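
The mechanism can be illustrated with a small, self-contained sketch (an illustration of the idea only, not the braincog implementation; the tensor shapes and the constant-mode arithmetic are assumptions):

    import torch

    def lateral_inhibition(mem, spike, inh=0.1):
        # mem:   membrane potential of one layer, shape [batch, n_neurons]
        # spike: binary spike output of the same layer, shape [batch, n_neurons]
        # Each neuron is suppressed in proportion to how many *other* neurons fired.
        others = spike.sum(dim=1, keepdim=True) - spike
        return mem - inh * others

    mem = torch.randn(2, 5)
    spike = (mem > 0.5).float()
    print(lateral_inhibition(mem, spike).shape)   # torch.Size([2, 5])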

braincog.base.strategy.surrogate module

class braincog.base.strategy.surrogate.AtanGrad(alpha=2.0, requires_grad=True)

Bases: SurrogateFunctionBase

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.BackEIGateGrad(alpha=2.0, requires_grad=False)

Bases: SurrogateFunctionBase

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.EIGrad(alpha=2.0, requires_grad=False)

Bases: SurrogateFunctionBase

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.GateGrad(alpha=2.0, requires_grad=False)

Bases: SurrogateFunctionBase

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.QGateGrad(alpha=2.0, requires_grad=False)

Bases: SurrogateFunctionBase

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.ReLUGrad(alpha=2.0, requires_grad=False)

Bases: SurrogateFunctionBase

Uses ReLU as the surrogate gradient function, mainly for testing an ANN with the same architecture.

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.STDPGrad(alpha=2.0, requires_grad=False)

Bases: SurrogateFunctionBase

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.SigmoidGrad(alpha=1.0, requires_grad=False)

Bases: SurrogateFunctionBase

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

training: bool
class braincog.base.strategy.surrogate.SurrogateFunctionBase(alpha, requires_grad=True)

Bases: Module

Base class for surrogate functions.
Parameters
  • alpha – shape parameter for surrogate functions whose shape can be tuned

  • requires_grad – whether the parameter alpha requires gradient computation; defaults to False

static act_fun(x, alpha)
Parameters
  • x – membrane potential input

  • alpha – parameter controlling the shape of the surrogate gradient; may be NoneType

Returns

The spike after firing, with values in [0, 1]

forward(x)
Parameters

x – membrane potential input

Returns

The spike after firing

training: bool
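
In practice a concrete subclass such as AtanGrad is used as the spike activation of a neuron model: the forward pass thresholds the membrane potential, while gradients flow through the surrogate. A minimal usage sketch (the LIF-style update below is illustrative only and assumes braincog is installed; it is not the braincog node implementation):

    import torch
    from braincog.base.strategy.surrogate import AtanGrad

    act = AtanGrad(alpha=2.0, requires_grad=True)   # surrogate spike function

    inputs = torch.rand(10, 4, requires_grad=True)  # [time, neurons]
    mem = torch.zeros(4)
    threshold, tau = 1.0, 2.0

    spikes = []
    for t in range(inputs.shape[0]):
        mem = mem + (inputs[t] - mem) / tau         # toy LIF integration
        spike = act(mem - threshold)                # spike via the surrogate function
        mem = mem * (1.0 - spike.detach())          # hard reset where a spike occurred
        spikes.append(spike)

    torch.stack(spikes).sum().backward()            # gradients flow through the surrogate
    print(inputs.grad.shape)                        # torch.Size([10, 4])
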
class braincog.base.strategy.surrogate.atan(*args, **kwargs)

Bases: Function

Uses Atan as the surrogate gradient function. The corresponding primitive function is:

\[g(x) = \frac{1}{\pi} \arctan(\frac{\pi}{2}\alpha x) + \frac{1}{2}\]

The function used in the backward pass is:

\[g'(x) = \frac{\alpha}{2(1 + (\frac{\pi}{2}\alpha x)^2)}\]
static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, inputs, alpha)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
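
The two formulas above determine the operation completely, so an equivalent autograd Function can be sketched directly from them (a re-implementation for illustration, not the braincog source; note that AtanGrad defaults to requires_grad=True for alpha, which this sketch does not handle):

    import math
    import torch

    class AtanSurrogate(torch.autograd.Function):
        # Heaviside forward, arctan-shaped surrogate gradient backward.

        @staticmethod
        def forward(ctx, x, alpha):
            ctx.save_for_backward(x)
            ctx.alpha = alpha
            return (x >= 0).float()                     # spike = Heaviside(x)

        @staticmethod
        def backward(ctx, grad_output):
            x, = ctx.saved_tensors
            alpha = ctx.alpha
            # g'(x) = alpha / (2 * (1 + (pi/2 * alpha * x)^2))
            grad_x = alpha / (2 * (1 + (math.pi / 2 * alpha * x) ** 2))
            return grad_output * grad_x, None           # no gradient for alpha in this sketch

    x = torch.randn(5, requires_grad=True)
    AtanSurrogate.apply(x, 2.0).sum().backward()
    print(x.grad)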

class braincog.base.strategy.surrogate.backeigate(*args, **kwargs)

Bases: Function

static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, input)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

class braincog.base.strategy.surrogate.ei(*args, **kwargs)

Bases: Function

static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, input)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

class braincog.base.strategy.surrogate.gate(*args, **kwargs)

Bases: Function

Uses gate as the surrogate gradient function. The corresponding primitive function is:

\[g(x) = \mathrm{NonzeroSign}(x) \log (|\alpha x| + 1)\]

The function used in the backward pass is:

\[g'(x) = \frac{\alpha}{1 + |\alpha x|} = \frac{1}{\frac{1}{\alpha} + |x|}\]
static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x, alpha)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
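
The documented pair can be checked numerically: evaluating the primitive with autograd reproduces the stated derivative (an illustrative check, not library code):

    import torch

    alpha = 2.0
    # Avoid x = 0, where the primitive has a kink and autograd returns a subgradient.
    x = torch.tensor([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0], requires_grad=True)

    # g(x) = NonzeroSign(x) * log(|alpha * x| + 1)
    nonzero_sign = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    g = nonzero_sign * torch.log(torch.abs(alpha * x) + 1)
    g.sum().backward()

    # g'(x) = alpha / (1 + |alpha * x|)
    closed_form = alpha / (1 + torch.abs(alpha * x.detach()))
    print(torch.allclose(x.grad, closed_form))   # True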

braincog.base.strategy.surrogate.heaviside(x)
class braincog.base.strategy.surrogate.quadratic_gate(*args, **kwargs)

Bases: Function

Uses quadratic_gate as the surrogate gradient function. The corresponding primitive function is:

\[\begin{split}g(x) = \begin{cases} 0, & x < -\frac{1}{\alpha} \\ -\frac{1}{2}\alpha^2|x|x + \alpha x + \frac{1}{2}, & |x| \leq \frac{1}{\alpha} \\ 1, & x > \frac{1}{\alpha} \\ \end{cases}\end{split}\]

The function used in the backward pass is:

\[\begin{split}g'(x) = \begin{cases} 0, & |x| > \frac{1}{\alpha} \\ -\alpha^2|x|+\alpha, & |x| \leq \frac{1}{\alpha} \end{cases}\end{split}\]
static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x, alpha)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
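
Both branches of the piecewise definition can be written down directly; the sketch below evaluates the primitive and its derivative from the formulas above (illustration only):

    import torch

    def quadratic_gate(x, alpha=2.0):
        # Primitive g(x): 0 below -1/alpha, 1 above 1/alpha, quadratic ramp in between.
        middle = -0.5 * alpha ** 2 * x.abs() * x + alpha * x + 0.5
        return torch.where(x < -1.0 / alpha, torch.zeros_like(x),
                           torch.where(x > 1.0 / alpha, torch.ones_like(x), middle))

    def quadratic_gate_grad(x, alpha=2.0):
        # Derivative g'(x): a triangular bump supported on [-1/alpha, 1/alpha].
        return torch.where(x.abs() <= 1.0 / alpha,
                           -alpha ** 2 * x.abs() + alpha,
                           torch.zeros_like(x))

    x = torch.linspace(-1.0, 1.0, 9)
    print(quadratic_gate(x))        # rises smoothly from 0 to 1 around the threshold
    print(quadratic_gate_grad(x))   # non-zero only inside [-1/alpha, 1/alpha]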

class braincog.base.strategy.surrogate.relu_like(*args, **kwargs)

Bases: Function

static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x, alpha)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

class braincog.base.strategy.surrogate.sigmoid(*args, **kwargs)

Bases: Function

Uses sigmoid as the surrogate gradient function. The corresponding primitive function is:

\[g(x) = \mathrm{sigmoid}(\alpha x) = \frac{1}{1+e^{-\alpha x}}\]

The function used in the backward pass is:

\[g'(x) = \alpha (1 - \mathrm{sigmoid}(\alpha x))\, \mathrm{sigmoid}(\alpha x)\]
static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x, alpha)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
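
The backward formula is just the chain rule applied to the primitive, which can be confirmed numerically (illustrative check only):

    import torch

    alpha = 1.0
    x = torch.randn(6, requires_grad=True)

    # Primitive: g(x) = sigmoid(alpha * x)
    torch.sigmoid(alpha * x).sum().backward()

    # Closed form from the formula above: alpha * (1 - sigmoid(alpha*x)) * sigmoid(alpha*x)
    s = torch.sigmoid(alpha * x.detach())
    print(torch.allclose(x.grad, alpha * (1 - s) * s))   # True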

class braincog.base.strategy.surrogate.stdp(*args, **kwargs)

Bases: Function

static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, inputs)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

class braincog.base.strategy.surrogate.straight_through_estimator(*args, **kwargs)

Bases: Function

Uses the straight-through estimator as the surrogate gradient function. http://arxiv.org/abs/1308.3432

static backward(ctx, grad_output)

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, inputs)

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
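
The straight-through estimator thresholds on the way forward and passes the incoming gradient through unchanged on the way back; a minimal re-implementation (illustration only, not the braincog source):

    import torch

    class STE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            return (x >= 0).float()    # hard threshold; non-differentiable on its own

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output         # pass the gradient straight through

    x = torch.randn(4, requires_grad=True)
    STE.apply(x).sum().backward()
    print(x.grad)                      # all ones: the upstream gradient, unchanged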

Module contents