About _C_neuron.ParametricLIF_hard_reset_fptt_with_grad #14
That should be code from an older version. For the second question, refer to the following tutorial:
Thank you very much for the answer. One more thing: does the SEW code only work with the old version? The README rolls the downloaded spikingjelly version back. I am not very familiar with C++/CUDA, so it would be great if the new version can be used :)
That tutorial uses Spiking ResNet, but it is easy to adapt to SEW ResNet, and the framework already provides the network definition code:
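For example, a minimal sketch of building a SEW ResNet from those definitions (assuming a spikingjelly version that ships them under spikingjelly.activation_based.model; the exact module path and keyword arguments may differ between releases):

import torch
from spikingjelly.activation_based import neuron, surrogate
from spikingjelly.activation_based.model import sew_resnet

net = sew_resnet.sew_resnet18(
    cnf='ADD',                            # element-wise connect function from the SEW paper
    spiking_neuron=neuron.IFNode,         # which spiking neuron to insert after each conv
    surrogate_function=surrogate.ATan(),  # extra kwargs are forwarded to the neuron constructor
)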
OK, thank you very much for your answer!
Hello, sorry to bother you again. Is there a way to return both the spikes of the current layer and the corresponding membrane potentials at the same time?
Since a certain version, the SJ framework supports returning both v_seq and spike_seq.
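For reference, a minimal sketch of recording both at once (assuming a recent spikingjelly version where multi-step neurons expose a store_v_seq switch; attribute names may differ between releases):

import torch
from spikingjelly.activation_based import neuron

lif = neuron.LIFNode(step_mode='m')   # multi-step mode
lif.store_v_seq = True                # also record the membrane potential at every step

x_seq = torch.rand(4, 2, 8)           # [T, N, features]
spike_seq = lif(x_seq)                # spikes for every time step
v_seq = lif.v_seq                     # membrane potentials for every time step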
def neuronal_charge(self, x: torch.Tensor):
    self.v_float_to_tensor(x)
    if self.decay_input:
        # the input is scaled by 1 / tau together with the leak term
        if self.v_reset is None or self.v_reset == 0.:
            self.v = self.neuronal_charge_decay_input_reset0(x, self.v, self.tau)
        else:
            self.v = self.neuronal_charge_decay_input(x, self.v, self.v_reset, self.tau)
    else:
        # the input is added directly; only the leak is scaled by 1 / tau
        if self.v_reset is None or self.v_reset == 0.:
            self.v = self.neuronal_charge_no_decay_input_reset0(x, self.v, self.tau)
        else:
            self.v = self.neuronal_charge_no_decay_input(x, self.v, self.v_reset, self.tau)

@staticmethod
@torch.jit.script
def neuronal_charge_decay_input_reset0(x: torch.Tensor, v: torch.Tensor, tau: float):
    # one LIF charge step with decayed input and v_reset == 0
    v = v + (x - v) / tau
    return v
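As a side note, this static method implements the LIF charge step H[t] = V[t-1] + (X[t] - V[t-1]) / tau for the v_reset = 0 case. A minimal numeric sketch in plain PyTorch (illustration only, not framework code):

import torch

tau = 2.0
v = torch.zeros(3)                 # membrane potential, starting at the reset value 0
x = torch.tensor([1.0, 0.5, 0.0])  # input current at this time step
v = v + (x - v) / tau              # one charge step: v becomes [0.5, 0.25, 0.0]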
Hello, while using your code in my own project I ran into the error below. My input shape is [5, 20, 3, 288, 288] and I am using the LIF model. I found the following during debugging:

Traceback (most recent call last):
File "/home/iccd/cv23-spiking/pytracking/ltr/trainers/base_trainer.py", line 70, in train
self.train_epoch()
File "/home/iccd/cv23-spiking/pytracking/ltr/trainers/ltr_trainer.py", line 80, in train_epoch
self.cycle_dataset(loader)
File "/home/iccd/cv23-spiking/pytracking/ltr/trainers/ltr_trainer.py", line 66, in cycle_dataset
loss.backward()
File "/home/iccd/miniconda3/envs/pytracking/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/iccd/miniconda3/envs/pytracking/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward
Variable._execution_engine.run_backward(
File "/home/iccd/miniconda3/envs/pytracking/lib/python3.8/site-packages/torch/autograd/function.py", line 89, in apply
return self._forward_cls.backward(self, *args) # type: ignore
File "/home/iccd/miniconda3/envs/pytracking/lib/python3.8/site-packages/spikingjelly-0.0.0.0.13-py3.8.egg/spikingjelly/activation_based/surrogate.py", line 638, in backward
return atan_backward(grad_output, ctx.saved_tensors[0], ctx.alpha)
RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)
nvrtc compilation failed:
#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)
template<typename T>
__device__ T maximum(T a, T b) {
return isnan(a) ? a : (a > b ? a : b);
}
template<typename T>
__device__ T minimum(T a, T b) {
return isnan(a) ? a : (a < b ? a : b);
}
extern "C" __global__
void func_1(float* t0, float v1, float* t2, float* aten_mul_flat) {
{
float v = __ldg(t2 + 18 * ((512 * blockIdx.x + threadIdx.x) / 18) + (512 * blockIdx.x + threadIdx.x) % 18);
float v_1 = __ldg(t0 + 18 * ((512 * blockIdx.x + threadIdx.x) / 18) + (512 * blockIdx.x + threadIdx.x) % 18);
aten_mul_flat[512 * blockIdx.x + threadIdx.x] = ((1.f / (v + 1.f)) * v1) * v_1;
}
}
In the new SJ framework, if the cupy backend is not used, the multi-step forward pass is implemented by calling the single-step forward repeatedly, and the single-step implementation is pure Python. Your error, however, is a CUDA compilation problem:
it is most likely caused by a custom CUDA extension introduced somewhere in your project whose CUDA code is broken.
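For reference, a minimal sketch of the pure-Python multi-step path described above (LIFNode, step_mode, and backend are real spikingjelly.activation_based options; the shapes here are made up, smaller than the [5, 20, 3, 288, 288] input in the report):

import torch
from spikingjelly.activation_based import neuron, functional

lif = neuron.LIFNode(step_mode='m', backend='torch')  # no cupy: loops the single-step forward in Python
x_seq = torch.rand(5, 2, 3, 8, 8)                     # [T, N, C, H, W]
y_seq = lif(x_seq)
functional.reset_net(lif)                             # clear neuron state before the next sequence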
But the first time-step loop runs without problems; the error only appears in the second loop. I will keep debugging. Thank you for your answer.
The SEW-ResNet18 network is suited to ImageNet; it is far too large for DVS Gesture, which is why the original paper designed a smaller network instead.
Oh, I see. Thanks for your prompt answer.
Hello, I have two questions that I hope you can help answer: