Nodes: When `traces_additive` is False, the spike trace (`self.x`) is filled with 1 at spike points. However, this is not always appropriate (for example, in Izhikevich 2007 this value is set to 0.1). I think the hard-coded 1 should be replaced by the `trace_scale` value, so users can configure it as well.
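A minimal sketch of what this could look like. The function and attribute names here (`update_trace`, the decay factor) are illustrative, not BindsNET's actual internals; the point is only that the value written at spike positions comes from `trace_scale` rather than a literal 1:

```python
import torch

def update_trace(x, spikes, trace_decay, trace_scale=1.0, traces_additive=False):
    """Decay the spike trace and register new spikes.

    When traces_additive is False, spiking entries are set to
    trace_scale (instead of a hard-coded 1), so models such as
    Izhikevich (2007) can use e.g. trace_scale=0.1.
    """
    x = x * trace_decay  # exponential decay of the existing trace
    if traces_additive:
        # additive mode: each spike adds trace_scale to the trace
        x = x + trace_scale * spikes.float()
    else:
        # non-additive mode: spiking entries are reset to trace_scale
        x = torch.where(spikes.bool(), torch.full_like(x, trace_scale), x)
    return x

# Example: a single spike with trace_scale = 0.1 writes 0.1, not 1.
x = torch.zeros(3)
spikes = torch.tensor([0, 1, 0])
x = update_trace(x, spikes, trace_decay=0.95, trace_scale=0.1)
```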
Nodes: typo: self.tau
Learning: the default value for `nu` is set to `None`, which makes the `PostPre` learning rule effectively a no-op. I don't think this is a good default for the argument; it should either be replaced by a sensible float or be made a mandatory argument.
Network: the format of the `inputs` argument of the `run` method is uncommon. Since input in most deep learning frameworks (mainly TensorFlow and PyTorch) is a plain tensor, this can be confusing, especially for new users. Furthermore, since there is no explicit error pointing out the problem, it can be a hard-to-find and time-consuming mistake. Adding an assert that checks the type of `inputs` would help here.
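A sketch of such a check, assuming `run` expects a dict mapping input-layer names to tensors (the helper name `check_run_inputs` is hypothetical):

```python
import torch

def check_run_inputs(inputs):
    """Fail fast with a clear message if `inputs` is not a dict
    mapping layer names to torch.Tensor."""
    if not isinstance(inputs, dict):
        raise TypeError(
            f"'inputs' must be a dict of {{layer_name: torch.Tensor}}, "
            f"got {type(inputs).__name__}"
        )
    for name, tensor in inputs.items():
        if not isinstance(tensor, torch.Tensor):
            raise TypeError(
                f"inputs['{name}'] must be a torch.Tensor, "
                f"got {type(tensor).__name__}"
            )

# Passing a bare tensor (the common mistake) now produces an
# explicit error instead of failing silently or cryptically:
try:
    check_run_inputs(torch.rand(100, 10))
except TypeError as e:
    print(e)
```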
Network: A progress bar can be useful in very long runs. A custom progress bar (including some information about the run and the network) could be added later.
Thanks a lot.
-Nodes - traces_additive: Makes sense. Agreed.
-Nu default value: this is very model-dependent; it's not obvious to compare with TensorFlow/Keras here. There is no sensible global default.
-Network input spikes format assert: yes, nice.
-Progress bar: no, the call to .run() is usually very short (less than 0.1 s in most cases), and it would add another dependency to BindsNET. It's the user's duty to add one if needed.
-torch.ger: yes, thanks for the info!
-typos: Thanks again!
I think I need to elaborate on my point about the default value of `nu`. I fully agree that the value of `nu` is highly model-dependent; nevertheless, I still believe `None` is a bad default and should be changed somehow. My point is that most users do not change default values unless they have a specific need, so many may add a learning rule to their network without realizing the network is not changing at all. The current default is therefore misleading and not user-friendly. I would go further: not only is zero a bad default for `nu`, users should arguably be forbidden from setting it to 0 (an assert could check that `nu` is non-zero). Why would someone add a learning rule and then set its learning rate to 0? The rule would literally do nothing in that case. As for the default, even though the right `nu` depends on the problem, a value taken from a published source would be a fine choice. Alternatively, it could at least be a mandatory argument, so users have to set it themselves. Finally, if you disagree with all of these suggestions, there should at least be a warning raised whenever this happens, informing the user that their learning rule is not working and they are effectively wasting their time.
About the progress bar: it actually occurred to me while I was training on a very long time series. Still, I agree this does not happen that often. Do you think it is worth writing a custom progress bar instead of using tqdm, or is it not a good idea in general?
I agree that using STDP with the default value should raise a warning telling the user that `nu` is zero and, as a result, there are no weight changes. The same goes for `nu = 0` or `nu = [0, 0]`. However, Simon is correct that zeroing one side of `nu` is acceptable behavior, for example `nu = [0, 0.2]` when you want to use only one part of the STDP rule. In any case, these checks and warnings need to live in the initialization of the object, not in the compute part.
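A sketch of such an init-time check, under the assumptions discussed above (warn on `None`, `0`, or `[0, 0]`; stay silent on one-sided settings like `[0, 0.2]`). The helper name `validate_nu` is illustrative:

```python
import warnings

def validate_nu(nu):
    """Warn at construction time if the learning rule can never
    change weights: nu is None, 0, or a pair of zeros.
    A one-sided zero (e.g. [0, 0.2]) is legitimate and passes
    silently."""
    if nu is None:
        warnings.warn("nu is None: the learning rule will not update weights.")
        return (0.0, 0.0)
    # Accept a scalar (applied to both pre and post updates) or a pair.
    pair = (nu, nu) if isinstance(nu, (int, float)) else tuple(nu)
    if all(v == 0 for v in pair):
        warnings.warn("Both learning rates are zero: no weight changes will occur.")
    return pair

validate_nu([0, 0.2])   # silent: one-sided STDP is fine
validate_nu(None)       # warns: rule is a no-op
```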
About tqdm at the network level: adding it by default could complicate things for users who want to nest tqdm at the epoch level, training level, etc. tqdm is a nice addition that needs to be used carefully and aesthetically, depending on the task and user preferences.
Please fix the issues above and I will accept the PR.
A few suggestions:
Learning: replace `torch.ger` with `torch.outer` (https://pytorch.org/docs/stable/generated/torch.ger.html)
models: typo: `inhibitory`
@Hananel-Hazan
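For reference, `torch.ger` is a deprecated alias of `torch.outer`; both compute the outer product of two 1-D tensors, so this is a drop-in rename. A small example in the STDP spirit (the `post`/`pre` values are made up for illustration):

```python
import torch

post = torch.tensor([1.0, 2.0])       # post-synaptic traces
pre = torch.tensor([3.0, 4.0, 5.0])   # pre-synaptic traces

# torch.ger is deprecated; torch.outer is the drop-in replacement.
# dw[i, j] = post[i] * pre[j]
dw = torch.outer(post, pre)
print(dw)
# [[3., 4., 5.],
#  [6., 8., 10.]]
```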