Add amplitude_scaling implementation #1485
alejoe91 wants to merge 7 commits into SpikeInterface:main_tmp from
Conversation
@TomBugnon here is a first implementation of the amplitude scalings (note that we are moving things to the …). Currently, the implementation is naive, since it doesn't account for spike collisions, but it does account for multiple channels with a user-defined sparsity.
```python
if sample_index - cut_out_before < 0:
    local_waveform = traces_with_margin[:cut_out_end, sparse_indices]
    template = template[cut_out_before - sample_index :]
elif sample_index + cut_out_after > end_frame + right:
    local_waveform = traces_with_margin[cut_out_start:, sparse_indices]
    template = template[: -(sample_index + cut_out_after - end_frame)]
else:
    local_waveform = traces_with_margin[cut_out_start:cut_out_end, sparse_indices]
```
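As a sketch of the same edge-handling idea (the helper name `trim_at_edges` and the 1-D traces are illustrative assumptions, not the PR's actual code), the template is trimmed so it always matches the local waveform's length near segment borders:

```python
import numpy as np

def trim_at_edges(traces, sample_index, cut_out_before, cut_out_after, template):
    """Trim the template when the cut-out around sample_index would run
    past the start or the end of the traces, so both stay the same length."""
    n_samples = traces.shape[0]
    start = sample_index - cut_out_before
    end = sample_index + cut_out_after
    if start < 0:
        # spike too close to the start: drop the leading template samples
        return traces[:end], template[-start:]
    elif end > n_samples:
        # spike too close to the end: drop the trailing template samples
        return traces[start:], template[: -(end - n_samples)]
    else:
        return traces[start:end], template
```

The key invariant is simply that waveform and template end up the same length, which the `assert` in the PR's loop then checks.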
Why this, when we have the margin?
This is a smaller cut-out to get a local waveform.
Yes, but doesn't the margin always ensure the correct length?
When you call `get_chunk_with_margin(add_zeros=True)`.
```python
i0 = np.searchsorted(spikes["segment_ind"], segment_index)
i1 = np.searchsorted(spikes["segment_ind"], segment_index + 1)
spikes_in_segment = spikes[i0:i1]
```
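Since `segment_ind` is sorted, `np.searchsorted` yields the segment's slice boundaries in logarithmic time; a toy illustration (the structured dtype here is an assumption for the example, not the PR's exact layout):

```python
import numpy as np

# toy structured spike array; "segment_ind" must be sorted for searchsorted to work
spikes = np.zeros(6, dtype=[("segment_ind", "int64"), ("sample_ind", "int64")])
spikes["segment_ind"] = [0, 0, 1, 1, 1, 2]

segment_index = 1
i0 = np.searchsorted(spikes["segment_ind"], segment_index)      # first spike of segment 1
i1 = np.searchsorted(spikes["segment_ind"], segment_index + 1)  # one past the last
spikes_in_segment = spikes[i0:i1]
```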
The segment slicing could be done in worker init once.
But the workers could span multiple segments, no?
Yes, but the first `searchsorted` can be done once and for all.
```python
assert template.shape == local_waveform.shape
local_waveforms.append(local_waveform)
templates.append(template)
linregress_res = linregress(template.flatten(), local_waveform.flatten())
```
Doesn't using `linregress` make it too slow?
Why not a simple scalar product, to speed it up?
No, it's actually quite fast :) happy to discuss options
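For reference, the "simple scalar product" alternative mentioned above is the least-squares scaling without an intercept; a quick sketch on synthetic data (not the PR's code) comparing it with the `linregress` slope:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
template = rng.normal(size=200)
waveform = 0.8 * template + 0.05 * rng.normal(size=200)  # scaled template plus noise

# linregress slope (also fits an intercept)
slope = linregress(template, waveform).slope

# closed-form least squares without intercept: argmin_a ||waveform - a * template||^2
a = np.dot(template, waveform) / np.dot(template, template)
```

Both estimators recover a scaling near 0.8 here; the scalar product trades the intercept term for a single dot-product ratio.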
Cool!!!
Thanks, comrade ;) let's discuss tomorrow
Just trying to catch up here. What is the difference between such amplitudes and the ones provided via the matching engine? Is it different?
The only difference is that this one makes sure that all spikes are matched. Would there be an option for matching engines to not find new spikes and to assign a weight to all existing spikes?
Not at the moment, but anyway this is likely to be slower than using the engines, because of all the burden of handling collisions. I'm just not really getting the use case for such a function.
@yger of course, this PR does not take care of collisions.
Implement `amplitude_scalings` extension to compute the scaling of each spike with respect to its template. Currently using the `slope` of the `scipy.stats.linregress` function. A possible extension would be, instead of using spike-by-spike regression, to gather all spikes in a neighborhood and fit colliding spikes jointly.
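A minimal sketch of the per-spike scaling described above (the helper name `amplitude_scaling` is hypothetical; the real extension lives in SpikeInterface's postprocessing machinery):

```python
import numpy as np
from scipy.stats import linregress

def amplitude_scaling(local_waveform, template):
    """Scaling of one spike's waveform w.r.t. its template, taken as the
    slope of the regression template -> waveform over all samples/channels."""
    return linregress(template.flatten(), local_waveform.flatten()).slope

# toy example: a noiseless spike that is exactly 1.5x its template
template = np.sin(np.linspace(0, 2 * np.pi, 60))[:, None]  # (samples, channels)
waveform = 1.5 * template
```

With noiseless data the slope recovers the scaling exactly; on real traces the noise and nearby spikes bias it, which is what the neighborhood-fitting extension would address.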