Add amplitude_scaling implementation#1485

Closed
alejoe91 wants to merge 7 commits into SpikeInterface:main_tmp from alejoe91:amplitude_scaling_main

Conversation


@alejoe91 alejoe91 commented Apr 4, 2023

Implement amplitude_scalings extension to compute scaling of each spike with respect to the template.

Currently the scaling is the slope returned by scipy.stats.linregress.

A possible extension would be, instead of spike-by-spike regression, to gather all spikes in a neighborhood and fit colliding spikes jointly.

@alejoe91 alejoe91 added the postprocessing Related to postprocessing module label Apr 4, 2023

alejoe91 commented Apr 4, 2023

@TomBugnon here is a first implementation of the amplitude scalings (note that we are moving things to the main branch and the src/spikeinterface package organization)

Currently, the implementation is naive, since it doesn't account for spike collisions, but it does account for multiple channels within a user-defined sparsity.
Nevertheless, it should allow us to test whether the amplitude-cutoff metric depends on scaled vs. absolute amplitudes.

Comment on lines +289 to +296
if sample_index - cut_out_before < 0:
local_waveform = traces_with_margin[:cut_out_end, sparse_indices]
template = template[cut_out_before - sample_index :]
elif sample_index + cut_out_after > end_frame + right:
local_waveform = traces_with_margin[cut_out_start:, sparse_indices]
template = template[: -(sample_index + cut_out_after - end_frame)]
else:
local_waveform = traces_with_margin[cut_out_start:cut_out_end, sparse_indices]
Member

Why this when we have the margin?

Member Author

This is a smaller cut out to get a local waveform

Member

Yes, but the margin always ensures the correct length, no?
When you use get_chunk_with_margin(add_zeros=True).

Comment on lines +255 to +257
i0 = np.searchsorted(spikes["segment_ind"], segment_index)
i1 = np.searchsorted(spikes["segment_ind"], segment_index + 1)
spikes_in_segment = spikes[i0:i1]
Member

duplicate

Comment on lines +251 to +253
i0 = np.searchsorted(spikes["segment_ind"], segment_index)
i1 = np.searchsorted(spikes["segment_ind"], segment_index + 1)
spikes_in_segment = spikes[i0:i1]
Member

The segment slicing could be done in worker init once.

Member Author

But the workers could span multiple segments, no?

Member

Yes, but the first searchsorted can be done once and for all.

assert template.shape == local_waveform.shape
local_waveforms.append(local_waveform)
templates.append(template)
linregress_res = linregress(template.flatten(), local_waveform.flatten())
Member

Doesn't using linregress make it too slow?
Why not a simple scalar product to speed it up?

Member Author

No, it's actually quite fast :) Happy to discuss options.

@samuelgarcia
Member

Cool!!!
Did some comments on the fly, will have a deeper look later.


alejoe91 commented Apr 4, 2023

> Cool!!! Did some comments on the fly, will have a deeper look later.

Thanks Camarade ;) let's discuss tomorrow


yger commented Apr 5, 2023

Just trying to catch up here. What is the difference between such amplitudes and the ones provided via the matching engine? Is it different?


alejoe91 commented Apr 5, 2023

The only difference is that this one makes sure that all spikes are matched. Would there be an option for matching engines to not find new spikes and instead assign a weight to all existing spikes?


yger commented Apr 5, 2023

Not at the moment, but anyway this is likely to be slower using the engines, because of all the burden of handling collisions. I'm just not really getting the use case for such a function.
If you have a sorting, it means that you got the spikes via the matching engine, am I right? Except in the case of GT, where maybe some spikes are there but not found via matching because of collisions. But then, the amplitudes of these spikes are not likely to be close to the ones of the templates.

@samuelgarcia
Member

@yger of course this PR does not take care of collisions.
The idea is to estimate a distribution, so even with 10% of spikes in collision the distribution will be easy to estimate.
We want some kind of amplitude-scaling cut-off metric.
This is the postprocessing module, so it is totally agnostic to the sorter and the template-matching engine behind it.
We want metrics that are sorter independent.

@alejoe91 alejoe91 closed this Apr 5, 2023
@alejoe91 alejoe91 deleted the amplitude_scaling_main branch April 7, 2023 16:53

Labels

postprocessing Related to postprocessing module


3 participants