YaRN : correction to GPT-NeoX implementation#4093
Conversation
Is there some way to compare the results with a reference implementation? I'm confused as well at this point.
As far as I can see, the only reference implementation of GPT-NeoX is the pre-GGUF implementation in
I believe he means a reference implementation of YaRN for GPT-NeoX... but there is none. There is only one for LLaMA. |
Is this still relevant? I think RoPE is computed correctly across all backends now.
The changes in this PR only affect RoPE when using the YaRN scaling options. I believe one should see a perplexity difference between this PR and master while using YaRN with Falcon or any other model using GPT-NeoX RoPE.
Superseded by #7617
At one point I was struggling to understand what the Metal kernel was doing with GPT-NeoX RoPE, and I think I got it wrong. I got halfway there: the comment makes it fairly obvious what is going on. But the rotation amount should be an integer and should not be multiplied by inv_ndims; inv_ndims should only be part of theta.
@jquesnelle does this seem like the right thing to do?
I learned from my mistakes, this is running on ggml-ci so I don't have to worry about error-prone manual testing across several machines.