
Conversation

@ysiraichi
Collaborator

This PR modifies the PyTorch/XLA benchmarking scripts so that the benchmarks run with the same matmul precision we set for PyTorch.

Previously, we only called torch.set_float32_matmul_precision('high'), leaving the PyTorch/XLA precision at its default. Now, we also call torch_xla._XLAC._xla_set_mat_mul_precision('high').

Additionally, this PR replaces _xla_set_use_full_mat_mul_precision with _xla_set_mat_mul_precision, making the precision setting more explicit and directly translatable between PyTorch and PyTorch/XLA.
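
For reference, a minimal sketch of what the updated benchmark setup amounts to, assuming the private torch_xla._XLAC._xla_set_mat_mul_precision binding named in this PR keeps the signature shown here:

```python
import torch
import torch_xla

# Set the float32 matmul precision on the PyTorch side, as the
# benchmarking scripts already did.
torch.set_float32_matmul_precision('high')

# Mirror the same setting on the PyTorch/XLA side instead of leaving it
# at the default (previously only toggled via the coarser
# _xla_set_use_full_mat_mul_precision flag).
torch_xla._XLAC._xla_set_mat_mul_precision('high')
```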

cc @miladm @JackCaoG @zpcore
