Fix pre-autograd transforms not getting persisted during xnnpack export (#9118)
Conversation
CI: no failures as of commit 9978148 with merge base cf8ce89.
kimishpatel left a comment:
Let's keep exported_program as the only source of truth.
extension/llm/export/builder.py (outdated):

    # Prior to export, persist the changes to the pre autograd
    # graph module back to the source-of-truth ExportedProgram.
    self.export(self.pre_autograd_graph_module)
I think we should keep exported_program up to date. So we shouldn't do this here, but rather wherever we extract graph_module and apply any transformations. That means we should not keep self.pre_autograd_graph_module at all; the only source of truth would be exported_program.
Force-pushed from f5506bf to 9978148.
kimishpatel left a comment:
Looks good. We discussed following up to answer "what should be the source of truth, graph_module or EP?"
Summary
After moving to `to_edge_transform_and_lower` for the XNNPack export route in #8624, we were discarding all of the transforms made to the pre-autograd graph module stored in `LLMEdgeManager`, since the new `to_edge_transform_and_lower` took in an `ExportedProgram` instead of an `nn.Module` as an argument. To solve this, we re-run export for training right before each `LLMEdgeManager` API that runs the full non-autograd-safe `torch.export()`.

Test plan
Tested manually on Llama3.2 1B export:
Before ops (contains permute_copy):
After ops (no permute_copy):