Default the `OpSum` coefficient type to `Float64`; require users to specify `OpSum{ComplexF64}` if they want complex coefficients.
Check and improve compatibility with the feature set of the `OpSum`-to-`MPO` conversion in ITensors: support multi-site operators, ensure sorting comparisons work and are implemented consistently with the ITensors implementation, and perform all relevant sorting with respect to the traversal order of the tree instead of site labels, to ensure compatibility with an arbitrary `vertextype`.
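The traversal-order sorting can be sketched language-agnostically. Below is a minimal Python illustration (the names `dfs_order` and `sort_term_sites` are hypothetical, not part of the package): each vertex gets a position from a depth-first traversal of the tree, and a term's sites are sorted by that position rather than by the site labels themselves, so vertices of any type compare consistently.

```python
def dfs_order(tree, root):
    """tree: dict mapping vertex -> list of children.
    Returns a dict mapping each vertex to its depth-first position."""
    order, stack = {}, [root]
    while stack:
        v = stack.pop()
        order[v] = len(order)
        # reversed so children are visited left-to-right
        stack.extend(reversed(tree.get(v, [])))
    return order

def sort_term_sites(sites, order):
    """Sort the sites of a term by tree-traversal position, not label."""
    return sorted(sites, key=order.__getitem__)
```

Because the comparison key is the traversal position, this works unchanged when vertices are tuples, strings, or any other hashable label type.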
Copy the ITensors functions used in `ttn_svd`, such as `ITensors.determineValType`, `ITensors.posInLink!`, and `ITensors.MatElem`, to ITensorNetworks.jl and update their style. Functions like `ITensors.which_op`, `ITensors.params`, `ITensors.site`, and `ITensors.argument` that come from the `Ops` module related to `OpSum` shouldn't be copied over.
Split off logic for building symbolic representation of TTNO into a separate function.
Move `calc_qn` outside of `ttn_svd`.
Use sparse matrix/array data structures or metagraphs for the symbolic representation of the TTNO (for example, `NDTensors.SparseArrayDOKs` may be useful for that).
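As a rough illustration of the dictionary-of-keys (DOK) idea, independent of the actual `NDTensors.SparseArrayDOKs` API (which may differ), here is a minimal Python sketch: entries of the symbolic TTNO block are stored in a dict keyed by virtual-channel index pairs, so only nonzero symbolic entries occupy memory.

```python
class SymbolicBlockDOK:
    """Minimal dictionary-of-keys sparse store for symbolic TTNO entries,
    keyed by (row_channel, col_channel) tuples. Illustrative only."""

    def __init__(self):
        self.data = {}  # (row, col) -> coefficient (or coefficient * op label)

    def __setitem__(self, key, value):
        if value == 0:
            self.data.pop(key, None)  # keep the store truly sparse
        else:
            self.data[key] = value

    def __getitem__(self, key):
        return self.data.get(key, 0)  # implicit zero for absent entries

    def nnz(self):
        return len(self.data)
```

The advantage over a dense array is that insertion and lookup stay O(1) while memory scales with the number of stored terms, which fits the very sparse structure of symbolic Hamiltonian blocks.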
Split off logic of grouping terms by QNs.
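Once split off, the grouping step is essentially a bucketing operation; a minimal language-agnostic sketch (with a hypothetical `qn_of_term` callback standing in for the package's QN computation):

```python
from collections import defaultdict

def group_terms_by_qn(terms, qn_of_term):
    """Bucket terms by their quantum-number sector.
    qn_of_term: callable mapping a term to its QN key."""
    groups = defaultdict(list)
    for term in terms:
        groups[qn_of_term(term)].append(term)
    return dict(groups)
```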
Factor out the logic for building link indices, making use of `IndsNetwork`.
Refactor code logic to first work without merged blocks/QNs and then optionally merge and compress as needed.
Support other compression schemes, like rank-revealing sparse QR.
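To illustrate the rank-revealing idea (a generic dense sketch, not the sparse implementation this item calls for), column-pivoted Gram-Schmidt estimates the numerical rank by repeatedly deflating the remaining column of largest norm and stopping once all residual columns fall below a tolerance:

```python
import numpy as np

def numerical_rank_rrqr(A, tol=1e-10):
    """Estimate numerical rank via column-pivoted Gram-Schmidt,
    the mechanism behind rank-revealing QR."""
    W = np.array(A, dtype=float)
    m, n = W.shape
    rank = 0
    for k in range(min(m, n)):
        norms = np.linalg.norm(W[:, k:], axis=0)
        j = k + int(np.argmax(norms))
        if norms[j - k] <= tol:
            break  # remaining columns are numerically zero
        W[:, [k, j]] = W[:, [j, k]]      # pivot the largest column forward
        q = W[:, k] / norms[j - k]
        W -= np.outer(q, q @ W)          # deflate the chosen direction
        rank += 1
    return rank
```

A sparse variant of this pivoting strategy would let the compression step find the true bond rank without densifying the symbolic blocks.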
Implement sequential compression to improve performance, as opposed to the current parallel approach, which effectively compresses each link index independently.
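A minimal dense-tensor sketch of the sequential idea (assuming three-index tensors with shape `(left, phys, right)`; this is illustrative only, not the symbolic sparse representation used in `ttn_svd`): each tensor is QR-split and the triangular factor is absorbed into its right neighbor, so every truncation sees the effect of all previous ones instead of treating each link in isolation.

```python
import numpy as np

def compress_sweep(tensors, tol=1e-12):
    """Sequentially compress bond dimensions left-to-right.
    tensors: list of arrays with shape (left, phys, right)."""
    out = [t.copy() for t in tensors]
    for i in range(len(out) - 1):
        l, p, r = out[i].shape
        Q, R = np.linalg.qr(out[i].reshape(l * p, r))
        keep = np.linalg.norm(R, axis=1) > tol  # drop numerically zero rows
        out[i] = Q[:, keep].reshape(l, p, -1)
        # absorb R into the right neighbor before compressing the next link
        out[i + 1] = np.tensordot(R[keep], out[i + 1], axes=(1, 0))
    return out
```

Because the remainder `R` is carried forward, redundancy introduced at one link can be removed at the next, which a fully parallel per-link compression cannot exploit.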
Allow compression to take into account operator information (perhaps by preprocessing by expanding in an orthonormal operator basis), not just coefficients.
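For a single two-level site, one orthonormal operator basis (with respect to the Hilbert-Schmidt inner product) is the normalized Pauli basis; a sketch of the expansion step, after which compression can act on coefficient vectors rather than raw operators:

```python
import numpy as np

# Pauli matrices, normalized so that tr(P_i† P_j) = δ_ij.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [P / np.sqrt(2) for P in (I, X, Y, Z)]

def pauli_coefficients(op):
    """Coefficients c_i = tr(P_i† op) in the orthonormal Pauli basis."""
    return np.array([np.trace(P.conj().T @ op) for P in basis])

def reconstruct(coeffs):
    """Invert the expansion: op = sum_i c_i P_i."""
    return sum(c * P for c, P in zip(coeffs, basis))
```

Orthonormality makes the coefficient map an isometry, so truncating small coefficients bounds the operator-norm error of the compressed term in a controlled way.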
Handle starting and ending blocks in a more elegant way, for example as part of a sparse matrix.
Handle vertices without any site indices (internal vertices, such as for hierarchical TTN).
Make sure the fermion signs of the tensors being constructed are correct and work with the automatic fermion sign system.
Followup to #116:
Replace `MatElem` and `QNArrElem` with `FillArrays.OneElement`.
Rename `determine_val_type` to `coefficient_type`.