2.3.0 / 2018-12-01
This release primarily focuses on expanding the suite of available path technologies to provide better optimization characteristics for 4-20 tensors while decreasing the time to find paths for 50-200+ tensors. See Path Overview for more information.
- (GH#60) A new `greedy` implementation has been added which is up to two orders of magnitude faster for 200 tensors.
- (GH#73) Adds a new `branch` path that uses `greedy` ideas to prune the `optimal` exploration space to provide a better path than `greedy`.
- (GH#73) Adds a new `auto` keyword to the `path` option. This keyword automatically chooses the best path technology that takes under 1ms to execute.
- (GH#61) The `path` keyword has been changed to `optimize` to more closely match NumPy; `path` will be deprecated in the future (see the sketch after this list).
- (GH#61) `opt_einsum.contract_path()` now returns an `opt_einsum.contract.PathInfo` object that can be queried for the scaling, flops, and intermediates of the path. The print representation of this object is identical to before.
- (GH#61) The default `memory_limit` is now unlimited, based on community feedback.
- (GH#66) The Torch backend will now use `tensordot` when using a version of Torch which includes this functionality.
- (GH#68) Indices can now be any hashable object when provided in the “Interleaved Input” syntax.
- (GH#74) Allows the default transpose operation to be overridden to take advantage of more advanced tensor transpose libraries.
- (GH#73) The `optimal` path is now significantly faster.
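A minimal sketch of the renamed keyword and the new path machinery, shown together. The `opt_cost` attribute name on `PathInfo` is taken from current documentation and is an assumption for this release; everything else uses the API described above.

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(8, 8, 8)
b = np.random.rand(8, 8, 8)

# 'optimize' replaces the old 'path' keyword; 'auto' automatically
# chooses a path technology that takes under 1ms to execute
c = oe.contract('abc,bcd->ad', a, b, optimize='auto')

# contract_path returns (path, PathInfo); the PathInfo object can be
# queried directly rather than only printed
path, info = oe.contract_path('abc,bcd->ad', a, b, optimize='greedy')
print(info)           # print representation is identical to before
print(info.opt_cost)  # assumed attribute: flop count of the chosen path

# interleaved input, now accepting any hashable objects as indices
d = oe.contract(a, ('i', 'j', 'k'), b, ('j', 'k', 'l'), ('i', 'l'))
```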
2.2.0 / 2018-07-29
2.1.3 / 2018-08-23
- Fixes unicode issue for large numbers of tensors in Python 2.7.
- Fixes unicode install bug in README.md.
2.1.0 / 2018-08-15
opt_einsum continues to improve its support for backends beyond NumPy, now including PyTorch.
We have also published the opt_einsum package in the Journal of Open Source Software. If you use this package in your work, please consider citing us!
- PyTorch backend support (see the sketch after this list).
- Tensorflow eager-mode execution backend support
- Intermediate tensordot-like expressions are now ordered to avoid transposes.
- CI now uses a conda backend to better support GPU and tensor libraries.
- Now accepts arbitrary unicode indices rather than a subset.
- New `auto` path option which switches between `optimal` and `greedy` at four tensors.
- Fixed issue where broadcast indices were incorrectly locked out of tensordot-like evaluations even after their dimension was broadcast.
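A small sketch of backend dispatch with the new PyTorch support. It assumes PyTorch is installed, and that NumPy inputs are converted for execution and converted back on return, as the backend machinery is designed to do:

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(16, 16)
b = np.random.rand(16, 16)

# default NumPy execution
c_np = oe.contract('ij,jk->ik', a, b)

# same contraction executed by PyTorch via the backend keyword
c_torch = oe.contract('ij,jk->ik', a, b, backend='torch')

assert np.allclose(c_np, c_torch)
```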
2.0.1 / 2018-06-28
- Allows unlimited Unicode indices.
- Adds a Journal of Open-Source Software paper.
- Minor documentation improvements.
2.0.0 / 2018-05-17
opt_einsum is a powerful tensor contraction order optimizer for NumPy and related ecosystems.
- Expressions can be precompiled so that the expression optimization need not happen multiple times (see the sketch after this list).
- The greedy order optimization algorithm has been tuned to handle hundreds of tensors in several seconds.
- Input indices can now be unicode so that expressions can have many thousands of indices.
- GPU and distributed computing backends such as Dask, TensorFlow, CuPy, Theano, and Sparse have been added.
- An error affecting cases where opt_einsum mistook broadcasting operations for matrix multiply has been fixed.
- Most error messages are now more expressive.
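A minimal sketch of the precompilation feature: `contract_expression` finds the contraction order once, from shapes alone, and the returned expression is reused for any arrays of those shapes:

```python
import numpy as np
import opt_einsum as oe

# optimize the contraction order once, using only the shapes
expr = oe.contract_expression('ab,bc,cd->ad', (10, 10), (10, 10), (10, 10))

# repeated calls skip the path search entirely
for _ in range(3):
    x, y, z = (np.random.rand(10, 10) for _ in range(3))
    out = expr(x, y, z)

print(out.shape)  # (10, 10)
```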
1.0.0 / 2016-10-14
Einsum is a very powerful function for contracting tensors of arbitrary dimension and index. However, it is only optimized to contract two terms at a time, resulting in non-optimal scaling for contractions with many terms. Opt_einsum aims to fix this by optimizing the contraction order, which can lead to arbitrarily large speedups at the cost of additional intermediate tensors.
Opt_einsum's optimization is also integrated into the np.einsum function as of NumPy v1.12 via its `optimize` keyword, as sketched below.
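A minimal sketch of why contraction order matters, using a three-term chain where one pairwise order is far cheaper than the other:

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(50, 60)
b = np.random.rand(60, 70)
c = np.random.rand(70, 5)

# un-optimized einsum contracts everything in one pass, scaling with
# the product of all dimensions involved
naive = np.einsum('ij,jk,kl->il', a, b, c)

# opt_einsum contracts pairwise in a cheap order: doing (b, c) first costs
# 60*70*5 + 50*60*5 multiplications versus 50*60*70 + 50*70*5 the other way
fast = oe.contract('ij,jk,kl->il', a, b, c)

# the same optimization is available inside NumPy v1.12+
also_fast = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)

assert np.allclose(naive, fast) and np.allclose(naive, also_fast)
```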