Changelog
2.3.0 / 2018-12-01
This release primarily focuses on expanding the suite of available path technologies to provide better optimization characteristics for 4-20 tensors while decreasing the time to find paths for 50-200+ tensors. See Path Overview for more information.
New Features
- (GH#60) A new `greedy` implementation has been added which is up to two orders of magnitude faster for 200 tensors.
- (GH#73) Adds a new `branch` path that uses `greedy` ideas to prune the `optimal` exploration space to provide a better path than `greedy` at sub-`optimal` cost.
- (GH#73) Adds a new `auto` keyword to the `opt_einsum.contract()` `path` option. This keyword automatically chooses the best path technology that takes under 1ms to execute.
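As a minimal sketch of choosing among the new path technologies (the operand shapes below are arbitrary examples, not from the release notes):

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(4, 5)
b = np.random.rand(5, 6)
c = np.random.rand(6, 7)

# "auto" picks the path technology expected to run in under ~1 ms;
# "greedy" and "optimal" can also be requested explicitly.
out = oe.contract("ij,jk,kl->il", a, b, c, optimize="auto")
```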
Enhancements
- (GH#61) The `opt_einsum.contract()` `path` keyword has been changed to `optimize` to more closely match NumPy. `path` will be deprecated in the future.
- (GH#61) `opt_einsum.contract_path()` now returns an `opt_einsum.contract.PathInfo()` object that can be queried for the scaling, flops, and intermediates of the path. The print representation of this object is identical to before.
- (GH#61) The default `memory_limit` is now unlimited, based on community feedback.
- (GH#66) The Torch backend will now use `tensordot` when using a version of Torch which includes this functionality.
- (GH#68) Indices can now be any hashable object when provided in the "Interleaved Input" syntax.
- (GH#74) Allows the default transpose operation to be overridden to take advantage of more advanced tensor transpose libraries.
- (GH#73) The `optimal` path is now significantly faster.
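For illustration, a small sketch of inspecting the path information returned by `opt_einsum.contract_path()` (shapes and subscripts here are arbitrary examples):

```python
import numpy as np
import opt_einsum as oe

arrays = [np.random.rand(8, 8) for _ in range(4)]

# contract_path returns the contraction order plus an info object
# summarizing the scaling, flop counts, and intermediates of the path.
path, info = oe.contract_path("ab,bc,cd,de->ae", *arrays)
print(info)  # human-readable report, same format as before
```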
Bug fixes
- (GH#72) Fixes the “Interleaved Input” syntax and adds documentation.
2.1.3 / 2018-8-23
Bug fixes
- Fixes unicode issue for large numbers of tensors in Python 2.7.
- Fixes unicode install bug in README.md.
2.1.0 / 2018-8-15
`opt_einsum` continues to improve its support for additional backends beyond NumPy, now including PyTorch.
We have also published the opt_einsum package in the Journal of Open Source Software. If you use this package in your work, please consider citing us!
New features
- PyTorch backend support
- Tensorflow eager-mode execution backend support
Enhancements
- Intermediate tensordot-like expressions are now ordered to avoid transposes.
- CI now uses conda backend to better support GPU and tensor libraries.
- Now accepts arbitrary unicode indices rather than a subset.
- New auto path option which switches between optimal and greedy at four tensors.
Bug fixes
- Fixed issue where broadcast indices were incorrectly locked out of tensordot-like evaluations even after their dimension was broadcast.
2.0.1 / 2018-6-28
New Features
- Allows unlimited Unicode indices.
- Adds a Journal of Open-Source Software paper.
- Minor documentation improvements.
2.0.0 / 2018-5-17
`opt_einsum` is a powerful tensor contraction order optimizer for NumPy and related ecosystems.
New Features
- Expressions can be precompiled so that the expression optimization need not happen multiple times.
- The greedy order optimization algorithm has been tuned to be able to handle hundreds of tensors in several seconds.
- Input indices can now be unicode so that expressions can have many thousands of indices.
- GPU and distributed computing backends such as Dask, TensorFlow, CuPy, Theano, and Sparse have been added.
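The precompiled-expression feature above can be sketched with `opt_einsum.contract_expression()` (the subscripts and shapes below are illustrative, not from the release notes):

```python
import numpy as np
import opt_einsum as oe

# Build the expression once; the path optimization is not repeated per call.
expr = oe.contract_expression("ij,jk->ik", (50, 50), (50, 50))

a = np.random.rand(50, 50)
b = np.random.rand(50, 50)
out = expr(a, b)   # reuses the precomputed contraction order
out2 = expr(b, a)  # same expression, different operands
```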
Bug Fixes
- An error affecting cases where opt_einsum mistook broadcasting operations for matrix multiply has been fixed.
- Most error messages are now more expressive.
1.0.0 / 2016-10-14
Einsum is a very powerful function for contracting tensors of arbitrary dimension and index. However, it is only optimized to contract two terms at a time, resulting in non-optimal scaling for contractions with many terms. `opt_einsum` aims to fix this by optimizing the contraction order, which can lead to arbitrarily large speedups at the cost of additional intermediate tensors.
`opt_einsum` has also been integrated into the `np.einsum` function as of NumPy v1.12.
New Features
- Tensor contraction order optimizer.
- `opt_einsum.contract()` as a drop-in replacement for `numpy.einsum()`.
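As a minimal sketch of the drop-in usage (example shapes are arbitrary):

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

# contract mirrors the einsum call signature but optimizes the
# contraction order before evaluating.
result = oe.contract("ij,jk->ik", a, b)
reference = np.einsum("ij,jk->ik", a, b)
```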