Add py-opt-einsum 3.2.1

Optimized einsum can significantly reduce the overall execution time of
einsum-like expressions (e.g., np.einsum, dask.array.einsum, pytorch.einsum,
tensorflow.einsum) by optimizing the expression's contraction order and
dispatching many operations to canonical BLAS, cuBLAS, or other specialized
routines. Optimized einsum is agnostic to the backend and can handle NumPy,
Dask, PyTorch, TensorFlow, CuPy, Sparse, Theano, JAX, and Autograd arrays, as
well as potentially any library which conforms to a standard API.
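As a minimal sketch of what the port provides (assuming opt_einsum and numpy are installed), `opt_einsum.contract` is a drop-in replacement for `np.einsum` that plans the contraction order before evaluating:

```python
import numpy as np
from opt_einsum import contract

# Three small matrices forming a chained contraction
a = np.random.rand(10, 20)
b = np.random.rand(20, 30)
c = np.random.rand(30, 10)

# contract() mirrors the np.einsum call signature, but first finds an
# efficient pairwise contraction order and dispatches to BLAS where possible
result = contract('ij,jk,ki->', a, b, c)

# The optimized result matches the unoptimized np.einsum evaluation
assert np.allclose(result, np.einsum('ij,jk,ki->', a, b, c))
```

The benefit grows with the number of operands: `np.einsum` evaluates the expression as written, while `contract` can reorder intermediate products to reduce the total floating-point cost.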

WWW: https://github.com/dgasmith/opt_einsum
This commit is contained in:
Sunpoet Po-Chuan Hsieh 2020-07-09 18:08:00 +00:00
parent 3bbba57f62
commit 25b4607212
Notes: svn2git 2021-03-31 03:12:20 +00:00
svn path=/head/; revision=541766
4 changed files with 37 additions and 0 deletions


@@ -781,6 +781,7 @@
SUBDIR += py-numexpr
SUBDIR += py-numpy
SUBDIR += py-nzmath
SUBDIR += py-opt-einsum
SUBDIR += py-osqp
SUBDIR += py-pandas
SUBDIR += py-pandas-datareader


@@ -0,0 +1,24 @@
# Created by: Po-Chuan Hsieh <sunpoet@FreeBSD.org>
# $FreeBSD$

PORTNAME=	opt-einsum
PORTVERSION=	3.2.1
CATEGORIES=	math python
MASTER_SITES=	CHEESESHOP
PKGNAMEPREFIX=	${PYTHON_PKGNAMEPREFIX}
DISTNAME=	opt_einsum-${PORTVERSION}

MAINTAINER=	sunpoet@FreeBSD.org
COMMENT=	Optimizing NumPy's einsum function

LICENSE=	MIT
LICENSE_FILE=	${WRKSRC}/LICENSE

RUN_DEPENDS=	${PYNUMPY}

USES=		python:3.5+
USE_PYTHON=	autoplist concurrent distutils

NO_ARCH=	yes

.include <bsd.port.mk>


@@ -0,0 +1,3 @@
TIMESTAMP = 1594308020
SHA256 (opt_einsum-3.2.1.tar.gz) = 83b76a98d18ae6a5cc7a0d88955a7f74881f0e567a0f4c949d24c942753eb998
SIZE (opt_einsum-3.2.1.tar.gz) = 72186


@@ -0,0 +1,9 @@
Optimized einsum can significantly reduce the overall execution time of
einsum-like expressions (e.g., np.einsum, dask.array.einsum, pytorch.einsum,
tensorflow.einsum) by optimizing the expression's contraction order and
dispatching many operations to canonical BLAS, cuBLAS, or other specialized
routines. Optimized einsum is agnostic to the backend and can handle NumPy,
Dask, PyTorch, TensorFlow, CuPy, Sparse, Theano, JAX, and Autograd arrays, as
well as potentially any library which conforms to a standard API.
WWW: https://github.com/dgasmith/opt_einsum