Add py-jax 0.1.72

JAX is Autograd and XLA, brought together for high-performance machine learning
research.

With its updated version of Autograd, JAX can automatically differentiate native
Python and NumPy functions. It can differentiate through loops, branches,
recursion, and closures, and it can take derivatives of derivatives of
derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)
via grad as well as forward-mode differentiation, and the two can be composed
arbitrarily to any order.
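
As a quick, non-authoritative sketch (not part of the port itself; the function
and values are chosen only for illustration), reverse-mode differentiation with
grad looks like this:

  import jax
  import jax.numpy as jnp

  def tanh(x):
      # Ordinary Python/NumPy-style code; JAX traces it to differentiate.
      y = jnp.exp(-2.0 * x)
      return (1.0 - y) / (1.0 + y)

  grad_tanh = jax.grad(tanh)              # reverse-mode derivative of tanh
  print(grad_tanh(1.0))                   # ~0.4199743

  # Derivatives of derivatives: grad composes with itself (and with
  # forward-mode transforms) to any order.
  print(jax.grad(jax.grad(tanh))(1.0))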

What's new is that JAX uses XLA to compile and run your NumPy programs on GPUs
and TPUs. Compilation happens under the hood by default, with library calls
getting just-in-time compiled and executed. But JAX also lets you just-in-time
compile your own Python functions into XLA-optimized kernels using a
one-function API, jit. Compilation and automatic differentiation can be composed
arbitrarily, so you can express sophisticated algorithms and get maximal
performance without leaving Python. You can even program multiple GPUs or TPU
cores at once using pmap, and differentiate through the whole thing.
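
For example, wrapping a function in jit compiles it with XLA the first time it
runs (a hedged sketch; the function and shapes here are invented, and pmap
would look similar but needs more than one device):

  import jax
  import jax.numpy as jnp

  def slow_f(x):
      # Element-wise work that XLA can fuse into a single kernel.
      return x * x + x * 2.0

  fast_f = jax.jit(slow_f)                # traced and compiled on first call
  x = jnp.ones((1000, 1000))
  print(fast_f(x)[0, 0])                  # runs the XLA-compiled version

  # Compilation and differentiation compose arbitrarily.
  df = jax.jit(jax.grad(lambda x: jnp.sum(slow_f(x))))
  print(df(jnp.ones(3)))                  # [4. 4. 4.]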

Dig a little deeper, and you'll see that JAX is really an extensible system for
composable function transformations. Both grad and jit are instances of such
transformations. Others are vmap for automatic vectorization and pmap for
single-program multiple-data (SPMD) parallel programming of multiple
accelerators, with more to come.
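
As an illustration of one such transformation (again a sketch with made-up
shapes, not code from this port), vmap turns a per-example function into a
batched one:

  import jax
  import jax.numpy as jnp

  def predict(w, x):
      # Written for a single example x.
      return jnp.dot(w, x)

  w = jnp.ones(3)
  xs = jnp.ones((10, 3))

  # Map over the leading axis of xs; w is shared across the batch.
  batched = jax.vmap(predict, in_axes=(None, 0))
  print(batched(w, xs).shape)             # (10,)

  # Transformations compose: vectorize, then differentiate, then compile.
  loss = lambda w, xs: jnp.sum(batched(w, xs))
  fast_grad = jax.jit(jax.grad(loss))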

WWW: https://github.com/google/jax
Author: Po-Chuan Hsieh (sunpoet)
Date:   2020-07-09 18:08:06 +00:00
Commit: 4d0e669f8d (parent 25b4607212)
Notes:  svn2git, 2021-03-31 03:12:20 +00:00
        svn path=/head/; revision=541767
4 changed files with 54 additions and 0 deletions

math/Makefile
@@ -762,6 +762,7 @@
SUBDIR += py-hdbscan
SUBDIR += py-hdmedians
SUBDIR += py-intspan
SUBDIR += py-jax
SUBDIR += py-keras
SUBDIR += py-keras-applications
SUBDIR += py-keras-preprocessing

math/py-jax/Makefile (new file)
@@ -0,0 +1,24 @@
# Created by: Po-Chuan Hsieh <sunpoet@FreeBSD.org>
# $FreeBSD$

PORTNAME= jax
PORTVERSION= 0.1.72
CATEGORIES= math python
MASTER_SITES= CHEESESHOP
PKGNAMEPREFIX= ${PYTHON_PKGNAMEPREFIX}

MAINTAINER= sunpoet@FreeBSD.org
COMMENT= Differentiate, compile, and transform Numpy code

LICENSE= APACHE20

RUN_DEPENDS= ${PYTHON_PKGNAMEPREFIX}absl-py>=0:devel/py-absl-py@${PY_FLAVOR} \
	${PYNUMPY} \
	${PYTHON_PKGNAMEPREFIX}opt-einsum>=0:math/py-opt-einsum@${PY_FLAVOR}

USES= python:3.6+
USE_PYTHON= autoplist concurrent distutils

NO_ARCH= yes

.include <bsd.port.mk>

math/py-jax/distinfo (new file)
@@ -0,0 +1,3 @@
TIMESTAMP = 1594308022
SHA256 (jax-0.1.72.tar.gz) = b551a7b9fee31e744449191f83e0121d9a6a5a04755494df5b3cd468477f2119
SIZE (jax-0.1.72.tar.gz) = 398705

math/py-jax/pkg-descr (new file)
@@ -0,0 +1,26 @@
JAX is Autograd and XLA, brought together for high-performance machine learning
research.

With its updated version of Autograd, JAX can automatically differentiate native
Python and NumPy functions. It can differentiate through loops, branches,
recursion, and closures, and it can take derivatives of derivatives of
derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)
via grad as well as forward-mode differentiation, and the two can be composed
arbitrarily to any order.

What's new is that JAX uses XLA to compile and run your NumPy programs on GPUs
and TPUs. Compilation happens under the hood by default, with library calls
getting just-in-time compiled and executed. But JAX also lets you just-in-time
compile your own Python functions into XLA-optimized kernels using a
one-function API, jit. Compilation and automatic differentiation can be composed
arbitrarily, so you can express sophisticated algorithms and get maximal
performance without leaving Python. You can even program multiple GPUs or TPU
cores at once using pmap, and differentiate through the whole thing.

Dig a little deeper, and you'll see that JAX is really an extensible system for
composable function transformations. Both grad and jit are instances of such
transformations. Others are vmap for automatic vectorization and pmap for
single-program multiple-data (SPMD) parallel programming of multiple
accelerators, with more to come.

WWW: https://github.com/google/jax