Facebook brings GPU-powered machine learning to Python | InfoWorld

Facebook’s AI research team has released a Python package for GPU-accelerated deep neural network programming that can complement or partly replace existing Python packages for math and stats, such as NumPy.

A Python implementation of the Torch machine learning framework, PyTorch has enjoyed broad uptake at Twitter, Carnegie Mellon University, Salesforce, and Facebook.

Torch was originally implemented in C with a wrapper in the Lua scripting language, but PyTorch wraps the core Torch binaries in Python and provides GPU acceleration for many functions.

Torch is a tensor library for manipulating multidimensional matrices of data employed in machine learning and many other math-intensive applications. PyTorch provides libraries for basic tensor manipulation on CPUs or GPUs, a built-in neural network library, model training utilities, and a multiprocessing library that can work with shared memory, “useful for data loading and hogwild training,” as PyTorch’s developers put it.
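Basic tensor manipulation looks like this (a minimal sketch; the shapes and operations are illustrative):

```python
import torch

# Build small tensors and combine them on the CPU
a = torch.ones(2, 3)    # 2x3 tensor of ones
b = torch.rand(2, 3)    # 2x3 tensor of uniform random values
c = a + b               # elementwise addition
d = c.t()               # transpose to 3x2

# The same operations run on a GPU when one is available
if torch.cuda.is_available():
    c = a.cuda() + b.cuda()
```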

A chief advantage of PyTorch is that it lives in, and lets developers plug into, the vast ecosystem of Python libraries and software. Python programmers are also encouraged to use the styles they’re familiar with, rather than write code specifically meant to be a wrapper for an external C/C++ library. Existing packages like NumPy, SciPy, and Cython (for compiling Python to C for the sake of speed) can all work hand-in-hand with PyTorch.
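The NumPy interoperability is direct: a NumPy array can be wrapped as a Torch tensor without copying, so both libraries operate on the same underlying memory (a small sketch; the array contents are illustrative):

```python
import numpy as np
import torch

# Wrap an existing NumPy array as a Torch tensor; no data is copied
arr = np.arange(6, dtype=np.float32).reshape(2, 3)
t = torch.from_numpy(arr)

# In-place changes made through the tensor are visible in the array,
# because the two views share one buffer
t.mul_(2)

# Converting back is just as direct
back = t.numpy()
```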

PyTorch also offers a quick method to modify existing neural networks without having to rebuild the network from scratch. The techniques used to do this are borrowed from Chainer, another neural-network framework written in Python. The developers also emphasize PyTorch’s memory efficiency thanks to a custom-written GPU memory allocator, so “your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before.”
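The define-by-run style borrowed from Chainer can be sketched as follows; `DynamicNet` and its `depth` argument are hypothetical names used here for illustration:

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """A toy network whose depth is chosen at run time."""
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)

    def forward(self, x, depth):
        # The graph is built as the code executes, so ordinary Python
        # control flow (loops, conditionals) can reshape the network
        # on every call without rebuilding it from scratch
        for _ in range(depth):
            x = torch.relu(self.layer(x))
        return x

net = DynamicNet()
out = net(torch.randn(1, 4), depth=3)
```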

While PyTorch may have been optimized for machine learning, that’s not the exclusive use case. For instance, PyTorch’s tensor computation can serve as a drop-in replacement for similar functions in NumPy. PyTorch provides GPU-accelerated versions of those functions and can drop back to the CPU if a GPU isn’t available.
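A common pattern for that CPU fallback (a sketch; the computation itself is illustrative):

```python
import torch

# Pick the GPU when present, otherwise fall back to the CPU;
# the rest of the code is identical either way
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.linspace(0, 1, steps=5, device=device)
y = torch.sin(x)    # GPU-accelerated counterpart of np.sin

# Move the result back to the CPU (a no-op if it is already there)
result = y.cpu()
```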
