What Is PyTorch?
PyTorch is an open‑source deep‑learning framework originally developed by Facebook’s AI Research lab (FAIR, now Meta AI). It was released in 2016 and is now maintained by the PyTorch Foundation under the Linux Foundation. PyTorch provides a set of tools and libraries for building machine‑learning models in areas such as computer vision, natural‑language processing, and reinforcement learning.
PyTorch centres on tensors, multidimensional arrays similar to NumPy arrays but designed to run efficiently on both CPUs and GPUs. It uses reverse‑mode automatic differentiation (“autograd”) to compute gradients and supports dynamic computation graphs, allowing you to modify the model’s architecture on the fly. These features make PyTorch flexible and intuitive, especially when experimenting with new ideas.
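To make autograd concrete, here is a minimal sketch: we build a small expression from a tensor that tracks gradients, then call backward() to run reverse‑mode differentiation. The function y = x² + 3x is an illustrative choice, not something from the text above.

```python
import torch

# A scalar tensor that records operations for autograd
x = torch.tensor(2.0, requires_grad=True)

# The computation graph is built dynamically as we compute: y = x^2 + 3x
y = x**2 + 3 * x

# Reverse-mode automatic differentiation: populate x.grad with dy/dx
y.backward()

print(x.grad)  # dy/dx = 2x + 3, so 7.0 at x = 2
```

Because the graph is rebuilt on every forward pass, you could wrap this in a loop with Python control flow (an `if` that changes the expression, say) and gradients would still come out correct.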
Installing PyTorch
Most beginners install PyTorch via pip. A single command installs the current default build (CPU‑only on macOS and Windows; the default Linux wheels ship with CUDA support) along with its companion libraries:
pip install torch torchvision torchaudio
This command fetches PyTorch together with its companion vision and audio libraries. To verify the installation, open Python and run:
import torch
print(torch.__version__) # prints installed version
print(torch.cuda.is_available()) # checks if GPU support is available
The first line outputs the version, while torch.cuda.is_available() returns True when your hardware and drivers support CUDA.
Running PyTorch Code Locally
A convenient way to experiment with PyTorch is through Jupyter Notebook:
Install Jupyter if you haven’t already (e.g., pip install notebook) and launch it from your terminal with jupyter notebook. Create a new notebook and select a Python kernel.
In a cell, write and run the following:
import torch
# Create a tensor from a Python list
t1 = torch.tensor([1, 2, 3])
print("tensor:", t1)
# Create a 2×3 tensor filled with zeros
t2 = torch.zeros(2, 3)
print("zeros:", t2)
# Add the two tensors (broadcasting t1 across t2’s rows)
result = t1 + t2
print("t1 + t2:", result)
This example demonstrates how to create tensors and perform element‑wise addition. You can move tensors to a GPU using tensor.cuda() or tensor.to("cuda") when torch.cuda.is_available() returns True.
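The usual pattern is to pick a device once and move tensors to it with .to(), so the same script runs on either hardware. A minimal sketch, extending the example above:

```python
import torch

# Choose a GPU when one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

t1 = torch.tensor([1, 2, 3])
t2 = torch.zeros(2, 3)

# Tensors must live on the same device before they can be combined
t1 = t1.to(device)
t2 = t2.to(device)

result = t1 + t2  # runs on the GPU when device == "cuda"
print(result.device)
```

Writing code against a `device` variable like this keeps it portable: nothing changes when you later move from a laptop to a CUDA machine.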
Running Code in the Cloud
If you prefer not to install anything locally, Google Colab offers a free cloud‑hosted notebook service. Visit colab.research.google.com, sign in with a Google account, create a new notebook, and change the runtime type to GPU (Runtime ▸ Change runtime type). PyTorch is usually pre‑installed on Colab; however, you can install or upgrade it with !pip install torch torchvision torchaudio. This gives you a GPU environment for testing GPU‑accelerated code at no cost.
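After switching the runtime, it is worth confirming in a cell that PyTorch actually sees the GPU. A short check (the printed messages are illustrative wording, not Colab output):

```python
import torch

# Verify that the selected Colab runtime exposes a CUDA device
if torch.cuda.is_available():
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; check Runtime > Change runtime type")
```

If this reports no GPU, the runtime type was likely not changed, or the notebook needs to be reconnected after changing it.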
Final Thoughts
PyTorch has become one of the most popular frameworks for research and production because of its flexibility and Pythonic design. Its tensor library supports both CPU and GPU computation, and its dynamic computation graph, built with reverse‑mode auto‑differentiation, makes it easy to iterate on new model architectures. Whether you’re building a simple classifier or exploring cutting‑edge research, PyTorch’s intuitive interface and active community make it a powerful tool for modern machine learning.