
Posts by Narendra Patwardhan

Why Erlang and Elixir Matter

When you hear “Elixir” for the first time, you might wonder why a language that runs on top of Erlang is worth learning. The answer lies in the problems you face when building modern, Internet‑scale services: handling thousands of concurrent users, surviving hardware failures, and updating code without dropping connections. Erlang was created in the 1980s at the telecom giant Ericsson to solve exactly those challenges, and Elixir simply makes writing Erlang‑based code a lot more pleasant.

What We’ll Cover

  • The traits that make a system “highly available”
  • How Erlang’s process model enables massive concurrency and fault isolation
Read More

Using WASM as a Sandbox for AI Agents

Artificial intelligence (AI) agents are becoming increasingly powerful, but with great capability comes the need for strong isolation and security guarantees. WebAssembly (WASM) offers a lightweight, portable, and sandboxed execution environment that can protect host systems while still delivering high performance. In this post, we explore how WASM can serve as an effective sandbox for AI agents, discuss the technical advantages, outline common architectures, and highlight real‑world use cases.

Why Choose WebAssembly for AI Sandboxing?

  • Platform independence: WASM modules run consistently across browsers, servers, edge devices, and even embedded systems without modification.
  • Strong security model: The runtime enforces memory isolation, so a module can touch only its own linear memory and the capabilities the host explicitly grants (see the sketch after this list).
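
To make this concrete, here is a minimal sketch of executing guest code from Python with the wasmtime runtime (assuming `pip install wasmtime`); the WAT module and its exported `add` function are hypothetical stand‑ins for agent‑produced code:

```python
# Minimal sketch: running untrusted code inside a WASM sandbox via wasmtime-py.
from wasmtime import Engine, Store, Module, Instance

engine = Engine()
store = Store(engine)

# Compile a tiny module from WebAssembly text format. Real agent tools would
# typically be compiled from Rust/C/AssemblyScript to a .wasm binary instead.
module = Module(
    engine,
    '(module (func (export "add") (param i32 i32) (result i32) '
    'local.get 0 local.get 1 i32.add))',
)

# Instantiate with an empty import list: the guest receives no host
# capabilities (no filesystem, no network) unless we explicitly pass them in.
instance = Instance(store, module, [])
add = instance.exports(store)["add"]

print(add(store, 2, 3))  # 5, computed entirely inside the sandbox
```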
Read More

Using Docker for orchestrating AI training

Artificial intelligence (AI) projects often involve complex pipelines, massive datasets, and a multitude of dependencies. Managing these moving parts manually can quickly become a nightmare, leading to inconsistent results, wasted compute cycles, and difficult reproducibility. Docker provides a lightweight, portable, and reproducible environment that can dramatically simplify the orchestration of AI training workloads. In this post we’ll explore why Docker is a natural fit for AI, walk through a practical setup, and share best practices for scaling and maintaining robust training pipelines.

Why Choose Docker for AI Training?

  • Reproducibility: Container images capture the exact versions of libraries, drivers, and system dependencies, so a training run behaves the same on any machine (a minimal launch sketch follows this list).
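
As an illustration, here is a hedged sketch of launching a single training container from Python with the Docker SDK (`pip install docker`); the image tag, script name, and host paths are placeholders, not values from the post:

```python
# Hypothetical sketch: one containerized training job via the Docker SDK.
import docker

client = docker.from_env()

logs = client.containers.run(
    image="pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime",  # pinned image = reproducible deps
    command="python train.py --epochs 10",                  # placeholder training script
    volumes={"/srv/datasets": {"bind": "/data", "mode": "ro"}},  # mount data read-only
    device_requests=[  # request all available GPUs from the host
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,  # clean up the container when training exits
)
print(logs.decode())
```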
Read More

Using Einops with PyTorch

When working with deep learning models in PyTorch, tensor reshaping, rearranging, and reduction operations are inevitable. While .view(), .permute(), and .reshape() get the job done, they often produce code that is hard to read and error‑prone. Einops (Einstein‑Inspired Notation for operations) offers a concise, expressive, and readable alternative that integrates seamlessly with PyTorch. In this post we’ll explore why Einops is valuable, how to install it, and walk through practical examples that demonstrate its power.

What Is Einops?

Einops is a lightweight library that provides three core functions:

  • rearrange: Generalized permutation and reshaping.
  • reduce: Aggregation (mean, sum, max, and so on) combined with rearrangement.
  • repeat: Copying elements along new or existing axes (all three are demonstrated in the sketch below).
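
A quick sketch of all three functions against a dummy image batch (the shapes are illustrative):

```python
import torch
from einops import rearrange, reduce, repeat

x = torch.randn(8, 3, 32, 32)  # batch, channels, height, width

# rearrange: flatten the spatial dims into a token axis
tokens = rearrange(x, 'b c h w -> b (h w) c')  # -> (8, 1024, 3)

# reduce: global average pooling without remembering dim indices
pooled = reduce(x, 'b c h w -> b c', 'mean')   # -> (8, 3)

# repeat: tile a per-channel bias across the batch
bias = torch.randn(3)
tiled = repeat(bias, 'c -> b c', b=8)          # -> (8, 3)
```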
Read More

Unraveling self-attention vs cross-attention in Transformers

If you have worked with Transformers—whether BERT, GPT, or the original Encoder-Decoder architecture—you are intimately familiar with the concept of "Attention." The equation is arguably the most famous in modern NLP:

$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$

However, strictly memorizing the formula often masks the architectural nuances. While the mathematical operation is identical, the source of the inputs determines whether you are performing Self-Attention or Cross-Attention.

This article explores the mechanical and semantic differences between these two mechanisms, targeted at practitioners who understand the basics of deep learning.
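
A minimal PyTorch sketch makes the distinction concrete; the learned projections W_Q, W_K, W_V are omitted for brevity, so these tensors stand in for already‑projected queries, keys, and values:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # softmax(Q K^T / sqrt(d_k)) V — exactly the formula above
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ v

# Toy shapes: batch 2, decoder length 5, encoder length 7, model dim 16.
dec = torch.randn(2, 5, 16)  # decoder hidden states
enc = torch.randn(2, 7, 16)  # encoder outputs

# Self-attention: Q, K, V all derive from the same sequence.
self_out = attention(dec, dec, dec)   # -> (2, 5, 16)

# Cross-attention: Q from the decoder, K and V from the encoder.
cross_out = attention(dec, enc, enc)  # -> (2, 5, 16)
```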

Read More