Andrej Karpathy Launches ‘nanochat’, An Open-Source ChatGPT-Style Model Training Pipeline

OpenAI co-founder and Eureka Labs founder Andrej Karpathy has introduced nanochat, an open-source framework that enables users to train, fine-tune, and interact with their own ChatGPT-style language model through a streamlined setup. Building upon his earlier project nanoGPT, which was limited to pretraining large language models, nanochat delivers a complete end-to-end training and inference pipeline — from tokenization to deployment — all within a compact and readable codebase.

In his announcement on X, Karpathy shared, “You boot up a cloud GPU box, run a single script and in as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.” The project’s repository, spanning roughly 8,000 lines of code, offers a fully functional system that includes tokenizer training in Rust, Transformer-based LLM pretraining on FineWeb, supervised fine-tuning, and optional reinforcement learning with GRPO. It also integrates efficient inference via KV caching, along with both command-line and browser-based interfaces for interacting with the model.
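The KV caching mentioned above is the standard trick that makes autoregressive decoding fast: rather than recomputing attention keys and values for the entire prefix at every step, the model appends one new key/value pair per generated token. nanochat's actual implementation differs, but the idea can be sketched in a few lines of plain Python (all names here are illustrative, not from the repo):

```python
import math

def attend(q, keys, values):
    # Scaled dot-product attention for one query vector over all cached pairs.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]

class KVCache:
    """Append-only cache: each decoding step adds one key/value pair,
    so attention over a prefix of length n costs O(n) per step instead
    of recomputing the whole prefix from scratch."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        self.keys.append(k)
        self.values.append(v)
        return attend(q, self.keys, self.values)
```

With a single cached pair the softmax weight is 1, so the output is just that pair's value; each subsequent `step` blends the new value with everything already cached.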

A key feature of nanochat is its scalability — users can train models at different levels of complexity depending on available resources and desired performance. Karpathy noted that a basic ChatGPT clone can be trained for around $100 in just four hours on an 8×H100 GPU node, while a more advanced run costing about $1,000 over 42 hours can produce a model capable of solving basic mathematical and coding problems.
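The two price points quoted above are mutually consistent: both imply an hourly rate of roughly $24–25 for the 8×H100 node, i.e. about $3 per GPU-hour, a plausible cloud rental price. A quick check of the arithmetic:

```python
# Implied hourly node cost from the two runs described in the article.
basic_rate = 100 / 4        # $100 over 4 hours  -> $25.00/hour
advanced_rate = 1000 / 42   # $1,000 over 42 hours -> ~$23.81/hour
per_gpu = basic_rate / 8    # ~ $3.13 per H100 GPU-hour
```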

Describing the project’s intent, Karpathy stated, “My goal is to get the full ‘strong baseline’ stack into one cohesive, minimal, readable, hackable, maximally forkable repo.” The initiative aims to make the inner workings of large language models more accessible and educational, particularly for researchers, students, and developers eager to understand and experiment with AI model architectures.

Nanochat will also serve as the capstone project for Karpathy’s upcoming course, LLM101n, at Eureka Labs, where participants will gain hands-on experience with every stage of the LLM lifecycle — from data preparation to reinforcement learning. By combining simplicity, transparency, and practical usability, nanochat represents a significant step toward democratizing LLM development and making AI experimentation more open and reproducible.
