RunPod Blog
How to Fine-Tune LLMs with Axolotl on RunPod
Fine-Tuning

Learn how to fine-tune large language models (LLMs) using Axolotl on RunPod. This step-by-step guide covers setup, configuration, and training with LoRA, 8-bit quantization, and DeepSpeed—all on scalable GPU infrastructure.
21 Apr 2025 3 min read
RTX 5090 LLM Benchmarks for AI: Is It the Best GPU for ML?

The AI landscape demands ever-increasing performance, especially for large language model (LLM) inference. Today, we're excited to showcase how the NVIDIA RTX 5090 is reshaping what's possible in AI compute, with breakthrough performance that outpaces even specialized data center hardware.
17 Apr 2025 4 min read
The Complete Guide to Training Video LoRAs: From Concept to Creation
LoRAs

Learn how to train custom video LoRAs for models like Wan, Hunyuan Video, and LTX Video. This guide covers hyperparameters, dataset prep, and best practices to help you fine-tune high-quality, motion-aware video outputs.
16 Apr 2025 10 min read
The RTX 5090 Is Here: Serve 65,000+ Tokens per Second on RunPod

RunPod customers can now access the NVIDIA RTX 5090—the latest powerful GPU for real-time LLM inference. With impressive throughput and large memory capacity, the 5090 enables serving small and mid-sized AI models at scale. Whether you're deploying high-concurrency chatbots, inference APIs, or multi-model backends, this next-gen GPU…
15 Apr 2025 2 min read
Cost-effective Computing with Autoscaling on RunPod
RunPod Platform

Learn how RunPod helps you autoscale AI workloads for both training and inference. Explore Pods vs. Serverless, cost-saving strategies, and real-world examples of dynamic resource management for efficient, high-performance compute.
14 Apr 2025 3 min read
The Future of AI Training: Are GPUs Enough for the Next Generation of AI?
AI Development

AI workloads are evolving fast. GPUs still dominate training in 2025, but emerging hardware and hybrid infrastructure are reshaping the future. Here’s what GTC 2025 reveals—and how RunPod fits in.
10 Apr 2025 4 min read
Llama-4 Scout and Maverick Are Here—How Do They Shape Up?

Meta has been one of the kings of open-source, open-weight large language models. Their first foray with Llama-1 in 2023, while limited in its application and licensing, was a clear signal to the community that there was an alternative to large closed-off models…
09 Apr 2025 5 min read
Built on RunPod: How Cogito Trained High-Performance Open Models on the Path to ASI

At RunPod, we're proud to power the next generation of AI breakthroughs—and this one is big. San Francisco-based Deep Cogito has just released Cogito v1, a family of open-source models ranging from 3B to 70B parameters. Each model outperforms leading alternatives from LLaMA, DeepSeek, and Qwen…
08 Apr 2025 3 min read
How AI Helped Win a Nobel Prize - Protein Folding and AI
AI Development

AlphaFold just won the Nobel Prize—and proved AI can solve problems once thought impossible. This post explores what it means for science, compute, and how RunPod is helping make the next breakthrough accessible to everyone.
07 Apr 2025 3 min read
No-Code AI: How I Ran My First Language Model Without Coding
No-Code AI

I wanted to run an open-source AI model myself—no code, just curiosity. Here’s how I deployed Mistral 7B on a cloud GPU and what I learned.
03 Apr 2025 8 min read
Bare Metal vs. Instant Clusters: Which Is Right for Your AI Workload?
Bare Metal

Instant Clusters are here. RunPod’s newest deployment option lets you spin up multi-node environments in minutes—no contracts, no config files. Learn how they compare to Bare Metal and when to use each for your AI workloads.
02 Apr 2025 3 min read
Introducing Instant Clusters: Multi-Node AI Compute, On Demand

Until now, RunPod users could generally scale up to 8 GPUs in a single pod. For most use cases—like running inference on Llama 70B or fine-tuning FLUX—that was plenty. But some workloads need more compute than a single server. They need to scale across multiple machines.
31 Mar 2025 3 min read
Machine Learning Basics for People Who Don't Code
No-Code AI

You don’t need to know code to understand machine learning. This post breaks down how AI models learn—and how you can start exploring them without a technical background.
28 Mar 2025 4 min read
RunPod Expands in Asia-Pacific with Launch of AP-JP-1 in Fukushima

We're excited to announce the launch of AP-JP-1, RunPod's first data center in Japan—now live in Fukushima. This marks a major step forward in our global infrastructure strategy and opens the door to dramatically better performance for users across the Asia-Pacific region.
27 Mar 2025 1 min read
Supporting the Future of AGI: RunPod Partners with ARC Prize 2025

The race toward artificial general intelligence isn't just happening behind closed doors at trillion-dollar tech companies. It's also unfolding in the open—in research labs, Discord servers, GitHub repos, and competitions like the ARC Prize. This year, the ARC Prize Foundation is back with ARC-AGI-2…
26 Mar 2025 2 min read
Introducing Easy LLM Fine-Tuning on RunPod: Axolotl Made Simple

At RunPod, we're constantly looking for ways to make AI development more accessible. Today, we're excited to announce our newest feature: a pre-configured Axolotl environment for LLM fine-tuning that dramatically simplifies the process of customizing models to your specific needs.
19 Mar 2025 3 min read
Introducing Bare Metal: Dedicated GPU Servers with Maximum Control and Savings

AI teams and ML engineers need flexibility, performance, and cost efficiency—and RunPod's new Bare Metal offering delivers exactly that. With Bare Metal, you can now reserve dedicated GPU servers for months or even years, ensuring consistent performance without the hassle of hourly or daily pricing.
18 Mar 2025 1 min read
Deploying Multimodal Models on RunPod

Multimodal AI models integrate multiple types of data, such as text, images, audio, or video, to enable tasks like image-text retrieval, video question answering, and speech-to-text. Models such as CLIP, BLIP, and Flamingo show what is possible by combining these modalities, but deploying them presents unique challenges…
18 Mar 2025 4 min read
Open Source Video and LLM: New Model Roundup

Remember when generating decent-looking videos with AI seemed like something only the big tech companies could pull off? Those days are officially over. 2024 brought us a wave of seriously impressive open-source video generation models that anyone can download and start playing with.
14 Mar 2025 13 min read
What Even Is AI? (A Writer & Marketer’s Perspective)

Learn AI With Me: The No-Code Series, Part 1. AI is everywhere, but what is it? If you spend any time online, you've probably seen the explosion of AI tools: ChatGPT, MidJourney, DALL·E, Claude, and Gemini. Everyone is talking about AI, but when you ask, "What exactly is it?"…
13 Mar 2025 6 min read
Training Flux.1 Dev on the MI300X GPU with Huge Batch Sizes

For this post I'm going to be experimenting with fine-tuning Flux.1 Dev on the world's largest GPU, the AMD MI300X. At 192GB of VRAM per GPU, the MI300X allows training at batch sizes and resolutions that no other GPU currently supports.
11 Mar 2025 10 min read
Streamline GPU Cloud Management with RunPod's New REST API

Managing GPU resources has always been a bit of a pain point, with most of the time spent clicking around interfaces and repeating manual configuration. Our new API lets you control everything through code instead, which is great news for those who'd rather automate repetitive tasks…
10 Mar 2025 3 min read
RunPod Achieves SOC 2 Type I Certification: A Major Milestone in Security and Compliance
RunPod Platform

RunPod has successfully completed its SOC 2 Type I audit, conducted by Sensiba. This marks a significant milestone in our commitment to security, compliance, and trust for our customers and partners.
05 Mar 2025 2 min read
AI, Content, and Courage Over Comfort: Why I Joined RunPod
AccessibleTech

By Alyssa Mazzina. Every big move in my career has come down to one thing: choosing courage over comfort. That's why, when I discovered RunPod's bold, developer-first approach to AI infrastructure, even though I knew next to nothing about AI infrastructure, I knew this was exactly where I…
04 Mar 2025 3 min read
Unveiling Enhanced CPU Pods: Docker Runtime and Network Volume Support
Cloud Computing

We're excited to announce two significant upgrades to our CPU pods that will streamline your development workflow and expand storage options. Our CPU pods now feature Docker runtime (replacing Kata Containers) and support for network volumes—previously exclusive to our GPU pods.
03 Mar 2025 2 min read
RunPod Blog © 2025