
How I Learned to Stop Course-Correcting and Start Using Message Checkpoints
Why hitting the reset button will actually save you time instead of fighting an upstream context current.
An LLM's power isn't just in the model itself, but in the tools it can access. While base models are powerful, they often hit limitations when faced with tasks requiring up-to-the-minute information, interaction with live websites, or deep, structured reasoning. MCP servers are dedicated tools that extend Cline’s capabilities, allowing it to overcome the inherent limitations of large language models. By connecting Cline to specialized servers for search, documentation, browser control, and more...
Hello Cline community 🫡 Cline v3.18 is a focused release that introduces the Gemini CLI as a provider, delivers significant performance and reliability upgrades for the Claude 4 family, and ships several important core improvements. Gemini CLI Provider If you have the Gemini CLI tool installed and authenticated with your personal Google account, you can now leverage it directly within Cline. This gives you access to 1,000 free requests per day for the Gemini 2.5 Pro and Flash models, making...
If you're a Cline user with a Claude Max subscription, you can connect your account to save on token costs. Instead of paying per token via an API key, the Claude Code provider lets you leverage your existing subscription for all your development tasks in Cline. Setup 1. Install Claude Code: First, install the Claude Code CLI: npm install -g @anthropic-ai/claude-code (Anthropic's instructions) 2. Configure in Cline: * Navigate to...
When it comes to modifying code, Cline has two primary methods in its toolkit: 1. write_to_file for creating or overwriting entire files, and 2. replace_in_file for making surgical, targeted changes. We call these targeted changes "diff edits," and their reliability is fundamental to the agent's performance. As part of our push to optimize every core subsystem of the agent, we've been intensely focused on improving the success rate of these diff edits. The result? We recently...
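To make the distinction concrete, here is a minimal Python sketch of the general idea behind a diff edit: find one exact block of existing text and swap it, rather than rewriting the whole file. The function name and error handling are illustrative assumptions, not Cline's actual replace_in_file format or implementation.

```python
# Illustrative sketch of a "diff edit": replace exactly one matching block
# of existing code instead of overwriting the entire file.
# This is not Cline's actual replace_in_file format or implementation.

def apply_diff_edit(original: str, search: str, replace: str) -> str:
    """Replace a single exact occurrence of `search` with `replace`."""
    matches = original.count(search)
    if matches == 0:
        raise ValueError("search block not found; edit cannot be applied")
    if matches > 1:
        raise ValueError("search block is ambiguous; it matches more than once")
    return original.replace(search, replace, 1)

source = "def greet():\n    print('hello')\n"
patched = apply_diff_edit(source, "print('hello')", "print('hello, world')")
print(patched)
```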
We're excited to announce that Amazon Web Services has officially contributed 35 new Model Context Protocol (MCP) servers to the Cline ecosystem. This is a major milestone that extends the power of AI to nearly every corner of the AWS platform, allowing developers to manage their entire cloud infrastructure using natural language. For a long time, managing a complex cloud environment meant juggling dozens of consoles, dashboards, and configuration files. With these new MCP servers, that complex...
For some time, Cline offered two ways to provide persistent guidance: a simple "Custom Instructions" text box and the more robust, file-based .clinerules system. While the text box was a familiar starting point, it was always a stepping stone. Today, we're fully embracing the more powerful paradigm by deprecating the old Custom Instructions feature to focus entirely on .clinerules. The reason is simple: treating instructions as code is fundamentally better. A single, static text box doesn’t scale...
Hello Cline community 🫡 v3.17.14 is focused on expanding provider support, refining the terminal experience, and shipping a handful of important stability improvements. New Provider Integrations We're committed to making Cline as flexible as possible, and that means letting you plug in the tools you already use. This release adds support for two new API providers. * SAP AI Core: We've also added support for SAP AI Core, enabling users to connect to both Claude and GPT models through the...
It feels like every month there's a new "must-have" AI coding tool. The FOMO is real, but so is the fatigue of constantly switching, learning new workflows, and migrating settings. It’s exhausting, but that's the price for developers who want the greatest leverage AI can offer. The magic of AI coding isn't just in the tool itself; it's in the power of the underlying model. And the "best" model is a moving target. One year ago, GPT-4o led the way. Then Anthropic's Claude 3.5 Sonnet...
Remember when AI coding meant tab autocomplete? The human did 95% of the work: navigating the codebase, finding the right files, locating the exact spot to edit, beginning to type, and only then could AI offer a helpful suggestion. The human was the driver; AI was barely a passenger. Today's agentic AI can search codebases, read files, write entire modules, refactor systems, and orchestrate complex changes across multiple files. With tools like Cline, AI has eaten up nearly the entire coding...
When developers first encounter Cline, they often describe it as their "AGI moment" – that pivotal instant when they realize AI has crossed from helpful suggestion tool to genuine coding partner. But what exactly separates a true coding agent from the growing crowd of AI-powered development tools? The answer lies in understanding what the word "agent" actually means. Defining the Agent OpenAI defines an agent as "a system that independently accomplishes tasks on your behalf." Anthropic takes...
Yesterday, Windsurf users lost access to Claude 3.x models with five days' notice. OpenAI's acquisition of Windsurf created competitive tensions with Anthropic, and developers got caught in the crossfire. Picture this: you're deep into a critical project, and suddenly your AI coding assistant is crippled by corporate politics. Free tier users lost access entirely; paid subscribers face severe capacity constraints. This validates why we built Cline differently from day one. When Corporate War...
Hello Cline community 🫡 We've been burning the candle at both ends to make Cline work as well as possible with the new Claude 4 family of models, and we're excited to share that Cline 3.17.9 includes experimental Claude 4 support that addresses reliability issues, an upgraded task timeline with scrolling navigation, and expanded file upload capabilities for data analysis. Experimental Claude 4 Support If you've been using Claude 4 with Cline, you've probably noticed some frustrating edit failures...
Here's a common question we get from prospective Cline users: "How does Cline handle large codebases? Do you use RAG to index everything?" It's a reasonable question. Retrieval Augmented Generation (RAG) has become the go-to solution for giving AI systems access to large knowledge bases. But for Cline, we've taken a deliberately different path. We don't index your codebase, and this choice isn't an oversight; it's a fundamental design decision that delivers better code quality, stronger security...
Cline v3.17 brings several solid improvements to enhance your development workflow. The highlight is Global Workflows – a feature many of you have been requesting – along with a redesigned settings interface and expanded provider support. Global Workflows The standout feature in this release is global workflows. If you've built a workflow that works brilliantly in one project, you can now share it across all your workspaces. Here's how it works: * Global workflows live in ~/Documents/Cline...
Welcome to Cline v3.16! We're really excited about this release, especially a new feature that's already making a big difference for our own team: Workflows. It's saving our team countless hours, and we know it can do the same for you. Let's get to it! → Workflows → Your Personal Automation Powerhouse You know those repetitive sequences you find yourself doing all the time? Setting up a new project module, running through a pre-commit checklist, or even the detailed steps for a PR review...
When you first dive into a powerful tool like Cline, it's natural to wonder, "What's the best way to set this up?" The beauty of Cline is that it's designed to adapt and grow with you. There isn't a one-size-fits-all answer, but rather a spectrum of configurations that can match your immediate needs and the complexity of the tasks you're tackling. This post will walk you through three common setup approaches – Beginner, Intermediate, and Advanced. My goal is to help you find what works best for...
If you've been in tech for a while, you've seen the cycles. Skepticism greets nearly every major shift. We've certainly observed some resistance from engineers, a common echo of the "pessimistic streak" that often surfaces in tech hubs when new technologies emerge. It's reminiscent of the doubts that once surrounded Linux, the cloud, and even JavaScript on the server. History tends to rhyme, and the current wave of AI in software development is no different. But here’s the thing: while some...
Welcome to Cline v3.15! This update is all about refining the experience of Cline as your daily driver. Let's get right into it. Task Timeline = Storyboard of your conversation Ever wished you could instantly see the "story" of how Cline tackled a complex task? With v3.15, we're introducing the Task Timeline, a sleek, visual "storyboard" integrated directly into your task header. This isn't just a log; it's an intuitive series of blocks charting every key step Cline takes – from tool calls to...
The allure of AI in development often conjures images of instant solutions: a quick prompt, a flash of code, and on to the next task. But as many developers have discovered, this path can quickly become a frustrating loop of near-misses, endless re-prompting, and AI-generated code that feels strangely disconnected from the project's core. This isn't merely inefficient; it represents a fundamental misunderstanding of how to truly harness the power of coding agents. At Cline, we see a more robust...
We're excited to announce Cline v3.14, bringing significant improvements to Gemini caching and cost transparency, more flexible checkpoints, a new command for rule creation, LaTeX support, and a host of quality-of-life updates driven by community feedback. Let's dive into the highlights: Improved Gemini Caching & Transparency We know accurate cost tracking and efficient model usage are crucial. While Gemini's context caching offers potential savings, ensuring its effectiveness and providing...
One of the most common questions developers ask when getting started with Cline is: "Which AI model should I use?" With Cline's approach to supporting virtually all models from the top providers (Anthropic, Google Gemini, OpenAI, and others), the choices can be overwhelming. The truth is, there's no single "best" model for all situations. What works brilliantly for one development task might be overkill – or underpowered – for another. Let's break down how to think about model selection through...
As Product Marketing Manager at Cline, I often experiment with ways to fine-tune Cline's behavior for different tasks. For a while, I relied on a system of .clinerules/ and clinerules-bank/ folders, manually dragging rule files back and forth to activate different instruction sets. Honestly? It was kind of annoying – difficult to keep track of which rules were active, easy to forget to switch them, and just generally cumbersome. But I did get tremendous value from the "modular" nature of these...
Tired of manually managing context instructions for different tasks? Wish you could quickly branch explorations or fix a flawed prompt without starting over? Cline 3.13 tackles these workflow frustrations head-on. We're delivering precise control over your AI development process with three key updates: 1. Toggleable .clinerules for effortless context switching (docs) 2. Quick /newtask Slash Commands for session management (docs) 3. Message Editing with Checkpoint Restore for fearless exploration...
Ever wondered why a simple "Hi" to Cline might seem to use more resources than expected? Or perhaps you've noticed performance subtly changing during a long coding session? The answer often lies in a crucial, yet sometimes misunderstood, concept: the context window. Understanding it is key to unlocking Cline's full potential while managing performance and cost effectively. Think of the context window as the AI's short-term memory. It's the space where all the information relevant to your current...
Context windows have been both the power and limitation of AI coding assistants. They enable the contextual understanding that makes tools like Cline so useful, but their finite nature has always imposed constraints on complex, long-running tasks. With Cline's latest features, we've created a solution that fundamentally changes how developers can work with AI assistants on complex projects. The Context Window Problem We implemented the...
We're excited to announce the release of Cline v3.12, packed with diff edit improvements, model favorites, new auto-approve options, and more. Let's dive into the highlights: Faster Diff Application Applying code changes suggested by Cline is now significantly faster, especially in large files. v3.12 boosts diff edit performance for a smoother, more responsive experience, letting you iterate faster. We've also added an indicator in the...
Cline 3.10 is here, and it's packed with features designed to significantly enhance your development workflow, speed, and control. This release introduces a powerful new way to interact with the web, alongside several quality-of-life improvements and optimizations requested by the community. Let's dive in. Connect Cline to Your Local Chrome Browser The headline feature of 3.10 is a fundamental shift in how Cline interacts with web content: you can now connect Cline directly to your running local Chrome browser...
For developers using Cline, the gap between AI-assisted coding and direct database management just closed. Developers have been clamoring for it, and it's finally here: the official Supabase MCP Server is now live in the Cline MCP Marketplace. No more context switching between your editor, the Supabase dashboard, and SQL clients. You can now interact with your database – execute queries, manage schemas, fetch configurations, and more – all orchestrated by Cline, right where you code.
The developer community has been buzzing about the possibilities unlocked by the Model Context Protocol (MCP), and few examples have captured the imagination quite like Siddharth Ahuja's MCP servers for Ableton Live and Blender. These notoriously complex creative powerhouses, suddenly accessible through natural language prompts via tools like Cline, feel like a glimpse into a new era of software interaction. 🧩 Built an MCP that lets Claude talk directly to Blender. It helps you create beautiful...
Cline is already transforming how developers approach complex coding tasks, acting as an autonomous agent within your IDE. But what if you could push its capabilities even further? What if Cline could not only write and refactor code but also conduct deep web research, generate UI components on the fly, convert documents, and even interact with creative software? That's what we envision for Model Context Protocol (MCP). MCP servers are like power-ups for Cline, allowing it to securely connect...
Here's what's new in 3.8.0: * Streamlined workflows: New 'Add to Cline' and 'Fix with Cline' code actions for faster problem-solving * Account view: Track billing and usage history for Cline accounts directly in the extension * Sort underlying provider routing: Choose how Cline routes API requests based on throughput, price, or latency to optimize for your specific needs * Rich MCP display improvements: Visualize data more effectively with dynamic image loading and GIF support directly in your...
From the jump, we've envisioned a bolder path forward in AI coding. While others perfect tab autocomplete, we've been building a truly collaborative coding agent that transforms how software gets created. From day one, we've given engineers full, unrestricted access to frontier models—exactly as they were designed to be used. No context limitations, no artificial constraints to fit subscription economics. This is AI coding in its purest form. What does this mean in practice? Cline reads entire...
Frontend development has always been a careful balance between creativity and tedium. For every exciting UX innovation you bring to life, there are dozens of navbar implementations, form components, and card layouts to build along the way. The repetitive nature of these tasks is a productivity killer—until now. By combining Cline's conversational AI capabilities with 21st.dev's Magic UI MCP, we've created a workflow that eliminates the most time-consuming parts of frontend development. The result...
Last week, we spoke with a San Francisco-based startup that estimated Cline had boosted their engineering output fivefold. While startups tend to adopt new technologies more quickly, we're still in the early days of AI-powered coding. This "5x" increase offers a glimpse into what could be a shockingly transformative future. Welcome to part one of our 5x Engineer Series, where we're exploring the world where AI quintuples engineering productivity. What happens when every software engineer on the...
Have you seen those viral videos of people creating stunning 3D scenes in Blender by just describing what they want? No clicking through complex menus, no hunting for the right tool in a sea of options, no watching 10-hour YouTube tutorials. Just conversation. This isn't just a neat demo—it's the beginning of the end for complicated user interfaces. The Blender MCP server that's going viral right now is just the first hint of a much larger...
In the fast-paced world of sports analytics, where deadlines are dictated by game schedules and championships, engineering teams face unique challenges. For Cleat Street, a fast-growing sports analytics company launched in 2019 out of a startup lab at Columbia University, the pressure to deliver quality code with a small team meant finding creative solutions to accelerate their development process. Now based in San Francisco with additional offices in Denver, Cleat Street has rapidly expanded...
When a senior developer tells you they spent $125 on API credits in a single day but produced 35,000 lines of functional code, is that expensive or a bargain? This question gets to the heart of how we should think about AI coding agents. While the sticker shock of token-based pricing might initially make you wince, the calculus changes dramatically when you consider the actual value delivered. Beyond Subscription Limitations Thus far, we've seen two pricing models for AI coding agents: subscription...
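As a rough sanity check on those numbers, here is the back-of-the-envelope arithmetic, using only the figures quoted in the example above; your own costs will obviously vary.

```python
# Back-of-the-envelope cost-per-line calculation using the figures above.
api_spend_usd = 125        # one day of API credits
lines_of_code = 35_000     # functional lines produced that day

cost_per_line = api_spend_usd / lines_of_code
print(f"${cost_per_line:.4f} per line of code")   # ~$0.0036 per line
```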
I've been using Cline since it was released, and it's completely transformed how I approach building software. As Cline's product marketer, I've had the privilege of seeing this tool evolve, but what's more exciting is how it's evolved my own thinking about what I can create. I produce music on the side, and recently I had this idea: what if I could build a Cline-like assistant for Logic Pro? Something that would let me draw MIDI files and handle mixing through conversation, so I could focus more...
The emergence of Model Context Protocol (MCP) has created something we rarely see in technology: a genuinely new market with minimal established players and tremendous growth potential. As AI agents like Cline become increasingly central to developer workflows, MCP plugins represent not just a technical opportunity, but a significant business one. While many developers are focused solely on the technical aspects of building MCP plugins, forward-thinking builders have already begun establishing...
I remember stories about the tech booms of past decades. In the late 1990s, the web revolution began. Teenagers who knew HTML were earning thousands per project, building websites for local businesses. In the late 2000s, a similar trend emerged with mobile apps. Young developers were profiting from creating applications for smartphones. "This is the future," people would say. Many were skeptical about how these basic websites or simple apps could change anything. We know how those predictions turned out...
At Cline, we're committed to creating the best possible open-source coding agent for developers. Today, we're introducing anonymous telemetry to help us understand how Cline is actually being used and where we can make meaningful improvements. Privacy-First Telemetry We've designed our telemetry system with your privacy as the top priority: * Completely anonymous - we use PostHog to collect aggregated usage data that cannot be traced back to individuals * Strictly limited scope - we only...
Yesterday, I asked a developer if he was using MCP in Cline. "I still don't understand what Model Context Protocol is," he said. I had to smile. Here was a developer missing out on powerful tools because he was getting caught up in confusing technical jargon. It's like developers are leaving features on the table because the name sounds too complicated. "Forget Model Context Protocol - that's just distracting you from what matters," I told him. "Think of these as extra tools you can give to Cline...
Discover what MCP servers are and how they work. Learn how Cline leverages Model Context Protocol (MCP) servers to revolutionize AI tool integration and development. Read on for in-depth insights! I was at CodeGen Night in San Francisco on Tuesday and had a chance to talk with some Cline power users about MCP (Model Context Protocol) servers. These were sophisticated developers who'd heard of MCP but weren't using it because they weren't fully sure how it worked or why it mattered. This conversation...
When building complex applications, the constant context switching between research, coding, and documentation can kill your momentum. While working on an MP3 sharing application, I discovered three powerful patterns for using Perplexity's MCP server with Cline that have transformed my development workflow. ❓ "What is Perplexity MCP?" Perplexity is an AI research assistant that excels at finding and synthesizing up-to-date information from across the web. When integrated with Cline through the...
Imagine a detective who loses his memory every time he falls asleep. To solve cases, he develops an ingenious system: tattooing critical facts on his body and keeping a carefully organized set of Polaroid photos. Each time he wakes up, he can quickly rebuild his understanding by following his own documentation system. That's the plot of Memento, and it inspired how we solved a common problem with AI coding assistants. Here's...
Two days ago, Andrej Karpathy set Tech Twitter ablaze with a provocative idea he calls "vibe coding" – where you "fully give in to the vibes, embrace exponentials, and forget that the code even exists." Using AI tools (like Cline), he demonstrated building an entire LLM reader application in about an hour, barely touching the keyboard. The reactions were predictable and eerily familiar to anyone who knows their computing history. But to understand why, we need to go back. Way back. What is...
Picture this: You've got an amazing idea for a web app, but when you start googling "how to build a web app," you're hit with an avalanche of frameworks, tools, and conflicting advice. Should you learn React? Vue? Svelte? What even is a "backend"? And don't get me started on deployment... I get it. I've spent the last year helping complete beginners navigate this exact challenge. Here's what I've learned: The problem isn't that building web apps is hard – it's that most tutorials are teaching it...
Ever stared at Cline wondering if it's about to hit a wall? You're deep in a complex refactoring task, and suddenly it seems to "forget" what you were discussing just moments ago. If this sounds familiar, you're running into one of the most common yet least understood aspects of AI coding assistants: context windows. Today, we're introducing a new feature in Cline that makes this invisible limit visible: the Context Window Progress Bar. What's a Context Window, Anyway? Think of a context window...
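Under the hood, a progress bar like this just compares an estimate of the tokens in your conversation against the model's context limit. Here is a rough Python sketch of that idea; the 4-characters-per-token heuristic and the 200,000-token window are illustrative assumptions, not Cline's actual accounting.

```python
# Rough sketch: estimate what fraction of a context window a conversation uses.
# The chars-per-token heuristic and window size are illustrative assumptions.
CONTEXT_WINDOW_TOKENS = 200_000
CHARS_PER_TOKEN = 4  # crude average; a real tokenizer is more accurate

def rough_token_count(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

conversation = [
    "Refactor the auth module to use async handlers.",
    "Here is src/auth.py: " + "x" * 40_000,  # a large file pasted into context
]
used = sum(rough_token_count(m) for m in conversation)
print(f"~{used:,} tokens used ({used / CONTEXT_WINDOW_TOKENS:.1%} of the window)")
```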
If you've used DeepSeek's R1 (or V3 for that matter), you've probably been impressed by its performance for the price. And if you've run into issues with its API recently, your next thought was probably, “Hey, I’ve got a decent computer—maybe I can run this locally myself!” Then reality hits: the full DeepSeek R1 model needs about 1,342 GB of VRAM—no, that’s not a typo. It’s designed to run on a cluster of 16 NVIDIA A100 GPUs, each with 80GB of memory (source). Let’s break down what...
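That headline number falls straight out of the parameter count. A minimal sketch of the weights-only math, assuming 16-bit precision (2 bytes per parameter) and ignoring KV cache and activations:

```python
# Weights-only VRAM estimate for the full 671B-parameter model at FP16/BF16.
# Real-world requirements are higher once KV cache and activations are included.
params = 671e9          # 671 billion parameters
bytes_per_param = 2     # 16-bit weights

weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:,.0f} GB just to hold the weights")             # ~1,342 GB
print(f"~{weight_gb / 80:.1f} x A100 80GB GPUs for weights alone")  # ~16.8
```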
Last week, Chinese AI startup DeepSeek caused the biggest single-day drop in NVIDIA's history, wiping nearly $600 billion from the chip giant's market value. But while Wall Street panicked about DeepSeek's cost claims, Cline users in our community were discovering a more nuanced reality. The Promise vs The Reality "R1 is so hesitant to open and read files while Claude just bulldozes through them," observed one of our users. This perfectly captures the gap between DeepSeek's impressive benchmarks...
Updated March 4, 2025 to reflect recent developments. Remember when GitHub Copilot first launched and we thought AI-assisted coding couldn't get more revolutionary? Two years later, we're seeing a fascinating divergence in how AI coding assistants approach development. With recent releases from both Cline (now 3.5) and Cursor (0.46), we're witnessing not just a battle of features, but a philosophical split in how AI should partner with developers. I've watched both tools mature. Let's...
Picture this: You're deep into development with your AI assistant, trying to juggle multiple tools – GitHub issues need updating, tests need running, and documentation needs reviewing. But instead of the seamless workflow you imagined, you're stuck with manual context switching and disconnected tools. Your AI assistant, brilliant as it is, feels trapped in its chat window. This is where the Model Context Protocol (MCP) changes everything. It's not just another developer tool – it's a fundamental...
In an interesting coincidence, DeepSeek released R1 on the same day we launched Plan & Act modes in Cline. And something fascinating started happening immediately: developers began naturally using R1 for planning phases and 3.5-Sonnet for implementation. Not because anyone suggested it – it just made sense. What's Actually Happening Here's what developers discovered works best: 1. Start new tasks in Plan mode using R1 ($0.55/M tokens)...
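The economics behind that split are easy to see. A hedged illustration, using the $0.55/M figure quoted above for R1 and Claude 3.5 Sonnet's published $3/M input-token rate; the planning token count is made up for the example.

```python
# Illustrative cost comparison for a planning-heavy phase.
# R1 price is the $0.55/M figure quoted above; Sonnet's $3/M is its published
# input-token rate. The 500k planning tokens are a made-up example.
planning_tokens = 500_000
r1_per_million = 0.55
sonnet_input_per_million = 3.00

print(f"Plan with R1:     ${planning_tokens / 1e6 * r1_per_million:.2f}")             # ~$0.28
print(f"Plan with Sonnet: ${planning_tokens / 1e6 * sonnet_input_per_million:.2f}")   # $1.50
```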
The best AI engineers I know follow a specific pattern. They don't obsess over prompt crafting – they obsess over planning. There's a reason for this, and it's not what most people think. The Reality Check Here's what typically happens when someone starts working with AI: 1. They throw requirements at the model 2. They get mediocre outputs 3. They blame their prompting skills 4. They spend hours "optimizing" prompts 5. They still get mediocre results Sound familiar? But here's what...
DeepSeek released V3 on December 26, and for the first time since 3.5-Sonnet's release, we have a serious contender for best coding LLM—one that's 53x cheaper and was trained for just $5.5M. Since then, Cline has seen installations skyrocket – nearly 50,000 new installs at double our usual daily rate. DeepSeek v3 is mind-blowing. I’m fascinated by the engineering elegance and scrappiness behind it. - MoE + MLA architecture (671B → 37B active) - DualPipe + FP8, incredibly impressive optimizations...