Zed was supposed to be the answer to Atom / Sublime Text, in my opinion, and I do kind of want to use it as my daily driver, but it just isn’t there yet for me. It’s a shame, because I like its aesthetics as a product more than the competition out there.
Just the other day I tried using it for something it more or less advertises itself as being superior at: instantly loading a giant text file and letting me work on it.
I tried opening a 1 GB text file to do a simple find/replace, only to watch macOS run out of system memory, with Zed quickly using 20 GB for that one search operation.
I then switched to VS Code, which, granted, opened it in a buffered way with limited capability, but got the job done.
Maybe that was a me issue, I don’t know, but aside from this one-off, it doesn’t have good extension support from the community for my needs yet. I hope it gets there!
I feel like Zed stopped working on the editor itself since AI was rolled out. I also wanted it to be the open-source alternative to Sublime, but nothing comes close.
If the full magnitude of products that stopped working on the main product and started to try to shoehorn AI in was known, the economy would collapse overnight.
IntelliJ wasn't immune to it either. The number of bugs has exploded since they started adding AI features to their IDE, none of which I used for more than 5 minutes.
It really is concerning. I keep an Excel sheet with links to all the companies I could apply to whenever I change jobs, and checking it the other day, practically every row was now selling an AI product.
Yeah, exactly this! I get they want to stay in the game and follow the market, but I’m sad they’re not being more aggressive about that original vision. I still think there could be a huge payoff for them if they invested more in their brand and the aesthetics of a more polished and comfy editor.
The way I see it, we’re sort of living in a world where UX is king. (Looking at you Cursor)
I feel like there’s a general sentiment where folks just want a sense of home with their tools more than anything. Yes they need to work, but they also need to work for you in your way. Cursor reinvented autocomplete with AI and that felt like home for most, what’s next? I see so much focus on Agents but to me personally that feels more like it should live on the CI/CD layer of things. Editors are built for humans, something isn’t quite there yet, excited to see how it unfolds.
There have been improvements recently, but it still has some of the worst text rendering of any editor on macOS, if you have a non-4K display plugged in. Rendering text is kind of a big deal!
I think their recent push to delegate to CLI agents in the agent panel is the right direction. Claude Code has been running in Zed for the past month. Sure, there are SDK limitations and kinks to iron out, but it’s moving quickly. I’m into it.
I get what you are saying, and I think they are doing a good job there as well. That said, it still feels like something is missing in that whole workflow to me.
I sometimes worry if we are moving too fast for no reason. Some things are becoming standards in an organic way but they feel suboptimal in my own little bias bubble corner.
Maybe I am getting old and struggling to adapt to the new generation way of getting work done, but I have a gut feeling that we need to revisit some of this stuff more deliberately.
I still see Agents as something that will be more like a background thread that yields rather than a first class citizen inside the Editor you observe as it goes.
I don't know about you, but I feel an existential dread whenever I prompt an Agent and turn into a vegetable watching it breathe. Am I using it wrong? Should I be leaving and coming back later? Should I pick a different file and task while it's doing its thing?
Similar experience: I added a folder to my Zed project that was too big, causing Zed to lock up and eventually crash. But because the default setting was to re-open projects on launch, I was stuck in a loop where I couldn’t remove the folder either. Eventually I found a way to clear the recent projects, load an empty editor, and change the setting to avoid it in the future.
You can open large codebases in JetBrains IDEs and it takes forever to index, but it shouldn't outright crash or completely freeze.
You can open the kernel in CLion. Don't expect the advanced refactoring features to work, but it can handle a ~40-million-line project folder, for example.
My experience with Zed was that it was a buggy mess with strange defaults. I had high hopes until I tried it but it was not even close to working for my pretty normal use case of coding C so I went back to Sublime.
In my experience, BBEdit will open files that kill other editors: "Handling large files presents no intrinsic problems for BBEdit, though some specific operations may be limited when dealing with files over 2GB in size."
I have always found Sublime to be the best at large files, well over 1 GB. Since you mention BBEdit, maybe this is some Mac-specific issue? I really don't know. But at least among people I know, opening large files has effectively become its main USP.
Should be noted that the linked post is almost 15 years old at this point too, so perhaps not the most up to date either.
Speaking of TextEdit, I like what the folks at CodeEdit are doing. They are moving slow and focusing on just the core parts. Maybe I should go give them a try too!
Speaking of aesthetics, I switched back to VSCode but I ended up installing the theme "Zed One Theme" and switched the editor's font to "IBM Plex Mono".
I know it's not Zed, but I am pretty satisfied with the results.
I'm going through this with pricing for our product and the battle is:
Customer: wants predictable spend
Sales/Marketing: wants an amazing product that is easy to sell (read: does the mostest for the leastest)
Advisors: want to see the SaaS model in all its glory (i.e. 85% margin, primarily from oversubscription of infrastructure)
Finance: wants to make money and is really worried about variable costs ruining SaaS profit margin
A couple of thoughts:
1. AI-based cost is mostly variable. So are things like messaging, phone minutes, and so on. Cloud expenses are also variable... There's a theme here, and it's a different one from classic fixed-cost SaaS.
2. The unit of value most AI software delivers is work that is being done by people, or that should be being done and isn't.
3. Seems like it's time to make friends with a way to make money other than subscription pricing.
Token based pricing generally makes a lot of sense for companies like Zed, but it sure does suck for forecasting spend.
Usage pricing on something like AWS is pretty easy to figure out. You know what you're going to use, so you just do some simple arithmetic and you've got a pretty accurate idea. Even with serverless it's pretty easy. Tokens are so much harder, especially in a development setting. It's so hard to have any reasonable forecast of how a team will use it, and how many tokens will be consumed.
I'm starting to track my usage with a bit of a breakdown in the hope that I'll find a somewhat reliable trend.
I suspect this is going to be one of the next big areas in cloud FinOps.
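In case it helps anyone starting the same kind of tracking, here's the minimal sketch I'd begin from; the CSV columns and function names are just my own convention, not any standard tooling:

```python
import csv
from collections import defaultdict

# Append one usage record per session; the column order is my own convention.
def log_usage(path, day, model, input_tokens, output_tokens, cost_usd):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([day, model, input_tokens, output_tokens, cost_usd])

# Naive forecast: average daily spend over the logged window, times 30.
def forecast_monthly(path):
    daily = defaultdict(float)
    with open(path) as f:
        for day, _model, _in, _out, cost in csv.reader(f):
            daily[day] += float(cost)
    return sum(daily.values()) / len(daily) * 30 if daily else 0.0
```

Even a crude average like this beats guessing, though a per-model breakdown probably matters once agents are in the mix.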
My rant on token-based pricing is primarily about the difficulty of consistently forecasting spend... and also that the ongoing value of a token is controlled by the vendor. "The house always wins."
There are enough vendors that it's difficult for any one vendor to charge too much per token. There are also a lot of really good open-weight models that your business could self-host if the hosted vendors all conspire to charge too much per token. (I believe it's only economical to self-host big models if you're using a lot of tokens, so there is a breakeven point.)
> I suspect this is going to be one of the next big areas in cloud FinOps.
It already is. There’s been a lot of talk and development around FinOps for AI and the challenges that come with that. For companies, forecasting token usage and AI costs is non-trivial for internal purposes. For external products, what’s the right unit economic? $/token, $/agentic execution, etc? The former is detached from customer value, the latter is hard to track and will have lots of variance.
With how variable output size can be (and input), it’s a tricky space to really get a grasp on at this point in time. It’ll become a solved problem, but right now, it’s the Wild West.
Until the rug inevitably gets pulled on those as well. It's not in your interest to buy a $200/mo subscription unless you use >$200 of tokens per month, and long term it's not in their interest to sell you >$200 of tokens for a flat $200.
> It's not in your interest to buy a $200/mo subscription unless you use >$200 of tokens per month
This is only true if you can find someone else selling them at cost.
If a company has a product that cost them $150, but they would ordinarily sell piecemeal for a total of $250, getting a stable recurring purchase at $200 might be worthwhile to them while still being a good deal for the customer.
The pricing model works as long as people (on average) think they need >$200 worth of tokens per month but actually do something less, like $170/month. Is that happening? No idea.
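The arithmetic behind that bet is trivially small, which is kind of the point. A toy cohort (every number below is invented):

```python
# Flat-plan economics: a $200/mo plan is profitable as long as the *average*
# cost to serve stays under the price, even if some subscribers blow past it.
price = 200
usage_cost = [120, 150, 170, 180, 420]  # $ of tokens each subscriber burns

avg = sum(usage_cost) / len(usage_cost)  # 208.0
margin_per_sub = price - avg             # -8.0: this cohort loses money
```

One whale can flip a whole cohort underwater, which is presumably why the plans come with opaque rate limits rather than hard token counts.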
Maybe that's what Anthropic is banking on. From what I gather, they obscure Max accounts' actual token spend, so it's hard for subscribers to tell whether they're getting their money's worth.
Well, the $200/mo plan works as long as the $100/mo plan is insufficient for some people, which works as long as the $17/mo plan is insufficient for some people.
I don't see how it matters to you that you aren't saturating your $200 plan. You have it because you hit the limits of the $100/mo plan.
I don't know about people using CC on a regular basis, but according to `ccusage`, I can trivially go over $20 of API credits in a few days of hobby use. I'd presume that if you're paying for a $200 plan, you know you have heavy usage and can easily exceed that.
It also seems like a great idea to create business models where the companies aren't incentivised to provide the best product possible. Instead they'll want to create a product just useful enough not to drive away users, but just useless enough to tempt people up a tier: "I'm so close, just one more prompt and it will be right this time!"
Edit: To be clear, I'm not talking about Zed. I'm talking about the companies making the models.
While Apple is incentivized to ship a smaller battery to cut costs, it is also incentivized to make its software as efficient as possible to make the best use of the battery it does ship.
I agree that tokens are a really hard metric for people. I think most people are used to getting something with a certain amount of capacity per time and dealing with that. If you get a server from AWS, you're getting a certain amount of capacity per time. You still might not know what it's going to cost you to do what you want - you might need more capacity to run your website than you think. But you understand the units that are being billed to you and it can't spiral out of control (assuming you aren't using autoscaling or something).
When you get Claude Code's $20 plan, you get "around 45 messages every 5 hours". I don't really know what that means. Does that mean I get 45 total conversations? Do minor followups count against a message just as much as a long initial prompt? Likewise, I don't know how many messages I'll use in a 5 hour period. However, I do understand when I start bumping up against limits. If I'm using it and start getting limited, I understand that pretty quickly - in the same way that I might understand a processor being slower and having to wait for things.
With tokens, I might blow through a month's worth of tokens in an afternoon. On one hand, it makes more sense to be flexible for users. If I don't use tokens for the first 10 days, they aren't lost. If I don't use Claude for the first 10 days, I don't get 2,160 message credits banked up. Likewise, if I know I'm going on vacation later, I can't use my Claude messages in advance. But it's just a lot easier for humans to understand bumping up against rate limits over a more finite period of time and get an intuition for what they need to budget for.
Both prefill and decode count against Claude’s subscriptions; your conversations are N^2 in conversation length.
My mental model is they’re assigning some amount of API credits to the account and billing the same way as if you were using tokens, shutting off at an arbitrary point. The point also appears to change based on load / time of day.
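A toy illustration of why the quadratic growth bites (numbers are illustrative; real billing also weights input vs. output tokens differently):

```python
# Every turn re-sends the entire conversation history as prefill, so total
# billed tokens grow roughly quadratically with conversation length.
def cumulative_prefill(turn_tokens):
    total, context = 0, 0
    for t in turn_tokens:
        context += t      # this turn is appended to the running context
        total += context  # the whole context is billed again as prefill
    return total

print(cumulative_prefill([100] * 10))  # 5500 tokens billed for 1000 written
```

This is why starting a fresh conversation is often the cheapest "optimization" available.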
I'm personally looking forward to this change because I currently pay $20/month just to get edit prediction. I use Claude Code in my terminal for everything else. I do wish I could just pay for edit prediction at an even lower price, but I can understand why that's not an option.
I'm curious if they have plans to improve edit prediction, though. It's honestly kind of garbage compared to Cursor, and I don't think I'm being hyperbolic by calling it garbage. Most of the time its suggestions aren't helpful, but the 10-20% of the time it is helpful is worth the cost of the subscription for me.
Yeah, Cursor tab completion is basically in the realm of magical mind reading and might still be the most insane productivity demonstration of LLM tech in software.
It obsoleted Vim macros and multiline editing, for example. Now you make one change, the LLM derives the rest, and you just press tab.
It's interesting that the Cursor team's first iteration is still better than anything I've seen in their competitors. It's been an amazing moat for a year(?) now.
I agree. I wish they focused more on it. I'd love to be able to give it a few sentences of instructions to make it even more effective for me. It's so much more of a productivity boon than all the coding agent stuff ever was.
Making this prediction now: LLM usage will eventually be priced in bytes.
Why: LLMs are increasingly becoming multimodal, so an image "token" or video "token" is not as simple as a text token. Also, it's difficult to compare across competitors because tokenization is different.
Eventually prices will just be in $/MB of data processed. Just like bandwidth. I'm surprised this hasn't already happened.
> Why: LLMs are increasingly becoming multimodal, so an image "token" or video "token" is not as simple as a text token.
For autoregressive token-based multimodal models, image tokens are as straightforward as text tokens, and there is no reason video tokens wouldn’t also be. (If models also switch architectures and multimodal diffusion models, say, become more common, then, sure, a different pricing model more closely tied to the actual compute cost drivers of that architecture is likely, but even that isn’t likely to be bytes.)
> Also, it's difficult to compare across competitors because tokenization is different.
That’s a reason for incumbents to prefer not to switch, though, not a reason for them to switch.
> Eventually prices will just be in $/Mb of data processed.
More likely they would be priced in floating-point operations expended processing them, but using tokens (which are the primary cost drivers for current LLM architectures) will probably continue as long as the architecture itself is dominant.
> For autoregressive token-based multimodal models, image tokens are as straightforward as text tokens, and there is no reason video tokens wouldn’t also be.
In classical computing, there is a clear hierarchy: text < images <<< video.
Is there a reason why video computing using LLMs shouldn't be much more intensive and therefore costly than text or image output?
No, it'll certainly be more expensive in any conceivable model that handles all three modalities, but if the model uses an architecture like current autoregressive, token-based multimodal LLMs/VLMs, tokens will make just as much sense as the basis for pricing, and be similarly straightforward, as with text and images.
That's the thing, I can't visualize (and I don't think most people can) what "tokens" represent for image or video outputs.
For text I just assume them to be word stems, or more like word-family members (cat-feline-etc).
For images and videos I guess each character, creature, idea in it is a token? Blue sky, cat walking around, gentleman with a top hat, multiplied by the number of frames?
> For images and videos I guess each character, creature, idea in it is a token?
No, for images, tokens would, I expect, usually be asymptotically proportional to the area of the image (this is certainly the case with input token for OpenAIs models that take image inputs; outputs are more opaque); you probably won’t have a neat one-to-one intuition for what one token represents, but you don’t need that for it to be useful and straightforward for understanding pricing, since the mathematical relationship of tokens to size can be published and the size of the image is a known quantity. (And videos conceptually could be like images with an additional dimension.)
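For a concrete feel, here's a sketch loosely modeled on the tile-based scheme OpenAI has published for some of its vision models (a flat base cost plus a fixed cost per 512x512 tile). Treat the constants as illustrative rather than current pricing, and note that the real models also rescale the image first, which this skips:

```python
import math

# Area-proportional image token estimate: base cost + cost per 512px tile.
def image_tokens(width, height, base=85, per_tile=170, tile=512):
    tiles = math.ceil(width / tile) * math.ceil(height / tile)
    return base + per_tile * tiles

print(image_tokens(1024, 1024))  # 765: 85 base + 170 * 4 tiles
```

The point being: you never need an intuition for "what one image token depicts"; the published formula plus the image dimensions is enough to price it.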
CPU/GPU time is opaque to me before I send my data, but tokens I can count before I decide to send it. That means I can verify the metering. With CPU time I send the data and then the company says "That cost X CPU units, which is $500".
This whole business model of trying to shave off or arbitrage a fraction of the money going to OpenAI and Anthropic just sucks. And it seems precarious. There's no honest way to resell tokens at a profit, and everyone knows it.
Sorry, how is this new pricing anything but honest? They provide an editor you can use to
- optimize the context you send to the LLM services
- interact with the output that comes out of them
Why does that not justify charging a fraction of your spend on the LLM platform? This is pretty much how every service business operates.
For companies where that is their entire business model I absolutely agree. Zed is a solid editor with additional LLM integration features though, so this move would seem to me to just cover their costs + some LLM integration development funds. If their users don't want to use the LLM then no skin off Zed's back unless they've signed some guaranteed usage contract.
>There's no honest way to resell tokens at a profit, and everyone knows it.
Agree with the sentiment, but I do think there are edge cases.
e.g. I could see a place like openrouter getting away with a tiny fractional markup based on the value they provide in the form of having all providers in one place
The whole business model, even for OAI/Anthropic, is unsustainable: they're already running at a huge loss, and will be for the foreseeable future. The economics simply don't work, unfortunately or not.
Good change. I’m not a vibe coder; I use Zed Pro's LLM integration more like glorified Stack Overflow. I value Zed more for being an amazing editor for the code I actually write and understand.
I suspect I’m not alone on this. Zed is not the editor for hardcore agentic editing and that’s fine. I will probably save money on this transition while continuing to support this great editor for what it truly shines at: editing source code.
Prediction: the only remaining providers of AI-assisted tools in a few years will be the LLM companies themselves (think Claude Code, Codex, Gemini, future xAI/Alibaba/etc.), via CLIs + integrations such as ASP.
There is very little value that a company that has to support multiple different providers, such as Cursor, can offer on top of tailored agents (and "unlimited" subscription models) by LLM providers.
I recently started using Codex (OpenAI's answer to Claude Code) and it has a VSCode extension that works like a charm. I tried out Windsurf a while ago, and the Codex extension simply does everything Windsurf did. I guess it doesn't show changes as well (it shows diffs in its own window instead of in the file), but I can just check a git diff graphically (current state vs. HEAD) if I really want that.
I am really tempted to buy ChatGPT Pro, and probably would have if I lived in a richer country (unfortunately, purchasing power parity doesn't equalize for tech products). The problem with Windsurf (and presumably Cursor and others) is that you buy the IDE subscription and then still have to worry about usage costs. With Codex/Claude Code etc., yeah, it's expensive, but as long as you're within the usage limits, which are hopefully reasonable at the more expensive tiers, you don't have to worry about it. AND you get the web and phone apps with GPT-5 Pro, etc.
If you look at even the Claude/OpenAI chat UIs, they kind of suck. Not sure why you think someone else can't/won't do it better. Yes, the big players will copy what they can, but they also need to chase insane growth and getting every human on earth paying for an LLM subscription.
A tool that is good for everyone is great for no one.
Also, I think we're seeing the limits on "value" of a chat interface already. Now they're all chasing developers since there's a real potential to improve productivity (or sadly cut-costs) there. But even that is proving difficult.
I don't know. Foundation models are very good, and you can get a surprising amount of mileage from them by using them with low-level interfaces. But personally I think the companies building the development tools of the future will use LLMs to build systems with increasing capabilities. A lot of engineering challenges remain in scaling LLMs to take over day-to-day programming, and the current tools are scratching the surface of what's possible when you combine LLMs with traditional systems engineering.
But the reason LLMs aren't used to build features isn't because they are expensive.
The hard work is the high-level stuff: deciding on the scope of the feature, how it should fit into the project, what kind of extensibility it might need to be built with, what other components can be extended to support it (and more), and then reviewing all the work that was done.
I love this! Finally a more direct way for companies to sponsor open source development. GitHub Sponsors helps, but it is often so vague where the funding is going.
Unless companies also donate money to sponsor the code review that will have to be done by a real human being, I could see this idea being a problem for maintainers. Yes, you have to review a human being's code as well, but a human being is capable of learning and carrying that learning forward, so their next PR will be better, and you can look at their past PRs to evaluate whether they're a troll/bad actor or someone who genuinely wants to help the project. An LLM won't learn and will always spit out valid _looking_ code.
I was just thinking this morning about how Zed should rethink their subscription, because it's a bit pricey if they're going to let you just use Claude Code. I'm in the process of trying out Claude and figured just going to them for the subscription makes more sense.
I think Zed had a lot of good concepts where they could make paid AI benefits optional longer term. I like that you can join your devs to look at different code files and discuss them. I might still pay for Zed's subscription in order to support them long term regardless.
I'm still upset that so many hosted models don't just let you use your subscription in things like Zed or JetBrains AI. What's the point of a monthly subscription if I can only use your LLM in a browser?
> if they're going to let you just use Claude Code
I'm pretty sure that's only while it's in preview, just like they were giving away model access before that was formally launched. Get it while it's hot.
> I'm still upset that so many hosted models don't just let you use your subscription in things like Zed or JetBrains AI. What's the point of a monthly subscription if I can only use your LLM in a browser?
This is yet another reason why CLI-based coding agents will win. Every editor out there trying to be the middleman between you and an AI provider is nuts.
I'm glad to see this change. I didn't much use the AI features, but I did want to support Zed. $20 seemed a bit high for that signal. $10 seems right. $5 with no tokens would be nicer.
Tokens are an implementation detail that have no business being part of product pricing.
It's deliberate obfuscation. First, there's the simple math of converting tokens to dollars. This is easy enough; people are familiar with "credits". Credits can be obfuscation, but at least they're honest. The second and more difficult obfuscation to untangle is how one converts "tokens" to "value".
When the value customers receive from tokens slips, they pay the same price for the service. But generative AI companies are under no obligation to refund anything, because the customer paid for tokens, and they got tokens in return. Customers have to trust that they're being given the highest quality tokens the provider can generate. I don't have that trust.
Additionally, they have to trust that generative AI companies aren't padding results with superfluous tokens to hit revenue targets. We've all seen how much fluff is in default LLM responses.
Pinky promises don't make for healthy business relationships.
Tokens aren't that much more opaque than RAM GB-seconds for serverless functions or whatever. You'd have to know the entire infra stack to really understand it. I don't really have a suggestion for a better unit for that kind of stuff.
Doesn’t prompt pricing obfuscate token costs by definition? I guess the alternative is everyone pays $500/mo. (And you’d still get more value than that.)
I'm wondering why they couldn't have foreseen this. Was it really a failure to predict the need to charge for tokens eventually, or was it planned that way from the start: get people to use the unlimited option for a bit, they get hooked, then switch them to per-token subscriptions?
I completely get why this pricing is needed and it seems fair. There’s a major flaw in the announcement though.
I get that the pro plan has $5 of tokens and the pricing page says that a token is roughly 3-4 characters. However, it is not clear:
- Are tokens input characters, output characters, or both?
- What does a token cost? I get that the pricing page says it varies by model and is "API list price + 10%", but nowhere does it say what those API list prices are. Am I meant to go to the OpenAI, Anthropic, and other websites to get that pricing information? Shouldn't that be in a table on the page, with each hosted model listed?
—
I’m only a very casual user of AI tools, so maybe this is clear to people deep in this world, but it’s not clear to me just from Zed's pricing page exactly how far $5 per month will get me.
The list is here: https://zed.dev/docs/ai/models. Thanks for the feedback, we'll make sure this is linked from the pricing page. I think it got lost in the launch shuffle.
It’s hard for me to conceptualise what a million tokens actually looks like, but I don’t think there’s a way around that, aside from providing some concrete examples of inputs, outputs, and the number of tokens they actually use. I guess it would become clearer after using it a bit.
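For a back-of-envelope conversion, something like this is all it takes. The $15-per-million figure below is a placeholder I made up, not Zed's or any provider's actual rate:

```python
# How many tokens (and roughly how much text) a fixed budget buys.
def tokens_for_budget(budget_usd, price_per_million_usd):
    return budget_usd / price_per_million_usd * 1_000_000

def approx_chars(tokens, chars_per_token=3.5):  # Zed's "3-4 characters"
    return tokens * chars_per_token

# e.g. $5 at a hypothetical $15/M output tokens: ~333k tokens, ~1.2M chars
```

The catch, as others note, is that agentic workflows re-send context constantly, so the characters you *read* are a small fraction of the tokens you're billed for.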
> Token-agnostic prompt structures obscure the cost and are rife with misaligned incentives
That said, token-based pricing has misaligned incentives as well: as the editor developer (charging a margin over the number of tokens) or as the AI provider, you benefit from more verbose input fed to the LLMs, and of course from more verbose output.
Not that I'm really surprised by the announcement, though; the old pricing was somewhat obviously unsustainable.
As a corporate purchaser, "bring your own key" is just about the only way we can allow our employees to stay close to the latest happenings in a rapidly moving corner of the industry.
We need to have a decent amount of trust in the model execution environment and we don't like having tons of variable-cost subscriptions. We have that trust in our corporate-managed OpenAI tenant and have good governance and budget controls there, so BYOK lets us have flexibility to put different frontends in front of our trusted execution environment for different use cases.
The companies actually providing the models charge by token and this lets the tooling avoid having to do cost planning for something with a bunch of unknowns and push the risk of overspend to customers.
I really want to be able to see the outline panel, project panel, agent panel, and terminal panel all at the same time. If I stay at $20/mo, can you guys fix this, please?
For those of us building agentic tools that require similar pricing, how does one implement it? OpenRouter seems good for the MVP, but I'm curious if there are alternatives down the line.
I love this for Zed. I hate that I’m going to have deal with the model providers more directly. Because I don’t know what tokens are.
It’s like if McDonald's changed their pricing model to some complex formula involving nutrition properties (calories, carbs, etc.) plus some other things (carbon tax, local state taxes, whatever) and moved to a pay-as-you-bite model. You start eating a Big Mac, but every bite has different content proportions, so every bite changes in price as you go. Only through trial and error would you figure out how to eat. And the fact that the “complex formula” is prone to change in real time at any point makes it impossible to get excited about eating.
(I work at Zed)
No, you aren't. We care about you using Zed the editor, and we provide Zed Pro for folks who decide they'd like to support Zed or our billing model works for them. But it's simply an option, not our core business plan, and this pricing is in place to make that option financially viable for us. As long as we don't bear the cost, we don't feel the need (or the right) to put ourselves in the revenue path with LLM spend.
Will you consider providing a feature to protect me from accidentally using my Zed account after the $5 is exhausted (or else a plan that only includes edit predictions)? I can't justify to myself continuing my subscription if there's a risk I will click the wrong button with identical text to the right button, and get charged an additional 10% for it. I get you need to be compensated for risk if you pay up front on my behalf, but I don't need you to do that.
I understand that there's nothing you could do to protect me if I make a prompt that ends up using >$5 of usage but after that I would like Zed to reject anything except my personal API keys.
Their burn agent mode is pretty badass, but is super costly to run.
I'm a big fan of Zed but tbf I'm just using Claude Code + Nvim nowadays. Zed's problem with their Claude integration is that it will never be as good as just using the latest from Claude Code.
I wonder if first-party offerings like Codex and Claude will follow suit. Most "agents" are utter nonsense, but they cooked with the CLI tools. It'd be a shame to let go of them.
Eventually that is the plan. Like we saw with Claude Code, they want developers to get a taste of the unlimited and unrestrained power of a state-of-the-art model like Opus 4, then slowly limit usage until you fully transition to metered billing, and deprecate subscription-based billing.
$10 GitHub Copilot Pro plan works for me in VSCode.
I've been exclusively using the Claude Sonnet 4 model in VSCode, and so far I've used 90% of the premium quota by the end of the month. I can always use GPT-4.1 or GPT-5 mini for free if need be.
So they're essentially charging $5/month for unlimited tab completions, when you get 2k for free. That seems reasonable, many could just not pay anything at all.
But in the paid plan they charge 10% over API prices for metered usage... and also support bring your own API. Why would anyone pay their +10%, just to be nice?
This is the same problem cursor and windsurf are facing. How the heck do you make money when you're competing with cline/roocode for users who are by definition technically sophisticated? What can you offer that's so wonderful that they can't?
seems fine - they're aligning their prices with their costs.
presumably everyone is just aiming or hoping for inference costs to go down so much that they can do an unlimited-with-ToS model like most home Internet access etc, because this intermediate phase of having to count your pennies to ask the matrix multiplier questions isn't going to be very enjoyable or stable, or encourage good companies to succeed.
No, not at all! At my org it's around $7000 a month for the entire org - my personal usage is around $2-10 a day. Usually less than the price of my caffeinated beverages.
Zed and Warp were two promising Rust-based projects that I closely monitor. Currently, both projects are progressing towards becoming a generic AI Agentic code platform.
There have been improvements recently, but it still has some of the worst text rendering of any editor on macOS, if you have a non-4K display plugged in. Rendering text is kind of a big deal!
I think their recent push to delegate to CLI agents in the agent panel is the right direction. Claude Code has been running in Zed for the past month. Sure, there are SDK limitations and kinks to iron out, but it’s moving quickly. I’m into it.
I get what you are saying, and I think they are doing a good job there as well. That said, it still feels like something is missing in that whole workflow to me.
I sometimes worry if we are moving too fast for no reason. Some things are becoming standards in an organic way but they feel suboptimal in my own little bias bubble corner.
Maybe I am getting old and struggling to adapt to the new generation way of getting work done, but I have a gut feeling that we need to revisit some of this stuff more deliberately.
I still see Agents as something that will be more like a background thread that yields rather than a first class citizen inside the Editor you observe as it goes.
I don't know about you, but I feel an existential dread whenever I prompt an Agent and turn into a vegetable watching it breathe. Am I using it wrong? Should I be leaving and coming back later? Should I pick a different file and task while it's doing its thing?
Similar experience: I added a folder to my zed project that was too big, causing zed to lock up and eventually crash. But because the default setting was to re-open projects on launch, I was stuck in a loop where I couldn’t remove the folder either. Eventually found a way to clear the recent projects, load an empty editor, and change the setting to avoid it in the future.
Big files/projects is where Sublime really shines. I hope Zed can replicate that performance.
Ok, how big was your project?
My JetBrains IDEs (RustRover, Goland) probably would have choked out too.
You can open large codebases in Jetbrains IDEs and it takes forever to index, but it shouldn't outright crash or completely freeze.
You can open the kernel in CLion. Don't expect the advanced refactoring features to work, but it can deal with a ~40 million lines project folder for example
My experience with Zed was that it was a buggy mess with strange defaults. I had high hopes until I tried it but it was not even close to working for my pretty normal use case of coding C so I went back to Sublime.
same.
I reported some issue a few months back but saw they have thousands to deal with so mine will understandably stay open forever.
VSCode is my go to for large text file interaction on macOS.
TextEdit may be worth looking into as well? Haven’t tested it for large files before.
I have Sublime Text installed for the only use case of opening large files. Nothing comes close.
Googling around a bit, Sublime Text doesn't seem to be particularly good at this: https://forum.sublimetext.com/t/unable-to-open-a-large-text-...
In my experience, BBEdit will open files that kill other editors: "Handling large files presents no intrinsic problems for BBEdit, though some specific operations may be limited when dealing with files over 2GB in size."
While I don’t know if the claim is true, you’ve linked a post from 2012…
I have always found Sublime to be the best at large files, well over 1GB. Since you mention BBEdit, maybe this is some Mac-specific issue? I really don't know. But at least among people I know, opening large files has effectively become its main USP.
Should be noted that the linked post is almost 15 years old at this point too, so perhaps not the most up to date either.
Speaking of TextEdit, I like what the folks at CodeEdit are doing. They are moving slow and focusing on just the core parts. Maybe I should go give them a try too!
VSCode has special optimizations in place for large files. That's why it works so well.
You can actually disable it in the settings if you want it to try and render the entire thing at once
Speaking of aesthetics, I switched back to VSCode but I ended up installing the theme "Zed One Theme" and switched the editor's font to "IBM Plex Mono".
I know it's not Zed, but I am pretty satisfied with the results.
Zed is the opposite of Sublime. Zed is VC funded and will eventually be enshittified. Sublime is not and has been going strong for many years.
Yeah fair point. I think CodeEdit is perhaps a closer comparison there
I'm going through this with pricing for our product and the battle is:
Customer: wants predictable spend
Sales/Marketing: wants an amazing product that is easy to sell (read: does the mostest for the leastest)
Advisors: want to see the SaaS model in all its glory (i.e. 85% margin primarily from oversubscription of infrastructure)
Finance: wants to make money and is really worried about variable costs ruining SaaS profit margin
A couple of thoughts:
1. AI-based cost is mostly variable. Also, things like messaging, phone minutes and so on are variable. Cloud expenses are also variable... There's a theme and it's different.
2. The unit of value most AI software delivers is work that is currently being done by people, or that should be done but isn't.
3. Seems like it's time to make friends with a way to make money other than subscription pricing.
Token based pricing generally makes a lot of sense for companies like Zed, but it sure does suck for forecasting spend.
Usage pricing on something like aws is pretty easy to figure out. You know what you're going to use, so you just do some simple arithmetic and you've got a pretty accurate idea. Even with serverless it's pretty easy. Tokens are so much harder, especially when using it in a development setting. It's so hard to have any reasonable forecast about how a team will use it, and how many tokens will be consumed.
I'm starting to track my usage with a bit of a breakdown in the hope that I'll find a somewhat reliable trend.
I suspect this is going to be one of the next big areas in cloud FinOps.
My rant on token-based pricing is primarily based on the difficulty in consistently forecasting spend... and also that the ongoing value of a token is controlled by the vendor... "the house always wins"
https://forstarters.substack.com/p/for-starters-59-on-credit...
There are enough vendors that it's difficult for any one vendor to charge too much per token. There are also a lot of really good open-weight models that your business could self-host if the hosted vendors all conspire to charge too much per token. (I believe it's only economical to self-host big models if you're using a lot of tokens, so there is a breakeven point.)
> I suspect this is going to be one of the next big areas in cloud FinOps.
It already is. There’s been a lot of talk and development around FinOps for AI and the challenges that come with that. For companies, forecasting token usage and AI costs is non-trivial for internal purposes. For external products, what’s the right unit economic? $/token, $/agentic execution, etc? The former is detached from customer value, the latter is hard to track and will have lots of variance.
With how variable output size can be (and input), it’s a tricky space to really get a grasp on at this point in time. It’ll become a solved problem, but right now, it’s the Wild West.
This is partially why, at least for LLM-assisted coding workloads, orgs are going with the $200 / mo Claude Code plans and similar.
Until the rug inevitably gets pulled on those as well. It's not in your interest to buy a $200/mo subscription unless you use >$200 of tokens per month, and long term it's not in their interest to sell you >$200 of tokens for a flat $200.
> It's not in your interest to buy a $200/mo subscription unless you use >$200 of tokens per month
This is only true if you can find someone else selling them at cost.
If a company has a product that cost them $150, but they would ordinarily sell piecemeal for a total of $250, getting a stable recurring purchase at $200 might be worthwhile to them while still being a good deal for the customer.
meanwhile me hiding from accounting for spending $500 on cursor max mode in a day
Did you actually get 500 bucks worth of work out of it?
The pricing model works as long as people (on average) think they need >$200 worth of tokens per month but actually do something less, like $170/month. Is that happening? No idea.
Maybe that is what Anthropic is banking on. From what I gather, they obscure Max accounts' actual token spend, so it's hard for subscribers to tell if they're getting their money's worth.
https://github.com/anthropics/claude-code/issues/1109
Well, the $200/mo plan works as long as the $100/mo plan is insufficient for some people, which works as long as the $17/mo plan is insufficient for some people.
I don't see how it matters to you that you aren't saturating your $200 plan. You have it because you hit the limits of the $100/mo plan.
It's probably easier (and hence, cheaper) to finance the AI infrastructure investments if you have a lot of recurring subscriptions.
There is probably a lot of value in predictability. Meaning it might be viable, at a flat $200, to offer more than $200 worth of tokens.
I don't know about for people using CC on a regular basis, but according to `ccusage`, I can trivially go over $20 of API credits in a few days of hobby use. I'd presume if you are paying for a $200 plan then you know you have heavy usage and can easily exceed that.
Also seems like a great idea to create business models where the companies aren't incentivised to provide the best product possible. Instead they'll want to create a product just useful enough to not drive away users, but just useless enough to tempt people to go up a tier: "I'm so close, just one more prompt and it will be right this time!"
Edit: To be clear, I'm not talking about Zed. I'm talking about the companies making the models.
While Apple is incentivized to ship a smaller battery to cut costs, it is also incentivized to make its software as efficient as possible to make the best use of the battery it does ship.
That's not the same thing at all.
I agree that tokens are a really hard metric for people. I think most people are used to getting something with a certain amount of capacity per time and dealing with that. If you get a server from AWS, you're getting a certain amount of capacity per time. You still might not know what it's going to cost you to do what you want - you might need more capacity to run your website than you think. But you understand the units that are being billed to you and it can't spiral out of control (assuming you aren't using autoscaling or something).
When you get Claude Code's $20 plan, you get "around 45 messages every 5 hours". I don't really know what that means. Does that mean I get 45 total conversations? Do minor followups count against a message just as much as a long initial prompt? Likewise, I don't know how many messages I'll use in a 5 hour period. However, I do understand when I start bumping up against limits. If I'm using it and start getting limited, I understand that pretty quickly - in the same way that I might understand a processor being slower and having to wait for things.
With tokens, I might blow through a month's worth of tokens in an afternoon. On one hand, it makes more sense to be flexible for users. If I don't use tokens for the first 10 days, they aren't lost. If I don't use Claude for the first 10 days, I don't get 2,160 message credits banked up. Likewise, if I know I'm going on vacation later, I can't use my Claude messages in advance. But it's just a lot easier for humans to understand bumping up against rate limits over a more finite period of time and get an intuition for what they need to budget for.
Both prefill and decode count against Claude’s subscriptions; your conversations are N^2 in conversation length.
My mental model is they’re assigning some amount of API credits to the account and billing the same way as if you were using tokens, shutting off at an arbitrary point. The point also appears to change based on load / time of day.
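A minimal sketch of that quadratic growth, assuming each turn adds a fixed number of tokens and the full history is re-sent as prefill on every turn:

```python
# Every turn re-sends the whole conversation so far as prefill, so total
# prefill tokens grow quadratically with turn count. Numbers are illustrative.
def total_prefill_tokens(turns: int, tokens_per_turn: int) -> int:
    # Turn i's prefill carries all i turns of history so far,
    # so the total is tokens_per_turn * (1 + 2 + ... + turns).
    return sum(i * tokens_per_turn for i in range(1, turns + 1))

# 10 turns of ~500 tokens each: 500 * (1 + 2 + ... + 10) = 27,500 prefill
# tokens billed, versus only 5,000 tokens of text actually exchanged.
print(total_prefill_tokens(10, 500))  # → 27500
```

Which is why a long back-and-forth session can hit a usage cutoff far sooner than the raw amount of text would suggest.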
Token based pricing works for the company, but not for the user.
> Additional usage billed at API list price +10%
I'm hoping that zed is someday able to get discount bulk pricing to resell through their providers.
With the 10% markup, is there any benefit to using zed's provider vs. BYOK?
I'm personally looking forward to this change because I currently pay $20/month just to get edit prediction. I use Claude Code in my terminal for everything else. I do wish I could just pay for edit prediction at an even lower price, but I can understand why that's not an option.
I'm curious if they have plans to improve edit prediction though. It's honestly kind of garbage compared to Cursor, and I don't think I'm being hyperbolic by calling it garbage. Most of the time its suggestions aren't helpful, but the 10-20% of the time it is helpful is worth the cost of the subscription for me.
We have a significant investment underway in edit predictions. We hear you, more soon.
This is the one thing keeping me from switching from Cursor. I much prefer Zed in every other way. Exciting!
Yeah, Cursor tab completion is basically in the realm of magical mind reading and might still be the most insane productivity demonstration of LLM tech in software.
It obsoleted making Vim macros and multiline editing for example. Now you just make one change and the LLM can derive the rest; you just press tab.
It's interesting that the Cursor team's first iteration is still better than anything I've seen in their competitors. It's been an amazing moat for a year(?) now.
I agree. I wish they focused more on it. I'd love to be able to give it a few sentences of instructions to make it even more effective for me. It's so much more of a productivity boon than all the coding agent stuff ever was.
I could say the same about the AI-assisted autocomplete in IDEA. Wonder how they compare...
This is very very exciting.
That's great to hear, thanks!
I have never used Zed predictions but $20 for 500 prompts is quite a good deal. I use it mostly with Opus for some hard cases.
10 bucks on copilot and you get unlimited + unlimited gpt4.1 etc
Copilot is the best value by far
That's been my workflow also. Claude Code / OpenAI Codex most of the time, when I have to edit files Cursor's auto-complete is totally worth the $20.
Making this prediction now: LLM usage will eventually be priced in bytes.
Why: LLMs are increasingly becoming multimodal, so an image "token" or video "token" is not as simple as a text token. Also, it's difficult to compare across competitors because tokenization is different.
Eventually prices will just be in $/Mb of data processed. Just like bandwidth. I'm surprised this hasn't already happened.
The problem is that tokens don't all equate to the same size. A megabyte of some random json is a LOT more tokens than a megabyte of "Moby Dick".
> Why: LLMs are increasingly becoming multimodal, so an image "token" or video "token" is not as simple as a text token.
For autoregressive token-based multimodal models, image tokens are as straightforward as text tokens, and there is no reason video tokens wouldn’t also be. (If models also switch architecture and multimodal diffusion models, say, become more common, then, sure, a different pricing model more tied to actual compute cost drivers for that architecture is likely, but... even that isn’t likely to be bytes.)
> Also, it's difficult to compare across competitors because tokenization is different.
That’s a reason for incumbents to prefer not to switch, though, not a reason for them to switch.
> Eventually prices will just be in $/Mb of data processed.
More likely they would be in floating-point operations expended processing them, but using tokens (which are the primary drivers for the current LLM architectures) will probably continue as long as the architecture itself is dominant.
To clarify, "as straightforward" = same dimensionality? I guess it would have to be, to be usable in the same embedding space.
> For autoregressive token-based multimodal models, image tokens are as straightforward as text tokens, and there is no reason video tokens wouldn’t also be.
In classical computing, there is a clear hierarchy: text < images <<< video.
Is there a reason why video computing using LLMs shouldn't be much more intensive and therefore costly than text or image output?
No, it'll certainly be more expensive in any conceivable model that handles all three modalities, but if the model uses an architecture like current autoregressive, token-based multimodal LLMs/VLMs, tokens will make just as much sense as the basis for pricing, and be similarly straightforward, as with text and images.
Of course it’s more expensive. It’s still tokens, but considerably more of them.
That's the thing, I can't visualize (and I don't think most people can) what "tokens" represent for image or video outputs.
For text I just assume them to be word stems or more like word-family members (cat-feline-etc).
For images and videos I guess each character, creature, idea in it is a token? Blue sky, cat walking around, gentleman with a top hat, multiplied by the number of frames?
> For images and videos I guess each character, creature, idea in it is a token?
No, for images, tokens would, I expect, usually be asymptotically proportional to the area of the image (this is certainly the case with input token for OpenAIs models that take image inputs; outputs are more opaque); you probably won’t have a neat one-to-one intuition for what one token represents, but you don’t need that for it to be useful and straightforward for understanding pricing, since the mathematical relationship of tokens to size can be published and the size of the image is a known quantity. (And videos conceptually could be like images with an additional dimension.)
No one is going to give up token-based pricing. The main players can twiddle their models to make anything any amount of tokens they choose.
Hm... why not tokens as reported by each LLM provider? They already handle pricing for images etc.
Why this instead of cpu/gpu time?
CPU/GPU time is opaque to me before I send my data, but tokens I can count before I decide to send it. That means I can verify the metering. With CPU time I send the data and then the company says "That cost X CPU units, which is $500".
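As a sketch of that verifiability: a real tokenizer (e.g. tiktoken for OpenAI models) gives exact counts before you send anything; the chars/4 heuristic below is just an illustration of the idea, not a real tokenizer.

```python
# Estimate tokens client-side before sending, so a vendor's meter can be
# sanity-checked. chars/4 is a rough rule of thumb, not an exact tokenizer.
def approx_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

prompt = "Refactor this function to avoid the extra allocation."
print(f"~{approx_tokens(prompt)} tokens before sending")
```

You can't do the equivalent with "CPU units": there's no client-side calculation that predicts the bill before the data leaves your machine.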
I'm assuming token count and usage is pretty closely tied
This whole business model of trying to shave off or arbitrage a fraction of the money going to OpenAI and Anthropic just sucks. And it seems precarious. There's no honest way to resell tokens at a profit, and everyone knows it.
Sorry, how is this new pricing anything but honest? They provide an editor you can use to:
- optimize the context you send to the LLM services
- interact with the output that comes out of them
Why does that not justify charging a fraction of your spend on the LLM platform? This is pretty much how every service business operates.
There's now greater incentive for Zed to stuff more content in the prompts to inflate tokens used and thus profit more. Or at least be less zealous.
This is not a new concern. And is not unique to Zed.
For companies where that is their entire business model I absolutely agree. Zed is a solid editor with additional LLM integration features though, so this move would seem to me to just cover their costs + some LLM integration development funds. If their users don't want to use the LLM then no skin off Zed's back unless they've signed some guaranteed usage contract.
>There's no honest way to resell tokens at a profit, and everyone knows it.
Agree with the sentiment, but I do think there are edge cases.
e.g. I could see a place like openrouter getting away with a tiny fractional markup based on the value they provide in the form of having all providers in one place
The issue with a model like this (fixed small percentage) is that your biggest clients are the most incentivized to move away.
At scale, OpenRouter will instead get you the lower high-volume fees they themselves get from their different providers.
The whole business model even for OAI/Anthropic is unsustainable: they are already running it at a huge loss atm, and will for the foreseeable future. The economics simply don't work, unfortunately or not.
Good change. I’m not a vibe coder; I use Zed Pro’s LLM integration more like a glorified Stack Overflow. I value Zed more for being an amazing editor for the code I actually write and understand.
I suspect I’m not alone on this. Zed is not the editor for hardcore agentic editing and that’s fine. I will probably save money on this transition while continuing to support this great editor for what it truly shines at: editing source code.
Prediction: the only remaining providers of AI-assisted tools in a few years will be the LLM companies themselves (think Claude Code, Codex, Gemini, future xAI/Alibaba/etc.), via CLIs + integrations such as ACP.
There is very little value that a company that has to support multiple different providers, such as Cursor, can offer on top of tailored agents (and "unlimited" subscription models) by LLM providers.
I recently started using Codex (OpenAI's answer to Claude Code) and it has a VSCode extension that works like a charm. I tried out Windsurf a while ago, and the Codex extension simply does everything that Windsurf did. I guess it doesn't show changes as well (it shows diffs in its own window instead of in the file), but I can just check a git diff graphically (current state vs. HEAD) if I really wanted that.
I am really tempted to buy ChatGPT Pro, and probably would have if I lived in a richer country (unfortunately purchasing power parity doesn't equalize for tech products). The problem with Windsurf (and presumably Cursor and others) is that you buy the IDE subscription and then still have to worry about usage costs. With Codex/Claude Code etc., yeah, it's expensive, but, as long as you're within the usage limits, which are hopefully reasonable for the most expensive tiers, you don't have to worry about it. AND you get the web and phone apps with GPT 5 Pro, etc.
If you look at even the Claude/OpenAI chat UIs, they kind of suck. Not sure why you think someone else can't/won't do it better. Yes, the big players will copy what they can, but they also need to chase insane growth and getting every human on earth paying for an LLM subscription.
A tool that is good for everyone is great for no one.
Also, I think we're seeing the limits on "value" of a chat interface already. Now they're all chasing developers since there's a real potential to improve productivity (or sadly cut-costs) there. But even that is proving difficult.
I don't know. Foundation models are very good, and you can get a surprising amount of mileage from them by using them with low level interfaces. But personally I think companies building development tools of the future will use LLMs to build systems with increasing capabilities. I think a lot of engineering challenges remain in scaling LLM's to take over day to day in programming, and the current tools are scratching the surface of what's possible when you combine LLMs with traditional systems engineering.
I can imagine the near future where companies “sponsor” open source projects by donating tokens to “mine” a PR for a feature they need.
But the reason LLMs aren't used to build features isn't because they are expensive.
The hard work is the high level stuff like deciding on the scope of the project, how it should fit in to the project, what kind of extensibility the feature might need to be built with, what kind of other components can be extended to support it, (and more), and then reviewing all the work that was done.
I love this! Finally a more direct way for companies to sponsor open source development. GitHub Sponsors helps, but it is often so vague where the funding is going.
Unless companies also donate money to sponsor the code review that will have to be done by a real human being, I could see this idea being a problem for maintainers. Yes, you have to review a human's work as well, but a human is capable of learning and carrying that learning forward, so their next PR will be better; you can also look at their past PRs to evaluate whether they're a troll/bad actor or someone who genuinely wants to assist with the project. An LLM won't learn and will always spit out valid _looking_ code.
If companies want to help they can just... I don't know... give projects some money
More often than not, for individuals, it's barely contributing to their living costs
I was just thinking this morning about how Zed should rethink their subscription, because it's a bit pricey if they're going to let you just use Claude Code. I am in the process of trying out Claude and figured just going to them for the subscription makes more sense.
I think Zed had a lot of good concepts where they could make paid AI benefits optional longer term. I like that you can join your devs to look at different code files and discuss them. I might still pay for Zed's subscription in order to support them long term regardless.
I'm still upset so many hosted models don't just let you use your subscription on things like Zed or JetBrains AI, what's the point of a monthly subscription if I can only use your LLM in a browser?
> if they're going to let you just use Claude Code
I'm pretty sure that's only while it's in preview, just like they were giving away model access before that was formally launched. Get it while it's hot.
> I'm still upset so many hosted models don't just let you use your subscription on things like Zed or JetBrains AI, what's the point of a monthly subscription if I can only use your LLM in a browser?
This is yet another reason why CLI-based coding agents will win. Every editor out there trying to be the middleman between you and an AI provider is nuts.
Wouldn't the last step just be an API? That would allow direct integration from everywhere.
There is one, developed by the Zed team in collaboration with Gemini. And Claude Code is also supported now.
https://agentclientprotocol.com/overview/introduction
I'm glad to see this change. I didn't much use the AI features, but I did want to support Zed. $20 seemed a bit high for that signal. $10 seems right. $5 with no tokens would be nicer.
Great work folks
Tokens are an implementation detail that have no business being part of product pricing.
It's deliberate obfuscation. First, there's the simple math of converting tokens to dollars. This is easy enough; people are familiar with "credits". Credits can be obfuscation, but at least they're honest. The second and more difficult obfuscation to untangle is how one converts "tokens" to "value".
When the value customers receive from tokens slips, they pay the same price for the service. But generative AI companies are under no obligation to refund anything, because the customer paid for tokens, and they got tokens in return. Customers have to trust that they're being given the highest quality tokens the provider can generate. I don't have that trust.
Additionally, they have to trust that generative AI companies aren't padding results with superfluous tokens to hit revenue targets. We've all seen how much fluff is in default LLM responses.
Pinky promises don't make for healthy business relationships.
Tokens aren't that much more opaque than RAM GB/s for functions or whatever. You'd have to know the entire infra stack to really understand it. I don't really have a suggestion for a better unit for that kind of stuff.
Doesn’t prompt pricing obfuscate token costs by definition? I guess the alternative is everyone pays $500/mo. (And you’d still get more value than that.)
I am wondering why they couldn't have foreseen this. Was it really a failure to predict the need to charge for tokens eventually, or was it planned that way from the start: get people to use the unlimited option for a bit, they get hooked, then switch them to per-token subscriptions?
I completely get why this pricing is needed and it seems fair. There’s a major flaw in the announcement though.
I get that the pro plan has $5 of tokens and the pricing page says that a token is roughly 3-4 characters. However, it is not clear:
- Are tokens input characters, output characters, or both?
- What does a token cost? I get that the pricing page says it varies by model and is "API list price +10%", but nowhere does it say what these API list prices are. Am I meant to go to the OpenAI, Anthropic, and other websites to get that pricing information? Shouldn't that be in a table on that page with each hosted model listed?
—
I’m only a very casual user of AI tools so maybe this is clear to people deep in this world, but it’s not clear to me just based on Zed's pricing page exactly how far $5 per month will get me.
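For what it's worth, here's the back-of-envelope I ended up doing, using a blended $/million-token price I made up for illustration (real list prices vary per model):

```python
# How far does $5 of included usage go? All numbers here are assumptions.
BUDGET = 5.00            # included monthly usage, $
PRICE_PER_MTOK = 5.00    # assumed blended input+output price, $/million tokens
CHARS_PER_TOKEN = 3.5    # the pricing page's "roughly 3-4 characters"

tokens = BUDGET / PRICE_PER_MTOK * 1_000_000
chars = tokens * CHARS_PER_TOKEN
print(f"~{tokens:,.0f} tokens, roughly {chars / 1_000_000:.1f} million characters")
```

Under those assumptions that's about a million tokens a month, but swap in a pricier model and the same $5 shrinks fast, which is exactly why the per-model table matters.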
List here: https://zed.dev/docs/ai/models. Thanks for the feedback, we'll make sure this is linked from the pricing page. Think it got lost in the launch shuffle.
All makes sense. I presumed it was an oversight.
It’s hard for me to conceptualise what a million tokens actually looks like, but I don’t think there’s a way around that aside from providing some concrete examples of inputs, outputs, and the number of tokens that actually is. I guess it would become clearer after using it a bit.
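For a rough back-of-the-envelope, the ~4 characters per token heuristic from the pricing page is enough to sketch what $5 buys. A minimal Python sketch — note the per-million-token prices below are hypothetical placeholders I made up for illustration, not Zed's or any provider's actual rates:

```python
# Rough estimate of how far a monthly token allowance goes.
# Assumes ~4 characters per token and a HYPOTHETICAL blended price of
# $3 per 1M input tokens and $15 per 1M output tokens, plus a 10% margin
# (matching the "API list price +10%" structure described above).

CHARS_PER_TOKEN = 4
INPUT_PRICE_PER_M = 3.00    # $/1M input tokens (hypothetical)
OUTPUT_PRICE_PER_M = 15.00  # $/1M output tokens (hypothetical)
MARGIN = 1.10               # API list price +10%

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def prompt_cost(input_chars: int, output_chars: int) -> float:
    """Estimated dollar cost of one prompt (input + output)."""
    cost_in = (input_chars / CHARS_PER_TOKEN) / 1_000_000 * INPUT_PRICE_PER_M
    cost_out = (output_chars / CHARS_PER_TOKEN) / 1_000_000 * OUTPUT_PRICE_PER_M
    return (cost_in + cost_out) * MARGIN

# A prompt with ~8,000 chars of context and a ~4,000 char reply:
cost = prompt_cost(8_000, 4_000)
print(f"~${cost:.4f} per prompt")
print(f"~{int(5 / cost)} such prompts for $5")
```

Under these made-up numbers a medium-sized prompt costs a few cents, so $5 covers a couple hundred of them — which at least turns "1M tokens" into something you can sanity-check against your own usage.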
Now live: https://zed.dev/pricing#what-is-a-token. Thanks for the feedback!
> Token-agnostic prompt structures obscure the cost and are rife with misaligned incentives
That said, token-based pricing has misaligned incentives as well: as the editor developer (charging a margin over the number of tokens) or the AI provider, you benefit from more verbose input fed to the LLMs and, of course, more verbose output from them.
Not that I'm really surprised by the announcement though; it was somewhat obviously unsustainable.
Why is most of the AI-tooling industry still stuck on this "bring your own key" model?
What would you propose as an alternative?
As a corporate purchaser, "bring your own key" is just about the only way we can allow our employees to stay close to the latest happenings in a rapidly moving corner of the industry.
We need to have a decent amount of trust in the model execution environment and we don't like having tons of variable-cost subscriptions. We have that trust in our corporate-managed OpenAI tenant and have good governance and budget controls there, so BYOK lets us have flexibility to put different frontends in front of our trusted execution environment for different use cases.
The companies actually providing the models charge by token and this lets the tooling avoid having to do cost planning for something with a bunch of unknowns and push the risk of overspend to customers.
I really want to be able to see the outline panel, project panel, agent panel, and terminal panel all at the same time. If I stay at $20/mo, can you guys fix this, please?
I’m tired of toggling between them.
This is much better for me but I really want a plan that includes zero AI other than edit prediction and BYOK for the rest.
But as a mostly Claude Max + Zed user, I'm happy to see my costs go down.
For those of us building agentic tools that require similar pricing, how does one implement it? OpenRouter seems good for the MVP, but I'm curious if there are alternatives down the line.
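For the "how does one implement it" part, the cost-plus structure Zed describes is mostly bookkeeping: the upstream provider (OpenRouter, or any OpenAI-compatible API) reports token usage per request, and you apply your markup on top. A minimal sketch, where the per-model prices, the model names, and the 10% markup are all placeholder assumptions rather than any vendor's actual rates:

```python
# Minimal sketch of cost-plus metered billing for an agentic tool.
# Assumes the upstream provider reports input/output token counts per
# request (as OpenAI-style APIs do in their `usage` field). Prices,
# model names, and markup here are hypothetical.
from dataclasses import dataclass

# $ per 1M tokens as (input_price, output_price) -- hypothetical
PRICES = {
    "claude-sonnet": (3.00, 15.00),
    "gpt-mini": (0.15, 0.60),
}
MARKUP = 1.10  # charge customers provider cost +10%

@dataclass
class Meter:
    """Accumulates billable cost across a customer's requests."""
    total: float = 0.0

    def record(self, model: str, input_tokens: int, output_tokens: int) -> float:
        """Record one request's reported usage; returns the billable cost."""
        price_in, price_out = PRICES[model]
        cost = (input_tokens * price_in + output_tokens * price_out) / 1_000_000
        billable = cost * MARKUP
        self.total += billable
        return billable

meter = Meter()
meter.record("claude-sonnet", input_tokens=12_000, output_tokens=2_500)
print(f"billable so far: ${meter.total:.4f}")
```

The hard parts in practice aren't this arithmetic but keeping the price table in sync with the providers and deciding who eats the cost when a request fails mid-stream.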
I just asked this exact question about Zed pricing 2 days ago
https://news.ycombinator.com/item?id=45333425
I love this for Zed. I hate that I’m going to have to deal with the model providers more directly. Because I don’t know what tokens are.
It’s like if McDonalds changed their pricing model to be some complex formula involving nutrition properties (calories, carbs, etc) plus some other things (carbon tax, local state taxes, whatever) and moved to a pay as you bite model. You start eating a Big Mac, but every bite has different content proportions, so every bite is changing in price as you go. Only through trial and error would you figure out how to eat. And the fact that the “complex formula” is prone to real time change at any point, makes it impossible to get excited about eating.
Now I see little value in subscribing to Zed Pro compared to just bringing my own API key. Am I missing something?
(I work at Zed) No, you aren't. We care about you using Zed the editor, and we provide Zed Pro for folks who decide they'd like to support Zed or our billing model works for them. But it's simply an option, not our core business plan, and this pricing is in place to make that option financially viable for us. As long as we don't bear the cost, we don't feel the need (or the right) to put ourselves in the revenue path with LLM spend.
I’m curious if there’s any way to completely disable/remove `zed.dev` provider from Zed, while keeping others available?
If you sign out of zed, zed's providers don't work. I believe you still see them in the AI panel, but it won't operate.
Will you consider providing a feature to protect me from accidentally using my Zed account after the $5 is exhausted (or else a plan that only includes edit predictions)? I can't justify to myself continuing my subscription if there's a risk I will click the wrong button with identical text to the right button, and get charged an additional 10% for it. I get you need to be compensated for risk if you pay up front on my behalf, but I don't need you to do that.
I understand that there's nothing you could do to protect me if I make a prompt that ends up using >$5 of usage but after that I would like Zed to reject anything except my personal API keys.
Yep, you can set your spend limit to $0 and it will block any spend beyond your $10 per month for the subscription
https://zed.dev/docs/ai/plans-and-usage#usage-spend-limits
Excellent. Thanks.
> [Zed Pro is] not our core business plan
What is the core business plan then?
https://zed.dev/blog/sequoia-backs-zed#introducing-deltadb-o...
Their burn agent mode is pretty badass, but is super costly to run.
I'm a big fan of Zed but tbf I'm just using Claude Code + Nvim nowadays. Zed's problem with their Claude integration is that it will never be as good as just using the latest from Claude Code.
Presumably the tab based edit-prediction model + $5 of tokens is worth the (new) $10 / mo price.
Though from everything I've read online, Zed's edit prediction model is far, _far_ behind that of Cursor.
I wonder if first-party offerings like Codex and Claude will follow suit. Most "agents" are utter nonsense, but they cooked with the CLI tools. It'd be a shame to let go of them.
Eventually that is the plan. Like we saw with Claude Code, they want developers to get a taste of that unlimited and unrestrained power of a state of the art model like Opus 4, then slowly limit usage until you fully transition to metered billing and deprecate subscription based billing.
Am I wrong in thinking that GitHub Copilot Pro apparently has the best overall token spend when considering agentic editors?
$10 GitHub Copilot Pro plan works for me in VSCode.
I've been exclusively using the Claude Sonnet 4 model in VSCode, and so far I've used 90% of the premium quota by the end of the month. I can always use GPT4.1 or GPT5-mini for free if need be.
Yeah, that's what I've been using since its release as well. I don't really see a point in trying the competition. It can't be better than this.
Better than Gemini Pro 2.5? Github Copilot doesn't even support tooling in Zed yet. It's been months..
So they're essentially charging $5/month for unlimited tab completions, when you get 2k for free. That seems reasonable, many could just not pay anything at all.
But in the paid plan they charge 10% over API prices for metered usage... and also support bring your own API. Why would anyone pay their +10%, just to be nice?
This is the same problem cursor and windsurf are facing. How the heck do you make money when you're competing with cline/roocode for users who are by definition technically sophisticated? What can you offer that's so wonderful that they can't?
Thought this was 2015 for a sec and this was about Zed Shaw.
seems fine - they're aligning their prices with their costs.
presumably everyone is just aiming or hoping for inference costs to go down so much that they can do an unlimited-with-ToS model like most home Internet access etc, because this intermediate phase of having to count your pennies to ask the matrix multiplier questions isn't going to be very enjoyable or stable, or encourage good companies to succeed.
Entirely predictable and what should've been done from the start instead of this bait-and-switch mere months after introducing agentic editing.
is this effectively what Cursor did as well? I seem to remember some major pricing change of theirs in the past few months.
In a way I would say they were even worse, instead of outright saying "we've increased our prices", they "clarified their pricing".
This is going to be a blood bath for many freelancers if the trend continues with other platforms. Mark my words.
How much are companies spending per developer on tokens? From what I read it seems like it might be quite high at $1,000 or more per day?
No, not at all! At my org it's around $7000 a month for the entire org - my personal usage is around $2-10 a day. Usually less than the price of my caffeinated beverages.
The first thing they do after getting funded by Sequoia lmao
Another one bites the dust :-( I hope at least Windsurf stays the same..
Say what you want but Sublime is the GOAT.
Zed and Warp were two promising Rust-based projects that I monitor closely. Currently, both are progressing towards becoming generic agentic AI coding platforms.
Until now I've never really come across a comment on Hackernews I thought was AI generated...
You are partially right, after I wrote the comment I used the writing tools in macOS to rewrite it in a professional tone.
The wording may sound AI generated but the gist of the comment is my true opinion