Translation: Building a Text Editor in the Times of AI

Original: https://zed.dev/blog/building-a-text-editor-in-times-of-ai
Author: Thorsten Ball
Translator: Claude 3 Opus
Published: 03/26/24

Building a Text Editor in the Times of AI

This is my fifth conversation with Zed's three co-founders Nathan, Max, and Antonio. You can read the previous one here.

This time I had to address the elephant in the room: AI. I wanted to know how each of the founders found their way to using AI, how they use it today, and how they would like to use it. We also talked about the nitty-gritty of the current implementation of AI features in Zed and what this year will bring for Zed in regards to AI. I also had to ask: is building an editor in times of AI not ignoring the signs of the times?

What follows is an editorialized transcript of an hour-long conversation. I tried to preserve intent and meaning as much as possible, while getting rid of the uhms, the likes, the you-knows, and the pauses and course-corrections that make up an in-depth conversation.

(You can watch the full conversation on our YouTube channel.)

Thorsten: When did you first use AI for programming? Do you remember?

Nathan: I used it pretty early on. I think my first really eye-opening experience with it was using ChatGPT when it first came out and just having it do really basic stuff. I think I defined geometry, like a geometry library in it, sort of just for fun though. And I was blown away that I could even do those really basic things like defining a point and a circle and an area function and all these things that it was doing at the time. Things have gotten a lot more sophisticated since that moment, but that was kind of like this mind-blowing moment for me.

Thorsten: Was it mind-blowing or were you skeptical?

Nathan: It was mind-blowing. I don't understand the general hate and skepticism toward AI that so many programmers have.

I remember in college, I studied natural language processing and I worked for my professor right after school. He was the head of SRI doing classic AI, Jerry Hobbs. I remember how fascinated I was with the idea of: what is meaning? What is language? How does it work? And studying these combinatory categorial grammar mechanisms, where we'd define grammar as this directional lambda calculus formalism. I was really curious and fascinated by all of that, but also came away frustrated, because at the time... a language model was the dumbest thing ever. It was based on the frequency of tokens or something and you couldn't get anything out of it.

So just the idea that I could sit there and, in English, ask it to do anything and have it do anything at all is, to me, mind-blowing. Right then and there. That's amazing. That's a freaking miracle that I never would have anticipated being good. So why everybody's not blown away by the fact that this exists in our world is beyond me. I just don't get it. It kind of pisses me off that people are so, so close-minded about it. Like, yeah, you drove a Lamborghini into my driveway, but I don't like the color.

It's just this fixation on negativity, on what's wrong and what it can't do, instead of being amazed and blown away by what it can. And I guess that's just the personality difference between me and people that are like that. I am always looking at the glass half-full and I'm always looking at what's exciting. Now, I never bought a single NFT, right? Just to be clear. So I get that we, in technology, have these hype cycles and it can get a little exhausting, and you're rolling your eyes at the latest hype cycle and at people in your Twitter timeline talking in all capital letters about how this changes everything and is game-changing. But I think in this case, it's actually pretty freaking amazing that we have this technology. Okay, I'll stop ranting.

Thorsten: It's funny that you mentioned natural language processing because I come from the other side of the fence. I studied philosophy and I studied philosophy of language. Then when ChatGPT came out, everybody was saying that it doesn't "understand." And I was sitting there thinking: what does "understanding" even mean? How do you understand things? What is meaning? So, I was on the other side of the fence, also thinking that things aren't that easy and that this is super fascinating.

Antonio: I used ChatGPT right after — I don't know, I think Nathan prompted us to use it. I'm not an AI skeptic or anything — I'm amazed and I also use AI for non-coding tasks — but I've never had an eye-opening experience, I don't know.

One thing I struggle with a lot, when it comes to AI, is what I do every day. I write code every day, for multiple hours a day, and I write it in Rust, in this pretty complex code base. And so my first use case for it was to try to use it in our code base. And every time I try to do that there's always some friction.

But one thing that I really like, and where I think it really shines, is generating complex pieces of code. Basically, there are certain patterns in code, right? You can't really express those in regular expressions or by using Cmd-D to set up multi-cursors, but AI is really good at it. You can just say "okay, I want to apply this refactoring to these five functions" and explain it in a way that I couldn't explain it with any tool like regex. There's a lot of interesting potential.

Thorsten: Sounds like there was a bit of a disappointment moment.

Antonio: Yeah. I don't know whether this thing hasn't seen enough Rust. Maybe that's a problem. But there's also a problem of how we integrate with it probably, right? Where we don't give it enough context. I think the problem of just feeding it the right... One thing that I've started to learn only recently is that crafting that context is essential.

And you really need to express it right. The machine really needs to understand what you're trying to do. Especially in a complex code base where you have, in your brain, like 50 references, but the machine can't know that. How could it? So, yeah, part of my disappointment is just the integration with the AI, not the tooling per se, but just...

Nathan: We're not there yet, yeah.

Max: Yeah, there's a difference between using Copilot in the Zed code base (which I still do sometimes, but I wouldn't call it a game changer for me) and using it with, say, a JavaScript script that is a single file where all the context is there, whose job is to minimize a random test failure by reducing the steps or something, and which needs to read a bunch of files and invoke some shell commands, etc. The difference is large, and in the latter case, the single JavaScript file, Copilot just knocks it out of the park.

So if we can get it to behave like that in our day-to-day work, when we're working on a codebase with hundreds of thousands of lines, there's a lot of potential there.

Thorsten: That's what I noticed in our code base when I use the chat assistant. I often thought, "oh, if you could only see the inlay hints, if you could see the types, then you wouldn't give me this answer." But yes, integration.

Nathan: And that's, again, our failing too. The most successful times I've ever had with it are times when I'm synthesizing together things that are already in the model's training data. I love that mode.

A lot of GPUI2's renderer I wrote just in the assistant panel, purely from going "yo, I need a renderer that integrates with the Metal APIs. It's written in Rust." It wasn't perfect, but it was way faster than me configuring all these graphics pipelines and stuff. That's not something I've done a ton of.

I love just like distilling something I need, out of the latent space of one of these models, where I'm providing a few parameters but it's mostly in the weights. But I'm guiding what's in the weights to sort of give me like this StackOverflow-on-acid type thing, where the knowledge is out there. I just need it in a certain shape and I want to guide it.

So I was playing with Claude this weekend in the bath, right? And I literally wrote an entire file index that used lock-free maps to store file paths and interpreted all the events coming out of the FSEvents API. It did everything asynchronously. You know, I wrote randomized tests for it, had a fake file system implementation, and I was in the bath, right, on my phone. I didn't have a single moment where I was writing a curly brace. Now, I never ran the thing that it produced, but I reviewed it with my eyes, and while it may have had a few issues here or there, it was a very legit implementation of something that took Antonio, Max, and me days and days of continuous investment to work on. My knowledge of having solved it before helped me guide it, but, I don't know, there's almost some way in which it changes the scale of what you can do quickly.

And then sometimes it just falls flat on its face. For the simplest thing.

Thorsten: ChatGPT came out November 2022, right? When we all should have bought NVIDIA stock. Since then, did you adjust to AI and adjust how you use it? For example, people who use Copilot, they say they adjust to it and kind of leave some comments where they want to guide Copilot. Or did any of you ever get into the whole prompt engineering thing? Or did you reduce when you use it, after figuring out what it can and can't do?

Nathan: I don't really use Copilot for what it's worth. I find it annoying. It's in my face. I never was into running tests automatically on save either. I always just want to... I don't know. I prefer to interact with the AI more in a chat modality. So I'm really looking forward to the time we're about to invest, to get more into that context window.

I just find Copilot to be kind of dumb. I don't know. Because they have to be able to invoke it on every keystroke, they have to use a dumber model. And so I guess I just prefer using a smarter model, but being more deliberate in how I'm using it. But I'm not married to that perspective. I think maybe some UX tweaks could change my relationship with Copilot, but I don't know. I guess I've been willing to use it, and even have my interaction with it be slower or less effective sometimes, in the name of investing in learning how to use it.

And yeah, at the time it saved me on certain really hard things, like writing a procedural macro to enumerate all the Tailwind classes for GPUI. It kind of taught me how to write proc macros because I didn't know how.

Thorsten: Exactly a year ago, I was at a conference and I was meeting programmer friends and we were all talking about ChatGPT and some of them were saying, "oh, it doesn't know anything. I just tried this and it doesn't know anything." But the queries or the prompts they used, they looked like the prompts people used 20 years ago with Google. Back when you still had this keyword search, people would type in, "where can I get a hot dog?" But that's not how it worked back then. One friend of mine, though, he said, "you know what I use it for? I use it like an assistant. I use it like an intern." So essentially, when he's debugging something and he needs a little program to reproduce the bug, he says to the AI, "Can you write me a little HTTP server that doesn't set a connection timeout" or something like that. Because he knows where the shortcomings are. And I think that a lot of us have had this over the past year, we started to get a feel for where the shortcomings are and adjust our use to it. So I was curious whether you had any of these moments.

Max: I have one from my day-to-day life. I use ChatGPT a lot instead of Google. And I've learned to say, "now, don't hedge this by saying 'it depends'. I'm aware. Just tell me, give me an answer", so that ChatGPT doesn't say, "there are many possible responses to this."

But I think I have a lot to learn about what to do in the programming world still. There's probably a lot of knowledge out there that I just haven't adopted into my workflow yet for prompting the assistant.

Nathan: I think I have the advantage of just being not as good of a raw programmer as Max or Antonio. A lot of times when I'm pairing, I take more of a navigator role in the interaction. And so I just reach for AI more because I'm just not as fast at cranking out code. And so I think it's less frustrating to me.

Thorsten: When did you decide "we have to add this to Zed"? Was it being swept up in the hype and everybody asking for it, or was there a specific thing, or time when you said, "no, I need this in the editor."

Nathan: For me there's Copilot and then there's the Assistant. So Copilot, everybody asks for it. And I was like, "oh, I wanna see what it's like to work with this". But then I ended up not using it a lot. But for the other one, the assistant, it was just that I was using GPT-4 and they were rate-limiting me. So then I was going into the SDK or the playground and writing text in a fricking web browser. And I'm just like, this is driving me crazy. I wanna write text in Zed.

And, I mean, that's what the assistant is right now. It's kind of pretty bare bones. It's like an API, it's like an OpenAI API request editor almost, one that isn't annoying to use from a text editing perspective. That's kind of where things are at right now, which isn't where they need to stay. We have a lot of work to do on it, but that's the thought process.

Thorsten: I kind of want to go into the weeds a little and ask you about the inline assist in Zed. Context for whoever's watching or listening or reading: you can select text in Zed, then hit ctrl-enter, and you send a message, along with the selected text and some more context, to the AI. You can ask it to "change the return type of this" or whatever, "reorder this" or "use a macro", something like that. What happens then, when the response comes back from the AI, is that you can see it type the text or change it, and you can see it change word by word. It doesn't look like it's just shoving the text into the buffer. So I'm curious: what happens when the LLM request comes back and says, here's a snippet of code?

Antonio: Basically, we implemented a custom version of the Needleman-Wunsch algorithm. There are several algorithms for fuzzy matching and they all stem from this dynamic programming algorithm, which is essentially about finding the lowest-cost path from point A, the origin, where both strings start, to the end, where both strings end. So we're doing this streaming diff, because typically diff is this lossy function where you need to have both texts in their entirety, but the problem is that the AI streams the response, chunk by chunk. And we don't want to wait for the entire response to come back before diffing. So we have this slightly modified version of Needleman-Wunsch, in which we try to favor insertions and deletions, look ahead a little bit, and use a different cost function. That lets us produce these streaming edits. It's a pretty fun project.
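For illustration, here is a toy, non-streaming version of that dynamic-programming core in Rust. It disallows substitutions (like a plain diff), uses unit costs for insertions and deletions, and traces back through the cost matrix to recover the edit script. Zed's streaming variant, with its look-ahead and modified cost function, is more involved; none of this is Zed's actual code.

```rust
// Classic Needleman-Wunsch-style alignment between the old buffer text and
// the model's full response, producing a keep/insert/delete edit script.

#[derive(Debug, PartialEq)]
enum Edit {
    Keep(char),
    Delete(char), // present in old, absent in new
    Insert(char), // absent in old, present in new
}

fn diff(old: &str, new: &str) -> Vec<Edit> {
    let a: Vec<char> = old.chars().collect();
    let b: Vec<char> = new.chars().collect();
    let (n, m) = (a.len(), b.len());

    // cost[i][j] = minimal edit cost to turn a[..i] into b[..j].
    let mut cost = vec![vec![0usize; m + 1]; n + 1];
    for i in 0..=n {
        cost[i][0] = i; // delete everything
    }
    for j in 0..=m {
        cost[0][j] = j; // insert everything
    }
    for i in 1..=n {
        for j in 1..=m {
            // "Keep" is only allowed when the characters match.
            let keep = if a[i - 1] == b[j - 1] { cost[i - 1][j - 1] } else { usize::MAX };
            cost[i][j] = (cost[i - 1][j] + 1) // delete a[i-1]
                .min(cost[i][j - 1] + 1) // insert b[j-1]
                .min(keep);
        }
    }

    // Trace back from the bottom-right corner to recover the edit script.
    let (mut i, mut j) = (n, m);
    let mut edits = Vec::new();
    while i > 0 || j > 0 {
        if i > 0 && j > 0 && a[i - 1] == b[j - 1] && cost[i][j] == cost[i - 1][j - 1] {
            edits.push(Edit::Keep(a[i - 1]));
            i -= 1;
            j -= 1;
        } else if i > 0 && cost[i][j] == cost[i - 1][j] + 1 {
            edits.push(Edit::Delete(a[i - 1]));
            i -= 1;
        } else {
            edits.push(Edit::Insert(b[j - 1]));
            j -= 1;
        }
    }
    edits.reverse();
    edits
}

fn main() {
    for e in diff("fn foo()", "fn bar()") {
        println!("{:?}", e);
    }
}
```

The streaming twist Antonio describes amounts to running this alignment against a growing suffix of the response, biasing the cost function so the frontier prefers insertions/deletions that a later chunk can still extend.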

Thorsten: So did you build this specifically for the inline assist? I assumed it was code that's also used in the collaboration features, no?

Antonio: No. What we tried at first, actually, was to have the AI use function calling to give us the edits, as opposed to asking for a response and having the AI just spit it out, top to bottom. The initial attempt was like, "okay, just give us the precise edits that you want us to apply". But what we found out pretty early on was that it wasn't working very reliably. It was kind of tricky to have it produce precise locations.

It's really good at understanding what you're trying to do as a whole, but it's very hard to have it say, "okay, at row three, column two, I want to delete five characters and insert these other six characters".

So we went back to the drawing board and we said: it's good at spitting out text, so let's just have it write what you want. And that's where Nathan's idea came in.

Nathan: And Antonio's algorithmic chops actually making it happen. Yeah.

Thorsten: Wow, it's pretty reliable, right?

Antonio: Thanks. Yeah.

Nathan: Sometimes it overdraws. It's... I don't know. It's not always reliable for me. I think that has to do with our prompting, maybe. There's a lot of exploration to do here. I'll ask it to write the documentation for a function and it'll rewrite the function. That drives me crazy.

Thorsten: The prompting, sure, but the actual text insertion — every time I see these words light up, I'm like, what's going on here? How do they do this? How long did it take to implement this? I'm curious.

Antonio: Half a day. Yeah, maybe a day. Something like that.

Thorsten: No way.

Nathan: But to be fair, we had already really explored it and just needed a little bit of a push for the path matching. That took a little more time, wrapping our brains around it. And I think more of it stuck for you, Antonio, to put it that way.

Antonio: Hahaha!

Nathan: Cause, yeah, traversing that dynamic programming matrix still kind of boggles my mind a little.

Thorsten: Half a day — you could have said a week just to make me feel better.

Antonio: A week, yeah a week, no.

Max: Hahaha.

Thorsten: So right now in Zed we have the inline assist, we have the chat assistant, which you can use to just write Markdown, and you can talk to multiple models. What's next? What's on the roadmap?

Nathan: A big piece is just getting more context into what the assistant sees. Transitioning it away from an API client to starting to pull in more context. Kyle, who's been contracting with us, has a branch where we're pulling in the current file. Obviously we want more mechanisms for pulling context in, not only the current file, but all the open files, and you can dial in the context, opt in or out, and so on.

But then also doing tool-calling, where I can talk to the assistant and have it help me craft my context, if that makes sense. Also having it interact with the language server, and using tree-sitter to traverse the dependencies of the files we're pulling in, so that we make sure we have all of this in the context window. Of course, context sizes have gone up a lot, which makes this all a lot easier, because we can be more greedy in terms of what we're pulling in.
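As a rough illustration of what being "greedy" under a token budget could look like, here is a hypothetical sketch. All names (`Candidate`, `build_context`) and the four-characters-per-token estimate are invented for illustration; this is not Zed's code.

```rust
// Hypothetical sketch of greedy context assembly under a token budget.
// Candidates might be the current file, other open files, and dependencies
// discovered via tree-sitter; here they are just plain strings.

struct Candidate {
    path: String,
    text: String,
    priority: u32, // lower value = more important (e.g. 0 = current file)
}

// Very rough token estimate: roughly four characters per token.
fn estimate_tokens(text: &str) -> usize {
    text.len() / 4 + 1
}

// Sort candidates by importance, then greedily append whole files to the
// prompt until the budget runs out. A real implementation might truncate
// or chunk files instead of skipping them outright.
fn build_context(mut candidates: Vec<Candidate>, token_budget: usize) -> String {
    candidates.sort_by_key(|c| c.priority);
    let mut remaining = token_budget;
    let mut prompt = String::new();
    for c in candidates {
        let cost = estimate_tokens(&c.text);
        if cost > remaining {
            continue; // doesn't fit in the remaining budget
        }
        remaining -= cost;
        prompt.push_str("// File: ");
        prompt.push_str(&c.path);
        prompt.push('\n');
        prompt.push_str(&c.text);
        prompt.push_str("\n\n");
    }
    prompt
}

fn main() {
    let ctx = build_context(
        vec![
            Candidate { path: "main.rs".into(), text: "fn main() {}".into(), priority: 0 },
            Candidate { path: "lib.rs".into(), text: "pub fn helper() {}".into(), priority: 1 },
        ],
        4,
    );
    println!("{ctx}");
}
```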

So that's a big dimension, populating that context window more intelligently, but also giving the assistant tool calls that it can use to write a command in the terminal. I don't know if I want to give it the ability to hit enter, you know, but like, at least write it in and stage it and shift my focus over there so that I can run something. I could get help with whatever random bash incantation I might want to run. Having the assistant escape that little box and reach out and interact with other parts of the editor.

That's all really low-hanging fruit that I think we need to pick. That's what's next for me. And then we're also experimenting with alternative completion providers, for the Copilot-style experience. We'll see where that goes. It's still kind of early days there.

Max: I'm excited about another dimension of the feature set. Right now, all the stuff we were just talking about, that system is very local. You select a block of code and its output is directed into that location.

But being able to — just like code actions in Zed — say "inline this function into all callers" and get a multi-buffer opened up that says "I changed here, I changed here, I changed here. Do you want to save this or undo it?" I can then go look at what it did.

I want to be able to say, "extract a struct out of this that isn't in this crate, that's in a subcrate and depend on it in all these usages of this crate so they don't have to depend on all this other stuff" and then have it go, "here, I changed your Cargo.toml, I created a crate, I changed this, I did these sort of more complex transformations to various pieces of code in your code base. You wanna save this or undo it?"

I think that's gonna be a really powerful way of letting it do more stuff while keeping control. I think the multi-buffer is a good way to go to the user and ask: "do you want to apply all these transformations that I just made?"

Nathan: Speaking of multi-buffer, another really low-hanging fruit thing is invoking the model in parallel on multiple selections. When I pull up a multi-buffer full of compile errors that are all basically the same stupid manipulation that I needed to do, it'd be great to just apply an LLM prompt to every single one of those errors in parallel.

Thorsten: Low-hanging fruit is everywhere — you could add AI to basically every text input, adding autocomplete or generation or whatever. There was an example last week, when I talked with somebody who wanted to use an LLM in the project search input, where the LLM would generate the regex for you. That's cool, but at the same time, I thought: wouldn't the better step actually be to have a proper keyword search instead of having the LLM translate to a regex? I'm wondering whether there isn't the possibility of being trapped in a local maximum by going for the low-hanging fruit.

Max: Meaning, like, how much of the current programming tool paradigm, like, regex search do we even want to keep? Or do we say that we don't even need that feature anymore?

Thorsten: Something like that, yeah. A year ago, everybody was adding AI to every field and obviously things changed and people now say, "this is not a good use case for that", and you're now also saying you want it to have access to files, and so on. How do you think about that? What do you see as the next big milestone?

Nathan: Well, I guess I have different perspectives. There are a couple of different things I want to respond to that with. One is that we experimented with semantic search over the summer. The initial thing was that we generated all these embeddings with OpenAI's embedding API, which is designed for text and prose. I think less for code, that's at least my understanding; maybe people can comment on the YouTube video and tell me I'm wrong. So I don't know how good embedding models are for code in general. But what I did find with this initial experiment, which was literally that you would start typing your query and we would just show you the matching files, or the file and line number, was that I was using it a ton for navigation. It was just really useful to be able to mash keys.
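The retrieval side of such a semantic search is typically cosine similarity over embedding vectors. Here is a minimal sketch of that standard technique; the embeddings are made up for illustration (in the experiment Nathan describes they came from OpenAI's embedding API), and none of this is Zed's actual code.

```rust
// Rank code chunks by the cosine similarity of their embeddings to a query
// embedding, most similar first.

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

// Return chunk indices sorted from most to least similar to the query.
fn rank(query: &[f32], chunk_embeddings: &[Vec<f32>]) -> Vec<usize> {
    let mut order: Vec<usize> = (0..chunk_embeddings.len()).collect();
    order.sort_by(|&i, &j| {
        let si = cosine_similarity(query, &chunk_embeddings[i]);
        let sj = cosine_similarity(query, &chunk_embeddings[j]);
        sj.partial_cmp(&si).unwrap() // descending order
    });
    order
}

fn main() {
    let chunks = vec![vec![0.0, 1.0], vec![1.0, 0.0], vec![0.7, 0.7]];
    println!("{:?}", rank(&[1.0, 0.0], &chunks));
}
```

In a real index the vectors would be precomputed per file or chunk, so a keystroke only costs one query embedding plus a scan (or an approximate nearest-neighbor lookup).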

It was better than the fuzzy finder and better than cmd-t, which is the symbol search on the language server. Because at least with rust-analyzer on our code base, that can be really slow. So I used it as kind of this just quick, convenient navigation tool.

But then I was not super involved, and we pivoted that prototype modal experience into a feature of our search. And then I just stopped using it, because the friction of using it was too high. And the quality of the results we were getting, at least then, wasn't really high enough. I want to get back to that, restoring that modal fuzzy navigation experience of using semantics to quickly jump to different areas. But it's not like a search result, not quite, it's more like this quick thing. So that's one thing.

But the other thing I want to say is: I was skeptical of AI in general until I was proven wrong. So I want to be careful and humble about the possibilities of what can be done. But, in general, where I'm at right now is that I really want to still see what is going on. I don't want a lot of shit happening behind the scenes on my behalf, where it's writing a regex and then running it, because I don't have enough confidence that's going to work well.

So until I get that confidence I like the idea of there being this very visible hybrid experience. The AI is helping me use the algorithmic, traditional tools. Even OpenAI has the code interpreter, right? They're not trying to get the LLM to add numbers. They just shell out to Python. And so I think giving the AI access to these more algorithmic traditional tools is like where I want to go.
所以在我获得这种信心之前,我喜欢有这种非常可见的混合体验的想法。AI在帮助我使用算法化的、传统的工具。甚至OpenAI也有代码解释器,对吧?他们不是想让LLM来加数字。他们只是把它交给Python。所以我认为给AI访问这些更算法化的传统工具的权限,是我想要的方向。

Thorsten: Do you have any thoughts on the context windows? When you have a large context window, you would think all of the problems are solved, right? Just shove the whole codebase in. But then you also have to upload a lot of code and it takes a while longer until the response comes back. Any thoughts on this trade-off between context size and latency?
Thorsten:关于上下文窗口,你有什么想法吗?当你有一个大的上下文窗口时,你会认为所有的问题都解决了,对吧?把整个代码库都塞进去。但是你也要上传很多代码,需要更长的时间才能得到响应。对这种上下文大小和延迟之间的权衡,你有什么想法吗?

Nathan: I'm still wrapping my brain around what causes the additional latency when the context size grows larger. In my mental model of a transformer, I don't understand why it takes longer, but I can see practically that it does. So yeah, I guess, I'm revealing my ignorance here.
Nathan:我还在思考当上下文大小增加时,是什么导致了额外的延迟。在我对transformer的心理模型中,我不明白为什么它需要更长的时间,但实际上我可以看到它确实如此。所以,是的,我想,我在这里暴露了我的无知。

But to me it seems like giving it everything is a recipe for maybe giving it too much and confusing it. Although my understanding is that this is also improving; they're getting less confused now by noise and the needle-in-the-haystack problem. That's what I saw from Gemini, I'm still kind of waiting to get my API access. But what I saw was that it's very good at plucking out the details that matter among the sea of garbage.
但对我来说,给它一切可能会给它太多并让它感到困惑。尽管我的理解是,这也在改进,它们现在不会被噪音和大海捞针问题搞得那么糊涂。这就是我从Gemini看到的,我还在等待获得API访问权限。但我看到的是,它非常擅长从垃圾堆中挑选出重要的细节。

I don't know, that wasn't a very coherent thought other than it seems to me that we need to think about how to curate context for a while longer. And the times when I've interacted with models and been most successful has been either when I'm, again, like drawing from the weights, the latent space of that model, and very little needed in the context window because the problem I'm solving is sort of out there in the ether. Or I really set it up with the specific things that it needs to be successful.
我不知道,这不是一个很连贯的想法,除了在我看来,我们需要再考虑一段时间如何策划上下文。我与模型互动并取得最大成功的时候,要么是我再次从模型的权重、潜在空间中提取信息,而上下文窗口中几乎不需要什么,因为我要解决的问题就在那里。要么我真的用它需要成功的特定东西来设置它。

But to be fair, I think we have a lot to learn in this space. Yeah.
但公平地说,我认为我们在这个领域还有很多要学的。是的。

Thorsten: I asked because you said you used the fuzzy-search when you had it within reach, but once there was a little bit more friction you stopped using it. And I noticed, speaking of large context windows, that I already get impatient when I have to wait for ChatGPT sometimes. "Come on, skip the intro, give me the good stuff." With large context windows, I wonder whether I would rather skip asking when I know that the answer's gonna take 20 seconds to come back, or 10 seconds, or whatever it is.
Thorsten:我之所以问,是因为你说当模糊搜索在你触手可及的时候你会使用它,但一旦有一点摩擦你就不再使用它了。我注意到,说到大的上下文窗口,有时我必须等待ChatGPT时我已经变得不耐烦了。"来吧,跳过简介,给我好东西。"对于大的上下文窗口,我不知道当我知道答案要花20秒或10秒或无论多长时间才能回来时,我是否宁愿跳过提问。

Nathan: Yeah, I think the higher the latency, the more I'm going to expect out of what it responds with. I mean, I was just having a great time in the bath, while I waited for Claude to respond. I took a deep breath and felt the warm water on my body, you know, and then by the time it responds, I'm just reading it.
Nathan:是的,我认为延迟越高,我对它的回应就期望越大。我的意思是,我刚才在洗澡时玩得很开心,等待Claude回应的时候。我深吸一口气,感受到温水在我身上,你知道,等它回应的时候,我只是在读它。

Thorsten: I think you should redo this with a control group that also codes in the bath but without AI. Maybe the results are the same. It sounds like a fantastic bath. Let me ask some controversial questions... When I said I'm going to join Zed, people asked me, "oh, a text editor? Do we even have to write code two years from now, with AI?" What do you think about that? Do you think we will still type program language syntax into Zed in five years, or do you think that how we program will fundamentally change?

Thorsten:我觉得你应该重做这个实验,用一个对照组,也在浴缸里编码,但没有AI。也许结果是一样的。听起来是个梦幻般的洗澡。让我问一些有争议的问题......当我说我要加入Zed时,人们问我,"哦,一个文本编辑器?两年后我们还需要用AI写代码吗?"你怎么看?你认为五年后我们还会在Zed中输入程序语言语法吗,或者你认为我们编程的方式会从根本上改变吗?

Nathan: Yeah, it's a good question. I mean, I've tweeted out that it's kind of ironic that as soon as AI can write me a code editor, I won't need a code editor. But as of yet, it's not possible to sit down and say: build me a code editor written in Rust with GPU-accelerated graphics. I don't know. I don't think AI is there yet.
Nathan:是啊,这是个好问题。我的意思是,我发过推特说,讽刺的是,一旦AI可以给我写一个代码编辑器,我就不需要代码编辑器了。但到目前为止,还不可能坐下来说,给我建一个用Rust编写的、有GPU加速图形的代码编辑器。我不知道。我认为AI还没有达到那个程度。

Now maybe that's the only product complex enough. Maybe the only thing that AI can't build is a code editor, but I'm skeptical right now. Maybe Ray Kurzweil is right and we're all just going to be uploading our brains into the cloud and I just don't know. All I know is things are changing fast, but what seems true right now is at the very least I'm going to want supervisory access, like that Devin demo.
现在,也许这是唯一一个足够复杂的产品。也许AI唯一造不出来的东西就是代码编辑器,但我现在持怀疑态度。也许Ray Kurzweil是对的,我们最终都要把大脑上传到云端,我不知道。我所知道的是,事情变化很快,但现在看来至少有一点是真的:我还是想要监督权限,就像那个Devin演示的那样。

To me, a potential outcome is that editing code ends up feeling, for a while, like that Devin demo but with an amazing UX for having a human programmer involved in that loop, guiding that process so that we're not just spamming an LLM with brute-force attempts. Instead there's this feedback loop of the LLM taking action and the human being involved correcting or guiding that. Yeah, so it becomes this human-LLM collaboration, but the human is still involved.

对我来说,一个可能的结果是,编辑代码在一段时间内的感觉会像那个Devin演示一样,但有一个出色的用户体验,让人类程序员参与到那个循环中,指导那个过程,这样我们就不只是用蛮力尝试去轰炸LLM。相反,有这样一个反馈循环:LLM采取行动,参与其中的人类去纠正或指导它。是的,所以它变成了人类与LLM的合作,但人类仍然参与其中。

If that ends up not being true, yeah, I guess we don't need a code editor anymore. I don't know how far away that is, if it's ever gonna be here.
如果最后这不是真的,是的,我想我们不再需要代码编辑器了。我不知道那还有多远,如果它真的会到来的话。

They've been telling me for a long, long time that I'm gonna be riding around in these self-driving taxis and I've done it a couple times. But I will say, the taxi refused to park where we actually were in San Francisco. So we had to walk in pouring rain to the place where they pick us up. My mind is freaking blown that a car is automatically driving me, picking me up and driving me somewhere else, and at the same time, I'm a little annoyed that I'm walking through the rain right now to the place where it stopped. It sort of feels like the same thing happens with LLMs, right?
他们很长很长时间以来一直在告诉我,我会乘坐这些自动驾驶的出租车,我也这样做过几次。但我要说,出租车拒绝在我们实际所在的旧金山停车。所以我们不得不冒着倾盆大雨走到他们接我们的地方。我的思想被颠覆了,一辆车在自动驾驶我,接我,把我送到别的地方,同时,我有点恼火,我现在要冒雨走到它停下的地方。感觉就像LLM也发生了同样的事情,对吧?

Who knows what's gonna happen, but for the moment, I like creating software. I don't need to type the code out myself, but I do feel like I'd like to be involved more than just sitting down to a Google search box and being like, go be a code editor.

谁知道会发生什么,但就目前而言,我喜欢创建软件。我不需要自己输入代码,但我确实觉得我想参与其中,而不仅仅是坐在谷歌搜索框前,像一个代码编辑器那样。

Max: I'm bullish on code still being a thing for quite a while longer. I think it goes back to what Nathan said about the AI expanding the set of things that you can build in a shorter amount of time: it makes it easier to explore a bigger space of ideas, because it's cheaper.
Max:我看好代码在相当长一段时间内仍然是一个事物。我认为这要回到Nathan所说的,AI扩大了你能在更短的时间内构建的事物集合,它让你更容易探索更大的创意空间,因为它更便宜。

I think there will be code that it won't be anyone's job anymore to write, but that's boring code anyway.
我认为会有一些代码不再是任何人的工作去编写,但反正那是无聊的代码。

But I think it's just gonna make it possible to have more code because it's cheaper to maintain it, it's cheaper to create it, rewrite it if we want a new version. There'll be all kinds of things that weren't possible before. Like right now, banks aren't able to deliver good websites, and I think there may be a day where a bank could have a good website. There'll be software that is, for whatever reason, infeasible to deliver right now, and it will finally be feasible to deliver. And I think this is going to be code and I'm still going to want to look at it sometimes.
但我认为它只会让拥有更多代码成为可能,因为维护它更便宜,创建它更便宜,如果我们想要新版本,重写它也更便宜。将会有各种以前不可能的事情。就像现在,银行无法提供好的网站,我认为可能会有一天,银行可以拥有一个好网站。将会有一些软件,由于某种原因,现在无法交付。最终将可以交付。我认为这将是代码,我有时仍然会想看看它。

Nathan: Yeah, it's an incredible commentary on the power of human incentives and the corruption of the banking system that a bank having a good website is the day before we achieve AGI.

Nathan:是啊,这是对人类激励的力量和银行体系腐败的一个令人难以置信的评论,银行拥有一个好网站,是我们实现AGI前一天的事。

Max: Ha ha ha ha.
Max: 哈哈哈哈。

Antonio: If you look at Twitter right now, it's like every post is saying AGI is coming out next month. I don't know. I don't really know. The honest answer for me is that I don't know. It's possible. That's one thing that annoys me about AI, just how opaque some of these things are.
Antonio:如果你现在看Twitter,就像每个帖子都在说下个月AGI就要出来了。我不知道。我真的不知道。对我来说,诚实的回答是我不知道。这是有可能的。这是AI中让我烦恼的一点,只是这些东西中有些是多么不透明。

In the past, with technology in general, if there were hard problems or complicated things, I could sit down and at least try to understand them and maybe even create them. With AI, unless you want to do something with ChatGPT or Claude, you have to spend millions of dollars. That part, I don't love that.
在过去,对于技术来说,如果有困难的问题或复杂的事情,我可以坐下来,至少尝试去理解它们,甚至可能创造它们。对于AI,除非你想用ChatGPT或Claude做点什么,否则你必须花上数百万美元。那部分,我不喜欢。

That's where my doubts come from, because it's very possible that engineers and researchers from these companies are right there with AGI, right there with super human intelligence, but how much of it is hype? I don't know.
这就是我的疑虑所在,因为工程师和这些公司的研究人员很可能就在那里,就在AGI那里,就在超人智能那里,但有多少是炒作?我不知道。

Thorsten: Here's a question that I'd love your thoughts on. I use ChatGPT a lot to do the "chores" of programming, some CSS stuff, or some JavaScript, or I use it to generate a Python script for me to talk to the Google API, and it saves me a full day of headaches and trying to find out where to put the OAuth token and whatnot. But with lower-level programming, say async Rust, you can see how it starts to break down. You can see that one thing seems relatively easy for the AI, but with this other thing, something happens there. And what I'm wondering is, is that a question of scale? Did it just see more JavaScript? Did it see more HTML than systems Rust code because it scraped the whole internet?
Thorsten:这里有一个问题,我很想听听你的想法。我经常使用ChatGPT来做编程的 "杂务",一些CSS的东西,或者一些JavaScript,或者我用它来为我生成一个Python脚本来与Google API对话,它为我节省了一整天的头痛和试图找出OAuth令牌该放在哪里之类的事情。但对于较低级别的编程,比如异步Rust,你可以看到它是如何开始崩溃的。你可以看到这另一件事对AI来说似乎相对容易,但这另一件事,那里发生了一些事情。我想知道的是,这是规模的问题吗?它只是看到了更多的JavaScript?它看到的HTML比系统Rust代码更多,因为它刮掉了整个互联网?

Max: I think solving problems that have been solved a lot of times, that require a slight tweak — I think it's great it works that way. Those are boring things because they've been solved a lot of times and I think the LLM is great at knocking those out. And some of the stuff that we do, which has been solved — I'm not going to say we're doing things that have never been done before every day — but a lot of the stuff we're doing day-to-day has not been solved that many times in the world. And it's fun. That's why I like doing it. So I'm not that upset that the LLM can't totally do it for me. But when I do stuff that is super standard, I love that the LLM can just complete it, just solve it.
Max:我认为,多次解决过的问题,需要稍作调整的问题--我认为它能这样工作很好。这些都是无聊的事情,因为它们已经解决了很多次,我认为LLM很擅长解决这些问题。我们做的一些事情,已经解决了--我不是说我们每天都在做以前从未做过的事情--但我们日常工作中的很多事情,在世界上还没有被解决过那么多次。而且很有趣。这就是我喜欢做这件事的原因。所以我并不那么沮丧,LLM不能完全为我做这件事。但当我做一些超级标准的东西时,我喜欢LLM可以直接完成它,解决它。

Nathan: I want the LLM to be able to do as much as it possibly can for me. But yeah, I do think that it hasn't seen a lot of Rust. I mean, I've talked to people in the space that have just stated that. Like they were excited, "oh, you're open sourcing Zed? I'm excited to get more training data in Rust." And I'm like, "me too", other than, you know, competitors just sitting down and saying, "build me a fast code editor" and then it's already learned how to do that and all this work comes to nothing. I don't know.
Nathan:我希望LLM能为我做尽可能多的事情。但是,我确实认为它没有看到很多Rust。我的意思是,我和这个领域的人交谈过,他们就是这么说的。比如他们很兴奋,"哦,你要开源Zed了?我很高兴能获得更多的Rust训练数据。"我也是这么想的,除了,你知道,竞争对手坐下来说,"给我建一个快速的代码编辑器",然后它已经学会了如何做,所有这些工作都白费了。我不知道。

But also if that were true, ultimately I'm just excited about advancing the state of human progress. So if the thing I'm working on ends up being irrelevant, maybe I should go work on something else. I mean, that'd be disappointing, I would like to be successful... Anyway, I don't know how I got on that tangent.

但如果这是真的,最终我只是对推进人类进步的状态感到兴奋。所以如果我正在做的事情最终变得无关紧要,也许我应该去做别的事情。我的意思是,那会令人失望,我希望能成功......不管怎样,我不知道我是怎么说到这个问题上的。

But writing Python with it, which I don't want to write but I need to because I want to look at histograms of frame times and compare them? Thank you. I had no interest in writing that and it did a great job and I'm good.
但用它写Python,我不想写,但我需要写,因为我想看看帧时间的直方图并比较它们?谢谢。我对写这个没有兴趣,它做得很好,我很满意。

Antonio: There's also another meta point, which I guess we didn't really discuss. Even in a world where the AI can generate a code editor, at some point you have to decide how you want this feature to work. And I guess the AI could help you with that, but I guess there'll be a human directing that and understanding what the requirements are and what you're even trying to do, right?
Antonio:还有另一个元点,我想我们没有真正讨论过。即使在AI可以生成代码编辑器的世界里,在某个时候你也必须决定你希望这个功能如何工作?我猜AI可以帮你做到这一点,但我想会有一个人来指导它,理解需求是什么,你究竟想做什么,对吧?

Maybe that also gets wiped out by AGI at some point, but I don't know. Code at the end of the day is just an expression of ideas, yeah, and the knowledge that's within the company or within a group of individuals.

也许这在某个时候也会被AGI抹去,但我不知道。代码归根结底只是思想的表达,是的,是公司内部或一群人内部的知识。

I'm excited about AI in the context of collaboration. I think that would be like a really good angle for Zed as a collaboration platform.
我对协作背景下的AI感到兴奋。我认为这对于Zed作为一个协作平台来说会是一个很好的角度。

We've talked about querying tree-sitter for certain functions or the language server for references and that's some context you can give the AI. But what about all the conversations that happened? Like what about — going back to our previous interview — if it's true that code is a distilled version of all the conversations that have taken place, well, that's great context for the AI to help you write that code.
我们已经讨论过查询tree-sitter获取某些函数或语言服务器获取引用,这是你可以给AI的一些上下文。但是所有发生过的对话呢?就像--回到我们之前的采访--如果代码真的是所有已经发生的对话的精华版本,那么,这对于AI帮助你编写代码来说是非常好的上下文。
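The idea Antonio sketches here, querying the syntax tree for the definitions relevant to a question instead of dumping the whole file into the prompt, can be illustrated with a small sketch. Zed itself uses tree-sitter for this; the sketch below substitutes Python's stdlib `ast` module so it runs without extra dependencies, and all names (`collect_context`, the sample source) are illustrative, not Zed's actual code:

```python
import ast

# A toy "codebase" to curate context from.
SOURCE = """
def area(radius):
    return 3.14159 * radius ** 2

def perimeter(radius):
    return 2 * 3.14159 * radius
"""

def collect_context(source: str, symbol: str) -> list[str]:
    """Return the source of every top-level function mentioning `symbol`,
    suitable for pasting into an LLM prompt as curated context."""
    tree = ast.parse(source)
    snippets = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            # get_source_segment slices the exact text of this definition.
            text = ast.get_source_segment(source, node)
            if symbol in text:
                snippets.append(text)
    return snippets

# Both functions mention `radius`, so both are selected as context.
print(len(collect_context(SOURCE, "radius")))  # prints 2
```

With tree-sitter the same selection is language-agnostic: a query pattern over the parse tree picks out function definitions, and the matched byte ranges become the curated prompt, rather than the whole codebase.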

Nathan: And we can capture that context with voice-to-text models or do multi-modal shit.
Nathan:我们可以用语音转文本模型捕获这些上下文,或者做多模态的事情。

I mean, my real dream is to just create an AI simulation of Max or Antonio — GPtonio, you know? I love coding with them because it lets me kind of float along at a higher level of abstraction a lot of times. I don't know, maybe I'm just being lazy. I should be typing more, but sometimes I feel like when I'm typing, I get in the way or whatever. I just love pairing and I love being that navigator and being engaged. So a multimodal model that could talk to me and write code as it's talking and hear what I'm saying. And I can also get in there and type here and there. That'd be amazing. That's not the same thing as collaboration though.
我的意思是,我真正的梦想就是创建一个Max或Antonio的AI模拟--GPtonio,你知道吗?我喜欢和他们一起编码,因为它让我在很多时候能在更高的抽象层次上漂浮。我不知道,也许我只是懒惰。我应该打字打得更多,但有时我觉得当我打字时,我会碍事或什么的。我就是喜欢结对编程,喜欢当导航员,喜欢投入其中。所以一个多模态模型可以和我说话,在说话的同时写代码,听我说什么。我也可以进去,到处打字。那就太棒了。但这和协作不是一回事。

But it would learn from watching us collaborate. That's my main thing. You know, yeah.
但它会从观察我们的协作中学习。这是我的主要想法。你知道,是的。

Thorsten: You could train the AI based on all the edits that Antonio has done over the last year or something. And all the conversations.
Thorsten:你可以根据Antonio过去一年左右所做的所有编辑来训练AI。还有所有的对话。

Antonio: Right, why did those edits take place? What was the reasoning? Why is it better to do things this way and not that way? What's the internal knowledge, the implicit knowledge that's not written down anywhere? We have it just because of shared context. Just sharing that context with the AI.
Antonio:对,为什么要进行这些编辑?推理是什么?为什么用这种方式做事更好,而不是那种方式?什么是内部知识,哪些是没有写在任何地方的隐性知识?我们有这些知识只是因为共享的背景。只是与AI分享这个背景。

Nathan: When we started Zed, we always had this idea that wouldn't it be cool to just put my cursor on a character and say, show me that character being written. This idea that there was all this conversation and context tied to all this code and we could store it and make it easily accessible. But even that mode is like, it's a lot of shit to comb through, right? So having a tool that could be really good at distilling that down and presenting it in an intelligent way, that's amazing. And that's a really exciting development for collaboration, bringing all this data, that we could potentially capture but aren't really yet, to our fingertips in a new way.
Nathan:当我们开始Zed的时候,我们总是有这样一个想法,如果我把光标放在一个字符上,说,给我看看这个字符是如何写成的,那不是很酷吗。有这样一个想法,所有这些对话和上下文都与所有这些代码联系在一起,我们可以存储它并使它易于访问。但即使是那种模式,也有很多事情要梳理,对吧?所以有一个工具可以很好地提炼它并以智能的方式呈现出来,那就太棒了。这对协作来说是一个非常令人兴奋的发展,以一种新的方式将所有这些我们可能捕获但实际上还没有捕获的数据带到我们的指尖。

Thorsten: One last question, a look into the future. Do you know what Andy Warhol said about Coca-Cola? That it's the same whether the president drinks it or you drink it, that there's no premium Coca-Cola and normal Coca-Cola, but that everybody gets the same. Programming is like that to me. I can write code online with the best of the world and it looks exactly the same on GitHub. My avatar is next to their avatar. It's a level playing field. I can put my website up and it's also thorstenball.com, right next to the websites of large companies. There's no exclusive club. And what I keep thinking about is that all you ever needed to program was a computer; even a Raspberry Pi is enough. But now with AI and LLMs, suddenly things have started to become really expensive. To play in the AI game, you need a lot of money, or GPUs. Do you think about that? Do you see the shift, or do you think it's always been the case that if you wanted to run a scalable web service with millions of users, you had to have money?
Thorsten:最后一个问题,展望未来。你知道安迪·沃霍尔对可口可乐说过什么吗?不管是总统喝它还是你喝它,都是一样的,没有高级可口可乐和普通可口可乐之分,但每个人都能得到一样的东西。编程对我来说就是这样。我可以和世界上最优秀的人一起在网上写代码,在GitHub上看起来一模一样。我的头像就在他们的头像旁边。这是一个公平的竞争环境。我可以把我的网站放上去,它也是thorstenball.com,就在大公司网站的旁边。没有排他性俱乐部。我一直在想的是,你要编程所需要的只是一台电脑,即使是树莓派也足够了。但现在有了AI和LLM,突然之间事情开始变得非常昂贵。要玩AI游戏,你需要很多钱,或者GPU。你考虑过这个吗?你看到这种转变了吗,或者你认为如果你想运营一个拥有数百万用户的可扩展网络服务,你总是需要钱?

Antonio: It might also just be that today the cost of the hardware is because we're not there yet technologically, right? Things have gotten a lot cheaper in CPU land, there's so many of them now. So, I could see a world in which things become a commodity because they become so cheap.
Antonio:也可能只是因为我们在技术上还没有达到那个程度,硬件的成本才会这么高,对吧?在CPU领域,事情已经变得便宜了很多,现在有这么多CPU。所以,我可以想象一个世界,在这个世界里,事物因为变得如此便宜而成为商品。

Nathan: Think about the late 70s, right? Ken Thompson and Dennis Ritchie, writing Unix on a teletype printer, hooked to a DEC PDP-11 that went up to the ceiling of my room here. Right? And talk about the democratization of access to compute. That wasn't always the case. It just seemed like we were in an era where compute ceased to be the limiting factor on innovation for a really long time. But maybe that's true again now and who knows where it's going.
Nathan:想想70年代后期,对吧?Ken Thompson和Dennis Ritchie在电传打字机上编写Unix,连接到一个PDP11的机箱,它一直延伸到我房间的天花板。对吧?谈论计算资源获取的民主化。情况并非总是如此。似乎我们身处一个时代,在很长一段时间里,计算能力不再是创新的限制因素。但也许现在又是如此,谁知道它会走向何方。

Thorsten: That's beautiful. Nice.
Thorsten:那真是太美了。很好。
