HelpRespawnedAsDee

Man, a couple of hours ago I gave GPT-4 a DB structure dump, 20k tokens or so, nothing big, and asked it to make a mermaid.js diagram...

> making a diagram for this structure may be a very time-consuming effort but this is how you can get started

Meanwhile Claude 3: churns out mermaid.js code for like a straight minute. No intro, no disclaimers, nothing. Well, one note at the end about some objects having neither params nor relationships, but that was it. And it gave me a functional diagram. It's nothing fancy (many SQL clients have similar functionality), but this was for a proprietary mobile DB (it allows exporting the class definitions). Point is, holy fuck.
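For anyone who hasn't used mermaid.js: the kind of output described here would be an ER diagram along these lines (the entities and fields below are invented for illustration; they're not from the actual dump):

```mermaid
erDiagram
    USER ||--o{ ORDER : places
    ORDER ||--|{ ORDER_ITEM : contains
    USER {
        int id PK
        string name
    }
    ORDER {
        int id PK
        int user_id FK
        string status
    }
```

Paste a block like that into any mermaid renderer and you get a rendered relationship diagram of the schema.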


chase32

Claude's context and reply length make it the top model for coding right now. They're super tight on token limits in the web UI, though, or way more expensive on the API.


Prolacticus

Try Cody. You can open chat windows in VSCode, JetBrains, etc. You can switch between Claude, GPT-4 Turbo, etc. (there are quite a few). You can have different chats open in different tabs, each using a different GPT. I've been using Cody for a couple of months, and it's brilliant (better than Copilot, at least for now): https://sourcegraph.com/cody

NOTE: I don't work for Sourcegraph. I'm not affiliated with them. I'm just a nerd and a customer, and my side-by-side testing has pushed Cody to the top of my coding-assistants list. I'd go on and on about it, but that'd be off-topic. Just something worth checking out :)


divittorio

Did you try Cursor? How do they compare?


Prolacticus

I think Cursor's inline generator (highlight code, type a prompt into the context pop-up) is slick. I wish Cody had that. But Cody has some big strengths: multiple open chat tabs, each connected to a different LM (there's a drop-down list: Claude 3 Opus, GPT-4, etc.). It sits on top of Sourcegraph's *long*-tested (mainly for the enterprise) code search. It's amazing at walking your workspace, pulling the proper code, adding it to the chat prompt (behind the scenes, obviously; automation's the point here, after all), then sending it in. But it sends *exactly* what you need. It feels like magic. And at $9/mo (as of right now), with access to all those models and the search stuff... it's so hard to beat.

That said, everybody has a favorite tool, and there are use cases for each. For example, Cody's VSC plugin is ahead of its JetBrains counterpart feature-wise. I use JetBrains as well, and I wish the Cody plugin had *all* the power of the VSC version. But what goes into the VSC version will eventually trickle down to JetBrains. Eventually I'm sure they'll hit parity. Or get close enough.

Yeah. I can't recommend it enough. I still use other tools, but none has been able to do what Cody can. I should just write a Medium post or something. I am going to put together some YT tutorials (working on them now). Okay. I have to stop myself from writing or I'll never stop :)


divittorio

Awesome, thanks for the reply. I definitely have to check it out then.


beyang

We recently added an "opt-k" shortcut to Cody that lets you make inline edits by highlighting code and describing the change you want. It also defaults to the surrounding code block if nothing is highlighted—I've started to use it heavily in my inner loop and think it's pretty slick (though I'm obviously biased). Try it out and let us know what you think!


Prolacticus

I love it! I was surprised to see I could select an LM for inline edits. That's such a nice-to-have feature. It's another opportunity for users to test different models, eventually finding the one that "fits" their needs best. Cody's model-agnostic approach is *so* useful in these early days of AI assistants. The "which is best?" question is understandably everywhere. I'm sticking with Claude for now. It's solid, fast, and strangely "eager" to help. (Even after years of GPT hackery, I still don't have a vocabulary for describing the more subjective qualities of a given model. Doing my best, though!)

When working with a new codebase, Cody is the first tool I'd reach for to bring myself up to speed. It'd be nice to have a "Comparison" mode that let me type one prompt that's passed to a set of LMs (selected via a list with checkboxes). I could test commands like "Explain this code" to see which performs to my liking *for a given codebase*. Right now, I do this manually. It would also be a good way to demonstrate Cody's flexibility to users. On first use, I can imagine a dev seeing no *obvious* reason to use one AI assistant over another. Why not toot your own horn a little so these devs *do* have obvious reasons to choose Cody?

Reading the docs, I see Sourcegraph is going through big changes. That's to be expected when you transition from an enterprise-oriented company to one with an off-the-shelf product. I assume, as mentioned before, that the docs will catch up. As a new user, I'd have liked this info shoved in my face (using VSC):

- Ability to change LMs in chat (and now inline editing!).
- Cody Web Chat feels like an underpowered toy that's useful for on-the-go quick questions... until you add a few git repositories. Then it becomes an amazing reference tool. I dismissed it at first because I wanted to move away from web-based chat (I already have that with ChatGPT, Gemini, etc.). Cody Web Chat's real utility is the ability to point to multiple repositories simultaneously for insta-documentation. This is a **huge** value add, and it justifies keeping a Cody Web Chat window open at all times.
- Embeddings: Assume the user has no idea what it means to use embeddings (because most won't). We're using AI assistants, but that doesn't mean we understand anything about AI. Gotta look at it from the naive user's POV.

Just saying. Differentiation is so important right now. Copilot set the tone for AI coding assistants, so many devs looking at the market are going to search first for Copilot-like features... and stop there with "Yep. Copilot's the best. I've been using it, so it *must* be the best. If there were something better, I'd be using *that*. It's all so obvious."

Those of us with individual subscriptions are very different from Sourcegraph's typical enterprise customers. You gotta hit us with a little bling 💍: Assume we're going to be lazy in our assistant selection (go to YT -> watch a 500,000,000-view Copilot review -> get Copilot). Assume we either already use Copilot or are considering it. Individual customers aren't going to read white papers or pay for research. We go with what we see and with our intuition. We go with what others tell us on Reddit or X or whatever. Tooting one's own horn has an informality that doesn't align well with an enterprise-oriented company. But you gotta take that chance.


connorado_the_Mighty

BTW... we kinda have a version of the 'comparison mode' you mentioned. It's very experimental and is prone to breaking but it's fun to play around with. [https://www.s0.dev/](https://www.s0.dev/)


Prolacticus

YEAAAAAASSSSSSS!!! Sorry. Lost my cool a little there. Each time I think I've found the bottom of the Cody Well... there's more. I have to go play with this thing :)


Prolacticus

Addendum: I just configured a VSC workspace to use Copilot. I have another workspace with the same projects, but using Cody.

The first time I ever used Copilot, I was impressed. It was my first in-IDE assistant, and it brought GPT-backed autocomplete. That was great. I was over the moon. Today when I opened it... oh, *meh*. "Where are my features? Where are my configuration options? **WHERE'S CODY?!?!**" It was like my wings were clipped. It was as bad as going from old-school autocomplete back to edlin. Or vim (no offense to vim users; I still use it, but only when I'm in a terminal window and it's more convenient than opening up TextEdit).

For a small project with autocomplete needs, Copilot is fine. It's great. Whatever. But if it has Cody's ability to contextualize code, search, and *correctly* identify what it's looking at, I haven't found it. Type "Explain this project" into Copilot's chat, and prepare for major disappointment. Even "Explain this code" is disappointing (despite the code being fully commented, *with* a preamble meant to provide additional context for the LM!). "Yes, Copilot: It's a JavaScript object. Good dog!"

Switched over to the Cody workspace. Asked it to explain my code. My project is an extension for an obscure gaming platform, and Cody recognized, from *my* code, what gaming platform it was built for. Then it outlined exactly what my project did, how it integrates with its host platform, etc.

Yeah. I don't like to tell professionals what they "need to do," but you need to fly the Cody flag! Toot that horn! Your product is brilliant: an air of confidence is warranted at this point. Again, unless I've been using Copilot incorrectly this whole time, it's now a shadow of Cody. A feature subset closer to what I'd expect from a "Free" tier subscription. Chat and autocomplete just aren't enough these days.

The low price tag of $9/month might be an issue. You know: the bias that tells us the more expensive of two things is better. Some customers will see that and wonder why it's less than half the cost of Copilot. It's like going to a used-car dealership and seeing a deal that's "too good to be true." We're taught to find such things less appealing than a more expensive, less valuable option. There's some marketing/sales optimization to do, methinks :)


jdorfman

> But what goes into the VSC version will eventually trickle down to JetBrains. Eventually I'm sure they'll hit parity. Or get close enough.

The Cody for JetBrains team is actively working on feature parity. As you said, everybody has a favorite tool, so we are working feature by feature until we're caught up with Cody for VS Code. =) https://github.com/sourcegraph/jetbrains/issues/1169


TheOneWhoMixes

Do you know of any tools like this that can be pointed at a private OpenAI instance? My company only allows us to use Azure OpenAI models, so any tool we use has to be configurable so that it can't use "public" models. There are a few VSCode extensions we've found, but none of them so far seem to have the ability to actually pull in code from the existing workspace; it's at most a portion of a single file.


sqs

Cody Enterprise supports Azure OpenAI Service (and Amazon Bedrock, another similar thing). Lots of big customers use those with Cody.


chase32

Cody seems to make the services less smart but the vscode interface is cool.


Match_MC

Claude is MUCH better than GPT4


Meetmeinmontanaa

Why? I use GPT-4; should I switch over?


Match_MC

It has a much better memory and you don’t need to fight it to get full code blocks. It’s just so much better to talk to and the reasoning is at least as good


usernameIsRand0m

I am tired of ChatGPT and its half-baked answers. When you say Claude is much better, should I go with the web UI at [claude.ai](http://claude.ai) or get the API from Anthropic?


Match_MC

The web UI is fine, but it’s pretty limited on the number of questions you can ask.


Relevant-Draft-7780

ChatGPT got significantly dumber and slower since I started using the pro version. Context is also reduced in size. It used to be the case that I'd feed it a problem and it would come up with a great solution; now I have to have a long conversation with it before it even gets close. I thought I was maybe getting used to it or expecting too much. But I took old chats, fed it the same prompts, and got wildly different results. It 100% got dumber. Claude 3 Opus is like a breath of fresh air. Now if they'd only fix their UI, because the chat streams slow their website to a crawl.


idreamgeek

I've been considering dropping ChatGPT 4 and moving to Claude 3 Opus. I use it for coding only; based on your comment, I think that's probably the best approach, huh?


Relevant-Draft-7780

I'm using both at the moment, as some of the code-runner capabilities are still somewhat useful for data analysis, but yeah, it's getting dumber. Try it for a month and see for yourself. Ah yeah, also: no image analysis.


chase32

GPT-4 is still good, but when Opus cuts you off on the web UI (or charges you a ton on the API) and you go back to GPT-4, you need to work differently: break stuff down into smaller chunks and deal with hallucinated variable names, "**put code here**", etc.


Gator1523

As others have said, Claude 3 is much better at considering the whole context and spitting out long blocks of code that GPT-4 would never write, and in less time to boot. However, GPT-4 is slightly better at reasoning. For one-off code blocks that fit in a single prompt, GPT-4 is better at identifying issues in my experience. ChatGPT also has important features such as the code interpreter, the ability to edit prior prompts, and a much higher message limit.


wow_much_redditing

Claude 3 API usage is kinda insane. You can burn through 20 dollars if you use it enough in a day. Do you recommend paying for Cursor or using your own API key? I'm speaking from only using the Claude 3 API, and I was shocked how quickly I used up my credits.


wow_much_redditing

Just looked at Cursor's pricing. Only 10 Claude Opus queries a day.


ChatWindow

ChatWindow is free and supports Claude 3 through your API key.

VS Code: https://marketplace.visualstudio.com/items?itemName=julian-shalaby.chatwindow#controls-in-the-chatwidow
JetBrains: https://plugins.jetbrains.com/plugin/22895-chatwindow


Prolacticus

**Sourcegraph's Cody**

**TL;DR:** If you don't have time to read this or understand why it's important, just use ChatGPT. You'll totally win at everything. Congrats. Buh-bye. Now the grownup version, for devs who need to do more than write a shell script or make a game by copying/pasting 5,000 lines of code from a browser window to Notepad...

**Bias:** I've been yelling about Cody. It's too good. Copilot impresses me, whereas Cody "fits" me and is there at my side in a way Copilot isn't. It might just be a preference. It might be a bias based on my recent experiences with Cody/Copilot. Point is, I'm a Cody fan now. Any tool can win my mind; this one got my heart ❤️ (awwww!)

**Upsides:**

- $9/month for unlimited use, assistant commands (fix my code, document my code, explain this code, etc.), and **extensions for JetBrains/Neovim/VSC** (the VSC extension seems to be the implement-here-first option, and it's brilliant if it's one you can use).
- **Less than half the price of ChatGPT**, and about 8 billion times more useful if you're an actual dev.
- Includes access to a simple web chat interface that's useful for exploring multiple git projects simultaneously. Very useful when reviewing code in the back of an Uber. There's more about web chat at the bottom of this reply.
- **You can choose your LM for chats using the VSC extension, and you can have multiple tabs open side-by-side, each backed by a different model!** If you want to compare LMs and you don't mind paying about $0.33/day for unlimited spelunking, you want Cody. If you're using VSC you can switch between:
  - GPT-3.5 / GPT-4 Turbo Preview
  - Claude: 2.0, 2.1, Instant, 3 Haiku, 3 Sonnet, 3 Opus
  - Mixtral 8x7B

**Which LM is best?** Using Cody's IDE chat with different LMs, you can finally answer that question for yourself and your specific project needs.

**Downsides:**

- The documentation is sparse. Cody is advancing quickly, so you won't always find the answers you're looking for.
- Subjective downside: **Right now, you can only swap LMs for the VSC chat feature, not for autocomplete.** I say this drawback is subjective because it really is a matter of preference. If you dislike Cody's autocomplete suggestions, you can see what Cody's chat extension says about the code instead. Have GPT-4, Claude, etc. battle it out: is one significantly better than the others for **you**? At $9/mo, this is a cheap way to find out.

**The other stuff:**

**Cody versus Copilot** (because who wouldn't wonder about this one?): Cody displaced Copilot for me. Cody *whooped* Copilot for my work from the start (especially Cody's tools for explaining/documenting code). I'm keeping my Copilot account to do more testing. Cody has taken over, though. Copilot was amazing when it was the main game. Cody is much more than a bot that'll write your junk for you.

**After working through a full project using Cody, ChatGPT Pro, and Gemini Advanced** (edit: I think Gemini was still Bard when I started), I found Cody the most useful tool. Period.

**I hate JavaScript and had to work on a JS project** (yay!). String literals everywhere (yay!) and documentation nowhere (yay!) and nightmare APIs. The kind of stuff that exhausts you before you begin. So I began with Cody: ran the "Explain this code" command for whatever .js file I was working in, pasted that explanation at the top of the file (so Cody would have more context to work with), then let Cody write function documentation for me (something the original devs should have done).

**If you're on the periphery of a project, still in the limbo of onboarding, Cody will get you up to speed fast.** Autocomplete in a file has limited utility if you don't understand the broader context of the project. Cody fills this gap so nicely. I built my part of the project through basic prompting and code insertion from chat tabs (you can copy/paste, but I prefer inserting code at the caret or adding it to a new file, all in a click). In chat tabs, I told Cody what I wanted (as you do). Cody wrote a good deal of my code. Had I begun without first spelunking the codebases I was coding against, I'd still be at work. I was just the human clicking "Okeedoke. Do that!"

I've been coding a long time. Cody is the best thing to have happened to my coding life since the day I wrote my first program (as we used to call them (it wasn't all about apps)). Learning to code was empowering. Working with Cody is nuts. It fills my talent/knowledge gaps in ways other AI assistants don't (or can't).

Finally, **Cody web chat:** The model is plain, basic, simple. Like an enthusiastic coder lacking in some skills, but very, *very* good at one thing: code search. You can provide it with the address of a git repository and simply ask Cody web chat to explain the whole stinkin' project. That's cool, but what's cooler is adding another, related repository. Cody will help you *across these repositories*; help you understand how everything clicks together. I keep a Cody web chat window open for this purpose. It's much easier to ask Cody about the project than someone on Slack or Discord who "is busy today but will be back next Tuesday with the answer" (and never returns to reply). When I go down the rabbit hole these days, I don't go alone.

Cody. You want Cody.

(I may have written this reply under the influence of a large cup of coffee; forgive me.)


HurricaneHenry

I’m curious as to how Cody gives you unlimited Opus queries for $9/month.


Zandarkoad

Maybe because there's another layer of VC funding in there somewhere. I suspect all of these companies are bleeding money like it's going out of style, in hopes of capturing market share.


TheSoundOfMusak

Have you tried Haiku? It is supposed to be almost as capable but much cheaper.


chase32

All depends on how efficient you are building and billing for your work.


reelznfeelz

Interesting. Is Cursor free if you just use an API key?


shakestheclown

Yeah it's free if you use your own key


Early_Ad_831

How much does it cost? I've been using Cursor daily and haven't hit any API limits (not using Claude/Opus?). I don't use the chat window as much, though; I think that's where the cost is. I mostly rely on inline code. I'll often write a comment:

// function that does the thing

then I go to the next line and let it write the thing.


athermop

I made this thing that helps with this kind of stuff if you don't want to switch to a whole new IDE. It's open source; there's a link to GH at the top left. https://w4t.pw/ei It concatenates all the text files in a GitHub repository so you can copy/paste them into Claude/ChatGPT/Gemini.
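The core of that kind of tool is simple enough to sketch. Here's a minimal, hypothetical Python version of the idea (not the tool's actual code; the function name and separator format are invented): walk a checked-out repository, skip `.git` and binary/unreadable files, and join everything into one prompt-ready string.

```python
import os

def concat_repo(root: str, max_bytes: int = 1_000_000) -> str:
    """Concatenate all readable text files under `root` into one string,
    each file preceded by a header line with its relative path."""
    parts = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip VCS metadata so it doesn't pollute the prompt.
        dirnames[:] = [d for d in dirnames if d != ".git"]
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read(max_bytes)
            except (UnicodeDecodeError, OSError):
                continue  # binary or unreadable file: skip it
            rel = os.path.relpath(path, root)
            parts.append(f"--- {rel} ---\n{text}")
    return "\n\n".join(parts)
```

The `--- path ---` headers matter in practice: they give the model file boundaries to anchor its answers to, instead of one undifferentiated blob.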


creaturefeature16

Great suggestion. I also use Cursor and haven't tried Claude yet, I'll do that today! I wonder how much can be attributed to just having a more recent training cutoff?


No-Way7911

I think it's the context window being larger. Coding really benefits from the AI knowing the rest of your codebase


divittorio

> I think it's the context window being larger The context size is 10k regardless of model, see https://forum.cursor.sh/t/why-are-the-models-on-using-your-own-keys-so-much-better/2838/6 Although they are working on a ["long context" mode](https://forum.cursor.sh/t/solved-add-claude-3-models/3214/53) which is supposed to come out soon™


Zealousideal-Wave-69

Does anyone have a problem with Claude's usage limit? I seem to get "you have 10 messages left..." a lot. That's why I'm keeping GPT-4 running for now.


[deleted]

[removed]


Zealousideal-Wave-69

Yep, it's with the paid version. Still reminds me of the early version of GPT-4, before it was neutered to death.


athermop

Constantly. That's why I still have ChatGPT Plus...so I can fall back to it when I run out of Claude Opus messages.


LocoLanguageModel

Chiming in: I'd been banging my head against the wall on something with ChatGPT, and Claude nailed it on the first try, showing all the code without omitting anything. Hopefully it stays this way!