Just curious, now that it's been a mo.. Uh, 8 days?
Mine have been glowing. I feel it's still underhyped; I've been able to achieve things previously impossible, in part thanks to the huge context length - granted, I haven't tested and don't have access to GPT-4-128k, but even in the small I often find I prefer Claude.
Unfortunately the message limit is still too small, and inference can get very slow, but if the trend continues and somehow we get a model of this level that's as fast as Groq, with practically unlimited use, it seems pretty revolutionary. At this point I think anyone working with code who doesn't see how this will change the craft is willfully blind. LLMs as they are today are already amazing mental prosthetics, and I couldn't see myself going without them.
Even if I worked with 100% novel code that somehow couldn't benefit from LLM assistance, I would still be a computer user, and I constantly use LLMs to write scripts and figure out how-tos.
But I see this as the moment where programmers got their first Photoshop. Sure, you still need to understand design, but the friction of production is greatly, greatly reduced.
I've been using GPT-4 a bit and it's pretty good at explaining stuff that I would otherwise need to look up myself, but not groundbreaking.
What should I be doing with LLMs to vastly improve my productivity?