I work in a field with lots of cheap microchips. I can tell you that the volume of counterfeit copies flooding in from China, as well as the speed at which they are copied, is truly breathtaking.
> It's definitely possible for AI to do a large fraction of your coding, and for it to contribute significantly to "improving itself". As an example, aider currently writes about 70% of the new code in each of its releases.
That number by itself doesn't say much.
Let's say I have an academic article written in Word (yeah, I hear some fields do it like that). I get feedback, change 5 sentences, save the file. Then 20 kB of the new file differ from the old one. But the change I made was only 30 words, so maybe 200 bytes. Does that mean that Word wrote 99% of that update? Hardly.
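In numbers, a minimal sketch of that 99% figure (the byte counts are the illustrative ones from above, not measurements):

# Naive "who wrote the bytes" metric for the Word example above.
file_delta_bytes = 20_000   # bytes of the saved .docx that differ
human_edit_bytes = 200      # ~30 words the author actually changed

word_share = 1 - human_edit_bytes / file_delta_bytes
print(f"'written by Word' under a naive byte-diff metric: {word_share:.0%}")  # -> 99%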
Or in C: I write a few functions for which my old-school IDE handled the indentation and automatically inserted the closing curly braces. Would I say that the IDE wrote part of the code?
Of course the AI-supplied code goes further than my two examples, but claiming that some tool wrote 70% "of the code" suggests a linear utility of code, which just doesn't represent reality very well.
Every metric has limitations, but git blame line counts seem pretty uncontroversial.
Typical aider changes are not like autocompleting braces or reformatting code. You tell aider what to do in natural language, like a pair programmer. It then modifies one or more files to accomplish that task.
Here's a recent small aider commit, for flavor.
-# load these from aider/resources/model-settings.yml
-# use the proper packaging way to locate that file
-# ai!
+import importlib.resources
+
+# Load model settings from package resource
 MODEL_SETTINGS = []
+with importlib.resources.open_text("aider.resources", "model-settings.yml") as f:
+    model_settings_list = yaml.safe_load(f)
+    for model_settings_dict in model_settings_list:
+        MODEL_SETTINGS.append(ModelSettings(**model_settings_dict))
The point is that not all lines are equal. The 30% that the tool didn't write is the hard stuff, and not just by line count. Once an approach, an architecture, or a design is clear, implementing it is merely manual labor. Progress is not linear.
You shouldn't judge your software engineers by lines of code either. Those who think through the hard stuff often don't have that many lines checked in, but those people are the key to your success.
"The stats are computed by doing something like git blame on the repo, and counting up who wrote all the new lines of code in each release. Only lines in source code files are counted, not documentation or prompt files."
The exact same discussions were going on about "western" models. Don't you remember the images of black nazis making the rounds in the name of inclusion? Same thing. This HN thread is the first time I'm hearing about this anti-DeepSeek sentiment, so arguably it's actually at a lower level.
Don't worry, the way things are going, you'll have that in the US as well soon.
Ironically supported by the folks who argue that having an assault rifle at home is an important right to prevent the government from misusing its power.