
Every metric has limitations, but git blame line counts seem pretty uncontroversial.
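
For reference, per-author surviving line counts can be tallied from git blame's porcelain output. A minimal sketch in Python (aider's actual stats script may differ):

  # Count surviving lines per author in a file, via git blame.
  import subprocess
  from collections import Counter

  def blame_counts(path):
      out = subprocess.run(
          ["git", "blame", "--line-porcelain", path],
          capture_output=True, text=True, check=True,
      ).stdout
      # --line-porcelain emits an "author NAME" header for every line
      return Counter(
          line[len("author "):]
          for line in out.splitlines()
          if line.startswith("author ")
      )

  print(blame_counts("aider/models.py"))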

Typical aider changes are not like autocompleting braces or reformatting code. You tell aider what to do in natural language, like a pair programmer. It then modifies one or more files to accomplish that task.

Here's a recent small aider commit, for flavor.

  -# load these from aider/resources/model-settings.yml
  -# use the proper packaging way to locate that file
  -# ai!
  +import importlib.resources
  +
  +# Load model settings from package resource
  MODEL_SETTINGS = []
  +with importlib.resources.open_text("aider.resources", "model-settings.yml") as f:
  +    model_settings_list = yaml.safe_load(f)
  +    for model_settings_dict in model_settings_list:
  +        MODEL_SETTINGS.append(ModelSettings(**model_settings_dict))
  
https://github.com/Aider-AI/aider/commit/5095a9e1c3f82303f0b...
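
For context, here is the resulting code as a self-contained snippet. The ModelSettings stand-in is an assumption for illustration; the real dataclass lives in aider/models.py:

  import importlib.resources

  import yaml

  class ModelSettings:
      # permissive stand-in for aider's real ModelSettings dataclass
      def __init__(self, **kwargs):
          self.__dict__.update(kwargs)

  # Load model settings from package resource
  MODEL_SETTINGS = []
  with importlib.resources.open_text("aider.resources", "model-settings.yml") as f:
      model_settings_list = yaml.safe_load(f)
      for model_settings_dict in model_settings_list:
          MODEL_SETTINGS.append(ModelSettings(**model_settings_dict))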

The point is that not all lines are equal. The 30% the tool didn't write is the hard stuff, and not just by line count. Once an approach, an architecture, or a design is clear, implementing it is merely manual labor. Progress is not linear.

You shouldn't judge your software engineers by lines of code either. The people who think through the hard stuff often don't have many lines checked in, but those people are the key to your success.
