I can't speak for other people. But for myself I'd rather have it as a --option than another command. I dislike tools that install a lot of different commands, it just gets harder to remember them all.
My curl complains when -J/--remote-header-name and -C/--continue-at are combined.
$ curl -JLOC- "https://news.ycombinator.com/news.css?nZdOgS3Y18zj0ynCo50h"
curl: --continue-at and --remote-header-name cannot be combined
curl: try 'curl --help' or 'curl --manual' for more information
I was wondering if that should be the default for wcurl, but the progress bar only works when downloading a single file; I was afraid users would think there's a bug whenever they downloaded more than one file and the bar wasn't there anymore.
The main reason I use wget is that it automatically retries downloads, which is vital for my not-so-great internet and an option I wish were on by default in curl, as half the world uses curl, and I then keep having to retry 400 MB downloads that I have trouble finishing.
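For anyone in the same boat, this is roughly the incantation I mean (the flag names are real curl options, but the URL and the exact values are just an illustration):

$ curl -L -O -C - --retry 10 --retry-delay 5 https://example.com/big-file.iso

-C - resumes a partial download and --retry/--retry-delay keep hammering away at transient failures.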
Then, if you look at the script, it's basically that, with some minor cleaning up and nicer error handling.
And you can use `-q` to have it skip the curlrc and use only the args you pass to it. curl has such an amazing amount of power, but that also means it has a lot of options for things folks take for granted (exactly like retries, etc.)
Makes me think that probably a lot of unattended shell scripts out there should use -q in case someone has a .curlrc altering the behaviour of curl, and thus breaking the expectations of the script.
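To make that concrete, a small sketch of the failure mode (the .curlrc contents here are hypothetical; note that -q/--disable has to be the first argument to take effect):

$ cat ~/.curlrc
--location
--max-time 5
$ curl -q -fsS -o out.html https://example.com/   # -q first: the .curlrc above is ignored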
The same can be said for wget with --no-config (and really any app that runs on any system) - if you're in the business of automating it, those config features need to not be ignored (or downright overridden to your script's desire)
In docker containers, though, it's safer to assume no pre-loaded rc files are embedded (but you 100% want to check your source container for things like that) - but for running in some user's workspace, the need to be careful is real.
Most of the time, though, those options should be safe; the only time you really need to turn off auto-retries is when trying to get super accurate data (like when observing odd behavior, you don't want things to simply "auto-work" while fine-tuning something, or while triaging an upstream-server or DNS issue)
I often write my scripts with calls like `\curl -q ...` so that I don't pick up any user's alias of curl and get consistent no-config behaviour across my system and others (although GNU curl vs. Mac curl (and other binaries, GNU vs. Mac vs. busybox) are always the real fun part). (If the user has their own bash function named curl, then it's on them to know their system will be special compared to others and results may be inconsistent.)
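A tiny sketch of that bypass (the alias here is hypothetical); `\curl` skips alias expansion, while `command curl` skips both aliases and shell functions:

$ alias curl='curl --silent'            # hypothetical user alias
$ \curl -q https://example.com/          # bypasses the alias
$ command curl -q https://example.com/   # bypasses aliases and functions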
Otherwise you have to specify -O for every URL. For the same reason, remove $1 and rely on all parameters being added at the end of the line. The example above will only download the first URL.
(I personally think this should have been the default from the beginning, -O should have set the behaviour for all following parameters until changed, but that is too late to change now.)
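For illustration (placeholder URLs), the repeated -O looks like this; drop the second -O and the second response body goes to stdout instead of a file:

$ curl -L -O https://example.com/a.tar.gz -O https://example.com/b.tar.gz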
There is also --remote-header-name (-J) which takes the remote file name from the header instead of the URL, which is what wget does.
> There is also --remote-header-name (-J) which takes the remote file name from the header instead of the URL, which is what wget does.
I don't think that's the case; that behavior is opt-in, as indicated in wget's manpage:
> This can currently result in extra round-trips to the server for a "HEAD" request, and is known to suffer from a few bugs, which is why it is not currently enabled by default.
Curl does not, for example, handle reconnections or use caching like wget does by default. And there are many other differences, which actually add up to quite a few arguments. You can see the list in this project.
macOS comes with curl but not wget by default. Sure, you can easily install it with Homebrew, but it would be nice to have wget functionality out of the box.
A better comparison here is that you have scissors that are more complex to use and will give better results when used correctly, but you are lazy and get new basic scissors instead.
I felt weird reading the title too, because I've downloaded things with `curl` as recently as 2-3 years ago, so why this?
And also, I'm pretty sure `wget` can do it, and better too.
As others pointed out, you can do that; you can also set them in .curlrc, or write a script if you want multiple URLs to be downloaded in parallel (not possible with an alias, see the sketch below), or now you can just use wcurl :)
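For example, a minimal sketch of the script/function route (the name and flag choices are mine, not wcurl's exact behaviour; --parallel needs curl 7.66 or newer):

dl() {
    # follow redirects, name files after the remote name, fetch all URLs in parallel
    command curl --location --remote-name-all --parallel "$@"
}
# usage: dl https://example.com/a.iso https://example.com/b.iso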
Note: wcurl sets a few more flags than that; it also encodes whitespace in the URL and downloads multiple URLs in parallel.
Sadly I can’t, since it depends on the util-linux version of getopt, which means it fails on BSD and macOS systems. Understandable, since it is always available on the specific target the script was written for, and it does make life easier.
I knew getopt was Linux-specific, but I thought the only impact was that the long-form arguments (--opt) would not work. It turns out it doesn't run at all instead.
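For reference, a rough sketch of the difference (the option names here are made up): util-linux getopt understands --long options, while the POSIX getopts shell builtin only handles short ones but works everywhere:

# util-linux getopt (Linux): long options work
getopt -o o: --long output: -- "$@"

# POSIX shell builtin, fine on BSD/macOS too, short options only
while getopts "o:" opt; do
    case "$opt" in
        o) output="$OPTARG" ;;
    esac
done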
We should be able to fix this within the next few days, thank you!
wcurl does a lot more than that, though: parallel downloads, retries, following redirects (and other things, which are all listed in the blog post).
There's also some value in providing a ready-to-use wrapper for users that install the curl package. Maybe wcurl will also show up in other distributions after a while, especially since Daniel also likes the idea.
I think that’s a pretty ungenerous take. Wrappers have a proven history of being really valuable contributions to tools (look at GCC and clang), and I think making tools easier to use is a great goal.
This happens so often. A developer could just read a man page or learn something deeply by example, but they don't. Instead they struggle for the rest of their lives in ignorance, even going so far as to adopt a hack to avoid having to learn a new thing. They will go far out of their way to not learn a new thing, and over time, cause themselves untold misery.
I think there's a bit more nuance than just ignorance. In some ways, reinventing the wheel is almost inevitable when we require backwards compatibility but prefer sane defaults. The expectations around what exactly should happen when making an HTTP request have evolved over time, but we can't change the way `curl` behaves by default without potentially breaking stuff all over the place. In the same way that I don't judge people who use macOS because it "just works" instead of using Linux like me, I don't think it's fair to treat people as if they're somehow failing as competent developers just because they don't want to spend a lot of time learning how to make one tool do what another tool does with no extra effort.
Except there are about a million ways to do this that don't involve releasing another tool. Everyone I know (including myself) would make an alias; that's exactly what they're for. If you want to get really fancy, you could make it a function in your shell's rc file, something like the sketch below. Or if you're REALLY zesty, you could even write an entire shell script for it.
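For concreteness, a hedged sketch of what I mean (the name and flag set are illustrative, not wcurl's exact behaviour):

# alias: fail on HTTP errors, follow redirects, keep the remote file name, retry
alias grab='curl -fLO --retry 5'
# or, instead of the alias, a function so it handles several URLs
grab() { for u in "$@"; do curl -fLO --retry 5 "$u"; done; }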
To a senior, tools like this make no sense, because they're unnecessary and contribute to bloat and waste, both for systems and for the time spent.
Many people cherish simplicity. They don't want to remember arcane spells of args. They want sane/comfy defaults. It has nothing to do with ignorance; they simply refuse to use something your way.
The point of having man pages is that you don't have to remember "arcane spells of args". For huge man pages, or man pages in general, there is certainly a learning curve in reading and applying them effectively. But it's also an underrated way of learning about the system you use and deploy your applications on.
You don't listen/understand: man pages are for learning. Just imagine a user refusing to learn, and not because of said user's ignorance. You probably didn't invent a new class of programs that do something absolutely new; you just wrote a program with poor design/defaults that requires the user to learn how to use said program rather than things being intuitive in general.
I can't even get developers to read error messages, let alone documentation. The number of times I've had senior (by title) individuals send me screenshots of stack traces asking for help is too damn high. This field is a joke.
Yes, it happens a lot to me as well. However, it baffles me why I would ever want to download and install a tool when I could just read a man page and create an alias for curl and some args. It's a perfectly capable tool that comes with every conceivable distribution.
It is ignorance, because aliases and shell scripts have existed for eons, and this is an overkill solution for a non-issue. I have a bunch of aliases for common flag combinations; I haven't released any of them as entirely separate programs.
Sorry you're getting downvoted and probably flagged, but I agree with you. The fact that someone made an entirely separate thing for this is mind-boggling, when a simple shell alias solves this problem, and takes literally 60 seconds to set up.
But then again, this is the same industry that thinks shipping 600 MB of Node packages and an entire instance of Chromium is what it takes to use ffmpeg, so I'm not surprised, just disappointed.