Looking through the git history of your project, it doesn't seem you got it right the first time either. I don't think the biggest strength of gluing "UNIX tools" together is "make it work fast & on the first try", but rather "have independent tools that each do 'one thing' well".
In relation to your tool, I think curl provides many more features that are easily accessible through command-line flags than the limited subset of HTTP capabilities you expose (for example, basic auth or a different set of headers). The same argument goes for mailing, setting headers, and so on.
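To make it concrete, here's roughly what adding just one of those features (basic auth) would look like with net/http; the -url/-user/-pass flags and the overall shape are my own invention, not anything the project actually has:

    // Hypothetical sketch: basic auth on an HTTP check with net/http.
    // The -url/-user/-pass flag names are made up for illustration.
    package main

    import (
        "flag"
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        url := flag.String("url", "", "page to fetch")
        user := flag.String("user", "", "basic auth username (optional)")
        pass := flag.String("pass", "", "basic auth password (optional)")
        flag.Parse()

        req, err := http.NewRequest("GET", *url, nil)
        if err != nil {
            log.Fatal(err)
        }
        if *user != "" {
            req.SetBasicAuth(*user, *pass)
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }

And that's one flag out of curl's dozens, which is really my point about reusing existing tools.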
With that said, tools that do one thing and do it well are the ones that get used; personally I'd just prefer it to be a function in <your-shell> instead :)
I mean, a tool can be really useful (I write tools this size all the time), but some of them need tweaks forever. I just think some of those 'tweaks' are already solved by other projects, which is why using already-written tools that are somewhat UNIX-y sounds like a good idea to me. That's what I tried to say; of course I don't want you to write a 100% complete program in the first commit, that would make everything I write look really bad in comparison. Just be prepared for that pull request that lands basic auth in your project, and the next PR after that :)
Nice idea but it needs work. Firstly, and most importantly, any open source project lives and dies on its documentation. Without a basic guide to what the thing even does, no one is likely to use or support the project. Give some love to your README.md file. How to use the project would be great.
Secondly, at the moment you're just doing a straightforward string comparison on the <body> of a page[1]. It'd be more useful if I could define something like a DOM querySelector or a regexp. It'd also be useful to be able to look at the page title in the <head>.
[1] At least, I think so. I've never used Go so that's just what I gather from reading the source.
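A regexp check, for what it's worth, looks like a small change on top of a string comparison; a rough sketch of the idea in Go (the names and shape are my guess, not taken from the project):

    // Rough sketch: match the fetched body against a regexp instead of a
    // plain string comparison. Names here are illustrative, not from the project.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "regexp"
    )

    func bodyMatches(url, pattern string) (bool, error) {
        resp, err := http.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return regexp.Match(pattern, body)
    }

    func main() {
        // A pattern like this would also cover the "check the page title" case.
        ok, err := bodyMatches("https://example.com", `(?i)<title>\s*Example`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("matched:", ok)
    }

A querySelector-style check would need an HTML parser on top of this, but the overall shape stays the same.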
This is a really short little program I wrote for a quick need I had. I added a simple README. There are a ton of ways to improve it (regexp, DOM walking, automatically figure out MX, ...); if people want to do that I'd be happy to take PRs.
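For example, figuring out MX automatically is mostly a standard-library call (net.LookupMX); a rough, hypothetical sketch of the shape such a change could take:

    // Sketch: look up MX records for a recipient's domain so the tool could
    // pick a mail server automatically. Purely illustrative, not current behaviour.
    package main

    import (
        "fmt"
        "log"
        "net"
        "strings"
    )

    func mailHost(address string) (string, error) {
        parts := strings.SplitN(address, "@", 2)
        if len(parts) != 2 {
            return "", fmt.Errorf("bad address: %q", address)
        }
        mxs, err := net.LookupMX(parts[1])
        if err != nil {
            return "", err
        }
        if len(mxs) == 0 {
            return "", fmt.Errorf("no MX records for %s", parts[1])
        }
        // LookupMX returns records sorted by preference; take the first.
        return strings.TrimSuffix(mxs[0].Host, "."), nil
    }

    func main() {
        host, err := mailHost("someone@example.com")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(host)
    }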
I tend to default to "stick it on Github and see if it helps someone else".
I assume I'm jealous of a project that brings nothing new compared to so many other solutions and still grabs 76 stars (as I write this). It seems, after all, GitHub stars are another way to say "I'm popular" and not so much a sign that a project is good.
That is a pretty rude comment and I would definitely argue it reflects a pretty narrow view of the world. I think the project is alright, and it looks quite useful if you need something to curl a site, check something, and blast out an e-mail (essentially your own IFTTT).
To the point about being "jealous of a project that brings nothing new compared to so many other solutions", I suspect the author of the program needed to call up a website, check for an event, and get notified; s/he probably found that to be far more of a motivation for building this than getting GitHub stars. Other people found it useful as well, and maybe it is easier for people to grok this implementation and build on it than other crawlers.
More broadly, Bitcoin combines a lot of well-understood and older technologies into something completely new. It seems your gripe was that this project didn't do that. I just want to point out that complex coordination and reorganization of current libraries/practices/technologies can be quite useful, novel, and interesting.
edit: I actually concur with the above post a bit more now. I do think things done in Go get a bit overhyped, and if this is what the parent was referring to, I suspect s/he was correct, even if a bit prickly in expressing it.
The Go devs are aware of it, and adamant that this stuff is fine and that they don't want to make the flag package "any more complex" since it's so easy to install a different one (never mind that of course people are going to use the built-in one...). I find this absolutely ridiculous given how nonstandard it is compared to today's shell conventions; -flag is supposed to be interpreted as -f -l -a -g, or as -f "lag" if -f takes an argument.
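To make the complaint concrete: with the standard flag package, a multi-letter argument is always a single flag name, so there's no getopt-style bundling at all. A small illustration:

    // Minimal illustration of the standard flag package's behaviour: an
    // argument like "-vf" is treated as one flag named "vf", not as the
    // bundled short options -v -f that getopt users expect.
    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        verbose := flag.Bool("v", false, "verbose output")
        force := flag.Bool("f", false, "force")
        flag.Parse()
        fmt.Println("v:", *verbose, "f:", *force)
    }

    // $ go run demo.go -v -f
    // v: true f: true
    // $ go run demo.go -vf
    // prints roughly: flag provided but not defined: -vf (plus usage, exit status 2)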
There are quite a lot of edge cases that can be triggered when fetching HTTP responses. Perhaps a small test suite would be beneficial in order to attract new developers who don't feel like breaking anything? (-:
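With net/http/httptest that kind of suite is cheap to start. A rough sketch, where checkBody is a hypothetical stand-in for whatever comparison the project actually does:

    // Rough sketch of a test using net/http/httptest; checkBody stands in for
    // the project's real check function (the name is hypothetical).
    package main

    import (
        "io"
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    // checkBody fetches a URL and reports whether the body contains want.
    func checkBody(url, want string) (bool, error) {
        resp, err := http.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return strings.Contains(string(body), want), nil
    }

    func TestCheckBody(t *testing.T) {
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            io.WriteString(w, "<html><body>tickets on sale</body></html>")
        }))
        defer srv.Close()

        got, err := checkBody(srv.URL, "tickets on sale")
        if err != nil {
            t.Fatal(err)
        }
        if !got {
            t.Errorf("expected a match against the test server body")
        }
    }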
I built the same thing using NodeJS a couple of weeks ago, with phantomjs support (JavaScript execution), Mandrill (emailing) and some other nice options: https://github.com/mgcrea/node-web-watcher
A browser extension is handy, but requires your browser to be open in order to work. A script, on the other hand, can just be thrown up on a server and forgotten about.
I reposted because I got the following email from HN:
Hi there,
https://news.ycombinator.com/item?id=10443814 looks good, but didn't
get much attention. Would you care to repost it? You can do so
here: https://news.ycombinator.com/repost?id=10443814.
Please use the same account (jgrahamc), title, and URL. When these match,
the software will give the repost an upvote from the mods, plus we'll
help make sure it doesn't get flagged.
This is part of an experiment in giving good HN submissions multiple
chances at the front page. If you have any questions, let us know. And
if you don't want these emails, sorry! Tell us and we won't do it again.
Thanks for posting good things to Hacker News,
Daniel
I got the same mail, and indeed my submission went from no attention at all to staying on the front page for a while. I figured maybe it had to do with posting time: maybe the email is sent when it's a good time to repost? Or the first upvote is crucial?
Interesting that the process needed you to repost it for the mods to boost it. Seems like they could have just fiddled with it without you having to manually interact with it.
Also interesting that HN is moving (has moved?) toward a curated site. HN asks for reposts of things they deem good. They also adjust downward the score of many articles (as can be seen through large jumps on sites that track article ranks; some of that will be automatic from the flamewar detector, some of it is likely manual).
It seems like we're reaching a "web 3.0" which uses users to do the expensive bit of an initial sift, but then the site admins edit/curate that into their own vision.
We're moving away from user driven content, back to curated content with user-sourcing.
Web 3 or not, I'd see it as an extension of user sourcing, where users have various levels of moderation powers. I would guess these HN emails (I also received one recently and duly reposted) are triggered by some count of admins voting up unloved posts, maybe from a list filtered by a user karma threshold.
As Jeff Atwood says about StackOverflow, it should be possible for a sufficiently privileged user to do just about anything staff can do.
Not really a new concept, as /. had the notion of metamoderation, but this is a richer model with multiple levels of user.
Could you please make this legal in the US by honoring robots.txt and scanning any links to the ToS for words forbidding "automated access", "crawling", "spidering", "polling", etc.?
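A very naive robots.txt check is possible with just the standard library; a hedged sketch that ignores User-agent groups, wildcards and Allow rules (a real implementation should use a proper parser, and this is not what the project does today):

    // Naive sketch: fetch robots.txt and refuse paths that appear in any
    // "Disallow:" line. A real implementation should honour User-agent groups,
    // wildcards, and Allow rules; this only shows the rough shape.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net/http"
        "net/url"
        "strings"
    )

    func allowedByRobots(target string) (bool, error) {
        u, err := url.Parse(target)
        if err != nil {
            return false, err
        }
        resp, err := http.Get(u.Scheme + "://" + u.Host + "/robots.txt")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        if resp.StatusCode == http.StatusNotFound {
            return true, nil // no robots.txt: assume fetching is allowed
        }
        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            if !strings.HasPrefix(strings.ToLower(line), "disallow:") {
                continue
            }
            prefix := strings.TrimSpace(line[len("disallow:"):])
            if prefix != "" && strings.HasPrefix(u.Path, prefix) {
                return false, nil
            }
        }
        return true, scanner.Err()
    }

    func main() {
        ok, err := allowedByRobots("https://example.com/some/page")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("allowed:", ok)
    }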
Hey jgrahamc, neat tool. Could you please add binaries to the repo? I know, I know, I can compile it myself, but not everybody has the luxury of installing Go just to try it out...