(co-author here) We automatically retry failures after a short while. We also log error messages in the worker (self-hosted) and have clear indicators in the cloud UI that something went wrong (with plans to add email alerts later).
The error handling is actually the hard part here. We don't believe that failing on inserts due to the endpoint being down is the right thing because that just moves the retry/error-handling logic upstream -- now you need to roll your own queuing system, backoffs etc.
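To make concrete what "rolling your own backoffs" upstream would look like, here is a minimal sketch of retry with exponential backoff and jitter. The function name and parameters are hypothetical, not part of any product API:

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Call fn, retrying with exponential backoff plus jitter.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Delays grow as base_delay * 2^attempt (0.5s, 1s, 2s, ...),
            # with a little random jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Every caller that talks to the flaky endpoint would need something like this, which is the duplication the comment above is arguing against.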
Thanks for the reply. These are compelling points.
I agree that the insert shouldn't fail, by the way. The insert is effectively an enqueuing action.
I was debating if a microservice should process that queue.
Since you are a PaaS, the distinction might be almost moot: an implementation detail. (It would affect the API, though.)
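For illustration, the "microservice processing the queue" idea could be as simple as a worker that polls a jobs table and marks rows done. This is a hypothetical sketch (table name, schema, and handler are invented), shown here with SQLite so it is self-contained:

```python
import sqlite3

def process_pending(conn, handler):
    """Process 'pending' jobs in order; failed jobs stay queued for a later pass."""
    cur = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id"
    )
    for job_id, payload in cur.fetchall():
        try:
            handler(payload)  # e.g. call the external embedding endpoint
            conn.execute(
                "UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,)
            )
        except Exception:
            # Leave the row pending so the next polling pass retries it.
            pass
        conn.commit()
```

Whether this loop lives inside the database's worker or in your own service is exactly the implementation detail in question; the insert-as-enqueue contract stays the same either way.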
However, if Postgres added this feature generally, it would seem odd to me, because it feels like the DB doing app-level work: the DB fetching data for itself from an external source.
The advantage is that it's one less thing for the app to do, and it handles a chore many teams otherwise have to roll their own code for.
A downside is that if I want to change how this is done, I probably can't. Say I have data residency or security requirements that affect the data I want to encode.
I think there is much to consider. Probably the "why not both" meme applies, though: use the built-in feature where you can, and roll your own where you can't.