
Interesting, thanks for sharing how you're doing it.

Is there any mileage in doing this with triggers? I have a _very_ legacy system which needs caching added. Rather than dig through the code to invalidate the cache every time a record is updated/deleted in 20+ tables, I am thinking that being able to listen to the SQL executed and invalidate the cache based on the tables involved would be a clean approach.

But I haven't found any way to make that possible - yet.
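
Roughly the shape I'm imagining, if triggers could be made to work - sketch only, the channel name, table name and statement-level granularity are all placeholders: one generic trigger function that reports which table changed, attached to each table feeding the cache, with the app LISTENing on the channel and invalidating by table name.

    -- Hypothetical sketch: a single generic trigger function that announces
    -- which table changed; a listening process invalidates cached entries
    -- for that table. Channel and table names are placeholders.
    CREATE OR REPLACE FUNCTION notify_table_change() RETURNS trigger AS $$
    BEGIN
        -- TG_TABLE_NAME is the table the trigger fired on; the payload stays tiny.
        PERFORM pg_notify('cache_invalidation', TG_TABLE_NAME);
        RETURN NULL;  -- AFTER trigger, so the return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    -- Attach the same function to each of the 20+ tables
    -- (PostgreSQL 11+ syntax; older versions use EXECUTE PROCEDURE).
    CREATE TRIGGER orders_cache_inval
        AFTER INSERT OR UPDATE OR DELETE ON orders
        FOR EACH STATEMENT
        EXECUTE FUNCTION notify_table_change();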



Yeah, I also used triggers at the start. There are two things this implementation achieves over triggers.

1. Triggers have an 8000-byte limit. I ran up against this limit pretty quickly.

2. You need to attach the trigger each time you create a new table. With this you can set and forget.


There's no 8k limit with triggers? Could you expand on what you mean by that?

One of the motivating factors for not using triggers is that the implied overhead is very significant. By logging changes separately with triggers, the write volume is roughly doubled, and the overhead of insertions is much higher (a lot of the fast/bulk paths can't be used, and the trigger processing itself has to be performed).


Sorry, I meant I was using triggers to send a NOTIFY, which has an 8k-byte payload limit.

Very true what you mention about trigger overhead. Also, you don't get guaranteed atomicity.
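
For anyone hitting the same limit: one way to stay under it is to notify with just the table, operation and key rather than whole rows. A rough sketch (assumes an "id" primary key column; the channel name is a placeholder):

    -- Hypothetical sketch: send only the table name, operation and primary key,
    -- keeping the payload far below NOTIFY's 8000-byte limit.
    CREATE OR REPLACE FUNCTION notify_row_change() RETURNS trigger AS $$
    DECLARE
        row_id bigint;
    BEGIN
        IF TG_OP = 'DELETE' THEN
            row_id := OLD.id;
        ELSE
            row_id := NEW.id;
        END IF;
        PERFORM pg_notify('cache_invalidation',
                          json_build_object('table', TG_TABLE_NAME,
                                            'op',    TG_OP,
                                            'id',    row_id)::text);
        RETURN NULL;  -- AFTER row trigger; return value is ignored
    END;
    $$ LANGUAGE plpgsql;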


I wrote a node.js package a few years ago that gets past the 8000-byte limit by sending multiple notifications.

https://github.com/numtel/pg-live-select
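
The general idea (not necessarily the exact mechanism in the package) is to split one oversized payload into numbered chunks that the listener reassembles. A rough sketch of the database side:

    -- Hypothetical sketch: work around the 8000-byte NOTIFY limit by sending a
    -- large payload as numbered chunks ("i/total:<data>") on one channel.
    CREATE OR REPLACE FUNCTION notify_in_chunks(channel text, payload text)
    RETURNS void AS $$
    DECLARE
        chunk_size constant int := 7000;  -- stay safely under the limit
        total int;
    BEGIN
        total := ceil(length(payload)::numeric / chunk_size);
        FOR i IN 1..total LOOP
            -- Prefix each chunk with its position so the listener can reassemble.
            PERFORM pg_notify(channel,
                              i::text || '/' || total::text || ':' ||
                              substr(payload, (i - 1) * chunk_size + 1, chunk_size));
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;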


You can use Apache Ignite as a caching layer over your DB (or as a DB itself). You can then register a ContinuousQuery to listen for changes. A gRPC service can be stood up to stream these changes to other clients, even front-end ones (via gRPC-Web).
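
Roughly what registering one looks like - sketch only, the cache name and key/value types are placeholders, and the gRPC forwarding is only hinted at in a comment:

    // Hedged sketch: register an Ignite ContinuousQuery so updates to a cache
    // are pushed to a local listener, which could forward them over gRPC.
    import javax.cache.Cache;
    import javax.cache.event.CacheEntryEvent;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.ContinuousQuery;
    import org.apache.ignite.cache.query.QueryCursor;

    public class CacheChangeListener {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            IgniteCache<Long, String> cache = ignite.getOrCreateCache("orders");

            ContinuousQuery<Long, String> qry = new ContinuousQuery<>();
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Long, ? extends String> e : events) {
                    // Forward each change to interested clients, e.g. over a gRPC stream.
                    System.out.printf("key=%s value=%s%n", e.getKey(), e.getValue());
                }
            });

            // The continuous query stays active while this cursor remains open;
            // cur.close() would deregister the listener.
            QueryCursor<Cache.Entry<Long, String>> cur = cache.query(qry);
        }
    }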



