It's so simple and it can run anything, and it was also relatively easy to have the CGI script run inside a Docker container provided by the extension.
In other words, it's flexible enough that extension developers can use any language they want and don't have to learn much about Disco.
I would probably not push to use it to serve big production sites, but I definitely think there's still a place for CGI.
Not anymore! Now the Salesforce status page just says:
> The Salesforce Trust site is currently experiencing a service disruption. During this time, users will be unable to access the Trust site or receive notifications about service-impacting incidents or maintenances. We will continue to provide updates here until the issue is resolved.
And the Heroku status page actually has recent updates, although they are just repeating this every hour:
> Heroku continues to investigate and remediate an issue with intermittent outages.
It's not super simple, but here's how we did it for a long time:
All of your income that's not invested for retirement goes to a joint account, every 1st of the month.
From that joint account, you then move the money to many different accounts and/or prepaid credit cards:
- Day to day expenses, like groceries.
- "Irregularities": municipal taxes, school taxes, things that don't happen every month, like car inspection, veterinary, vacation expenses. You add everything in a spreadsheet that you keep, and divide by 12. Add some padding.
- Home improvements, aka the "IKEA" account. Fixed amount every month. If it's a good that you buy to add to your home (coffee machine, chair, curtain, etc.), it comes out of this account. If it's a consumable (gas for the car), use "day to day expenses".
- Individual accounts: each one of you has an account to spend however you decide. Buying clothes, hobbies, etc. Aka "I'm an adult, I do what I want".
- Kids: There's always something to buy. Clothes, etc.
- Automatic payments are taken from the main joint account, but you could also have another account if it makes it easier.
This method can get confusing and you may need to move money from one account to another one when the wrong card was used.
But the main advantage is that it automatically budgets for you. You see the amount remaining for the month for everything.
Otherwise, you may think you still have plenty of money to spend on groceries, so you buy a nice coffee machine, and then the mortgage and car payments come out and you're left with nothing for groceries for the rest of the month.
The main goals we had when using this method were to never use a credit card (and lose track of how much money we have), and also accommodate for the discrepancy in our revenues at the time.
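The split described above can be sketched in a few lines of Python. Every account name, amount, and the 10% padding below are made-up examples for illustration, not the actual numbers we used:

```python
# Hypothetical monthly split of pooled income into "envelope" accounts.
# All figures are illustrative placeholders.

ANNUAL_IRREGULARS = {        # yearly costs that don't hit every month
    "municipal_taxes": 3600,
    "school_taxes": 1200,
    "car_inspection": 120,
    "veterinary": 480,
    "vacation": 2400,
}
PADDING = 1.10               # ~10% buffer on irregular expenses

def monthly_allocations(joint_income: float) -> dict[str, float]:
    # "Irregularities": sum everything in the spreadsheet, divide by 12,
    # add some padding.
    irregular = round(sum(ANNUAL_IRREGULARS.values()) / 12 * PADDING, 2)
    transfers = {
        "irregularities": irregular,
        "home_improvements": 200.00,   # the "IKEA" account
        "adult_a": 150.00,             # individual accounts
        "adult_b": 150.00,
        "kids": 250.00,
    }
    # Whatever is left covers day-to-day expenses; automatic payments
    # stay in the main joint account.
    transfers["day_to_day"] = round(joint_income - sum(transfers.values()), 2)
    return transfers
```

Running this once a month against the pooled income gives the transfer amounts; the leftover lands in the day-to-day account, which is what makes the remaining balance double as a budget.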
For projects where I know the team will remain small (less than let's say 15 developers), I usually push to keep the architecture as simple as possible.
I've used something similar in the past, but kept the expiration code in the app code (Python) instead of using "fancy" Postgres features like stored procedures. It's much easier to maintain, since most developers will know how to read and maintain the Python code, which is also committed to the git repository.
Also, instead of using basic INSERT statements, you can "upsert".
INSERT INTO cache_items
(key, created, updated, expires, value)
VALUES (...) ON CONFLICT ON CONSTRAINT pk_cache_items
DO UPDATE SET updated = ..., expires = ..., value = ...;
And since you have control over the table, you can customize it however you want. Like adding categories of cache that you can invalidate all at once, etc.
Postgres is also pretty good at key/values.
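Here's a minimal sketch of that approach, with the expiration check living in application code. The sqlite3 module stands in for Postgres so the snippet runs anywhere; with Postgres you'd use a driver like psycopg and the `ON CONFLICT ON CONSTRAINT` form shown above, but the logic is the same. Column names match the table above; everything else is illustrative:

```python
import sqlite3
import time

# sqlite3 as a stand-in for Postgres so this runs self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cache_items (
        key     TEXT PRIMARY KEY,
        created REAL NOT NULL,
        updated REAL NOT NULL,
        expires REAL NOT NULL,
        value   TEXT NOT NULL
    )
""")

def cache_set(key: str, value: str, ttl: float = 300) -> None:
    now = time.time()
    # Upsert: insert a new row, or overwrite the existing one.
    conn.execute(
        """INSERT INTO cache_items (key, created, updated, expires, value)
           VALUES (?, ?, ?, ?, ?)
           ON CONFLICT (key)
           DO UPDATE SET updated = excluded.updated,
                         expires = excluded.expires,
                         value   = excluded.value""",
        (key, now, now, now + ttl, value),
    )

def cache_get(key: str):
    row = conn.execute(
        "SELECT value, expires FROM cache_items WHERE key = ?", (key,)
    ).fetchone()
    # Expiration is checked in app code, not in a stored procedure.
    if row is None or row[1] < time.time():
        return None
    return row[0]
```

A periodic `DELETE FROM cache_items WHERE expires < now` job (cron, or just opportunistic on writes) keeps the table from growing unbounded.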
In other words, I agree that using Postgres for things like caching, key/values, and maybe even a message queue can make sense, until it doesn't. When it doesn't make sense anymore, it's usually easy to migrate that one thing off of Postgres and keep the rest there.
Also, one benefit that's not often talked about is the complexity of distributed transactions when you have many systems.
Let's say you compute a value inside a transaction, cache it in Redis, and then the transaction fails. The cached value is wrong. If everything is inside Postgres, the cached value will also not be committed. One less thing to worry about.
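A toy illustration of the point (again sqlite3 as a stand-in for Postgres, with made-up table names): because the cache write happens in the same transaction as the business write, a failure rolls both back together.

```python
import sqlite3

# A cache table living in the same database as the data it caches,
# so a single transaction covers both writes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)")

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("INSERT INTO orders (total) VALUES (99.0)")
        conn.execute("INSERT INTO cache VALUES ('order_total', '99.0')")
        raise RuntimeError("simulated failure before commit")
except RuntimeError:
    pass

# Both writes were rolled back together; the cache never saw the value.
```

With Redis on the side, the `orders` insert would roll back but the cached `order_total` would survive, silently wrong.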
As mentioned, this is a distributed system though, and realistically most microservices etc. aren't being fully rigorous about multi-phase transactionality and proper rollbacks.
Realistically it is the norm to YOLO updates at a service, and if one fails the whole thing 500s and things are left in an unexpected state. Often it is not even possible to guarantee a successful rollback: if your update back to the original state fails, then what is the application state now? Undefined and potentially invalid, pretty much. Most people just replay the request and hope it succeeds.
Obviously the right answer is "don't do that" or "offload that complexity into GraphQL or something", but in the real world… people don't.
Transactions can fail because they conflict with other transactions happening at the same time.
It's not an application bug. It's real life transactions happening on a production system. It's normal for that to happen all the time. The app can retry, etc., but it should be expected to happen.
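The usual app-side answer is a retry loop with backoff; a minimal sketch, where the exception class is a stand-in for whatever your driver actually raises (e.g. a serialization failure or deadlock error):

```python
import random
import time

class TransientTxError(Exception):
    """Stand-in for a driver's serialization-failure / deadlock error."""

def run_with_retries(txn_fn, attempts: int = 5):
    """Run a transaction function, retrying on transient conflicts."""
    for attempt in range(attempts):
        try:
            return txn_fn()
        except TransientTxError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            # jittered exponential backoff before retrying
            time.sleep(0.001 * (2 ** attempt) * random.random())
```

The important part is that `txn_fn` must be safe to re-run from the top: it recomputes its reads inside the new transaction rather than reusing values from the failed attempt.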
Having to deal with distributed transactions is not something easy. Especially when they're part of many different systems.
For example, you'd have to wait until the transaction commits successfully before setting the value in the cache, which makes the code harder to write and read.
Also, life in general happens. Compute a value, cache it, save things to the database, make API calls, and then a network error happens cancelling everything that you've just done. Having code that handles this kind of possibility is relatively hard to write/read.
Right but postgres isn't going to help with this if the application developer isn't doing safe and proper transaction management in the first place. What you described is a bug in the application logic for when and how to update the cache.
It's super hard to get this right. E.g. if you only update the cache after the transaction commits, you might commit without updating the cache, or if 2 writers interleave, the first one to commit might make the final update to the cache with a stale value.
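Spelled out, that second interleaving looks like this. It's a toy sequential trace standing in for two concurrent writers, not real concurrent code:

```python
# Two writers, A and B, each commit to the database and then update
# the cache. The commits land in order A, B, but the cache updates
# land in order B, A, so the cache ends up stale.
db = {}
cache = {}

db["x"] = 1       # writer A commits 1
db["x"] = 2       # writer B commits 2
cache["x"] = 2    # writer B updates the cache
cache["x"] = 1    # writer A updates the cache last, with a stale value

# The cache now disagrees with the database until the entry expires
# or is invalidated.
```

Nothing in either writer did anything "wrong" locally; the bug only exists in the interleaving, which is exactly what makes it hard to get right outside a single transactional system.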
Congratulations on the launch. Definitely a good idea. I'm a developer and personally won't need this, but I'll definitely suggest it when I think it could replace some "real coding".
http://grawl.it
A crawler to find broken links (404s, for example) on your websites. If it gets some traction, I might put more time into it, but right now it's on standby. Feedback appreciated.