This is so important if you want to avoid incredibly gnarly race conditions. In particular for us: jobs being run even before the transaction has been fully committed to the database.
We use a decorator for adding jobs to external queues, such that the function doing the addition gets attached to Django's "on transaction commit" hook and thus doesn't actually run until the outer database transaction for that request has been committed.
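A minimal sketch of that pattern. In real code the deferral would be `django.db.transaction.on_commit`; the `FakeTransaction` class here is a stand-in so the example is self-contained:

```python
import functools

class FakeTransaction:
    """Stand-in for Django's transaction machinery: collects callbacks
    and only fires them once commit() is called."""
    def __init__(self):
        self._callbacks = []

    def on_commit(self, fn):
        self._callbacks.append(fn)

    def commit(self):
        for fn in self._callbacks:
            fn()

transaction = FakeTransaction()

def enqueue_on_commit(fn):
    """Decorator: defer the enqueue call until the transaction commits."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        transaction.on_commit(lambda: fn(*args, **kwargs))
    return wrapper

sent = []

@enqueue_on_commit
def enqueue_job(job_id):
    sent.append(job_id)  # stand-in for pushing to Redis/SQS/etc.

enqueue_job("send-welcome-email")
assert sent == []  # transaction still open: nothing enqueued yet
transaction.commit()
assert sent == ["send-welcome-email"]  # fires only after commit
```

The key property is that a rollback (or a crash before commit) means the callback simply never fires, so workers never see a job whose row doesn't exist.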
That is the simple but unreliable way to fix the issue. If your Python process crashes or is killed between sending the commit and enqueueing the job, the job will never be enqueued.
A possible solution to this is to use a "transactional outbox" pattern, but that has many of the same drawbacks as using Postgres as a queue.
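For the avoidance of doubt, here's a rough sketch of the outbox idea using SQLite (table names and payloads are illustrative): the job record is written in the same transaction as the business data, and a separate relay process later pushes unsent rows to the real queue.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)"
)

# Business write and job record committed atomically: if the process
# dies anywhere around here, either both rows exist or neither does.
with conn:
    conn.execute("INSERT INTO orders (total) VALUES (9.99)")
    conn.execute("INSERT INTO outbox (payload) VALUES ('charge-order-1')")

def relay(queue):
    """One pass of the relay worker: push unsent outbox rows to the queue."""
    rows = conn.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        queue.append(payload)  # stand-in for the external enqueue call
        with conn:
            conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))

queue = []
relay(queue)
assert queue == ["charge-order-1"]
relay(queue)  # second pass is a no-op: the row is already marked sent
assert queue == ["charge-order-1"]
```

The drawback the comment alludes to: you're now polling a relational table as if it were a queue, with all the contention and vacuuming pain that entails.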
Dependabot is based on releases from the various package repositories; running off main is pre-release - hence they're probably using GitHub Actions to pin their Gemfile-defined Rails version to a commit hash.
I remember working with SagePay as a payment provider back in 2008 (before we knew of Stripe!) and finding it interesting that card address validation was only done on the numbers in a full address.
For example, from "20 Windsor Road, London, SE1 6JH" it would extract 2016 and validate that against the bank's details.
I thought that was quite a smart approach, as UK addresses come in all shapes and sizes (as the post shows) – but the minimal bits required to be correct are indeed the numbers, since all postcodes contain them and an incorrect number would mean an incorrect postcode.
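The extraction itself is trivial; something like this (illustrative only - the actual rules SagePay applied may have differed):

```python
import re

def address_digits(address: str) -> str:
    """Concatenate every digit in the address, ignoring letters,
    spaces and punctuation - the 'numbers only' check described above."""
    return "".join(re.findall(r"\d", address))

# House number 20 plus the digits of the postcode SE1 6JH
assert address_digits("20 Windsor Road, London, SE1 6JH") == "2016"
```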
Edit: the funny bit was that they made you work this out and send it along with the request rather than just handling it internally :)
I live on a road where all the houses on one side were built first and numbered 1 2 3 4... The houses on the other side were built later and have names and no numbers.
This seems to me like a combination of multiple foot-guns: the first being the Docker one, followed by the fact that Mongo was not configured to authenticate connections.
Heroku by default runs PostgreSQL open to the world (which is problematic for other reasons), but they get away with it by relying on PG's decent authentication.
My default is to prefer to build systems with multiple layers of security, such that there is no reliance on a single issue like this.
As a Mac-using Python shop, we had serious file-sync performance issues when mounting our codebase inside a container via docker-compose. Nix completely freed us from them and allowed us to develop natively with Python, speedily and without all the serious faff & headaches that usually come with getting reproducible builds on everyone's machines.
If you'd like to know more, I spoke at DjangoCon Europe late last year [1] on our setup; it's still paying serious dividends for us!
Yes! And it’s actually not either/or for us, we still use Docker Compose to run our services (Redis, PostgreSQL, etc) that don’t require file syncing with the host. It’s good at that.
Much of the world returned to offline teaching "last semester".
It was like that before the pandemic. If you expect academic integrity in an online class, you are deluded. The students here were just dumb enough to do it in a class-wide chat. Smart cheaters do it in small, trusted cliques.
I have pre-pandemic experience as a student and as a TA in both an Israeli university and a top American public university.
The Israeli system does it as I described above, and the American university does not. The US university held an exam for 350 students in a single conference hall. The proctors were the TAs. We could not check IDs, students seated themselves, and there was no effective supervision. The easiest way to cheat was just to switch the exam forms with a friend.
We intentionally created a massive incentive to cheat. Worse, we curved the grades, punishing non-cheaters.
I tried to protest, but was completely ignored because it was always done like that.
From experience, I can tell you that many people simply refer to this entire domain as CORS, despite the S standing for Sharing. The Same-Origin Policy is treated verbally more like the default state of CORS in some circles.
It is very confusing and I’m not entirely sure how it ended up like that.
It's common for protocols, mechanisms and policies to be confused in terms of intention, and often to be misunderstood. SameSite was another recent example: a lot of people don't understand that Site and Origin have specific meanings in the browser world.
From my own time observing the process of how these things get drafted up, it's because the creators of these mechanisms work in a committee and in a circle in which everyone is highly familiar with their specific terminology. There is no thought given to accessibility of general understanding for 'the masses' and that eventually manifests itself in this way. I'm not saying they should or shouldn't be giving thought to naming, just pointing out what I observe.