pg_dump has a few annoyances when it comes to doing stuff like this — it's tricky to select exactly the data/columns you want, and the dumped format is not always stable. My migration tool pgmigrate has an experimental `pgmigrate dump` subcommand for doing things like this; it might be useful to you or OP, even just as a reference. The docs are incomplete since this feature is still experimental, so file an issue if you have any questions or trouble.

https://github.com/peterldowns/pgmigrate


Yeah, unfortunately I think it's not really possible to hit the speed of a TEMPLATE copy with MariaDB. @EvanElias (maintainer of https://github.com/skeema/skeema) was looking into it at one point; you might consider reaching out to him — he's the foremost MySQL expert that I know.

Thanks for the kind words Peter!

There's actually a potential solution here, but I haven't personally tested it: transportable tablespaces in either MySQL [1] or MariaDB [2].

The basic idea is that it lets you take pre-existing table data files from the filesystem and use them directly as a table's data. So with a bit of custom automation, you could have a setup with pre-exported fixture table data files, which you make a copy of at the filesystem level and then import as tablespaces before running each test. The key step is making that filesystem copy fast, either by keeping it in memory (tmpfs) or by using a copy-on-write filesystem.

If you have a lot of tables, though, this might not be much faster than the 0.5-2s performance cited above. IIRC there have been some edge cases and bugs relating to the transportable tablespace feature over the years as well, but I'm not really up to speed on the status of that in recent MySQL or MariaDB.
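
If it helps anyone, the MySQL 8.0 flow is roughly this (a sketch, untested by me; table name made up):

    -- on the source server: quiesce the fixture table and write its .cfg metadata
    FLUSH TABLES fixtures FOR EXPORT;
    -- copy fixtures.ibd + fixtures.cfg out of the datadir at this point, then:
    UNLOCK TABLES;

    -- on the test server, per run: create the same table schema, then swap in the files
    ALTER TABLE fixtures DISCARD TABLESPACE;
    -- copy the saved .ibd/.cfg into the test server's datadir, then:
    ALTER TABLE fixtures IMPORT TABLESPACE;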

[1] https://dev.mysql.com/doc/refman/8.0/en/innodb-table-import....

[2] https://mariadb.com/docs/server/server-usage/storage-engines...


Is this basically using templates as "snapshots", and making it easy to go back and forth between them? It's a little hard to tell from the README, but something like that would be useful to me and my team: right now it's a pain to iterate on SQL migrations, and I think this would help.

That's exactly what it is. Just try it with the provided docker-compose file and you'll get it.

Really interesting article, I didn't know that the template cloning strategy was configurable. Huge fan of template cloning in general; I've used Neon to do it for "live" integration environments, and I have a golang project https://github.com/peterldowns/pgtestdb that uses templates to give you ~unit-test-speed integration tests that each get their own fully-schema-migrated Postgres database.
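
For anyone who hasn't seen it, the underlying mechanism is just two statements; the clone happens at the file level, which is why it's so fast:

    -- run your migrations against a scratch database once, then mark it as a template
    ALTER DATABASE testtmpl IS_TEMPLATE true;
    -- every test can then get its own cheap copy
    CREATE DATABASE test_1 TEMPLATE testtmpl;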

Back in the day (2013?) I worked at a startup where the resident Linux guru had set up "instant" staging environment databases with btrfs. Really cool to see the same idea show up over and over with slightly different implementations. Speed and ease of cloning/testing is a real advantage for Postgres and SQLite; I wish it were possible to do similar things with ClickHouse, MySQL, etc.


Great books! Strongly recommend for anyone into fantasy stuff.

And they are still coming!

I just finished Lies Weeping, which is #12, I think. There’s 2 more on the way. I suspect they are already written.


That's a really cool prompt idea, I just tried it with my neighborhood and it nailed it. Very impressive.

I'm happy to see they're investing in Actions — charging for it should help make sure it continues to work. It's a huge reason Github is so valuable: having the status checks run on every PR, automatically, is great. Even though I'm more of a fan of Buildkite when it comes to configuring the workflows, I still need something to kick them off when PRs change, etc.

Charging a per-workflow-minute platform fee makes a lot of sense and the price is negligible. They're ingesting logs from all the runners, making them available to us, etc. Helps incentivize faster workflows, too, so pretty customer-aligned. We use self-hosted runners (actually WarpBuild) so we don't benefit from the reduced default price of the Github-hosted runners, but that's a nice improvement as well for most customers. And Actions are still free for public repos.

Now if only they'd let us say "this action is required to pass _if it runs_, otherwise it's not required" as part of branch protection rules. Then we'd really be in heaven!
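
The closest workaround I know of is funneling everything through one aggregating job and making only that job a required check; a sketch, with made-up job names:

    # .github/workflows/ci.yml (sketch)
    on: pull_request
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - run: echo "always runs"
      optional-check:
        if: false  # stands in for any conditionally-skipped job
        runs-on: ubuntu-latest
        steps:
          - run: echo "sometimes skipped"
      ci-ok:
        if: always()  # run even when upstream jobs are skipped
        needs: [build, optional-check]
        runs-on: ubuntu-latest
        steps:
          # skipped counts as ok; only failure/cancelled should block the merge
          - run: |
              [ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" = "false" ]

It's not the same as a real "required if it runs" rule, but it gets close.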


This pricing model will continue to incentivize them internally not to fix the hundreds of clearly documented issues that cause CI to be incredibly slow: everything from their self-inflicted bottlenecking of file transfers to the safe_sleep bug that randomly makes a runner run forever until it times out. All of it now makes them more money.

> charging for it should help make sure it continues to work

Is there a particular reason you're extending the benefit of the doubt here? This seems like the classic playbook of making something free, waiting for people to depend on it, then charging for it, all in order to maximize revenue. Where does the idea that they're really doing this in order to deliver a more valuable service come from?


Yeah. This is a reaction to providers like Blacksmith, and self-hosted solutions like the k8s operator, being better at operating their very bad runner than they are, at cheaper prices, with better performance, more storage, and warm caches. The price cut is good; the anticompetitive bit where they charge you to use computers they don't provide isn't. My guess is that either we're all gonna move to act or one of the SaaS startups sues.

I appreciate being able to pay for a service I rely on. Using self-hosted runners, I previously paid nothing for Github Actions — now I do pay something for it. The price is extremely cheap and seems reasonable considering the benefits I receive. They've shown continued interest in investing in the product, and have a variety of things on their public roadmap that I'm looking forward to (including parallel steps) — https://github.com/orgs/github/projects/4247?pane=issue&item....

Charging "more than nothing" is certainly not what I would call maximizing revenue, and even it they were maximizing revenue I would still make the same decision to purchase or abandon based on its value to me. Have you interacted with the economy before?


> The price is extremely cheap

and you expect it to stay this way?


> and seems reasonable considering the benefits I receive.

> I would still make the same decision to purchase or abandon based on its value to me.


I don't think it makes sense to charge per minute just for logs. If they want to charge for log retention, sure, go ahead. But that is pennies, let's be real.

At CloudX (https://cloudx.io) we’re building a new supply-side advertising platform for mobile publishers. Yes, it's ads, and yes, there's AI involved, so stop reading here if that's not interesting to you.

It's a gnarly infra problem at huge scale, combined with an interesting product space that we think legitimately benefits from tasteful AI automation. We're doing cool things with Nitro Enclaves to prove that our auctions are fair. And our founding team has done this before with great success, first at MoPub (sold to Twitter) and MAX (sold to AppLovin).

We're hiring (all remote) for:

- Senior Fullstack Engineer https://jobs.gem.com/cloudx/am9icG9zdDogum5THF3fORqb1eEupFQx

- Senior Infrastructure Engineer https://jobs.gem.com/cloudx/am9icG9zdDo4vl4A1sEcc7sDQf8ZYiqR

- Senior Android SDK Engineer https://jobs.gem.com/cloudx/am9icG9zdDqPAWu1cxr3PmEuuTIdliI6

- Senior iOS SDK Engineer https://jobs.gem.com/cloudx/am9icG9zdDpOy3Qmt1fLsOu4gKwtwWTz

Our philosophy is to keep the team small, well-paid, and productive. We deploy every day and our monorepo CI suite takes about a minute to pass.

The best way to apply is directly through those job pages, but if you have other questions you're welcome to email me at peter@cloudx.io. I always post here on HN because it's where I got my career started back in high school; I will personally make sure you get a response if you apply.


Agreed. Recently started a new gig and set up Mise (previously had used nix for this) in our primary repos so that we can all share dependencies, scripts, etc. The new monorepo mode is great. Basically no one has complained and it's made everyone's lives a lot easier. Can't imagine working any other way — having the same tools everywhere is really great.

I'll also say I have absolutely 0 regrets about moving from Nix to Mise. All the common tools we want are available, it's especially easy to install tools from pip or npm and have the environments automanaged. The docs are infinity times better. And the speed of install and shell sourcing is, you guessed it, much better. Initial setup and install is also fantastically easier. I understand the ideology behind Nix, and if I were working on projects where some of our tools weren't pre-packageable or had weird conflicting runtime lib problems I'd get it, but basically everything these days has prebuilt static binaries available.
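
For a sense of what that looks like in practice (tool names here are just examples, not our actual config):

    # mise.toml, shared by everyone who clones the repo
    [tools]
    node = "22"
    "npm:prettier" = "latest"   # tools from npm...
    "pipx:ruff" = "latest"      # ...and from pip, managed the same way

    [tasks.lint]
    run = "ruff check ."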


Mise is pretty nice, and I'd recommend it over all the other gazillion version-manager things out there, but it's not without its own weak spots: I tried mise for a PHP project, and neither of the backends available for PHP had a binary for macOS, and both failed to build it. I now use a flake.nix, along with direnv and `use flake`. The Nix language definitely makes for some baffling boilerplate around the dependencies list, but devs unfamiliar with Nix can ignore it and just paste in the package name from nixpkgs search.
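
Roughly the shape of it, for the curious (a sketch: system hardcoded for brevity, and .envrc just contains `use flake`):

    # flake.nix
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages.aarch64-darwin; in {
          devShells.aarch64-darwin.default = pkgs.mkShell {
            # devs unfamiliar with nix can just add names from nixpkgs search to this list
            packages = [ pkgs.php pkgs.php.packages.composer ];
          };
        };
    }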

There's also jbadeau/mise-nix that lets you use flakes in mise, but I figured at that point I may as well just use flake.nix.


The beauty of mise is that as long as someone is hosting a precompiled binary for you, it's easy to get it. I just repro'd this and yeah, `mise use php` fails for me on my machine because I don't have any dev headers. But it looks like there's an easy workaround using the `ubi` downloader:

https://github.com/jdx/mise/discussions/4720#discussioncomme...
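
i.e. something along these lines (the repo name here is hypothetical; adapt it from the linked discussion):

    # mise.toml: fetch a prebuilt php from a GitHub release instead of compiling
    [tools]
    "ubi:someorg/php-prebuilt" = { version = "latest", exe = "php" }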

or see the first comment on this thread to see a way to explicitly specify where to find the binaries for each platform:

https://github.com/jdx/mise/discussions/4720#discussioncomme...

Having these kinds of "eject" options is one of the reasons I really appreciate Mise. Not sure this would work for you, but I'd rather be able to do this than have to manage/support everyone on my dev team installing and maintaining Nix.


Nix is just one installer (I steer devs toward Determinate's installer), so technically it's not super-different from needing Docker. There are lots of files in /nix, but actually less disk use, since the store has much more fine-grained de-duping than container images. Nix is still a big bite though, and for most projects I wouldn't make it a requirement; but the project in question is itself a build system with reproducibility requirements in its design, so I'm not losing too much sleep over this one. The final artifacts don't depend on Nix anyway.


same here, hope they fix this soon

