The problem with no collaboration is that it can lead to fragility. Context transfer is costly, but without it work bottlenecks around key individuals. When those individuals are explicitly disincentivized to transfer knowledge preemptively, you lose the slack in the system. Vacations and exits become more fraught. Deadlines are harder to hit reliably. You have less systemic resiliency.
Instead, encourage targeted collaboration (in particular, pairing: collaboration with a goal of accomplishing something) within the scope of a team, and avoid cross-team collaboration, which is the expensive part.
I've been writing Rails code since 2007. There's a reason the stack has gotten more complicated with time, and virtually no team has ever done it right by this definition.
The trouble with an omakase framework is not just that you have to agree to the initial set of choices but that you have to agree with every subsequent choice that's made, and you have to pull your entire dev team along for the ride. It's a very powerful framework, but the maintainers are generally well-meaning humans who do not possess a crystal ball, and many choices were made that were subsequently discarded. Consequently, my sense is that there are very few vanilla Rails apps in the wild anywhere.
(I'm old enough to remember what it was like to deploy a Rails application pre-Docker: rsyncing or dropping a tarball into a fleet of instances and then `touch`ing the requisite file to get the app server to reset. Docker and k8s bring a lot of pain. It's not worse than that was.)
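For the curious, the whole ritual fit in a Rake task. Roughly something like this; the hostnames and paths are placeholders, and the restart trigger depends on your app server (this sketch assumes Passenger watching `tmp/restart.txt`):

```ruby
# Rakefile -- a rough sketch of the pre-Docker ritual, not a real deploy tool.
# Hostnames and paths are placeholders.
HOSTS   = %w[app1.example.com app2.example.com]
APP_DIR = "/var/www/myapp/current"

desc "Copy the app to each instance and nudge the app server"
task :deploy do
  HOSTS.each do |host|
    sh "rsync -az --delete ./ deploy@#{host}:#{APP_DIR}/"
    # Passenger-style restart: the app server reloads when this file is touched.
    sh "ssh deploy@#{host} 'touch #{APP_DIR}/tmp/restart.txt'"
  end
end
```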
> I'm old enough to remember what it was like to deploy a Rails application pre-Docker: rsyncing or dropping a tarball into a fleet of instances and then `touch`ing the requisite file to get the app server to reset.
If this is what you remember, then you remember a very broken setup. Even an “ancient” Capistrano deployment system is better than that.
Or there was “git push heroku main” or whatever it was back in the day. Had quite a moment when I first did that from a train – we take such things for granted now of course...
Yeah, it also wasn’t difficult to do the equivalent without heroku via post-commit hook.
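The hook ends up being just a few lines in whatever language you like. A rough Ruby sketch (the paths and branch are placeholders; strictly speaking, the server-side variant usually lives in a bare repo's post-receive hook):

```ruby
#!/usr/bin/env ruby
# A bare-bones "push to deploy" git hook -- a sketch, not a recommendation.
# The app directory and branch name are placeholders.
APP_DIR = "/var/www/myapp/current"

ENV.delete("GIT_DIR") # hooks run with GIT_DIR set, which would confuse the commands below

Dir.chdir(APP_DIR) do
  system("git fetch origin && git reset --hard origin/main") or abort("fetch failed")
  system("bundle install")                                   or abort("bundle failed")
  system("touch tmp/restart.txt") # Passenger-style restart
end
```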
Honestly, even setting up autoscaling via AMIs isn’t that hard. Docker is in many ways the DevOps equivalent of the JS front end world: excessive complexity, largely motivated by people who have no idea what the alternatives are.
Me too. I'm not responding specifically to you with the parent comment. That said, "autoscaling", as a concept, didn't really exist prior to AWS AMIs (or Heroku, I guess).
My point is that a lot of devs reach to Docker because they think they need it to do these "hard" things, and they immediately get lost in the complexity of that ecosystem, having never realized that there might be a better way.
My recollection is that this is what many Capistrano setups were doing under the covers. Capistrano was just an orchestration framework for executing commands across multiple machines.
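That is really all it was: a thin DSL over SSH. In today's Capistrano 3 syntax, a one-off "run this everywhere" task looks roughly like this (the role name is whatever you've defined in your stage files):

```ruby
# lib/capistrano/tasks/uptime.rake
# Capistrano as plain multi-host orchestration: run one command on every box.
desc "Print uptime for every app server"
task :uptime do
  on roles(:app), in: :parallel do |host|
    info "#{host}: #{capture(:uptime)}"
  end
end
```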
More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.
> More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.
Well, OK, so you remember a bad setup that was bad for whatever reason. My point is that there's nothing about your remembered system that was inherent to Rails, and there were (and are) tons of ways to deploy that didn't do that (just like any other framework).
Capistrano can do whatever you want it to do, of course, so maybe someone wrote a deployment script that rsynced a tarball, touched a file, etc., to restart a server, but it's not standard. The plain vanilla Cap deploy script, IIRC, does a git pull from your repo to a versioned directory, runs the asset build, and restarts the webserver via signal.
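From memory, the whole `config/deploy.rb` for a stock Capistrano 3 setup is not much more than this (the application name, repo URL, and paths are placeholders, and the restart step depends on your app server):

```ruby
# config/deploy.rb -- roughly what a stock Capistrano 3 deploy looks like.
# Application name, repo URL, and paths are placeholders.
set :application, "myapp"
set :repo_url,    "git@example.com:org/myapp.git"
set :deploy_to,   "/var/www/myapp"
set :keep_releases, 5

# Capistrano checks the repo out into a timestamped releases/ directory,
# symlinks it to current/, and (with the rails plugins) precompiles assets.
namespace :deploy do
  desc "Tell the app server to pick up the new release"
  task :restart do
    on roles(:app) do
      # Passenger-style restart; other app servers get a signal (e.g. USR2) instead.
      execute :touch, release_path.join("tmp/restart.txt")
    end
  end
  after :publishing, :restart
end
```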
This was before Git! (Subversion had its meager charms.) Even after Git became widespread, some infra teams were uncomfortable installing a dev tool like Git on production systems, so a git pull was out of the question.
The main issue that, while not unique to Rails, plagued the early interpreted-language webapps I worked on was that the tail end of early CI pipelines didn't spit out a unified binary, just a bag of blessed files. Generating a tarball helped, but you still needed to pair it with some sort of unpack-and-deploy mechanism in environments that wouldn't or couldn't work with a stock cap deploy, like the enterprise. (I maintained CC.rb for several years.) Docker was a big step up IMV because all of a sudden the output could be a relatively standardized binary artifact.
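Packaging was the easy half. A Rake task could turn the bag of blessed files into a single versioned artifact, roughly like this (the task name, output path, and exclude list are made up for illustration); the unpack-and-restart half on the target machines was still yours to own:

```ruby
# lib/tasks/package.rake -- a minimal sketch of tarball packaging.
# Task name, output path, and exclusions are illustrative, not a standard.
desc "Bundle the working tree into a versioned tarball"
task :package do
  sha = `git rev-parse --short HEAD`.strip
  mkdir_p "pkg"
  sh "tar czf pkg/myapp-#{sha}.tar.gz --exclude=.git --exclude=pkg --exclude=log --exclude=tmp ."
end
```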
This is fun. We should grab a beer and swap war stories.
If you call a stable, testable, and "reproducible" (by running it locally or on a dev machine) tarball worse than a git pull, then you're the one killing the solutions that hold up in an unpredictable and unsafe world.
I think beer and swapping war stories is a good idea, because I would love to learn what to avoid.
Capistrano lost its relevance when autoscaling went mainstream (which was around 15 years ago now), yet people kept using it in elastic environments, with poor results.
AMIs were still pretty novel at the time I started (around 2007 like the GP). The standard deployment in the blogs/books was using Capistrano to scp the app over to like a VPS (we did colo) and then run monit or god to reboot the mongrels. We have definitely improved imho!
Totally, around that time I did that too (although I was working with LAMP stacks, so no Capistrano), but with the rise of AWS, Capistrano got outdated. I know that not everyone jumped on board with the cloud that early, and even for the ones that did, there was an adaptation period where EC2 machines were treated just like colo machines. But Ruby also used to be the hipster thing before 2010 so... :)
Anyway, never liked Capistrano so I'm probably biased
The primary benefit of containerization is isolation. Before Docker, you'd drop all your code on a shared host, so you had to manage your dependencies carefully. Specifically, I remember having to fight with the mysql gem a lot to make sure there were no conflicts between installed versions. With Docker, you build your image, test it, and ship it.
We had VM-per-app before Docker, so it was still build the image, test, and ship, but the VM actually had everything it needed inside it.
Docker helps with portability because of how ubiquitous it is now, but it's not like the VM requirement went away: the Docker image still generally runs in a VM in any serious environment, and a lot more attention has to be paid to the VM:Docker pairing than to the previous hypervisor:VM pairing.
It is very funny to me that the sibling comment calls this "a very broken setup" while for you "it doesn't sound like a big deal".
It's all about perspectives, or you really just never had to deal with it.
The happy path ain't a big deal. But think of the unhappy ones:
* What if a server gets rebooted (maybe it crashed) for any reason anywhere in the process? Maybe you lost internet while doing the update. Were you still dropping tarballs? Did the server get it? Did it come up with the new version while the other servers are still on the old one?
* What about a broken build (maybe a gem problem, maybe a migration problem, maybe something else)? Are all your servers on it, or only one? How do you revert (push an older tarball)?
A lot more manual processes. Depends on the tool you had. Good tooling to handle this is more prevalent nowadays.
I use Kubernetes for almost everything (including my pet projects) and I see the value it brings, even if it comes with increased complexity (although k3s is a pretty good middle ground). But none of the things you mentioned are unsolvable or require manual intervention.
> What if a server gets rebooted
Then the rsync/scp would fail and I would notice it in the deployment logs. Or it should be straightforward to monitor the current version across a fleet of bare-metal servers.
> Maybe you lost internet while doing the update
True, but even Ansible recommends running a controller closer to target machines.
> What about a broken build
That's what tests are for.
> maybe migration problem
That's trickier, but unrelated to deployment method.
Never said they were unsolvable. You asked for elaboration about the pains of back then, before lots of the tools most people take for granted existed. You seem to think we are talking about massive problems, but it's more like a thousand papercuts.
> What about a broken build. All your servers are on it, or only one?
The ones you pushed the image are on the new image, the ones you didn't push the image are on the old image.
> How do you revert (push an older tarball)
Yes, exactly, you push the older version.
The command pushes a version to the servers. It does exactly what it says. There's nothing complicated to invent here.
All the interpreted frameworks use the same semantics because this approach works extremely well. It tends to work much better than container orchestration, that's for sure.
> A lot more manual processes.
It's only manual if it's not automated... exactly like creating a container, by the way.
I spoke to the authors of this database while running my last startup about the exact problem this sets out to solve—the need for time-travel features in particular domains (versioning for everything!). Opens up a range of interesting options for novel user experiences—really glad that this is seeing the light of day.
I reviewed the article in question and have a little more color to add for some of the skeptics commenting below.
Advocates of pre-merge review point out (correctly) that peer review is valuable: humans are fallible, and a second pair of eyes often helps. Maybe I read a different article, but I don't think the author disputes this. What gets lost in the discourse around the dominant PR-based, asynchronous workflow is that it comes with tradeoffs. Do you understand what you're giving up to get back in your preferred mode of working? Are you so certain it's more appropriate for your present circumstances?
_Forced_ pre-merge review has a number of negative tradeoffs that aren't always visible to the teams that use it. For one thing, it can lead to "review theatre": casual pull request reviews can't meaningfully detect most bugs; reviews that can are hugely time consuming and as a result quite rare; poor PRs are sometimes "laundered" by the review process; poor reviewers encourage bikeshedding; but even a bad review can introduce a cycle time hit and a bunch of context-switching as both the submitter and reviewer bounce back and forth. If you work at a shop that uses PRs and has none of these problems, I salute you; I have not.
The answer to all of these from PR advocates tends to be, well, maybe make the pre-merge review process itself better, to which I say: you are making a slow process slower; if your team is trying to move quickly, instead of adding additional padding around a slow process, maybe try to smooth out a fast one?
A good framing question I ask my teams is: if your goal was to get high quality code into production as often as possible, what processes would you tweak and why? Where would you invest and where would you pull back? There are lots of great ways to ship high-quality code quickly without pull requests; we did it all the time before they were invented.
The article also encourages other methods of involving your colleagues. "Talk to your team before you start, so you can get better ideas and avoid rework."
So if you do that, or even pair program, it will be easy for those involved (assuming you involved them well enough) to review.
I use Scala frequently for internal microservices development. I also write a fair amount of Clojure. I believe that a lot of good and robust software is built in Scala, but that it suffers from a kind of complexity fetishization that offsets, for debugging purposes, a lot of the positive effects of robust typing. Compiler speed is also a problem for me personally.
Given my choice of options, I will choose to build new systems in almost any other JVM language, including stock Java 8. However, the language isn't going anywhere, and is worth learning. I find that feedback is fastest in Clojure. Your mileage, as always, may vary.
I think that talking about a 10x engineer or 0.5x engineer misses the right optimization: good software is built in teams, so build a strong team. Cross-skilling, pairing, and blocker removal all help give people a collective sense of commitment and autonomy: you have the right to touch the systems you need to accomplish your job, and it's my job as servant-leader to remove whatever's stopping you. But you have less leverage and less resilience without that sense of collective ownership.
I've likewise experienced both 10x and 0.5x situations. The latter invariably arose from sitting in a corner, divorced from context or support. The former is extremely rare and I've only achieved it on personal projects. But I'll settle for a motivated and healthy team of 5xers any day.
Cursive is an excellent way to write Clojure and has improved my workflow tremendously. Highly recommended; I bought a license. Congratulations to Colin on a 1.0 release.
I'm not clear what technique this article is trying to describe, but anyone who spends two weeks pre-writing tests is not practicing TDD. That simply isn't how it works.
The cycle is: red, green, refactor, repeat. That's per test. It shouldn't take long. It works nicely.
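One cycle really is that small. A sketch in Ruby/Minitest (the class and its behavior are invented for the example): write a failing assertion, write just enough code to make it pass, then refactor with the test still green.

```ruby
require "minitest/autorun"

# Step 2 (green): the minimal implementation, written after watching the test fail.
# Step 3 (refactor) happens afterwards, with the test kept passing.
class PriceFormatter
  def format(cents)
    "$%.2f" % (cents / 100.0)
  end
end

# Step 1 (red): this test is written first and fails until PriceFormatter exists.
class PriceFormatterTest < Minitest::Test
  def test_formats_cents_as_dollars
    assert_equal "$12.34", PriceFormatter.new.format(1234)
  end
end
```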
I remember working out of the same office as these guys back when they were just getting started, and their success and impact has been truly amazing. Recognition well deserved.