  if (looping.mode === "off") {
    // If you waited for a day, you deserve to see this workaround...
    // Since there is no way to not loop a gif using gifwrap,
    // let's just put a reeeeaaaaallly long delay after the last frame.
    return 8640000; // GIF delays are in centiseconds, so this is 24 hours
  }
Nice, this reminds me that some time ago I made myself an avatar for a forum, with these crazy eyes standing perfectly still. Until a few minutes passed, that is, when they rolled. Only one person noticed it.
I once grabbed the generic profile picture (a silhouette) for an internal bug tracker, tilted it slightly, then set it as my avatar. One person commented something looked off, but they couldn’t put their finger on what.
Love that! Not many folks will notice, but for those who do, it will make their day (or make their skin crawl). Speaking of which, there are two or three Easter eggs in this app as well :)
Though they share the word "algebraic", algebraic data types != algebraic effects. And while Java has good support for concurrency primitives and concurrent data structures, it does suffer from the problem highlighted in the article:
> Over time, the runtime system itself tends to become a complex, monolithic piece of software, with extensive use of locks, condition variables, timers, thread pools, and other arcana.
I'm not an expert on this, but my understanding is that algebraic effects try to improve language semantics to make it easier to separate different levels of abstraction (e.g. separating the what from the how), while also encoding the performed effects into the type system.
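To make that a bit more concrete, here's a minimal toy sketch using TypeScript generators (my own illustration with made-up names, not how any real effects system is implemented). The generator describes *what* effects it needs; the handler decides *how* they're performed, so a test could interpret the same effects differently:

  // An "effect" is just a description of something to be done.
  type Effect =
    | { tag: "log"; message: string }
    | { tag: "random" };

  // The program only says WHAT it wants; it never touches console or Math.
  function* rollDie(): Generator<Effect, number, number> {
    yield { tag: "log", message: "rolling..." };
    const r = yield { tag: "random" }; // the handler supplies this value
    return Math.floor(r * 6) + 1;
  }

  // One possible handler: the HOW. A deterministic test handler could
  // always answer "random" with 0.5, for example.
  function run(gen: Generator<Effect, number, number>): number {
    let step = gen.next();
    while (!step.done) {
      switch (step.value.tag) {
        case "log":
          console.log(step.value.message);
          step = gen.next(0); // "log" produces no meaningful value
          break;
        case "random":
          step = gen.next(Math.random());
          break;
      }
    }
    return step.value;
  }

  console.log(run(rollDie()));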
Scala Native and GraalVM Native Image are projects with different goals, so I wouldn't say that one makes the other unnecessary.
Both projects aim to compile to native code, and have use-cases for projects where the JVM startup time is too high. However, one of the main goals of Native Image is to offer as much partial evaluation (PE) at compile-time as possible. Scala Native does also seem to do some PE, but my understanding is that it's less than what Native Image does. However, Scala Native has the advantage of working on a representation of the source code instead of the byte code, and may therefore be able to do certain Scala-specific optimizations that would be more difficult for Native Image.
I think different projects may find that either one or the other project may be more suitable to their needs, so I think both projects can coexist.
> However, one of the main goals of Native Image is to offer as much partial evaluation (PE) at compile-time as possible.
This is tangential, but I wonder if Native Image's focus on compile-time PE, and the overall design of GraalVM, would make it feasible to AOT-compile a sufficiently static subset of JavaScript to efficient native code. If so, that could influence my choice of language for new projects.
First, AOT binaries are slower than JIT-compiled code; there's nothing especially efficient about them, at least for high-level languages.
Second, you can already make JavaScript binaries thanks to GraalJS.
The biggest added value by far is the polyglot interop with the other language universes.
This is correct, though the inline feature does also unlock some optimizations. For instance, see this example from the docs where a recursive method is optimized to a few sequential commands when some of its parameters are constants:
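For anyone who hasn't seen it, the general shape of that optimization looks something like this (a TypeScript-flavored sketch of the idea, not the actual example from the Scala docs):

  // A recursive power function:
  function power(x: number, n: number): number {
    if (n === 0) return 1;
    const y = power(x, Math.floor(n / 2));
    return n % 2 === 0 ? y * y : y * y * x;
  }

  // If n is known at compile time (say n = 10), inlining plus constant
  // folding can unroll the recursion into straight-line code, roughly:
  function power10(x: number): number {
    const x2 = x * x;   // x^2
    const x4 = x2 * x2; // x^4
    const x5 = x4 * x;  // x^5
    return x5 * x5;     // x^10
  }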
I think the Scala 3 book [1] might be what you're looking for!
There's also an ongoing video series called "Let's talk about Scala 3" by the Scala Center and 47 Degrees [2], which goes more in-depth on certain topics but is still quite beginner-friendly. The video on setting up a development environment and the one on Scala 3's data structures in particular might be good introductions.
I wrote a blog post about this once[1] because I was totally shocked at how few people actually use a separate domain for their status page. It's like < 0.5%, and I'll never understand why.
> >>But you can’t use two completely different DNS providers for your status page and your primary page unless you are using a distinct domain.
> You can.
It depends, though. Yes, you can keep things separate if they are different non-apex domains (i.e. things like www.google.com and mail.google.com), and an apex domain (like google.com) can use at least two different DNS servers (depending on your registry).
However, by design there is an implication that your apex DNS servers are synchronised (or reasonably so). Suppose one of your providers malfunctions and, instead of just going offline, answers your queries with an IP address you don't control (say 198.51.100.17, which is not a routable address) and an exceedingly long TTL (say a week). Any client-side DNS resolver that takes that answer to heart will be unable to reach the intended server, and not even the still-functioning servers for your non-apex domains will save you.
Plus, there are registry issues: there is only one registry in the end, and if they mess up, the website goes down (unless they are prudent), etc.
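If you want to see what your own apex delegation looks like, Node can show you the NS records directly (a tiny sketch assuming Node 18+ and ES modules for top-level await; example.com is a placeholder):

  // List the nameservers an apex domain delegates to. With two DNS
  // providers configured, you'd expect to see hosts from both here.
  import { resolveNs } from "node:dns/promises";

  const servers = await resolveNs("example.com"); // placeholder domain
  console.log(servers.sort());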
I was happy when yarn first came onto the scene and gave npm the kick in the butt it needed to improve.
Now I wish yarn could be deprecated and we could go back to a single package manager. Unfortunately there's segmentation in different areas around package managers, e.g. Electron seems to prefer yarn. And for package maintainers there's extra overhead to document and test installation with both npm and yarn.
I hear you, but things are not really moving in that direction, because it's not that simple. The closer you look into what they do and how, the clearer it becomes that [npm7 vs yarn1 vs yarn2 vs pnpm] is the current set of legit choices, for various reasons.
Yarn v2 PnP is simply a lifesaver if you have a medium+ sized monorepo.
We have a monorepo with 180 packages here. Without PnP, it takes 1h+ just to npm install any new third-party package in any local package; it's a joke. With PnP it takes 18s.
So yes, from my point of view NPM is completely inadequate for any serious JS codebase.
We have a pretty large monorepo codebase (460 packages and counting) that we're migrating from yarn v1 to yarn v2. I'll say it's definitely not a plug-n-play migration (pardon the pun).
Some issues we ran into:
- it can be difficult to reason about the layout of peer dependencies. Oftentimes libraries that rely on Symbols or referential equality break, and you need to mess with packageExtensions, add resolutions, or unplug. Debugging mostly consists of throwing stuff at a wall and seeing what sticks
- file watching at large enough projects breaks w/ file descriptor exhaustion errors, forcing you to find and unplug the offending libraries
- there are a number of known incompatible libraries (Flow being one of the most prominent) and the core team's approach to these at this point follows Pareto (20% of the effort for 80% of the results, e.g. special casing for TypeScript), meaning I don't believe there will ever be a 100% compatibility milestone w/ the ecosystem
- it's much more black-boxy in terms of day-to-day debugging (e.g. it's much harder to manually edit files in node_modules to trace some root cause)
- we ran into inscrutable errors deep in packages that interface w/ C++ and basically were only able to fix by pinning to an earlier version of a library that did not depend on said package.
- migration cost is heavily proportional to codebase complexity. My understanding is that Facebook gave up on it completely for the foreseeable future and ours has similarly been a significant time investment (and we're not even targeting strict mode yet)
The pros:
- install times and project switching times are indeed fast, even in our codebase that contains multiple major versions of various packages
- yarn v2 has many interesting features, such as protocols (though it's debatable if you want to commit to lock-in to benefit from those)
Regarding TypeScript, I think it's important to point out that we have a working PR in the TypeScript repository that we've been maintaining for about a year now. It's not so much special casing as being ahead of trunk. I still hope the TypeScript team will show interest eventually and we'll be able to streamline the development.
I meant special casing in the sense that this is a conscious effort specifically targeted at TypeScript support, as opposed to some generic design that would cater to a large class of projects.
Mind you, I understand that there are legitimate reasons to approach it this way now (e.g. technical limitations, differences in opinion wrt project governance, cost/benefit on long tail, etc). I'm mostly cautioning the unaware that one shouldn't necessarily expect that every package will work under yarn v2 (though an overwhelmingly large majority does work just fine).
From what I've seen, the "unplug" command is supposed to let you temporarily unzip a package so that you can do the traditional "hand-edit a file in a dependency" debugging approach.
Yes, but when you're dealing w/ transitive dependencies, oftentimes you need to jump between many packages. And then you need to clean up after your debugging, since you typically don't want to leave things unplugged if they don't need to be (as that affects performance).
I'm not saying it's impossible to debug, just that you end up having to jump through more hoops.
Well, you gotta clean up files you've hand-edited in `node_modules`, too, if you've been adding a bunch of `console.log` statements :)
At least this way it's just deleting the temp package folder or running whatever the "replug" command is, instead of having to go figure out all the files you were editing.
Eh, node_modules hacking is certainly not great by any stretch of the imagination, but once you work with it long enough, there's a bunch of stuff that you just get efficient at. Spamming undos in open files is fairly easy. If the editing ends up being a real fix, then you upstream it and install again. There's also considerations about jump-to-definition and similar tools, etc.
You can't accidentally commit your debugging (unplug edits package.json and there's no replug command) and you don't end up with 3 unplugged folders for the same package (that's a whole can of worms on its own). There's also some yarn 2 specific pitfalls regarding __dirname in local packages, symlinking semantics, etc.
Anyways, getting way too into the weeds here, I better stop now lol :)
I tried yarn 2 on a greenfield project, but discovered:
- pnp is made possible in part by mysterious “patches” to certain dependencies that don’t work well with it. Mysterious as in they’re obfuscated, and there isn’t much detail besides commit history. This is blocking if you wanna try out, say, the TypeScript 4.1 beta and the patch isn’t ready yet. But more importantly um... I do not want my dependency manager mysteriously patching stuff with obfuscated code?????
- it applies these patches even if you disable pnp, so same objections to the entire yarn 2 approach (currently)
So I’m back on yarn 1 and apparently gonna need to look at npm 7 at this point.
I wrote the above before caffeine really kicked in, so I neglected to add: pnp is itself achieved in part by obfuscating your entire dependency tree. That takes a loooot of trust, on a matter where trust has already exceedingly deteriorated. In hindsight, I regret even considering it.
They probably mean the idea that once you're in PnP, you can "kind of, sort of" peer into zipfs deps, but not in the same way that was possible in bare node_modules.
That said, I think yarn 2's PnP + zero installs (https://yarnpkg.com/features/zero-installs) is lovely with CI. Instead of tacking 40+ seconds onto every build to resolve deps, vendoring deps with PnP on is much cheaper than the node_modules equivalent.
(Not a real edit): my last gig was with a lot of well-prepared juniors, but they lacked the confidence to go look inside dependencies to find out what was happening. I tried to encourage setting breakpoints or logging or whatever felt comfortable in required packages. It was hard.
Turning that into a blob is even more discouraging.
As I mentioned elsewhere in the thread, Yarn v2 does have an "unplug" command that will extract a given dep into a folder for the time being. Does help with that use case.
And if you trust that they’re fundamentally the same thing, that’s a great escape hatch. I personally tried to use two new things together and discovered that one is transparent and one is opaque magic... and given the opportunity to do harm, I found the opacity of one alarming. I don’t trust yarn to manage dependencies in pnp, because what I saw in how they handled a special case was completely black box. Literally binary blob patches with no explanation of what it’s doing or why. Completely impossible to audit without reverse engineering or auditing the entire tool. Why would I trust “unplug” to do anything but misdirect?
FWIW, if I wanted to confirm whether an "unplugged" package had been modified, I'd just download the original tarball from NPM, extract it, and diff the two folders.
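Something like this would do it for an unscoped package (a rough sketch assuming Node 18+ for the global fetch and the `tar` package from npm; scoped packages use a slightly different tarball URL):

  import { mkdtempSync, writeFileSync } from "node:fs";
  import { tmpdir } from "node:os";
  import { join } from "node:path";
  import * as tar from "tar";

  // Download the published tarball for a package and unpack it into a
  // temp folder, ready to be diffed against the unplugged copy.
  async function fetchOriginal(name: string, version: string): Promise<string> {
    const url = `https://registry.npmjs.org/${name}/-/${name}-${version}.tgz`;
    const res = await fetch(url);
    if (!res.ok) throw new Error(`download failed: ${res.status}`);
    const dir = mkdtempSync(join(tmpdir(), `${name}-${version}-`));
    const file = join(dir, "package.tgz");
    writeFileSync(file, Buffer.from(await res.arrayBuffer()));
    await tar.x({ file, cwd: dir }); // tarballs unpack into <dir>/package
    return join(dir, "package");
  }

  // Then diff the returned folder against the one under .yarn/unplugged.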
I mean the way that yarn 2 “installs” typescript is by patching it with some manually maintained base64 blob that (I assume) corresponds in some way to the base64 blob that pnp produces. Both are probably something you can reverse engineer... if that’s how you want to trust your package manager I guess? Idk I only learned that the patching was a thing because it failed when I tried to install an “unsupported” package. I was alarmed by trying to track down what was happening and saw the patch has no explanation. I was more alarmed when yarn2 tried to apply the patch even with pnp disabled.
Hmm. Okay, digging around in the Yarn repo, I see this "generate patch" setup code [0]. Looks like they're trying to cherry-pick some specific commits from the TS repo based on the TS version, and specifically apply them to the TS file.
The "base64" bit is referenced here [1].
I would assume this specifically relates to the fact that TS does not have native support for Yarn PnP as a filesystem approach. The Yarn team has been keeping an open PR against TS [2] and trying to convince the TS maintainers to merge it, but it hasn't happened yet.
A bit odd, and I can understand why you're concerned, but it also looks like there's a very understandable reason for this.
I would have assumed that this doesn't get applied if you install TypeScript via the Yarn v2 `node_modules` linker, but would have to try it out and actually see.
This blob is literally our open PR, applied to the various TS releases. You can rebuild it using `gen-typescript-patch.sh` (we actually have a GH Action that does this on CI, to prevent malicious uncontrolled changes), and the sources are auditable in my PR.
Note that it gets applied regardless of the linker, since otherwise the cache would change when going from one linker to another and we wanted to make the experience smoother, but it's a noop for non-PnP environments.
Sorry if this makes it harder but I honestly recommend reading up on pnpm (https://pnpm.js.org/) before committing to npm7. Npm7 auto-installs peer dependencies(!) and pnpm has some remarkable advantages over npm or yarn.
Pnpm is indeed better than npm, but I found its symlinking approach less compatible than yarn v2's (Next.js, for example, didn't support pnpm until very recently), while also having less deterministic module resolution, creating version compatibility problems that disappeared with yarn v2.
Did you try out pnpm, by chance? I’ve read a few good things, but it doesn’t seem to get mentioned all that often. So I’m curious what people with larger projects think about it.
I did try it, but it caused two problems compared to yarn v2: the dependency resolution algorithm seems less deterministic or strict, causing version incompatibilities that yarn v2 did solve, and the symlinks are poorly supported by many tools (Next.js until very recently, React Native, etc.). Installs also take longer with pnpm. However, it has less runtime performance impact.
Cannot figure out why you are being downvoted. Yarn v1 and npm are horrible if you have a large dependency tree. Yarn v2 was the first time I enjoyed using a package manager.
Could it be that in some languages (like my language, for example) "serious" is kind of a synonym for "large"?
I've been downvoted at times for using it to mean exactly that, but I can't help it after more than 40 years of thinking in a language different from English.
Well yarn exists for this purpose so I guess I’m not the only one doing this. And if the alternative involves having to manage independent versioning of 180 packages and their inter-dependencies then no thanks.
I’m not saying the situation is completely perfect (yarn v2 had its rough edges in the beginning for example), but it’s not too bad either. This monorepo is the best organized codebase of this size and diversity I’ve ever seen.
Feel free to explain alternative methods to manage 180 packages with 7 developers while sleeping at night.
> The package-lock.json file is not only useful for ensuring deterministically reproducible builds. We also lean on it to track and store package metadata, saving considerably on package.json reads and requests to the registry. Since the yarn.lock file is so limited, it doesn’t have the metadata that we need to load on a regular basis.
So I guess there are some performance benefits with npm 7 compared to Yarn 1?
And interestingly, Yarn 2 actually strays quite a long way from what a lot of Node users originally wanted from it (at least we have no interest in moving to it).
If you just use "yarn" as you'd think of it, you are probably still using Yarn 1, so I guess Yarn 2 is being thought of as a different, parallel project.
Are there reasons to go back to npm? I switched back when yarn came out and haven't looked back. Been super happy with yarn. Can't say the same about npm.
People are more likely to already have npm installed and to be familiar with it. So there's an argument to be made that all else being equal, picking npm lowers the barrier to entry for new contributors. This consideration could be especially important for open source projects.
That's a valid point, but I don't think the barrier is particularly high. I've done the switch from npm to yarn once. It was a process measured in hours to understand the differences. It's not like Git vs Subversion or something like that.
I don't think that's very compelling, versioning-wise (it's still independently versioned). Furthermore, the official Node docker images come with yarn pre-installed, and there appears to be no way to bundle in a specific npm version in source control like you can with `yarn policies set-version` (v1). That has worked wonders for us. Before yarn, we used to have problems with developers using different versions of npm on their machines/build agents, and .nvmrc/"engines" doesn't help you there other than being an "error gate". The yarn executable acting like a shim delegating to the checked-in version is brilliant for versioning (especially CI).
I've been considering switching to pnpm for political reasons, since using open source projects that are ultimately at the mercy of big corps (npm > Microsoft, yarn > Facebook) makes me slightly uneasy. But I'm hesitant because pnpm seems so new.
Have you encountered any regularly occurring issues or headaches regarding pnpm?
Thank you. I was not aware of this. Also, last I heard transitioning from yarn v1 to v2 was not straightforward. Do you know if this is still the case?
FWIW, I recently tried a branch where I migrated our existing repo from Yarn v1 to v2.
The immediate issues I ran into were lack of Yarn v2 support for some features critical for internal enterprise usage: no support for the `strictSsl` / `caFile` config options from NPM / Yarn v1, and an inability to read lockfile URLs that were pointing to an internal NexusRepository instance for proxying NPM package installation.
Both issues were resolved very quickly by the Yarn team. I then ran into a problem where the post-install build steps could not run in a locked-down corporate security environment, and that issue was also addressed very quickly, with the Yarn team putting up a PR that tried different process launching approaches and iterating until one worked for me.
Having sorted out those issues, I was able to move on to actually following the steps in the Yarn v2 migration guide [0]. The steps worked basically as advertised. The `@yarnpkg/doctor` tool identified several places where we were relying on imports that hadn't been strictly declared, so I fixed those. Starting up the app caused some thrown errors as other non-declared imports were hit, so I kept iterating on fixing those.
I also used the `@yarnpkg/pnpify --vscode` option to generate some kind of settings file for VS Code, and added the suggested "zip file system" extension to VS Code. That allowed me to right-click a library TS type, "Go to Definition", and show a file that was still packed in a package tarball.
I had to switch off to other tasks and haven't had time to go back and finish trying out the migration. But, parts of our codebase were running correctly, and it looked like I just needed to finish out the process of checking for any remaining non-declared dependencies.
Can't vouch for how this would work out in production or a larger build setup, but things looked promising overall.
I've had a few issues using pnpm with other tools (Renovate, Dependabot, etc.), but at least with Renovate the issues have been / are being worked out. I'm happy with pnpm so far and will continue to adopt it incrementally as its popularity grows.