For Node applications, startup time is impacted by IO; loading many files means more IO wait. So bundling does make a material impact for backend applications and large libraries. I do agree that most impact is had when bundling happens closer to the moment of deployment.
I use it to compile backend code. For those use-cases, IMO, vite itself is not so interesting (although I do use vitest). Using tsdown gives me a simplified API to compile my BE code so I can publish it to NPM. Nothing more, nothing less. It's faster and less work than orchestrating CJS and ESM output with tsc, so very high ROI for me.
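For what it's worth, the whole setup is a few lines. A minimal sketch, assuming tsdown's tsup-style options (`entry`, `format`, `dts`); check the tsdown docs for the exact option names:

```typescript
// tsdown.config.ts — sketch of a dual-format (ESM + CJS) build.
// Option names assume tsdown's tsup-like API.
import { defineConfig } from 'tsdown';

export default defineConfig({
  entry: ['src/index.ts'],
  format: ['esm', 'cjs'], // one build, both module formats
  dts: true,              // emit .d.ts files alongside the JS
  platform: 'node',
});
```

With tsc alone you'd typically run two compiles with separate tsconfigs (one `module: commonjs`, one `module: esnext`) and rename outputs; here one config covers both.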
I’m using tsdown for a collection of packages and am switching a current project (https://flystorage.dev) over to it. I use it in “unbundle” mode, which doesn’t bundle but does file-for-file transpilation. To me, it’s an opinionated rolldown configuration with a simplified API. You can script up, in a couple of lines of code, which packages in a monorepo to compile and what formats to compile for. An example of that can be found here: https://github.com/duna-oss/deltic/blob/main/tsdown.config.t...
Compared to using plain tsc to compile the code, it’s a lot quicker. The compiled code has some odd conventions, like using void 0 instead of undefined, but … whatever works!
So far, it has been an easy-entry high-ROI tool that helps me publish TS/JS tools quite easily.
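The "script up which packages to compile" idea can be sketched roughly like this; the package paths are made up, and the `unbundle` option name is an assumption based on how the mode is described:

```typescript
// tsdown.config.ts — sketch of driving a monorepo build from one config.
// Package directories below are hypothetical; `unbundle` assumes tsdown's
// file-for-file transpilation mode.
import { defineConfig } from 'tsdown';

const packages = ['packages/core', 'packages/adapter-s3'];

export default defineConfig(
  packages.map((dir) => ({
    entry: [`${dir}/src/index.ts`],
    outDir: `${dir}/dist`,
    format: ['esm', 'cjs'],
    unbundle: true, // transpile file-for-file instead of bundling
  })),
);
```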
I want to move a project I work in over from tsc to tsdown this week at some point, also with the unbundled mode.
Currently we're using tsc with the new build mode to build everything at once, but the result is incredibly brittle and requires a lot of unnecessary extra configuration all over the place that tends to confuse people when they need to add extra packages or make changes somewhere. It's also very slow (hopefully something that will be fixed by tsgo, eventually).
My initial plan was to have a separate tsdown config in each package and use pnpm to build the entire monorepo (or at least, the parts necessary for each sub-application) in parallel. But your config also looks like a useful approach, I'll explore that as well. Thanks for sharing!
Resolving by hash is a half solution at best. Not having automated dependency upgrades also has severe security downsides. Apart from that, lock files basically already do what you describe: they contain the hashes, the resolution is based on the name, and the hash ensures the integrity of the resolved package. The problem is upgrade automation and supply chain scanning. The biggest issue there is that scanning is not done where the vulnerability is introduced, because there is no money for it.
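Concretely, a package-lock.json entry already pairs name-based resolution with a content hash; it looks roughly like this (the `integrity` value below is a placeholder, not the real hash for this package):

```json
{
  "node_modules/left-pad": {
    "version": "1.3.0",
    "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
    "integrity": "sha512-PLACEHOLDER-base64-digest-of-the-tarball"
  }
}
```

The install fails if the downloaded tarball doesn't match `integrity`, so post-publication tampering is already covered; what it can't catch is a malicious version that was lock-filed in good faith.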
Do you suppose that automated dependency upgrades are less likely to introduce malicious code than to remove it? They're about compliance, not security. If I can get you to use malicious code in the first place I can also trick you into upgrading from safe code to the vulnerable code in the name of "security".
As for lock files, they prevent skulduggery after the maintainer has said "yeah, I trust this thing and my users should too", but the attacks we're seeing are upstream of that point, because maintainers are auto-trusting things based on their name+version pair, not based on their contents.
> If I can get you to use malicious code in the first place I can also trick you into upgrading from safe code to the vulnerable code in the name of "security".
Isn't the whole point that malicious actors usually only have a very short window in which they can actually get you to install anything, before being shut out again? That's the whole point of having a delay in the package manager.
Who is going to discover it in that time? Not the maintainers, they've already released it. Their window for scrutiny has passed.
There is some sense in giving the early adopters some time to raise the alarm and opting into late adoption, but isn't that better handled by defensive use of semantic versioning?
Consider the xz-utils backdoor. It was introduced a month before it was discovered, and it was discovered by a user.
If that user had waited a few days, it would just have been discovered a few days later, during which time it may have been added to an even wider scope of downstream packages. That is, supposing they didn't apply reduced scrutiny due to their perception that it was safe due to the soak period.
It's not nothing, but it's susceptible to creating a false sense of security.
The maintainers did notice in both of the recent attacks, but it takes time to regain access to your compromised account to take the package down, contact npm, etc.
All recent attacks have also been noticed within hours of release by security companies that automatically scan all newly released packages published to npm.
So as far as I know all recent attacks would have been avoided by adding a short delay.
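pnpm, for instance, recently shipped a setting along these lines. A sketch, assuming pnpm's `minimumReleaseAge` option (value in minutes; check the pnpm settings docs for the exact name and placement):

```yaml
# pnpm-workspace.yaml — delay installation of freshly published versions.
# Assumes pnpm's minimumReleaseAge setting; the value is in minutes.
minimumReleaseAge: 10080 # only install versions that are at least 7 days old
```

With this in place, a compromised release published an hour ago simply doesn't resolve yet, which covers the "noticed within hours by scanners" window described above.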
The xz backdoor went undetected so long partly because the build scripts were already so hairy and baroque that no one noticed the extra obfuscations that ran code out of a binary blob in test data. None of which was even in the source repo, it was dropped into the package build scripts externally just before pushing them to the apt/rpm package repositories.
I’ve given up on hopes of having funding for open source. My open source packages account for about 1.2% of all PHP code downloaded from Packagist (the package manager), but unless there is a commercial effort behind it, I do not see it happening. A couple of devs at highly hyped companies are able to generate a following big enough to solicit some non-trivial amount of funding, but the majority just doesn’t care enough about it to fund it. In the end, if open source maintainers are stupid enough to give our code away for free, who’s really to blame for this? Perhaps it’s an overly pessimistic view, but not a view that has historically been disproven.
MIT is pushed precisely to enable the current ecosystem. Companies say "this is my code when I need it, and it's your code when it breaks", and developers read the fine print very late, because they thought exposure was valuable.
GPL & AGPL are effective against that, but companies are afraid of them, since they say "code is a collaborative effort, and you have to share what you did with the code".
Because of this, I share most of the code I write for myself, and strictly use (A)GPLv3 as a license. I don't care what companies do or what riches I possibly ignore. My principles are not for sale.
Being responsible generates no value for the shareholders. Being able to be reckless and ignore everyone while making business is.
> Companies say "This my code when I need it, and it's your code when it breaks", and developers read the fine print very late, because they thought exposure is valuable.
I think that this is an accurate description of the working relationship. But the fine print (the MIT license) explicitly says that the companies are responsible:
> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED
That line allows shifting the blame upstream without any friction.
Exhibit A: Company X uses library Y by Mr. Z., which is used by another 100 or so companies. Mr. Z. is happy because he's quasi-famous because of all the exposure. A bug has been found in Y by users of Company X, which is not interested in fixing it.
- Users: Hey Company X, this feature provided by libY is broken.
- Company X: This makes us lose money, but it's complicated. Tell Mr. Z.
- Mr. Z: There's no warranty whatsoever.
- Company X: You either fix it, or we spread the word that you're irresponsible and everyone will inevitably migrate to libW.
- Mr. Z: OK. Lemme look at that.
Mr. Z drops everything, fixes the problem, maybe gets a "Thanks!", and might feel better. Company X and the other hundred get free labor for their problems, and one person burns out.
Why? Because nobody tried to understand how the GPL works, and companies said "MIT or no cookie points" anyway.
So, another developer is bought with hope vapor. He gets nothing in the end, while the company prints money in two ways: by not buying an expensive library, and by selling its capabilities.
I don't claim to have first-hand experience, that was just a suggestion. But there is a recent study on how maintainers respond to bug bounties here: https://arxiv.org/abs/2409.07670 .
The title of the linked HN story is "Microsoft offered FFmpeg small one-time payment instead of support contract".
So FFmpeg said that they need a contract for that, and Microsoft gave a couple thousand dollars as a one-time contribution instead.
I mean, "a few thousand dollars" for something underpinning Teams is unacceptable. They probably charge a small client 10x that for a yearly license.
My point is that if FFmpeg tried to raise more awareness of the issue, say by talking to news outlets, they could get much more funding from MSFT.
Furthermore, big companies like Google and Microsoft care a lot about security, so they could raise money for security engineering, like fixing memory corruption issues.
Of course, FFmpeg could complain that Google and Microsoft don't care about all the high-severity vulnerabilities in FFmpeg.
That would be much more of an eye catcher.
Please see what Daniel has shared today. Link is in the comment you replied to.
Open Source software became so common that the tragedy of the commons applies to it. IOW, there'll always be someone who will accept exposure as a valid form of payment, whether because they're very rich, or desperate, or just don't care.
I did read that link before commenting, and there's nothing in there about users damaging Daniel's reputation after he declines to do free work for them?
> there'll always be someone who will accept exposure as a valid form of payment, whether because they're very rich, or desperate, or just don't care
Why is this, especially in the cases of being rich or not caring about compensation, a problem? I have done a lot of Open Source work for free, and a lot of Open Source work while paid by companies, and I don't feel like I've been exploited or otherwise mistreated in either case.
It's not a problem, it's just a fact. I personally don't care about the compensation either, but not everyone is motivated the same about developing software.
On the other hand, I believe requesting somebody's time for free is unethical, especially if you are a company wanting something from other parties at a certain quality, at a certain time.
Somebody using your code and getting business done with it might not feel exploitative, and that might be true for you and me. However, if they demand support from you, in X hours, at Y quality, and expect you to "stop, drop and roll" for them, now that's exploitative. This is what I'm trying to say.
Many young people who happened to write good code, and whose good code was picked up by corporations, are exploited like that. Not all of them know better or have the gravitas to say "go fix yourself", and this allows the exploitation to continue.
I'm very grateful for the people who write this code and enable this massive and wonderful ecosystem. I try to help them by filing high-quality bug reports, submitting patches when I can, and monetarily supporting a couple of them. I'm not against open source, but I prefer Free Software, because it's fairer towards the developers and the users. I don't like companies running away with someone's effort and then coming back to low-key threaten them into free work.
Also, again talking about Microsoft, there's the WinGet/AppGet saga, which is ugly in its own right.
> Not all of them know better or have the gravitas to say "go fix yourself", and this allows the exploitation to continue.
Agreed there, but then this is what I think we should be arguing for. Not "companies are wrong to use software without paying" but "companies are wrong to demand work from (and especially to make threats to) volunteers" and "volunteer maintainers should be well supported by the community (and anticipate such) when they decline to extend software".
> Agreed there, but then this is what I think we should be arguing for.
I mean, the original comment (by me) you replied to is intended to portray a scenario where the company threatens the developer for not fixing, on short notice and for free, a bug which affects the company.
Possibly I read more into your comment than you were trying to say, but I interpreted you as saying "and so we should shame companies for not paying" as opposed to "and so we should shame companies for threatening"?
You dove a little deeper than I intended. In short:
- Companies use Free or Open Source Software: That's great.
- Companies give feedback (bug reports, RFCs, developer time etc.) to said projects: That's awesome.
- Companies wait for the developer and have no hard feelings when their requests are done for free, or rejected because it doesn't fit developer's vision: That's the way it should be.
- Companies pressure/threaten the developer over features, timelines, and requests, and expect the developer to do as they say for free: Hell no!
If they see eye to eye and let the developer be, it can be done for free. If they try to treat said developer as an employee who works for internet cookie points, now we have a problem.
The GPL can't solve the FOSS funding situation; it's relatively easy to comply with it and still not send any money (nor code) back upstream to maintainers.
You're our resident GPL expert, and you're right, but the reality differs a bit, with all due respect.
Companies don't like the GPL because it mandates that they hang their laundry outside. In turn, this creates a code-quality pressure which companies don't want to pay for. This visibility also creates another, more psychological pressure on companies by exposing the external stuff they are using.
As a result, companies become more vulnerable to external pressure, since somebody can point out what they are using without supporting, and call them out on it.
This can potentially send more money to developers, but it will not create value for the shareholders, because having another yacht is more important than a pesky person's mental health and living conditions.
The GPL doesn't mandate public disclosure of code, just offering code to your users, who probably won't even know what source code is, let alone download it, tell anyone about it, modify it or redistribute it.
The EU CRA law is going to start creating the code quality pressure you mention too, with financial and other penalties. So they will have to do the right thing eventually. Hopefully that will make the GPL more acceptable to them.
The external pressure thing applies to the permissive licenses too, since companies have to provide attribution as part of the MIT/BSD/etc. licenses, usually by having copies of the copyright notices in the system settings of their devices. For example, curl is permissively licensed, all the car companies use it, none of them sponsor curl, and curl's maintainer is now complaining about that. Of course, it's extremely unlikely any of those companies care. The CRA might make them care, though.
> The GPL doesn't mandate public disclosure of code, just offering code to your users...
That's the theory, and it's correct; we have discussed this before. However, a SaaS running AGPL code has to put it "out there", or mail it to any user as soon as they register, so in this case it's moot.
Considering that much GPL software is also distributed over the net, the code has to be "out there" in practice as well. Unless you are Red Hat and selling the GPL software in question, which is perfectly fine.
> The external pressure thing applies to the permissive licenses too,...
Finding the copyright notices, buried at the bottom of a text as long as a Hollywood movie's end-credits roll, which is in turn buried 5 levels deep in menus, is practically impossible unless you go looking for it. I can argue that the GPL's condition is "in your face" when compared to permissive licenses.
Also, who will dig in and find that I used a specific library if I conveniently forgot to add its copyright line to this already long wall of text? "What will they do? Sue me from their mother's basement?", the companies think, 99% of the time.
busybox has a tool to detect its inclusion in an embedded image, but that's GPL to begin with.
The GPL and BSD notices are usually in the same place, in the Settings -> About -> Legal notices dialog or similar.
> Also, who will dig and find that I have used a specific library if I conveniently forgot to add its copyright line to this already long wall of text?
People will still find out. The router I have violates both the BSD license and the GPL. It simply has no copyright notices at all. The only indications are that the web server's 404 page links to the micro_httpd homepage, and the network filesystem feature uses the word Samba. That's probably more common than deliberately incomplete copyright notices. Even more common are willful, deliberate GPL violations.
More realistically, users are going to say "Hey Company X, this feature is broken." They won't know or care about libY. I would have replied with "There's no warranty whatsoever. Please submit a bug report and we will prioritize it accordingly. We do accept pull requests."
The bug might have low impact in most cases but break how Company X is using libY, so it might not get fixed for a while. If this is hurting them, they can fix it themselves and submit a PR. Or they can work with Mr. Z to prioritize their bug, which puts the shoe on the other foot. If it's a huge problem that affects half the web, then Mr. Z will be working on it anyway.
If I were Mr. Z, I would know the problems Company X will have replacing libY with libW, and wish them the best of luck if they bring it up. No one's paying me, if they want to use something else, good riddance. Especially if they are threatening me. But I get it, people are different.
I'm sorry, but what kind of fantasy is this? Here's how it works in reality:
- Customers: Hey Company X, this feature provided by libY is broken.
- Company X: This makes us lose money, but it's complicated. Tell Mr. Z.
- Customers: We don't care who Mr. Z is or who is responsible. If your company does not fix the problem we are going to fucking murder you.
No paying customer will ever accept that a company tries to shift the blame to somebody else. So Mr. Z is free to ignore anything that company asks from him, reputation intact.
This I would strongly dispute. I’ve seen first-hand, many times, that developers who ignore such things definitely face the negative consequences. It takes very careful maneuvering not to get burned, either by reputation damage or by burnout.
So your "reputation" among a bunch of parasites takes a hit? Who cares about what they think? They're not giving you any money anyway. They're just using you.
It's like if a group of bums in the park think I'm a cool guy because I give them cigarettes when they ask. Great. And if I stop giving them free cigarettes then they say amongst themselves "man, that guy is a real jerk". Ok, should I care about what a bunch of free loading bums think?
Of course I understand that I will be downvoted for this, because people who love being victimized hate it when someone points out that they're being taken advantage of.
While you might see them as parasites, their reputation within the community may be very different. To fit into your scenario: you may need to get work from the other bums.
If people demand that you work for free for their monetary benefit and badmouth you if you don't, then that's not a "community". Those are people you want nothing to do with. Most businesses understand that they have to pay for every benefit or service they get from third parties.
Most professional developers aren't that stupid. The problem is students, and the underemployed more broadly, write code to make a name for themselves, which isn't entirely irrational.
A bit unfortunate that they used the term "domain model" here. The domain models here are purely data-centric, whereas domain modeling focuses mainly on behavior, not underlying data structures. The data used in domain models is there to facilitate the behavior, but the behavior is the focus of the code.
From a modeling perspective, there is certainly inherent complexity in representing data from domain models in different ways. One can argue, though, that this is a feature and not a bug. Not every use-case needs the same level of nuance and complexity, and representational models are usually optimized for particular read scenarios. This approach seems to argue against that, favoring uniformity over contextual handling of information. It will most likely scale better in places where the level of understanding needed from the domain model is quite uniform, though I have most often seen use-cases get complicated when they do not simplify concepts that are very complex and nuanced in the core domain model.
Saying “hire good people and give them room to do good work” is like cooking a dish from half a recipe. It’s execution-centric, and execution alone is never enough; you need direction. A strategy, a vision, a need. Without a need from a customer, it’s nothing.
One thing that consistently bugs me is how upgrading from the lowest tier of runners to _literally_ anything else renders your paid-for included minutes useless. I do not understand why those wouldn't just be billed as multiples of the base runner, much like Mac and Windows runners are. It seems the crediting logic is there but, knowing software, there is probably some accidental complexity preventing this part from leveraging it. Either way, it's very frustrating from a paying customer's perspective.
The same goes for minute crediting, where splitting things up to run concurrently is actively discouraged, because each job individually rounds up to the next minute. For example: have 3 concurrent jobs that each run 1m10s? You're billed 6 minutes. I get that a run gets rounded up, but come on.
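To spell out the arithmetic (assuming per-job round-up to the whole minute, which is how the billing appears to behave):

```typescript
// Sketch of GitHub Actions-style per-job minute rounding.
// Assumption: each job's duration is rounded up to the next whole minute
// before being summed, rather than rounding the total.
function billedMinutes(jobSeconds: number[]): number {
  return jobSeconds.reduce((total, s) => total + Math.ceil(s / 60), 0);
}

// Three concurrent jobs of 1m10s each: 2 billed minutes apiece.
const parallel = billedMinutes([70, 70, 70]); // 6 minutes

// The same work as one sequential 3m30s job.
const sequential = billedMinutes([210]); // 4 minutes

console.log(parallel, sequential);
```

So parallelizing the exact same 3m30s of work costs 6 billed minutes instead of 4, purely due to rounding.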
As somebody who's spent over a decade making interactions with filesystems easier, I really understand why somebody would be tired of it. I originally made Flysystem for PHP to reduce the consumer-end complexity of using many types of filesystems (S3, FTP, SFTP, GCS, GridFS). I've recently made the move towards the TypeScript ecosystem, for which I've built https://flystorage.dev (a TS equivalent of Flysystem). Looks like this could be an easy adapter to include. Will put it on my research list, thanks for sharing!