A guess that's probably correct: many torrent sites (from which the client can download .torrent files when given a URL) have infra that sucks. This includes expired certificates. Users don't want to deal with that shit. Developers don't want to deal with users complaining. It's not really considered a risk because lots of those torrent sites (used to) just use plain HTTP to begin with, so who cares, right?
> Until you need to fix a 3 year old build that has some insane wizardry going on.
My experience with Gradle is that the "3 year old build" part is almost certainly the death knell, far more than the insane wizardry part. It usually goes like this:
git clone .../ancient-codebase.git
cd ancient-codebase
./gradlew # <-- oh, the wrapper, so it will download the version it wants, huzzah!
for _ in $(seq 1 infinity); do echo gradle vomit you have to sift through; done
echo 'BUILD FAILED' >&2
exit 1
I dislike Gradle as much as you probably do, but between Maven and Gradle, the one that "vomits" stuff on the command line is definitely Maven.
Gradle errs by going too far in the other direction: it just doesn't log anything at all, not even the tasks that are actually being run (vs skipped... do you know how to get Gradle to show them?? It's `gradle --console=plain`, so obvious!! Why would anyone complain about that, right?!) or the printouts you add to the build to try to understand what the heck is going on.
Having worked with Maven and Gradle, I'd say Gradle was worse in the average case, but better in the worst case. There are way more Gradle projects with unnecessary custom build code because Gradle makes it easy to do.
On the other hand, when builds are specified in a limited-power build config language, like POM, then when someone needs to do something custom, they have to extend or modify the build tool itself, which in my experience causes way more pain than custom code in a build file. Custom logic in Maven means building and publishing an extension; it can't be local to the project. You may encounter projects that depend on extensions from long-lost open source projects, or long-lost internal projects. On one occasion, I was lucky to find a source jar for the extension in the Maven repository. It can be a nightmare.
The same could happen with Gradle, since a build can depend on arbitrary libraries, but I never saw it in the wild. People depended on major open-source extensions and added their own custom code inside the build.
When I used Maven, extensions had to be published to and pulled from a public repo. We couldn't even use the private repo that we used for the rest of our libraries, because the extension had to be loaded before Maven read the file where our private repo was configured.
Whereas a Gradle build can read Groovy files straight from disk.
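For example (a minimal sketch; the script path is made up), a build can pull custom logic straight from a file in the repo, with no publishing step:

```groovy
// build.gradle
// Load build logic from a plain Groovy script checked into the repo.
// 'gradle/publishing.gradle' is a hypothetical path for illustration.
apply from: 'gradle/publishing.gradle'
```

Gradle will also compile anything under buildSrc/ and put it on the build classpath automatically, which covers the "plugin local to the project" case without any repository round-trip.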
Then you don't have a standard build, you have a build with multiple steps that needs to be documented and/or scripted. In an organization where every other project builds in a single step with "mvn package", and people can check out a repo and fire up their IDE and stuff just works, people are going to get bent out of shape because from their perspective, things aren't working out of the box.
A slightly more powerful build tool that supports custom code in the build doesn't force users to script around it. You can create an arbitrarily customized build that builds with the same commands as a Hello World project. (It's a double-edged sword, to be sure, because people don't try as hard to avoid customization as they would with Maven.)
You can, but why should you need to? Why can't the build tool take the plugin code directly off of disk, build it, and use it? This kind of orchestration of manual steps is exactly what build tools are meant to be good at.
Sure. But adding the ability for a build to modify itself drastically increases the complexity of a build tool. The Maven developers decided they wanted to avoid that.
I'm using several maven plugins (not extensions) that are defined within the reactor project itself. It works well.
You do need to split your build into multiple projects governed by a reactor, but you'll have that anyway as soon as you have more than one module. Then you just always build the reactor. Pretty much the same idea as Gradle.
1: I believe that you encountered errors; programming is packed to the gills with them. But correlation is not causation: just because it did not immediately work in your setup does not mean it's impossible or forbidden.
My problem with Gradle is that they keep making breaking changes for low-value things like the naming of options, so I have to chase deprecation warnings and can never rely on a distro-supplied Gradle version.
Gradle devs, please get over yourself and stay backward compatible.
It allows its users to actually use their computer as a computer instead of a glorified phone.
MacOS nannies you left and right, preventing you from doing things you want to do because Apple says no.
Windows historically didn't have such restrictions because it's a desktop operating system and not a gimped phone. They're slowly being added, but it takes time to overhaul an entire architecture while maintaining backwards compatibility (which MacOS also doesn't care about at all).
Linux is of course far more "hackable" but there aren't as many computer illiterates using it.
LOL you should be upvoted as your comment perfectly captures the blind arrogance of the software industry.
When you call people computer illiterate, you are blind to the technocrat injustice imparted onto the general populace.
> “The obnoxious behavior and obscure interaction that software-based products exhibit is institutionalizing what I call "software apartheid".”
> ― Alan Cooper, The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
> “When programmers speak of "computer literacy," they are drawing red lines around ethnic groups, too, yet few have pointed this out.”
> ― Alan Cooper, The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
You too can see the light and rise above the elitism of computer literacy. You know, there are many smart people that are too prideful to put up with what computer people demand as computer literacy. They suffer in silence, you will not have their loyalty, and they will switch to competing software the moment they are able to.
What? I never said being computer illiterate is bad. Plenty of fine people are computer illiterate. And plenty of fine people are fantastic at things I'll never be good at. That's fine.
Each user gets a token associated with them. On each request you first check if they are authenticated via your auth of choice. If so, you take the token associated with this auth from your database. If not, you take the token sent via cookie. If no token is available, you generate one and set it.
Then when a user "signs up" you do the same thing. If they sent a token via cookie, you associate this token with their auth in your database. If they somehow didn't have a token yet they were probably blocking cookies, but you can just generate a new one at that point.
If a user logs in again later while they already had a token you can choose to migrate all data from that token to their login token, so no data that was created prior to login gets lost.
The point is that there's essentially no difference between regular profiles and shadow profiles. Both are just profiles. And a profile can be authenticated using its token, or via an associated auth provider.
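The flow described above can be sketched roughly like this (a toy in-memory version; the class and method names are invented for illustration, and a real implementation would hit a database):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of the shadow-profile token flow described above.
class ProfileStore {
    // token -> profile data for that (possibly anonymous) profile
    private final Map<String, Map<String, String>> profiles = new HashMap<>();
    // auth identity (e.g. social login id) -> token
    private final Map<String, String> tokenByAuth = new HashMap<>();

    // On each request: prefer the token tied to the authenticated user,
    // fall back to the cookie token, else mint a fresh one
    // (the caller is expected to set the result as a cookie).
    String resolveToken(String authId, String cookieToken) {
        if (authId != null && tokenByAuth.containsKey(authId)) {
            return tokenByAuth.get(authId);
        }
        if (cookieToken != null) {
            return cookieToken;
        }
        return UUID.randomUUID().toString();
    }

    // On sign-up: bind the visitor's existing token (if any) to their auth,
    // so data created before login is kept. If they blocked cookies,
    // just mint a new token now.
    String signUp(String authId, String cookieToken) {
        String token = cookieToken != null ? cookieToken : UUID.randomUUID().toString();
        tokenByAuth.put(authId, token);
        return token;
    }

    // On a later login while an anonymous token already exists:
    // optionally migrate that token's data into the login profile.
    void mergeOnLogin(String authId, String anonToken) {
        String loginToken = tokenByAuth.get(authId);
        if (loginToken != null && anonToken != null && !anonToken.equals(loginToken)) {
            Map<String, String> anonData = profiles.remove(anonToken);
            if (anonData != null) {
                profiles.computeIfAbsent(loginToken, k -> new HashMap<>()).putAll(anonData);
            }
        }
    }
}
```

The nice property is that `resolveToken` is the only branch point: every request ends up with exactly one token, whether the visitor is anonymous or logged in.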
I know this, but do you think all of these parts should need to be coded by hand? It's just a complete waste of time. That's what I meant by writing hacky and ugly code to achieve this.
Plus, in Next, there was no way to associate a token with a social login during the new login on the backend. The only way is to do it via a separate request from the frontend and then re-login the user.
> With the PinePhone modem.. It was quickly found that the Quectel modem ran a stripped down version of Android on its ARM core, with adb shell available over the modem’s USB interface. When a few adventurous hackers started probing it and got shell access, they found tools like ffmpeg, vim, gdb and sendmail compiled in – certainly not something you’d need on a cellular modem, but hey.
EG25 is an IoT modem and those tend to expose some extra functionality such as HTTP clients or TTS synthesis over AT commands. Some even document how to compile and run software on them - though of course it's only about the application CPU and not the actual modemy stuff that runs on separate DSPs with proprietary signed Qualcomm firmware.
Most (all?) standalone modems are basically screenless smartphones/SBCs with integrated modem these days.
How are you going to authenticate the user? Now you need to solve that if you didn't have a web login before.
---
Guess @dang decided to rate limit my account again so I can't post replies :-)
> Some token that every account gets generated? It's really not that much to ask honestly.
How is the user going to know this token when they visit the website on their laptop? Keep in mind that the Google requirement is that you link to this delete page from the Play Store, where the user is not authenticated with your app. You can't just generate a URL containing this token.
Btw, if you (or anyone) don't want to be rate limited on HN, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.
It would be a good first step for you to stop being sneaky about this. It's very hard to respect you as a moderator when you employ underhanded tactics like sneakily rate limiting accounts and trying to gaslight people into thinking this is a universal limit. I suppose it's a (small) step up from the shadowbanning you used to do.
Perhaps change the message to something like: "Your account has been rate limited. For more information email [...]"
And honestly, having people beg via email is just gross power tripping behavior.
It's just an attempt to manage overwhelming case load with limited resources. It's on my list to build a system that gives better feedback.
On the other hand, I'm not sure that it won't just make things worse, since not everyone is going to respond as well as you might to a message like "Your account has been rate limited."
Amplifying on dang's comment: from my own experience moderating, many people respond in a strongly negative fashion to moderation, up to and including prolonged attacks on the site itself and threats to moderators. Effective moderation on large sites is a careful balance between transparency and pragmatism, to the extent that even well-intentioned initiatives such as the Santa Clara Principles (<https://santaclaraprinciples.org/>) may not be practical.
Something I note having been caught up on both sides of this issue: as moderator and moderated.
HN itself is not one of the super-sites, but it is amongst the better discussion platforms on the internet here and now (boys), and has been for far longer than virtually any other instance I can think of (dating to 2007). Metafilter would be the principal other exemplar.
Usenet, Slashdot, Kuro5hin, Google+, Reddit, Ello, Diaspora*, Imzy, FB, Birdsite, and others, would be amongst the failures IMO. Not all are now defunct (though about half that list are), none remain usable.
And perhaps more to the point - you USED to be able to use normal Java file APIs and syscalls outside of Java, but that functionality has been gradually whittled away (in the name of legitimate security improvement) over the years, meaning "basic" IO functionality your apps relied upon could be taken away at any point and replaced by less ergonomic Java-only APIs with less functionality.
Fun fact: the official Dropbox Android app used to use inotify to watch for changes to the publicly writeable synced files in order to sync back to the cloud! It had to be replaced by the Java Storage Access Framework APIs later.
Another fun fact: the Android SDK came with a JNI wrapper around inotify, but it buggily stored inotify handles in a static (global within an app VM) dictionary, meaning you'd silently lose tracking if you created more than one Java file-watcher object that happened to be for the same path. I had to rewrite that JNI wrapper back in the day to avoid that global-state bug.
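The failure mode described above can be illustrated with a toy reconstruction (this is not the actual SDK source; the class name and fields are invented): keying watch state in a static map by path means a second watcher on the same path silently clobbers the first.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative reconstruction of the bug described above: watch state
// lives in a static (per-VM) map keyed by path, so creating a second
// watcher for the same path silently disables the first one.
class BuggyWatcher {
    static final Map<String, BuggyWatcher> REGISTRY = new HashMap<>();

    final String path;
    boolean tracking = true;

    BuggyWatcher(String path) {
        this.path = path;
        BuggyWatcher previous = REGISTRY.put(path, this);
        if (previous != null) {
            previous.tracking = false; // the first watcher stops getting events, with no error
        }
    }
}
```

The fix is the obvious one: keep the handle as per-instance state (or reference-count watchers per path) instead of letting the map entry be overwritten.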
Since you have such strong opinions on the matter, and experience, why don't you contribute to the SyncThing android app and implement this? Alternatively you could grab your time machine, travel back several years and let them know to anticipate this arbitrary change google would pull in the future.
I professionally contribute (and have contributed) to many projects to make them compatible with Play Store policies (that's my job, after all), but I have limited time, and the SyncThing developer's attitude kinda annoys me: it's the kind of attitude where the developer in your PR will spend weeks arguing over the code's implementation instead of addressing the review comments in a day.
I think it's because he doesn't have a time machine and doesn't have time to donate to rewrite someone else's project that the owner expressly doesn't want rewritten.
This is what we call "Put up or shut up". It's easy to bash someone for not wanting to spend many hours of their time on work they have no interest in, just because some third party is now demanding it. The change is absolutely arbitrary, also. There used to be no way to grant apps access to specific folders. This is when the app was written. This still works. Google's own apps work that way. But now Google has also implemented additional ways to access the filesystem, and they are demanding that people who don't even work for them rewrite their projects.
It would be understandable if they demanded that new apps adhere to these new policies. But forcing older apps, written when there literally wasn't an alternative available, to do a full rewrite or be banned from updating? Absurd.
If you are now claiming your question was rhetorical, that doesn't make answering it snark.
> It's easy to bash someone
No one's being bashed. Let's put it in stark relief. Here's the "bashing" you replied to: "But that's really hard to do if you didn't begin with cross platform architecture that doesn't take into account having to modularize the filesystem layer for Android/iOS :/"
> The change is absolutely arbitrary
No, it isn't.
We can tell you know that, because you immediately say "There used to be no way to grant apps access to specific folders."
> But now
"Now" == "3 years ago"
> demanding people who don't even work for them to rewrite their projects
They're not demanding anything, other than saying Google can't keep taking updates to an app it put on its store with full filesystem access, 3 years later, unless the app patches that filesystem access. As other comments note, there are plenty of storefronts for these situations.
n.b. it's dangerous to take updates with unpatched security flaws, because bad actors buy apps/browser extensions. This is roughly the minimal action Google needs to take to avoid being liable - it's been 3 years, it looks like complicity to claim its store is safe and secure, then repeatedly take updates to products on the store with known security vulnerabilities, for years.
> But blocking older apps, that were written when there literally wasn't an alternative available, to do a full rewrite or be banned from updating
Through interpretive charity, I'm guessing you mean Google won't put updates from the vendor on the store unless the update includes no longer asking for full filesystem access with zero alternatives.