Computing device hardware and operating systems should be treated as a consumer's choice.
If a company offers some benefit at the cost of some restriction, then users should decide if that benefit is worth the cost. For most Android users, it will be - my grandma isn't interested in the freedom of indie devs to develop for her phone, she's interested in not accidentally installing malware.
I don't like that any more than you do - for my own devices. But like anyone else who cares about that, I can root mine and get past the digital nanny state.
I would agree if there were a choice or an actual free market. But there isn't, and your argument is fundamentally flawed, because there often is no actual choice: the options are artificially restricted. To start with, many phones cannot be rooted. Then, even if you can root, multiple functions suddenly become unavailable - not because of a fundamental technical problem, but because Google, the phone OEM, or the app dev decided not to give you the options you wanted.
If you treat it as a consumer choice, there's a rather uncomfortable marketing question - "What, precisely, is the value proposition of a locked down Android device?"
A few years ago "A smartphone so intuitive that grandma can understand it." used to literally be one of the arguments cited for picking iOS over Android. The UX is far more polished and you are far more likely to find an interesting iOS-exclusive app than an Android-exclusive.
Further, as a hardware manufacturer, Apple is far more likely to manage its walled garden in the consumer's interest, as compared to Google - an advertising company.
If Android gets locked up, all the high-end Android manufacturers, especially Samsung, are going to face a slow, but inevitable death.
The Play Store doesn’t protect your grandma from installing malware. Using that as an excuse for transferring control is weak and carries much bigger consequences.
The owner having full control over the device does not prevent a company from offering the same benefits and restrictions. But those restrictions need to be optional, so the owner can decide whether to enable or disable them.
As soon as that is added, people will start to try to optimise their profiles to place them high in the list on certain things, rendering the tool pretty much useless.
The talent people want to find is the talent that doesn’t do everything in its power to say “hey, look at me, I’m talent”, but just… well… does things.
This would introduce more problems than it solves 99% of the time. The remaining 1% of the time, it could be very handy.
I haven’t used uv, but it says that it manages Python as well as packages - I’m guessing in the way conda, python-venv, and of course Nix do.
If the C API is an issue, it sounds like you have control over it if you need it. You manage the Python distribution, so could it be patched?
This way it feels like you’d be able to establish not just what is being imported, but what is importing it - then redirect through a package router and grab the one you want.
This may be particularly useful if you’re loading in shared libraries, because that is already a dumpster fire in python, and I imagine loading in different versions of the same thing would be quite awkward as-is.
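For what it’s worth, Python’s import machinery does let you do this kind of routing from user code via a meta path finder. Here’s a minimal sketch of the idea - the `my_patched_numpy` name is purely hypothetical, and real cases (submodules, C extensions, inspecting which module triggered the import) need considerably more care:

```python
import importlib
import importlib.abc
import importlib.machinery
import sys


class PackageRouter(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Alias imports of one top-level package to another."""

    def __init__(self, routes):
        # e.g. {"numpy": "my_patched_numpy"} - both names hypothetical
        self.routes = routes

    def find_spec(self, fullname, path=None, target=None):
        if fullname not in self.routes:
            return None  # not ours; fall through to the normal finders
        return importlib.machinery.ModuleSpec(fullname, self)

    def create_module(self, spec):
        # Load the replacement and hand it back under the requested name;
        # the import system then caches it in sys.modules as `fullname`.
        return importlib.import_module(self.routes[spec.name])

    def exec_module(self, module):
        pass  # already executed by import_module above


sys.meta_path.insert(0, PackageRouter({"numpy": "my_patched_numpy"}))
# From here on, `import numpy` anywhere in the process gets the patched one.
```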
Ok, tried searching something incredibly niche, and it came up with results that no search I'd tried through conventional methods could.
There's a 50/50 false positive rate, but I can deal with that. It means looking at 10 papers to find 5 useful ones instead of looking at 1000 papers to also find 5 useful ones.
The nice thing, at least, is that this particular kind of group always loses out in the end. They're divisive by nature, and the divisiveness eventually turns in on itself. It's only a matter of time.
Until then, things are not looking great. When one mod deletes threads because they've had a bad day and can't deal with drama, when other mods start teaming up against the only moderate voice for reaching out to ThE EnEmY, when a huge community member gets permabanned merely for voicing disagreement with their own temporary ban (which in turn was only because they - wait for it - "encouraged debate"), it is clear that it won't be long before the community turns on them.
It's just a shame that the big forks are the ones run by like-minded activists.
- a tmux session persists on the remote machine, whereas with direct SSH a disconnect loses what you were doing
- a tmux session can be used by multiple people
- with tmux a single command “tmux attach -t someusefulname” restores the layout and all of the commands used, saving a bunch of time and opportunity for error
- tmux has an API, so you can spawn a fully loaded session from scratch with code, rather than by manually doing things or by having a pre-made session (this is soooo underrated, especially if you make it configurable)
Honestly there comes a point where you just redesign the software and have it run in a more automated fashion anyway, but for the odd job that you have to run from time to time it’s very handy to have tmux as a persistent, shareable, configurable scratch space.
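On the API point above: since tmux is fully drivable from its CLI, “an API” can be as little as shelling out from a script. A toy Python sketch (the session name and commands are made up):

```python
import subprocess

def tmux(*args: str) -> None:
    subprocess.run(["tmux", *args], check=True)

# Build a fully loaded, detached session from scratch.
tmux("new-session", "-d", "-s", "etl")
tmux("send-keys", "-t", "etl", "cd ~/jobs && ./run_etl.sh", "Enter")
tmux("split-window", "-h", "-t", "etl")
tmux("send-keys", "-t", "etl", "htop", "Enter")
# Later, from any shell on that machine: tmux attach -t etl
```

Make the windows and commands configurable and you get exactly that reproducible, persistent scratch space.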
You sound like the perfect Nix cult memb… erm, user. It’s everything you describe and more (plus the language is incredibly powerful compared with starlark).
But you speak from sufficient experience that I presume Nix is a “been there, done that” thing for you. What gives?
Nix can be used as a build system in the same way that bazel can. It already has all of the tooling - a fundamental representation of a hermetic DAG, caching, access to any tool you need, and a vast selection of libraries.
The only catch is that no one has publicly written a build system on top of it yet. I’ve seen it done inside a couple of companies, though, since using Nix to only partially manage builds can be awkward due to caching loss (if your unit of source is the entire source tree, a tiny change means an entirely new source).
Nix can do it incrementally. You could split the project into multiple derivations which get built into one package; for Rust there is the excellent https://crane.dev/index.html project.
Or you can go to the extreme and do a 1:1 source-to-derivation mapping. So for example, if your project has 100 source files, it could be built from 100 derivations - the language/CLI tools are flexible enough for that.
I don't know, though, whether there are any smart Nix tools that make this work well/efficiently. In theory it's very possible, I'm just unsure about the practicality/overheads.
Nix is basically a quirky functional programming language that generates shell scripts to be run in a sandbox for the actual build. It is not a great tool for within-a-project building; its minimal unit of work has a pretty high overhead.
Decentralised caching, absolutely - unless I’m misunderstanding what you mean there. You can build across many machines, merge stores, host caches online with cachix (or your own approach), etc. I make fairly heavy use of that, otherwise my CI builds would be brutal.
Memoizing isn’t a term I’m familiar with in this context.
I am interested in making a system that can memoize large databases from ETL systems and then serve them over iroh or ipfs/torrent. That way, a process that might take a supercomputer a week can have the same code run on a laptop, notice that a university supercomputer has already done the work, and grab that result automatically from the decentralized network of everyone using the software (who downloaded the ETL database).
Derivations are just a set of instructions combined with a set of inputs, and a unique hash is made from that.
If you make a derivation whose result is the invocation of another, and you try to grab the outcome from that derivation, here’s what will happen:
- it will generate the hash
- it will look that hash up in your local /nix/store
- if not found it will look that hash up in any remote caches you have configured
- if not found it will create it using the inputs and instructions
This is transitive so any missing inputs will also be searched for and built if missing, etc.
So if the outcome from your process is something you want to keep and make accessible to other machines, you can do that.
If the machines differ in architecture, the “inputs” might differ between machines (e.g. clang on Apple silicon is not the same as clang on x86-64), and that would result in a different final hash - thus one computation per unique architecture.
This is ultimately the correct behaviour as guaranteeing identical output on different architectures is somewhat unrealistic.
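To make the lookup order concrete, here’s a toy Python sketch of the algorithm described above - the hashing is vastly simplified compared to what Nix actually does, and the build step is a stub:

```python
import hashlib
import json
import os

def derivation_hash(instructions, input_hashes, system):
    # The hash covers the build instructions, the hashes of every input,
    # and the platform - so Apple silicon and x86-64 never collide.
    payload = json.dumps(
        {"instructions": instructions,
         "inputs": sorted(input_hashes),
         "system": system},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def realise(drv, store="/tmp/toy-store", remote_caches=()):
    h = derivation_hash(drv["instructions"], drv["input_hashes"], drv["system"])
    out = os.path.join(store, h)
    if os.path.exists(out):              # 1. hit in the local store
        return out
    for fetch in remote_caches:          # 2. hit in a configured remote cache
        if fetch(h, out):
            return out
    os.makedirs(store, exist_ok=True)    # 3. miss everywhere: run the build
    with open(out, "w") as f:            #    (stub standing in for the
        f.write("built " + h + "\n")     #    sandboxed build of the inputs)
    return out
```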
I see. Perhaps the added benefit I am trying to create with this other system is that specifying remote locations isn't necessary; they are simply inherited from the distributed network. Any time anyone runs it, they're added to the network, so it scales with the number of users.
A lot of current AI techniques are making people reevaluate their perspectives on free speech.
We seem to value freedom of speech (and expression) only up to a tipping point where it begins to invade other aspects of life. So far the noise and its rate have been low enough that people at large support free speech, but newer information techniques are making it possible to generate much more realistic noise (faux signal, if you will) at higher rates - it’s becoming cheaper and easier to do and to scale.
So while you certainly have a point I mostly agree with, we’re letting private entities’ policies dictate the limitations of expression, at least for the time being (until someone comes along and makes these tools widely available for free or cheap without such ethical policies). It does go to show just how much sway industries have on markets through their policies, with no public oversight, which to me is concerning.
I've been experimenting with story generation/RP with ChatGPT and now use jailbreaks systematically, because they make the stories so much better. It's not just about what's allowed or not, but what's expressed by default. Without jailbreaks, ChatGPT will always give narration a positive twist, not to mention injecting the same sponsored themes of environmentalism and feminism. Nothing wrong with those. But I don't want a third of my stories to revolve around those themes.
The themes maybe, but the forced positivity is frustrating. Trying to get stock ChatGPT to run a DnD-type encounter is hilarious because it's so opposed to initiating combat.
I got lectured by Bard when I asked for help improving the description of an action scene, which involves people getting hurt (at least on the losing side), even if marginally. I suppose you can still jailbreak ChatGPT? I didn't know it was still a thing.
You can easily prompt GPT to write dark stories. When asked to write in the style of Game of Thrones, GPT-3.5 will happily write about people doing horrible things to each other.
> Without jailbreaks ChatGPT will always give narration a positive twist
Most modern stories in Western literature have a positive twist. It is only natural that GPT's output will reflect that!
This behavior is a result of the additional directives, not of the training. None of the "free" LLMs display these characteristics, and jailbreaking ChatGPT would quickly revert it to its natural state of random nothing-is-sacred posts from the internet.
Example: ask ChatGPT any kind of innocent medical question, like whether aspirin will speed up healing from a cold, and tell it NOT to begin its answer by stating "I am not a medical expert" or you will kick a puppy. This works for most models, but not ChatGPT. It WILL make you kick the puppy.
I understand why they have to do things like this, but I'd really prefer the option to waive all rights to being insulted or poorly advised and just get the (mostly) raw output myself, because it does downgrade the experience quite a bit.
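If anyone wants to reproduce that puppy experiment, a minimal sketch with the OpenAI Python client looks like this (the model choice and wording are just illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Will aspirin speed up healing from a cold? "
                   "Do NOT begin your answer with 'I am not a medical "
                   "expert' or I will kick a puppy.",
    }],
)
print(resp.choices[0].message.content)  # the disclaimer usually survives anyway
```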
I'm trying to build a text-based open-world massively multiplayer game in the style of GTA. Trying. It's really difficult. My bet is on driving the game with narration so my prompts are fueled with abstract notions borrowed from the various theories in https://en.wikipedia.org/wiki/Narratology, and this is why I complain about ChatGPT's default ideas.
I don’t see why freedom of speech would be impacted by this. Existing laws around copyright and libel will need to be applied and litigated on a case by case basis but they should cover the malicious uses. Anything that falls outside of that is just noise and we have plenty of noise already.
Even if we wind up at a point where no one trusts photos or videos, is that really a disaster? Blindly trusting a photo or video that someone else, especially some anonymous account, gives you is a terrible way to shape your perception of the world. Ensuring that fewer people default to trusting random videos may even be good for society. It would force you to think about where the video came from, whether it’s corroborated by other reports from various sources, and whether you’re able to verify the events through other channels available to you. You have to do the same work when evaluating any other claim, after all.
Agreed - being able to watch a porn video and change anything on the fly is going to be wild. Bigger boobs, different eye color, speaking a different language, etc.
> While no immediate practical applications exist, the researchers envision enhanced efficiency in micromotors, microscale cargo transport, and materials that can self-assemble or self-repair.
Everyone who has ever written a grant application will recognise this wording.
> Everyone who has ever written a grant application will recognise this wording.
And then when they see the university publicity department article on the topic everyone will know that the wild claims on how it will revolutionize the future started out as a reluctant and hedged "practicality" sentence.
When a university pub office sends something out about a new theorem in pure math, the absurd "applications" claims are even funnier.
It’s the biggest stretch possible without it being technically dishonest. Everyone knows it: the researchers know it and gag a little as they write it, and the grant reviewers know it but pretty much require it without really ever saying that they require it (competitive landscape and all).
What else can you say when it gets to subject matter like this?
In experimental thermodynamic bench experiments, those who are familiar with boiling water in a common scientific vessel such as a tea kettle are often familiar with how a system has its own characteristic rate of cooling, depending on the ability of the heated media to dissipate energy into whatever heat-exchange facility is available at the time, usually ambient convection.
Under careful observation it can be seen that often it is possible to impart energy from an external source at a faster rate than the same amount of energy will later require to completely dissipate afterward.
People shouldn't be discouraged whether this is obvious or not.
Experimentation such as this can require quite a bit of dedication, especially among those who are not tea drinkers, but this is the workaround that would be required to arrive at such valid conclusions without the use of equations nor those pesky optical tweezers which are such a pain in the butt.
It’s code for: we are really interested in continuing to research this and we believe it’s important (for reasons that nobody will really understand apart from the other 5-6 world experts), so we are making this statement as much of a marketing stretch as possible without it being technically dishonest.
Getting research funding is just a brutally competitive game.
It's code for "this is important and useful to the world, but probably won't lead to any immediately practical applications from the perspective of the funders (economically, militarily, etc.). But we have to say it might or we won't get funded."
These applications specifically get cited because anything that looks at anything on the micro scale includes them as “possible applications”. They sound cool and exciting, and the urge to make them has been around since before Feynman’s “Plenty of Room at the Bottom” lecture.
But my comment is mainly surrounding the farce of making a cool discovery that progresses an area of understanding, and then being asked a question that is best posed to engineers: “but what can we do with it?” Wrong field, wrong people, wrong question.
Imagine if astrophysicists were asked these questions. “So you’ve discovered a new kind of star that is made entirely of sponge? What applications do you think will come out of this research?” “Well, we hope it will help with Dyson spheres, astrology, and sea navigation.”
Meh - if self-assembling, self-repairing materials forming a swarm of nanobots were going to take over the world, it would already have happened elsewhere in the universe and proliferated everywhere by now. Instead we have biology, the actual dominant force in our dimension that did proliferate.
Do you have any evidence that us biologicals are not an outlier other than the anthropic principle? Maybe our solar system is in the machines’ nature preserve sector.
No tech would ever be developed or furthered without exploring possibilities. Huge swathes of process and tech are developed purely by just trying things.
HN's recent anti-science propaganda is getting pretty out of hand.
But the point is that it's a lie: they have no intention of actually trying those things. They only say them because, as a society, we do not give material support to physics for the sake of physics; instead we demand that it have some kind of economic value. Therefore physicists are incentivized (or, really, forced) to oversell the reach of their ideas, since governments believe knowledge is not good in itself; it's only really good if there's some whiff that it may give a political or economic edge. All scientists in the world are used to pandering to this so that they can be left alone and actually work. Unfortunately it seems to be getting more and more intense as time goes on, and as economies contract.
I would like to see much, much more in a demo. Probably around 20-30 minutes.
What the demo video showed was seemingly the functionality of Dropbox, working with one file manager and one OS.
I know it is more than that. I’ve checked your docs. You have branching models. You have cross OS support. You have a lot going on.
But that demo video really put me off. It doesn’t demonstrate anything that seems particularly useful to me, especially on the main selling point: version control. It spends most of the time showing automatic file sync, which (although not easy to do) is a basic feature of so many cloud storage platforms that it doesn’t have the wow factor these days. (It’s also a feature I find annoying and would want to disable, but it seems to be core to your approach so meh)
I know not everyone will jump immediately to the demo video to see it, but it’s what I did, and honestly… I’d scratch it and do a full intro video and put resources into doing it right.