CJefferson's comments | Hacker News

No, AIs are terrible at finding these types of patterns.

You could hypothetically use AI to write algorithms to find the patterns, but people have already spent a long time super-tuning them.

AIs can't even solve Sudokus as well as my mother (at least, I keep checking) -- they aren't good with piles of numbers and complex patterns.


AI has been used in astrophysics for a long time now. AI is more than the genAI of the past 3-4 years... Classification tasks are handled by AI now because it's the only thing that gives you some accuracy at scale.

A friend recommended having a D&D-style roleplay with a few different engines, to see which you vibe with. I thought this sounded crazy, but I took their advice.

I found this worked surprisingly well. I was certain Claude was best, while they liked Grok and someone else liked ChatGPT. Some AIs just end up fitting best with how you like to chat, I think. I definitely find Claude best for coding as well.


But Cloudflare do block things. As a rule, they tend to block the things the American government wants blocked.

The problem is they want to be the people who choose what gets blocked, rather than elected governments.

To me, this whole thing is crazy. Certainly pull out if you like, but I'm shocked how many people seem to be siding with a profit-making company over an elected government.


I can confirm that. I got blocked due to a frivolous report: Cloudflare blocked me and categorized my site as phishing (censoring me from anyone who uses their systems to browse).

No support. No responses to emails or to requests for a review by a human.

They also sent a notice to my hosting provider. My hosting provider promptly looked at my site and closed the ticket. It was pretty clear to anyone that the report was malicious.

So yes, Cloudflare censors (to quote Matthew Prince) with "No judicial oversight. No due process. No appeal. No transparency".

Granted, this could just be due to lack of staff and support.


They requested a worldwide block; as a Bolivian citizen, I have not voted for any Italian government officials. This article seems heavily biased; ignoring this specific point is really strange.

I guess Bolivian people like to watch soccer live too, while that match stream was paid for by an Italian media company. I am not in favour of any of this, but isn't it easy to defend that request, legal or fair or not?

Only if you ignore the fact that the requests these companies have made previously show incompetence, like when they blocked Google Drive because it was being used to host copyrighted content. Do you want them randomly disabling CDNs or other sites globally if any user happens to use them for piracy?

https://www.ansa.it/canale_tecnologia/notizie/cybersecurity/...


Actually, to me this is the perfect argument for AI music not having copyright.

Normally the copyright is owned by the creator. Algorithms can't own copyrights, so there is no copyright. There is already legal history on this.


I don't mind rebasing a single commit, but I hate it when people rebase a list of commits, because that creates commits which never existed before, have probably never been tested, and generally never will be.

I've had failures while git bisecting, hitting commits that clearly never compiled, because I'm probably the first person to ever check them out.
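For what it's worth, git bisect run treats exit code 125 as "cannot test this commit, skip it", which at least stops those never-compiled commits being blamed. A minimal sketch of a helper script, assuming a make-based build and tests (the script and commands are my own illustration):

    #!/usr/bin/env python3
    # Helper for "git bisect run": exit code 125 tells bisect to skip
    # commits it cannot test, so rebased commits that never compiled
    # don't get blamed for the bug.
    import subprocess, sys

    if subprocess.run(["make"]).returncode != 0:
        sys.exit(125)  # doesn't even compile -> skip this commit
    tests = subprocess.run(["make", "test"])
    sys.exit(0 if tests.returncode == 0 else 1)

Then something like "git bisect start <bad> <good>" followed by "git bisect run ./check.py" walks the history for you, skipping the untestable commits.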


Sometimes it feels like the least-bad alternative.

e.g. I'm currently working on a substantial framework upgrade to a project - I've pulled every dependency/blocker out that could be done on its own and made separate PRs for them, but I'm still left with a number of logically independent commits that by their nature will not compile on their own. I could squash e.g. "Update core framework", "Fix for new syntax rules" and "Update to async methods without locking", but I don't know that reviewers and future code readers are better served by that.


In Mercurial you could have those in the hidden phase for future reference. In Jujutsu you can have those in a local set, but not push them upstream. The only unfortunate thing with Jujutsu is that, because it is trying to be a git overlay, you lose state that a Mercurial clone on another machine would have.

I wonder how relevant and feasible this workflow would be: https://graydon2.dreamwidth.org/1597.html

Where you have two repositories, one "polished" where every commit always passes, and another for messier dev history.


It seems to me the "Not Rocket Science" invariant is upheld if you just require all PRs to be fast-forward changes. Which I guess is an argument in support of rebase, but a clean merge counts too. If the test suite passes on the PR branch, it'll pass on main, because that's what main will be afterward. Ideally you don't even test the same commit hash twice.

If you have expensive e2e tests, then you might want to keep a 'latest' tag on main that's only updated when those pass.
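A sketch of what landing a PR under that invariant could look like, assuming a make-based test suite and branch names of my own choosing:

    # Test the exact tree that main will become, then fast-forward to it.
    import subprocess

    def sh(*args):
        subprocess.run(args, check=True)

    def land(pr_branch):
        sh("git", "rebase", "origin/main", pr_branch)  # PR now fast-forwards from main
        sh("make", "test")                             # test the future tip of main
        sh("git", "checkout", "main")
        sh("git", "merge", "--ff-only", pr_branch)     # main == the commit just tested

Because the merge is --ff-only, the commit hash that passed the tests is exactly the hash main ends up on, so nothing lands untested.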


I'd guess it's much less accurate than that.

Part of genetics is pattern matching, and last time I checked I still couldn't find a model that can correctly solve hard Sudokus (well, assuming you don't pick a coding model that writes a Sudoku solver... maybe some of them are trying to do genetics by running correct algorithms), a trivial job if you write a program designed to do it.
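To be concrete about "trivial": a plain backtracking search solves any valid 9x9 Sudoku. A minimal sketch in Python (my own illustration, not a tuned solver):

    # grid is a 9x9 list of lists; 0 marks an empty cell.
    def valid(grid, r, c, v):
        if v in grid[r]:                            # row clash
            return False
        if any(grid[i][c] == v for i in range(9)):  # column clash
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)         # 3x3 box clash
        return all(grid[br + i][bc + j] != v
                   for i in range(3) for j in range(3))

    def solve(grid):
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for v in range(1, 10):
                        if valid(grid, r, c, v):
                            grid[r][c] = v
                            if solve(grid):
                                return True
                            grid[r][c] = 0          # undo, backtrack
                    return False                    # dead end
        return True                                 # no empty cells: solved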


To be honest, just getting to the point where house prices don't rise above inflation, maybe even stay fixed (so inflation eats away at their value), would be a massive accomplishment. The main problem at the moment is that prices keep rising above inflation in most places, year after year.

Why have shelves full of books that haven't been borrowed in 15 years? What benefit is that providing?


I once borrowed a book and found a previous borrower's receipt in it, placed as a bookmark. Upon inspection, it turned out that the previous borrower was myself(!) (I recognized the library card number), about ten years earlier.

So probably, no one had borrowed it in the time between. I was very happy the book had not been thrown out.


You can find entertaining stuff there. My interests can be really niche. I remember once finding an amazing book in our college library from the sixties or seventies about the use of LSD in treating psychiatric disorders. While I didn't agree with all the suggestions in there, it was a fascinating time capsule (with colour illustrations, many of them by patients). With the microdosing debate, it's probably relevant again.

Yet when I took the book off the shelf it looked like no one had touched it in many years.


What you are saying is especially true for fiction, less so for nonfiction. Many nonfiction topics are important and require a large volume of materials to remain as reference. For example, you never know when it might be important to know how something was manufactured 50 years ago, or what happened in Congress 20 years ago, or what a newspaper reported a hundred years ago. This makes it really hard to judge which items could be culled. I'm inclined to agree that borrow rates are relevant but they are not the only thing that matters. The possibilities of digitization and interlibrary loan make culling less risky, but someone still has to decide to keep unpopular reference materials for them to remain available.


Almost every library regularly throws out books, and all librarians I know are happy with this. New books arrive regularly, and unless you plan on your library growing without limit, you need, in general, a one-in-one-out policy.


You have to rely on the implementation for anything to do with what happens to memory after it is freed, or really for almost anything to do with the actual bytes in RAM.

