
Seems it's overloaded now. I like the UX though. My usual question with any hosting is: how do you avoid it being abused by hackers, scammers, etc.? Right now it's easy to create VMs for free with just an email account, which seems ripe for exploitation (maybe it's down now because someone's exploiting it?)

Some of these talks sound really brilliant. I love [0], exploiting a memory leak for years before it's fixed. I'm also really curious about [1], the custom crypto used in Chinese apps, and about the GPG vulnerabilities found in [2]. I think some of the politics ones are actually also very interesting. Looking forward to the streams.

[0] https://fahrplan.events.ccc.de/congress/2025/fahrplan/event/... [1] https://fahrplan.events.ccc.de/congress/2025/fahrplan/event/... [2] https://fahrplan.events.ccc.de/congress/2025/fahrplan/event/...


The first one is indeed fascinating... almost deserving of its own HN post :-)

It has been submitted six times in the last 10 months, with a grand total of 1 comment... I thought this site had Hacker in the title...

https://hn.algolia.com/?q=https%3A%2F%2Fgfw.report%2Fpublica...


Common misconception. HN uses "hacker" as in "the people who do the work that makes Y Combinator rich" rather than "someone who plays with technology". HN hackers are contrasted with CEOs and so on - people whose job is not the on-the-ground work.

> HN uses "hacker" as in "the people who do the work that makes Y Combinator rich" rather than "someone who plays with technology"

I do believe that originally Y Combinator did celebrate the people who play with technology, but I guess over the many years the focus has shifted.


Well, HN was originally "Startup News". https://news.ycombinator.com/announcingnews.html

You can also look at the posts from the first day :) https://news.ycombinator.com/front?day=2006-10-09


I'm surprised there's no mention of the SameSite cookie attribute; I'd consider that to be the modern CSRF protection, and it's easy, just a cookie flag:

https://scotthelme.co.uk/csrf-is-dead/

But I didn't know about the Sec-Fetch-Site header, good to know.
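
For reference, the flag itself really is a one-liner; here's a minimal sketch of what it looks like in practice (assuming Flask; the route and cookie names are illustrative):

    # Minimal sketch of the "just a cookie flag" approach, assuming Flask.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        resp = make_response("ok")
        # SameSite=Lax: the browser omits this cookie on cross-site
        # subresource and form POST requests, which blocks classic CSRF.
        resp.set_cookie("session", "opaque-session-id",
                        samesite="Lax", secure=True, httponly=True)
        return resp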


SameSite doesn’t protect against same-site cross-origin requests, so you are staking your app’s security on the security of the marketing blog.

Thanks very much for your comment. I posted elsewhere that I felt like SameSite: Lax should be considered a primary defense, not just "Defense in depth" as OWASP calls it, but your rationale makes sense to me, while OWASP's does not.

That is, if you are using SameSite Lax and not performing state changes on GETs, there is no real attack vector, but like you say it means you need to be able to trust the security of all of your subdomains equally, which is rarely if ever the case.

I'm surprised browser vendors haven't thought of this. Like even SameSite: Strict will still send cookies when the request comes from a subdomain. Has there been any talk of adding something like a SameSite: SameOrigin value? It seems weird to me that the Sec-Fetch-Site header has clear delineations between site and origin, but the SameSite attribute does not.


Browser vendors have absolutely thought about this, at length.

The web platform is intricate, legacy, and critical. Websites by and large can't and don't break with browser updates, which makes all of this like operating on the engine in flight.

For example, click through some of the multiple iterations of the Schemeful Same Site proposal linked from my blog.

Thing is, SameSite’s primary goal was not CSRF prevention, it was privacy. CSRF is what Fetch metadata is for.


> Thing is, SameSite’s primary goal was not CSRF prevention, it was privacy.

That doesn't make any sense to me, can you explain? Cookies were only ever readable or writable by the site that created them, even before SameSite existed. Even with a CSRF vulnerability, the attacker could never read the response from the forged request. So it seems to me that SameSite fundamentally is more about preventing CSRF vulnerabilities - it actually doesn't do much (beyond that) in terms of privacy, unless I'm missing something.


What do you mean by same-site cross-origin requests?

See the same-site section of https://words.filippo.io/csrf/

Oh, thanks. I learned something new. I never knew that different subdomains are considered the same "site", but MDN confirms this[0]. This shows just how complex these matters are, imo; it's not surprising people make mistakes in configuring CSRF protection.

It's a pretty cool attack chain: if there's an XSS on marketing.example.com, it can be used to execute a CSRF on app.example.com! It could also be used with a dangling-subdomain takeover, or if there's open subdomain registration.

[0] https://developer.mozilla.org/en-US/docs/Glossary/Site


It's why I like Sec-Fetch-Site: the #1 risk is for the developer to make a mistake trying to configure something more complex. Sec-Fetch-Site delegates the complexity to the browser.
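
For example, the entire server-side check can look like this (a sketch assuming Flask; the exact allowlist is a policy decision, and older browsers omit the header entirely):

    # Hedged sketch: reject state-changing requests that the browser
    # itself marks as coming from another site. Names are illustrative.
    from flask import Flask, request, abort

    app = Flask(__name__)

    @app.before_request
    def reject_cross_site_writes():
        if request.method in ("GET", "HEAD", "OPTIONS"):
            return  # safe methods shouldn't change state anyway
        sec_fetch_site = request.headers.get("Sec-Fetch-Site", "")
        # "same-origin" and "none" (direct navigation) are fine; an
        # empty value means an old browser, so pick your own fallback.
        if sec_fetch_site not in ("", "same-origin", "none"):
            abort(403)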

It’s a real problem for defense sites because .mil is a public suffix so all navy.mil sites are the “same site” and all af.mil sites etc.

Yep, SameSite Lax, and just make sure you never perform any actions using GET requests, which you shouldn't anyway.

Unsubscribe links often need to be GET, or at least start as a GET

The list-unsubscribe header sends a POST. Probably makes more sense to just use a token from an email anyway.

The way the list-unsubscribe header works, it essentially must use a token when one-click unsubscribe (i.e. when the List-Unsubscribe-Post: List-Unsubscribe=One-Click header is also passed) is used, and since Gmail has required one-click unsubscribe for nearly 2 years now, my guess is all bulk mail senders support this. Relevant sections from the one-click unsubscribe RFC (a rough sketch of such headers follows after the quotes):

> The URI in the List-Unsubscribe header MUST contain enough information to identify the mail recipient and the list from which the recipient is to be removed, so that the unsubscription process can complete automatically. Since there is no provision for extra POST arguments, any information about the message or recipient is encoded in the URI. In particular, one-click has no way to ask the user what address or from what list the user wishes to unsubscribe.

> The POST request MUST NOT include cookies, HTTP authorization, or any other context information. The unsubscribe operation is logically unrelated to any previous web activity, and context information could inappropriately link the unsubscribe to previous activity.

> The URI SHOULD include an opaque identifier or another hard-to-forge component in addition to, or instead of, the plaintext names of the list and the subscriber. The server handling the unsubscription SHOULD verify that the opaque or hard-to-forge component is valid. This will deter attacks in which a malicious party sends spam with List-Unsubscribe links for a victim list, with the intention of causing list unsubscriptions from the victim list as a side effect of users reporting the spam, or where the attacker does POSTs directly to the mail sender's unsubscription server.

> The mail sender needs to provide the infrastructure to handle POST requests to the specified URI in the List-Unsubscribe header, and to handle the unsubscribe requests that its mail will provoke.
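
To make the quoted requirements concrete, here's an illustrative sketch of generating the two headers with a hard-to-forge token (Python; the secret, domain, and parameter names are made up, not from the RFC):

    # Sketch of RFC 8058 one-click unsubscribe headers. The HMAC binds
    # recipient and list together so the URI is hard to forge.
    import hashlib, hmac

    SECRET = b"server-side-secret"  # assumption: never sent in mail

    def unsubscribe_headers(recipient: str, list_id: str) -> dict:
        sig = hmac.new(SECRET, f"{recipient}|{list_id}".encode(),
                       hashlib.sha256).hexdigest()
        # All context lives in the URI: the POST carries no cookies.
        url = f"https://example.com/unsub?u={recipient}&l={list_id}&sig={sig}"
        return {
            "List-Unsubscribe": f"<{url}>",
            "List-Unsubscribe-Post": "List-Unsubscribe=One-Click",
        }

The server handling the POST recomputes the HMAC over u and l, and rejects the request if sig doesn't match.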


I was thinking more about the unsubscribe footer links still very common in emails.

I don’t think CSRF has anything to do with those?

The endpoints serving those links can't be protected as well, unless they serve a form that POSTs, which may not be legal if it requires extra clicks.

The OWASP CSRF prevention cheat sheet page does mention SameSite cookies, but they consider it defense in depth: https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re....

I don't understand the potential vulnerabilities listed at the linked section here: https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc...

They give 2 reasons why SameSite cookies are only considered defense in depth:

----

> Lax enforcement provides reasonable defense in depth against CSRF attacks that rely on unsafe HTTP methods (like "POST"), but does not offer a robust defense against CSRF as a general category of attack:

> 1. Attackers can still pop up new windows or trigger top-level navigations in order to create a "same-site" request (as described in section 2.1), which is only a speedbump along the road to exploitation.

> 2. Features like "<link rel='prerender'>" [prerendering] can be exploited to create "same-site" requests without the risk of user detection.

> When possible, developers should use a session management mechanism such as that described in Section 8.8.2 to mitigate the risk of CSRF more completely.

----

But that doesn't make any sense to me. I think "the robust solution" should be to just make sure that you're only performing potentially sensitive actions on POST or other mutable-method requests, and always setting the SameSite attribute. If that is true, there is absolutely no vulnerability if the user is using a browser from the past seven years or so. The 2 points noted in the above section would only lead to a vulnerability if you're performing a sensitive state-changing action on a GET. So rather than tell developers to implement a complicated "session management mechanism", it seems like it would make a lot more sense to just say don't perform sensitive state changes on a GET.

Am I missing something here? Do I not understand the potential attack vectors laid out in the 2 bullet points?
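
To spell the argument out (an illustrative sketch, assuming Flask): both IETF caveats, forged top-level navigations and prerenders, produce GET requests, and a Lax cookie does accompany those. So as long as every state change sits behind POST, the forged requests can't do anything:

    # Sketch of "SameSite=Lax plus no state changes on GET".
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/account", methods=["GET"])
    def view_account():
        # Read-only: a forged top-level GET carries the Lax cookie,
        # but the attacker can't read the response anyway.
        return "account page"

    @app.route("/account/delete", methods=["POST"])
    def delete_account():
        # A cross-site form POST arrives WITHOUT the Lax cookie, so
        # this session check fails before anything is deleted.
        if "session" not in request.cookies:
            return ("no session", 403)
        return "deleted"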


Because of clientside Javascript CSRF, which is not a common condition.

What do you mean by clientside Javascript CSRF?

Client side js is not particularly relevant to csrf.

I mostly agree, but that's the logic OWASP uses to argue you should still be doing explicit tokens even if you're using SameSite and Sec-Fetch.

But that's not what OWASP argues. Fetch Metadata is recommended as a primary, standalone defense against CSRF (you can be forgiven for not knowing this - I worked on getting the doc updated and it landed a couple weeks ago, then was reverted erroneously, and fixed yesterday).

I’m confused, how does this prevent a CSRF attack?

SameSite or not is inconsequential to the check a backend does for a CSRF token in the POST.


The only reason CSRF is even possible is because the browser sends (or, well, used to send) cookies for a particular request even if that request originated from a different site. If the browser never did that (and most people would argue that's a design flaw from the get-go) CSRF attacks wouldn't even be possible. The SameSite attribute makes it so that cookies will only be sent if the request originates from the same site that originally wrote the cookie.

I think I understand now: the cookie just isn't present in the POST if a user clicked on, for example, a maliciously crafted post from a different origin?

Exactly.

Never needed CSRF protection and assumed that cookies were always SameSite, but I can see that it was introduced in 2016. I've just had the site name put into the value of the cookie, and never really needed to think about it.

Just feels like all these HTTP specs are super duct-taped together. I guess that's the only way to ensure mass adoption for new devs, and now vibe coders.


I'm not sure I'm understanding your solution

If the domain name is in the cookie value then that can't be used when submitting another request from another domain. Yes, you can configure the DNS to bypass that, but at that point it is also pointless for CSRF.

Not to be rude, but from your comments you don't appear to understand what the CSRF vulnerability actually is, nor how attackers make use of it.

Cookies can still only be sent to the site that originally wrote them, and they can only be read by the originating site, and this was always the case. The problem, though, is that a Bad Guy site could submit a form post to Vulnerable Site, and originally the browser would still send any cookies of Vulnerable Site with the request. Your comment about "if the domain name is in the cookie value" doesn't change this and the problem still exists. "Yes you can configure the dns to bypass that" also doesn't make any sense in this context. The issue is that if a user is logged into Vulnerable Site, and can be somehow convinced to visit Bad Guy site, then Bad Guy site can then take an action as the logged user of Vulnerable Site, without the user's consent.


Given what was written, I'm not quite sure the author does either.

> Just had the sitename put into the value of the cookie since, and never really needed to think about that.

How would that help? This doesn't seem like a solution to the CSRF problem


No? The whole point of SameSite=(!none) is to prevent requests from unexpectedly carrying cookies, which is how CSRF attacks work.

What does this even mean?

I'm not trying to be rude, but what does it mean to unexpectedly carry cookies? That's not what I understand the risk of CSRF to be.

My understanding is that we want to ensure a POST came from our website, and we do so with a signed double-submit HMAC token that is present in the form AND the cookie, which is also tied to the session.

What on earth is unexpectedly carrying cookies?


The "unexpected" part is that the browser automatically fills some headers on behalf of the user, that the (malicious) origin server does not have access to. For most headers it's not a problem, but cookies are more sensitive.

The core idea behind the token-based defense is to prove that the origin server had access to the value in the first place such that it could have sent it if the browser didn't add it automatically.
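
A rough sketch of that idea (Python; key handling and names are illustrative, not a vetted implementation):

    # The server derives the token from the session and embeds it in
    # the form. A cross-site attacker can make the browser send the
    # cookie automatically, but can never produce this value.
    import hashlib, hmac, secrets

    KEY = secrets.token_bytes(32)  # demo signing key; persist it in real use

    def issue_csrf_token(session_id: str) -> str:
        return hmac.new(KEY, session_id.encode(),
                        hashlib.sha256).hexdigest()

    def check_csrf_token(session_id: str, submitted: str) -> bool:
        expected = issue_csrf_token(session_id)
        return hmac.compare_digest(expected, submitted)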

I tend to agree that the inclusion of cookies in cross-site requests is the wrong default. Using same-site fixes the problem at the root.

The general recommendation I saw is to have two cookies: one without SameSite for read operations, which allows you to gracefully handle users navigating to your site, and a second SameSite cookie for state-changing operations.
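
Something like this, as a sketch (assuming Flask; the cookie names are made up):

    # Two-cookie setup: a cross-site-readable session for reads, and a
    # Strict session that only state-changing handlers require.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        resp = make_response("ok")
        # Sent on cross-site navigation, so the user lands logged in.
        # SameSite=None requires Secure.
        resp.set_cookie("session_read", "opaque-id",
                        samesite="None", secure=True, httponly=True)
        # Never sent cross-site; check this one before any mutation.
        resp.set_cookie("session_write", "opaque-id",
                        samesite="Strict", secure=True, httponly=True)
        return resp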


This is "not allowing cross site at all" so, technically it's not "request forgery" protection. Yes, this is very semantic, but, CSRF is a vulnerability introduced by enabling CS and CORS. So, technically, same-site cookies are not "protection" against CSRF.

I don't understand your distinction at all. I may not quite grok your meaning here, but CORS is usually discussed in the context of allowing cross-origin AJAX calls.

But cross-origin form posts are, and have always been, permitted, and are the main route by which CSRF vulnerabilities arise. Nothing on the client or server needs to be enabled to allow these form posts.

Furthermore, the approach detailed in the article simply has the server block requests if they are cross site/origin requests, so I'm not sure what the semantic difference is.


Yeah, CORS is not a safety mechanism. It’s a procedure of loosening the default safety mechanism of not sharing any response data from a cross site request with client side JavaScript.

Cross-site requests and CORS have nothing to do with CSRF... though, yes, neither does SameSite.

I don't know why I said same-site cookies have nothing to do with CSRF. They can be helpful as defense in depth, but not as a primary defense.

I haven't seen any proposed attack vectors where they are an insufficient primary defense when using SameSite Lax, as long as you don't do any sensitive state-change operations on non-mutative methods like GET.

I feel like people are just parroting the OWASP "they're just defense in depth!" line without understanding what the actual underlying vulnerabilities are, namely:

1. If you're performing a sensitive operation on a GET, you're in trouble. But I think that is a bigger problem and you shouldn't do that.

2. A user could be on a particularly old browser, but SameSite support has been in all major browsers for nearly a decade now, so I think that point is moot.

The problem I have with the "it's just defense in depth" line is people don't really understand how it protects against any underlying vulnerabilities. In that case, CSRF tokens add complexity without actually making you any safer.

I'd be happy to learn why my thinking is incorrect, i.e. where there's a vulnerability lurking that I'm not thinking of if you use SameSite Lax and only perform state changes on mutable methods.


As I said in another comment, I'm against immortality because old people need to make way for new generations. But this comment is cute. I like the idea that we'd be there, able to see how people are doing, but not influencing the world anymore. Though I could also imagine it becoming depressing at some point, in bad times when there's nothing you can do, or boring after tens of thousands of years of repetition. I can also imagine some bad spirits trying to break out and influence worldly affairs.


> old people need to make way for new generations

The main problem with extended lifespan will not be that some people will amass extreme wealth and power while living centuries, and they'll oppress the younger generations, who will not have a fair chance in life.

The much more likely problem will be that old people will not adjust to the new technologies. Lots of them will be victims of "pig butchering" schemes. Or they'll simply be illiterate in the new ways of life. If medicine makes tremendous progress, we might end up with a good chunk of our society being elderly, healthy, but socially unadjusted and estranged. Especially with more and more people being childless. Imagine someone who is 110 years old, with no living relatives, secluded in a nursing home, not knowing how to use the internet, or whatever the equivalent of that will be at that point in time.

These people deserve pity. But do they need to "make way for new generations"? That feels a bit eugenic to me.


I'm not sure why people have it in their heads that this "making way" requires one to be cast into the formless void instead of, like, a gated community.


I do think we're significantly more likely to solve immortality than the problem of getting old rich powerful people to relinquish their grip on wealth and power.


> the problem of getting old rich powerful people to relinquish their grip on wealth and power

This is a solved problem, guillotines worked wonders for this back in the day.


Exactly... Nothing can stop the masses. Plus, we have laws that can change and adapt.


Maybe we could set it up so the “spirits” can just talk to the “living” when the latter start the conversation. That seems like a reasonable way of setting things up.

It’s all a bit fanciful of course—we’d basically be setting up an emulation of various spiritual beliefs, and there’s no reason to believe anybody would go along with the constraints. But it is fun to think about.


Impossible to know if there is something like Sheol after death, so we thought, "why not make our own eternal emptiness?"


Not the argument I expected. I'm also against people living forever, but more because it's a way for society to go forward and get rid of old ways of thinking. There's a saying that science advances one death at a time. And can you imagine a world where current leaders are still in power 1000 years later? Or where the leaders of 1000 years ago were still in charge? Whenever I hear people talk about living forever I think of how it'd be something tech billionaires and autocrats would use to oppress us forever. No thanks.


> I'm also against people living forever, but more because it's a way for society to go forward and get rid of old ways of thinking.

Well, I'd like to get rid of the old way of thinking that death is good :p

> And can you imagine a world where current leaders are still in power 1000 years later?

Leaders generally don't rule for life in functioning countries, and the mortality of individual Kims has not helped the people of North Korea.

> I think of how it'd be something tech billionaires and autocrats would use to oppress us forever.

How are these people currently oppressing you, and how would the existence of longevity treatments make that worse?


> Leaders generally don't rule for life in functioning countries, and the mortality of individual Kims has not helped the people of North Korea.

I guess you'd say most people in the world don't live in functioning countries then? China, Russia, and much of the Middle East and Africa are not democratic, and sometimes the death of a dictator is the only way to move them forward. The USA and many democracies in the West are also backsliding, so maybe soon few people will live in a "functioning country".

Counterpoint on Kim: The death of Stalin or Mao Zedong released a death grip on their respective countries. You can't ignore that getting rid of natural death would make individual centralization of power a worse problem.

>How are these people currently oppressing you, and how would the existence of longevity treatments make that worse?

Just one example: Trump using sanctions to block the ICC from doing its job (and thus letting people in Gaza die and blocking steps toward justice against Israel). The fact is that the centralization of power into individual hands in modern times is already unprecedented. Old people are already ruling the world, and they'd do everything to rule it forever.


Even experts create C/C++ code that is routinely exploited in the wild (see: Pegasus malware, Zerodium, Windows zero-days, Chrome zero-days, etc.). No, please don't vibe code anything security critical, and please don't create unnecessary security risk by writing it in unsafe languages such as C/C++. The only advantage I can see is that it creates some fun easy targets for beginning exploit developers. But that's not an advantage for you.


So the AI basically hallucinates a webapp?

I guess any user can just request something like /api/getdatabase/dumppasswords and it will give them the passwords?

or /webapp?html=<script>alert()</script> and run arbitrary JS?

I'm surprised nobody mentioned that security is a big reason not to do anything like this.


Jimmy Maher really is a great history writer. The way he writes is very compelling. He made a whole history of Windows, which I somehow read through completely[0].

I can also recommend his other site, the Analog Antiquarian[1], where he writes about broader history. His ongoing Magellan series is really amazing; it makes you feel like you're experiencing the epic voyage through South America and South East Asia.

[0] https://www.filfre.net/2018/06/doing-windows-part-1-ms-dos-a...

[1] https://analog-antiquarian.net/


Jimmy Maher has that rare mix of deep research and genuinely engaging storytelling. He somehow makes technical or historical rabbit holes feel like page-turners.


He really is great, and I'm glad to see him writing "analog" history as well as digital. Excellent work for a guy who's essentially a hobbyist.


I wish we could just let software "die", i.e. be stable without constant updates. For software that doesn't have a significant (security) attack surface it'd be amazing. But because of the bitrot of constantly changing underlying APIs and platforms, if you find some Python script that hasn't been updated for a few years it'll often already be broken in some horrible ways, due to its dependencies changing and no longer being compatible with current library versions.

Think of how much time is wasted because so much software that's been written but not maintained can't be used, because of how libraries have "evolved" since then.
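
One partial mitigation, as a sketch (assuming a Python script and a PEP 723-aware runner such as uv; the dependency and version are just examples): pin exact versions next to the script itself, so the same environment can be rebuilt years later.

    # /// script
    # requires-python = ">=3.9"
    # dependencies = [
    #     "requests==2.31.0",  # exact pin; illustrative version
    # ]
    # ///
    import requests

    # With `uv run script.py`, the pinned environment is recreated,
    # sidestepping whatever requests looks like five years from now.
    print(requests.get("https://example.com").status_code)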


The Python 2-3 transition and some developments after made it definitely not so boring for me, but hopefully it'll be more stable in the coming decades ;).

"Cannot live without" is a strong wording, but software that I use a lot and that's mature/stable in my experience: shell (zsh, bash, sh), GNU utils, vim, nmap, xfce, git, ssh, mpv, Xorg, curl, and lots of little old CLI tools.

