
The lesson for any programmers reading this is to always set an upper limit on how much data you accept from someone else. Every request should have both a timeout and a limit on the amount of data it will consume.
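
For example, in Go that might look something like this (a rough sketch; the function name, limits, and numbers are just illustrative):

    package fetch

    import (
        "errors"
        "io"
        "net/http"
        "time"
    )

    // fetchLimited performs one request with both a deadline and a byte cap.
    func fetchLimited(url string, maxBody int64) ([]byte, error) {
        client := &http.Client{Timeout: 10 * time.Second} // whole-request timeout

        resp, err := client.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        // Read at most maxBody+1 bytes so we can tell "exactly at the limit"
        // apart from "over the limit".
        body, err := io.ReadAll(io.LimitReader(resp.Body, maxBody+1))
        if err != nil {
            return nil, err
        }
        if int64(len(body)) > maxBody {
            return nil, errors.New("response body exceeded size limit")
        }
        return body, nil
    }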


As a former boss used to say: "Unlimited is a bad idea."


That doesn't necessarily need to be in the request itself.

You can also limit the wider process or system your request is part of.
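
E.g., if the service happens to be written in Go, one knob for that (besides OS-level mechanisms like cgroups or ulimits) is the runtime's soft memory limit. A sketch, with a made-up number:

    package main

    import "runtime/debug"

    func main() {
        // Soft cap on the Go heap (Go 1.19+): the runtime garbage-collects more
        // aggressively to stay under ~512 MiB rather than growing without bound.
        // The same can be set externally via the GOMEMLIMIT env var, or enforced
        // as a hard limit at the OS level with cgroups/ulimit.
        debug.SetMemoryLimit(512 << 20)

        // ... start accepting requests ...
    }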


While that is true, I recommend putting it on the request anyway, because it makes it abundantly clear to the programmer that requests can fail, and that failure needs to be handled somehow – even if it's by killing and restarting the process.


I second this: depending on the context, there might be a more graceful way of handling a response that's too long than crashing the process.


Though the issue with ‘too many bytes’ limits is that they tend to cause outages later, once time has passed and whatever the common size used to be now looks ‘tiny’ – e.g. if you’re dealing with images, etc.

Time limits tend to also de facto limit size, if bandwidth is somewhat constrained.
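
To put rough, made-up numbers on that: with a 30-second timeout on a link doing about 100 Mbit/s (≈12.5 MB/s), a single response can't get much past 12.5 MB/s × 30 s ≈ 375 MB before the timeout fires, whatever the byte limit says.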


Deliberately denying service in one user flow because technology has evolved is much better than accidentally denying service to everyone because some part of the system misbehaved.

Timeouts and size limits are trivial to update as legitimate need is discovered.


Oh man, I wish I could share some outage postmortems with you.

Practically speaking, putting an arbitrary size limit somewhere is like putting yet-another-ssl-cert-that-needs-to-be-renewed in some critical system. It will eventually cause an outage you aren’t expecting.

Will there be a plausible someone to blame? Of course. Realistically, it was also inevitable someone would forget and run right into it.

Time limits tend to not have this issue, for various reasons.


> Practically speaking, putting an arbitrary size limit somewhere is like putting yet-another-ssl-cert-that-needs-to-be-renewed in some critical system. It will eventually cause an outage you aren’t expecting.

No, not at all. A TLS cert that expires takes the whole thing down for everyone. A size limit takes one operation down for one user.


But not putting in the limits leaves the door open to a different class of outages in the form of buffer overflows, which can additionally pose a security risk, since they could be exploitable by an attacker. Maybe this issue would be better solved at the protocol level, but in the meantime a size limit it is.


Nah, just OOM. Yes, there does need to be a limit somewhere - it just doesn’t need to be arbitrary: base it on some processing limit, and ideally let it adapt as, say, the memory footprint gets larger.
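
A sketch of what "based on a processing limit rather than arbitrary" could mean in Go – every name and number here is invented:

    package limits

    // Derive the per-request body cap from a process-wide memory budget and the
    // number of requests we're willing to handle concurrently, rather than
    // hard-coding a magic number. A fancier version would recompute this from
    // the runtime's current memory footprint.
    const (
        memoryBudget  = 2 << 30 // ~2 GiB the service may spend on request bodies
        maxConcurrent = 256     // requests admitted at once
    )

    const MaxBodyPerRequest = memoryBudget / maxConcurrent // 8 MiB each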


> putting yet-another-ssl-cert-that-needs-to-be-renewed in some critical system

I found a fix for this some years back:

    openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 36500


That's a lead-in to one of my testing strategies: randomly set the timeouts too short, the buffer sizes too small. Use that to make errors happen and see what the system does. Does it hiccup and keep going, or does it fall on its face?
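
Something like this, perhaps – a sketch of that fault-injection idea in Go, where the env var name and the odds are made up:

    package chaos

    import (
        "math/rand"
        "os"
        "time"
    )

    // Timeout normally just returns the configured value, but with chaos mode
    // switched on it sometimes returns an absurdly short one, so the error
    // handling around timeouts actually gets exercised in testing.
    func Timeout(normal time.Duration) time.Duration {
        if os.Getenv("CHAOS_TIMEOUTS") == "1" && rand.Intn(4) == 0 {
            return time.Millisecond
        }
        return normal
    }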


Then you kill your service, which might also be serving legitimate users.


It depends on how you set things up.

E.g. if you fork for every request, that process only serves that one user. Or if you can restart fast enough.

I'm mostly inspired by Erlang here.


Forking for every request isn't going to make a fast server.



