Hacker News | silon5's comments

IMO, GPL2 has all the stuff against tivoization already there (preferred form for modification -- if I can't modify it for actual hardware, it's not enough).


Then why can you not modify your TiVo?

GPLv2 was written before that problem existed, so TiVo found a way to prevent practical modification even though they followed the letter of the license.

In my eyes, most, if not all, open source software should use the AGPL, dual-licensed with a commercial offering for those who want to buy it for proprietary modification. You should contribute or pay up; otherwise the tragedy of the commons will occur.


AGPL is a market failure as I see it. I understand and sympathize with what that license is trying to do, but in practice it just means that many companies won't touch that software (or will only touch it in a fashion where they don't modify that part of the system). That means far fewer adopters overall, and of those that adopt, fewer modify the software, so it evolves more slowly than products with more widely used licenses.


The biggest problem seems to be the somewhat unclear rules about where it stops, especially when it comes to web applications (templates, linked assets, ...).

There are surprisingly few trustworthy comments on that out there; most of what you find is people going "I think XXX, but IANAL" on Stack Overflow.


Honestly, I'd sooner prefer to see everything licensed under BSD / MIT (Expat, X11) / ISC / etc. "copycenter" / "copyfree" licenses, for the simple reason that very few people in their right mind would use the AGPL at all (let alone in a project that doesn't involve writing network-facing software), and I'd rather see more software be compatible with as many free software licenses as possible. Aside from public domain, such non-copyleft licenses are a dream for writing free software, since the license doesn't get in the way of using such code in, say, Apache'd or GPL'd or MPL'd or whatever-L'd code.

(A)GPL, in other words, should be reserved for things that aren't meant to be reusable by other codebases. For everything that should be reusable, LGPL is about the limit for something being usable (and even that can be difficult to work with).


If GPLv2 were sufficient against Tivoization, then TiVo could have been sued for violating it.

It wasn't, and so provisions were added to GPLv3 to make sure that if Tivoization ever happened with GPLv3 code, they could sue for that.


Because they are enums done right. Enums in C and many other languages are often just helpers to define integer constants (or are abused that way) and can be evil.
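The "just integer constants" failure mode can be shown even in Python; a minimal sketch contrasting bare named integers (the C-style pattern) with the standard enum module, where members are distinct types rather than ints:

```python
from enum import Enum

# C-style: just named integers -- nothing stops mixing unrelated constants.
RED, GREEN = 0, 1
MONDAY, TUESDAY = 0, 1
print(RED == MONDAY)   # True: accidental equality across unrelated concepts

# "Enums done right": members carry their type, not just a number.
class Color(Enum):
    RED = 0
    GREEN = 1

class Day(Enum):
    MONDAY = 0
    TUESDAY = 1

print(Color.RED == Day.MONDAY)  # False: different enum types never compare equal
print(Color.RED == 0)           # False: no implicit conversion to int
```

The same distinction is what richer enums (Rust, Swift, etc.) enforce at compile time.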


IMO, before HTTP is deprecated, we need support for publishing public keys in DNS, bypassing the CA system. It would possibly be a lower level of security than a CA cert, but would be good enough for many sites.
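DANE (RFC 6698) is one existing proposal along these lines: a TLSA record in DNS pins a hash of the server's certificate or key. A minimal sketch of computing the matching data for such a record (the key bytes here are a hypothetical stand-in):

```python
import hashlib

def tlsa_matching_data(spki_der: bytes) -> str:
    """Matching type 1 (SHA-256) digest over the key bytes -- the hex field
    in a record like: _443._tcp.example.com. TLSA 3 1 1 <hex>"""
    return hashlib.sha256(spki_der).hexdigest()

# Illustrative only; a real deployment hashes the server's actual
# SubjectPublicKeyInfo in DER form.
print(tlsa_matching_data(b"example-spki-der-bytes"))
```

A validating client would fetch the TLSA record (ideally over DNSSEC) and compare this digest against the key presented in the TLS handshake.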


That's kind of the issue. There are basically two circumstances where I want to connect to a remote site:

1. I don't care who they are, I just want to read their content (any site I'm not going to log into, e.g. blog posts, etc)

2. I care who they are, I need to know they're them (banks, HN, Twitter, etc.)

The current CA system provides the second, but fundamentally it would be nice if, in the absence of a CA-verified certificate, the server and browser would just encrypt the connection anyway.


TLS doesn't require a CA. Browsers just decided it does, so they reject any such HTTPS connections (anonymous DH and anonymous ECDH cipher suites).
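Outside the browser, "encrypt anyway, without identity verification" can be approximated by disabling certificate checks; a sketch with Python's ssl module (this still encrypts the connection, but authenticates nothing, so it only defends against passive eavesdropping):

```python
import ssl

def opportunistic_context() -> ssl.SSLContext:
    """TLS context that encrypts but skips CA verification --
    roughly what opportunistic encryption without a CA cert means."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False           # must be cleared before verify_mode
    ctx.verify_mode = ssl.CERT_NONE      # accept any certificate
    return ctx

ctx = opportunistic_context()
```

Note the ordering: with `PROTOCOL_TLS_CLIENT`, hostname checking defaults to on and must be disabled before `verify_mode` can be set to `CERT_NONE`.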


Strongly agreed especially with your last sentence.


DRM = SkyNet


I've noticed Firefox sometimes starts busy-looping on 2 cores while playing YouTube (usually when "buffering"). IMO, they should really move the decoding threads into separate processes so they can be restarted easily (just like Flash was).


Or losing data on a crash because the load/save cycle is untested.


But it has to scan/mark all the live objects, not just the unused ones. This has large overhead in big-memory apps. Not to mention that it usually forces the whole app into RAM (making swap useless) and also potentially drains CPU caches. And to amortize this, it often has more memory overhead (up to 2x) than the typical fragmentation in a manually managed app.


It seems like virtualization is about to become useless, and CPU-emulating + JIT-compiling systems are the future, because SGX can't be virtualized.


More likely that hypervisors will be refactored to take advantage of the additional protection level.

See PrivateCore (now Facebook), http://en.wikipedia.org/wiki/PrivateCore & http://security.stackexchange.com/questions/53165/is-it-poss...


My HTC Hero was pre-bent and more durable than any plastic Nokia... I'd love a 40%-magnified version of it; it'd last a week too.

