jdhwosnhw's comments | Hacker News

A slightly stronger (and more relevant) statement is that the number of mutually nearly orthogonal vectors you can simultaneously pack into an N dimensional space is exponential in N. Here “mutually nearly orthogonal” can be formally defined as: choose some threshold epsilon > 0 - a set S of unit vectors is mutually nearly orthogonal if the maximum absolute value of the pairwise dot products between all members of S is less than epsilon. Amazingly, the exponential growth of the size of this set with N holds for any choice of epsilon (although the rate of growth does obviously depend on that value).

This is pretty unintuitive for us 3D beings.
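
As a rough illustration (just a quick sketch assuming numpy, not a proof of the exponential packing claim), sampling random unit vectors shows how much “nearly orthogonal” room opens up as the dimension grows:

    import numpy as np

    rng = np.random.default_rng(0)

    def max_abs_dot(n_dims, n_vectors=200):
        # Sample i.i.d. Gaussian entries and normalize each row to unit length.
        v = rng.standard_normal((n_vectors, n_dims))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        g = np.abs(v @ v.T)          # all pairwise |dot products|
        np.fill_diagonal(g, 0.0)     # ignore each vector's product with itself
        return g.max()

    for d in (3, 30, 300, 3000):
        print(d, round(float(max_abs_dot(d)), 3))
    # The largest pairwise |dot product| among 200 random unit vectors
    # shrinks steadily as the dimension grows.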


That would also be an incorrect phrasing. This entire thread is a good illustration of the difficulty of speaking precisely about probabilistic concepts.

(The number of successes has zero uncertainty. If you flip a coin 10 times and get 5 heads, there is no uncertainty on the number of heads. In general, for any statistical model the uncertainty is only with respect to an underlying model parameter - in this example, while your number of successes is perfectly known, it can be used to infer a probability of success, p, of 0.5, and there is uncertainty associated with that inferred probability.)
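
To make the distinction concrete, here is a minimal sketch (assuming numpy; the normal-approximation interval is just for illustration) - the head count enters as a fixed number, and all of the uncertainty shows up in the inferred p:

    import numpy as np
    from math import comb

    k, n = 5, 10                                # observed heads and flips: known exactly
    p_grid = np.linspace(0.001, 0.999, 999)
    likelihood = comb(n, k) * p_grid**k * (1 - p_grid)**(n - k)

    p_hat = p_grid[np.argmax(likelihood)]       # maximum-likelihood estimate of p
    se = np.sqrt(p_hat * (1 - p_hat) / n)       # normal-approximation standard error
    print(f"p_hat = {p_hat:.2f}, rough 95% CI: "
          f"({p_hat - 1.96*se:.2f}, {p_hat + 1.96*se:.2f})")
    # The data (5 of 10) never changes; the interval describes the parameter p.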


> How much of a chance do you think we have of meaningfully changing a government, if they can guess with 80% degree accuracy how everyone voted, based on their chats and social networks

This doesn’t really detract from your overall point, but you may be underestimating how easy it already is for the government to tell how you will vote, without any use of networking information. Just knowing someone’s educational level and zip code is enough to guess their voting preferences to a high degree of accuracy (the latter component being the reason gerrymandering is so effective).


Unfortunately I’m in the same boat. What appears especially telling is:

> tVNS applied for 30 min daily over 7 consecutive days increased VO2peak by 1.04 mL/kg/min (*95% CI: .34–1.73*; P = .005), compared with no change after sham stimulation (−0.54 mL/kg/min; *95% CI: −1.52 to .45*)

(emphasis mine) The 95% CIs for the treatment and sham groups overlap. It seems borderline irresponsible for the abstract to report this as a significant result.


I’m not sure what you mean by “higher population”, but FYI, the required number of samples depends on the full shape of the underlying distribution. For instance, the Berry-Esseen inequality bounds the convergence rate in terms of the second and third absolute central moments of the underlying distribution. But the point is that the convergence rate to a Gaussian can be arbitrarily slow!

https://en.m.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem
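
As a rough sketch of how bad it can get (the 0.4748 constant is one published value for the i.i.d. bound; the exact number doesn’t matter for the point), take a Bernoulli(p) with small p and ask how large n must be before the Berry-Esseen bound itself drops below 0.01:

    import math

    C = 0.4748    # one published admissible constant for the i.i.d. Berry-Esseen bound

    def n_needed(p, target=0.01):
        # Bernoulli(p): sigma^2 = p(1-p), rho = E|X - p|^3 = p(1-p)(p^2 + (1-p)^2)
        sigma = math.sqrt(p * (1 - p))
        rho = p * (1 - p) * (p**2 + (1 - p)**2)
        # Berry-Esseen: sup-distance to the normal <= C * rho / (sigma^3 * sqrt(n)).
        # Solve for the n that drives this bound below `target`.
        return math.ceil((C * rho / (sigma**3 * target))**2)

    for p in (0.5, 0.1, 0.01, 0.001):
        print(f"p = {p}: need n >= {n_needed(p):,}")
    # As p shrinks, the n required just to satisfy the bound grows without limit.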


I’ve heard this statistic before and it always strikes me as basically a non sequitur. You’re presenting two percentages as if they are meaningful with respect to one another, but they aren’t.

If we as a society agree that some sort of progressive tax system is good (based on the fact that the mere act of survival comes with fixed costs that naturally weigh more heavily on low-wealth holders than on high-wealth holders), then we presumably expect higher-wealth people to shoulder a larger share of the cost of maintaining society, relative to their wealth.

The top 1% hold >30% of all wealth in the US, which, by the logic I described above, makes your 40% figure sound not just not exorbitant, but possibly too low.

https://www.federalreserve.gov/releases/z1/dataviz/dfa/distr...


I would still say it’s completely wrong, given that this explanation makes explicit, falsifiable predictions, e.g., that airplanes could not fly upside down (they can!).


> Yep, for me it confirms all the reasons why I think python is slow

Yes, that is literally the explicit point of the talk. The first myth it addresses is “Python is not slow.”


I thoroughly disagree with this sentiment.

In my experience, the most helpful approach to performing RCA on complicated systems involves several hours, if not days, of hypothesizing and modeling prior to the test(s). The hypothesis guides the tests, and without a fully formed conjecture you’re practically guaranteed to fit your hypothesis to the data ex post facto. Not to mention that in complex systems there are usually ten benign things wrong for every one real issue you might find - without a clear hypothesis, it’s easy to go chasing down rabbit holes with your testing.


That's a valid point. What I originally meant to convey is that when issues arise, people often assign blame and point fingers without any evidence, just based on their own feelings. It is important to gather objective evidence to support any claims. Sounds somewhat obvious but in my career I have found that people are very quick to blame issues on others when 15 minutes of testing would have gotten to the truth.


Very reasonable, I fully agree on that front


I think the GP is in a different world than you.

If you can grab an oscilloscope and gather meaningful data in 15 minutes, why would you spend several hours hypothesizing and modeling?

If you can't, then spending several hours or days modeling and hypothesizing is better than just guessing.

So I think that data beats informed opinions, but informed opinions beat pure guesses.


I agree with both of you. I think it’s really a hybrid, and a spectrum of how much of each you do first.

When you test part of the circuit with the scope, you are using prior knowledge to determine which tool to use and where to test. You don’t just take measurements blindly. You could test a totally different part of the system because there might be some crazy coupling but you don’t. In this system it seems like taking the measurement is really cheap and a quick analysis about what to measure is likely to give relevant results.

In a different system it could be that measurements are expensive and it’s easy to measure something irrelevant. So there it’s worth doing more analysis before measurements.

I think both cases fight what I’ve heard called intellectual laziness. It’s sometimes hard to make yourself be intellectually honest and do the proper unbiased analysis and measuring for RCA. It’s also really easy to sit around and conjecture compared to taking the time to measure. It’s really easy for your brain to say “oh it’s always caused by this thing cuz it’s junk” and move on because you want to be done with it. Is this really the cause? Could there be nothing else causing it? Would you investigate this more if other people’s lives depended on this?

I learned about this model of viewing RCA from people who work on safety critical systems. It takes a lot of energy and time to be thorough and your brain will use shortcuts and confirmation bias. I ask myself if I’m being lazy because I want a certain answer. Can I be more thorough? Is there a measurement I know will be annoying so I’m avoiding it?


Looks like your personal website isn’t set up yet; it’s giving a Wix error.

