As a former LLVM developer and reviewer, I want to say:
1. Good for you.
2. Ignore the haters in the comments.
> my latest PR is my second-ever to LLVM and is an entire linter check.
That is so awesome.
> The code is of terrible quality and I am at 100+ comments on my latest PR.
The LLVM reviewers are big kids. They know how to ignore a PR if they don't want to review it. Don't feel bad about wasting people's time. They'll let you know.
You might be surprised how many PRs even pre-LLMs had 100+ comments. There's a lot to learn. You clearly want to learn, so you'll get there and will soon be offering a net-positive contribution to this community (or the next one you join), if you aren't already.
Wait and see, then change the policy based on what actually happens.
I sort of doubt that all of a sudden there's going to be tons of people wanting to make complex AI contributions to LLVM, but if there are, just ban them at that point.
fastmath is absolutely not the default on any GPU compiler I have worked with (including the one I wrote).
If you want fast sqrt (or more generally, if you care at all about not getting garbage), I would recommend using an explicit approx sqrt function in your programming language rather than turning on fastmath.
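To illustrate the kind of explicit, opt-in approximation being recommended here (as opposed to a global fastmath flag), here is a minimal sketch in Python of the classic bit-trick approximate reciprocal square root. This is purely illustrative and not from the comment itself; in actual GPU languages you would reach for a named intrinsic instead, e.g. OpenCL's `native_sqrt`/`half_sqrt`.

```python
import struct

def approx_rsqrt(x: float) -> float:
    """Approximate 1/sqrt(x) via the classic bit-level trick
    plus one Newton-Raphson refinement step (illustrative only)."""
    # Reinterpret the float32 bits of x as an unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # Bit-level initial guess (the well-known magic constant).
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton step: y <- y * (3/2 - x/2 * y^2).
    y = y * (1.5 - 0.5 * x * y * y)
    return y

def approx_sqrt(x: float) -> float:
    # sqrt(x) = x * (1/sqrt(x)); relative error well under 1%.
    return x * approx_rsqrt(x)
```

The point is that the caller opts into the reduced precision at a specific call site, rather than letting a compiler flag silently degrade every floating-point operation in the translation unit.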
> [Trawling around online for information] trained and sharpened invaluable skills involving critical thinking and grit.
Here's what Socrates had to say about the invention of writing.
> "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
I mean, he wasn't wrong! But nonetheless I think most of us communicating on an online forum would probably prefer not to go back to a world without writing. :)
You could say similar things about the internet (getting your ass to the library taught the importance of learning), calculators (you'll be worse at doing arithmetic in your head), pencil erasers (https://www.theguardian.com/commentisfree/2015/may/28/pencil...), you name it.
>I mean, he wasn't wrong! But nonetheless I think most of us communicating on an online forum would probably prefer not to go back to a world without writing. :)
What social value is an AI chatbot giving to us here, though?
>You could say similar things about the internet (getting your ass to the library taught the importance of learning)
Yes, and as we speak countries are determining how to handle the advent of social media as this centralized means of propaganda, abuse vector, and general way to disconnect local communities. It clearly has a different magnitude of impact than etching on a stone tablet. The UK made a particularly controversial decision recently.
I see AI more in that camp than in the one of pencil erasers.
> Its too shallow. The deeper I go, the less it seems to be useful. This happens quick for me.
You must be using a free model like GPT-4o (or the equivalent from another provider)?
I find that o3 is consistently able to go deeper than me in anything I'm a nonexpert in, and usually can keep up with me in those areas where I am an expert.
If that's not the case for you I'd be very curious to see a full conversation transcript (in chatgpt you can share these directly from the UI).
I have access to the highest tier paid versions of ChatGPT and Google Gemini, I've tried different models, tuning things like size of context windows etc.
I know the model tier has nothing to do with it. I simply hit a wall eventually.
I unfortunately am not at liberty to share the chats though. They're work related (I very recently ended up at a place where we do thorny research).
A simple one, though, is researching Israel-Palestine relations since 1948. It starts off okay (usually), but it eventually goes off the rails with bad sourcing, fictitious sourcing, and/or hallucinations. Sometimes I hit a wall where it repeats itself over and over, and I suspect it's because the information is simply not captured by the model.
FWIW, if these models had live & historic access to Reuters and Bloomberg terminals, I think they might be better at a range of tasks I currently find them inadequate for.
> I unfortunately am not at liberty to share the chats though.
I have bad news for you. If you shared it with ChatGPT (which you most likely did), then whatever it is that you are trying to keep hidden or private is not actually hidden or private anymore: it is stored on their servers, and their models will most likely be trained on that chat. Use local models instead in such cases.
I think that leaks like this have negative information value to the public.
I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
The other side of it: some statements made internally can be really bad, but employees brush over them because they inherently trust the speaker to some degree, because they have additional material that better aligns with what they want to hear and latch onto that instead, and because current leaders' actions look fine enough to them that they see the bad parts as mere communication mishaps.
Worse: employees are often actively deceived by management. Their “close relationship” is akin to that of a farmer and his herd. Convinced they’re “on the inside” they’re often blind to the truth that’s obvious from the outside.
Or simply they don’t see the whole picture because they’re not customers or business partners.
I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”
Okay, but I've also heard insiders at companies I've worked for completely overlook obvious problems and cultural/management shortcomings. "Oh, we don't have a low-trust environment, it's just growing pains. Don't worry about what the CEO just said..."
Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.
Sneaky wording but seems like no, Sam only talked about "open weights" model so far, so most likely not "open source" by any existing definition of the word, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic given the whole "most HN posters are confidently wrong" part right before ;)
Although I do agree with you overall: many stories are sensationalized, partial stories always lack a lot of context, and plenty of HN users comment on stuff they don't actually know much about, but put it in a way that makes it seem like they do.
Open weights is unobjectionable. You do get a lot.
It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
> but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
Sure, that's cool and all, and I welcome it. But it's getting really tiresome seeing huge companies, which probably depend on actual FOSS, constantly get the terminology wrong. That devalues all the other FOSS work going on, since they want to ride that wave instead of just being honest about what they're putting out.
If Facebook et al. could release compiled binaries from closed source code but still call those binaries "open source", and call the whole of Facebook "open source" because of that, they would. But obviously everyone would push back, because that's not what we know open source to be.
Btw, you don't get to "run it as you like", give the license + acceptable use a read through, and then compare to what you're "allowed" to do compared to actual FOSS licenses.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers: the true story was that the lead developers knew of the issue, but it was not prioritised by management and was pushed down the backlog in favour of new (revenue-generating) features.
There is plenty of nuance to any situation that can't be known.
No idea if the real story here is better or worse than the public speculation though.
> I think that leaks like this have negative information value to the public.
To most people, I'd think this is mainly for entertainment purposes, i.e. 'palace intrigue', and the actual facts don't even matter.
> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
That's a good spin, but coming from someone with an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or whether they are who they say they are.)
> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
What conclusions, exactly? Again, do most people really care about this (reading the story), and does it impact them? My guess is it doesn't at all.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.
> That's a good spin, but coming from someone with an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or whether they are who they say they are.)
Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.
Just to give a specific example that suddenly comes to my mind: Grothendieck-style Algebraic Geometry is rather not prone to people confidently posting wrong stuff about on HN.
Generally (to abstract from this example [pun intended]): I guess topics that
- take an enormous amount of time to learn,
- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply
- where even a person with some intermediate knowledge of the topic can immediately detect whether you use the "'grammar' of the 'technical language'" very wrongly
are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I make comparisons to (natural) languages: it is not easy to bullshit in a live interview that you know some natural language well if the counterpart has at least some basic knowledge of this natural language.
I think it's more the site's architecture that promotes this behavior.
In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it. Instead they seem to promote and validate it. It feeds the self esteem of these people.
It's hard to have an informed opinion on Algebraic Geometry (requires expertise) and not many people are going to upvote and engage with you about it either. It's a lot easier to have an opinion on tech execs, current events, and tech gossip. Moreover you're much more likely to get replies, upvotes, and other engagement for posting about it.
There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
> There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
HN is the digital water cooler. Rumors are a kind of social currency, in the capital sense, in that it can be leveraged and has a time horizon for value of exchange, and in the timeliness/recency biased sense, as hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.
I too worked at a place where hot button issues were being leaked to international news.
Leaks were done for a reason: either because the leaker agreed with what leaked, really disagreed with it, or wanted to feel big by being a broker of juicy information.
Most of the time the leaks were done in an attempt to stop something stupid from happening, or highlight where upper management were making the choice to ignore something for a gain elsewhere.
Other times it was there because the person was being a prick.
Sure, it's a tiny part of the conversation, but in the end, if you've got to the point where your employees are pissed off enough to leak, that's the bigger problem.
This is a strangely defensive comment for a post that, at least on the surface, doesn't seem to say anything particularly damning. The fact that you're rushing to defend your CEO sort of proves the point being made, clearly you have to make people believe they're a part of something bigger, not just pay them a lot.
The only obvious critique is that clearly Sam Altman doesn't believe this himself. He is legendarily mercenary and self serving in his actions to the point where, at least for me, it's impressive. He also has, demonstrably here, created a culture where his employees do believe they are part of a more important mission and that clearly is different than just paying them a lot (which of course, he also does).
I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).
The headline makes it sound like he's angry that Meta is poaching his talent. That's a bad look that makes it seem like you consider your employees to be your property. But he didn't actually say anything like that. I wouldn't consider any of what he said to be "slams," just pretty reasonable discussion of why he thinks they won't do well.
I'd say this is yet another example of bad headlines having negative information content, not leaks.
To me, there’s an enormous difference between “they pay well but we’re going to win the race” and “my employees belong to me and they’re stealing my property.”
Notably, I don’t see him condemning Meta’s “poaching” here, just commenting on it. Compare this with, for example, Steve Jobs getting into a fight with Adobe’s CEO about whether they’d recruit each other’s employees or consider them to be off limits.
But I've also experienced that the outside perspective, wrong as it may be on nearly all details, can give a dose of realism that's easy to brush aside internally.
Your comment comes across dangerously close to sounding like someone that has drunk the kool-aid and defends the indefensible.
Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes you can hear what was needed whether it was out of context or not. Sam is not a great human being to be placed on a pedestal that never needs anything he says questioned. He's just a SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?
>if you had actually been in the room for more of the conversation you'd probably feel different
If you haven't drunk the kool-aid, you might feel differently as well.
SAMA doesn't need your assistance white knighting him on the interwebs.
Perhaps you think it's anti-American to believe that Israel is committing war crimes in Gaza. Perhaps I think it's anti-American to believe that the Jan 6 rioters should have been pardoned.
I'd certainly expect visitors to be held to the same standards as the natives. This is the problem, as a US citizen I don't want to be respectful and quiet, especially when I disagree with my government.
> I'd certainly expect visitors to be held to the same standards as the natives.
Visitors are held to a higher standard than natives. Visitors do not have control, a vote, etc: they are temporarily permitted by the privilege of policy at the time.
> as a US citizen I don't want to be respectful and quiet, especially when I disagree with my government.
Good, don't be! You're not at risk of having a visa revoked or go unissued.
Telling the US government it's broken is a favor to the US government. Freedom of speech is a gift to both the people of this country and the institution itself, helping it be pure and accountable. It's the force that prevents us from becoming like China.
Those who seek to stop that regulating force are undermining what makes America great. Where those voices of dissent were born isn't pertinent.
This is akin to the fallacy of saying that the accountability of "real name" policies on web forums make higher quality comments, and then you actually look at the contents of Faceboot. I mean, actual US citizens just voted this tiny-minded failure of a "president" in for the second time, because apparently he hadn't damaged the country enough the first time. Having a stake didn't help there, right? Either people are unaware they are harming themselves (stupidity/anti-intellectualism), don't care because others are getting harmed "more" (spite), or are in social media bubbles pushed by hostile actors (agent provocateurs don't actually need physical presence).
I feel like this is a ridiculous bad-faith argument. You know damned well that banning people from the country for having a JD vance meme on their phone is not stopping international agents. Arguing by presently demonstrably false hypotheticals as though they were reality makes me think it's a waste of everybody's breath talking to you.
It would be a stupid position. I was failing to explain that not all rights, like the freedom of speech, necessarily make sense to apply to foreigners who are given the privilege to enter the country. I am not necessarily firm in this position: the other poster made the argument that they can speak because what does it matter, which is a good point.
Okay but that's not what this is about. This is saying that a foreigner cannot express private thoughts online at any point before they enter the United States.
I assume someone who goes by "15155" would believe that having private conversations online can be useful. Or do you want to post your identifying information?
You do you, and we'll have the parties at my house then. Enjoy quietly playing Catan or whatever.
Your extrapolation to the national level is fallacious. Many of our academic institutions were deliberately hosting foreigners, with the explicit goal of being melting pots of ideas. That gave the US an exceptional cultural cachet around the globe. This whole thing is an exercise in attacking and destroying our traditional distributed institutions in favor of centralized autocratic control.
Which democratically elected a representative.
That is how democracies work.
If there's anything the executive has power over besides being commander in chief, it would be being leader in chief of defining what is actually American.
The fact that prior presidents have abdicated this important role doesn't mean it didn't exist. This is why traditions like the State of the Union exist. The executive gets to call the plays towards unity for Americanism.
Discriminating in employment due to one's affiliation is illegal in state and federal employment [1]. That does not mean one can break ToS and, for example, publish your private opinion (which can be misconstrued as your employer's) on a massive public platform. Most employers have ToS against such online activity during employment, for that reason.
It is also illegal to do the same for students. [2]
Faculty are already protected under tenure rules. And even for the nontenured, who really needs protecting? Only 5.7% of all faculty were registered as conservative as of 2020 [3].
My point remains: "filtering out" is illegal. Setting the stage on what is American is not.
When it comes to allowing foreign students to come to the US, which from my understanding is a likely path to citizenship, the executive branch gets to decide, and it is elected by 51% of the population every 4 years.
I prefer the exec branch over no purity test, or delegating to some other "expert" institution.
51% of the voting population, not the majority of the population. Big difference in numbers there: only 65.3% participated. So fewer than a third of Americans voted for the current president... why people don't vote, I'll never understand.
This technique is orthogonal to integer mod. Indeed the author multiplies by their magic constant and then does an integer mod to map into their hashtable's buckets.
This technique is actually just applying a fast integer hash on the input keys to the hashtable before mapping the keys to buckets. You can then map to buckets however you want.
The additional hash is useful if and only if the input hash function for your table's keys doesn't appear to be a random function, i.e. it doesn't mix its bits for whatever reason. If your input hash functions are indeed random then this is a (small but perhaps measurable) waste of time.
Using prime-numbered table sizes is another way to accomplish basically the same thing. Dividing the input hash key by a prime forces you to look at all the bits of the input. In practice these are written as division by a constant, so they use multiplies and shifts. It's basically a hash function. (Though I'd use multiply by a magic number over divide by a prime, mul alone should be faster.)
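The multiplicative step described above can be sketched in a few lines. This is a minimal Python illustration of Fibonacci hashing (multiply by the closest odd integer to 2^64/φ, then keep the top bits); the function and variable names are mine, not from the article.

```python
# Closest odd 64-bit integer to 2^64 / golden ratio (0x9E3779B97F4A7C15).
FIB_MULT = 11400714819323198485

def fib_hash(key: int, log2_buckets: int) -> int:
    """Map an integer key to one of 2**log2_buckets buckets.

    The multiply mixes the input bits upward into the high bits of the
    64-bit product; the shift then keeps only the top log2_buckets bits,
    so the bucket index depends on all of the key's bits, unlike a plain
    'key mod 2^k', which looks only at the low ones."""
    product = (key * FIB_MULT) & 0xFFFFFFFFFFFFFFFF  # 64-bit wraparound
    return product >> (64 - log2_buckets)

# Sequential keys (a degenerate pattern for 'mod 2^k' with a weak hash)
# scatter across a 16-bucket table:
buckets = [fib_hash(k, 4) for k in range(16)]
```

Note the bucket count here is a power of two, so the final step is a shift rather than a division; that is exactly the trade against prime-sized tables discussed above.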
I think the post talks about exactly this? The method is combining hashing the keys and finding a position in the target range. There's a bit where he talks about how Knuth uses the term 'hash function' as the combination of these two operations, while modern texts look at the two operations in isolation.
So maybe one way of looking at this is as an efficient fused operation, which doesn't look special when you look at the ops in isolation, but combines into something that is both fast and avoids problems with patterned input.
The way I understood this article, the problem Fibonacci hashing seems to solve is that it turns a hashing strategy that would require a prime modulo into something that can use a power of two modulo.
I think there are some hashing functions around that are already designed to solve that problem at "step 1".
So the question just boils down to which is faster.