In 2014, one benefit of Stack Overflow / Stack Exchange was that a user searching for work could mention they were a top 10% contributor. It actually had real-world value. The equivalent today is a user with extensive examples of completed projects on GitHub that can be cloned and run. If OP's solution lives in GitHub repositories, it will eventually be included in a training set. Moreover, the solution will definitely be used for training, because it now exists on Hacker News.
I had a conversation with a couple of accountant / tax-advisor types about participating in something like this for their specialty. The response was actually 100% positive, because they know there are parts of their job that AI can never take: 1) filings require a human with a government-approved license; 2) there is hidden information about which tax optimizations are higher or lower risk, based on what they know from their other clients; 3) humans want another human to make them feel their tax situation is being taken care of well.
But many also said it would be better to wrap this in an agency, so that the leads generated from the AI accounting questions go to only a few people instead of being made fully public, Stack Exchange style.
So +1 point -1 point for the idea of a public version.
LOL. As a top 10% contributor on Stack Overflow, and on FlashKit before that, I can assure you that any real world value attached to that status was always imaginary, or at least highly overrated.
Mainly, it was good at making you feel useful and at honing your own craft - because providing answers forced you to think about other people's questions and problems as if they were little puzzles you could solve in a few minutes. Kept you sharp. It was like a game to play in your spare time. That was the reason to contribute, not the points.
hehe yeah, this exists of course. Like these guys https://yupp.ai/ - they haven't announced the tokens yet, but there are points, and they got all their VC money from web3 VCs. I'm sure there are others trying.
Companies don’t have a legal obligation to publicly disclose revenue in many countries, so if you’re selling business insights you’re always on the lookout for indicators that can be used as a proxy to revenue.
Yes, but also to fake how well they are doing to potential or current investors.
IMHO, those aren't smart investors, because this should come up in due diligence: the amount of money left, the current burn rate, and what the company is doing about the latter. If a company was fully staffed on paper but also actively hiring, that would be an indicator to me that either the hiring is fake (so what else are they faking?) or the hiring is real and they are fiscally irresponsible.
There's another angle to all of this: obviously the company isn't fully staffed, and there's still some room in the runway for another hire. It's just that right now it's a buyer's market from the company's perspective, so, well, beggars can be choosers. They're just holding out until that golden candidate comes along. This obviously sucks, and there SHOULD be a maximum length of time a company can keep a job ad out before having to explain why it's taking so long.
It's not uncommon for countries to require citizens to disclose who they applied to, and how many jobs they applied for that week, in order to collect social security. There should be something similar for companies with open job ads.
I don't think 2 is true: when OpenAI's model won a gold medal at the math olympiad, it did so without tools or web search, just pure inference. Such a feat definitely would not have happened with o1.
> Just to spell it out as clearly as possible: a next-word prediction machine (because that's really what it is here, no tools no nothing) just produced genuinely creative proofs for hard, novel math problems at a level reached only by an elite handful of pre‑college prodigies.
> For OpenAI, the models had access to a code execution sandbox, so they could compile and test out their solutions. That was it though; no internet access.
We still have next to no real information on how the models achieved the gold medal. It’s a little early to be confirming anything, especially when the main source is a Twitter thread initiated by a company known for “exaggerating” the truth.
Well, Google got the same result, and the official body confirmed it. Would it be nice to know exactly how it was done? Sure, but this is something that happened.
If you're not going to believe researchers when they tell you how they did something then sure, we don't know how they did it.
Given how much bad press OpenAI got just last week[1], when one of their execs clumsily (and, I would argue, misleadingly) described a model achievement and then had to walk it back amid widespread headlines about their dishonesty, those researchers have a VERY strong incentive to tell the truth.
It illustrates that there is a real cost to lying about research results: if you get caught, it's embarrassing.
It's also worth taking professional integrity into account. Even if OpenAI's culture didn't value the truth, individual researchers still care about being honest.
This exact statement could be said about literally any corporation or organization. And yet, corporations still lie and mislead, because deception helps you make money and acquire funding.
In OpenAI’s case, this isn’t exactly the first time they’ve been caught doing something ethically misguided:
True, but aren't the math (and competitive programming) achievements a bit different? They're specific models heavily RL'd on competition math problems. Obviously still ridiculously impressive, but if you haven't done competition math or programming before, it relies on much more memorization of techniques than you might expect, and it's much easier to RL on.
The people working inside the company may be both judge and party to the issue, so it's not always a bad idea to call in consultants. Which do you prefer: independent but somewhat misinformed, or a stakeholder in the issue but knowledgeable?
If you can't trust your people, why are they your people? The consultants are going to get the story from your people anyways (even if they do their own data collection, your people are going to tell them where to look), so it's not like you're actually eliminating the bias, you're just obfuscating it.
You hire consultants when obfuscation is the point: it's not Jim from down the hall saying this, it's the consultants. Sometimes there are legitimate reasons for obfuscation, but it's always some variation on "so-and-so needs to hear this, just not from me."