The official reason is availability -- it's nigh impossible for a gamer to buy a GPU right now for anything close to MSRP. The real reason is probably a desire to prevent miners from selling their GPUs on the second-hand market, thus increasing sales of new GPUs.
Nvidia is still trying to fulfill launch-day orders for the 3080. The availability problem is real. They don't need to artificially increase demand for their GPUs when they are already selling far more than they can deliver.
That's true today, but will it remain true when the cryptocurrency bull run ends, we enter a bear market, and mining becomes far less profitable? Not just that, but the most profitable coin to mine, Ethereum, is on track to move away from Proof-of-Work, so we can expect a lot of second-hand cards to be sold at fire-sale prices in 2022.
The problem isn't miners. The problem is no one wants to stop bots and scalpers from buying them en masse.
If eBay prevented scalping and e-commerce sites added bot protections, demand would stop being absurd.
Nvidia stopped selling cards on their site because they couldn't figure out how to prevent bots from buying them all. Not to mention Nvidia is selling cards directly to large miners by the pallet load.
Why should eBay prevent scalping? That's the entire reason their site exists: to see who is willing to pay the most for scarce goods.
eBay is not going to go away just because it hurts gamers' fee-fees. Nor should it. And I say this as someone who is generally opposed to mining and generally upset with 30-series availability. Demanding that eBay shut down is a ludicrous over-reaction, and it wouldn't solve anything anyway.
You're literally blaming the market for providing a clearing price. If you want to reduce demand for gaming cards, forcing miners into their own product segment is the most plausible way to do that; then the clearing price falls.
I don’t think scalpers can drive up prices long term. What’s the difference between all the scalpers having the cards and nvidia having the cards? Either way people are only going to pay what the card is worth to them. If cryptocurrency miners buy the cards, they’re not reselling them.
Remember about ten years ago, how there was a fad to raise money, then use it to buy every single item on the shelves of a small convenience store? The intention was to keep small locally-owned stores in business by buying more from them. However, even though it brought a lot of profit on that one day, it meant that the shelves were empty for the next few weeks. The regulars saw that, and needed to find somewhere else to shop. The regulars left, and some never came back, leaving the store in a worse financial position than before.
Cryptocurrency miners are driving up the prices of GPUs. NVIDIA wants to make sure that they have stock available for their regular customers, because that is where the long-term profit comes from. Ramping up production is not feasible on the short time scale that cryptocurrencies have been around, nor is it known whether cryptocurrencies will be around for long enough to recover such an investment.
TL;DR: Cryptocurrency miners are messing up the long-term GPU market, and NVIDIA is trying to maintain that market.
>Ramping up production is not feasible on the short time scale that cryptocurrencies have been around, nor is it known whether cryptocurrencies will be around for long enough to recover such an investment.
Bitcoin has been around since 2010. They've had plenty of time to realize this was coming. Even the thickest person could've spotted this wave coming in 2013, in addition to the continued rise of computer and console gaming.
Hard to say. I started mining bitcoin in 2013 and it had already moved past GPUs. Bitcoin wasn't taken very seriously back then, and the alt coins were taken even less seriously, so I can see NVidia not taking it seriously too.
But they didn't want to think about it for even five minutes and figure out a friendly way to do this. They should have given gaming sites purchase invites to hand out to members.
There are other ideas too, higher prices on raw hardware but cash-back incentives if bought with games or gaming hardware.
Now everyone hates them. AMD couldn't have paid for such marketing. AMD gives everyone ECC support and unlocked cards. They're (currently) the anti-Intel/Nvidia, and the market darling.
Officially it is a supply/demand issue. The crypto miners are buying up all the high-end cards, so NVIDIA's main target audience (gamers, workstations, etc.) ends up empty-handed.
What a lot of people seem to think: used-up cards could end up flooding the market as miners migrate to the newest cards, cutting into NVIDIA's profit; or NVIDIA wants to make more money by selling pure mining cards that can't be reused for anything else.
This doesn't make sense to me: why don't they simply price their cards 3x higher and sell them all to miners? Selling to the highest bidder is kind of "Capitalism 101". Their profits would skyrocket.
And what about gamers? Well... they can buy used previous gen cards from miners.
Some companies might be interested in keeping their long-term market happy instead of losing their flagship products to what they might consider a relatively short-lived craze.
> Well... they can buy used previous gen cards from miners.
Yeah, last-gen cards, which would ruin any lead NVIDIA has over its competition. Going by the Steam hardware survey, NVIDIA currently owns the PC market; that won't last if the whole inventory is bought up by crypto miners, and it might lead to serious long-term consequences if gamers and engine developers shift their focus to AMD and Intel.
There is evidence that many of the new GPUs are being bought by crypto miners, and there is vocal outcry because some feel these cards should be for consumers (gamers), which is frankly bizarre.
I don't know what is bizarre about it. I think cryptocurrencies are fundamentally flawed due to their environmental impact, and should be banned on that merit alone. Add in the inability to reverse fraudulent transactions and their role in the rise of ransomware, and cryptocurrencies are easily something that should be banned.
I don't see it as bizarre to be frustrated that one's hobby is being priced out of reach by what amounts to an environmentally-damaging pyramid scheme.
>I think cryptocurrencies are fundamentally flawed due to their environmental impact, and should be banned on that merit alone
A GPU is a GPU, so why is its environmental impact fine when it's 31 million people pretending to be a cowboy in RDR2, but bad when it's being used for financial transactions?
For the same reason that sending a letter to a friend is different from using the "Send-a-Dime" chain letter. One is something that improves the human condition and brings enjoyment, while the other is a pyramid scheme with negative externalities under the guise of a get-rich-quick scheme.
That is not at all inconsistent with my viewpoint. In the same way that a traditional pyramid scheme enriches a few while the majority lose money, bitcoin speculation can enrich a few before either the bubble pops naturally or it has legislation made against it for being environmentally damaging.
The same number of financial transactions can be handled with orders of magnitude less energy. A secure distributed ledger of financial transactions does not inherently need nearly this much computational power to maintain, as is evidenced by Ethereum's impending move to proof-of-stake.
You can't really say the same for video games. To reduce the carbon footprint of a user playing RDR2, you either need newer more energy efficient hardware, or you need to alter the experience the game provides to make it less computationally expensive.
> A secure distributed ledger of financial transactions does not inherently need nearly this much computational power to maintain, as is evidenced by Ethereum's impending move to proof-of-stake.
You can claim that as evidence after Ethereum has actually moved to proof-of-stake and operated in that mode for a significant length of time without any notable vulnerabilities. Proof-of-stake has some known drawbacks compared to proof-of-work; in particular, at least in naïve implementations, there is nothing to prevent a malicious party from staking the same coins in multiple chains (forks) simultaneously, a flaw which proof-of-work systems are specifically designed to avoid by making the proof depend on each chain's history. One assumes that the Ethereum developers came up with some sort of mitigation for that issue, among others, but it has yet to see real-world testing with significant funds at risk should it fail.
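To make that history-dependence concrete, here's a toy sketch in Go (this is not how Bitcoin or Ethereum actually structure blocks; the field names, hashing layout, and "leading zeros" difficulty scheme are all simplifications) showing that a proof-of-work block commits to its parent's hash, so work spent extending one fork can't be replayed on another:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"strings"
)

// Block is a toy block: the proof-of-work hash commits to the previous
// block's hash, so a nonce found for one fork is useless on another fork.
type Block struct {
	PrevHash [32]byte
	Data     string
	Nonce    uint64
	Hash     [32]byte
}

// mine searches for a nonce whose hash starts with `difficulty` zero hex
// digits -- a crude stand-in for a real difficulty target.
func mine(prev [32]byte, data string, difficulty int) Block {
	b := Block{PrevHash: prev, Data: data}
	prefix := strings.Repeat("0", difficulty)
	for nonce := uint64(0); ; nonce++ {
		h := sha256.New()
		h.Write(prev[:])          // the proof depends on this chain's history
		h.Write([]byte(data))
		var nb [8]byte
		binary.LittleEndian.PutUint64(nb[:], nonce)
		h.Write(nb[:])
		sum := h.Sum(nil)
		if strings.HasPrefix(fmt.Sprintf("%x", sum), prefix) {
			b.Nonce = nonce
			copy(b.Hash[:], sum)
			return b
		}
	}
}

func main() {
	var genesis [32]byte
	b1 := mine(genesis, "block 1", 4)
	b2 := mine(b1.Hash, "block 2", 4) // extending a fork means redoing this search
	fmt.Printf("b1 %x (nonce %d)\nb2 %x (nonce %d)\n", b1.Hash, b1.Nonce, b2.Hash, b2.Nonce)
}
```

In a naive proof-of-stake scheme there is no equivalent per-fork sunk cost, which is exactly the "nothing at stake" concern described above.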
I'd claim traditional databases as an example of a secure distributed ledger of financial transactions. Every single credit card processing network does exactly this. What cryptocurrencies do is add "trustless" as a requirement, and that's where the power consumption comes in. I also think that's a weird requirement to have, precisely because treating every interaction as adversarial introduces so much overhead.
The requirement exists regardless of how hard or easy it is to implement. "Trustless" is a requirement because history shows that there are many circumstances where existing payment networks cannot be trusted. Payment networks sometimes refuse to do business with certain parties merely because it's not profitable due to bad credit, a higher-than-average chargeback rate, public relations, or other reasons. Even when the networks themselves are not actively antagonistic, they are vulnerable to political influence which may take the decision out of their hands.
Where trust is feasible, transactions can be settled cheaply in separate records and not on the blockchain itself. The Lightning Network is one such protocol; support for inter-account transfers on the same exchange is another. However, it's good that the trustless option exists for the cases where trust would not be justified.
>To reduce the carbon footprint of a user playing RDR2, you either need newer more energy efficient hardware, or you need to alter the experience the game provides to make it less computationally expensive.
Or add an extra tax on those who play video games to cover the carbon impact, or ban high-powered GPUs so they can't cause that impact at all and keep all gaming at low-end mobile processor levels.
It's all still code running over the same circuits, whether it's being used to verify crypto or to let people pretend to be a cowboy. If crypto is taxed more for its impact, then so should video games, and maybe things like Marvel movies for the impact of their CGI rendering.
Or does the argument about environmental impact go out of the window when it's Rockstar and Disney making money from running processors at full blast, rather than some nobody getting rich from crypto?
There are two separate use cases/markets both wanting the same product with significantly different budgets: gaming and crypto mining.
The crypto mining market has far more money to burn because they make it back over time via mining, which means that unless there's enough supply for both markets, the gaming market gets (almost) nothing until the crypto mining market has bought all the cards it wants.
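To put rough numbers on that budget gap, here's a back-of-the-envelope sketch; the figures are purely illustrative assumptions, not real mining revenue or market prices:

```go
package main

import "fmt"

func main() {
	// Hypothetical numbers only: a card earning $5/day net of electricity
	// pays back a $1500 street price in about 300 days, so a miner can
	// rationally bid well above MSRP while a gamer's budget stops there.
	const dailyRevenue = 5.0   // assumed $/day after power costs
	const streetPrice = 1500.0 // assumed market/scalped price
	fmt.Printf("payback: %.0f days\n", streetPrice/dailyRevenue)
}
```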
I have no idea what the solution to this problem is, and I don't think poorly designed driver restrictions is it. But it is an actual problem.
Are there any historical examples of similar situations?
Supply and demand should normalize when enough capacity has been provided.
The challenge is at least two-fold in my opinion. First of all, semiconductor fabrication capacity is stretched right now, and additional capacity has a significant capital requirement. Secondly, it's probably not clear whether crypto demand will stick around long enough to warrant additional capacity; the bottom has fallen out of it before.
It seems like a good strategy for NVIDIA to prevent losing market share among gamers. They could maximize profits near-term by selling cards at whatever the market will bear. But they'd yield their gamer share to AMD, and that would have long-term negative consequences.
While Golang could be used for that, the original target was to be a "systems language", more of a niche. There are people building websites and even games, but you won't find all the expected bells and whistles. There could even be current shortcomings in the language itself for such use cases, so you would be more "on your own" with such a solution, at the moment.
RoR might have affected my definition of what 'quick' means :D. The things I mentioned in the comment above are indeed 'quick' in RoR.
I assume you are implying these are not quick in golang? I think it proves my point that "assemble everything yourself" is a silly ideology when applied generically to all things.
This new proposal is explicitly a simplification of the previous check/handle proposal. And for the record, there was a lot of push back, but also a lot of support for the check/handle proposal. Language design is an iterative process.
Maybe. But I’ve been developing golang full time in high scale concurrency environments for 4 years and working with a team of similar people. It’s an opinion that is near universally shared on that team.
At high concurrency levels almost everything abandons standard golang concurrency patterns and tools.
Consumer-facing systems. System-wide throughput is between 6 and 12 million QPS (daily low/high), with an average query body size of 1.5 KB. Each server tops out at ~130K QPS. On a system with two 1 Gb NICs we pop the NIC. On a 10 Gb NIC we pop the CPU.
Current bottleneck is the golang http/net libs. Would likely need to rewrite it from the NIC up to do better.
That's an issue with the http/net libraries, not the concurrency model.
At really high throughput you can run into issues with the kernel's networking and driver stack. I've encountered situations with my own homegrown event libraries (mostly C or Lua+C; I've never used Go) that were bottlenecked in the kernel. I've also seen issues that were fundamentally related to poor buffering and processing pipeline strategies and that resulted in horrible performance. For example, I can get an order of magnitude greater streaming throughput using my own protocol and framing implementations than when using ffmpeg's, though I use ffmpeg's codecs and a non-blocking I/O model in both cases, all in C. And that's because of how I structured the flow of data through my processing pipeline.
There is no general model of concurrency that can solve that, and I've never seen any model that was easier in the abstract to tweak than any other. Those are implementation issues.
I don't know if it would have been holistically better; golang has lots of advantages.
But the concurrency would have been more straightforward on the JVM, because the language allows for more choices and there are lots of options that get you there.