I rewatched it in recent weeks and enjoyed all the bits that I enjoyed years ago during the first watch. The stories I found a bit tedious the first time (the High Sparrow plotline, Arya and the Faceless Men) weren't as miserable; I think I was expecting them to drag on even more. My biggest grievance on the rewatch was just how poorly it's all tied up. I again enjoyed The Long Night through the lens of 'spectacle over military documentary'. The last season just felt like they wrote themselves into a corner and didn't have the time or patience to see it through. By that point the actors were ready to move on, etc.
I don't really view this as the showrunners' fault. GRRM was unable to complete his own work. The show worked best when it drew from the author's own material (GRRM was a screenwriter himself and knew how to write great dialogue/scenes).
It's absolutely the producers' fault. They actively chose to release the product they did instead of making more episodes, taking longer, bringing other people in to help, etc.
Martin has claimed he flew to HBO to convince them to do 10 seasons of 10 episodes instead of 8 seasons with just 6 episodes in the final one [1]. How the series ended was straight up D.B. Weiss and David Benioff's call.
Maybe ChatGPT is sticky enough that people won't switch. But since we're talking about something as old as Google Video, we could also talk about AltaVista, which was "good enough" until people discovered a better and more useful alternative.
A lot of "normal people" are learning fast about ChatGPT alternatives now. Gemini in particular is getting a lot of mainstream buzz. Things like this [1], with 14k likes, are happening every day on social. Marc Benioff's love for Gemini broke through into the mainstream too.
I couldn't even get ChatGPT to let me download code it claimed to have written for me. It kept saying the files were ready but refused to let me access or download anything. It was the most basic use case and it totally bombed. I gave up on ChatGPT right then and there.
It's amazing how different people have wildly varying experiences with the same product.
It's because comparing their "ChatGPT" experience with your "ChatGPT" experience doesn't tell anyone anything. Unless people start saying which models they're using and what their prompts were, the back-and-forth about which platform is best provides zero information to anyone.
Did you wait a while before downloading? The links it provides for temporary projects have a surprisingly brief window in which you can download them. I've had a similar experience even when waiting just one minute to download the file.
Since LLMs are non-deterministic, it's not that amazing. You could ask it the same question as me and we could both get very different conversations and experiences.
Google trains its own AI on TPUs, which are designed in house. Google doesn't have to pay retail rates for Nvidia GPUs like the other hyperscalers in the AI rat race. Therefore, Google trains its AI for cheaper than everyone else. I think everyone else "loses big" other than Google.
But ... I don't understand why this is supposedly such a big deal. Look into it and calculate, and a very different picture emerges. Nvidia reportedly makes about a 70% margin on its sales (that's against COGS; in other words, Nvidia still pays about $1,400 for the chips and memory to produce a $4,500 RTX 5090 card, and that cost is rising fast).
When you include research for current and future cards, that margin drops to 55-60%.
When you include everything on their cash flow statement it drops to about 50%.
And this is disregarding what Michael Burry pointed out: you really should also subtract their stock dilution from stock-based compensation, about 0.2% of a $4.6 trillion market cap per year. Burry's point, of course, is that this leaves slightly negative shareholders' equity, i.e. it brings the margin to just under 0, which is mathematically true. But for this argument let's very generously say it eats only about another 10 points out of that margin, as opposed to the 50 it mathematically eats.
Google and Amazon are bound to be less efficient than Nvidia at this, because they're making up ground. Let's very generously say that's another 10%, maybe 20%.
So really, making their own chips saves Google at best 30% to 40% on the price, generously. And let's again ignore Google's own claim that TPUs are 30% to 50% less efficient than Nvidia chips, which for large training runs translates directly into dollars.
So for Google, TPUs are just about cost-neutral. They probably allow it to have more chips, more compute, than it otherwise would, but they don't save it money over buying Nvidia chips. Frankly, this conclusion sounds "very Google" to me.
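To make the stacking explicit, here's the same back-of-envelope in a few lines of TypeScript. Every figure is just the generous guess from the paragraphs above, not anything from a filing:

    // Rough margin stack from the argument above (illustrative only).
    const grossMargin = 0.70;   // reported ~70% margin over COGS
    const rdHit = 0.12;         // R&D for current/future cards -> ~55-60%
    const cashFlowHit = 0.08;   // rest of the cash flow statement -> ~50%
    const sbcHit = 0.10;        // "very generous" stock-dilution haircut
    const catchUpHit = 0.10;    // Google/Amazon being less efficient than Nvidia

    const saving = grossMargin - rdHit - cashFlowHit - sbcHit - catchUpHit;
    console.log(`Best-case saving vs. buying Nvidia: ~${(saving * 100).toFixed(0)}%`);
    // ~30%, and a TPU that's 30-50% less efficient per chip can eat
    // that whole saving on a large training run.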
It's exactly the sort of thing I'd expect Google to do. A VERY impressive technical accomplishment ... but one that can be criticized for being beside the point. It doesn't actually matter. As an engineer I applaud that they do it, please keep doing it, but it's not building a moat, not building revenue or profit, so the finance guy in me is screaming "WHY????????"
At best, for Google, TPUs mean certainty of supply relative to Nvidia (whereas supplier contracts could also build certainty of supply further down the chain).
Every time I see a table like this, the numbers go up. Can someone explain what this actually means? Is it just an improvement, with some tests solved in a better way, or is this a breakthrough where this model can do something all the others cannot?
That is not entirely true. At least some of these tests (like HLE and ARC) take steps to keep the evaluation set private so that LLMs can’t just memorize the answers.
You could question how well this works, but it’s not like the answers are just hanging out on the public internet.
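For what it's worth, the mechanics are simple to sketch. Below is a toy TypeScript harness, assuming a setup where questions ship publicly but the answer key never leaves the grader; the file names and the askModel hook are made up for illustration, not how HLE or ARC actually grade:

    // Toy "private eval set" harness: questions are public, the answer
    // key stays on the grader's machine, so a model can't have
    // memorized it from its training data.
    import { readFileSync } from "node:fs";

    type Item = { id: string; question: string };

    const questions: Item[] = JSON.parse(readFileSync("public_questions.json", "utf8"));
    const answerKey: Record<string, string> =
      JSON.parse(readFileSync("private_answers.json", "utf8"));  // never published

    async function score(askModel: (q: string) => Promise<string>): Promise<number> {
      let correct = 0;
      for (const { id, question } of questions) {
        const guess = (await askModel(question)).trim().toLowerCase();
        if (guess === answerKey[id]?.trim().toLowerCase()) correct++;
      }
      return correct / questions.length;  // fraction right on the held-out set
    }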
Installed it. Thought it might be cool to ask it how to improve my site's UI. It thought for about 2 minutes and supposedly made changes. It says it created a "searchable, filterable grid layout", but I don't see any difference on the page. I wonder what's up.
1. Maybe the domain matching missed? You can check this by going to the library tab and seeing if it appears in "Modifications for Current Page" when you're on the site.
2. Maybe there was a silent error. Our current error system relies on Chrome notifications, and we've come to realize that many people have them disabled, which means you don't get a clear error message when something goes wrong. We are actively working on this; a generic sketch of one possible fallback follows this list.
3. The script could be caught by a content security policy. Checking the console log can help you see if there are any errors.
4. Maybe the script just doesn't work on the first try. We can't guarantee it will work perfectly every time. You can try updating the script (Library -> click Modify on the script) and saying that it didn't work / you don't see any changes.
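On points 2 and 3, here's a generic TypeScript sketch of the kind of in-page fallback that doesn't depend on Chrome notifications (illustrative only, not the shipped code):

    // Wrap the injected script so failures show up in the console and
    // as a temporary in-page banner, even when notifications are off.
    function runUserScript(run: () => void): void {
      try {
        run();
      } catch (err) {
        console.error("[modification] script failed:", err);
        const banner = document.createElement("div");
        banner.textContent = "Page modification failed - see console for details";
        banner.style.cssText =
          "position:fixed;bottom:8px;right:8px;padding:6px 10px;" +
          "background:#c0392b;color:#fff;font:13px sans-serif;z-index:2147483647";
        document.body.appendChild(banner);
        setTimeout(() => banner.remove(), 8000);  // auto-dismiss after 8s
      }
    }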
Indeed, captcha vs captcha bot solvers has been an ongoing war for a long time. Considering all the cybercrime and ubiquitous online fraud today, it's pretty impressive that captchas have held the line as long as they have.