Unfortunately, this is a fairly difficult task. In my experience, even SOTA models like Nano Banana usually make little to no meaningful improvement to the image when given this kind of request.
You might be better off using a dedicated upscaler instead, since many of them naturally produce sharper images when adding details back in - especially some of the GAN-based ones.
If you’re looking for a more hands-off approach, it looks like Fal.ai provides access to the Topaz upscalers.
They also offer multiple interactive apps for upscaling video on their own website (a confusingly overlapping product lineup!) - Topaz Video and Astra. And maybe more, who knows.
I have access to the interactive apps, and there are a lot of knobs that aren't exposed in the Fal API.
edit: lol I found a third offering on the Topaz site for this, "Video upscale" within the Express app. I have no idea which is the best, despite apparently having a subscription to all of them.
I'm dimestore cheap - I'd be exploding to frames, sharpening, and reassembling with an ffmpeg > IrfanView pipeline, lol. It would be awfully expensive to do it with an AI model. Would a photo/video editing suite do it? Google Photos with a script, Adobe Premiere Elements, or could you do it yourself in DaVinci Resolve? Or are you talking hundreds of hours of video?
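For what it's worth, the explode-to-frames step may be unnecessary: ffmpeg's built-in `unsharp` filter can sharpen a video in a single pass. Here is a minimal sketch driving it from Python; the helper name and the filter amounts are my own choices, not anything from the thread, so treat them as a starting point.

```python
import subprocess

def sharpen_cmd(src, dst, amount=1.0):
    # ffmpeg's unsharp filter sharpens luma in one pass:
    # unsharp=msize_x:msize_y:amount:chroma_x:chroma_y:chroma_amount
    # (5x5 matrix with amount 1.0 is a mild, safe default;
    # chroma amount 0.0 leaves color untouched).
    vf = f"unsharp=5:5:{amount}:5:5:0.0"
    # -c:a copy passes the audio stream through unchanged.
    return ["ffmpeg", "-i", src, "-vf", vf, "-c:a", "copy", dst]

# To actually run it (requires ffmpeg on PATH):
# subprocess.run(sharpen_cmd("in.mp4", "out.mp4"), check=True)
```

This stays entirely on the CPU, so it's cheap per hour of footage - the tradeoff versus the AI upscalers is that it only sharpens what's there and won't hallucinate (or recover) any detail.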
FYI that is an extremely challenging thing to do right. Especially if you care about accuracy and evidentiary detail. Not sure this is something that the current crop of AI tools are really tuned to do properly.
This is a good point. Some of the tools have a "creative mode" or "creativity" knob that hopefully drives this point home. But the simpler ones don't, and even with that setting dialed back it still has the same fundamental limitations/risks.
The issue is that they always say "Here's the final, correct answer" before they've written the answer, so of course the LLM has no idea if it's going to be right before it starts, because it has no clue what it's going to say.
I wonder how it would do if instead it were told "Do not tell me at the start that the solution is going to be correct. Instead, tell me the solution, and at the end tell me if you think it's correct or not."
I have found that on certain logic puzzles that it simply cannot get right, it always tells me that it's going to get it right "this last time," but if asked later it always recognizes its errors.
Yup. It's the colons after every paragraph's first sentence:
> It worked because it solved a real problem: Kenyans were already sending money through informal networks. M-PESA just made it cheaper and safer.
> Here’s why this matters: M-PESA created a payment rail with near-zero transaction costs.
> The magic is this: You’re not buying a $1,200 solar system.
> It gets even better: there are people who will pay for credits beforehand.
It's just again and again and again. It sounds 100% ChatGPT.
Maybe this is 100% written by hand by someone who reads too many ChatGPT-generated articles. Possibly the author just spends a ton of time chatting with ChatGPT and has picked up its style. Or it's just more AI-written than OP wants to admit.
We are so cooked.
We spend more time trying to suss out if something was written by AI than actually reading the article.
So many legitimate ways of writing are now “ai” style.
I used to use em dashes a lot, but now I deliberately avoid them because they're an AI smell - using the less "correct" version instead.
No, it sounds like the author is well aware of that, and is instead just trying to get a read on what the gov's various systems are saying about him, so he can stay well within buffers of that.
He explicitly says that none of his data on the app would convince an official.
The point is - while all of these systems are fuzzy at the edges, that is not a bug. Letting people reside in a few countries at the same time, and pick a tax residency like a new winter jacket, is explicitly not a goal of the border, tax, and residency systems.
It's actually relatively simple to follow the rules that lead you down the well-established residency paths if you do the opposite of what the article suggests: leave enough of a buffer on every required number that you don't need to think about it, and the precise count can be handwaved by the officials.
Conversely, if you try to minmax the rules, you might find that most important systems still have an arbitrary human decision maker, who simply decides whether to apply a complex ruleset to the letter, or to be lenient.
> No, it sounds like the author is well aware of that, and is instead just trying to get a read on what the gov's various systems are saying about him, so he can stay well within buffers of that.
You don't need an app for that. You just behave like a normal person.
?? I'm a normal person, and I don't need to count my days in and out of a country, I just take vacations when I like.
If OP did that, they'd lose their visa.
It sounds like you've never known anyone on a green card, or waiting for citizenship, or on a visa that requires a certain number of days in the country.
> To apply for British citizenship, you need to prove you were physically in the UK on your application date but five years ago.
This is an insane bureaucratic requirement (to have been in the country on the exact day five years before your application date), and not something the vast majority of people need to worry about. How does "just behave like a normal person" help keep you on the right side of multiple overlapping Kafkaesque requirements?
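The day-counting the thread is arguing about is mechanical enough to sketch. This is a hypothetical helper, not the author's app, and the assumptions are mine: both the entry and exit day count as presence, and the limit is a rolling window ending on a given date - real rules on both points vary by country, which is rather the article's problem.

```python
from datetime import date, timedelta

def days_present(stays, window_end, window_days=365):
    """Count days physically present in the window ending at window_end.

    stays: list of (entry, exit) date pairs.
    Assumption: both endpoints count as days of presence.
    """
    window_start = window_end - timedelta(days=window_days - 1)
    total = 0
    for entry, exit_ in stays:
        # Clip each stay to the window, then count inclusive days.
        start = max(entry, window_start)
        end = min(exit_, window_end)
        if start <= end:
            total += (end - start).days + 1
    return total

# e.g. one 10-day trip in January, evaluated at year's end:
# days_present([(date(2024, 1, 1), date(2024, 1, 10))], date(2024, 12, 31))
```

Even this toy version shows why people want buffers rather than exact counts: a one-day disagreement with the border system about whether an endpoint counts can flip you over a threshold.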
While "isn’t just wrong — it’s profoundly counterproductive" does sound pretty AI-ish, "his eccentrically rational mind" definitely does not. So either an AI was used to help write this, or we try to remember that AI has this tone (and uses emdashes) precisely because real people also write like this.
How is pardoning people like Fauci, or even Hunter, whom Trump was clearly going to target as part of an "enemies" list, more "self-serving" than literally pardoning anyone who makes you, or gives you, millions of dollars?
(Changpeng Zhao - made him billions; Trevor Milton - donated $1.8 million; Walczak - his mom donated millions)
You don't have to prove to me that Trump is a lot more self-serving than Biden. This should be obvious to anyone with half a brain.
That said, this shouldn't be a competition of who is "more self-serving". Just because your neighbour murdered two people, doesn't mean that you get to murder one.
That's why it was not just one individual -- he also pardoned Fauci, members of Congress who served on the J6 investigations, and Gen. Milley for the same reason.
It's clear that he was correct that Trump was going to target his political enemies, but it sounds like he can't win here -- if he pardons everyone including Comey, people would say he's abusing the power by pardoning everyone. If he only pardons a few then he's accused of leaving others "high and dry."
Sperm whales may have more massive brains, but they have fewer cortical neurons total, and of course a much smaller brain to body mass ratio.
But more importantly for this conversation, our brains use up a staggering 20-25% of our resting metabolic needs. A whale brain uses something like 3%.
For us to be able to devote 20% of our calories to our brains, we simply needed to have a huge excess in the number of calories we had available. This is why the cooking hypothesis makes sense. Once we were smart enough to get lots of excess calories, that opened the door to this new fitness landscape of organisms that could devote a ridiculous proportion of their food to their brains. It wasn't that we gave up something else, it's that this wasn't even a possibility before.