
Maybe it's because I'm a data scientist and not a dedicated programmer/engineer, but setup+tooling gains this year have made 2025 a stellar year for me.

DS tooling feels like it hit a much-needed 2.0 this year. Tools are faster, easier, more reliable, and more reproducible.

Polars + PyArrow + Ibis have replaced most of my pandas usage. UDFs were the thing holding me back from these tools; this year Polars hit the sweet spot there, and it's been awesome to work with.
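For anyone curious what the UDF story looks like now, here's a minimal sketch of an element-wise Polars UDF (the column name and the bucketing rule are made up purely for illustration):

    import polars as pl

    df = pl.DataFrame({"score": [0.2, 0.8, 0.5]})

    def bucket(x: float) -> str:
        # hypothetical business rule, invented for this example
        return "high" if x > 0.6 else "low"

    out = df.with_columns(
        pl.col("score").map_elements(bucket, return_dtype=pl.Utf8).alias("bucket")
    )
    print(out)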

Marimo has made notebooks into apps. They're easier to deploy, and I can use anywidget+llms to build super interactive visualizations. I build a lot of internal tools on this stack now and it actually just works.
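For the curious, a rough sketch of what a marimo notebook-as-app file can look like (the slider and markdown cells here are hypothetical, and you'd serve it as an app with "marimo run app.py"):

    import marimo

    app = marimo.App()

    @app.cell
    def _():
        import marimo as mo
        return (mo,)

    @app.cell
    def _(mo):
        # a UI element defined in one cell...
        threshold = mo.ui.slider(0, 100, value=50)
        threshold
        return (threshold,)

    @app.cell
    def _(mo, threshold):
        # ...is reactively available in another
        mo.md(f"Current threshold: {threshold.value}")
        return

    if __name__ == "__main__":
        app.run()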

PyMC uses jax under the hood now, so my MCMC workflows are GPU accelerated.
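Concretely, the JAX path is just a sampler argument. A toy sketch, assuming a recent PyMC with jax and numpyro installed (the model itself is invented for illustration):

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    y = rng.normal(loc=1.0, scale=2.0, size=200)  # fake observations

    with pm.Model():
        mu = pm.Normal("mu", 0.0, 10.0)
        sigma = pm.HalfNormal("sigma", 5.0)
        pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
        # nuts_sampler="numpyro" hands NUTS to the JAX-based NumPyro backend,
        # which runs on GPU when JAX detects one
        idata = pm.sample(1000, tune=1000, nuts_sampler="numpyro")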

All this tooling improvement means I can do more, faster, cheaper, and with higher quality.

I should probably write a blog post on this.


Excited to give this one a try.

I've been using the previous Claude+Chrome integration and hadn't found many uses for it. Even when they updated Haiku, it was still quite slow for copy-and-paste-between-forms type tasks.

Integrating with Claude Code feels like it might work better for glue between a bunch of weird tasks. As an example, copying content into/out of Jupyter/Marimo notebooks, being able to go from some results in the terminal into a viz tool, etc.


How is AI not a stochastic parrot? That’s exactly what it is. That never precluded it from being useful.

Yeah -- stochastic just implies a probabilistic method. It's just that when you include enough parameters your probabilities start to match the actual space of acceptable results really really well. In other words, we started to throw memory at the problem and the results got better. But it doesn't change the fundamentals of the approach.
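If it helps make "stochastic" concrete, here's a toy sketch of sampling the next token from a probability distribution instead of always taking the top choice. The vocabulary and scores are invented, and this has nothing to do with any real model's internals:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["cat", "dog", "parrot", "stochastic"]
    logits = np.array([2.0, 1.5, 0.5, -1.0])  # made-up scores for each token

    def sample_next(logits, temperature=1.0):
        # softmax with temperature: lower temperature -> closer to plain argmax
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    print(vocab[sample_next(logits)])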

In my experience, it's not that the term itself is incorrect; it's more that people use it as a bludgeon to end conversations about the technology, rather than doing what should happen: inviting nuance about how it can be utilized and its pitfalls.

Colloquially, it just means there’s no thinking or logic going on. LLMs are just pattern matching an answer.

From what we do know about LLMs, it is not trivial pattern matching: by the definition of machine learning itself, the output it formulates is original information, not copied from the training data.


And a parrot (or human) is not stochastic? The truth is we don't actually know. So the usually included "just" is unjustified.

Exactly. After all, how can WE confidently claim that we’re more than stochastic parrots?

The window being minimized animates.

It goes from the bottom to the top of the window with a continuous effect: it squishes down in width, then gets warped/pulled into the dock icon to minimize.

The effect is like when Genie from Aladdin enters or leaves the lamp, but without smoke.


Are these AI filters, or just applying high compression/recompressing with new algorithms (which look like smoothing out details)?

edit: here's the effect I'm talking about with lossy compression and adaptive quantization: https://cloudinary.com/blog/what_to_focus_on_in_image_compre...

The result is smoothed skin, and applied heavily to video (as YouTube does; just look at any old video that was HD years ago) it would look this way.


It's filters, I posted an example of it below. Here is a link: https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...


It's very hard to tell in that Instagram video; it would be a lot clearer if someone overlaid the original unaltered video and the one viewers on YouTube are seeing.

That would presumably be an easy smoking gun for some content creator to produce.

There are heavy alterations in that link, but not having seen the original, and in this format, it's not clear to me how they compare.


You can literally see the filters turn on and off, making his eyes and lips bigger as he moves his face. It's clearly a face filter.


To be extra clear for others: keep watching until about the middle of the video, where he shows clips from the YouTube videos.


I would, but his right "eyebrow" is too distracting.


It's a scar in his eyebrow from a bicycle accident as a child: https://www.facebook.com/watch/?v=2183994895455038


You're misunderstanding the criticism the video levies. It's not that he tried to apply a filter and didn't like the result, it was applied without his permission. The reason you can't simply upload the unaltered original video, is that's what he was trying to do in the first place.


What would "unaltered video" even mean?


The video before it was uploaded.


The time of giving these corps the benefit of the doubt is over.


Wouldn't this just be unnecessary compute using AI? Compression or just normal filtering seems far more likely. It just seems like increasing the power bill for no reason.


Video filters aren't a radical new thing. You can apply things like 'slim waist' filters in real time with nothing more than a smartphone's processor.

People in the media business have long found their media sells better if they use photoshop-or-whatever to give their subjects bigger chests, defined waists, clearer skin, fewer wrinkles, less shiny skin, more hair volume.

Traditional manual photoshop tries to be subtle about such changes - but perhaps going from edits 0.5% of people can spot to bigger edits 2% of people can spot pays off in increased sales/engagement/ad revenue from those that don't spot the edits.

And we all know every tech company is telling every department to shoehorn AI into their products anywhere they can.

If I'm a Youtube product manager and adding a mandatory makeup filter doesn't need much compute; increases engagement overall; and gets me a $50k bonus for hitting my use-more-AI goal for the year - a little thing like authenticity might not stop me.


One thing we know for sure is that since ChatGPT humiliated Google, all teams seem to have been given carte blanche to do whatever it takes to make Google the leader again, and who knows what kind of people thrive in that kind of environment. Just today we saw what OpenAI is willing to do to eke out any advantage it can.


The examples shown in the links are not filters for aesthetics. These are clearly experiments in data compression.

These people are having a moral crusade against an unannounced Google data compression test thinking Google is using AI to "enhance their videos". (Did they ever stop to ask themselves why or to what end?)

This level of AI paranoia is getting annoying. This is clearly just Google trying to save money. Not undermine reality or whatever vague Orwellian thing they're being accused of.


"My, what big eyes you have, Grandmother." "All the better to compress you with, my dear."


Why would data compression make his eyes bigger?


Because it's a neural technique, not one based on pixels or frames.

https://blog.metaphysic.ai/what-is-neural-compression/

Instead of artifacts in pixels, you'll see artifacts in larger features.

https://arxiv.org/abs/2412.11379

Look at figure 5 and beyond.
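For intuition, here's a toy autoencoder sketch of the idea: the frame gets squeezed into a small learned latent and decoded back, so reconstruction error lands on learned features (eyes, lips, skin texture) rather than on pixel blocks. The layer sizes are arbitrary, and this is obviously not whatever codec YouTube is testing:

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),  # 64x64 -> 32x32
        nn.ReLU(),
        nn.Conv2d(16, 8, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16 latent
    )
    decoder = nn.Sequential(
        nn.ConvTranspose2d(8, 16, kernel_size=4, stride=2, padding=1),
        nn.ReLU(),
        nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),
        nn.Sigmoid(),
    )

    frame = torch.rand(1, 3, 64, 64)  # stand-in for a video frame
    latent = encoder(frame)           # the "compressed" representation
    reconstruction = decoder(latent)  # what the viewer actually sees
    print(frame.numel(), "->", latent.numel(), "values in the latent")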


Like a visual version of psychoacoustic compression. Neat. Thanks for sharing.


Then they should improve the psychovisual grounding of their compressors by a lot.


I'm commenting on the paper, not the sensationalist thread it was posted in.


Agreed. It looks like over-aggressive adaptive noise filtering, a smoothing filter and some flavor of unsharp masking. You're correct that this is targeted at making video content compress better which can cut streaming bandwidth costs for YT. Noise reduction targets high-frequency details, which can look similar to skin smoothing filters.

The people fixated on "...but it made eyes bigger" are missing the point. YouTube has zero motivation to automatically apply "photo flattery filters" to all videos. Even if a "flattery filter" looked better on one type of face, it would look worse on another type of face. Plus applying ANY kind of filter to a million videos an hour costs serious money.

I'm not saying YouTube is an angel. They absolutely deploy dark patterns and user manipulation at massive scale - but they always do it to make money. Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs. Improving compression would do all three. Less bandwidth reduces costs, smaller files means faster start times as viewers jump quickly from short to short and that increases revenue because more different shorts per viewer/minute = more ad avails to sell.


I agree; I don't really think there's anything here besides compression algos being tested. At the very least, I'd need to see far, far more evidence of filters being applied than what's been shared in the thread. But having worked in social media in the past, I must correct you on one thing:

>Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs.

You can't know this. Almost everything at YouTube is probably A/B tested heavily and many times you get very surprising results. Applying a filter could very well increase views and time spent on app enough to justify the cost.


Activism fatigue is a thing today.


Whatever the purpose, it's clearly surreptitious.

> This level of AI paranoia is getting annoying.

Let's be straight here: AI paranoia is near the top of the most propagated subjects across all media right now, probably for the worse. If it's not "Will you ever have a job again!?" it's "Will your grandparents be robbed of their net worth!?" or even just "When will the bubble pop!? Should you be afraid!? YES!!!" And in places like Canada, where the economy is predictably crashing because of decades of failures, it's treated as both the cause of and the answer to macroeconomic decline. Ironically/suspiciously, it's all the same rehashed, redundant takes from everyone from Hank Green to CNBC to every podcast ever, late night shows, radio, everything.

So to me the target of one's annoyance should be the propaganda machine, not the targets of the machine. What are people supposed to feel, totally chill because they have tons of control?


It's compression artifacts. They might be heavily compressing video and trying to recover detail on the client side.


Good performance is a strong proxy for making other good software decisions. You generally don't get good performance if you haven't thought things through or planned for features in the long term.


I had a teacher who said "a good programmer looks both ways before crossing a one way street"


Nitpick: "Is the next Game of Thrones book out yet?"

This is always "No", because the latest book can never be the next book.


I find the Qwen3 models spend a ton of thinking tokens, which could hamstring them given the runtime limitations. gpt-oss-120b is much more focused and steerable there.

The token use chart in the OP release page demonstrates the Qwen issue well.

Token churn does help smaller models on math tasks, but for general purpose stuff it seems to hurt.


Increasingly, where the desks and servers are is critical.

The CLOUD Act and the current US administration doing things like sanctioning the ICC demonstrate why the locations of those desks are important.

