Hacker News

Is it just me, or does every single AI innovation lately seem to be.. pointless?

Like, if I had the time and resources to work in AI, stuff like face swapping and even Stable Diffusion would be about the last things I would ever work on.

What would I start with? Something more like the MS Office software of the 1980s, only automated. I would have real-world applications that, wait for it, perform work so I don't have to. AKA automation.

TBH this stuff exhausts me to such a degree that I almost can't even follow it anymore. It's like living in a bizarro reality where nothing works anymore. A waking nightmare. A hellscape. Am I the only one who feels this way?



> What would I start with? Something more like the MS Office software of the 1980s, only automated. I would have real-world applications that, wait for it, perform work so I don't have to. AKA automation.

But that's what Stable Diffusion et al. are: automation. It's just not automation for what you spend your time on, but it is automation for what countless people spend their time on: producing stock photos, drawings, and clip art for endless articles and other generic content.

I also think that the current "creative automation" comes from a perspective where many people have said for a long time that computers will of course be able to do boring, repeatable jobs like counting numbers and what not, but it will never be able to do the job of an "artist". But now, it seems like at least a subset of "artists" will be out of job unless they find a way to work with the new tools rather than against them.


I think it depends on whether you are a designer or an artist. In my mind, a designer works from a specification for a client and produces it. Designers are interchangeable; they don't put their signatures on the work because it's not theirs, it's the client's.

Artists, however, won't have to worry about AI. AI music already exists, but people still mainly support real artists on streaming platforms and go to concerts, because provenance matters for artists and it doesn't for designers. Could an AI make a Warhol? Probably not, because what made Warhol's art popular was that he essentially worked outside of the training set and provided something previously unseen. Machine learning is bound to its training set. You can use it to make generic corporate bathroom art for hotels, or to fill empty picture frames, but there will still be real artists and galleries and concerts and museums, because oftentimes people value the provenance of the artist even more than the work itself.


> AI music already exists, but people still mainly support real artists on streaming platforms and go to concerts, because provenance matters for artists

I don't think that's what's going on. The top pop singers are generally already singing things written by other people against accompaniment written by other people. I think by far the biggest reason few people are listening to AI music is it's just not as good as human music yet?


That's true, but people are still listening for the artist and not the talent team. Drake is famous for having his writers live in tents in the studio. I bet songwriters desperate enough for work to live in a tent are cheaper than a contract with whoever is trying to sell this stuff to major record labels.


So if the talent team becomes partly AI-based, no change on public perception. They still listen to "the artist".


That would depend on whether it's cheaper to have an AI team or a real team. For example, McDonald's had all the tech to replace burger flippers by the 1980s, if not earlier: just retool a robot that builds Fords to hold a spatula. The reason they still hire people is that it's cheaper than what the maintenance contract on such a robot costs.


That's a good point

Just like how "Calculator" used to be a job title for humans who manually performed calculations.


"AI music already exists, but people still mainly support real artists on streaming platforms and go to concerts, because provenance matters for artists"

I don't listen to my favorite music because of who made it but because I like it.

If music that I like started being made by AI, I'd listen to it without hesitation.

In fact, a lot of the music being made today is already a collaboration between humans and machines, and has been for a long time.


But I get what zack is pointing at, it's automating the wrong stuff. It's like still having to work in a coal mine but thank god you don't have to take care of kids at night because someone invented autosnatcher.

A lot of the world is grinding in pain due to extremely bad software, while the money and brainpower keep pouring everywhere but there. Well, not entirely: a lot of money was thrown at these bad applications, but it evaporated due to software services companies' subpar engineering.


Thank you, that's what I was trying to say. Tech innovation today is almost always some variation of "let them eat cake."

I had the fortune/misfortune of moving furniture for 3 years right out of college 20 years ago to support my internet business at the time. I saw how the vast majority of people toil away their lives to make rent and child support each month. That experience shattered my will to such a degree that I came out of it a different person.

My concern is that the divide between working poor and techie riche is now so vast that they can't even see one another. If the wealthy and powerful could see, they would invest less in profitable schemes and more in shared prosperity. But they can't. So wealth inequality continues to grow unabated, with AI being just another tool to profit from another's labor or eat their lunch outright.


It's a big question I think about regularly. I have quite a lot to say about it. I think the whole structure is subtle, and it's not as simple as a rich/poor divide. I'll be back.


These innovations aren't really pointless though? They may not be relevant to your interests, but they have practical applications and Stable Diffusion especially is already seeing a lot of interest from artists and people who need art but don't need something 'custom' enough to pay for a human. In both cases they are saving lots of 'basic' work that might have either been done by a human before or not done at all.

Plus, these are the things we hear about because they look flashy. There is plenty of work behind the scenes on applying these innovations to more 'practical' matters like automation.


Exactly the opposite here. The recent progress in text/image/sound generation is the first thing that's actually made me interested in ML/AI. If I could restructure market priorities so all of the data scientists working on ad tech and recommendation engines and virtual assistants were working on this stuff instead I'd do it in an instant.

I can also say from experience that extreme negative feelings like "other people are doing things I don't find interesting and it makes me exhausted and miserable" were, for me, a sign of clinical depression.


You're right, and I agree with your conclusion about depression.

The catch is that I feel most depression is environmental today. It's singularly exhausting to struggle while watching people who have the means to enact real innovation and change squander their potential on yet another gimmick.

The only thing that's really helped me was to realize that it's all a gimmick. Life itself is a divine comedy. We each determine our own definition of meaning since science can't provide one.

Using the book/movie Contact as an example, basically my philosophy has shifted away from Ellie Arroway (Jodie Foster) and more towards Palmer Joss (Matthew McConaughey). In it, he wrote a book called "Losing Faith: The Search for Meaning in the Age of Reason":

https://www.youtube.com/watch?v=HFcHpamkHII

This spiritual battle we're engaged in between science and faith has been with us since the beginning. It's crushing down on us harder and harder now as science works to stamp out the last vestiges of our individuality and humanity. That sentiment might not make a lot of sense to many on this site, but it's the daily lived experience of billions of people forced to spend nearly the entirety of their lives toiling under subjugation (working towards another person's life purpose) to survive.

You're also right about interest and motivation. I find AI to be perhaps the last frontier, since there's a chance it could explain consciousness and maybe even give us access to something like an all-knowing oracle, or even a means to contact aliens. It's pretty much the most interesting thing there is, and why I got into computer programming in the first place. I'm just sad, quite literally, that I squandered so much time running the rat race and never got a chance to contribute and "get real work done".


"640K ought to be enough for anybody." - etc etc etc.

It's becoming increasingly clear that many of these techniques that for the past 10 years have been derided as "pointless" or novelties now have real applications.

For one example, the automotive industry: ignore the hype around autonomy, and computer vision is still already delivering real benefits for active safety systems. I use GitHub Copilot every day; it's not perfect, but it's good enough to add value to my workflow. Apple's automated tagging of my photo library via computer vision lets me discover hundreds of images of my life and family I'd forgotten all about. Stable Diffusion can clearly replace an artist in some cases, ignoring the moral/ethical issues.

I'm extremely excited for future of all this, frankly. The first step into such a new paradigm is always hard - people made the exact same "home computers are pointless" arguments in the late 70s/early 80s. I don't think anyone agrees with that anymore...


> I use GitHub Copilot every day; it's not perfect, but it's good enough to add value to my workflow.

Wow, I thought this Copilot thing was basically a joke. People use it to write software? I'm really curious now. Can you share examples of the code you're writing with it?


You can use it for just about anything that is text - it will complete configuration files, not just code, and it even does a reasonable job a lot of the time writing method docs automatically. Thinking of it as just for code is already too narrow. Sharing code examples is also of little relevance - just try it yourself and see if it fits your needs. The dollar/value ratio works out for my needs, saving me trips to docs or search engines for syntax confirmation - it may or may not for yours.

If it gets it wrong sometimes, I don't really care; all it takes is pressing tab to complete, or not, if the suggestion is garbage. It's just an extension of the tab key's functionality, which is why it's so easy to use every single day when it integrates with most text tools.

I also think if this is what we can have today, the future of code completion tools is very exciting.


> I use GitHub Copilot every day; it's not perfect, but it's good enough to add value to my workflow.

Could you please share an example? I saw a description of copilot but couldn't imagine what it might be useful for.


Neural processing is ubiquitous in phones (all the flagships do neural enhancement on images, and IIRC Apple does gesture recognition neurally), and all Ice Lake and newer laptop chips (plus, lol, Rocket Lake), as well as all Zen 4 chips, have neural instructions too. NVIDIA has had it for two generations now. It's pretty well spread across the stack at this point, and it's getting used for different things in different places. Things like face recognition or gesture recognition are natural fits in a low-power environment, hugely reducing the power consumption of those features. PCs and servers can do content generation or other larger inference tasks.

Neural game upscaling is a huge win too. The neural-TAAU upscalers (DLSS 2.x and XeSS) perform a lot better than FSR still, even compared to FSR 2.0/2.1. And there will be other things they can figure out how to ML-accelerate as adoption continues, I'm sure. It enables some solutions to hard problems that don't have good deterministic algorithms, and you can run a lot bigger models than you can without acceleration.

Also, Fabrice Bellard (of course) wrote a neural compressor: https://bellard.org/nncp/

It's also likely going to be useful in level design and asset creation, although of course, as we're seeing now with Stable Diffusion, there are some interesting legal questions around content creation.

The optical flow engine (not neural) is another big win for computer vision stuff too; it offloads a few tough tasks to hardware, like object tracking and motion estimation. Hooking into that with ZoneMinder or something would be awesome.
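For the unfamiliar, the classic non-neural motion-estimation idea those engines accelerate is block matching. A toy 1-D sketch (the hardware works on 2-D pixel blocks; all data and names here are invented for illustration):

```python
# Toy 1-D block matching: find the displacement that best aligns a block
# from frame A inside frame B, by minimising the sum of absolute
# differences (SAD) over a small search window.
def best_shift(block, frame_b, start, max_shift=4):
    best_d, best_sad = 0, float("inf")
    for d in range(-max_shift, max_shift + 1):
        pos = start + d
        if pos < 0 or pos + len(block) > len(frame_b):
            continue  # candidate window falls outside the frame
        sad = sum(abs(a - b) for a, b in zip(block, frame_b[pos:pos + len(block)]))
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

frame_a = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 5, 9, 5, 0, 0, 0]  # same "object", moved right by 2

print(best_shift(frame_a[2:5], frame_b, start=2))  # prints 2
```

Real engines do this (and much fancier variants) per block across the whole frame, which is exactly the kind of brute-force search that benefits from dedicated hardware.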

I'm also stoked for shader execution reordering. This is similar to what Intel calls "ray binning": it's basically a best-effort "re-alignment" of threads (state, etc.) into the most similar execution group for memory read alignment/coalescing. I think in one of the raytracing implementations (Intel's or NVIDIA's) they were coalescing based on material.

Intel talks about theirs here: https://www.youtube.com/watch?v=SA1yvWs3lHU

Ada whitepaper: https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvid...

I am interested to hear what AMD is doing with RDNA3. Supposedly there is a new RDNA3 ISA instruction for matrix acceleration, but rumors are it's not a full high-performance matrix unit like CDNA and >=Turing have? I don't know why you'd add an instruction without some hardware acceleration, though. So maybe less than the other implementations... which is a little disappointing.


We see these negative articles because they get clicks and attention, but there are actually a ton of legitimate uses for this. In the VFX industry, for example, doing a face swap (sometimes called a replacement) is fairly standard practice. Car crash scene where a union stunt performer is being paid to stand in for an actor? Instead of finding one who looks as similar as possible to the actor, you can have any competent compositor just swap the actor's face onto the stunt performer. Giving said compositor more tools to do this faster and easier is great - maybe now they can just replace a few individual frames, let the AI do the rest, and then tweak the end result as needed.


Nvidia is a graphics company, so they are automating work so that they don't have to do it - that work being making graphics.


I think it's a move in the right direction - but I've never been a fan of convolutional networks.

There's significant potential value in networks that can regenerate their input after processing.

It can be used to detect confusion - if a network can compress + decompress a piece of input, any significant differences can be detected and you can tell that there's something that the network does not understand.

This sort of validation might be useful, if you don't want your self driving car to confidently attempt to pass under a trailer that it hasn't noticed - which tends to kill the driver.
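That reconstruction-error idea can be sketched in a few lines. This is a toy stand-in, not a neural network - a downsample-and-interpolate round trip plays the role of the encoder/decoder, and all numbers and the threshold are invented for illustration - but the validation logic is the same: inputs the "model" can represent come back nearly unchanged, while unfamiliar inputs produce a large error you can flag.

```python
# Toy autoencoder stand-in: the "encoder" keeps every other sample,
# the "decoder" interpolates between them. Smooth signals survive the
# round trip; signals the model can't represent come back distorted.
def compress(signal):
    return signal[::2]  # the bottleneck: half the samples

def decompress(code):
    out = []
    for a, b in zip(code, code[1:]):
        out.extend([a, (a + b) / 2])  # linear interpolation
    out.extend([code[-1]] * 2)  # pad the tail back to full length
    return out

def reconstruction_error(signal):
    rebuilt = decompress(compress(signal))
    return sum((x - y) ** 2 for x, y in zip(signal, rebuilt)) / len(signal)

smooth = [i / 10 for i in range(16)]  # in-distribution: reconstructs well
spiky = [0, 9] * 8                    # out-of-distribution: large error

THRESHOLD = 1.0  # arbitrary cutoff for "the network is confused"
for name, sig in [("smooth", smooth), ("spiky", spiky)]:
    status = "confused" if reconstruction_error(sig) > THRESHOLD else "ok"
    print(name, status)  # prints: smooth ok / spiky confused
```

In the self-driving case, a high reconstruction error on part of the scene would be the signal to fall back to a more cautious behavior rather than act confidently on input the network doesn't understand.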


You just have a very slim vision of what work people do. The tools listed could be added to photo, special effects and film editing tools and provide value.


Vision and language are two of the pillars of human intelligence.

Perfecting these two things opens whole universes of potential.

Ask a robot to “do the dishes”. It has to know what that means, find the dishes, find the sink, the on/off mechanism for the water. These are all language and vision tasks.

The balance, navigation, picking/placing, etc. seem like minor subroutines fed by vision metadata.


I like that the term chosen by AI researchers for many of these applications is "transformer". That way, I can look forward to a future where transformers do my dishes.


"Autobots, wash up!"


"What is my purpose?" "You wash the dishes." "Oh. My god."


They aren't doing it because it's useful. They are doing it because it's easy, and they take what they can get. Further, it's not farfetched to imagine that models that can understand and predict how objects and faces move are a stepping stone to more useful stuff.


You're not the only one that feels that way. I wish I could add something constructive beyond this.



Well, we DO have GitHub Copilot, which I have yet to use.


You're right, much of what I'm struggling with is my own inability to keep up with the rapid pace of change. I'm turning into an old man yelling at clouds, like Grampa Simpson.

Then I project when the answer is just under my nose.

It's good to be reminded from time to time that what we're asking for may already be here, we just need to realign our perception to see it. I truly believe that's what meditation/prayer and manifestation (magical thinking) are all about.


> Is it just me, or does every single AI innovation lately seem to be.. pointless?

... but, I think you have really missed the point! Maybe you think government is here to help rather than to govern minds too! And that what is shown in the news is a good faith attempt to relay reality!

If you are managing the world, companies, etc - perception is everything! If you are able to control what people perceive, and they receive everything via a screen, well, who cares about truth? The imagery, the ideas - that needs to convince... and that is pretty much all that you need to manage the masses.



