Any ideas how the signal is transmitted once the drone is underwater? Typical RC transmitters run 2.4 GHz. Some systems use 700 MHz. The former wouldn't penetrate at all, the latter might penetrate to a shallow depth, but I think that what's shown in the video is deep enough that the signal would get attenuated. There doesn't seem to be a tether and the drone doesn't seem to have any computer vision capabilities to fly itself whilst under water. I wonder how they solved this.
As far as I can tell from their report (https://gitlab.com/hybrid-drone/paper), the authors did not consider underwater RF transmissions in their prototype design and simply use standard COTS components typically used for DIY UAVs.
An IMU determines rotation rates and acceleration without vision. A drone can fly and hold attitude with reasonable accuracy without any outside help; it doesn’t need cameras or external control. GPS is only needed to counteract drift, and that can be neglected in the very short term.
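Roughly the idea, as a sketch (a simple complementary filter; the actual flight controller on this drone is unknown to me, and the gains here are illustrative):

```python
import math

# Minimal complementary-filter sketch of IMU-only attitude estimation.
# Gains, rates, and axis conventions are illustrative placeholders.
ALPHA = 0.98   # trust the gyro short-term, the accelerometer long-term
DT = 0.002     # 500 Hz update loop

pitch = 0.0    # estimated pitch angle in radians

def update_pitch(gyro_rate, accel_x, accel_z):
    """Fuse a gyro rate (rad/s) with an accelerometer gravity reading."""
    global pitch
    # Integrating the gyro is accurate short-term but drifts over time.
    gyro_pitch = pitch + gyro_rate * DT
    # The accelerometer gives an absolute, if noisy, gravity reference.
    accel_pitch = math.atan2(-accel_x, accel_z)
    # Blend the two: gyro dominates moment to moment, accel corrects drift.
    pitch = ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch
    return pitch
```

Position still drifts without GPS or vision, but attitude stays usable for long enough to punch through a few metres of water.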
I actually just submitted something quite relevant - a side project we soft-launched today that uses LLMs to analyse and group browser tabs across various browsers and devices. It turns out that it’s super demanding for today’s models, but with some optimisations and experimentation, we made it work pretty well.
Running local LLMs to get the job done might be a tall order right now, but in the near future I don't see a reason why it couldn't be done, and it would definitely be quite helpful to people who have either many tabs open or a lot of bookmarks they want to organise.
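Something along these lines already sort of works locally today (a sketch using Ollama; the model name and prompt are just placeholders):

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

tabs = [
    "Best trail running shoes 2024",
    "AgglomerativeClustering - scikit-learn docs",
    "Easy weeknight carbonara recipe",
]

# 'llama3' is just an example model name; any local instruct model would do.
prompt = (
    "Group these browser tab titles into named categories, "
    "one 'category: title' line each:\n" + "\n".join(tabs)
)
response = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
print(response["message"]["content"])
```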
Ironically, Opera's extension shop is dead and no one reviews new submissions, but for anyone else who is interested in learning more about how we use LLMs to improve the browsing experience or even give the product a shot, here is a link:
This is useful. Why the step of involving LLMs, though? I note you cluster the tabs and then involve GPT-4 to name them. But my tab groups don't usually need names; the icons alone tell me most of what I need to know. Could this work better locally using much smaller sentence-transformer models?
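Something like this, say (all-MiniLM-L6-v2 is just one example of a small model, and the threshold is a guess that would need tuning):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

titles = [
    "Best trail running shoes 2024",
    "Marathon training plan for beginners",
    "Easy weeknight carbonara recipe",
    "Authentic cacio e pepe",
]

# A small (~80 MB) local model; no API calls needed.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(titles)

# Group tabs by embedding distance rather than asking an LLM.
clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=1.0)
labels = clusterer.fit_predict(embeddings)
for title, label in zip(titles, labels):
    print(label, title)
```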
That's a good question. If you have many tabs open from the same few websites (depending on what those websites are), maybe just grouping them based on domain names would be enough to provide context.
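That is, the trivial version, with no model at all:

```python
from collections import defaultdict
from urllib.parse import urlparse

def group_by_domain(urls):
    """Bucket tab URLs by hostname; often enough context on its own."""
    groups = defaultdict(list)
    for url in urls:
        groups[urlparse(url).netloc].append(url)
    return dict(groups)
```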
But LLMs are needed if you want the product to have a deeper understanding of everything that you are reading and really organise it into groups. You might be reading about architecture across three devices, multiple browsers, from a bunch of different websites. This gives you the opportunity to reunite them and really dive into that topic when you need to.
LLMs are also used to create summaries of each page. So, if some content takes 30 minutes to read, you can have them extract all the interesting information for you in bullet points, and based on that you can decide if it's worth spending the 30 minutes or if you would rather just close that tab.
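Conceptually, the summarisation step is along these lines (the prompt, model name, and truncation here are simplified placeholders, not our production setup):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(page_text: str) -> str:
    """Condense a long page into bullet points so the reader can triage it."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarise the article below as five short bullet points."},
            # Crude truncation so long pages fit the context window.
            {"role": "user", "content": page_text[:12000]},
        ],
    )
    return response.choices[0].message.content
```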
So, in short, it's about capabilities. You can have just simple statistical models and regex rules filtering similar websites into predefined categories, or you can have a tool that truly organises your reading and shortens it for you. But for the latter, you need complex models handling a lot of context.
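The "simple" end of that spectrum looks something like this (toy categories, obviously):

```python
import re

# Toy baseline: hand-written patterns mapping URLs to fixed categories.
CATEGORIES = {
    "shopping": re.compile(r"amazon|ebay|cart|checkout", re.I),
    "dev": re.compile(r"github|stackoverflow|docs\.", re.I),
    "news": re.compile(r"bbc|reuters|nytimes", re.I),
}

def categorise(url: str) -> str:
    for name, pattern in CATEGORIES.items():
        if pattern.search(url):
            return name
    return "other"
```

It works until the first page that doesn't match a pattern; understanding content rather than URLs is exactly where the bigger models earn their keep.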
There is a more egregious example of Google abusing its dominance with Google Analytics that people often don’t talk about.
With Google Analytics, they offer a free service. Obviously, competitors can’t do that if their business is solely analytics and they don’t have other ways to monetise the service. That’s not the problem. The problem is how Google then use that free service to gain an advantage with their core money-making business - AdWords and DV360:
As anyone in the industry knows, due to many factors (including different attribution methodologies), ad platforms show significantly different results from analytics platforms. The discrepancies easily reach into the double-digit percentages. Any discrepancy of that magnitude is concerning to an advertiser running paid ads. The same discrepancy exists between Google AdWords and Google Analytics too. Except when you integrate the two. Then Google AdWords data are plugged directly into Google Analytics, eliminating the discrepancy and making it seem to any advertiser as if Google AdWords is a far more reliable traffic source than any of its competitors.
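To make the scale concrete (the numbers here are invented purely for illustration):

```python
# Invented numbers, purely to illustrate the magnitude involved.
ad_platform_conversions = 1000  # what the ad platform reports
analytics_conversions = 780     # what the analytics tool attributes

discrepancy = 1 - analytics_conversions / ad_platform_conversions
print(f"{discrepancy:.0%}")  # 22% - a gap big enough to move budgets
```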
It would be very difficult to estimate the monetary value of what Google have earned by doing this. It is a much more sophisticated and harder-to-detect example of dominance abuse.
There is a very important point made in this article that is unfortunately lost in-between all the other arguments:
“Meanwhile, the ability to track users wherever they go tends to shift ad revenue from higher quality sites to less reputable ones. “The way the adtech system works is, it follows the reader from Wired.com all the way down to the cheapest possible place, the basement bottom-feeders on the internet, and will serve you the ads there.””
Many people don’t like seeing ads. But they do like receiving the free content that those ads pay for. And would find it far more annoying if all the content was locked behind paywalls. Whether we like it or not, digital advertising is a powerful equaliser that gave free access to vast amounts of information to anyone from any country and from any income bracket.
But the shift towards audience targeting has stripped high-quality content creators of their share of the value they create and has instead spread it out across countless click-baity websites and apps designed entirely to profit off of targeted advertising.
So diminishing our reliance on targeted advertising is not only great for the user, it would be truly game-changing for high-quality content providers as well. For some reason, this mutually beneficial outcome is often forgotten and I’m glad that Wired pointed it out.
The claim about quality sounds incredibly dubious and really gives away that the game here is felony interference with a business model. "High quality" is downright narcissistic, with delusions of grandeur.
High production value can be, and often is, absolute shit. The Gell-Mann amnesia effect says hi. The problem isn't the competition; it's that the old guard sucked at their job.
I am no fan of targeted advertising, but I despise attempts to control the internet for the sake of dinosaur propagandists. They need to just die already and stop crapping out bullshit.
You don’t need targeted advertising to achieve that. You don’t need FB, Google, and a thousand other lesser-known companies tracking every step you make in order to know that you might be interested in cycling. Instead you, as a small advertiser, would target the right context and would only show your ads in articles specifically dedicated to cycling or whatever other niche you are after.
Sure, there are niche categories that would be hard to reach. But for your two specific examples you could use these simple sets of criteria:
1. Target specific articles with advice for older runners in running publications.
2. Target Italian recipes in cooking websites only for users located in Denver.
You don’t need to do those things manually; it can be automated, as in the sketch below. Sure, it might not deliver the scale that FB and Google are promising, but to say that it’s impossible to do contextually is not accurate.
Use Google Search ads to capture all direct intent.
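A toy sketch of that automation (the campaign definitions and page fields here are hypothetical):

```python
# Toy contextual matcher: ads are picked from page metadata, not user profiles.
# Campaign definitions and page fields are hypothetical.
CAMPAIGNS = [
    {"ad": "carbon road bike", "topics": {"cycling"}, "geo": None},
    {"ad": "senior running gear", "topics": {"running", "seniors"}, "geo": None},
    {"ad": "italian cooking class", "topics": {"recipes", "italian"}, "geo": "Denver"},
]

def pick_ads(page_topics, reader_geo):
    """Match campaigns to the article's topics (and coarse geo), not the person."""
    return [
        c["ad"]
        for c in CAMPAIGNS
        if c["topics"] & page_topics and c["geo"] in (None, reader_geo)
    ]

print(pick_ads({"recipes", "italian"}, "Denver"))  # ['italian cooking class']
```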
I think it’s well defined in the article - the author means behavioural advertising that has to do with targeting any information specific to the user (whether we consider that personal or not).
The proposed alternative is contextual advertising which puts the targeting emphasis entirely on the context in which the ad is shown, not on the person seeing the ad. The idea is that if it is executed correctly, contextual targeting will lead to the same person anyway, just without sacrificing his or her privacy.
I completely understand your argument, and the article's argument. I've used plenty of contextual advertising, and it's better than what we call ROS (Run Of Site), but it's worse than targeting based on interests.
But let me give you an inside view: there's been a war by major media groups to try to take down Facebook and Google (let's see what they come up with for Amazon).
For them, this is the best solution for news websites, which since 2013 have seen their advertising budgets siphoned off to Google and Facebook.
All because they cannot compete with that level of data granularity - and trust me, they tried! Hell, some tried to build their own data systems to get more refined targeting.
I can even tell you one of the solutions on the table was to force Google and Facebook to share their data with media groups.
So this "white knight in shinning armor" article, despite showing valid arguments, is most definitely biased, because it's in their best interest that contextual advertising prevails and interest/behavioral falls. For years their agenda is to get back media investment from the big boys.
So bear this in mind when you read these articles.
Sure, it is in the author’s employer’s best interest for the industry to switch to contextual advertising. But it is also in the best interest of the end user. So why is that a bad thing?
I also don’t believe that behavioural advertising is better by definition. Even the article mentions some research to the contrary. I’ve seen it from first-hand experience as well. Which, of course, is anecdotal. But what is not anecdotal is the fact that the world’s largest and most profitable advertising product, Google Search, relies entirely on contextual rather than behavioural targeting.
The fact that publishers so far have not been able to do something about FB’s and Google’s duopoly doesn’t necessarily mean they will never be able to. Or at least I certainly hope that it doesn’t.
>But it is also in the best interest of the end user. So why is that a bad thing?
Like I said, the bad thing would be more wastage and potentially less relevant advertising.
The problem with advertising is that a campaign's success is a multi-variable equation that fiddles with human attention, retention of attention, memory, and many other factors that make us human and regulate our perception.
For example, I don't consider Google Search contextually based. I put it in a category of its own, because you're tapping into people declaring their intent and interest, and reducing that to context is dangerous - because it's more than that.
I used to say to clients: Facebook knows what you like, Google knows what you're looking for, and Amazon knows what you're buying.
Google does have Contextual Segmentation but it's used mainly for display advertising (GDN - Google Display Network). There was a time you could even select/exclude domains, but I don't know if that's still available.
Also, Facebook tried to leverage their data for display advertising outside of Facebook, and they ended up closing Audience Network... so even interest-based targeting seems to work only within a context itself.
>The fact that publishers so far have not been able to do something about FB’s and Google’s duopoly doesn’t necessarily mean they will never be able to.
This would be a long conversation, but briefly: I think they tried to compete by placing themselves next to Facebook and Google, when they should have set themselves apart and leveraged their strengths - namely, content quality.
Thank you for the thoughtful comment. I do actually agree with many of your points. And you are right, this could be a much longer discussion. Let me just add this - your last sentence doesn’t have to be in past tense. I believe publishers still can set themselves apart and leverage their strengths. The next couple of years will show if GDPR and the CCPA can actually help to achieve that.
Author of the story here. Just made an account so I could chime in to say how great it is to see such a substantive discussion. On the point about being biased in favor of the media: Yes, absolutely, I believe a democratic society requires a thriving independent press to function, and that public policy should try to help that happen.