I think a decent middle ground would be to allow contextual advertising and ban personalized advertising. That is, it would be fine to show you ads based on where you are, what you are doing, or what you are searching for on the internet, but not based on what you did on another website or where you had lunch yesterday.
Of course this would add friction to finding the appropriate targets, but it would still allow a pretty decent business for adtech. It would just be a bit different.
(I'm pretty sure that the line between contextual and personalized ads is blurry, but I leave that to be solved by lawmakers and judges. It's kind of their core competence. And to be clear, what I personally think should be done is a much, much stricter ban, but this is a compromise proposal I think should be agreeable to all parties with even the slightest interest in the harm current adtech is doing.)
Don't forget Germany. If you look at the amount of PV built in Germany early this century and make some admittedly strong assumptions about the learning curve, one could argue the Energiewende, then usually called a failure, single-handedly accelerated PV development by decades. I don't recall Germany ever being credited for that.
If cheap LED light bulbs had been around, we wouldn't have needed legislation in the first place. Both Germany's solar subsidies and the EU prohibiting (high-power) incandescent light bulbs were cases where the existing alternatives were bad (solar was way too expensive to be practical, non-incandescent light bulbs sucked), but legislation intentionally created demand for them anyway, in the hope that with demand there would be research and scaling effects that create better, cheaper products. In both cases it worked, even if the transition was a bit painful.
Don't knock CFLs. We still have the very first two we bought back in 1985, 13W Philips Prismatics. They've been in continuous use, both outdoors under a portico. Still going strong.
They're fragile as heck, though, and contain mercury (albeit a small quantity in a relatively less-harmful form). Breakage needs to be handled appropriately, and disposal is as hazardous waste.
LEDs are more efficient, offer better (and often more flexible) light quality, are damnably rugged, and have a far lower toxic material load. Given the balance, I'd be swapping out CFLs (and have been).
I remember some old tidbit about the American westward expansion: most railroad projects failed, went bankrupt, and were sold for pennies on the dollar to their ultimate owners.
A lot of them got built with per-mile subsidies and cashed out via shoddy construction. The ones that focused on long-term financial sustainability more often did fine, but it is a lesson in perverse incentives (though some would argue that afterwards the cheap, overbuilt lines facilitated a much faster and more extensive westward expansion).
Just today there was a newsletter from Construction Physics about strap rail: literally wooden rails with an iron plate strapped on top, laid in the mud. Only in the US. Ten times cheaper, but more expensive to maintain, and gone in years instead of the decades normal iron rails lasted.
By building the initial rails cheaply, they could then bring in equipment and supplies over those rails to rebuild the railroad to a much better quality, and at a lower cost than if they had to bring that equipment and supplies in without the rails in the first instance.
That doesn't mean they always actually invested the money to rebuild properly... but it was sound engineering theory.
The lesson, which we learned in the dot-com era and will likely learn again in the AI era, is that the benefits of step-change new infrastructure technology do not accrue in the long run to the infrastructure builders—the technology only creates the step-change if it finds its way to being a commodity!—but diffuses throughout the new, ultimately much larger, more productive economy as a whole.
But since then there has been an endless stream of negative press against German energy policies, especially in English-speaking countries, so few of these positive assessments are still remembered.
I wonder if one could write a skill called something like "Ask the damn user" that the model could use when, e.g., not all of the needed files are in the context.
For this year's numbers there are two possible stories that come to mind:
1. The job numbers are going more or less constantly downhill, and any new, revised number is going to be worse than the previous one.
2. There is a conspiracy, and the numbers are initially inflated and/or afterwards deflated systematically, for reasons... that are completely incomprehensible to me. I don't even understand whether this conspiracy theory is supposed to be pro- or anti-Trump.
> People will have less or no motivation to create them
Not sure if we surf the same internets... On the web I surf, the more "motivation" (trying to get ad revenue) the author has, the crappier the content is. If I want to find high-quality information, I invariably seek out authors with no "motivation" whatsoever to produce the content (Wikipedia, Hacker News, Reddit with a heavy filter, etc.). I'm pretty sure we would be better off if the whole ad industry vanished.
No, it's much worse than that. In real life you're looking at pages and pages of documents and PowerPoints, and meeting after meeting, if you happen to need a computer/server/configuration that's not on the pre-approved list. (I really wish I were exaggerating. And of course no, not all employers are like this, to state the obligatory obvious.)
> We know that universal solutions can’t exist and that all practical solutions require exotic high-dimensionality computational constructs that human brains will struggle to reason about. This has been the status quo since the 1980s. This particular set of problems is hard for a reason.
This made me a bit curious. Would you have any pointers to books/articles/search terms if one wanted to take a somewhat deeper look at this problem space and where we are?
I'm not aware of any convenient literature but it is relatively obvious once someone explains it to you (as it was explained to me).
At its root it is a cutting problem, like graph cutting but much more general because it includes things like non-trivial geometric types and relationships. Solving the cutting problem is necessary to efficiently shard/parallelize operations over the data models.
For classic scalar data models, representations that preserve the relationships have the same dimensionality as the underlying data model. A set of points in 2-dimensions can always be represented in 2-dimensions such that they satisfy the cutting problem (e.g. a quadtree-like representation).
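To make the scalar case concrete, here is a minimal Python sketch, my own illustration rather than anything from the program discussed here, of a quadtree-style cell assignment for 2-D points (coordinates assumed normalized to [0,1)²):

```python
def quad_cell(x: float, y: float, depth: int = 8) -> int:
    """Map a point in [0,1)^2 to its quadtree cell at `depth` by
    interleaving coordinate bits (a Z-order / Morton code). The cell
    depends only on the point itself, so shard assignment is
    deterministic and independent of insertion order."""
    cell = 0
    for _ in range(depth):
        x, y = x * 2, y * 2
        xb, yb = int(x), int(y)          # next bit of each coordinate
        x, y = x - xb, y - yb
        cell = (cell << 2) | (xb << 1) | yb
    return cell

# Deterministic sharding: each cell lands wholly on one shard.
points = [(0.10, 0.20), (0.11, 0.21), (0.90, 0.90)]
shards: dict[int, list] = {}
for p in points:
    shards.setdefault(quad_cell(*p, depth=4) % 8, []).append(p)
```

Nearby points share cells, so spatially local operations can run shard-locally, which is the sense in which a 2-D representation of 2-D points already satisfies the cutting requirement.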
For non-scalar types like rectangles, operations like equality and intersection are distinct and there are an unbounded number of relationships that must be preserved that touch on concepts like size and aspect ratio to satisfy cutting requirements. The only way to expose these additional relationships to cutting algorithms is to encode and embed these other relationships in a (much) higher dimensionality space and then cut that space instead.
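A classic, far simpler cousin of this embedding idea, my example from the rectangle-indexing literature rather than anything specific to the program discussed here, is the corner transformation: a 2-D rectangle becomes a point in 4-D, equality becomes point equality, and "overlaps query window q" becomes an axis-aligned region of the 4-D space that a space-partitioning structure can cut:

```python
from typing import NamedTuple

class Rect(NamedTuple):
    xlo: float
    ylo: float
    xhi: float
    yhi: float

def embed(r: Rect) -> tuple:
    # Corner transform: the rectangle's two corners form one 4-D point.
    return (r.xlo, r.ylo, r.xhi, r.yhi)

def overlaps_window(p: tuple, q: Rect) -> bool:
    # "The embedded rectangle overlaps window q" is four half-space
    # constraints on the 4-D point, i.e. a box query in embedding space.
    xlo, ylo, xhi, yhi = p
    return xlo <= q.xhi and xhi >= q.xlo and ylo <= q.yhi and yhi >= q.ylo
```

Even this toy shows the dimensionality growth; preserving further relationships (size, aspect ratio, containment, ...) for a cutting algorithm pushes the embedding dimension higher still.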
The mathematically general case isn't computable but real-world data models don't need it to be. Several decades ago it was determined that if you constrain the properties of the data model tightly enough then it should be possible to systematically construct a finite high-dimensionality embedding for that data model such that it satisfies the cutting problem.
Unfortunately, the "should be possible" understates the difficulty. There is no computer science literature for how one might go about constructing these cuttable embeddings, not even for a narrow subset of practical cases. The activity is also primarily one of designing data structures and algorithms that can represent complex relationships among objects with shape and size in dimensions much greater than three, which is cognitively difficult. Many smart people have tried and failed over the years. It has a lot of subtlety and you need practical implementations to have good properties as software.
About 20 years ago, long before "big data", the iPhone, or any current software fashion, this and several related problems were the subject of an ambitious government research program. It was technically successful, demonstrably. That program was killed in the early 2010s for unrelated reasons and much of that research was semi-lost. It was so far ahead of its time that few people saw the utility of it. There are still people around that were either directly involved or learned the computer science second-hand from someone that was but there aren't that many left.
But then that sounds more like that person explained it wrong. They didn't explain why it is necessary to reduce to GRAPHCUT; it seems to me to beg the question. We should not assume this is true based on some vague anthropomorphic appeal to spatial locality, surely?
It isn’t a graph cutting problem; graph cutting is just a simpler, special case of this more general cutting problem (h/t IBM Research). If you can solve the general problem you effectively get efficient graph cutting for free. This is obviously attractive to the extent that you can do both complex spatial and graph computation at scale on the same data structure instead of specializing for one or the other.
The challenge with cutting e.g. rectangles into uniform subsets is that logical shard assignment must be identical regardless of insertion order (and in the absence of an ordering function), with O(1) space complexity, and without loss of selectivity. Arbitrary sets of rectangles overlap, sometimes heavily, which is the source of most of the difficulty.
Of course, with practical implementations write scalability matters and incremental construction is desirable.
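As a point of contrast, here is a deliberately naive Python sketch, mine and not the scheme described above, that satisfies only the order-independence and O(1)-space requirements: derive a grid level from the rectangle's extent, then hash the (level, cell) of its lower-left corner (coordinates assumed normalized to (0,1]):

```python
import hashlib
import math

def shard_for_rect(xlo: float, ylo: float, xhi: float, yhi: float,
                   num_shards: int = 64) -> int:
    # Choose a grid level whose cell size roughly matches the
    # rectangle's extent (finer levels for smaller rectangles).
    extent = max(xhi - xlo, yhi - ylo, 1e-9)
    level = max(0, math.floor(-math.log2(extent)))
    cell = 2.0 ** -level
    cx, cy = math.floor(xlo / cell), math.floor(ylo / cell)
    # Hash (level, cell): the result depends only on the rectangle
    # itself, so assignment is identical regardless of insertion order
    # and needs no global state (O(1) space).
    key = f"{level}:{cx}:{cy}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % num_shards
```

The catch is selectivity: a window query must now probe every level, and heavily overlapping rectangles smear across cells, which is exactly the "without loss of selectivity" requirement this toy scheme fails to meet.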
Well, previously you said that it (presumably "it" broadly refers to spatial reasoning AI) is a "high dimensional complex type cutting problem".
You said this is obvious once explained. I don't see it as obvious; rather, I see it as begging the question: the research program you were secretly involved in wanted to parallelize the engineering of it, so obviously it needed some fancy "cutting algorithm" to make that possible.
The problem is that this conflates the engineering framing with the scientific question of what "spatial reasoning" is. There's no obvious explanation of why spatial reasoning should intuitively be some kind of cutting problem, however you wish to define or generalize a cutting problem. That's not how good CS research is done or explained.
In fact I could (mimicking your broad assertions) go so far as to claim that the project was doomed to fail because they weren't really trying to understand something; they wanted to make something, without understanding as the priority. So they were constrained by the parallel technology they had at the time, and when the available computational power didn't pan out, they reached a natural dead end.
Indexing is a special case of AI. At the limit, optimal cutting and learning are equivalent problems. Non-trivial spatial representations push these two things much closer together than is normally desirable for e.g. indexing algorithms. Tractability becomes a real issue.
Practically, scalable indexing of complex spatial relationships requires what is essentially a type of learned indexing, albeit not neural network based.
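To make "learned indexing" concrete, here is a minimal one-dimensional Python sketch of the general idea, my illustration only, since the scheme referred to above is explicitly not neural-network based and presumably far more involved: fit a model from key to position over sorted keys, then fix up the model's worst-case error with a bounded local search:

```python
import bisect

def build_learned_index(keys):
    """Minimal 'learned index': a straight-line model of key -> position
    over sorted keys, corrected by a bounded local search. Real learned
    indexes (e.g. RMI-style) use hierarchies of models instead."""
    keys = sorted(keys)
    n = len(keys)
    # Least-squares fit of position ~ a*key + b.
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    var = sum((k - mean_k) ** 2 for k in keys) or 1.0
    a = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys)) / var
    b = mean_p - a * mean_k
    # Worst-case prediction error bounds the search window.
    max_err = max(abs(i - (a * k + b)) for i, k in enumerate(keys))

    def lookup(key):
        guess = int(a * key + b)
        lo = max(0, guess - int(max_err) - 1)
        hi = min(n, guess + int(max_err) + 2)
        i = bisect.bisect_left(keys, key, lo, max(lo, hi))
        return i if i < n and keys[i] == key else None

    return lookup
```

The point is that the "index" here is a model plus an error bound rather than an explicit tree; as the key distribution gets more complex, the model has to do more actual learning, which is the convergence of indexing and learning alluded to above.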
Looking through some old DARPA budget docs[1], it seems like there's a chance that what's being discussed here falls under DARPA's "PE 0602702E TACTICAL TECHNOLOGY" initiative, project TT-06.
It was a national security program with no public face. I was recruited into it because I solved a fundamental computer science problem they were deeply interested in. I did not get my extensive supercomputing experience in academia. It was a great experience if you just wanted to do hardcore computer science research, which at the time I did.
There are several VCs with knowledge of the program. It is obscure but has cred with people that know about it. I’ve raised millions of dollars off the back of my involvement.
A lot of really cool computer science research has happened inside the government. I think there is a bit less of it these days, but people still underestimate it.
I'm not surprised that the government does great research, but I wonder how much good that research does if it's unpublished and disappears after budget cuts.
If you can identify blocks of code you need to write that are easy to define reasonably well, and easy to review/verify as correctly written, but still burdensome to actually write, LLMs are your new best friend. I don't know how other people think/write, but I seem to have a lot of that kind of stuff on my table. The difficult part to outsource to LLMs is how to connect these easy blocks, but luckily that's the part I find fun in coding, not so much writing the boring simple stuff.
“Rent-seeking is the act of growing one's existing wealth by manipulating the social or political environment without creating new wealth”
So, renting out a home. It's just that the manipulation of the social and political environment has already been done. Rent sought, not rent seeking.
Rent, as in rent paid to live in a home, fits the definition of "economic rent" perfectly, because housing rent is an example of economic rent. The cognitive dissonance I am pointing out is that seeking economic rent is bad, but using already-created structures to obtain economic rent is... not bad somehow?
>So, renting out a home. It's just that the manipulation of the social and political environment has already been done. Rent sought, not rent seeking.
That makes sense for the land, but not so much for the actual structure that sits on top. The land is going to exist no matter what; the same can't be said of the apartment building.
That’s the “without creating new wealth” part of the definition of rent seeking. Now, I’ve lived in a lot of rentals in my life - and not one of my landlords built the home they were renting out. Most or all of those homes had the cost of building them paid off decades ago.
And not all renting out is rent seeking! On occasion in cities with decreasing home prices, the landlord is subsidising the tenant. That is rare though!
Agreed - there are rare circumstances when landlords are losing money. When that happens landlords will usually seek rent increases, or changes to housing / zoning / development rules, etc.
"Paying" is a bit too ambiguous term. Let's say we go to have a lunch, but I forgot my wallet at the office. You pay my lunch and once we are back at the office, I pay you back. Who paid my lunch, you or me? Your company pays VAT in the technical sense you paid my lunch and your company does not pay VAT in the economical sense I paid my lunch.