How does this in any way reflect on the privacy of either DuckDuckGo (I don't think they ever claimed, or even vaguely implied, that if I search for, say, Banana, no one else will be able to see the results page for Banana; that would be absurd) or Google, which is simply indexing publicly accessible pages, as it has done since coming into existence?
What violation of privacy or hint thereof is happening here?
None. Hence my question about the curt responses and passive aggression. DDG has positioned itself as the anti-Google, and lo and behold, Google still finds out what's being searched there, though not necessarily who searched it. If you're a bit cynical, it's vaguely interesting.
> when you can't toss a printf debug or log statement into a function without changing function signatures all the way up
But you absolutely _can_ do this in Haskell, via trace from Debug.Trace.
Sure, it's considered very unsafe and shouldn't be used in production, but for printf debugging it's fine.
Admittedly, production-ready logging requires type signature modification, but if you subscribe to Haskell's idea that side-effects should be reflected in type signatures, then I don't see that as excessive.
That depends heavily on where you live. If the most common type of power outage is one that affects a small region, it's quite likely internet connectivity could remain unaffected. I had this experience in northern Sweden, where I had a couple of power interruptions but zero network interruption.
It is not just about power outages, it is also about cloud services closing, having outages etc - as in this post. If you only have devices that will work locally, that becomes much less of a problem.
Worth noting that the Institute for Energy Research was founded by the ex-director of public policy for Enron and has received donations from the likes of ExxonMobil and the American Petroleum Institute. So while this post may well contain valid points, it was produced by an organization likely to have a bit of a conflict of interest when it comes to anthropogenic climate change.
I think this really is not worth noting, and the rapid appearance of a bunch of astroturfing comments along these lines is disappointing.
This post deserves to be dealt with on its own, and it’s perfectly fine as a statistical commentary on these graphs.
It happens to be wrong in the conclusion, but not for any kind of political bias-based reason.
If you aren’t willing to engage with people who hold a different view and have the power to stop your preferred policies, why would you expect them to do the same for you, and why would you expect your policies to ever be enacted?
You’re looking for reasons to dismiss something on purely superficial grounds, and effectively disallowing any possibility that certain groups could actually present data that forces you to change your beliefs.
It strikes me as even worse and more dangerous dogmatism than what comes out of the right-wing climate denying think tanks.
So, based on some superficial judgement, you decided it must be a hit piece without reading it, confirmation-bias-googled some funding sources to reinforce your view, still without reading it, and then used a rhetorical quip to act like this is justified?
It seems like you’re just admitting to what I claimed. You dismiss things based on superficial details but don’t admit they are superficial.
To be very clear, I don’t agree with the OP post at all, but it was thoughtfully written and the point about statistical significance being very misleading for inference goals is really valid, especially for climate predictions.
It actually takes some statistical effort to point out why the post’s conclusions aren’t valid (e.g. it’s the trend of temperature increase that affects policy, not the nearness of the observations to the model’s mean prediction).
As someone who works professionally in statistics, I say this post is of higher quality than a large amount of even published research, especially in social science. It seems like fair, un-extreme skepticism that deserves to be honestly and sincerely engaged with, not dismissed out of hand because you spent 5 seconds googling the buzzword name of a funding entity you dislike.
There's only so much time in the day, and there's so much information, presented in so many ways, and it's so common for these sorts of things to be biased to the point of propaganda, that I've developed a heuristic that says (in this case) I shouldn't waste my time with this particular piece.
However,
> As someone who works professionally in statistics, I say this post is of higher quality than a large amount of even published research, especially in social science
...this makes me sit up and take notice. I'll go back and actually read the thing now.
- - - -
I got as far as the first paragraph:
> As an economist who writes on climate change policy, my usual stance is to stipulate the so-called “consensus” physical science (as codified for example in UN and US government reports), and show how the calls for massive carbon taxes and direct regulations still do not follow. For an example of this discrepancy that I often mention, William Nordhaus recently won the Nobel Prize for his work on climate change economics, and he calibrated his model to match what the UN’s IPCC said about various facts of physical science, such as the global temperature’s sensitivity to greenhouse gas emissions, etc. Even so, Nordhaus’ Nobel-winning work shows that governments doing nothing would be better for human welfare than trying to hit the UN’s latest goal of limiting warming to 1.5 degrees Celsius.
Yeah, thoughtfully written crazy-talk waste-of-time bullshit. Like a wedding cake made out of Crisco, I'm sorry I even tasted it.
It sure seems like you did not try to sincerely read it. That paragraph has virtually nothing to do with the rest of the post. It just reinforces that you seem to only consider opinions or arguments that start out from a position you already agree with, and are happy to dismiss things without reading them if you don’t.
> It sure seems like you did not try to sincerely read it.
I started out sincerely, and I did dig a little further than I indicated, but the author failed another two heuristics already in the first paragraph. Specifically:
> my usual stance is to stipulate the so-called “consensus” physical science
I read a lot of fringe science (crackpots) for fun and to scan for up-and-coming new science/tech, and that's exactly the kind of sentence a crackpot writes. He uses "so-called" and scare quotes for the idea of consensus physical science. That's how crackpots talk. Not damning in itself, but a very bad sign.
Then:
> Nordhaus’ Nobel-winning work shows that governments doing nothing would be better for human welfare than trying to hit the UN’s latest goal of limiting warming to 1.5 degrees Celsius.
Now that is classic "black is white; up is down" inverted-logic propaganda. It's straight out of the playbook.
Even so, I clicked through to see wtf he's talking about [1], and he's got some table (Table 4 on [1]) and he says:
> The first row of the table shows what the DICE model—as of its 2007 calibration—estimated would happen if the governments of the world took no major action to arrest greenhouse gas emissions. There would be significant future environmental damages, which would have a present-discounted value of $22.55 trillion.
So, ouch, right? But then he says:
> In contrast, the second row shows what would happen if the governments implemented an optimal carbon tax. Because emissions would drop, future environmental damages would fall as well; that’s why the PDV of such damage would be only $17.31 trillion. However, even though the gross benefits of the optimal carbon tax would be some $5 trillion as a result (because of the reduction in environmental harms), these gross benefits would have to be offset by the drag on conventional economic growth, or what is called “abatement costs.” Those come in at a hefty $2.20 trillion (in PDV terms), so that the net benefits of even the optimal carbon tax would be “only” $3.07 trillion.
Notice that he's talking about economic benefits? "$5 trillion ... reduction in environmental harms" ... that's endangered species that didn't go extinct, forests and rivers and seas that aren't cut down or dried up or poisoned, fisheries that haven't collapsed. Ya feel?
So there it is. When he says "governments doing nothing would be better for human welfare than trying to hit the UN’s latest goal of limiting warming to 1.5 degrees Celsius" he actually means the welfare of the economy. The global ecology is still fucked to the tune of $22,550,000,000,000 in the do-nothing scenario.
Frankly, I find it absurd.
It reminds me of that New Yorker cartoon: "Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders." https://www.newyorker.com/cartoon/a16995?verso=true
So, already in the first paragraph, he's shown that he's a propagandist who values money over living things. And so, according to my own world view, I can safely discount anything he has to say on the subject. The saddest part is that I don't doubt that he's sincere and thinks of himself as a good person. (He's not twirling his mustache and cackling evilly, eh?) But I'm not going to waste my time reading his screed. As I said, there is too much other, higher-quality information in the world today, and only so much time to read it.
> That paragraph has virtually nothing to do with the rest of the post.
Then what is it doing there? Not to beat up on the guy, but that's another strike against him as a writer, no?
> It just reinforces that you seem to only consider opinions or arguments that start out from a position you already agree with, and are happy to dismiss things without reading them if you don’t.
I can understand why it seemed that way, but it's just not true. The fundamental rule of Information Theory says that the unpredictability of a message is a measure of its information content. I actually seek out information that contradicts or modifies my current models of the world. This article isn't that. (I mean, you can predict what he's going to say from the title alone. As I did, successfully.)
You said yourself that you don't agree with the conclusions of the article, so what exactly am I missing by skipping it? I mean, I could spend that time reading up on statistics or something, eh?
In any event, well met, and have a Happy New Year.
Is illegal and I don't think anyone here is arguing against that.
> Liver disease
That's harming you, not someone else. If the state pays for the healthcare costs that then arise, a more nuanced argument can be made, but if it's your own liver and you pay for your own healthcare, then you're harming no one else.
> Domestic violence
Once again, already illegal and I don't think anyone here is arguing against that.
My argument against gun control is similar to my argument about the “War on Drugs”: it wouldn’t be enforced equally. Do you think they are more likely to arrest one of the good ol' boys in the South for having a gun illegally, or a minority?
Also, the US has a poor history of actually getting rid of things that people want.
> it's trivial to generate text that could match any given hash
Source on this?
Wikipedia states what I have heard before which is that MD5 collision attacks are pretty trivial now, but carrying out a preimage attack as you describe remains theoretical at this time.
There is a much simpler way to defeat this: in the examples on the page, 5 out of 7 have the answer in the question. Just take the MD5 sum of every word (or combination of words) in the question and you would find the answer to many of the questions. Combined with a targeted dictionary, this would probably give you a very high success rate for little cost. MD5/SHA-family hashes are inexpensive to compute; you can do billions of them per second. If you can't find the answer, just request a new challenge until you find one you can answer.
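As a rough illustration of that approach, here's a minimal sketch in Python. The challenge format, the helper names, and the example question are all hypothetical (not taken from the page); it just shows the core trick of hashing every word in the question and comparing against the published MD5.

```python
import hashlib

def md5_hex(s: str) -> str:
    """MD5 hex digest of a UTF-8 string."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def guess_answer(question, target_hash, extra_dictionary=()):
    """Hash every word in the question (plus an optional dictionary)
    and return the first candidate matching the target, or None."""
    words = question.replace("?", "").replace(",", "").split()
    for word in list(words) + list(extra_dictionary):
        # Try a few simple casing variants of each candidate.
        for variant in (word, word.lower(), word.capitalize()):
            if md5_hex(variant) == target_hash:
                return variant
    return None

# Hypothetical challenge where the answer appears in the question:
challenge = "Is the sky blue or green?"
print(guess_answer(challenge, md5_hex("blue")))  # -> blue
```

If no word matches, a real attacker would fall back to the targeted dictionary, or just fetch a fresh challenge, exactly as described above.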
It's no faster than any existing means of communication, since you still need to send classical information along with the quantum states, though I guess it's nice that it can be used for almost completely unbreakable encryption.
It feels like complete overkill for no practical advantage that I can see.
Photographers don't automatically gain copyright of a photo by virtue of it being a photo - they gain copyright when they create what the law considers a copyrightable work.
In practice, most photos involve creating or capturing a scene in a unique or new way, and this adds something new sufficient to make the photograph a new work and hence subject to copyright.
The act of photographing a public domain painting in such a way that you just reproduce the painting and add nothing new, however, doesn't necessarily create a new work - it may instead count as a reproduction of that original work and hence subject to the original work's copyright, as no new copyrightable material is added.
There is a bit of a grey area: if I, say, arrange a whole bunch of public domain art in a particular way and photograph it, I could quite reasonably argue that my arrangement itself constitutes a work, and so my photographs are subject to copyright. Similarly, if I parody or otherwise transform a public domain work, I can assert that my work is copyrightable because it is transformative. This 3D scan doesn't fall into this area, however, since the scan was clearly intended to reproduce the original work, as opposed to creating a new copyrightable one.
As far as I know, 3d scanning is not a matter of just pushing a button on a device, whereas a photo can be, so the presumption ought to be that the former involves more creativity than the simplest example of the latter.
I would say they are definitely not orthogonal. Things that aren't automated and aren't easy (and some that are easy) require choosing from a near-infinite number of alternatives in a way that can't be or hasn't been defined in a mechanistic way. If I claim that seems like a reasonable definition of creativity, what do you think is missing from it?
It's a question of expression vs reproduction. A 3d scan copies the object being scanned, without interpretation or (intentional) embellishment. It's the same with photographs taken for documentation purposes. The skill required doesn't matter.
I used to be a photographer, and a tech at a high-end photo lab (e.g. we had a couple of Condé Nast magazines as clients). A lot of jobs incurred more work from the technically demanding end (high resolution scans, color matching) than the creative end.
It's not always clear-cut, there's usually going to be a bit of both sides. There's the whole idea of derived works, where the changes you've made are yours but the work as a whole is also entangled in the copyright of the original. But the more the new work is (meant to be) a faithful reproduction, the more it's "just" a copy and not a new riff. Again, it's about the intention and the difference in content between old and new, not so much about how much work or skill was required.
I'm not experienced in creating complex 3D models, but my passing acquaintance with such things (including trying to write a program to convert points to a mesh once) makes me think the claim there is no interpretation or more than one way to create a model of an object is absurd. This seems like an extreme example of assuming something you haven't done is trivial.
Another separate point, supposing for the sake of argument 3D models are not creative... It is well known that, say, a "white pages" style phone directory was ruled not to be copyrightable. But does that mean it is a copyright violation?
It's not about how easy it is, or how many ways there are to do the thing. The point of a 3d scan is to reproduce the facts of the object being scanned: there's this surface at this position with this normal, another one there with these properties, etc. Choices to be made might trade accuracy for less cost (in time and/or money), but those choices don't change the fact that the scan's purpose and value rest in the facts being recorded. It's like the Quake fast inverse square root vs the pedestrian n => 1/sqrt(n): the skill in writing the fast inverse square root doesn't imbue the result of its evaluation with any of that.
WRT your second paragraph, I'm not sure I understand. How could there be a copyright violation on something that can't be copyrighted?
Ok, if the facts that are used to make a phone directory can't be copyrighted, then why would the facts that are used to make a 3D scan be copyrightable?
I mean, nobody other than the scanner chose or wrote down all those numbers. The creator of the object made an object. Seems like the same distinction to me as a house vs. its phone number and address which locate it.
I'm not quite following wrt the phone book/ house / phone number / address.
Per Wikipedia[1], copyright is on "the original expression of an idea in the form of a creative work".
The bust is that expression, the scan is a copy of that and therefore subject to the same copyright as the bust itself (in this case, no copyright).
For a house, I suppose the original expression would be the architect's plans, the house itself is a performance of that expression (when you commission plans from architects part of the terms are under what conditions you can build according to those plans; where I'm from standard terms are for a single performance, at the site the plans were originally commissioned for).
A photo of the house (partially) copies the house, which (partially) copies the plans, therefore the photo is subject to the same copyright as the plans (depending on what the jurisdiction says about photos of things in public space). However, a photo may also contain creative expression in the framing, lighting, etc, and is therefore also subject to its own copyright. So to distribute the photo you need permission to distribute this original expression, plus permission to distribute the underlying expression. A more concrete example of this is with models -- advertisers need permission from both the photographer and the model being depicted in the photograph.
There's no creative expression in a phone number or an address, so copyright doesn't apply. Even if there were, there are usually exceptions to enable interoperation (AFAIK Oracle including a poem as part of the database wire protocol[2] hasn't been tested in court, but it's basically the same thing as in Sega vs Accolade[3], which didn't go well for Sega).
I hope something in there answered your question.
Again I'm not a lawyer, this is my understanding from my past work both as a photographer in my own name (i.e. I own the copyrights) and doing work-for-hire. And more recently contracting with architects and arguing over the terms.
As I understand it the reasoning given tends to be that ad auctions happen extremely quickly, and therefore too quickly to check some single source of truth for your current budget.
Therefore, the budget is only "eventually consistent" - i.e. at the end of the campaign or after a day or so, the number you're being charged is accurate. However during the campaign itself it's not possible to guarantee that things won't go slightly over budget as each individual ad auction cannot feasibly check the central budget.
That said, it definitely feels like there should be a way to implement this such that it backs off as the budget is approached, so that overshoots are likely to be minor, rather than 100% of the budget as apparently happens on a regular basis.
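To illustrate, here is one hedged sketch of such a back-off, under the assumption that each ad server only sees a slightly stale spend estimate (the class, parameters, and numbers are all hypothetical, not how any real ad system works): participation probability decays as the estimate approaches the budget, so stale data near the limit causes only small overshoots rather than large ones.

```python
import random

class BudgetThrottle:
    """Hypothetical sketch: probabilistically skip auctions as a
    (possibly stale) spend estimate approaches the campaign budget.

    Below `soft_start` of the budget we always bid; past it, the
    participation probability decays linearly to zero at 100% of
    budget, shrinking the expected overshoot from stale data."""

    def __init__(self, budget: float, soft_start: float = 0.8):
        self.budget = budget
        self.soft_start = soft_start

    def should_bid(self, estimated_spend: float) -> bool:
        fraction = estimated_spend / self.budget
        if fraction >= 1.0:
            return False
        if fraction <= self.soft_start:
            return True
        # Linear decay from 1.0 at soft_start down to 0.0 at budget.
        p = (1.0 - fraction) / (1.0 - self.soft_start)
        return random.random() < p

throttle = BudgetThrottle(budget=100.0)
print(throttle.should_bid(50.0))   # always True: well under budget
print(throttle.should_bid(101.0))  # always False: over budget
```

At 90% of budget this sketch bids only about half the time, so even if the spend estimate lags by a few seconds, the overshoot should be a fraction of what bidding at full rate would produce.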
They don't need a bug fix as a stopgap. The fair thing to do would be to refund the excess. If I set a limit and they display ads and exceed it, why should that be my problem?
Back when I was in the ads org, there was no month without a big, important future launch that was supposed to make us less money in the short term.
I suspect a factor that makes this trickier is cost per click. You're bidding for a click, not a particular spend, so if your cost per click is ~$0.01 then overrunning by 100 clicks only costs $1 over budget. If you're more like $1-5 a click (not uncommon), you can easily get to $100 over budget.
My guess is that this is worse the lower your budget, and $100 a day is a low budget for Google ads when you're considering the whole industry.
I would expect that with a budget of, say, $5000 a day, you might still be ~$100 out.
> TypeScript doesn't count because all type information is stripped during runtime
What's the point of keeping this information around at runtime if you do all your type checks at compile time?
Based on this categorisation, Haskell for example isn't a strongly typed language, and yet Haskell's type checking is one of its biggest selling points, so this doesn't seem quite right.
In practice, you don’t do all your type juggling at compile-time in even moderately complex libraries or applications. So TypeScript could certainly use better RTTI, I think. design:type metadata is insufficient.