Agreed. Besides sounding like spitballing, even the linked articles (from what I saw) don't clearly show how the former employee arrived at 70%. Overall this article comes across as fear-mongering, but one of the linked sources was an interesting blog post (IMO) about why people should _stop_ giving specific estimates of p(doom).
Seems predictable that this announcement would come right after switching to unlimited PTO. I would imagine this reduces the company's payout quite a bit, if they used to pay outgoing employees for unused PTO. Maybe I'm being unnecessarily cynical, though.
I think it is more about the employees who stay. Giving unlimited PTO is a way to make the remaining employees happier, a way to try to balance out the negative effects of the layoff on team morale...
There is nothing unlimited about unlimited PTO. It's the removal of a paid benefit: when they let you go, they don't have to pay out accrued PTO.
There is no contract stating how much PTO you can take; it's up to your manager. Could be zero, could be four weeks, who knows. Could change when your manager changes.
It's an accounting trick that takes the PTO liability off the books and removes a paid benefit from employees.
It's mixed. From my personal observations the more junior folks and the people less up to date on the discourse around unlimited PTO are happier, but people who are aware of the knock-on effects on company culture, pressure to deliver, etc. are unhappy.
I wish I could get unlimited UNpaid time off. 90% of the value I create at work happens during initial factory line bringup and the last few weeks before shipping something. I'd go hang out somewhere cheap and live like a king most of the year.
I was really excited to see something like this. I really dislike my company's current logging solution (I mostly take issue with how slow querying logs is most of the time). Unfortunately the live demo didn't work. Maybe it was hugged to death? In any case, it would be nice to make the demo more stable!
> I remember when Gitlab decidedly said they'll do business with anyone back when people were shaming tech companies for helping and cooperating with ICE a few years ago. That left a bad taste in my mouth.
I could see how this would leave a bad taste in your mouth, but I'm not sure it follows that GitHub is the lesser evil.
I can't think of specific examples (aside from the EEE philosophy brought up in the post, and Copilot if you consider that to be a Bad Thing), but Microsoft seems to have done plenty of Bad Stuff in the past. Maybe a comparable amount of Bad Stuff to GitLab, if not more?
That's true. But I use VSCode every day. FWIW, I tried the de-Microsofted version and it worked well for almost a year. But then the plugin store split happened, and it made life more difficult than just sucking it up and going back to vanilla VSCode. So I'd be a huge hypocrite if I said GitHub is a step too far.
> My issue with this movement, is it's amplified the worst of our human nature by having social media (which I recognize I'm consuming right now).
What movement do you mean?
> If you don't want global censoring of opposing ideas we need to have a better way of performing human interactions online.
I absolutely agree with this. My experience is that online interactions have been filled with polarized sentiment for a long time (I've been hearing "never read the comments" about news articles on the internet for at least 5 years), not just in the last few months.
Movement as in a general cultural shift with the advent of social media, not a specific movement as in #somemovement.
For example, I'd fucking love to have this conversation with a group of people and, instead of getting a ^ or (insert down arrow), see your expression: you either nodding along, or making a slight facial tweak that lets me know I hurt your feelings, so I can think, oh crap, I shouldn't have said "movement" and instead said cultural shift. That's the real-time feedback that builds empathy and makes me not want to piss you off, or makes me walk away thinking, well, we're never going to see eye to eye.
The movement from having lunch with friends to losing them on Facebook because they support #somemovement and I have a nuanced opinion about it.
EDIT: I'm also not implying I hurt your feelings with the word movement. I'm constructing a narrative specific to this thread as an example of how, in an in-person setting, I might have picked up on that, but in text I have to be super clear instead of having an easy back-and-forth to get to the nuance of what I meant.
Feeling trepidation before posting sentiments that have complex and potentially problematic histories within public discourse does not equate to being targeted by cancel culture.
If I post "the Republic party interfered with Donald Trump's impeachment investigation by not allowing witnesses to testify before Congress" on my social media I'm sure I would get some backlash; that does not mean that I have been cancelled. It means that I've chosen to post decisively about an issue that might not be as black and white as I consider it to be.
"Males are not females" is not a problematic statement. "All lives matter" certainly has a problematic history, and "we should not be giving hormone blockers to children" brings up a pretty complex social issue for a lot of folks.
I think this framing of the issue is pretty interesting. There are a decent number of articles that talk about how cancel culture affects celebrities, but I do think it would be pretty hard to quantify the effects of cancel culture. It seems hard to define.
Personally, I'm not totally sold on the letter from Harper's. But I don't have data one way or the other to support my bias. I don't believe at face value that cancel culture is the root cause (or even a root cause) of the problems folks see with American public discourse. I wonder how to quantify something like this.
Well, when people argue against BLM/M4BL, I often hear calls for statistics. That's a potential bias of my own, but I don't think it's unreasonable to think about what metrics we might use as indicators of whether cancel culture is an unsubstantiated bias that some folks have, or whether it's actually a phenomenon with a real impact on the way the average person communicates.
Basically what I'm saying is that I personally don't feel or notice a lot of "cancel culture" within my own life, and I'm trying to better understand where people feel it comes from. Data might not be necessary, but it might also make the impacts clearer. I'm just wondering how to frame the issue in a way that makes sense to me.
As of ~15:45 PDT on 5/15, searching "saddleback bbq lansing" on Google and DDG, the links in the advertisements go to the restaurant's website, not GrubHub or DoorDash, for me.
I'm struggling to figure out how they have a Security and Response team to deal with the fallout of these issues without having enough privacy/security/customer-focused developers/product folks to proactively bring up these concerns. Google _seems_ like the type of company to do at least a little bit of risk modeling before the release of software. If they knew they were going to listen to recordings, how did this concern not get brought up? If it was brought up, did folks just decide it wasn't important enough to protect against?
They have the security and response team activated because someone disclosed that they do this, not to investigate the fact that they do it. They're there to plug the leak.
If the privacy policy were written in a language I could read (just because it's English doesn't mean it's readable English), then maybe I would have known that.
It is pretty nefarious. In traditional research and product development protocols, you would have people opt into something like this, and optionally pay them for it.
If Google gave out a hundred thousand Google Home units for free to test subjects, with informed consent, there would be no big deal. It would cost Google $2.5 million ($25 per unit), and it'd probably be enough data.
If my web site policy discloses "I may randomly send a thug to your house to shoot your children," and you come, visit, click through the license which warned you, and then I shoot your family, that doesn't mean I'm not doing something super-evil.
Google seems to be doing something super-evil here. Their response -- plugging the leak -- seems equally evil. People have a right to know what's being done with their data, and at least under European law, Google has a legal and ethical obligation to disclose things like this in language people can understand.
GDPR is rather well-written here. It looks like Google is breaking it, and currently trying to shoot the whistle-blower.
> If my web site policy discloses "I may randomly send a thug to your house to shoot your children," and you come, visit, click through the license which warned you, and then I shoot your family, that doesn't mean I'm not doing something super-evil.
You kinda had me until you lost me here. Analogies need to make sense. If you have to go this far with your analogy then that says more about your own argument than the other side's.
>If you have to go this far with your analogy then that says more about your own argument than the other side's.
I never got this argument. In mathematical proofs, reductio ad absurdum is an acceptable method of showing an assumption false. It shows that a statement ("Users agreed to the TOS, so it's not malign") has an exception. The example is extreme to make sure nobody can argue the statement is still valid.
He's not saying the punishment should be on par with murder. He's just saying there is a line of moral acceptability, but where it lies is up for debate.
You're missing the point, which is that you can slip anything into a privacy policy or other long agreement, no matter how outrageous it may be, and nobody will read it. Putting anything there does not make it ethical or legally binding.
A privacy policy is definitely the right place for privacy issues. My point is exactly the one vharuc made above: putting something there neither makes it ethical nor unethical. A contract or license is not an excuse for bad behavior.
* If my privacy policy is a copy of HIPAA, that's an ethical privacy policy.
* If my privacy policy is like Google's here, it seems unethical without clear informed consent (which a disclaimer in a novel-length privacy policy doesn't provide).
* If your privacy policy says you'll collect incriminating information about me, and sell it to the highest bidder for use in blackmail, it's unethical even with attempts at informed consent.
You're confusing an analogy with a counterexample.
Analogies need to be analogous. Counterexamples can be extreme (and it is often helpful if they are; then they're obvious counterexamples).
Please take a minute to reread the discussion.
Coincidentally, I've noticed a pretty consistent pattern of downvotes on anything criticizing Google on Hacker News. Either a lot of readers from Google who drank the Kool-Aid, or astroturf -- I'm not quite sure which.
What's specifically super evil about humans transcribing random, anonymous commands to the Google Assistant? They're hired and expected to be professional, with their own contractual agreements governing their behavior and ethical standing.
Literally all the major companies in speech rec (aka assistants) do exactly this. The accuracy of the speech models would be extremely poor otherwise.
Come on, I'm sure Google's privacy policy allows them to listen to audio with no metadata in order to improve their service. The team is responding to the public leak of the audio, which is a violation of Google's privacy policy.
> the security and response team activated because someone disclosed that they do this
They're not chasing down a whistle blower for notifying the public that human transcription takes place. That information was already in the public domain in Google's privacy policy. The team is investigating the source of the leaked audio files, which was a violation of user privacy.
I think the outrage doesn't come from supervised learning itself, but from the fact that it's contracted out to a third party and seems to be done in an irresponsible way. The irresponsible part is that most of the public that uses these devices would be surprised that their voice is being recorded and transcribed. Of course an ML engineer is going to want to go the route of human-labeling the audio data, and those folks seem to have won. You can blame the public for being uninformed, but this is new technology, and most people won't have read up on the methods used, even if they're no secret. Many have suggested opt-in methods instead, which would also provide (arguably less real-world) data. I think many would prefer to trade a less accurate service for not having strangers listen to their conversations.
Honestly, what risk is there? Outside of some internet echo chambers, no one really cares, and all this will be forgotten by tomorrow, if it even takes that long.
https://www.lesswrong.com/posts/EwyviSHWrQcvicsry/stop-talki...