
It's been fascinating to see the rise, fall, and rise of digital watches among techies.

I remember 1990s Dilbert having an entire storyline about the engineers getting into a calculator-watch arms-race. In real life, it was pretty common to laugh about how a $50 digital Casio could do far more things than a Rolex.

By about 2010 (or perhaps even by the iPod Touch or Palm Pilot), I stopped hearing that. Watches had lost all of their unique functions to smartphones, so their raison d'être was either "rugged and cheap" or "jewelry", and calculator watches almost vanished.

Circa 2015, we get Pebble gen 2, Apple Watch, and Fitbit Blaze: smart watches have phone integrations, fitness tracking, and don't look like hell anymore. Since then, they've increasingly aimed for design good enough to wear with a suit; the Galaxy Watch is always-on and analog.

These days, I see two splits among watch-wearing engineers: smartwatch vs not, and practical vs decorative. So the result is quadrants like:

(practical | decorative)

Smartwatch: Fitbit | Galaxy Watch

Analog: Casio | Longines


> The CEO will be under tremendous pressure if he/she tries to optimize for a 1 year timeframe (for example) as opposed to quarter-by-quarter.

It's bizarre to talk to well-meaning execs (even below C-suite) at public companies and hear them overtly say this. "Well we know X and Y are sound investments for the company's success, but it's a question of finding a way to sell something that long-term without tanking our stock price."

I try not to cry market inefficiency without good evidence, but "shareholders promote good corporate governance" starts feeling pretty bizarre when the people running a company describe shareholders like corporate raiders encouraging them to destroy value for a quick payout.

> I wish boards can come up with a compensation structure for execs which optimizes for long term.

For all the talk about "when founders should get out of the way" and "what makes a good founder doesn't always make a good CEO", it's interesting to see that research still finds companies with founder-CEOs performing substantially better. Higher share prices (which might stem from overconfidence), but also better long-term financials, more R&D spending, more influential patent filings, etc.

And that doesn't necessarily mean founders are super-geniuses, exceptional managers, or even unusually attuned to their market. They get less of their salaries in cash, hold options and stocks longer, and vary their behavior less in response to compensation structure. (Also, they often hold so much stock they can't sell in full without panicking the market.)

So it really does look like we just haven't found a good way to compensate non-founder CEOs: their behavior is extremely responsive to their compensation, but nobody has found a scheme that makes them act long-term to the degree a founder would.


Not only is an easily discovered public event poor leverage, it becomes much worse leverage if it comes up in an interview.

When companies (or governments) try to manipulate employees, they frequently rely on some kind of willful ignorance. Wells Fargo is a great example: they set impossible performance targets and turned a blind eye to fraud, then fired and blacklisted whistleblowers - ostensibly for knowing about that same fraud!

If a shady employer wants leverage, even public events can suffice as long as they can claim ignorance. For example, most stock option grants are immediately lost if you're fired, but even at-will employment can't be terminated specifically to deprive someone of their options. So an employer might give a generous options package, then "discover" the IG video and use it for dismissal at just the right time to prevent a profitable exercise. But if that video comes up during hiring, it's no longer a plausible reason for later dismissal, at least without committing perjury regarding the interview.

I can't even work out a scenario where "lots of people know about this including us" is an effective way to manipulate someone.


In any real going concern, the cost of hiring and training anyone senior enough to be compensated with stock options, plus the morale dip if the manipulation were discovered, would far outweigh the benefits of doing this. So this is largely a tin-foil-hat scenario.


For any business of decent size, absolutely. There are a thousand ways to claw back options, and the reason they don't get used is that doing it even once would make hiring practically impossible.

For a small enough company? It falls in the same category as "diluting one guy out of his shares" - bad morals and bad business, but it still happens.


I notice this pattern all the time in guides to "polite" workplace communication. Their examples are hypothetical, so they look at how positive something sounds without considering the underlying content, or go even further and change content to improve tone. The advice looks good on paper, but using it when there's an actual task at hand might just sound sarcastic or disingenuous. The worst example I've ever seen was something like:

> Instead of "I need that report by the end of the day", try saying "I really appreciate you working to get that report out soon, it's a big priority right now!"

That's absolutely insane, because those are two completely different statements. The second one sounds less demanding because it's not the same request. So the tip isn't positive communication advice; it's either a rework of the schedule or a failure to convey the deadline.

As for this specific example:

> By adding an emoji below, it's clear that the sender is embarrassed to make this last-second request, and isn't trying to come across as sarcastic, rude, or overbearing

That wasn't clear to me at all. If you type in "embarrassed", Slack will only suggest :flushed:, although I'd also have understood :sweat_smile:. I guess the monkey was meant as "I'm hiding my face with shame", but Slack calls that emoji ":see_no_evil:", and at first glance it seemed like "I'm trying not to look over your shoulder, but is this done yet?". If the problem is "making a last-second request", there's no particular reason that emoji are the best way to address it - one example simply has more content than the other. So I like your direct phrasing, and I might add:

> Hi <name>, will you be able to have the report on X ready by <time>? I'm sorry it's such short notice, thank you!


I'd really just prefer to keep emojis out of any professional requests. If after work you want to go out for :beers: :D then sure, but if you're asking me to work late on a project, no amount of emojis will improve my mood.


I'm with you there, and it gets even worse when a company has their own emojis with a completely obscure meaning that is somehow expected to be understood. Like for some reason people in my company reply to messages sent out of the context of the channel with an emoji of the pokemon Charmander breathing fire (:charangry:). My own subtle form of protest is to use random emojis that really don't have any meaning. A personal favorite is :shallow_pan_of_food:.


IMO, this is part of a more general problem with emojis. The "name" of the emoji does not always correspond to the image very well, so you have to confirm that the image actually conveys the tone you are going for. Then as the recipient, you might sometimes wonder whether the image or the name of the image carries the intended meaning.


Eh, it probably buffers against overreaction, especially when a correction in fundamentals is being mixed with a reaction to new pressure.

But this is still a good point: if the market really is overheated then short-term monetary policy won't change that, and we can expect a lasting hit regardless of how disease issues play out. And it's not necessarily going to be obvious what's market movement and what's disease-related; I wouldn't be surprised if some over-hyped companies seize this as a chance to lower guidance faster than they normally could without spooking investors.


This is why the whole idea of "in the public interest" exists.

If a reporter received these same recordings in the mail, they would quite likely publish them. If they received a recording of a random person discussing their medical concerns, publishing that would be an outrageous breach of ethics.

(Hence the Gawker/Thiel debacle also. When Ted Haggard was caught having gay extramarital affairs, it was considered fit for publication because he was an evangelical preacher fighting against gay marriage. When a random private individual is outed, it's not a public interest matter and can be libelous even when accurate. Thiel fell somewhere in between under both legal and journalistic rules, so we got a debate.)

I'm pretty baffled to see the parent comment imply that private discussions between politicians should inherently be kept secret. We could discuss specific news stories, reporters who violate attribution rules, and whether Varoufakis was bound by privacy laws or Eurogroup confidentiality rules. We could even argue the publication is in the public interest, and yet makes Varoufakis unfit to serve by destroying his ability to function with trust.

But just as you say, treating "that's a private discussion" as the end of the matter would excuse Watergate also.


This is a novel and important result in antibiotics. It's also a proof-of-concept for using ML to produce vital drugs with novel mechanisms, rather than incidental alterations or discoveries in noncompetitive spaces. It might be an incremental speedup or computing-power advance in ML drug discovery also, but it could equally just be the result of a lucky break or a particularly large lab-test budget. (In which case, "why didn't someone do it already?" is closer to asking why nobody else bothered to win the lottery.)

It's not a major theoretical advance in ML drug-discovery techniques or the first big step in ML drug discovery. It's certainly not the invention of ML drug discovery or neural nets as an ML technique, both things I've seen implied in news stories on this work.

This is attention-worthy, absolutely. (I'll leave "publication-worthy methodology" to experts.) But it's newsworthy on actual merits, as a drug breakthrough and a demonstration of an increasingly-important technique. So I share the frustration when lazy or confused reporting implies this is the same style of ML-theory breakthrough as CNNs, Transformers, or even neural nets themselves.


I think the criticism is that it's not obvious whether success here was a function of improved performance, expanded throughput, expanded testing, or sheer luck.

Chess engines have clearly improved in both design and computing power over the years; doubling an engine's resources or pitting a new engine against an old one produces straightforwardly better play. But the drug-discovery technique in use here may not be "playing better" in terms of producing higher-quality predictions.

To extend the chess metaphor:

- Deep Fritz is a stronger player than Deep Blue even with 4% as much computing power. This story does not appear to be an algorithmic breakthrough of that sort.

- Deep Blue lost to Kasparov in 1996, then beat him in 1997 with double the computing power. That's a clear improvement in play, but not an improvement in efficiency. This story might represent such a change, modelling more prospective drugs to test higher-confidence candidates.

- If an AI that can only win 2% of games against humans plays 10 games, it has an 18% chance of beating someone. But over 100 games, it has an 87% chance of a win (the arithmetic is worked in the sketch after this list). This result might be a team with a larger testing budget claiming the 'first win' without any AI-side improvement.

- If a dozen grandmaster-level chess AIs play GMs, one of them will have to get the first win against a human. Labeling this result a 'breakthrough' in AI terms might be outright publication bias among equivalent projects.
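
Those first-win odds are just the complement of losing every game, 1 - (1 - p)^n. A minimal Python sketch (p_first_win is a throwaway name of mine):

    # Chance of at least one win in n independent games,
    # given a per-game win probability p.
    def p_first_win(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    print(p_first_win(0.02, 10))   # ~0.18
    print(p_first_win(0.02, 100))  # ~0.87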

As far as the drug, none of that really matters, except that efficiency improvements would have more potential to increase drug discovery. The drug itself is still useful, and the discovery is a proof of concept; in 1980 no possible computer would have beaten Kasparov. But this is being hailed as a breakthrough in AI in seriously questionable ways. The BBC article, for example, managed to imply that this specific project was novel and important for using neural nets to produce a significant result.


> How am I confident that this is realistic if you literally say its generated?

This is a particularly good question since it's recently been shown that even neural nets trained on real data often pick up substantial, predictable dataset biases.

Practically every single-dataset-trained CNN seems to pick up stylistic quirks in the photos or labels it's trained on. The most visible result is that the CNNs perform better on same-dataset test examples than they do in the wild, sometimes vastly better. More startlingly, it's possible to work backwards from this: the training source of a "finished" CNN can be discerned by looking for certain types of error, and adversarial examples can be predictably constructed based on training source.

Tagged imagesets undoubtedly have stronger and harder-to-remove 'fingerprints' than text data like addresses, but I'd be shocked if the problem was nonexistent for text. My first reaction to "synthetic sensitive user data" for ML is to worry about winding up with systematic errors coming from the generation scheme.
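
To make the 'fingerprint' worry concrete, here's a toy sketch. It is purely illustrative - the synthetic setup, offsets, and names like make_dataset are all my own choices, not anything from the research above. Two "datasets" share the same labeling rule, but one adds a nuisance offset; a model trained on the first degrades on the second, and a probe can "name the dataset" an input came from:

    # Toy illustration of single-dataset bias (synthetic, hypothetical setup).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_dataset(offset, n=2000):
        X = rng.normal(size=(n, 20))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the "real" task
        return X + offset, y                     # dataset-specific quirk

    Xa, ya = make_dataset(0.0)
    Xb, yb = make_dataset(0.8)

    clf = LogisticRegression(max_iter=1000).fit(Xa[:1000], ya[:1000])
    print("same-dataset acc:", clf.score(Xa[1000:], ya[1000:]))   # high
    print("cross-dataset acc:", clf.score(Xb, yb))                # much lower

    # "Name that dataset": predict which dataset an input came from.
    X = np.vstack([Xa, Xb])
    src = np.repeat([0, 1], 2000)
    probe = LogisticRegression(max_iter=1000).fit(X[::2], src[::2])
    print("dataset-origin acc:", probe.score(X[1::2], src[1::2]))  # near 1.0

The same-vs-cross gap and the near-perfect origin probe are exactly the two symptoms described above, just in miniature.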


cf "radioactive data" to tag datasets and see which downstream models used those datasets for training: https://ai.facebook.com/blog/using-radioactive-data-to-detec...


A related interpretation: your debugging tools need to be at least as good as your programming tools. Ideally, better.

Debugging a K8s cluster with print statements is hopeless, but if the cleverest code you can write in a dumb editor is going through a good test suite and profiler, you might be fine. How many as-clever-as-possible optimization tricks have been made viable by Valgrind?

