
I think you may misunderstand what a gifted kindergartener is. Like, some kids come in and they essentially taught themselves to read. That doesn't mean they have helicopter parents who think they're extra special -- it means they have distinct needs that aren't well served in the normal classroom, which is boring for them.


I don't think four-year-olds demonstrating the ability to read several levels above their grade level is 1) rare or 2) a talent you can realistically tease apart from over-parenting at that age.

Given the extreme levels of segregation in certain parts of the country (NYC, for example, has fewer than 5 percent of Black and Latino kindergartners in G&T programs, but higher Black and Latino enrollment in third-grade G&T), school systems like that one should seriously consider prioritizing equality over G&T funding.


> I don't think four-year-olds demonstrating the ability to read several levels above their grade level is 1) rare or 2) a talent you can realistically tease apart from over-parenting at that age.

Sure, but shouldn't those kids be in an environment where they can practice reading instead of being painstakingly re-taught the alphabet? As you said, it's not at all rare.

I was one of those kids, and I was extremely disruptive in class because I couldn't bear to be made to sit and trace the letter "A" for 45 minutes when I was already reading novels at home. When they stuck me in a different class, things got much better for me, and it's not like doing that cost the school board any extra money.


But how does pulling a GT program help with equality? Putting bored kids in a classroom with kids who are far behind them developmentally drags everyone down. Don't the GT kids deserve to learn, too?


> NYC, for example, has fewer than 5 percent of Black and Latino kindergartners in G&T programs, but higher Black and Latino enrollment in third-grade G&T

Why does it happen?


I take issue with calling it "cruel". But yeah, we should have really strict penalties for using smartphones while driving.


I know I'm late to this, but there is a minimal LaTeX distribution that I used for, e.g., writing my master's thesis. It's called TinyTeX.

https://yihui.org/tinytex/


Loved the breakdown of a topic I wasn't familiar with.

I just can't help but think that the whole ethos of Open Social Media is misguided. I think that social media isn't good for us -- not just because of the big companies making it worse, but because the technology itself doesn't promote health.

It feels like trying to make cigarettes open source. Sure, you can stick it to Big Tobacco, but at the end of the day you're still making cigarettes.


As long as the Eternal September remains on Twitter, there's nothing unhealthy about being on Bluesky. The format isn't the problem; it's the people who use it as a stupid culture war battlefield. Those people seem content to remain on Twitter.


There is a lot of “culture war battlefield” stuff on Bluesky too.


I think total approval time would be much longer. The issue is that while the actual time spent in review may be shorter, there are context-switching time costs that increase with the number of PRs submitted.

No one on the team is just sitting there refreshing the list of PRs, ready to pick one up immediately. There's a delay between when a PR is marked as ready and when someone can actually get to it. Everyone is trying to get their own work done and to spend some time each day in a flow state.

Imagine you have a change; you could do it as one PR that takes 1 hour to review, or 3 small PRs that each take 15 mins to review. The time spent in review may even be shorter for the small PRs, but if each PR has a delay of 1 hour before a reviewer can get to it, then the 3 PRs will take almost 4 hours before they're done, as opposed to just 2 hours for one big PR.
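
To make the arithmetic explicit, here's a rough back-of-the-envelope sketch of the numbers above (all hypothetical, and assuming the three reviews end up fully serialised):

    pickup_delay = 60                          # minutes before a reviewer gets to a PR
    one_big_pr = pickup_delay + 60             # 120 min = 2 hours
    three_small_prs = 3 * (pickup_delay + 15)  # 225 min = almost 4 hours
    print(one_big_pr, three_small_prs)         # 120 225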


I don't think that's a realistic view of the timeline. I've done features as multiple PRs and there are really two cases:

1. I can't submit pieces until I have the final version. PRs go up at the same time and can be reviewed one after another immediately.

2. There's a very specific split that makes that feature two features in reality. Like adding a plugin system and the first plugin. Then the first part gets submitted while I still work on the second part and there's no delay on my side, because I'm still developing anyway.

Basically, I've never seen the “each PR has a delay of 1 hour before a reviewer can get to it” scenario get serialised in practice. The delay is either paid once or happens in the background.


I thought it was _optout_nomap (the _optout for Microsoft, _nomap for Google/Apple)
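
For example (a made-up network name), an SSID opting out of both would look like:

    HomeWiFi_optout_nomap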


Hmm... I was under the impression that MS had added support for `_nomap` as well somewhere... but now I'm not finding any references to that. I suppose at the end of the day, you have to trust that they even follow their opt-out policy at all.


And you'd probably have to rotate out the MAC address and broadcast name. At this point, the cat is out of the bag. A brand-new network name and MAC address with the opt-out flags is only going to keep you out of the honest databases :(


Could you help me understand how decoder-only LLMs maintain the Markov property? If you used the same random seed, the input to the model "The cow jumped over the" would not give the same output as just "the", right? So isn't that violating the Markov property?


State (in this sense, at least) isn't word/token parsing progress; it comprises all of the input and any context (which may include the entire chat history, for example).
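
A toy sketch of that point (the lookup table is made up; it's not a real LM): if you define the state as the entire prefix, each step depends only on the current state, which is exactly the Markov property. "The cow jumped over the" and "the" are simply two different states, so they're allowed to produce different outputs.

    # Toy model: the Markov "state" is the whole prefix, not the last token.
    TOY_MODEL = {
        ("the",): {"end": 1.0},
        ("The", "cow", "jumped", "over", "the"): {"moon": 1.0},
    }

    def step(state):
        dist = TOY_MODEL.get(state, {"<unk>": 1.0})
        token = max(dist, key=dist.get)  # deterministic pick for the example
        return state + (token,)          # next state = old prefix + new token

    print(step(("the",)))                                # ('the', 'end')
    print(step(("The", "cow", "jumped", "over", "the")))
    # ('The', 'cow', 'jumped', 'over', 'the', 'moon')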


There would need to be a state specifically for “the cow jumped over the” (and any other relevant context), and states for all the other times ‘the’ is preceded by something.

This is the limitation I was getting at, btw. In that example, if you have an image with solid vertical columns, followed by columns of random static, followed again by solid vertical colors, a Markov chain could eventually learn all the patterns that go

solid->32 random bits->different solid color

And eventually it would start predicting the different color correctly based on the solid color before the randomness. It ‘just’ needs a state for every possible random color between. This is ridiculous in practice, however, since you'd need to learn 2^32 states just for the relationship between those two solid colors alone.
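
A toy sketch of that blow-up (labels are made up): a chain keyed on the literal middle value has to store a separate state for every distinct random column it ever sees.

    import random

    # One Markov state per distinct 32-bit "static" column.
    table = {}
    for _ in range(1000):
        static = random.getrandbits(32)  # the random column in the middle
        state = ("solid_red", static)    # the full context is the state
        table[state] = "solid_blue"      # what followed in the training data
    print(len(table), "states so far, out of", 2 ** 32, "possible")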


> It ‘just’ needs a state for every possible random color between.

You can use skipgrams - prefixes with holes in them.

Sparse Non-negative Matrix Language Model [1] uses them with great success.

[1] https://aclanthology.org/Q16-1024/

Pure n-gram language models would have a hard time computing escape weights for such contexts, but the mixture of probabilities used in SNMLM does not need to do that.

If I may, I've implemented an online per-byte version of SNMLM [2], which allows the use of skipgrams. They make performance worse, but they can be used. My implementation's predictive performance is within a few percent of LSTM performance on enwik8.

[2] https://github.com/thesz/snmlm-per-byte
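
A toy sketch of the idea (the helper is made up; it's not the paper's code): a skipgram context is a prefix with wildcard "holes", so a single context covers every literal value of the skipped positions.

    HOLE = None

    def skipgram_context(history, hole_positions):
        # Replace the chosen positions of the history with a wildcard.
        return tuple(HOLE if i in hole_positions else tok
                     for i, tok in enumerate(history))

    print(skipgram_context(("solid_red", 0xDEADBEEF, "solid_blue"), {1}))
    # ('solid_red', None, 'solid_blue') -- one key for all 2**32 middle values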


I was really excited to try ghostty, but the text looks blurry when I place it on my Mac's external monitor.


I hate this. My mom's account got hacked, and now someone is controlling it for who knows what purpose. She had to make a new account and lost all her photos, old posts, messages, etc. Facebook was completely unhelpful.


This is a really neat idea. I'd love to see more of this rolled out in schools, workplaces, and homes.

This article reminded me of something I read recently: installing air filters in schools led to increased test scores.

