
You don't. You need to have a goal and clear understanding about why you are doing what you are doing. This is the same with pretty much all activities that require significant effort - motivation is a brief blip that eventually withers away once you start struggling. What you need is discipline, planning, and regular routine. Plan (allocate some time each day/week) and do this regularly. Can't take it anymore? Make a coffee, take a walk, rest for a little while, take a nap, whatever, and then try again. Motivation is not something that you should be constantly chasing in the first place.

> So I feel that there are many people like me who are confused and kind of unsure on how to proceed.

Don't let AI write the code for you or send you diffs when you're a newbie.

Use it to understand, to ask questions, use it like a better stack overflow/google, but don't copy/paste chunks of code.

If you do have it generate more than a single line, mess with it, change it around, type it in but change the way it works, see if there are other method calls that would do what you're doing, see if you can refactor it.

Basically, don't just get into a copy/paste loop. The same thing happened when Stack Overflow became big: you had a whole generation of code monkeys who could copy-paste something sorta working from Stack Overflow or Google, but when something broke, they had no clue how to fix it.

Copy-paste here (or having it send diffs) is the evil part, not the AI. AI can really help you learn new tech. Have it do code reviews, have it brainstorm ideas, or even have it find the right APIs for you. Just don't copy-paste!


Your best moat against low effort copycats? Stamina. Keep your app in the store, update it regularly, add support for new devices, add new features if appropriate, keep marketing and selling it, and keep polishing it. The copycats don't want any part of that. They want to make a quick buck with as little work as possible, hence the copying and plagiarizing. In a matter of weeks or months, unless they're making bank from it, their app will start to rot. If your app has staying power, then you will eventually rise above them all. And when you have another good idea down the road, cross promote between your own apps (but don't be obnoxious about it), and you'll begin to grow a user base who trust you. That's as good a moat against copycats as you could ever get.

Try this: never read without a pencil in your hand. Make it a point to not only orally or mentally restate but also rewrite by hand everything you read (on-screen and in real life) in your own words, and to make multiple drafts of each restatement until it is as succinct, orderly, and logical as possible.

This practice will probably help you:

* recognize and stop reading low-value material;

* read less on your phone;

* deeply understand what you're reading;

* identify errors in what you're reading;

* identify errors in your own understanding;

* improve your own writing (and possibly your handwriting!);

* recall what you've read and what it says and means; and

* build a record of your reading, reactions, and thinking.

And, of course, I think it should improve your ability to focus.

You can also make flashcards while you're at it (again, handwritten ones) and develop a spaced-repetition practice for the important information and skills.
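If you want a simple way to schedule reviews of those handwritten cards, a Leitner-box system is easy to track even on paper. Here is a minimal Python sketch of the bookkeeping; the box intervals are my own illustrative choice, not prescribed by anything above:

    from datetime import date, timedelta

    # Minimal Leitner-box schedule for handwritten flashcards.
    # The intervals (in days) are an arbitrary choice for illustration.
    BOX_INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

    def next_review(box: int, last_reviewed: date) -> date:
        """Date a card in the given box is due again."""
        return last_reviewed + timedelta(days=BOX_INTERVALS[box])

    def update_box(box: int, recalled: bool) -> int:
        """Promote a card on successful recall, demote it to box 1 on failure."""
        return min(box + 1, max(BOX_INTERVALS)) if recalled else 1

    # Example: a card in box 2, reviewed today and recalled correctly,
    # moves to box 3 and comes due again in 7 days.
    box = update_box(2, recalled=True)
    print(box, next_review(box, date.today()))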

--- Later Edit ---

You mention long-form text in particular. One of the most effective tactics to use while reading is to discern the writing's architecture/structure — at all levels, from the writing's genre, to its main components and their arrangement, to how each main part is composed, and so on. Doing so not only helps to understand and critique the author's ideas, but also becomes a fun game that keeps your attention directed.

And also keep in mind that > 99% of all writing is not only not great but also probably not worth reading in the first place. It's ok to use boredom as a guide: your mind may be indicating through boredom and distraction that what you're reading isn't worth the time and effort and attention it takes to do so. But if you have decided that you must read something, or that you want to read it and understand it, then I've found that there's no substitute for the handwriting technique. Check out the Mortimer Adler book How to Read a Book for further suggestions.


I find it much easier to do work for money if I can get excited about why that thing I do is a worthwhile and exciting endeavor. Sometimes you have to look for it, but it's usually there (if you're lucky).

Try starting the day thinking what you are going to work on, and why that is a Good Thing (TM).


Depends on the read/write workload and row size, but yeah after 100-200m rows PostgreSQL vacuums can take a while. And index rebuilding (which you have to do on an active table) too.

It all depends, though; sometimes 1b is passé.

But 100m is a good point to consider what comes next.
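If it helps, here is a rough sketch of how you might watch for that tipping point; the DSN and the query limit are placeholders, and REINDEX ... CONCURRENTLY needs PostgreSQL 12+:

    import psycopg2  # any Postgres driver works; psycopg2 assumed here

    # Rough sketch: list the tables with the most dead tuples so you can
    # tell when vacuums are falling behind and plan "what comes next".
    conn = psycopg2.connect("dbname=app")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
            FROM pg_stat_user_tables
            ORDER BY n_dead_tup DESC
            LIMIT 10
        """)
        for name, live, dead, last_vac in cur.fetchall():
            print(f"{name}: {live} live, {dead} dead "
                  f"({dead / max(live, 1):.1%}), last autovacuum {last_vac}")

    # Rebuilding an index on an active table without blocking writes:
    #   REINDEX INDEX CONCURRENTLY my_index;   -- my_index is a placeholder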


Design must flow from customer demand/desires.

And 90% of design is just "correctly assigning priority" to elements and actions.

If you know what is important (and what is less important) you use...

- white space (more whitespace = more important)

- dimension (larger = more important)

- contrast (higher = more distinct)

- color (brighter = more important)

... to practically implement the decided priority.

How do you validate that you have implemented priority correctly?

Just ask a few people what they see first, second, third, etc. on a page.

If you designed it right - their eyes will see things exactly in the order you expected them to.

In short - "design is guiding the user's senses, in order of priority, toward achieving their goals".

In our startup - we call this the "PNDCC" system (priority, negative space, dimension, contrast, color).

There are a few more tricks to make it even more powerful - but as I said - just getting these right puts you in the top 10%


Break it down into manageable chunks. Do you have things documented?

I felt this way at first when I was doing my lead generation. I documented the process and brought on someone from the Philippines. I then ran into a similar situation where there were a lot of questions that I couldn't spend my time on. So I built a GPT to help answer questions, essentially another me to support them. This was simple and saved a ton of time.

Reflect on the tasks you are doing and pass off the work that you don't want to do first. Start small and continue passing off more work. You can hire a virtual assistant for $5-8 an hour, and it's beneficial to have some basic support. I also helped motivate someone who needed work.

It doesn't take much effort. Let me know if you have questions about the tools and documents you would need to support something like this; I can share what I used.


This is how I feel, and why I'm stuck.

I have a pleasant little workflow maintaining a content-based website. I'd like to hire help, but offloading work to a first employee feels like more effort than just doing the work myself.

How do I transmit 7 years of tacit knowledge, principles and best practices to someone else so that they do good work? How do I teach a writer to use my elaborate static site generator setup that was never designed for other users?

Then comes the paperwork, and the inherent difficulty of working with other people instead of having full control over everything.

So far, I have just accepted that my work has a limited scope, and that as long as I'm satisfied with my income I don't need to change that.


The hype of agentic AI is to LLMs what an MBA is to business: overcomplicating something that is pretty common sense with fancy language.

I've implemented countless LLM-based "agentic" workflows over the past year. They are simple: a series of prompts that maintain state, with a targeted output.

The common association with "a floating R2D2" is not helpful.

They are not magic.

The core elements I'm seeing so far are: the prompt(s), a capacity for passing in context, a structure for defining how to move through the prompts, integrating the context into prompts, bridging the non-deterministic -> deterministic divide, and callbacks or what-to-do-next logic.

The closest analogy that I find helpful is lambda functions.

What makes them "feel" more complicated is the non-deterministic bits. But, in the end, it is text going in and text coming out.
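For what it's worth, here is a bare-bones sketch of that shape in Python; call_llm is a hypothetical stand-in for whatever completion API you actually use, and the prompts are made up:

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around your chat-completion API of choice."""
        raise NotImplementedError

    # An "agentic" workflow as described above: a series of prompts that
    # maintain state, ending in a structured (deterministic) output.
    STEPS = [
        "Summarize the customer ticket below:\n{ticket}",
        "Given this summary, list likely root causes:\n{step_0}",
        "Pick the single most likely cause and return JSON "
        '{{"cause": "...", "next_action": "..."}}:\n{step_1}',
    ]

    def run_workflow(ticket: str) -> dict:
        state = {"ticket": ticket}
        for i, template in enumerate(STEPS):
            # Integrate the accumulated context into the next prompt.
            state[f"step_{i}"] = call_llm(template.format(**state))
        # Bridge the non-deterministic -> deterministic divide at the end.
        return json.loads(state[f"step_{len(STEPS) - 1}"])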


Is there a reliable handwriting OCR benchmark out there (updated, not a blog post)? Despite the gains claimed for printed text, I found (anecdotally) that Mistral OCR on my messy cursive handwriting was much less accurate than GPT-4o, in the ballpark of 30% wrong vs closer to 5% wrong for GPT-4o.

Edit: answered in another post: https://huggingface.co/spaces/echo840/ocrbench-leaderboard


To be clear, I agree that it often makes sense for the code to be "off" on initial deploy. By "default", I mean: what does the code do in the absence of feature flag data?

Put another way, my specific concern is “what happens when the flag system fails?” If you accidentally drop your feature flag database, do all your features turn off, or do they all turn on?

Superficially it might seem safer for them all to turn off in this failure mode. After all, flags are for experiments, and what’s wrong with disabling experiments? The problem is that companies I have worked for (and those of friends I talk to) do not purge 100% of flags corresponding to launched features. Once code has been built on top of these unpurged flags, bugs are almost guaranteed if they get turned off via system error.


In many cases you are better off using a kill switch than a feature flag. This may seem pedantic, but the way your system fails (on vs off) can protect you from disaster when your flag setting framework has a bug.

On a large codebase it is easy to forget to clean these things up, and a flag that hasn’t been set to off in a year can be masking a major regression. At my last job we had two major outages in as many years from defunct flags defaulting to “off” when the feature flag system failed to return flag states.

Failing to “on” is a simple design choice to protect you from your tech debt. There are more expensive better fixes (e.g. automated enforcement of removing flags from codebase), but none as easy to implement.
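A small Python sketch of the distinction, with a stubbed-out flag client to simulate the outage case; the flag names and defaults are illustrative, not from any particular vendor:

    class FlagClient:
        """Stand-in for a real flag-service client."""
        def get_flag(self, name: str) -> bool:
            raise ConnectionError("flag service unreachable")  # simulate an outage

    def is_enabled(client, flag_name: str, default: bool) -> bool:
        """Look up a flag, falling back to an explicit default on failure."""
        try:
            return client.get_flag(flag_name)
        except Exception:
            return default

    client = FlagClient()

    # Feature flag for an experiment still rolling out: fail closed.
    show_new_checkout = is_enabled(client, "new-checkout", default=False)

    # Kill switch guarding a long-launched code path: fail open, so a flag
    # system outage does not silently turn the feature off a year later.
    use_payments_v2 = is_enabled(client, "payments-v2-kill-switch", default=True)

    print(show_new_checkout, use_payments_v2)  # False True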


The key is to figure out what your learning process looks like.

For example, I discovered early on that I learn in three phases: 1. I get exposed to something (a concept, a process, etc); basically discover that something exists. 2. I then see how that thing is used whether through mentorship or tutorials or, increasingly, through trial and error. 3. I apply that thing to some novel problem.

Through this cycle of Discovery-Tutelage-Application, I can assess my level of comfort with new material and understand when my struggles are due to trying to short circuit the process.

It's likely that you have some form of learning process that is equally cyclical, yet undefined -- once you identify and codify those steps, you can evaluate your progress when it comes to acquiring new skills.


I've researched trustworthy brands a lot; the ones I settled on are Nootropics Depot, Viva Naturals, and Nordic Naturals. I prefer those with only vitamin E as a preservative.

It's a bit late here for my advice to be helpful to you, but IMO these are questions to ask and answer before you build the web app, not after.

The short answer to your question is: "identify who your customer is, find out where they hang out (online or IRL), join that community, add value (not just your offering), demonstrate your usefulness, and build credibility."

From that, you can sell product in a meaningful, sustained way, one that reaches into the heart of their needs.

You've done it backwards, the easy bit first, and now the hard part will be harder. Instead of finding a niche, and filling it, you've filled a niche, and are now trying to find people in that niche.

But don't despair. You might yet pull it off. And if you don't, consider it a really cheap part of your education. Making mistakes is how we gain experience and learn how the game is played.


This is incredibly simple yet incredibly powerful, and something that everyone who becomes proficient at delivering things of value learns eventually, but is rarely taught so succinctly.

By the way, for the programming case, this is a big part of the reason functional programming is so powerful. Avoiding shared state allows you to write your outline of smaller and smaller pieces, then write each piece as a stateless function, then pipe your data through a graph of these functions.

With other programming paradigms, you can't just knock out all the little pieces without thinking about the other pieces because the state is all tangled up. Which slows you down.

It's surprising how very simple the individual components can be, even for very complex systems, when following this approach.
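A toy Python example of the shape being described; the functions and the pipe helper are made up for illustration:

    from functools import reduce

    # Each piece is a small stateless function...
    def parse(line: str) -> dict:
        user, amount = line.split(",")
        return {"user": user, "amount": float(amount)}

    def add_tax(order: dict) -> dict:
        return {**order, "total": order["amount"] * 1.08}

    def to_receipt(order: dict) -> str:
        return f"{order['user']}: {order['total']:.2f}"

    # ...and the program is just data piped through a graph of them.
    def pipe(value, *funcs):
        return reduce(lambda acc, f: f(acc), funcs, value)

    print(pipe("alice,10.00", parse, add_tax, to_receipt))  # alice: 10.80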


I use GitHub Issues threads for this and it works amazingly well.

Any task I'm working on has a GitHub issue - in a public repo for my open source work, or a private repo for other tasks (including personal research).

As I figure things out, I add comments. These might have copy-pasted fragments of code, links to things I found useful, quoted chunks of text, screenshots, or references to other issues.

I often end up with dozens of comments on an issue, all from me. They provide a detailed record of my process and also mean that if I get interrupted or switch to something else I can quickly pick up where I left off.

Here's a public example of one of my more involved research threads: https://github.com/simonw/public-notes/issues/1

I also create a new issue every day to plan the work I intend to get done and keep random notes in. I wrote about how that works here: https://til.simonwillison.net/github-actions/daily-planner
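If you ever want to append notes to an issue without leaving the terminal, the GitHub REST API makes that a one-liner; a rough sketch, where the repo, issue number, and token are placeholders:

    import os
    import requests

    # Append a research note as a comment on an existing issue.
    # OWNER/REPO and the issue number below are placeholders.
    resp = requests.post(
        "https://api.github.com/repos/OWNER/REPO/issues/1/comments",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": "Found a useful reference: https://example.com"},
    )
    resp.raise_for_status()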


When you're creating your embeddings, you can store keywords from the content (extracted using an LLM) in the metadata of each chunk, which would positively increase the relevancy of the results returned from retrieval.

LlamaIndex does this out of the box.
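The idea, roughly, in plain Python; the keyword-extraction call is a hypothetical stand-in for an LLM request, and LlamaIndex bundles an equivalent metadata extractor:

    def llm_extract_keywords(text: str, n: int = 5) -> list:
        """Hypothetical: ask an LLM for the n most salient keywords in `text`."""
        raise NotImplementedError

    def build_chunks(document: str, chunk_size: int = 500) -> list:
        chunks = []
        for i in range(0, len(document), chunk_size):
            text = document[i:i + chunk_size]
            chunks.append({
                "text": text,
                # Stored alongside the chunk's embedding; the retriever can
                # match on these keywords to boost result relevancy.
                "metadata": {"keywords": llm_extract_keywords(text)},
            })
        return chunks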


Fantastic essay. Highly recommended!

I agree with all key points:

* There are problems that are easy for human beings but hard for current LLMs (and maybe impossible for them; no one knows). Examples include playing Wordle and predicting cellular automata (including Turing-complete ones like Rule 110). We don't fully understand why current LLMs are bad at these tasks.

* Providing an LLM with examples and step-by-step instructions in a prompt means the user is figuring out the "reasoning steps" and handing them to the LLM, instead of the LLM figuring them out by itself. We have "reasoning machines" that are intelligent but seem to be hitting fundamental limits we don't understand.

* It's unclear if better prompting and bigger models using existing attention mechanisms can achieve AGI. As a model of computation, attention is very rigid, whereas human brains are always undergoing synaptic plasticity. There may be a more flexible architecture capable of AGI, but we don't know it yet.

* For now, using current AI models requires carefully constructing long prompts with right and wrong answers for computational problems, priming the model to reply appropriately, and applying lots of external guardrails (e.g., LLMs acting as agents that review and vote on the answers of other LLMs).

* Attention seems to suffer from "goal drift," making reliability hard without all that external scaffolding.

Go read the whole thing.


Hey, I personally recommend smaller platforms. I'm not a native English speaker, so the platforms I use will probably be useless to you, but I can distinguish a few key points: Keep it small; companies that go to niche platforms probably aren't looking for 1000 applicants. Look for human-written posts that actually tell you something about the company, instead of the classic "we provide complex solutions for big companies that provide services to other big companies". Look for direct contacts, like a phone number or an email address, so you can engage in a real human interaction sooner and speak for yourself, instead of letting people judge you based on a resume that might have some minor flaws.

Check the communities, like reddit, telegram/whatsapp groups, HN, etc. Again, direct communication can do wonders.

Check companies' websites: most tech companies have a dedicated Careers page with all the open positions and requirements. Write them a personal email and wait for the response. This can earn you an extra karma point, because you didn't stumble on them on some hiring website while looking for a job; you found the position on their own website, which already tells them something about how much you want to work there.

If you are interested in startups, you can check some Y Combinator reports/news, or look at some Product Hunt posts. It's a riskier bet, but if you are into that kind of thing, it can be a great path.

Hope this can be helpful, best of luck with your searches!


I watched the things mentioned in sibling comments, but they didn't help.

Until I found this:

https://www.youtube.com/@algorithmicsimplicity

Instantly clicked. Both convolution and transformer networks.

EDIT: for the purpose of visualization, I highly recommend the following channel: https://www.youtube.com/watch?v=eMXuk97NeSI&t=207s

It nicely explains and shows the concepts of stride, features, window size, and the input-to-output size relation in convolutional NNs.
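For reference, the input-to-output size relation it covers reduces to one formula; a quick Python sanity check:

    def conv_output_size(input_size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
        """Spatial size of a convolution's output along one dimension."""
        return (input_size + 2 * padding - kernel) // stride + 1

    # e.g. a 224-wide input, 7x7 window, stride 2, padding 3 -> 112
    print(conv_output_size(224, kernel=7, stride=2, padding=3))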


If you're a slow metabolizer of caffeine like I am, stay away from caffeine.

CYP1A2

https://www.geneticlifehacks.com/liver-detox-genes-cyp1a2/

The difference in sleep quality is dramatic.

If I have caffeine, even a small 20mg at 7am, I'm up 4-6 times the next night going to the bathroom, and my sleep is superficial.

Without caffeine, I'm in a deep sleep. So much so that I don't change positions at all, and my body slightly aches from being in the same position so long. My bladder nearly feels like it's going to burst, because I've slept so long.

There was a study I saw a while back that said eating cruciferous vegetables speeds up caffeine metabolism. I've tried that, but it didn't seem to help. The caffeine still seemed to disturb my sleep. I tried BrocoMax, a broccoli supplement, and that didn't seem to help either.

Exercise helps a little bit. But it's still not the quality of sleep I receive with zero caffeine.

I think much faster when I drink caffeine. Recently I revisited this issue and tried micro-dosing 5-Hour Energy (2mL). At first it seemed promising, but then it seems to slowly build up in my system. Sleep quality deteriorates more slowly, but the deterioration is still there. I prematurely posted this status.

https://twitter.com/aantix/status/1706020516060971399

Sadly, it doesn't appear that I can drink caffeine and have quality sleep.

I hate that I have to choose.


This series of challenges from Fly.io seems to be good: https://fly.io/dist-sys/

I bought a set of carbon steel pans and my cast iron ones fell into complete disuse.

Especially the carbon steel crepe pans (https://www.debuyer.com/en/poele-a-crepes-mineral-b-1472.htm...) are completely unbeatable. I have two of them and I can feed a small crowd of family and friends faster than they can eat. Nothing sticks if you take proper care.

The best thing, they are dirt cheap.


I'm an engineer and talking from experience here.

1. Keep the DB schema simple. Remember you are a startup: things change, and so do the DB schema and your architecture.

2. One piece of advice from personal experience: microservices usually don't work well for startups; try building your solution as a modularized monolith.

3. Always make decisions that are reversible (a two-way door), and make sure coming back is also easy.

4. For any feature you release, have qualitative and quantitative metrics, and more importantly, guardrail metrics.

5. Wherever an off-the-shelf solution exists, like feature flags (LaunchDarkly) or product analytics (Amplitude), use it. These are solutions used by countless startups; do not reinvent the wheel.

6. Prioritize customer feedback: it is going to get you money, and customers are going to help make your product better. Instead of "failing fast", prioritize "learning fast".

7. Be focused on solving the problem; don't romanticize the solution.


They should study this in game design classes, to show how much you can motivate random behavior by progressively disclosing achievements to unlock.

I've been interfacing with GPT programmatically for a little while now, leveraging its "soft and fuzzy" interface to produce hard, machine-readable results. JSON was the format that felt best-suited for the job.

I see a ton of code in this project, and I don't know what most of it does. As far as GPT troubles with JSON, I'll add a couple: sometimes it likes to throw comments in there as if it was JS. And sometimes it'll triple-quote the JSON string as if it was Python.

My approach to solve these problems was via prompt engineering - using the system message part of the API call. Asking it to "return valid json, do not wrap it in text, do not preface it with text, do not include follow-up explanations, make sure it's valid json, do not include comments" - seems to work 99% of the time. For the remainder, a try-and-catch block with some fallback code that "extracts" json (via dumb REs) from whatever text was returned. Hasn't failed yet.
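Roughly what that looks like in Python; call_gpt is a hypothetical stand-in for whatever client call you make, and the regex fallback is intentionally dumb:

    import json
    import re

    SYSTEM = (
        "Return valid JSON. Do not wrap it in text, do not preface it with text, "
        "do not include follow-up explanations, and do not include comments."
    )

    def call_gpt(system: str, user: str) -> str:
        """Hypothetical wrapper around the chat-completion API call."""
        raise NotImplementedError

    def get_json(user_prompt: str) -> dict:
        raw = call_gpt(SYSTEM, user_prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # Fallback: crudely extract the outermost {...} block and retry.
            match = re.search(r"\{.*\}", raw, re.DOTALL)
            if match:
                return json.loads(match.group(0))
            raise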

It's fascinating to watch the new paradigm arrive, and people using old habits to deal with it. This entire project is kind of pointless, you can just ask GPT to return the right kind of thing.


I find the best way to learn technical topics is to build a simplified version of the thing. The trick is to understand the relationship between the high level components without getting lost in the details. This high level understanding then helps inform you when you drill down into specifics.

I think this book is a shining example of that philosophy: https://www.buildyourownlisp.com/. In the book, you implement an extremely bare-bones version of lisp, but it has been invaluable in my career. I found I was able to understand nuanced language features much more quickly because I have a clear model of how programming languages are decomposed into their components.
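In that spirit, even a toy subset of Lisp fits in a screenful. Here is a bare-bones s-expression evaluator in Python (integers, arithmetic, and define only; nowhere near the book's full build, just the flavor of it):

    import operator

    def tokenize(src: str) -> list:
        return src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(tokens: list):
        token = tokens.pop(0)
        if token == "(":
            expr = []
            while tokens[0] != ")":
                expr.append(parse(tokens))
            tokens.pop(0)  # discard ")"
            return expr
        try:
            return int(token)
        except ValueError:
            return token  # a symbol

    ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.floordiv}

    def evaluate(expr, env=ENV):
        if isinstance(expr, str):       # symbol lookup
            return env[expr]
        if isinstance(expr, int):       # literal
            return expr
        if expr[0] == "define":         # (define name value)
            env[expr[1]] = evaluate(expr[2], env)
            return env[expr[1]]
        func = evaluate(expr[0], env)   # (f arg1 arg2 ...)
        args = [evaluate(arg, env) for arg in expr[1:]]
        return func(*args)

    print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7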


I've only really used state machines in TypeScript via xstate, and I've been meaning to try Rust for a while, so your comment interests me. Got any tips or advice on further reading or specific libs that you can recommend for working with FSMs in Rust?
