
That's been happening consistently for over a year now. Small models today are better than big models from a year or two ago.


I used Continue before Cursor. Cursor’s “agent” composer mode is so much better than what Continue offered. The agent can automatically grep the codebase for relevant files and then read them. It can create entirely new files from scratch. I can still manually provide some files as context, but it’s not usually necessary. With Continue, everything was very manual.

Cursor also does a great job of showing inline diffs of what composer is doing, so you can quickly review every change.

I don’t think there’s any reason Continue couldn’t match these features, but it hadn’t, last I checked.

Cursor also focuses on sane defaults, which is nice. The tab completion model is very good, and the composer model defaults to Claude 3.5 Sonnet, which is arguably the best non-reasoning code model. (One would hope that Cursor gets agent-composer working with reasoning models soon.) Continue felt much more technical… which is nice for power users, but not always the best starting place.


  Location: Birmingham, AL
  Remote: Yes
  Willing to relocate: Yes
  Technologies: Go, Rust, TypeScript, React, Postgres, Kafka, AWS, GCP, etc.
  Résumé/CV: https://drive.google.com/file/d/1VNC272B3n7ZEfppMHkm2wGgaINwYl4Av/view
  Email: listed on resume
I have 8+ years of experience as a mostly backend-focused full-stack engineer. I enjoy working on high-scale systems that have a positive impact and require both efficiency and reliability, but I’m open to many options right now.

I’ve taken about a year to explore different business ideas, but I don’t think any of them are going to work out, so I’m very interested in getting back to a more normal job. But I did get more experience with native iOS Swift+SwiftUI development along the way, making a really polished app that I use every day. Even though I am backend-focused, I can contribute to whatever technical needs arise.

If I were going to relocate, I would see SoCal as a plus, but I’m open to options.


I have never had a chance to use C# professionally, but it was one of the first languages I taught myself when I was first learning programming when I was a kid. I have a lot of fond memories of it, and I hear so many positive things about C# and .NET Core these days… but I just don’t see very many interesting tech jobs where I would have the chance to use C#. So, I’ve mostly used Go, Rust, and TypeScript through my career up to this point.

If anyone wants to point me to some good C# opportunities, I’m interested. (Tracebit is looking for a founding engineer, so presumably they want someone who is already an expert at C#, and I doubt they want to sponsor the work visa needed to bring someone to the UK.)


I think they wanted to define free functions, not figure out a way to use functions without a class name, but I could be wrong.


Yes, that's what I meant.


Mistral Small 3 is roughly comparable in capabilities to 4o-mini (apart from 4o-mini's support for multimodality). o1-mini was already better than GPT-4o (full size) at tasks like writing code, and o3-mini is supposedly better than o1 (full size) at those tasks... so o3-mini should be in a completely different league from Mistral Small 3, and it's not even close.

Of course, the model has only been out for a few hours, so whether it lives up to the benchmarks or not isn't really known yet.


> 2.0 is the full model

Not quite. "2.0 Flash" is also called 2.0. The "Pro" models are the full models. But I love how they have both "gemini-exp-1206" and "gemini-2.0-flash-thinking-exp-01-21". The first one doesn't even say what type of model it is; presumably it should have been "gemini-2.0-pro-exp-1206", but they didn't want to label it that for some reason, and now they're putting a hyphen in the date string where they weren't before.

Not to mention they have both "Flash" and "Flash-8B"... which I think will confuse people. IMO, it should be "Flash-${Parameters}B" for both of them if they're going to mention it for one.

But, I generally think Google's Gemini naming structure has been pretty decent.


> seems likely that its a 70B-llama distillation since that's what AWS offers on Bedrock

I think you misread something. AWS mainly offers the full size model on Bedrock: https://aws.amazon.com/blogs/aws/deepseek-r1-models-now-avai...

They talk about how to import the distilled models and deploy those if you want, but AWS does not appear to be officially supporting those.


Aha! Thanks, that's what I was looking for. I ended up on the blog post about how to import custom models, including the DeepSeek distills:

https://aws.amazon.com/blogs/machine-learning/deploy-deepsee...


Make is primarily used with C and C++. It is not commonly used with Java, Rust, Go, Node.js, or much of anything besides C and C++. Make is not "generally used with other languages".



This doesn't prove anything at all. Of course the toolchain has to be built somehow. Some toolchains use make to do that, rather than depending on the previous version of the toolchain's build system. Some toolchains are written in a language completely separate from their downstream language, so they obviously wouldn't be compatible with their own toolchain.

Downstream projects in these languages do not typically use Make.

More to the point, I clicked on the Go one, and it's just including this tiny "Make.dist" file that does nothing except invoke "go tool": https://github.com/golang/go/blob/master/src/Make.dist

Wow. So useful.

I clicked on the Rust one, and not only did it seem to be specific to some old testing infrastructure, but I found this note:

> There are two kinds of run-make tests:

> The new rmake.rs version: this allows run-make tests to be written in Rust (with rmake.rs as the main test file).

> The legacy Makefile version: this is what run-make tests were written with before support for rmake.rs was introduced.

So, it's an obsolete system that is being migrated away from.

But, again, the main point is that what the toolchain does with its free time has little to do with how end user applications are developed, and the complaints in this thread were strictly about building applications in distros, not about building toolchains.

If an application in one of these languages uses make, it is typically just a little syntax sugar around the toolchain commands, which does absolutely nothing to absolve the project of the complaints Linux distro maintainers have about how dependencies are managed.


In case you're not trolling (and it's really hard to tell): those makefiles are for building projects whose source code is written in C or C++. The projects they are building are things like the Java runtime, the Go runtime, or the Rust compiler, but they are not building projects whose source code is written in Java, Rust, Go, etc.

What people are claiming is that make is used as a build system for projects whose source code is written in C or C++.


Make is the common denominator in most projects I come across regardless of language. I see lots of frontend projects and certainly Go and Rust projects using Make quite often.

Ironically, many modern C/C++ projects use CMake to generate Makefiles. If anything, my observation is the inverse of yours.


Are those Makefiles doing anything more than calling "go build" and "cargo build"?

Because if they're still using the language-specific build tools and dependency management systems, then I think you would find that the Fedora maintainer higher in this thread would not be any happier that there is a sugar coating of Make. That's not what they're asking for, based on other rants I've seen from Linux distro maintainers.


The barebones ones do exactly what you mentioned: simple calls to the canonical build tool.

The more complex ones at $JOB actually do some caching, dependency management, code generation, and compilation.


I build my OCaml stuff with `make`. I use `dune` only for libraries, because it makes installing them super easy.


Wow! This is some phenomenal engineering!

> Another area that stumped me is how to shut the power off 100% on the device, so that it can remain “off” for weeks or months.

This is actually a pretty solvable problem...

https://circuitcellar.com/resources/quickbits/soft-latching-...

Then the microcontroller can choose at any time to completely shut off the entire circuit (including itself), and extremely little power will be consumed until something (like a button) completes the power-on circuit again.

For ease of prototyping, there are off-the-shelf units you can play with: https://www.sparkfun.com/sparkfun-soft-power-switch-jst-2mm....

Some more advanced soft power switch circuits (like the SparkFun switch) also include the ability to forcibly power down a misbehaving device by holding down the button.

The design used in the SparkFun switch also allows your microcontroller to know if the button is pushed while the device is running, so you could imagine repurposing your existing button to also restore power to the device if the device is off, and still retain the existing functionality for cycling through watch faces. Then, either the device could automatically shut itself off after a period of inactivity or when the battery gets too low, or the user could click and hold the button for some number of seconds to turn the device off completely that way.
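
To make that concrete, here is a rough firmware-side sketch (Arduino-style C++; BUTTON_PIN, KILL_PIN, the timeouts, and the latch polarity are all made-up placeholders that depend on the specific latch circuit):

  #include <Arduino.h>

  const int BUTTON_PIN = 4;             // hypothetical: same button that closes the latch
  const int KILL_PIN   = 5;             // hypothetical: GPIO that releases the soft latch
  const unsigned long HOLD_MS = 3000;   // hold for 3 s to power off
  const unsigned long IDLE_MS = 60000;  // auto power-off after 60 s of inactivity

  unsigned long lastActivity = 0;
  unsigned long pressStart = 0;

  void powerOff() {
    digitalWrite(KILL_PIN, HIGH);  // release the latch (polarity depends on the circuit);
    while (true) {}                // the whole board loses power here, so this never returns
  }

  void setup() {
    pinMode(BUTTON_PIN, INPUT_PULLUP);
    pinMode(KILL_PIN, OUTPUT);
    digitalWrite(KILL_PIN, LOW);   // keep the latch closed while running
    lastActivity = millis();
  }

  void loop() {
    if (digitalRead(BUTTON_PIN) == LOW) {              // button held down
      if (pressStart == 0) pressStart = millis();
      if (millis() - pressStart >= HOLD_MS) powerOff();
      lastActivity = millis();
    } else if (pressStart != 0) {                      // short press released:
      pressStart = 0;                                  // cycle watch faces here
      lastActivity = millis();
    }
    if (millis() - lastActivity >= IDLE_MS) powerOff();
  }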


This is exactly what I was looking for! I had a foggy idea of using a MOSFET and the button to close/open the circuit but I needed something like this to show how it all fits together. Thank you!


No problem. I haven’t had a chance to do anything hardware related for a long time, so it’s fun to think about hardware problems again.

On the topic of extending battery life mentioned in the article, one relatively straightforward thing to investigate is simply reducing the processor clock speed. Your application probably doesn't need to run at full tilt. I believe there is a function called setCpuFrequencyMhz; it only works with a few specific frequencies, but the lower the frequency you can pick (while still keeping up with your application's needs), the less power the system should consume while awake.
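
A minimal sketch, assuming the Arduino-ESP32 core (the supported values are chip dependent; 240/160/80 MHz generally work, with lower values available when WiFi/BT are unused, so check the docs for your chip):

  #include <Arduino.h>

  void setup() {
    // Drop from the default 240 MHz to 80 MHz; setCpuFrequencyMhz()
    // returns false if the requested frequency isn't supported.
    if (!setCpuFrequencyMhz(80)) {
      // handle/log the failure as appropriate
    }
    Serial.begin(115200);
    Serial.printf("CPU running at %lu MHz\n", (unsigned long)getCpuFrequencyMhz());
  }

  void loop() {}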

Of course, you want to be putting the processor to sleep between updates anyway, and there is a trade-off between sleeping more (which means running the processor faster so it can sleep sooner: "race to sleep") and the inefficiency of running the processor higher on its frequency/efficiency curve, so the optimal frequency might be neither the lowest nor the highest option. It's something that would need to be measured.

Just some thoughts! It might not make a big difference if the processor is already sleeping most of the time, but I figured I would mention it as something to try.


Another alternative is to use the ESP32's deep sleep mode. You can tell the ESP to sleep until some event occurs, and there are many options for waking up the microcontroller.

https://docs.espressif.com/projects/esp-idf/en/stable/esp32c...

It uses very little power, so a standard 18650 battery could last years on a single charge.

https://www.programmingelectronics.com/esp32-deep-sleep-mode
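
A minimal timer-wakeup example (Arduino-style C++ on top of the ESP-IDF esp_sleep.h APIs; the 5-minute interval is just an illustration):

  #include <Arduino.h>
  #include <esp_sleep.h>

  const uint64_t SLEEP_US = 5ULL * 60ULL * 1000000ULL;  // sleep for 5 minutes

  void setup() {
    // ... do the periodic work here (read sensors, update the display, etc.) ...

    esp_sleep_enable_timer_wakeup(SLEEP_US);  // arm the RTC timer
    esp_deep_sleep_start();                   // power down; never returns
  }

  void loop() {}  // never reached: waking from deep sleep is a reset, so setup() runs again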


The ESP32-S3 spends most of its time in deep sleep, only waking up for about 5-10 seconds every 5 minutes. The problem I ran into was that not just the ESP32-S3 but every component on the board (the accelerometer, the haptic motor driver, the LiPo battery charger, the 3v3 LDO) has some constant, minimum (quiescent) current draw. Even how your resistors are configured (pullup or pulldown) and their resistance values contributes. To diagnose it, I would need a precision measurement tool like a ($1k) Joulescope https://www.joulescope.com/products/js220-joulescope-precisi... and probably a significant amount of time.
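
(For a sense of scale, with made-up numbers: if the board draws 40 mA for 8 s out of every 300 s, plus 300 µA of combined quiescent draw, the average is roughly 40 × 8/300 + 0.3 ≈ 1.4 mA, so the "sleeping" current is already more than a fifth of the total budget; cutting the quiescent draw to 30 µA would drop the average to about 1.1 mA.)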


I recommend the nRF PPK II (https://www.nordicsemi.com/Products/Development-hardware/Pow...); it's (relatively) affordable and would most likely do the job just fine for you.


Here's a nice video featuring the product: https://www.youtube.com/watch?v=GqmnV_T4yAU


I have also recently been through the same steep learning curve, and the following worked for me. Reading spec sheets is fine, but nothing beats measurement if it's feasible. I built a custom PCB with all the power pins for all the peripherals broken out, so I could put an ammeter in series with each of them individually. Then I used Nordic's inexpensive Power Profiler Kit 2 (search for Nordic PPK2; it's under $100). Really decent specs, with a 100 kHz sampling rate and 100 nA resolution, and you connect it to a PC to see the charts. I also bought my own resin 3D printer. They are so cheap these days, and it helped with iterating on designs and not having to wait days for things to arrive. PS: great post, loved it.


I had this problem on a low-power design. We were making an industrial temperature sensor for something that moved, so we needed to run on battery.

What I ended up doing was using a second voltage regulator with an enable pin for all the accessories. When the MCU wakes up it turns power on to the accessories and waits for things to stabilize, maybe 1ms or so. Then it does what it needs to do and before going back to sleep turns all the accessories back off.

Costs you a few extra parts, a second voltage bus, and the hassle of programming, but it turns "small quiescent draw" into essentially zero. Maybe the regulator has a bit of leakage, but it should be picoamps or less.
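
Firmware-wise, each wake cycle is just a GPIO toggle on the regulator's enable pin. A minimal sketch, assuming an ESP32 and a made-up EN_PIN (the settle time and sleep interval are placeholders, and a hardware pulldown on EN keeps the rail off while the MCU itself is asleep):

  #include <Arduino.h>
  #include <esp_sleep.h>

  const int EN_PIN = 7;  // hypothetical GPIO wired to the accessory regulator's enable pin

  void setup() {
    pinMode(EN_PIN, OUTPUT);
    digitalWrite(EN_PIN, HIGH);  // power up the accessory rail
    delay(1);                    // ~1 ms for the rail to stabilize (regulator dependent)

    // ... read the sensor, store/transmit the result ...

    digitalWrite(EN_PIN, LOW);   // accessories back to essentially zero draw

    esp_sleep_enable_timer_wakeup(60ULL * 1000000ULL);  // wake again in 60 s
    esp_deep_sleep_start();
  }

  void loop() {}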


Here's a nice write-up where power consumption is measured using a voltmeter: https://peppe8o.com/raspberry-pi-pico-w-power-consumption/


Wait until you look at the leakage current of capacitors! It's very poorly specified, if at all, and can actually swamp the consumption of active components in these low or sub-microamp situations. The dual voltage rail that msanford described is the way to go here: gate as much as you possibly can, and really focus on reducing the duty cycle.


The ESP's deep sleep is not great: the datasheet for the C3 says 5 µA. That's an order of magnitude above low-power microcontrollers (e.g. the ATSAML), and two orders of magnitude above an ultra-low-power timer. Not horrendous, but higher than I'd prefer for a tiny watch battery.

