The PIOs are state machines that let you develop custom peripherals that run asynchronously, without taking up CPU time. You could probably bitbang some custom peripherals on an ESP32/ESP8266, but that takes a lot of CPU time and power.
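To make that concrete, here's a minimal sketch in MicroPython (assuming a Pico-style board with an LED on GPIO 25 and MicroPython's rp2 PIO assembler). Once the state machine is started, the blinking runs entirely inside the PIO, with no further CPU involvement:

    import rp2
    from machine import Pin

    @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
    def blink():
        set(pins, 1) [31]   # drive the pin high, then stall 31 extra cycles
        nop()        [31]
        set(pins, 0) [31]   # drive the pin low, then stall again
        nop()        [31]   # whole loop = 128 state-machine cycles

    # Run the program on state machine 0 at a 2 kHz clock (~16 Hz blink).
    sm = rp2.StateMachine(0, blink, freq=2000, set_base=Pin(25))
    sm.active(1)

The CPU can sleep or do unrelated work the whole time the program runs.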
I'm having trouble seeing where the datasheet actually says the GPIO pins are 5V tolerant.
EDIT: okay, section 14.8.2.1 mentions two types of digital pins: "Standard Digital" and "Fault Tolerant Digital", and it looks like the FT Digital pins might be 5V tolerant.
Yep, I edited a few minutes ago to mention a reference I found in the datasheet. It's cool, but the reality seems a little more nuanced than that quote would indicate, since the tolerance only appears to apply to digital-only pins, not to any pin that happens to be used as GPIO. (So if a pin also supports analog input, for example, it will not be 5V tolerant.)
Sounds like it is an approximately 9.5-bit ADC now, instead of 9-bit like the RP2040 was. So... not much change.
Datasheet section 12.4.1 "Changes from RP2040"
- Removed spikes in differential nonlinearity at codes 0x200, 0x600, 0xa00 and 0xe00, as documented by erratum RP2040-E11, improving the ADC’s precision by around 0.5 ENOB.
- Increased the number of external ADC input channels from 4 to 8 channels, in the QFN-80 package only.
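For a sense of what that resolution means in practice, here's a rough MicroPython sketch (assuming ADC channel 0 on GPIO 26); the effective-bits figure in the comment comes from the ENOB discussion above, not from anything the API reports:

    from machine import ADC, Pin

    adc = ADC(Pin(26))            # ADC0 on GPIO 26
    raw = adc.read_u16()          # 12-bit sample, scaled to 0-65535
    volts = raw * 3.3 / 65535
    # Noise and DNL limit the effective resolution to roughly 9-9.5 bits,
    # so the bottom few bits of the 12-bit code are mostly noise.
    print(raw, round(volts, 3), "V")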
Supposedly it didn’t require any measurable amount of additional die space, because other things constrained the minimum size of the die (like the I/O pads), according to one of the Raspberry Pi engineers.
An additional ARM core would have required significant changes to the crossbar. Right now, only two cores can be active, not three.
It just means that the die already had to be large enough to physically fit the number of pin pads that they wanted to have. It doesn’t really say anything about the RISC-V cores. They could be big or small. But these do seem to be almost as powerful as the ARM cores, based on what people have said. (I still want to see more benchmarks.)
Coordinating access to the memory bus and peripherals is probably not easy when the cores were never designed to work together. Supporting it could impose a power/performance penalty at all times, even though most users are unlikely to want to deal with two completely different architectures across four cores on one microcontroller.
Having both architectures available is a cool touch. I believe I criticized the original RP2040 for not being bold enough to go RISC-V, but now they’re offering users the choice. I’ll be very curious to see how the two cores compare… I suspect the ARM cores will probably be noticeably better in this case.
They actually let you choose one Cortex-M33 and one RISC-V RV32 as an option (probably not going to be a very common use case) and support atomic instructions from both cores.
All of the public mentions of this feature that I've seen indicated it is an either/or scenario, but the datasheet confirms what you're saying:
> The ARCHSEL register has one bit for each processor socket, so it is possible to request mixed combinations of Arm and RISC-V processors: either Arm core 0 and RISC-V core 1, or RISC-V core 0 and Arm core 1. Practical applications for this are limited, since this requires two separate program images.
I'm an experienced software developer who has spent years building high-performance software that has to work correctly. I prefer to work in Rust, Go, and TypeScript, depending on the needs of the project, but I have experience with other languages too. I'm primarily a backend engineer, though I have full-stack experience as well.
For the past year or two, I've spent a lot of my free time learning about the application side of large language models (LLMs), including their strengths and limitations, so if you're doing anything with LLMs, that could be of particular interest to me.
When GPT-4o mini launched, I noticed that it didn't really perform any faster than GPT-4o. I thought this might change over time, but it still hasn't, so I finally sat down to do some more comprehensive benchmarking of different LLM APIs to see how they compare. The vast gulf in pricing between GPT-4o and GPT-4o mini would usually indicate a speed gulf too, but it is oddly missing... and the data actually indicates the smaller model is slower.
I didn't search for any similar benchmarks today, but I have searched in the past, and I've never been able to find any good reference for the tok/s that people are getting out of different hosted models. I hope other people will find this data valuable.
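For anyone who wants to reproduce a rough version of this, here's a simplified sketch of the measurement (assuming the openai Python package, v1+, and an API key in the environment); this is not the exact harness behind the numbers, just the core idea:

    import time
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def tokens_per_second(model: str, prompt: str) -> float:
        start = time.monotonic()
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=512,
        )
        elapsed = time.monotonic() - start
        # completion_tokens counts only generated tokens, so this is output
        # throughput, with time-to-first-token included in the denominator.
        return response.usage.completion_tokens / elapsed

    for model in ("gpt-4o", "gpt-4o-mini"):
        rate = tokens_per_second(model, "Write a 300-word story about a robot.")
        print(f"{model}: {rate:.1f} tok/s")

A more careful version would average over many requests and use streaming to separate time-to-first-token from steady-state generation speed, but even a crude loop like this is enough to compare throughput between models.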
The tokenizer system supports virtually any input text that you want, so it follows that it also allows virtually any output text. It isn’t limited to a dictionary of the 1000 most common words or something.
There are tokens for individual letters, but the model is not trained on text written with one token per letter; it is trained on text that has been converted into as few tokens as possible. Just as you would get very confused if someone spelled out entire sentences as they spoke to you, expecting you to reconstruct the words from the individual spoken letters, these LLMs would also perform terribly if you sent them one token per letter of input instead of the tokenizer scheme they were trained on.
Even though you might write a message to an LLM, it is better to think of that as speaking to the LLM. The LLM is effectively hearing words, not reading letters.
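If you want to see this yourself, here's a small sketch using the tiktoken library (assuming the o200k_base encoding used by the GPT-4o family); the exact counts depend on the text, so treat the numbers as illustrative:

    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")

    text = "The tokenizer breaks text into chunks, not letters."
    tokens = enc.encode(text)
    print(len(text), "characters ->", len(tokens), "tokens")
    print([enc.decode([t]) for t in tokens])   # mostly whole words / word pieces

    # Encoding the same text one character at a time uses far more tokens,
    # and it is nothing like the input the model was trained to "hear".
    per_char = sum(len(enc.encode(ch)) for ch in text)
    print(per_char, "tokens if each character were encoded separately")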
Gemini 1.5 Pro charges $0.35/million tokens for prompts up to one million tokens, or $0.70/million tokens for prompts longer than that, and it supports a multi-million token context window.
Substantially cheaper than $3/million, but I guess Anthropic’s prices are higher.
Is it, though? In my limited tests, Gemini 1.5 Pro (through the API) is very good at tasks involving long context comprehension.
Google's user-facing implementations of Gemini are pretty consistently bad when I try them out, so I understand why people might have a bad impression about the underlying Gemini models.