Hi HN! We built this to make "why AI works" feel tangible without drowning in math; feedback welcome!
- What it is
We built a structured AI-literacy platform to unpack core AI concepts with: bite-sized lessons + post-lesson quizzes, dozens of in-browser visualizers (neural nets, tokenizers, CNNs, GPT-2, etc.), audio and curated videos, and a daily AI news feed.
- Why we built it
Most AI courses today feel either too math-heavy, too surface-level, too narrow (covering only a small slice of AI), or just too expensive. We wanted a single place that balances rigor with intuition - accessible enough for high schoolers (and possibly younger), and structured enough to give adults a solid foundation.
- Who it's for
Anyone curious about AI (what it is, how it works, why it works, and when it doesn't), and in particular: high school and college students, who should learn AI fundamentals the way they learn math and English; teachers looking for classroom-ready materials; and adults who find themselves lost in jargon and just want to make sense of it all.
- What to try
Lessons + Thinking Corner & Teacher Notes + audio clips - start with Unit 1's first 5 lessons (free after sign-up); go through the slides, check the Thinking Corner/Teacher Notes on each slide for additional insight, and pop open the accompanying audio clip to reinforce the material.
AI visualizers - GPT-2 Explorer, Tokenizer Playground, Neural Network Visualizer (free). Each visualizer includes sources and a brief 'how it works' section.
Assessments - take the short check-for-understanding at the end of each lesson.
Curated videos (YouTube) - browse through a library of handpicked short videos to reinforce concepts.
AI news feed - finish the day with a 2-minute skim of today's AI headlines; it uses AI to retrieve and rank the news, with source links.
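To give a feel for what the Tokenizer Playground demonstrates, here is a minimal, self-contained sketch of subword tokenization via greedy longest-match segmentation. The toy vocabulary is entirely made up for illustration; real GPT-2 tokenization uses byte-pair encoding over roughly 50k learned merges, which our visualizer shows.

```python
def tokenize(text, vocab):
    """Greedy longest-match segmentation against a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest substring starting at i that is in the vocab.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

# Toy vocabulary; note "ization" beats the shorter "iz" because we match longest-first.
vocab = {"token", "iz", "ization", "play", "ground", " "}
print(tokenize("tokenization playground", vocab))
# → ['token', 'ization', ' ', 'play', 'ground']
```

This captures the core intuition the playground builds on: models never see raw words, only vocabulary pieces, which is why rare words split into several tokens.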
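For the curious, a news feed like the one above can be sketched as "score by relevance, decay by age." This is a hypothetical illustration, not our production pipeline: the `rank_headlines` function, its keyword-count relevance, and the half-life decay are all assumptions made for the example.

```python
from datetime import datetime

def rank_headlines(items, keywords, now, half_life_hours=24.0):
    """Rank items by keyword hits in the title, exponentially decayed by age."""
    def score(item):
        title = item["title"].lower()
        relevance = sum(title.count(k.lower()) for k in keywords)
        age_hours = (now - item["published"]).total_seconds() / 3600.0
        # Halve the score for every `half_life_hours` of age.
        return relevance * 0.5 ** (age_hours / half_life_hours)
    return sorted(items, key=score, reverse=True)

items = [
    {"title": "New GPT model released", "published": datetime(2024, 1, 1, 12)},
    {"title": "GPT agents and GPT tooling", "published": datetime(2024, 1, 1, 0)},
    {"title": "Weather today", "published": datetime(2024, 1, 2, 0)},
]
ranked = rank_headlines(items, ["gpt"], now=datetime(2024, 1, 2))
print([item["title"] for item in ranked])
```

A real system would replace keyword counting with an embedding-based relevance model, but the recency/relevance trade-off works the same way.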
- How we built it
React front-end + Python/Flask services (parts scaffolded with Lovable). Slides via Gamma. Visualizers are a mix of in-house and open-source (credited). Audio clips are generated with NotebookLM, grounded in the lesson content.
- Limitations
Balancing layman-friendly explanations with technical accuracy was not easy - corrections are welcome.
- Roadmap / feedback
Did we get anything wrong? Where does the difficulty curve feel off? What new content or features should we add? Our current plan is to add more lessons and visualizers. We are also exploring whether to add more practical lessons like building agents, vibe-coding app development, etc. Ideas and critiques are very welcome. Thanks!
Kudos for doing this. Right now high school and college students are very much unprepared for the storm ahead; this stuff can do some good in helping them catch up to a critical baseline.
Exactly! Most of our team are parents too. When we created the slides for the first lesson a couple of months ago, I ran them by my 9-year-old daughter that night, and she actually followed along (with my guidance :). That inspired us to finish the current version. Please try it out - we will keep adding new lessons to the set!