I read the title a couple of times and I'm still not sure it isn't misleading. The benchmarks are not just for Postgres but for Postgres with the Mooncake extension, and there are also results for Postgres with other extensions. While it does rank among the faster databases, it is not the fastest, and not even within the top 10.
Do we know if the Gemma models are fundamentally different from the ones hosted as Gemini? Gemini 1.5 Flash seems to produce good results for its price and performance.
I was recently researching structured output generation for my project and I enjoyed using the Outlines library a lot. It felt quite fast, as it uses finite-state machines (FSMs) and indexing. There is some fine print, though:
1. Sometimes constraints can decrease the quality of the output, since the syntax of the response is prioritized over its content
2. For memory-constrained inference, certain sampling strategies like top-k can cause OOM errors if max_tokens is too high. I haven't verified that this is entirely due to structured generation, but I suppose it is possible for certain regexes.
3. Vision models and other multi-modal models are not supported yet.
Apart from this, closed models also offer JSON output, but I am not sure how consistent they are.
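The FSM approach mentioned above can be sketched in a few lines. This is a toy illustration of the masking idea only, not the actual Outlines API: at every step, the sampler may only pick characters that keep the output inside a regular language (here, the hand-built toy language {"yes", "no"}), which is also why syntax can win out over content.

```python
# Toy DFA for the regular language {"yes", "no"}.
# Keys are states; values map an allowed next character to the next state.
FSM = {
    0: {"y": 1, "n": 3},   # start state
    1: {"e": 2},
    2: {"s": 4},
    3: {"o": 4},
    4: {},                 # accepting state: no outgoing edges
}
ACCEPTING = {4}

def constrained_decode(score, state=0):
    """Greedy constrained decoding: among the characters the FSM allows,
    pick the one the (mock) model scores highest."""
    out = ""
    while FSM[state]:
        legal = FSM[state]            # the mask: only these characters are allowed
        ch = max(legal, key=score)    # model proposes, mask filters
        out += ch
        state = legal[ch]
    assert state in ACCEPTING         # output is guaranteed to be well-formed
    return out

# A mock "model" that slightly prefers 'n' over 'y':
print(constrained_decode(lambda c: {"n": 0.6, "y": 0.4}.get(c, 0.0)))  # prints "no"
```

In the real library, the regex or JSON schema is compiled into an automaton over the tokenizer's vocabulary, and the resulting per-state mask is applied to the logits before sampling, which is what makes it fast at inference time.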
The author did enroll his kid in school as a freshman after a break. Is this normally considered beneficial, having a period of unschooling followed by traditional school?
I do hear a lot that there's some level of engagement with a traditional school at some stage, but that the experience goes very differently.
Even as an adult, I sometimes want to enroll in an enrichment program that looks a lot like a school class (was just researching culinary training options yesterday, in fact), but there's a huge difference between how that lands and how it landed when I was "stuck" in school as a kid.
When you choose to be there because you've evaluated its merits and found something you consider worth the costs (whatever annoyances come with it), and you also know you can leave, it loses the "prison" aspect "schooling" has for a lot of kids.
Our oldest daughter will be 16 next month. Last year, she decided to take a couple of agriculture-related classes at the local high school so she could be part of FFA and show livestock. This year she spent three hours per day there.
She’s currently spending the next three days at an FFA leadership event at a university about five hours away - despite never having been enrolled as a student in a public school, she will be the only person in her (small) FFA chapter to have ever qualified and attended.
She’s planning on doing “high school” for one more year, then attending our local community college to get an associate’s degree. While she technically “won’t be a high school graduate” when she turns 18, she will have a two-year college degree and about half the transferable credits necessary to graduate from a four-year university if that’s what she wants. For that matter, she could start university this fall if she wanted - but it doesn’t make much logistical sense to send a 16-year-old to live on her own, and that’s not what she wants to do anyhow.
I don’t think there is a true definition of “pure unschooling”. By its very nature, every child ends up valuing different things and making unique choices.
Curious to hear more about your process here: did you start off with one of you homeschooling her and slowly give her more freedom as she got older? My oldest is not quite two, and we're thinking about how we're going to approach this as they get older; your case sounds intriguing to me.
Nope. It was honestly our intention since the very beginning.
My wife and I are anarchists, and the idea of requiring our kids to attend a government school is kinda anathema to us. We knew about unschooling before our oldest was old enough for her peers to enter kindergarten, so we never really did anything else.
If there’s any one thing we’ve learned it’s that the key is to just let them live life with you. They’ll develop their own interests; support them in that.
If you’re concerned about them learning a specific skill or concept, find a way that it’s required for something they want to do. My oldest learned to read at a conversational pace through Guild Wars 2; my youngest learned basic math through crochet and needlepoint.
Learning is part of human nature. Kids aren’t an exception to that. As long as they have supportive people around them, they’ll learn everything they need to achieve their own goals - and in the process, they’ll learn “how to learn” and build the self-confidence needed to embark on more and more ambitious projects as they get older.
Looks like some of the docs are generated by an LLM. I see pictures with typos and imagined terms, incomplete texts, etc. I wonder to what extent we can trust the rest of the docs.
Scroll down to the end and the removed text is totally suspect. I wouldn't be too surprised if all of this was generated by an LLM and then anything strange was edited by a human. Another reason not to leave everything to the LLM.
On a tangential note, I find window management in macOS much worse than in Windows. Want to split windows? You end up with full screen. When on multiple monitors, selecting an app on one screen makes the same app active on the other screen (or sometimes it doesn't). I am willing to rewire my habits if I can just figure out how to make the Mac window manager behave deterministically. I just don't get what grammar of user interaction the designers went for.
It has been a while since I've used Mac OS, so I don't know if my comment is still valid:
The original Macintosh could only run one application at a time, though the application could have several documents open at a time. With the introduction of multitasking, a decision had to be made. Apple decided to go with something that resembles the multiple document interface, except all of the windows for a single application were effectively placed on a single layer (rather than being contained within another window, which was quite common on Windows back in the day). What you are seeing in that multiple screen setup is effectively a historical artifact.
This is really shoddy reporting. The title says Azure data breach, whereas the attack is a phishing campaign targeting Office 365. Being a phishing campaign, it is unclear which components of Azure/Microsoft were instrumental in the attack. And the article goes on to make irrelevant allusions to Microsoft's negligent cybersecurity practices. This is such bad reporting that I wonder whether it stems from incompetence or malfeasance.
This is pretty impressive; it seems that OpenAI consistently delivers exceptional work, even when venturing into new domains. But looking into their technical paper, it is evident that they are benefiting from their own body of past work and from the enormous resources available to them.
For instance, the generational leap in the video generation capability of Sora may be possible because:
1. Instead of resizing, cropping, or trimming videos to a standard size, Sora trains on data at its native size. This preserves the original aspect ratios and improves composition and framing in the generated videos, but it requires massive infrastructure. It is eerily similar to how GPT-3 benefited from the blunt approach of throwing massive resources at a problem rather than extensively optimizing the architecture, dataset, or pre-training steps.
2. Sora leverages the re-captioning technique from DALL-E 3, using GPT to turn short user prompts into longer, detailed captions that are sent to the video model. Although it remains unclear whether they employ GPT-4 or another internal model, it stands to reason that they have access to a superior captioning model compared to others.
This is not to say that inertia and resources are the only factors differentiating OpenAI; they may also have access to a much better talent pool, but that is hard to gauge from the outside.
Gemini Advanced> Summarize this video: https://www.youtube.com/watch?v=ogJsCPqgFMk
This video is about a new method for training text-to-image diffusion models called Würstchen. The method is significantly more efficient than previous methods, such as Stable Diffusion 1.4, and can achieve similar results with 16 times less training time and compute.
The key to Würstchen's efficiency is its use of a two-stage compression process. The first stage uses a VQ-VAE to compress images into a latent space that is 4 times smaller than the latent space used by Stable Diffusion. The second stage uses a diffusion model to further compress the latent space by another factor of 10. This results in a total compression ratio of 40, which is significantly higher than the compression ratio of 8 used by Stable Diffusion.
The compressed latent space allows the text-to-image diffusion model in Würstchen to be much smaller and faster to train than the model in Stable Diffusion. This makes it possible to train Würstchen on a single GPU in just 24,000 GPU hours, while Stable Diffusion 1.4 requires 150,000 GPU hours.
Despite its efficiency, Würstchen is able to generate images that are of comparable quality to those generated by Stable Diffusion. In some cases, Würstchen can even generate images that are of higher quality, such as images with higher resolutions or images that contain more detail.
Overall, Würstchen is a significant advance in the field of text-to-image generation. It makes it possible to train text-to-image models that are more efficient and affordable than ever before. This could lead to a wider range of applications for text-to-image generation, such as creating images for marketing materials, generating illustrations for books, or even creating personalized avatars.
> Of 1,463 proteins analysed, aided by with a type of artificial intelligence known as machine learning, 11 proteins were identified and combined as a protein panel, which the researchers have shown to be highly accurate at predicting future dementia.
I understand that press releases are intended for non-technical folks, but I don't get the point of this description. Is it assumed that machine learning is less well understood than artificial intelligence?