Is there a Linux equivalent of this setup? I see some mention of RDMA support for Linux distros, but it's unclear to me whether this is hardware-specific (requires ConnectX, or in this case Apple Thunderbolt) or whether there's something interesting that can be done with "vanilla 10G NIC" hardware?
Still doesn't exactly say what it is? I get that it's glyphs for printable characters, but honestly it could be a PDF, video, collection of PNGs or SVG files, an Adobe Illustrator file, a linux distribution, a web browser, or pretty much any other piece of software or data format. I presume it's a TTF or OTF font file?
Waymo's usually something like 50% more expensive than Lyft in SF, in my experience. But the drivers don't tailgate, have colds, listen to your conversation (AFAIK)...I'll generally opt for Waymo now if I have a choice. The biggest problem I have is that it's usually a longer wait due to the smaller fleet size, but if I'm planning ahead, I'll just book one for a given time, and that takes care of it.
Lyft from MV to SF is like $100 I think? It's definitely not enjoyable but for Bay Area prices it's not ruinously expensive.
You /should/ be able to save by using shared rides, but in practice when I tried the driver was so mad they just dumped me on the side of the road and I had to call and get a refund.
The new Caltrain schedule isn't half bad though, if it came twice as often on the weekends we'd be cooking.
Basalt is stronger than glass fibers (made from silica / quartz / sand), but not as strong as carbon fiber. Also, it's more expensive than glass, but less expensive than carbon. It's generally considered eco-friendly.
Interestingly, where carbon fiber's failure mode is instant and catastrophic (like, say, chalk), basalt fails more gradually (like, say, wood); in some use cases that's an advantage.
Overall, though, it's still not mass-produced, and it's uncertain whether it will ever reach scale.
If interested in fibers and composites, the YouTube channel Easy Composites is really interesting / educational. For example you can use flax fiber.
It also has one very interesting property that carbon fiber doesn't: it's not conductive. This means, for example, that you can put it in an MRI machine and get signal back. You can't do that with carbon fiber, which shields the return RF signals and gives you a dark image (though it doesn't damage anything). Basalt weave composites are basically completely transparent on an MRI.
(For the same reasons, it also can be microwaved successfully. Carbon fiber can not be microwaved. Do not microwave real carbon fiber or carbon fiber composites.)
> Interestingly, where carbon fiber's failure mode is instant and catastrophic (like, say, chalk), basalt fails more gradually (like, say, wood); in some use cases that's an advantage.
So, should we use it to make a submarine to visit the Titanic?
Price/performance. If the failure mode is slow, then my sport (rowing) could love this for boat construction that's stronger than fibreglass but cheaper than carbon fibre. I imagine surfboards and kayaks could work too.
Exactly this. I make kayaks, and basalt would be the perfect middle ground between FG and carbon where the boat will get dinged up in rivers. Unfortunately it's nearly impossible to obtain in small quantities for a hobbyist.
In addition to what sibling posts say, basalt is certainly abundant. Per Wikipedia, 90% of volcanic rock on earth is basalt. We're not going to run out of it.
I can imagine (I have no clue about this, I just watch manufacturing videos) that this is easier to mass-produce. A less refined version of this is used to make Rockwool, an insulation material similar to fiberglass. Melt the stuff, extrude it, ????, profit. https://www.youtube.com/watch?v=t6FWPTZjwLo
See uses here: https://en.wikipedia.org/wiki/Basalt_fiber
I am no material scientist, so I can't comment on the actual facts of why it might be better than Kevlar, Dyneema, or carbon in specific cases. But from experience there's a lot I don't know, and especially in engineering there's a lot to consider when putting materials under stressful conditions, which might put this in a specific niche where it's superior to those mentioned above.
Each material has its own issues. Kevlar is very difficult to work with (need special scissors to cut and you can't sand the finished product), Dyneema is sensitive to UV degradation. Carbon is $$$. Basalt sounds like the sweet spot for some of my applications but afaict it can't be purchased by the yard like most materials so is essentially unobtainium to a hobbyist who can't afford a $1k or so roll of material.
"I don't see why" has never been the bar for scientific advancement, fortunately. "Someone is curious" is sufficient, and "Someone involved sees potential" provides funding.
Seriously, how much else of the world's technology would you summarily do away with, because you simply don't see the point?
The ultimate "out of sight out of mind" solution to a problem?
I'm surprised that Google has drunk the "Datacenters IN SPACE!!!1!!" kool-aid. Honestly I expected more.
It's so easy to poke a hole in these systems that it's comical. Answer just one question: How/why is this better than an enormous solar-powered datacenter in someplace like the middle of the Mojave Desert?
From the post they claim 8 times more solar energy and no need for batteries because they are continuously in the sun. Presumably at some scale and some cost/kg to orbit this starts to pencil out?
If it can be all mostly solid-state, then it's low-maintenance. Also, design it to burn up before MTTF, like all the cool space kids do these days. Not gonna be worse than Starlink unless this gets massively scaled up, which it's meant to be (ecological footprint left as an exercise to the reader).
> Fundamentally, it is, just in the form of a swarm. With added challenges!
Right, in the same sense that existing Starlink constellation is a Death Star.
This paper does not describe a giant space station. It describes a couple dozen satellites in a formation, using gravity and optics to get extra bandwidth for inter-satellite links. The example they gave uses 81 satellites, which is a number made trivial by Starlink (it's also in the blog release itself, so no "not clicking through to the paper" excuses here!).
(In short, the paper seems to be describing a small constellation as a useful compute unit that can be scaled indefinitely - basically replicating the scaling design used in terrestrial ML data centers.)
> Right, in the same sense that existing Starlink constellation is a Death Star.
"The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity."
This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
> The example they gave uses 81 satellites…
Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
> This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
Irrelevant for spacecraft dynamics or for heat management. The problems of keeping satellites from colliding or shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload. It's, like, the basic tenet of digital computing.
> Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
A data center is made of multiples of some compute unit. This paper is describing a single compute unit that makes sense for machine learning work.
> The problems of keeping satellites from colliding or shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload.
The more compute you do, the more heat you generate.
> A data center is made of multiples of some compute unit.
And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.
> The more compute you do, the more heat you generate.
Yes, and yet I still fail to see the point you're making here.
Max power in space is either "we have x kWt of RTG, therefore our radiators are y m^2" or "we have x m^2 of nearly-black PV, therefore our radiators are y m^2".
Even for cases where the thermal equilibrium has to be human-liveable like the ISS, this isn't hard to achieve. Computer systems can run hotter, and therefore have smaller radiators for the same power draw, making them easier.
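To put rough numbers on that (my own back-of-envelope sketch, not from the paper; it assumes an ideal radiator and ignores view factors and solar heating of the radiator itself):

    /* Radiator sizing sketch: an ideal radiator rejects
     * P = eps * sigma * A * T^4, so A = P / (eps * sigma * T^4).
     * Illustrative numbers only. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double sigma = 5.670374419e-8; /* Stefan-Boltzmann constant, W/(m^2 K^4) */
        const double eps   = 0.9;            /* assumed emissivity */
        const double P     = 100e3;          /* assumed heat load: 100 kW of compute */

        printf("area at 350 K (hot electronics): %4.0f m^2\n",
               P / (eps * sigma * pow(350.0, 4.0)));   /* ~130 m^2 */
        printf("area at 300 K (human-liveable):  %4.0f m^2\n",
               P / (eps * sigma * pow(300.0, 4.0)));   /* ~240 m^2 */
        return 0;
    }

Running the electronics 50 K hotter roughly halves the radiator area for the same power draw, which is the point about computers being easier than human-liveable volumes.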
> And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.
What you're doing here is like saying "cars don't work for a city because a city needs to move a million people each day, and a million-seat car will break the roads": i.e. scaling up the wrong thing.
The (potential, if it even works) scale-up here is "we went from n=1 cluster containing m=81 satellites, to n=10,000 clusters each containing m=[perhaps still 81] satellites".
I am still somewhat skeptical that this moon-shot will be cost-effective, but thermal management isn't why; the main limitation is Musk (or anyone else) actually getting launch costs down to a few hundred USD per kg in that timescale.
Think of any near-future spacecraft, or any ideas for spaceships cruising between Earth and the Moon or Mars, that aren't single-use. What are (or will be) such spacecraft? Basically data centers with some rockets glued to the floor.
It's probably not why they're interested in it, but I'd like to imagine someone with a vision for the next couple of decades realized that their company's core competency is already data centers and powering them, and all they're missing is some space experience...
I think the atmosphere absorbs something like 25% of incoming solar energy. If that's correct, you get a free ~33% increase in compute per panel by putting it behind solar power in LEO.
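The arithmetic, for what it's worth (taking that 25% figure as given; real atmospheric losses vary with latitude, angle, and weather):

    /* If the atmosphere absorbs ~25% of incoming solar energy, the same
     * panel in orbit sees 1 / (1 - 0.25) ~= 1.33x as much energy, i.e. a
     * ~33% gain before even counting the shorter nights in LEO. */
    #include <stdio.h>

    int main(void) {
        const double absorbed = 0.25;               /* assumed atmospheric loss */
        const double gain = 1.0 / (1.0 - absorbed); /* orbit vs. ground ratio   */
        printf("orbital gain: +%.0f%%\n", (gain - 1.0) * 100.0); /* -> +33% */
        return 0;
    }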
And you can pretty much choose how long you want your day to be (within limits). The ISS has a sunrise every 90 minutes. A ~45 minute night is obviously much easier to bridge with batteries than the ~12 hours of night in the surface. And if you spend a bunch more fuel on getting into a better orbit you even get perpetual sunlight, again more than doubling your energy output (and thermal challenges)
I have my doubts that it's worth it with current or near future launch costs. But at least it's more realistic than putting solar arrays in orbit and beaming the power down
> How/why is this better than an enormous solar-powered datacenter in someplace like the middle of the Mojave Desert?
Night.
I mean, how good an idea this actually is depends on what energy storage costs, how much faster PV degrades in space than on the ground, launch costs, how much stuff can be up there before a Kessler cascade, if ground-based lasers get good enough to shoot down things in whatever orbit this is, etc., but "no night unless we want it" is the big potential advantage of putting PV in space.
They do mention it in the linked announcement, although not really highlighted, just as a quick mention:
> As a result, we’re very excited to share that in Ubuntu 25.10, some packages are available, on an opt-in basis, in their optimized form for the more modern x86-64-v3 architecture level
> Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that.
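If you're curious whether your own machine would even benefit, one quick way to check from C is something like this - a sketch using GCC/Clang's __builtin_cpu_supports (x86 only), probing a representative subset of the v3 features rather than the full list:

    /* Check some headline x86-64-v3 features on the current CPU. The full
     * level also requires f16c, lzcnt, movbe, osxsave, etc.; newer GCC can
     * also take the level name itself as the argument. x86-only code. */
    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        int v3ish = __builtin_cpu_supports("avx2")
                 && __builtin_cpu_supports("bmi2")
                 && __builtin_cpu_supports("fma");
        printf("avx2+bmi2+fma (core of x86-64-v3): %s\n", v3ish ? "yes" : "no");
        return 0;
    }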
ARM/RISC-V extensions may be another reason. If a wide-spread variant configuration exists, why not build for it? See:
- RISC-V's official extensions[1]
- ARM's JS-specific float-to-fixed[2]
Just think of the untapped market of fresh 9 year olds who've never seen/played the game before. It's infinite, there will always be more people who have never played Minecraft.
Where are you getting that from? Minecraft has been comfortably above 100,000,000 monthly active users since at least 2019. The only comparable figure I can find for Fortnite claims 650,000,000 registered users, which doesn’t seem remotely possible unless at least half of them are bots. 650,000,000 is something like 1/12th of the world’s entire population. The Roblox figures I could find showed just under 400,000,000 MAU in 2024, which also seems completely beyond the pale.
So anyone who reports higher numbers than Minecraft is lying, but Minecraft's numbers are all accurate? You have literally just invented a conspiracy theory to affirm your biases around this matter.
Also note that monthly active users for Roblox and Fortnite equate to monthly revenue, whereas I doubt there are as many people buying Minecraft in-app purchases.
This does claim that 238,000,000 copies have been sold but that there are 400,000,000 registered Minecraft players in China (this would be about 1/3rd of the population so I think it’s probably a typo).
Epic is privately held so I suspect they wouldn’t bother reporting official player counts. Roblox actually does have numbers listed in their annual report, it just wasn’t showing up when I Googled it: 3.6 billion in revenue and 82.9 million daily active users. So that would put it within the wheelhouse of Minecraft’s playerbase, but still about 20,000,000 short.
It may well be that all the kids are playing it over Minecraft, though, since the document I linked above claims the average Minecraft player in North America and Europe is 27. I have no idea what those numbers look like for Roblox but from what I understand the playerbase has always skewed substantially towards minors.
> You have literally just invented a conspiracy theory to affirm your biases around this matter.
You can't compare daily and monthly active users. Monthly active users can be many times higher than daily. Also, the numbers you are using for Roblox are two years out of date. Current DAU for Roblox is 150M [1].
> This was incredibly abrasive.
I reserve the right to be abrasive when people randomly decide to spread baseless claims.
The mantra for the library is "raylib is a simple and easy-to-use library to enjoy videogames programming." It's for hobbyists, learners, tinkerers, or just those who want to enjoy minimalistic graphics programming without having to deal with interfacing with modern machines themselves.
The default Windows installer bundles the compiler and a text editor, making poking at C to get graphics on the screen (accelerated or not) a one-step process. Raylib is also extremely cross-platform, has bindings in about every language, and has extra (also header-only, zero-dependency) optional libraries for many adjacent things like audio or text rendering.
When I first started to learn C/C++ in the 2000s, I spent more time fighting the IDE/Windows/GCC or getting SDL/SFML to compile than I did actually playing with code - and then it all fell apart when I tried to get it working on both Linux and Windows, so I said fuck it and ignored that kind of programming for many years. Raylib is about going in the opposite direction: start poking at C code (or whatever binding), have it break, and worry about the environment later when you're ready for something else.
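For anyone who hasn't seen it, the whole "see something on the screen" program raylib is selling is roughly this (the standard starter example, give or take the text and colors):

    /* Minimal raylib program: open a window and draw text until it's closed.
     * Build with something like: cc main.c -lraylib (plus the platform's
     * usual GL/X11/etc. libs, or just use the bundled Windows setup). */
    #include "raylib.h"

    int main(void) {
        InitWindow(800, 450, "hello raylib");
        SetTargetFPS(60);

        while (!WindowShouldClose()) {      /* ESC or the window close button */
            BeginDrawing();
            ClearBackground(RAYWHITE);
            DrawText("it works!", 190, 200, 20, DARKGRAY);
            EndDrawing();
        }

        CloseWindow();
        return 0;
    }

No context creation, no event-loop boilerplate, no platform #ifdefs - that's the entire appeal for a beginner.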
I never ever bothered to compile SDL/SFML from source. What is so hard about dumping the binaries into a folder and setting the include paths for the compiler and linker?
Although I can imagine newbies may face some challenges dealing with compiler flags.
Not much, to a developer. To a novice (potentially very young) user there's confusion about why there are 3 versions for e.g. Windows, which to pick and why, how to set the compiler/linker flags for the build tuple, and then confusion about how to make it work on the alternative targets for their friends (e.g. the web target, or the Linux ARM Pi target for class) and why that requires different steps. None of that is particularly complicated once you go through it all, but it is a bit of a barrier to the "see something on the screen" magic happening to drive interest. Instead, raylib is just a header file include, like a text-based "hello world", regardless of platform - even if you don't want to use the pre-made bundle.
Or, more simply, it makes the process "easy as a scripting language" rather than "pretty easy".
> What is so hard about dumping the binaries into a folder and setting the include paths for the compiler and linker?
The problems already start with getting the precompiled libraries from a trusted source. As far as I'm aware the SDL project doesn't host binaries for instance.
This site is for hackers, which basically means people who like to do things like this. If you can't understand why someone would be interested in this, probably you should remain silent and try to understand hackers rather than commenting.
As someone who was once a child trying to figure out how to compile and link things to use SDL, I think there's some educational value in letting people make games without having to dive deep into how to use C++ toolchains.
I think there's still value in learning the C++ language, and making a game or a demo is quite rewarding, although raylib does have bindings for basically every conceivable language.
I'd make the opposite argument about educational value.
If you learn to compile libraries and programs you have, so to speak, passed an exam: you are ready to "make games" with confidence because you know what you are doing well enough to have no fear of tool complications.
What should be minimized is the accidental complication of compiling libraries and programs, for example convoluted build systems and C++ modules.
If you learn to compile libraries and programs, you just learn how to compile libraries and programs. That doesn't teach you anything about how to "make games." It doesn't even make game development significantly easier.
I think the real answer to educating people about making games without the complications of low level programming would be using a framework like Godot or languages like Python or Lua.
Of course technical concerns aren't directly relevant to making games, but they are still necessary. Productive development means overcoming technical hurdles, not only domain specific challenges.
What if you cannot adopt some library that would do something very useful because you lack the skill to integrate or replace CMake or Bazel or Autoconf? Unnecessary technical constraints impact game quality.
What if due to insufficient automation the time between tests after making a very small change is 10 minutes rather than 10 seconds? Reduced productivity impacts game quality.
Very useful skills to have. But they don’t need to be learned during the very first lesson on the very first day unless you are trying to filter people out for some reason.
As someone working on a game engine with a multithreaded SSE/NEON implementation of ~GL 1.3 under the hood, this is rad for many reasons other than portability or compatibility. You get full access to every pixel and vertex on the screen at any point in the rendering pipeline. This allows for any number of cool (also likely multithreaded) postprocessing effects that you don't have to shoehorn through a brittle, possibly single-platform shading language and API.
It's slower indeed, but it's easier to write and debug, more portable, and gives you total control over the render pipeline. It's good for experimentation and learning, and would still be trivial to compile and run 20 or 50 years from now. And with how obscenely fast modern CPUs are, it might even be fast enough for a simple 3D game.
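As a toy illustration of the "every pixel" point (not from my engine, just a sketch assuming a packed 32-bit ARGB framebuffer): a post-process pass is literally just a loop, and you can hand row ranges of it to a thread pool.

    /* Toy post-process: darken pixels toward the edges (a vignette).
     * Assumes a packed 0xAARRGGBB framebuffer; k goes from 1.0 at the
     * center to 0.4 at the corners. */
    #include <stdint.h>

    void vignette(uint32_t *fb, int width, int height) {
        const int cx = width / 2, cy = height / 2;
        const float max_d2 = (float)(cx * cx + cy * cy);

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float d2 = (float)((x - cx) * (x - cx) + (y - cy) * (y - cy));
                float k  = 1.0f - 0.6f * (d2 / max_d2);
                uint32_t p = fb[y * width + x];
                uint32_t r = (uint32_t)(((p >> 16) & 0xFF) * k);
                uint32_t g = (uint32_t)(((p >> 8)  & 0xFF) * k);
                uint32_t b = (uint32_t)((p & 0xFF) * k);
                fb[y * width + x] = (p & 0xFF000000u) | (r << 16) | (g << 8) | b;
            }
        }
    }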
Someone always points out how Doom wasn't "real 3D" like it's some sort of gotcha. Games are smoke and mirrors, it's all a 2D grid of pixels at the end.
Well yeah, a 2D grid of pixels also describes the result of rendering a 2D game. It matters how you arrive at that 2D grid of pixels; that's why you can't render Crysis on just a CPU. At least not in real time.