I run a neocloud and our entire UX is TUI-based, somewhat like this but obviously simpler. The customer feedback has been extremely positive, and it's great to see projects like this.
Can you tell me more about what you mean by "neocloud", and where exactly you are hosting the servers? Do you colocate, resell dedicated servers, or use the major cloud providers?
This is my first time hearing the term neocloud. It seems to be focused on AI, and I'll be honest, that's a con in my book and not a pro (I like Hetzner and compute-oriented cloud providers).
Please tell me more about neoclouds, and whether they could perhaps be expanded beyond the AI use case, which is all I see when I search the term.
Neocloud has come to refer to a new class of GPU-focused cloud providers. Sure, most of our customers use us for AI purposes, but it is really open to anything GPU related.
We buy, deploy, and manage our own hardware. On top of that, we've built our own automation for provisioning. For example, Kubernetes assumes an OS is already installed; we operate at a layer below that, which enables a machine to boot and be configured on demand. This also includes DCIM and networking automation.
We built our own equivalent of OpenStack Ironic. Instead of a ton of services and configuration, we have a single Go binary. Our source of truth is built on top of NetBox, and we integrate Stripe for billing. We're adding features as customers ask for them.
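As an illustration of the "single Go binary with NetBox as the source of truth" idea (a sketch, not their actual implementation): the snippet below pulls devices pending provisioning from NetBox's standard REST API. The /api/dcim/devices/ endpoint and the Token auth header are stock NetBox; the base URL, environment variables, and the "planned" status filter are assumptions for the example.

    // Sketch: query NetBox (the source of truth) for devices to provision.
    // NETBOX_URL and NETBOX_TOKEN are illustrative environment variables.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"net/http"
    	"os"
    )

    type deviceList struct {
    	Results []struct {
    		ID   int    `json:"id"`
    		Name string `json:"name"`
    	} `json:"results"`
    }

    func main() {
    	base := os.Getenv("NETBOX_URL")   // e.g. https://netbox.example.com
    	token := os.Getenv("NETBOX_TOKEN")

    	req, err := http.NewRequest("GET", base+"/api/dcim/devices/?status=planned", nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	req.Header.Set("Authorization", "Token "+token)
    	req.Header.Set("Accept", "application/json")

    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()

    	var devices deviceList
    	if err := json.NewDecoder(resp.Body).Decode(&devices); err != nil {
    		log.Fatal(err)
    	}
    	for _, d := range devices.Results {
    		// A real provisioner would now drive boot/configuration for this device.
    		fmt.Printf("would provision device %d (%s)\n", d.ID, d.Name)
    	}
    }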
While there are a lot of moving parts to coordinate, I'm not sure I agree about the complexity...
That is a value for the entire GPU; what about the memory itself? Also, consumers don't need 300 GB of it (yet).
But to answer: memory is progressing very slowly. DDR4 to DDR5 was not even a meaningful jump. Even PCIe SSDs are slowly catching up to it, which is both funny and sad.
As for the use case: I use my memory as a cache for everything. On every system I've used in the last 15-20 years, I maxed out the memory and never cared much about storage speed, because after loading everything into RAM, the system and apps feel a lot more responsive. The difference was especially noticeable on older systems with HDDs, but even on SSDs things have not improved much due to latency. Of course, any web app talking to the network negates the benefit, but it makes a difference with desktop apps.
These days I even have enough memory to be able to run local test VMs so I don't need to use server resources.
Coincidentally, the first issue (referencing Navi 21) was the one I started these experiments with, and this turned out to be pretty informative.
Our Navi 21 would almost always go AWOL after a test run had been completed, requiring a full reboot. At some point, I noticed that this only happened when our test runner was driving the test; I never had an issue when testing interactively. I eventually realized that our test driver was simply killing the VM when the test was done, which is fine for a CPU-based test, but this messed with the GPU's state. When working interactively, I was always shutting down the host cleanly, which apparently resolved this. A patch to our test runner to cleanly shut down VMs fixed this.
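To make the fix concrete, here is a rough sketch (not the actual test-runner patch) of a graceful shutdown path driving libvirt's virsh CLI from Go: request an ACPI shutdown, wait for the domain to reach "shut off", and only fall back to a hard destroy after a timeout. The domain name and timeout are made up; the virsh shutdown/domstate/destroy subcommands are standard.

    // Sketch: shut a test VM down cleanly instead of killing it, so the
    // guest driver gets a chance to reset the passed-through GPU.
    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    	"time"
    )

    func shutdownDomain(name string, timeout time.Duration) error {
    	// Request an ACPI shutdown so the guest OS unloads the GPU driver.
    	if err := exec.Command("virsh", "shutdown", name).Run(); err != nil {
    		return err
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("virsh", "domstate", name).Output()
    		if err != nil {
    			return err
    		}
    		if strings.TrimSpace(string(out)) == "shut off" {
    			return nil // clean shutdown, GPU state left sane
    		}
    		time.Sleep(2 * time.Second)
    	}
    	// Last resort: a hard stop, which is what left the GPU AWOL in the first place.
    	log.Printf("domain %s did not shut down in time, destroying", name)
    	return exec.Command("virsh", "destroy", name).Run()
    }

    func main() {
    	if err := shutdownDomain("gpu-test-vm", 2*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    }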
And I've had no luck with iGPUs, as referenced by the second issue.
From what I understand, I don't think that consumer AMD GPUs can/will ever be fully supported, because the GPU reset mechanisms of older cards are so complex. That's why things like vendor-reset [3] exist, which apparently duplicate a lot of the in-kernel driver code but ultimately only twiddle some bits.
She also had no choice, as SBF was blaming her. The point being that they still didn't really need her help. It was obvious that he committed fraud, and there was plenty of proof of it.
I mean, the guy was constantly high on nootropics, and they had no idea what investments FTX had actually made. I'd imagine most of the time was just spent untangling that web; his case was more or less a slam dunk.
ssh admin.hotaisle.app