I got tired of waiting for JetKVM availability in the US and pulled the trigger on a GL.iNet Comet PoE. A bit more expensive on Amazon ($110), but it supports PoE, which the JetKVM does not. Honestly, it has worked great. I know the earlier Comet firmware had some issues, but apparently they fixed it up and it has been solid.
I use UTC for all public/production servers, but my homelab servers in my closet rack all use local time, which makes grokking times on them much easier. The exception is database insert/update timestamps, which are always stored in UTC.
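For what it's worth, that convention is easy to keep in application code; a minimal sketch (the variable names are just illustrative):

```python
from datetime import datetime, timezone

# persist timezone-aware UTC for insert/update timestamps
created_at = datetime.now(timezone.utc)

# convert to the server's local time only when displaying
print(created_at.astimezone().isoformat())
```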
I've run a business in this space since 2021, and I have yet to meet a business that lets their marketing team own their status page.
You'll find most engineering teams start out owning the status page to centralise updates to their stakeholders, before ownership eventually shifts to the customer success/support org to minimise support tickets during incidents.
Has anybody used rqlite[1] in production? I'm exploring how to make my application fault-tolerant using multiple app VM instances. The problem, of course, is the SQLite database on disk. Using a network file system like NFS is a no-go with SQLite (this includes Amazon Elastic File System (EFS)).
I was thinking I'd just have to bite the bullet and migrate to PostgreSQL, but perhaps rqlite can work.
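If it helps, rqlite speaks HTTP and has a Python DB-API client, pyrqlite. A hedged sketch, assuming a node on localhost:4001 and an illustrative table:

```python
# a sketch using pyrqlite, rqlite's DB-API 2.0 client; the host/port
# and schema here are assumptions for illustration
import pyrqlite.dbapi2 as dbapi2

conn = dbapi2.connect(host="localhost", port=4001)
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS foo (id INTEGER PRIMARY KEY, name TEXT)")
    cur.execute("INSERT INTO foo (name) VALUES (?)", ("fiona",))
    cur.execute("SELECT id, name FROM foo")
    print(cur.fetchall())
```

Since writes go through the Raft leader over the network, it behaves more like a lightweight client/server DB than like local SQLite, so it's worth benchmarking your write path before committing.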
`<thead>` and `<tfoot>`, too, if they're needed. I try to use all the free stuff that HTML gives you without needing to reach for JS, and it's a surprising amount. Coupled with CSS, you can get pretty far without needing anything else. Even just having `<template>` with minimal JS enables a ton of 'interactivity'.
Adelaide, Australia used to have constant rolling blackouts, including a state-wide blackout once. After that, Tesla (pre-insanity era) built a grid battery storage system which essentially fixed the problem. I'm sure there were other improvements to the grid at the same time, but these days the grid is incredibly stable while also being majority solar and wind powered. The battery is able to buy and sell power daily and profit on the difference between high- and low-demand times. And if there's an equipment fault somewhere, it can respond fast enough to cover the gap between a generator going offline and the backups starting up.
By the time that blackout occurred, the grid was already quite stable and rolling blackouts were a thing of the past. The state-wide blackout was the result of a severe storm (lightning, gale-force winds and three tornadoes) taking out critical transmission lines, combined with protection circuits that weren't set up to account for lightning strikes. When the state failed over to the Victoria interconnect, the interconnect shut down because the load was too high. So although the grid was stable, it had failure points that were exposed by this severe and unusual storm.
The battery array was just one measure taken to increase grid resilience in such a scenario. The general idea was to have an instantly dispatchable electricity supply ready to go at any time while gas-powered generation is brought online. A nice side effect of the battery is that it flattens out wholesale price spikes and makes a bit of money for itself in the process.
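The arbitrage side is simple enough to sketch; every number below is illustrative, not the battery's actual figures:

```python
# toy daily arbitrage for a grid battery; all numbers are made up
capacity_mwh = 129           # roughly the original Hornsdale scale (assumption)
buy_price = 30               # $/MWh during low-demand hours (illustrative)
sell_price = 120             # $/MWh during the evening peak (illustrative)
round_trip_efficiency = 0.9  # typical for lithium-ion (assumption)

profit = capacity_mwh * (sell_price * round_trip_efficiency - buy_price)
print(f"~${profit:,.0f} per full charge/discharge cycle")
```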
I'm running a Python 3.13 Flask app in production using gunicorn and gevent (workers=1) with gevent monkey patching. Using ab, I can get around 320 requests per second. Performance is decent, but I'm wondering how much of a lift it would be to migrate to FastAPI. Would I see performance increases staying with gunicorn + gevent but upgrading Python to 3.14?
Did you profile your code? Is it CPU-bound or IO-bound? Does it max out your CPU? Usually it's the DB access that determines the single-threaded performance of backend code.
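A quick way to find out with just the standard library; `handle_request` here is a hypothetical stand-in for whatever your Flask view calls:

```python
import cProfile
import pstats
import time

def handle_request():
    # stand-in for a real request path: replace with your view's work
    time.sleep(0.01)                    # IO-bound time shows up in sleep/socket waits
    sum(i * i for i in range(10_000))   # CPU-bound time shows up in your own frames

with cProfile.Profile() as pr:
    for _ in range(100):
        handle_request()

# if the top cumulative entries are your own functions, you're CPU-bound;
# if they're socket/DB waits, you're IO-bound and more concurrency helps
pstats.Stats(pr).sort_stats("cumulative").print_stats(10)
```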
I did some quick tests increasing to workers=2 and workers=3, and requests per second scaled nearly linearly, so it seems just throwing more CPU cores at it is the quick answer in the medium term.
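For reference, a minimal sketch of that setup as a gunicorn config file; the bind address, connection count, and module path are assumptions:

```python
# gunicorn.conf.py - gunicorn's gevent worker class applies the
# monkey patching itself when each worker boots
import multiprocessing

bind = "0.0.0.0:8000"
worker_class = "gevent"
workers = multiprocessing.cpu_count()  # scale across cores, per the linear-scaling result
worker_connections = 1000              # concurrent greenlets per worker
```

Run with something like `gunicorn -c gunicorn.conf.py app:app`.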
> Each GizmoEdge worker pod was provisioned with 3.8 vCPUs (3800 m) and 30 GiB RAM, allowing roughly 16 workers per node—meaning the test required about 63 nodes in total.
How was this node setup chosen? Specifically, the 3.8 vCPUs and 30 GiB RAM per worker? Why not just run 16 workers total, each using an entire node's 64 vCPUs and 504 GiB of memory?
Hi nodesocket - I tried to do 4 CPUs per worker, but Kubernetes takes a small (about 200m) CPU request amount for daemon processes - so if you request 4 CPUs (4000m) x 16, you'll spill one pod over, fitting only 15 per node.
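The arithmetic, roughly (the 64-vCPU node size comes from the thread; the exact daemon overhead varies by cluster):

```python
# back-of-envelope bin packing: why 3800m fits 16 pods per node but 4000m only 15
node_cpu_m = 64 * 1000       # 64 vCPUs per node, in millicores
daemon_overhead_m = 200      # approx. CPU reserved for daemon processes (per the comment)
allocatable_m = node_cpu_m - daemon_overhead_m

for request_m in (4000, 3800):
    print(f"{request_m}m request -> {allocatable_m // request_m} pods per node")
# 4000m request -> 15 pods per node
# 3800m request -> 16 pods per node
```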
I was out of quota in Azure - so I had to fit in the 63 nodes... :)
I'm not exactly sure yet. My goal was to keep the shards from being so large as to be unmanageable. In theory - I could just have had 63 (or 64) huge shards - and 1 worker per K8s node, but I haven't tried it.
There are so many variables to try - it is a little overwhelming...
Would be interesting to test. I'm thinking there may not be a benefit to having so many workers on a VM versus giving the entire VM's resources to a single worker. Could be wrong, but that would be a bit surprising.
Supply-side constraints are typically bullish and good for a business. I was buying Intel stock hand over fist at peak media fear-mongering, as I knew their demise was greatly exaggerated. $INTC is up 90+% YTD.