The Omada line makes pretty decent APs too. A step above Ubiquiti in terms of reliability but still dead simple. I'd have gone Ruckus if I could have, but the mailman stole my $200 eBay R550 score and I didn't feel like shelling out $500 for one.
I actually picked up a refurb desktop from Walmart with a Ryzen 3500 for $400, and it runs basically everything without breaking a sweat: Proxmox running Home Assistant, Docker, my seedbox, a media server, etc., and it averages 3% CPU usage.
I didn't know just how much heat a 16TB disk can put out until that point, though.
Either it picks up too much garbage if you allow any P2P data exchange (you can't allow only outgoing, AFAIK), or it only knows about the sites you already know about, which kind of defeats the purpose.
Even assuming you just want a private index of your own content, it struggles to display useful snippets for the results, which makes it really tedious to sift through the already poor results.
If you try to proactively blacklist garbage, which is incredibly tedious because there's no quick "delete from index and add to blocklist" button in the index explorer, you'll soon end up with an unmanageable blocklist; the admin interface doesn't handle long lists well. At some point (around 160k blocked domains) YaCy just runs out of heap during startup trying to load it, which makes the instance unusable.
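If it's just the startup heap, you can buy some headroom before the whole thing tips over. From memory, YaCy reads its JVM settings from DATA/SETTINGS/yacy.conf, so something like this before a restart (the key name may differ by version, so check your install first):

    # bump the JVM heap; javastart_Xmx is the key name as I remember it
    sed -i 's/^javastart_Xmx=.*/javastart_Xmx=Xmx4096m/' DATA/SETTINGS/yacy.conf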
It also can't really handle being reverse proxied (i.e., accessed securely by both users and peers).
It also likes to completely deplete disk space or memory, so both have to be forcefully constrained. But that leaves you with a nonfunctional instance you can't really manage. And it doesn't separate its functionality cleanly enough that you could, for example, manually delete a corrupt index.
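If you do want to constrain it, doing it from outside the process is probably the safest bet, e.g. a container memory cap. The image name and DATA path below are my assumptions from memory, so verify them:

    # hard-cap memory from outside; image name and DATA path assumed
    docker run -d --name yacy --memory=2g -p 8090:8090 \
      -v /srv/yacy-data:/opt/yacy_search_server/DATA \
      yacy/yacy_search_server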
Running (z)grep on locally stored web archives works significantly better.
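Something as dumb as this goes a long way (assuming gzipped page snapshots under ~/web-archive):

    # list archived pages mentioning a term, case-insensitive
    find ~/web-archive -name '*.gz' -exec zgrep -li 'some term' {} +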
Those are pretty bad issues. I tried it a long time ago and only remember the results being bad. I've heard that YaCy could be good for searching sites you've already visited, but it sounds like even that might not be a good use case for it.
I do understand the disk space thing. It's hard to store the text of all your sites without it taking up a lot of space unless you can intelligently determine which text is unique and desired. Unless you're just crawling static pages, it becomes hard to know what needs to be saved or updated.
I remember trying it for a while in 2012, but the results were essentially worthless, probably because there were so few nodes/crawlers back then. I guess the more users there are, the better the results.
Alternatively, ignore the public network (it's still useless) and run it as your own crawler. Seed it with your browsing history, some aggregators like HN, your favourite RSS feeds, etc. and you'll be good.
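For example, to turn Firefox history into a seed list (profile path varies per setup; this assumes the stock places.sqlite schema):

    # dump the 500 most recently visited URLs as crawl seeds
    # (adjust the profile glob for your machine)
    sqlite3 ~/.mozilla/firefox/*.default*/places.sqlite \
      "select url from moz_places order by last_visit_date desc limit 500" > seeds.txt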
> I remember trying it for a while in 2012, but the results were essentially worthless,
I had mine crawling gov, mil, etc. sites for pages that Google was starting to delist back then. Inbound requests were heavy with porn until I tweaked... IDK, something.
I got an instance going in a TrueNAS CORE jail: FreeBSD, using FreeBSD's native Java rather than a Linux VM or the Linux ABI compatibility layer. I had to make my own rc script, roughly as sketched below.
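This is the skeleton from memory; the install path, user, and YaCy script names are my assumptions:

    #!/bin/sh
    # PROVIDE: yacy
    # REQUIRE: NETWORKING
    # minimal rc.d sketch; paths, user, and start/stop script names assumed
    . /etc/rc.subr
    name=yacy
    rcvar=yacy_enable
    load_rc_config $name
    : ${yacy_enable:=NO}
    start_cmd="${name}_start"
    stop_cmd="${name}_stop"
    yacy_start() {
        su -m yacy -c '/usr/local/yacy/startYACY.sh'
    }
    yacy_stop() {
        su -m yacy -c '/usr/local/yacy/stopYACY.sh'
    }
    run_rc_command "$1"

Then a sysrc yacy_enable=YES and service yacy start gets it into the normal service lifecycle.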
Then I had to mess with the disk & RAM settings to get it to run for more than a day. But the settings aren't actually explained at all, and whatever they do, it's definitely not what their names and worthless tooltips suggest.
It now seems to run indefinitely without killing either itself or the host, in full P2P mode, but I really have no idea why it's working, or for sure whether it fully is. I changed... IDK, something.
And I don't use it for search myself so far. Maybe someday, but for now I'm paying for Kagi.
I just like the idea and want it to be a thing, and it seemed a little less "invite a world of shit and attention onto my IP" than running, say, a Tor exit or something. Maybe only a bit less, but I'll see how it goes and react if I need to.
I have a 14" MBP M1 16/512. It's... not great. The other day Photoshop was running super slow, so I opened up the process manager and discovered that not only had it eaten all 16GB of RAM, it was also using ~30GB of swap. Even general day-to-day Safari use gives me random lags and slowdowns that I don't get on my desktop, which is running an i5.
I don't know. I just felt it was like that, so I said it. Can't really explain it. Gut feel.
>Now, what happens to those increased wages when the demand for workers is gone?
>That’s right. They collapse.
Yeah. It happens. All the time. Deal with it.
Cycles, dude, cycles.
Business cycles. Ups and downs.
Power cycles (between various power groups). Ditto.
You know, like our old pals, Xerxes, Darius, Cambyses, Alexander, Attila, Caesar, Ashoka, Chandragupta Maurya, Genghis Khan, Akbar, Timur, Napoleon, Hitler, Cortes, and many more (in no particular order).
Horse-drawn carts. Cars. ICE cars. EVs. Age of sail. Steam ships. Abacus. PCs. Mobiles.
You get the drift. It's called progress. Not so sure. But it is what it is.
They really aren't struggling; we have plenty of firepower there. But yeah, China will outstrip our naval numbers by a large margin before the end of the decade. We still have better tech, but what do we do when they launch 200 "good enough" cruise missiles at the same time at each aircraft carrier sitting in the Taiwan Strait?
A typical carrier group has ~300-500 vertical launch tubes across its escorts, Sea Sparrows can be quad-packed into a single tube, and there's CIWS on top of that. Not sure I would bet against Aegis.