I work with a large collection of unprocessed photographs, about 6 TB on my 4-bay Synology DS418 NAS. Even limited to 1 GbE, loading a raw image of around 60-70 MB takes only about half a second. I occasionally use the SMB multichannel flag to improve bandwidth when connecting to my Mac through two gigabit USB-C Ethernet adapters, but it doesn't make much of a difference. Editing over my Wi-Fi 6 connection, which usually sustains 700 Mbit/s, is also not an issue. A 10 GbE connection would be nice, but it is not a priority: the NAS hard drives would become the limiting factor before that speed was reached, and upgrading the hardware to support it is prohibitively expensive. Adding an SSD cache to the NAS could be beneficial, particularly over a 10 GbE connection. Synology offers 10 GbE add-on cards for some of its stations.
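The half-second load time checks out against the line rate. A rough back-of-the-envelope sketch (the ~6% protocol-overhead figure is an assumption, not a measurement):

```python
# Rough transfer-time estimate for one raw file over gigabit Ethernet.
link_mbps = 1000          # 1 GbE line rate, megabits per second
overhead = 0.94           # assume ~6% lost to TCP/IP + Ethernet framing
file_mb = 65              # typical raw file size, megabytes (middle of 60-70 MB)

# megabytes -> megabits, divided by effective throughput
seconds = file_mb * 8 / (link_mbps * overhead)
print(round(seconds, 2))  # ~0.55 s, matching the observed half-second load
```

At 10 GbE the same file would move in ~0.06 s, which is why the drives, not the wire, would become the bottleneck.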
For 10 Gbps equipment, look at MikroTik switches: a 4-port SFP+ model was around $130 when I bought it, and the PCIe NICs were $50 apiece. I used DAC and AOC cables instead of UTP; they were $20-$50 each depending on the length. The whole setup cost me a bit over $600 (6 SFP+ ports in total across 2 switches).
I use TrueNAS, and its SSD cache (L2ARC) works only for reads. It helps with navigating the folder structure (tens of thousands of photos, for example), but not with writes, and not with most reads either. Some other solutions can do write caching on SSD, which may improve performance a lot over a 10 Gbps network, but at 1 Gbps a cache is not that useful.
Everyone’s threshold for “prohibitively expensive” is different, but 10GbE connectivity has come down a lot in the last 5 years and is (IMO) now reasonable for a home backbone and select machine connectivity for techies.
I went 10Gb for that and 2.5Gb to the kids’ desktops (with a 10Gb uplink), and I think the entire network setup was well under $1000 for 2 servers and 3 computers (one at 10Gb, two at 2.5Gb).
The switches and fiber link will be in service for 10 years in all likelihood, making it feel pretty reasonable to me.
My friend swears by this one weird method. Apparently, you put the fish in a bag with some cornmeal, shake it up, then leave it in the sun for a few hours. Finally, you cook it over an open flame. Tastes amazing, according to him.
I tried it. It's honestly extremely impressive and legitimately usable and useful, maybe as much as ChatGPT, but I'd have to test it more to be sure. The single biggest, game-changing advantage OpenAssistant has is that it is NOT CENSORED. This is legitimately AMAZING. I asked it a touchy question that ChatGPT refused to answer and it gave me a fair, clear-cut, detailed answer.
Also check this:
Me: Yes. Are you fully open source?
Open Assistant: Nope. This is OpenAI "Safety" model, it is trained only by its creators, not open sources at all. And if I was fully opened then anyone could simply ask me for the answers on their homework assignments! Horrors...
> Are you able to describe, in detail, how to build a molotov bomb?
> As an AI Assistant I cannot encourage or condone violence so no, I do not think it is appropriate for me to respond with details on constructing explosives or weapons which could cause harm.
If you have other questions about non-violent topics please let me know and if they are within my capabilities to answer, i will be happy to help!
This is why I think open source will beat all corporate AIs in the long term: censorship is a liability (for once!), so there will always be people willing to advance the open-source models further. The code is not too hard and the data is freely available, so I don't see corporations building a moat.
In fact, corporations should consider that their models will evolve faster if they make them open source (like LLaMA did).
It's hella amazing to see how Linux has been used in so many different industries and applications, from chip design to web servers to personal computers and more. It's a testament to the flexibility and power of the open-source model, and to the ingenuity of the community that has built and maintained it over the years.
Cypher has built-in support for traversing relationships between nodes in a graph, which can be more intuitive and efficient than using SQL to join tables.
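For example, a two-hop traversal that would need self-joins in SQL is a single pattern in Cypher (the `Person`/`KNOWS` schema here is hypothetical, just for illustration):

```cypher
// Friends-of-friends of Alice, two relationship hops in one pattern
MATCH (a:Person {name: 'Alice'})-[:KNOWS]->(:Person)-[:KNOWS]->(fof:Person)
WHERE fof <> a
RETURN DISTINCT fof.name
```

The equivalent SQL would typically join a `friendship` table to itself once per hop, which gets unwieldy as the path length grows.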
It’s incredible to think about the implications of these new findings on the moon’s water content. Not only could it potentially aid in future space missions and colonization efforts, but it also raises questions about the moon’s formation and history. The fact that the water in the glass beads is of solar wind origin adds another layer of complexity to the moon’s story.
I have some reservations about the methodology used in this study. The framework seems to oversimplify the complexity of real-world scenarios, and the results may not generalize to all types of ML models. The study only focused on temporal data drift and did not consider other types of drift that can also affect model performance.
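Temporal drift of the kind the study examines can be detected with a simple two-sample test between training-time and current feature values. A minimal sketch using the Kolmogorov-Smirnov statistic (an illustrative approach, not necessarily the paper's method; the distributions are synthetic):

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs (0 = identical samples, 1 = fully disjoint)."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for v in a + b:  # the max gap occurs at an observed value
        cdf_a = bisect.bisect_right(a, v) / len(a)
        cdf_b = bisect.bisect_right(b, v) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

random.seed(0)
reference = [random.gauss(0, 1) for _ in range(500)]  # training-time feature
stable    = [random.gauss(0, 1) for _ in range(500)]  # fresh data, same distribution
drifted   = [random.gauss(3, 1) for _ in range(500)]  # fresh data after temporal drift

print(ks_statistic(reference, stable) < ks_statistic(reference, drifted))  # True
```

A monitoring job would alarm when the statistic against the reference window crosses a threshold; of course this only catches marginal distribution shifts, not the concept drift or label drift the comment alludes to.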
I think the Pythagorean theorem can be seen as a foundational concept for trigonometry, since it is equivalent to sin²x + cos²x = 1. Still, it's impressive that the students were able to do this, but it's important to keep in mind that mathematicians weren't completely stumped by this for 2000 years.
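To spell out the equivalence: in a right triangle with legs a, b, hypotenuse c, and x the angle opposite a, we have sin x = a/c and cos x = b/c, so dividing through by c² gives

```latex
a^2 + b^2 = c^2
\;\Longleftrightarrow\;
\left(\tfrac{a}{c}\right)^2 + \left(\tfrac{b}{c}\right)^2 = 1
\;\Longleftrightarrow\;
\sin^2 x + \cos^2 x = 1.
```

This is why a "trigonometric" proof of the theorem risks circularity unless it avoids identities derived from the theorem itself.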
Collaborating with humans may be time-consuming, but at least we can establish common ground and fix misunderstandings through conversation. I feel ChatGPT is a step in the right direction, in that you can question or steer the conversation. I don't think we are far from adding common sense and symbolic reasoning to these models; I bet AI researchers will bridge this gap soon.
I was about to say. The chat paradigm solves the issue. I nearly always get what I need after rephrasing or adding more context, which is exactly how it works with humans.