Anything that fits in RAM on one machine is easily too small for Hadoop. In those cases, the overhead of Hadoop means it will be destroyed by a single beefy machine. The only time this might not be the case is when you're doing a crazy amount of computation relative to the data you have.
Note that you can easily reach 1TB of RAM on (enterprise) commodity hardware now, and SSDs are pretty fast too.
A Hadoop submission may help people realize that. But since you only have one machine to work with, it should be obvious that you're not going to get any speed-up via divide and conquer.
The worst part is the setup: if you were in a dedicated flight sim, you'd set up your sticks and enjoy.
In SC, you gotta set up your sticks, then use mouse + keyboard to get to your ship. Then you can enjoy. But like 50+% of the game is outside the cockpit, so now you're fiddling with trying to find a comfortable place to put that damn keyboard/mouse.
This game presents its own unique physical challenges, not to mention a need for like an RTX 3070 at a minimum to get a decent framerate (just 60, without turning every cool-looking thing off)
the server meshing demo is technically impressive, very curious how they did that
so in the example, they had 3 location authority servers and 1 replication layer.
each location server represents a physical location in the 3d world, say its own solar system, and moving from location A to location B moves you to a different server. that was possible because your position matrix was stored in the replication layer, and each server has a copy of the replication layer.
now, how they handle updating all of those in realtime is what amazed me; they even have bullets go through locations
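A toy sketch of how that handoff could plausibly work, based only on what the demo showed (all class and variable names here are my own guesses, not CIG's actual design): each location server simulates its own zone, while a shared replication layer holds entity state, so crossing a boundary just reassigns authority.

```python
# Toy model of authoritative-server handoff via a shared replication layer.
# Names and structure are illustrative guesses, not the real implementation.

class ReplicationLayer:
    """Shared store of entity state that every location server can read."""
    def __init__(self):
        self.entities = {}  # entity_id -> {"pos": ..., "authority": server_name}

    def update(self, entity_id, pos, authority):
        self.entities[entity_id] = {"pos": pos, "authority": authority}

class LocationServer:
    def __init__(self, name, bounds, replication):
        self.name = name
        self.bounds = bounds          # (min_x, max_x) slice of a 1-D "world"
        self.replication = replication

    def owns(self, pos):
        lo, hi = self.bounds
        return lo <= pos < hi

    def tick(self, entity_id, new_pos, servers):
        # The authoritative server simulates the entity while it's in-zone...
        if self.owns(new_pos):
            self.replication.update(entity_id, new_pos, self.name)
        else:
            # ...and hands authority off once the entity leaves its zone.
            for s in servers:
                if s.owns(new_pos):
                    self.replication.update(entity_id, new_pos, s.name)
                    break

rep = ReplicationLayer()
servers = [LocationServer("A", (0, 100), rep),
           LocationServer("B", (100, 200), rep)]

servers[0].tick("bullet-1", 50, servers)      # still inside A's zone
print(rep.entities["bullet-1"]["authority"])  # A
servers[0].tick("bullet-1", 150, servers)     # crosses into B's zone
print(rep.entities["bullet-1"]["authority"])  # B
```

Because the state lives in the replication layer rather than in any one server, the bullet's position survives the authority change, which is presumably what makes cross-server bullets look seamless.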
In the demo they simulate everything on every server, which is a huge waste of CPU; not sure how it's going to work in the game. There are a lot of corner cases / problems not solved by their tech.
> now, how they handle updating all of those in realtime is what amazed me; they even have bullets go through locations
Probably just as other distributed systems implement replication: you have a leader that broadcasts instructions, and the followers apply those instructions to their local copy of the state.
However, there'll always be delay/stale data between the leader and the followers. So, to improve performance, you can allow the followers to make speculative decisions based on their own calculations, but these local decisions get overwritten / rolled back by instructions received from the leader.
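That speculate-then-reconcile pattern can be sketched in a few lines (this is generic optimistic replication with hypothetical names, not anything from the actual game):

```python
# Generic leader/follower replication with speculative local updates.
# The follower shows its own prediction immediately, then reconciles
# (discards speculation) when the leader's authoritative state arrives.

class Follower:
    def __init__(self):
        self.confirmed = 0      # last state confirmed by the leader
        self.speculative = []   # local deltas not yet acknowledged

    def local_apply(self, delta):
        # Optimistic: apply locally right away so the player sees no lag.
        self.speculative.append(delta)

    def state(self):
        # Visible state = confirmed state plus all unconfirmed guesses.
        return self.confirmed + sum(self.speculative)

    def on_leader_update(self, authoritative_state):
        # The leader wins: adopt its state and roll back local speculation.
        self.confirmed = authoritative_state
        self.speculative.clear()

f = Follower()
f.local_apply(5)
f.local_apply(3)
print(f.state())        # 8 — shown immediately, before any ack
f.on_leader_update(7)   # leader only confirmed part of it
print(f.state())        # 7 — local guesses rolled back to leader's truth
```

The visible "rubber-banding" in networked games is exactly this rollback step: the follower's guess diverged from the leader and got snapped back.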
I just recently found out that even if the server supports up to HTTP/3, it's still up to the browser to decide which protocol to use, even when the browser supports HTTP/3 too. It was disheartening to find out that you have no way of forcing the browser to use HTTP/2 or HTTP/3, especially if you have features that only worked on HTTP/3 and were broken on HTTP/2. I guess I should have just fixed the implementation on HTTP/2.
You don't have control because the browser might not support HTTP/3 at all. It's up to browser developers to decide when their support is mature enough to use by default. There's no other way of doing it.
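For what it's worth, the server side of the negotiation is just an advertisement: it sends the standard `Alt-Svc` response header over HTTP/1.1 or HTTP/2 to announce that HTTP/3 is available on a given port, and the browser is free to take the hint or ignore it. A typical header looks like:

```
Alt-Svc: h3=":443"; ma=86400
```

Here `h3` names the HTTP/3 protocol, `:443` is the port it's offered on, and `ma=86400` tells the client it may cache this advertisement for a day. Nothing in the mechanism lets the server demand a protocol, which is why you can't force the upgrade.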
In postgres, every object (table, index, function, view, etc.) lives in a "schema", which is better thought of as a namespace. I put low-level objects like tables into one or more schemas, then create an "API" schema with views, functions, procedures that operate on tables in other schemas. Then I only grant access to that API schema to the application users.
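A minimal sketch of that layout in Postgres SQL (schema, table, and role names here are made up for illustration):

```sql
-- Hypothetical example of the schema-as-namespace pattern described above.
CREATE SCHEMA internal;  -- low-level tables live here
CREATE SCHEMA api;       -- the only schema the app is allowed to touch

CREATE TABLE internal.accounts (
    id      bigint PRIMARY KEY,
    balance numeric NOT NULL
);

-- Expose a curated view instead of the raw table.
CREATE VIEW api.account_balances AS
    SELECT id, balance FROM internal.accounts;

-- The application role gets access to the API schema only.
CREATE ROLE app_user LOGIN;
GRANT USAGE ON SCHEMA api TO app_user;
GRANT SELECT ON ALL TABLES IN SCHEMA api TO app_user;
-- No grants on "internal", so app_user can't bypass the API layer.
```

With this setup, changing the underlying tables only requires keeping the views and functions in `api` stable, so the application never notices internal refactors.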