Definitely agree with this one. It's called the Algorithms Specialization on Coursera. I'm now on course 3, and it's definitely helped me a lot in thinking about how to reason about algorithms.
One of the best algorithms classes I have taken. I liked his way of introducing new concepts and the intuition behind them. He really enjoys teaching algorithms.
I second this. Seriously the best textbook on systems programming I've worked through, especially when accompanied by the famous CMU labs[0]. Anyone who works thoroughly through this book can become a master systems programmer.
Sure! I recommend sitting with the book, a pen, and a notebook at a cafe or wherever you like, and writing solutions to the practice problems sprinkled through each chapter as you read every single word. Then choose a few of the homework problems and do those; some will require a computer. Most of all, work through the labs, and don't cheat yourself by looking at other (probably not very good) solutions posted online! Solving the labs with the textbook and TLPI[0] as references is how I got the most out of the course. A list of the assignments, as they're done at CMU, is posted below[1]. Good luck!
How about this: once you write code in Go and test it well, you can deploy it and forget about it unless there is a hardware issue. I recently ported some C++ code to Go that processes 8 billion events per day flawlessly.
Once you write a well designed, well tested, and feature complete piece of software, you can deploy it and forget about it unless the server it's running on breaks.
Go isn't special in that regard, unless there's something that makes Go easier to write, test, or deploy; that might be the case, but you haven't shown it.
Our team was able to develop and deploy about 20 microservices in Go in the past year or so, which is really awesome. I can say this after having worked with several other languages.
We have started parallelizing our tests with the new subtest feature: leaktest in the top-level test, t.Parallel in the subtests. This means we only check for leaks in between batches of parallel subtests. This works OK for us for now since our slowest "test" is really a huge data-driven test suite, and that's the only place we're currently parallelizing, although it would be better if we could parallelize more of our tests.