- Never call yourself a Junior Engineer. If you have experience, even just a little, you are an experienced engineer.
- Develop your communication skills. The barrier to entry for presenting at meetups is extremely low; do that with the goal of eventually presenting at tech conferences.
- Skip the side projects. Of the 10 or so water walkers I've worked with, none has a GitHub account.
- Don't buy into the last paragraph of this blog. Software development isn't hard, and the last thing you want to do is think you need to constantly keep up with the latest and greatest tech. If you step back, it looks a lot like an industry that doesn't know whether it's coming or going; don't fall for the trap.
Many of the very best engineers are effectively prohibited from having GitHub projects. The deeper down the stack you are, or the more hardcore the computer science you work on, the more likely this is to be true, because your primary work area is deemed a trade secret.
This isn't a major issue if you are an app developer because a side project app is unlikely to convey valuable trade secrets. A database kernel engineer, on the other hand, can't have any side projects related to databases that demonstrate their skill level because such a demonstration would violate their non-disclosure restrictions.
This comes off a little aggressive to me. People should be able to provide opinions and anecdotes on HN without the constant "citation needed" for everything. We all know everything isn't fully backed by 4 studies that will be presented here and now. Just say you disagree and offer why. It will probably be an anecdote too.
On the other hand, if everyone used "anecdata" to support their arguments, we would never get anywhere.
If you want to make qualitative arguments, make qualitative arguments. What I absolutely can't stand is lame attempts to lend weight to such arguments using completely unverifiable and most likely non-representative numbers.
I've posted about this in the past, but I have "suffered" from an eye condition for the last several years, due (I think) to extreme computer use and an overhead HVAC vent.
I've seen a few ophthalmologists; the official diagnosis is blepharitis, but ultimately my eyes are constantly tired and dry from the moment I wake up, with floaters, sharp pains, and nearly constant muscle spasms.
I've tried fish oil, antibiotics, Restasis, numerous drops and gels, various apps, behavioral changes, etc. I haven't found the silver bullet, but I give my eyelids massages, try to drink plenty of water, and try to avoid environments that make things worse. I limit computer use to work hours only and am in a role where I only really use a computer for ~4 hours a day. The last few years things haven't gotten worse, but they haven't gotten better either.
Best advice I can give is take breaks and have hobbies that don't require a computer.
Same here. At my previous job, my desk was positioned directly below an HVAC vent. This was in DC, and I used to feel more relieved stepping outside into 90-degree humidity, because my eyes were so parched.
Someone really needs to invent a detachable, re-sizable vent cover that can block or redirect air flow, and yet still be easily removable for irritating building superintendent inspections.
Depends on your workload. If you have a constant workload, then sure. But if you have variable workloads then AWS/cloud providers are a no brainer.
Source: 6 figure/mo AWS spend. We use pretty much every infrastructure piece in AWS in multiple regions and to replicate that flexibility in our own data centers would be an incredible cost.
Your workload doesn't need to be super constant either. I ran the numbers some time ago: for larger boxes, if you needed the machines for more than 8 hours a day, it was cheaper to run them in a DC and just leave them idle the rest of the time than to run them on AWS for those 8+ hours and shut them down the rest of the time.
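The breakeven math behind that claim is easy to sketch. The prices below are made-up illustrative numbers, not actual AWS or colocation quotes; the point is the shape of the comparison, not the exact crossover.

```python
# Rough breakeven sketch: on-demand cloud hours vs. an always-on box
# in a data center. All prices here are illustrative assumptions.

CLOUD_PER_HOUR = 2.00    # hypothetical on-demand rate for a large box
DC_PER_MONTH = 350.00    # hypothetical amortized hardware + colo cost
DAYS_PER_MONTH = 30

def monthly_cloud_cost(hours_per_day: float) -> float:
    """Cloud cost if you run the box only hours_per_day and shut it down."""
    return CLOUD_PER_HOUR * hours_per_day * DAYS_PER_MONTH

def breakeven_hours_per_day() -> float:
    """Daily usage above which the always-on DC box becomes cheaper."""
    return DC_PER_MONTH / (CLOUD_PER_HOUR * DAYS_PER_MONTH)

if __name__ == "__main__":
    print(f"breakeven: {breakeven_hours_per_day():.1f} h/day")
    for h in (4, 8, 12):
        print(f"{h} h/day -> cloud ${monthly_cloud_cost(h):.0f}/mo "
              f"vs. DC ${DC_PER_MONTH:.0f}/mo")
```

With these assumed numbers the crossover lands under 6 hours a day; with real quotes the crossover moves, but the structure of the comparison stays the same.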
Indeed. I have no sympathy for massive multi-national corporations when executive pay and corporate profits are as high as they are now. The world does not need any more corporate welfare programs.
Hi Hillel, not sure if you can help, but I am interested in TLA+ and was following your learntla intro site, particularly this part: https://learntla.com/pluscal/toolbox/. I installed the TLA+ Toolbox, added the example spec, translated it, and then tried to "run the model", but nothing happens. No output, the start and end times are still blank, no statistics, etc. It is as if I never hit the run button. I don't see any errors in the console, and since I'm not exactly sure what I am doing, I may have missed something. FWIW, the Model Overview view looks identical to the screenshot on the web page.
Would you mind emailing me the screenshots at h@learntla? My immediate guess would be that "temporal properties" is unchecked, but I'd have to take a look.
There is a lot of incorrect information in this post.
- Containers are a combination of namespaces, cgroups, and chroot (maybe). You don't need LXC to use containers. Docker doesn't even use LXC.
- There is no overhead for running processes in containers.
- There is no requirement to virtualize networks for containers. They can be configured to use the host's network directly, at which point you are bound by the host's network capabilities. Otherwise it is typically a combination of bridges and overlay networks for which the benefits outweigh the performance concerns for most workloads.
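The first point above is easy to observe directly: namespaces are a kernel primitive every process already lives in, with no LXC or Docker involved. This sketch assumes a Linux host, where each process's namespace memberships are visible under `/proc`:

```python
# Every Linux process runs inside a set of namespaces, container or not.
# /proc/<pid>/ns lists them; no container runtime is needed to have them.
# Container runtimes just combine these primitives with cgroups and a
# changed root, as described above. (Linux-only sketch.)
import os

def namespace_kinds(pid: str = "self") -> set:
    """Return the namespace types the given process belongs to."""
    return set(os.listdir(f"/proc/{pid}/ns"))

if __name__ == "__main__":
    # Typical modern kernels show: net, pid, mnt, uts, ipc, user, cgroup
    print(sorted(namespace_kinds()))
```

Run it outside any container and you still see network, PID, and mount namespaces, because "in a namespace" is the normal state of every process, not an extra layer a runtime bolts on.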
I've mentioned this in another comment but the short answer is: rate-limiting.
Now, Netflix, being a priority customer, may get higher limits and such. But the average-Joe public cloud user should keep that in mind before trying to use EC2 for running containers.
Even if that claim wasn't wrong, it's an unrelated question. If there were rate-limiting problems, they'd apply to using EC2 at all even without involving containers.
I think you're ignoring the fundamental issue when deploying container-based services vs. services on multiple VMs. Usually, the architecture for containers involves spinning up a bunch of VMs and deploying some kind of layer on top of that (K8s, Swarm, or something else). When you deploy containers, they may not be on the same VM, or the overlay network itself may require communication with a container on another VM. This usually creates a lot more communication between hosts, and rate limiting becomes the bottleneck.
Do you have any evidence of this rate-limiting showing that it's that much of a problem? People have been running clustered apps on EC2 for over a decade and it's not like you hear people saying you can't run Cassandra, ElasticSearch, etc. on EC2 because the network is limited.
Similarly, do you have any data showing that a container system has such incredible overhead compared to the actual application workload? I mean, if that was true you'd think the entire Kubernetes team would be staying up nights figuring out how to reduce overhead.
You run a compute pool, you don't spin up EC2 instances on demand for this kind of application. You scale the pool based on target utilization metrics.
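The pool-sizing idea in that comment can be sketched in a few lines. The 70% utilization target and the per-instance capacity are made-up illustrative parameters, not anyone's production settings:

```python
# Size a compute pool so current load sits at or below a target
# utilization, instead of launching instances per task on demand.
# Capacity units and the 0.70 target are illustrative assumptions.
import math

def desired_pool_size(current_load: float,
                      capacity_per_instance: float,
                      target_utilization: float = 0.70) -> int:
    """Smallest instance count keeping utilization at or below target."""
    if current_load <= 0:
        return 0
    return math.ceil(current_load / (capacity_per_instance * target_utilization))

if __name__ == "__main__":
    # 1000 units of load, 100 units/instance, 70% target -> 15 instances
    print(desired_pool_size(1000, 100))
```

A real autoscaler would add hysteresis and cooldowns so the pool doesn't thrash, but the core is this one ceiling division against a utilization target.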
I was wrong about Docker: back when I was playing with it, it did use LXC, and it appears to have started out as a project to make a specialized version of LXC. You're right that Docker has its own container runtime now.
The overhead for running containers is usually very low but real. The OS needs to partition low level resources that are normally shared and the scheduling introduces some overhead.
I disagree about network performance. The virtualization adds a somewhat small but non-trivial overhead here (the overhead for other things could probably be considered trivial).
I'd need a citation for the claim that a process running in a namespace adds overhead.
My point about network virtualization is that it is not required to use Linux containers. Yes, some container tools do create network abstractions that add overhead, but they aren't required, and most tools let you optionally bypass the abstraction and sit directly on the host's network stack.
I think you are misreading this question. It is a merit of the organization and I think gets at the issue of lots of organizations using OSS, but very few of them actually contributing back. What this requires of the organization is to acknowledge that they need to contribute if they want to benefit long term and thus they need to allow and encourage their engineers to participate.
If employers encouraged it, or at least allowed it, you'd see a lot more great developers with this merit. But for most medium-to-large organizations it's a one-way street with OSS: they prohibit their engineers from open-sourcing projects or contributing to the projects the organization depends on.
The problem with this test and the Joel test is that an employer can check all of the boxes (essentially: do you follow modern software practices that were revolutionary 20 years ago, and are you "Agile") and it can still result in a toxic or less-than-optimal environment.
I did like the questions around OSS and sharing expertise. I'd like to see more questions that address recruitment anti-patterns (diversity, ageism, disclosing previous salary, etc.) and tech-organization anti-patterns (an actual career path on par with management, non-transparent equity grants, etc.).
Like, what would the questions have to be such that even, say, Google wouldn't look so good answering them?