I remember you from the good ol' Flash times.
I didn't want to downplay the effort you put into threejs. <3
My point was that you don't have to have a PhD to write a 3D engine; the math is there, and there's plenty of literature, references and libraries at hand, so anyone can make their own toy 3D renderer in a weekend.
... to make that work in browserland, that's a whole other story. :)
I remember writing a tiny deeplinking "library", back in 2006.
When I finished implementing the specification it had ~30 lines of code.
By the time it was compatible with all common browsers, it had grown to about 500. :)
Even though it was fun, I often felt it was useless because the skills I was acquiring were only applicable to game development, and that was something I wasn't interested in.
It was quite a shock when I found myself working on a WebGL demo for Google² and everything I had learnt in the demoscene paid off (and has continued paying off ever since!³⁴).
Getting older, one of the most important things I've learned is that no technical skill is useless, especially since you never know where you'll end up. I've had so many things at university where I was absolutely sure that I'd never need them. And for like 90% of them I was right. But the remaining 10% were incredible boosters to my career, and I never would have thought they could have such a huge impact.
Being a techie also means that you acquire meta skills like being able to figure out and troubleshoot random tech very fast, no matter how trivial or complex. Where most people would never bother to dig deep enough, or would quit really fast, techies can dig wide and deep, over a long period if need be.
> You claim no technical skills are useless then claim that you didn't need (use) 90% of them.
Restating their (apparent) point more explicitly: there are no technical skills that have probability 0% of being used; there are many (most) technical skills that have probability less than 100% of being used. There are cost-benefit tradeoffs to consider, but assuming the cost is low enough, it's better (useful) to have it and not need it than need it and not have it.
The argument is that you cannot know ahead of time which skills you are going to use, so at best you can only determine which were useless after you're dead. However, there's also the "practice" argument: piano learners don't play scales so they can play a scale at Carnegie Hall; the practice fits within broader learning.
It is about coverage and opportunity in our lives.
The more skills one picks up, the greater the chance any one of them will make a big difference.
Here is a crazy example:
Paper tape. In the late '80s, I worked in some smaller shops using paper tape to drive their CNC machines. I learned all about it and can patch tape, the whole nine yards.
A few years ago a call for help found its way to me, and it turns out there are STILL people driving CNC machines off paper tape! I was able to fix the setup and get them running, edit a few programs, and repair a damaged tape or two. Made a nice bit of extra cash.
Seen from the perspective of my own education, that is a very strange thing to say. Did you never learn anything practical at school or university? My own education was full of things that I have used all my life in my career as an engineer.
You are going to be very surprised when you find out that all your “practical” skills are nothing but the distillation of decades of research and applied science, most of it done at schools and universities.
Imagine how much better off our society would be if every adult was grounded in the fundamentals of political science, ethics, economics, and philosophy.
An educated electorate demands higher quality candidates. The populist demagogues of the last several decades wouldn't have stood a chance.
Oh man, I couldn't disagree more. Learning is learning. My education was broad in a few ways, and so many of the things I learned that were seemingly unrelated to technical skills have made me a better engineer in many ways.
Some young people aren't sure what they want to do with life and school is a reasonable place to figure that out and hopefully pick up some practical skills/networking/life experience along the way.
>And for like 90% of them I was right. But the remaining 10%
I'd argue that this kind of language isn't even right. It's not only that you don't know when you need something, you don't even know when you use it. If we've learned one thing from the success of ML systems it's that the proper representation of knowledge is extremely complex and connected. Everything influences everything else. You can't learn "10%" of a language, or "10% of programming", as if there exists some chunk neatly separated from the rest.
It's much more likely that everything you learned contributes to most of what you do, even if we're not actively aware of it. You can't learn x% of Mandarin by learning y% of the words; it's not even appropriate to separate anything into useful or useless. Almost everything you pick up illuminates your mental model just a little bit more.
My demos were my resume (CV) at the time. I remember Jez San of Argonaut (Starfox) contacting me and wanting me to come interview for a development position. I was 16 IIRC.
A year or so after that I became a professional game developer. In fact, the game I was working on was changed from 2D to 3D overnight and I pulled out my demo source and ported the whole 3D engine over in two weeks:
Demos really force you to learn everything there is to know about how a modern computer system works, CPU, RAM, ROM, bus, video, etc. That knowledge will remain invaluable.
I mostly do web dev in C#, but any time I write a single line of code my brain is thinking in the background "how many opcodes will this be? what about this branch? am I making an extra copy of this variable for no reason?"
Worrying about copying variables has only superficial value nowadays - the compiler will track variable lifetimes and allocate registers or spill to the stack as appropriate.
For C# specifically, if you want to scratch the itch, there is a way :)
If you are using VS, you can install the Disasmo extension and disassemble arbitrary methods with SHIFT+ALT+D (keep in mind it's ready-to-run-like codegen, which does not have all optimizations).
You can also go further and use the `DOTNET_JitDisasm='methodName'` env variable - in this case you will see how the method transitions through all compilation tiers (if it gets promoted past Tier 0). And last but not least, if you build your .NET binary with NativeAOT, you can just use the standard disassembler, debugger or profiler you would use for C++ binaries.
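For the curious, here's a minimal sketch of that JitDisasm workflow (assuming .NET 7+, where `DOTNET_JitDisasm` works on the shipped release runtime; the `Add` method and the loop are just illustrative):

```csharp
using System;
using System.Runtime.CompilerServices;

class Program
{
    // NoInlining keeps the method as a separate compilation unit,
    // so its disassembly is printed on its own.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int Add(int a, int b)
    {
        int copy = a;      // deliberately redundant copy; the JIT's
        return copy + b;   // register allocator typically eliminates it
    }

    static void Main()
    {
        long sum = 0;
        // Call it enough times for tiered compilation to promote Add
        // past Tier 0, so you can watch the codegen improve per tier.
        for (int i = 0; i < 1_000_000; i++)
            sum += Add(i, 1);
        Console.WriteLine(sum);
    }
}
```

Run it with the env variable set (e.g. `DOTNET_JitDisasm=Add dotnet run -c Release`) and the runtime prints the generated assembly each time the method is (re)compiled, so you can check whether the redundant `copy` local survives into the final codegen (it typically doesn't).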
This brings back memories of my first ever "high profile" job[1][2]. It was 2012; I was a wannabe gamedev at the time (even made a couple of mediocre demos), but actually a pretty decent backend dev, so they somehow hired me to do the backend/devops. WebGL was a hot new thing at the time, and we were running into a lot of compatibility problems on various consumer GPUs. So I helped the 3D/frontend guys get set up with Sentry, so they could actually see and fix all the problems. Even though all my code ended up being "invisible", I like to think I had a hand in making the beautiful graphics happen.
I was a wannabe gamedev also, was obsessed with learning 3D concepts, optimizing code, learning assembly language, and so on. After getting nowhere with game companies (You think FAANG is selective?? Try late-90's game companies), my first job ended up being writing GPU drivers for a major graphics card company. Lucked out, I got to work with gaming technology but without the soul-crushing burnout and 80 hour week death marches.
This is honestly one of the best kept secrets in tech. If you actually want some work life balance and don't care too much about job titles and status, the best gigs period are those where you support the "front line" devs in some fashion. Nearly the same pay, but less stress and more self-direction. Tooling and QA folks rarely get called up at 3am on a Sunday morning because prod is down.
> Tooling and QA folks rarely get called up at 3am on a Sunday morning because prod is down.
My (devops) team did the tooling, but we worked very closely with QA, and while they didn't get the pages, both teams felt that they shared the responsibility of making the final product (well, basically a cool tech demo) rock solid for the viewers. In 4.5 years there I touched ca. 600 projects, but I agree with you - it was still easier than doing the frontend work. Arguably less glory, but definitely more life.
Ha, for some reason my company (actually Zscaler) is blocking your ricardocabello.com site because of "Copyright infringement". I'll check it out on my personal laptop.
I remember your 3 Dreams of Black experiment when it came out, that was awesome work!
Yes, the whole problem area of GPU programming, not just for 3D or VR but also pixel-perfect 2D rendering on the GPU (which is going to be needed as screen resolutions, color depths and frame rates increase, leading to more overhead if rendering on the CPU), is ripe for the application of demoscene-like techniques.
I remember my father telling me (tongue in cheek) "sure, 3d realtime polygons are nice, but when will it earn money?". And many years later, I'm still coding both for fun and money (and usefulness).
At least JS-powered renderers can do something about making sure it looks the same in all browsers. With CSS you can't do anything about the fact that this simple demo doesn't render correctly in Safari/iOS and Firefox/Android.