Technical analysis is the projection of future price data through analysis of past price data (usually in hopes of drawing trendlines or finding "patterns"). Options pricing is quite a different beast - it encodes marketwide uncertainty about the future price of the underlying, which has little to do with the underlying's past price action and everything to do with all known information about the actual company: fundamental analysis, market sentiment, future expectations and risks, etc.
To put it another way, to price an option I need a) the current price of the underlying, b) the time until option expiry, c) the strike price of the option, and d) the collective expectation of how much the underlying's price will vary between now and expiry. This last piece is "volatility", and it is the only piece that can't be measured directly; instead, given price discovery on a sufficiently liquid contract, we can invert the pricing formula to derive the volatility expectation that satisfies the current market price (the "implied volatility"). Per the efficient market hypothesis, we can generally treat this as a best-effort proxy for all public information about the underlying. None of this calculation requires any measurement or analysis of the underlying's past price action, patterns, etc. The options price will necessarily incorporate TA traders' sentiments about the underlying based on their TA (or whatever else), just as it will incorporate fundamentals traders' sentiments (and, if you're quick and savvy enough, insiders' advance knowledge!). The price fundamentally reflects market sentiment about the future, not some projection of trends from the past.
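To make the "invert the pricing formula" step concrete, here's a minimal sketch in C that backs out implied volatility from a call's market price using the Black-Scholes model and bisection. All the inputs (spot 100, strike 105, 90 days, 2% rate, $2.50 mid-price) are made-up for illustration:

    #include <math.h>
    #include <stdio.h>

    /* Standard normal CDF via the complementary error function. */
    static double norm_cdf(double x) {
        return 0.5 * erfc(-x / sqrt(2.0));
    }

    /* Black-Scholes price of a European call.
       S: spot, K: strike, T: years to expiry, r: risk-free rate,
       sigma: annualized volatility. */
    static double bs_call(double S, double K, double T, double r, double sigma) {
        double d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T));
        double d2 = d1 - sigma * sqrt(T);
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2);
    }

    /* Invert the formula: find the sigma whose model price matches the
       observed market price. A call's price is monotone increasing in
       sigma, so simple bisection converges reliably. */
    static double implied_vol(double market, double S, double K, double T, double r) {
        double lo = 1e-4, hi = 5.0; /* bracket: 0.01% to 500% vol */
        for (int i = 0; i < 100; i++) {
            double mid = 0.5 * (lo + hi);
            if (bs_call(S, K, T, r, mid) < market)
                lo = mid;
            else
                hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    int main(void) {
        /* Made-up example: spot 100, strike 105, 90 days out, 2% rate,
           observed mid-price 2.50. */
        double iv = implied_vol(2.50, 100.0, 105.0, 90.0 / 365.0, 0.02);
        printf("implied volatility: %.2f%%\n", iv * 100.0);
        return 0;
    }

Real pricers typically use Newton's method on vega rather than bisection, and handle dividends, American exercise, and bid/ask noise, but the inversion idea is the same.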
This is a breathtakingly disingenuous summary of the article. I cannot imagine a perspective sufficiently warped to produce this interpretation a priori.
Agreed. It’s easy to imagine billions of reasons why people will defend indefensible behavior by companies with billions of dollars, though.
unless tape, and the infrastructure to support it, is dramatically cheaper than disk,
This turns out to be the case, with the cost difference growing as the archive size scales. Once you hit petascale, it's not even close. However, most large-scale tape deployments also have disk involved, so it's usually not one or the other.
You might squirm at using refurbished or used media, but those 3TB SAS ex-enterprise disks are often the same price as or cheaper than the tapes themselves (excluding tape drive costs!). Will magnetic storage last 30 years? Probably not, but disks don't instantly demagnetize either. Both tape and offline magnetic platters benefit from ideal storage conditions.
It's not just cost / media, though. Automated handling is a big advantage, too. At the scale where tape makes sense (north of 400TB in retention) I think the inconvenience of handling disks with similar aggregate capacity would be significant.
I guess slotting disks into a storage shelf is similar to loading a tape changer robot. I can't imagine the backplane slots on a disk array being rated at a significant lifetime number of insertions / removals.
If you're ok with individual storage units as small as 3TB, then we're talking about a different set of needs. At that scale, whatever you can lay hands on is probably fine. Used tape is also cheaper than new. IA is dealing with petascale, which is why I mentioned that the price difference widens with scale.
This is a common use for tape: via tools like HPSS, an archive can have a couple petabytes of disk in front of it and still present the whole thing as a single POSIX filesystem namespace, with data migration handled transparently so that hot data stays on low-latency storage.
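For illustration, here's a toy sketch in C of the kind of policy decision a hierarchical storage manager makes. The names and thresholds are invented for the example - this is not HPSS's actual interface - but the idea is that the namespace stays fixed while cold data is demoted from the disk cache to tape:

    #include <stdbool.h>
    #include <time.h>

    /* Toy model of a hierarchical-storage migration policy. Real HSM
       systems track far more state than this. */
    struct archive_file {
        time_t last_access;  /* when the file was last read */
        long long bytes;     /* file size */
        bool on_disk;        /* currently resident in the disk cache? */
    };

    /* Demote cold, large files from disk to tape; the POSIX namespace
       is untouched, only the backing storage moves. */
    static bool should_migrate_to_tape(const struct archive_file *f, time_t now) {
        const double cold_after_days = 30.0;     /* hypothetical policy knob */
        const long long min_bytes = 64LL << 20;  /* tiny files stay on disk */
        double idle_days = difftime(now, f->last_access) / 86400.0;
        return f->on_disk && idle_days > cold_after_days && f->bytes > min_bytes;
    }

    int main(void) {
        struct archive_file f = {
            .last_access = time(NULL) - 90 * 86400, /* idle ~90 days */
            .bytes = 1LL << 30,                     /* 1 GiB */
            .on_disk = true
        };
        return should_migrate_to_tape(&f, time(NULL)) ? 0 : 1;
    }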
Even just spawning a thread is going to make somebody complain that they can't build the code on their platform due to C11/pthread/OpenMP.
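For anyone who hasn't hit this: C11 <threads.h> is optional (a conforming compiler may define __STDC_NO_THREADS__ and ship without it), so even "standard" threaded C tends to end up in a preprocessor dance like this sketch:

    #include <stdio.h>

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L && \
        !defined(__STDC_NO_THREADS__)
      /* C11 threads are actually available. */
      #include <threads.h>
      static int worker(void *arg) { (void)arg; puts("C11 thread"); return 0; }
      static void run(void) {
          thrd_t t;
          if (thrd_create(&t, worker, NULL) == thrd_success)
              thrd_join(t, NULL);
      }
    #elif defined(__unix__) || defined(__APPLE__)
      /* Fall back to POSIX threads. */
      #include <pthread.h>
      static void *worker(void *arg) { (void)arg; puts("pthread"); return NULL; }
      static void run(void) {
          pthread_t t;
          if (pthread_create(&t, NULL, worker, NULL) == 0)
              pthread_join(t, NULL);
      }
    #else
      /* No threads available: run the work inline. */
      static void run(void) { puts("single-threaded fallback"); }
    #endif

    int main(void) { run(); return 0; }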
This matches squarely with my experience, but it's not limited to threading, and Rust evades a large swath of these problems by supporting relatively few platforms. I look forward to the day I can run Rust wherever I run C!
While Rust doesn't match C's platform coverage, it has (as of my last check) better coverage than something like CPython currently does.
The big thing, though, is that the Rust project is honest about its tiers of support, whereas for many projects "supported platform" on minor platforms often means "it still compiles (at least we think it does; when a maintainer tries it and it fails, they'll fix it)".
Not to be too glib, though: there are obviously tools out there with as much or more rigor than Rust that cover more platforms. Just... "supported platforms" means different things in different contexts.
It's all too common (not just with compilers) for someone to port the subset they care about and declare the job done. Rust's decision to define standards of compliance and be deliberate about which platforms are viable targets and which don't meet their needs heads off whole classes of trouble. I think it's a completely valid approach, despite complaints from some.