Soon there will be Lemonade-style, app-driven per-hospitalization insurance on top of normal health insurance, where for elective procedures you will be able to protect yourself from surprises like balance billing, etc.
Translation for those not proficient in Academese: serverless computing, a novel paradigm enabled by a large cloud provider, does not match our past research, thus we argue why it's not the right thing.
This is the classic "if theory doesn't match practice, let's change the practice" approach, which is far too common in academic systems research.
I trust AWS to know what its customers truly want (in terms of performance and cost) and what it can provide, since AWS has a real financial stake in its success.
A decade ago the same researchers would have mourned the emergence of cloud computing as the wrong thing and instead asked for P2P computing, since that's what they had spent the previous decade doing research on.
I'm afraid your "translation" does a disservice to all the potential readers of the paper, to those who want to decide whether to use serverless computing at present, and to the authors.
Quoting from the conclusion of the paper:
"Taken together, these challenges seem both interesting and surmountable.
The FaaS platforms from cloud providers are not fully
open source, but the systems issues delineated above can be explored
in new systems by third parties using cloud features like
container orchestration. The program analysis and scheduling issues
are likely to open up significant opportunities for more formal
research, especially for data-centric programs. Finally, language
design issues remain a fascinating challenge, bridging program
analysis power to programmer productivity and design tastes. In
sum, we are optimistic that research can open the cloud’s full potential
to programmers. Whether we call the new results “serverless
computing” or something else, the future is fluid."
Interestingly, a paper not 10, but 9 years ago, not by the same authors, but by the same group (systems folks at Berkeley), was proclaiming the cloud as an idea whose time had finally come. No mourning there. https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-...
Scientists/academics treat prestige as their primary currency, and most would view lack of access to information simply as a sign of not being at a prestigious enough institution.
You are being naive if you think academia is filled with do-gooders; it's just a race, but of another kind.
Unless there is a direct benefit to doing things more openly (e.g. an accessible code base that thousands of researchers can use, helping them publish and cite your work quickly) or a significant risk of getting scooped, there is very little to motivate any change.
Also, if you think the journal/peer-review system is bad, just get a glimpse of the "grant review" system at NSF/NIH or of "tenure committees." They make the worst stack-ranking performance reviews seem like light-hearted fun.
I think you are overgeneralizing the applicability of Neural Architecture Search etc. and cherry-picking individual examples. There is an enormous gap between what gets published in academia and what's actually useful.
E.g., the compute wars have only intensified with TPUs and FPGAs. Sure, for training you might be okay with a few 1080 Tis, but good luck building any reliable, cheap, low-latency service that uses DNNs. Similarly, "big data" for academia is a few terabytes, but real big data is petabytes of street-level imagery, video/audio, etc.
Your last comment reminded me of this article [1] on "Google Maps's Moat", which discusses the vast resources that Google has poured into collecting data at a global scale to make Google Maps what it is.
I have come across the GitHub repo author's work a number of times now, and I am continually impressed by the quality of the documentation, examples, and immediately usable code.
"When you end up with a bunch of papers showing that genetic algorithms are competitive with your methods, this does not mean that we’ve made an advance in genetic algorithms. It is far more likely that this means that your method is a lousy implementation of random search."
The article seems reasonable and well-argued. But policy gradients are a major cornerstone of reinforcement learning - just about every textbook will dedicate some time to them.
So how can we reconcile that observation with the arguments in the article? Is Recht overstating his case, or is this a big screw-up in the field in general?
Can anyone who knows about reinforcement learning weigh in?
Ben's blog series culminated in a nice article[1] touring reinforcement learning. He also held a tutorial on the topic at ICML[2]. They might address some of your concerns.
I am working on a project to build an ML- and CV-enabled database for images and videos. It supports visual search/nearest-neighbor lookup as a core primitive and is built for scalability using Kubernetes.
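For anyone wondering what "visual search/nearest-neighbor lookup as a core primitive" means in practice, here is a minimal, purely illustrative sketch (not the project's actual code): assume some embed() step (e.g. a CNN feature extractor, hypothetical here) maps each image to a fixed-length vector, and queries are answered by cosine-similarity nearest-neighbor search over the stored vectors.

```python
# Illustrative sketch of visual search via nearest-neighbor lookup.
# The class and method names are hypothetical, not the project's API.
import numpy as np

class VisualIndex:
    def __init__(self, dim):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.ids = []

    def add(self, image_id, embedding):
        # L2-normalize so a dot product equals cosine similarity.
        v = embedding / np.linalg.norm(embedding)
        self.vectors = np.vstack([self.vectors, v[None, :]])
        self.ids.append(image_id)

    def query(self, embedding, k=5):
        q = embedding / np.linalg.norm(embedding)
        sims = self.vectors @ q               # similarity to every stored image
        top = np.argsort(-sims)[:k]           # indices of the k most similar
        return [(self.ids[i], float(sims[i])) for i in top]
```

At scale, the brute-force scan above would be swapped for an approximate nearest-neighbor index (e.g. Faiss or Annoy) so search stays fast over millions of images.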
Scanner is a great tool; I have been following its development very closely over the last year. Great to see it reach 1.0. I highly recommend checking out Hwang [1] (a sparse video decoder), which is bundled with and used by Scanner.
Scanner is one of the first tools to leverage Docker/Kubernetes to demonstrate the ability to ship a complex, heterogeneous architecture in a reliable, reproducible manner.
Hwang lets you perform "efficient random access" across a video by building a GOP/segment-aware index. This comes in handy in a lot of applications where you want to access a particular frame or set of frames but do not wish to decode and store all frames. Most tools such as ffmpeg (the command-line application, not the library) are optimized for the sequential-decode use case.
> Hwang is a library for performing fast decode of frames from h.264 encoded video (most mp4s). Hwang provides both a Python and C++ API. Hwang decodes on the CPU (using ffmpeg) or on the GPU (using the NVIDIA hardware decoder).
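To make the sequential-vs-random-access distinction concrete, here is a rough sketch of keyframe-aware random access using PyAV (Python bindings to FFmpeg's libraries), not Hwang's actual API: seek to the keyframe (GOP boundary) just before the target frame and decode forward from there, rather than decoding the whole video.

```python
# Sketch of GOP-aware random access: seek to the keyframe preceding the
# target frame and decode only the few frames needed to reach it.
# Uses PyAV for illustration; Hwang's own index and API are not shown here.
import av

def grab_frame(path, frame_index):
    container = av.open(path)
    stream = container.streams.video[0]
    fps = float(stream.average_rate)
    # Convert the frame index to a presentation timestamp in the stream's time base.
    target_pts = int((frame_index / fps) / stream.time_base)
    # Seek lands on the nearest keyframe at or before target_pts (a GOP boundary).
    container.seek(target_pts, backward=True, any_frame=False, stream=stream)
    for frame in container.decode(stream):
        if frame.pts is not None and frame.pts >= target_pts:
            return frame.to_ndarray(format='rgb24')
    return None
```

A GOP/segment-aware index essentially makes that seek step cheap and exact, so you never decode more than one group of pictures per requested frame.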