Hacker News | asien's comments

> People still use Java?

In the Fortune 500? YES, pretty much everywhere.

In startups? It's Kotlin or Scala mostly.

The JVM is far from unpopular, but for sure it's not as sexy as ['React','Deno'].push(...)


In my opinion you are mistaken; affordability does not mean it works as a home appliance.

Hydro is pretty cheap, but that does not mean it’s fit for the home.

Here the best scenario is similar to the Singapore-Australia or UK-Morocco remote solar grid projects.

This innovation is very interesting compared to water-to-hydrogen electrolysis, which is a bit expensive and needs metals that are going to become difficult to source in the coming decades.

Gallium is quite abundant, from my understanding.


Appreciate the graph in the article; this time the numbers are actually calculated by engineers from the DoE, not by journalists...

Even if the plan is there, without a “war economy” and the involvement of basically every single American, it’s nearly impossible to reach those levels of deployment.

Money is not the answer to everything; as pointed out, we are also going to hit “civilizational” limits with land and resource exhaustion...

My humble opinion is we should simply consume far less energy and accept a much simpler lifestyle; that would be much easier...


Did we read the same article? The land usage for wind and solar is about that of golf courses and coal. Getting permits to use the land is the main obstacle.

As for resource constraints, the limiting factor is the speed at which lithium mines can come online, the bulk of that being environmental reviews and lawsuits.

Americans are just going to have to come to terms with the fact that building or mining stuff causes localized environmental damage and other externalities for the communities that live close by. We should weigh the pros and cons and move swiftly with whatever the decision is.


Lithium isn't even in the top twenty of mined metals. It would also be relatively cheap to recycle at scale.

The problems are political, i.e. monied interests, not environmental, technological, or even financial.

We should also be optimistic about non-lithium solid-state batteries!

>https://www.weforum.org/agenda/2021/10/all-tonnes-metals-ore...


Honestly as a golfer, a windmill in the middle of the fairway would make for an interesting hole


Mini-golf, but life size?


Perfect :)


That’s correct.

The startup Ynsect is at the moment targeting fish farming, not human consumption.

What would be the impact of that type of food over many generations?

I’m scared of a “SuperBug” type of disease that would be resistant to antibiotics because it’s been dormant in us for too long...


The probability of a Covid-type pandemic gets closer to 1 the more bug farms there are.

The lethality of that type of outbreak gets worse the longer we ignore the threat.

It’s just pure maths, to be honest.
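
To spell out the kind of argument I have in mind (a rough sketch, assuming each farm independently carries some small spillover probability p per year):

    P_n = 1 - (1 - p)^n, \qquad \lim_{n \to \infty} P_n = 1

where P_n is the probability of at least one spillover across n farms. Even a tiny p pushes P_n toward 1 as n grows; whether the farms are really independent is of course an assumption.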


I don't know whether what you are saying is true. But for the sake of argument, let's assume that it is.

Could you explain to me what is exponential about this process?


I love how the guy is getting punished for misusing a math term.


'Asien' didn't misuse any math terms. I don't know why they're being downvoted.

'Shaburn' made some claims that sounded like they wanted to be math, but didn't make any sense as math. Hence my confusion.

I recognise that people use terms colloquially, like 'exponential', but given all the (pseudo?) sophisticated, math-y language in the comment

> Given the rate of mutation and transmission in bugs with natural gestation and migration, the probability of catastrophic outcomes is exponential without a similar dataset in other human food sources.

I had hoped that Shaburn actually had a more concrete model in mind that they could explain.


1. Number of people eating bugs (driven by mimesis pushed through a media narrative and thus typically viral, often exponential).

2. Number of varieties of bugs being eaten (regionality and entrepreneurialism, often referred to as Cambrian explosions in perfectly competitive markets, thus exhibiting exponential growth functions).

3. Number of geographies bugs for consumption are being grown in.

4. Number of production methods and processes.

5. New combinations of genetics of peoples and insects/infectious organisms being consumed. Think Montezuma's revenge or lactose intolerance in certain regions of the world, except possibly contagious and deadly.

Multiply all that by an orders-of-magnitude faster gestation cycle, and thus the chance for mutation, aside from the technology developed to support the existing food chain. The number of mutations per lifecycle increases the chances of a deadly DNA combination by 12x, so an order of magnitude.

Average lifespan of... A. Bacteria: 12 hours. B. Insect: 12 months. C. Mammal: 12 years (the shortest being the primary disease harbinger, the rat).


Could you please point to any source for your claim that the number of people eating insects has grown dramatically?

I could find this study from 2015 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4552653/ which says:

> At least 2 billion people globally eat insects in over 113 entomophageous countries though this habit is regarded negatively or as revolting by others [4–6]. More than 1900 species are consumed by local populations globally but insect consumption (entomophagy) shows an unequal distribution.

There's roughly 8 billion people on the globe. Between 2 billion and 8 billion, there's not much room left for exponential growth.


No surprise; just like Uber, Airbnb, etc., Coinbase has been overstaffed ever since they raised a major amount of capital many years ago...

Hence, I don’t see what 5K people can be doing at Coinbase, since some startups in Europe are doing the same with like 100 people...

It’s been known for years: startups overhire and lay off when a recession comes.

With the “nazi revolution” in the '50s, most startups rely exclusively on “Social Darwinism” to solve problems or improve products, by having employees fight internally and letting the best ideas come up to the executives. Of course only employees who know how to navigate corporate politics are able to reach them and win those fights..

Those 1000 were not part of it, but they will have no problem finding another position somewhere else.


> I think FB, Goog, etc. could lay off thousands with no adverse effects and in fact an increase in velocity, quality, and quantity of new features/products.

You can add Airbnb, Netflix, Uber; they often attend conferences to describe their architectures. It’s obvious most of these people have no idea what they are doing and have no clear direction. They are just havin’ fun while trying to navigate corporate politics. Even in the stuff that is published online, it’s scary to see there is no technical leadership whatsoever.

To be fair, I’ve worked in Fortune 500s as well; 60% of the workforce could be replaced with automation.

Since it’s cheaper and less risky, they just keep hiring people for repetitive tasks; it compensates for the technical debt.


There was a post here recently where a Netflix employee was proudly showing off their log processing system. Which was collecting the equivalent of nearly 2 MB of logs per minute of user streaming time.

In my mind, that's just bonkers, and no amount of handwaving could justify it.


> Which was collecting the equivalent of nearly 2 MB of logs per minute of user streaming time.

To clarify, could you say the same thing in a different way?


For every minute that someone streams Netflix, 2 MB of data is logged. So if 1,000 people are using Netflix simultaneously, they're generating 2 GB of logs per minute.


Warning: NPM packages are out of date x 1000000


> 2 MB of logs per minute of user streaming time.

2MB/minute is 33KB/second.

How is that impressive?


I think it's impressive that they somehow found 33KB/second worth of data to log for each stream. I can't even imagine the amount of useless shit that must be logged to get to that number.


This is where I'm at. Like that's honestly not much log data. But what are they actually logging? I imagine there is a LOT of repetitive data.


Detailed logging can function as an on-demand APM. Not a bad idea if you have the bandwidth and storage for it.


I think the impressive thing is how much data that is for each user-minute. What could they possibly be storing in 33KB for each second of Netflix you stream?


That's per user. So a million (or ten, 50...) active users means a lot more per minute.


i think you and the above poster are in vehement agreement. ingesting 2 MBs of logs per minute is impressive in its pluperfect unimpressiveness.

maybe the presentation was called "Timmy's first named pipe" or "Sally explores /etc/logrotate.d"


That’s a LOT of text to describe me sitting on my couch. 2MB per minute is far more than the most detailed biography in existence.


220m users. Let’s imagine 50m are streaming concurrently. That’s 100 TB a minute in logs lol. They could be storing multiple petabytes of logs an hour. My friend did some data center stuff for the Large Hadron Collider and wasn’t hitting these data ingestion rates, and these are just to record me binging The Office.
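
Back-of-the-envelope, in case anyone wants to check the numbers (the 50m concurrent streams is just my guess above, and 2 MB/min is the figure quoted from the presentation):

    public class NetflixLogMath {
        public static void main(String[] args) {
            long concurrentStreams = 50_000_000L;   // assumed concurrency, not an official number
            double mbPerMinutePerStream = 2.0;      // figure quoted from the presentation
            double tbPerMinute = concurrentStreams * mbPerMinutePerStream / 1_000_000; // MB -> TB (decimal)
            System.out.printf("~%.0f TB/min, ~%.0f PB/hour, ~%.0f PB/day%n",
                    tbPerMinute, tbPerMinute * 60 / 1000, tbPerMinute * 60 * 24 / 1000);
        }
    }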


The comment said "log processing system". Sounds more like it's a stream and not stored logs.


2 MB per minute per stream at Netflix scale is crazy.


There is a lot of tokenism in hiring. A Fortune 500 might be marketing a new "push to AI" or whatever and want to seem legit by hiring loads of people, quickly realising that it doesn't work like that but at least it looks good on paper.

FAANG type companies are more likely to do a big hire after doing a big raise. Imagine someone has given you $300M, what do they expect? Now you have the money, we want more features = more sales = more ROI. How do we do that? By hiring a load more people and again learning that it doesn't work like that. Leave it a year or 2 and the same investors complain about burn rate so you lay them off.


Just a note that ABNB did a huge layoff at the start of the pandemic, which allowed them to come out the other side a much stronger company. It actually highlights your point.


Was it the layoff that made them a stronger company or was it the market improving?


Both. When money is free flowing it's easy to avoid hard decisions (in business, money hides many mistakes). Companies may continue to fund projects that should be cut, or hire instead of optimizing a process. Prioritization alignment meetings end with "everything is a priority."

In ABNBs case, business going to almost zero overnight was a forcing function to a level not often seen. After being forced to lean up and prioritize, they were well positioned for the market to improve.


Seems the problem is that these massive companies hit a threshold in size and then everything is about self-perpetuation by creating large moats, even ones that don't make sense, hence you have teams and entire departments engaged in boondoggles that are wastes of time and resources.

Imagine Facebook pouring untold manpower and money into developing original content such as cloning HQ Trivia, for its also-ran streaming content that no one watches. Or even Facebook Reels, which mostly just reposts TikTok and Instagram material. Or the entire hopeless arena that is cloud gaming, which all of these tech companies are involved in with no service that has really taken off yet.

I suppose if the regulatory environment was to correctly deter these companies from staying so big and content and engaged in wasteful behavior, there would be actually more companies, and all of those people in the companies you mention would be distributed across smaller, nimbler, more customer-focused firms, with more competition and thus better choices for consumers. That's the theory, anyhow.


Yeah but which thousand ...


>I would assume any modern processor would make a context switch a one instruction affair.

That has been the historic assumption, and it has been proven wrong by every possible benchmark.

Consider TechEmpower[0] for raw stack performance: runtime-level threads outperform OS threads for IO-bound work, since OS threads were designed to be mapped onto physical cores.

That mapping is very expensive and inefficient.

Creating one thread for every request you have (Apache + PHP) will exhaust the hardware after a few thousand qps.

A runtime can indeed have millions of those “lightweight threads” without killing your machine, since it creates a pool of OS threads and taps into IO events to efficiently switch or resume contexts. This is far faster.

[0] https://www.techempower.com/benchmarks/#section=data-r20&hw=...
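
To make the model concrete, here is a minimal sketch using Java 21 virtual threads as one example of runtime-level lightweight threads (an illustration only, not what the TechEmpower benchmarks run):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Sketch: 100,000 concurrent "requests" on virtual threads (Java 21+).
    // Each blocking call parks the virtual thread instead of tying up an OS
    // thread, so a small pool of carrier threads services all of them.
    public class LightweightThreadsDemo {
        public static void main(String[] args) {
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 100_000; i++) {
                    executor.submit(() -> {
                        try {
                            Thread.sleep(100); // stands in for blocking IO (DB call, HTTP, ...)
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }
            } // close() waits for all submitted tasks to finish
        }
    }

Doing the same with one OS thread per request would mean 100,000 native threads, each with its own stack, which is exactly the exhaustion scenario described above.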


> Creating one thread for every request you have (Apache + PHP) will exhaust the hardware after a few thousand qps.

PHP installations more realistically use nginx and FastCGI. This is not one thread per request and it’s also a better design than hosting your entire server and every user request in the same process; that’s just asking for security issues.


France tried years ago.

Incredibly expensive and inefficient.

I love the concept but it just can’t compete with solar, nuclear, or fossil fuels...


Just because something has been "tried before" for something as critical as energy sustainability, particularly with the backdrop of climate change, doesn't mean we shouldn't try it again. If anything it puts us in a better position, with greater knowledge, to improve our chances of making it work.

As Edison put it about trying to invent the lightbulb: "I haven't failed - I've just found 10,000 ways that won't work".


Except in this case we have the light bulb already in nuclear reactors.

It's like if Edison threw out the first light bulb that worked because it felt hot to the touch.


Interestingly, this whole technology might come to market more quickly than a nuclear reactor in an industrialized country could be built, from decision to commercial operation.

At best, nuclear power is unwieldy, hard to calculate, needs a vast, highly specialized infrastructure and workforce and leaves you with radioactive waste with no solution realized as of yet. At worst, "impossible" accidents happen, people shoot at your reactor, nation states devolve and use the fuel for dirty bombs, somebody tries to blow the reactor up on purpose...


You hear about that tidal generator that blew up and contaminated 1000 Sq km of land? Ya, I didn't either.


Actually I have.


Just like we already had the “lightbulb” in the form of candles. There are significant downsides to nuclear reactors that other kinds of sustainable power might be able to address.


> Cobol remains the language of choice

Sigh, those one-liners that illustrate both the ignorance and the status of the author.

I’m an enterprise architect in banking; 6 months ago I was hired for an IT transformation.

My mission was very simple: « move the bank out of the mainframe ».

In 2 weeks or so I presented a Kafka-based runtime with JVM contracts that would let the bank operate in near real time, as opposed to « batch » processing, while covering and simplifying 90% of bank-related scenarios (SEPA, MasterCard, AML, etc...).
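
Roughly the shape of it, as a minimal sketch (the topic name, group id and plain-String payload here are illustrative, not the bank’s actual contracts):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Near-real-time handling of payment events instead of a nightly batch run.
    public class SepaPaymentProcessor {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "sepa-processor");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("sepa-credit-transfers"));
                while (true) {
                    // Events are processed within seconds of arriving instead of
                    // waiting for the overnight batch window.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        handlePayment(record.value()); // AML checks, posting, etc. would go here
                    }
                }
            }
        }

        static void handlePayment(String payment) {
            System.out.println("processed: " + payment);
        }
    }

The point is that events are consumed as they arrive rather than accumulated for a nightly batch run.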

The project was accepted by the directors, but the devs refused to go along with it because, much like the author, they have been 30 years in the bank and don’t want to learn something other than what they know: « COBOL ».

90% of our contractors’ work is spent dealing with mainframe constraints and writing interfaces on top of that piece of crap that can only process data at night or during the weekend.

The mainframe is not there because « it’s superior »; distributed systems have largely proven their capability and maturity.

Mainframes are still there because of corporate politics and a lack of leadership from top management.

When you are reminded that Citibank lost $0.5 billion because they spent $0 on their UI, you may start to understand how much the corporate world is rotten to its core and why the mainframe is still there.

It has nothing to do with its capability, period.


devs refused to go along with it because much like the author they have been 30 years in the bank and don’t want to learn something else

The way to fix that is to use the method IBM used to introduce the IBM-PC back in 1981. They set up a completely independent project group that had no connection with the main-frame boys, and so weren't 'brainwashed' into the IBM 'way of life'. The rest is history.

Incidentally, while I no longer program in COBOL, I still like it. It was always easy to do maintenance on a program that I might not have looked at for decades because of its wordiness. I normally program in C these days, but it's not as 'maintenance-friendly' as COBOL unless there are lots of comments.


The thing is, you solved a very narrow problem to which there is already a solution (and has been) on the mainframe for 30+ years (MQ). The real problem is that in that COBOL code is 50 years of business rules smeared across millions of lines of code, adjusted for all the changes in law (sometimes applied retroactively) which impact how money is handled. It isn't that mainframes don't have message queues or can't interoperate with web services (they can), even if not all customers take advantage of those features. The problem is replacing that code requires extracting all that knowledge out of the code. Then, on top of that, if there's any downtime, it can be existential risk for the bank.


Yep. It's real easy as an architect to come in and propose an overall architecture that will work, but the rubber meets the road when you attempt to 'strangle pattern' your way out only to find the deep interconnected and undocumented business logic. You're also fighting the business by trying to wrangle SMEs that have no interest in helping or have long since moved on.


> When you are reminded that Citibank lost $0.5 billion because they spent $0 on their UI, you may start to understand how much the corporate world is rotten to its core and why the mainframe is still there.

That wasn’t actually on a mainframe or in COBOL, it was an Oracle app (Oracle Forms/Reports, PL/SQL, Java, etc). And, it was a product from an Oracle subsidiary (OFSS), the software itself was not maintained in-house. Although, to complicate the story, that subsidiary was started by Citibank and then sold to Oracle in the mid-2000s.

So Citibank no longer directly controls the decision on whether the UI is updated, now that is up to Oracle. They can encourage Oracle to do that, and decide how quickly to upgrade if/when Oracle delivers it, but Oracle controls the actual UI. Or they could decide to look for a new product to replace it with.

(Disclaimer: former Oracle employee, was peripherally involved with OFSS banking products during my time at Oracle, although I never had anything to do with Citibank, and I never saw this specific banking product either.)


The Oracle outsourcing connection is interesting! I read about this incident at the time in Matt Levine's column. [1] See HN discussion at [2]

[1] https://www.bloomberg.com/opinion/articles/2021-02-17/citi-c... [2] https://news.ycombinator.com/item?id=26180785


It wasn't exactly classic outsourcing.

In the early 1990s, Citibank decided to outsource banking software development to India. Not an unusual decision, but the approach they chose was somewhat unusual: they set up an Indian subsidiary (iFlex) to develop banking software for them, but also decided the subsidiary would sell the software as a product to other banks. So Citibank owned an Indian subsidiary which developed banking software both for Citibank and also for others. And then Citibank sold that subsidiary to Oracle in the 2000s, and Oracle renamed it from i-Flex to OFSS. Actually Oracle owns the majority of it, but a minority of it is publicly listed on the stock exchange in Mumbai.

It isn't just one product, it is a whole suite of banking software applications. I know there are banks who have deployed just certain apps out of the suite and integrated those apps with their legacy core banking. Or, you can buy it all and use it for everything. Given Citibank is the original customer, I imagine they use more of it rather than less of it, but I’m just guessing.


> and writing interfaces on top of that piece of crap that can only process data at night or during the weekend.

I worked on mainframes and this seems like some deliberate policy, not a mainframe limitation.

Also your Kafka+Java architecture is unlikely to still be supportable in 2 decades. Will have the same problems with Java and Kafka in the future as you have with Cobol today.


> Also your Kafka+Java architecture is unlikely to still be supportable in 2 decades. Will have the same problems with Java and Kafka in the future as you have with Cobol today.

I doubt that. Java has shown a huge commitment to backward compatibility; you can take code from 20 years ago and run it unmodified today. Kafka is younger but it's also a project that takes compatibility seriously.


> When you are reminded that Citibank lost $0.5 billion because they spent $0 on their UI, you may start to understand how much the corporate world is rotten to its core and why the mainframe is still there.

Mainframe is still there because crypto isn't yet.


(humor)

> COBOL- still standing the test of time

.. like the plague.

https://en.wikipedia.org/wiki/Bubonic_plague#History

