
Overall I like Dagger conceptually, but I wish they'd start focusing more on API stability and documentation (tbf it's not v1.0). v0.19 broke our Dockerfile builds and I don't feel like figuring out the new syntax atm. Having to commit dev time to the upgrade treadmill to keep CI/CD working was not the dream.

re: the cloud specifically see these GitHub issues:

https://github.com/dagger/dagger/issues/6486

https://github.com/dagger/dagger/issues/8004

Basically if you want consistently fast cached builds it's a PITA and/or not possible without the cloud product, depending on how you set things up. We do run it self-hosted though, YMMV.


One thing that I liked about switching from a Docker-based solution like Dagger to Nix is that it relaxed the infrastructure requirements for getting good caching properties.

We used Dagger, and later Nix, mostly to implement various kinds of security scans on our codebases using a mix of open-source tools and clients for proprietary ones that my employer purchases. We've been using Nix for years now, and still haven't set up any of our own binary cache. But we still have mostly-cached builds thanks to the public NixOS binary cache, and we hit that relatively sparingly because we run those jobs on bare metal in self-hosted CI runners. Each scan job typically finishes in less than 15 seconds once the cache is warm, and takes up to 3 minutes when the local cache is cold (e.g., when we have to build a custom dependency).

Some time in the next quarter or two I'll finish our containerization effort for this so that all the jobs on a runner will share a /nix/store and Nix daemon socket bind-mounted from the host, so we can have relatively safe "multi-tenant" runners where all jobs run under different users in rootless Podman containers while still sharing a global cache for all Nix-provided dependencies. Then we get a bit more isolation and free cleanup for all our jobs but we can still keep our pipelines running fast.
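
Concretely, I expect each job to be launched roughly like this (image name, flake attribute, and exact mounts are placeholders; it may need tweaking for rootless Podman):

    podman run --rm \
      --userns=keep-id \
      -v /nix/store:/nix/store:ro \
      -v /nix/var/nix/daemon-socket:/nix/var/nix/daemon-socket \
      -e NIX_REMOTE=daemon \
      ci-scan-image:latest \
      nix build .#scanners

The point is that every job container talks to the host's nix-daemon, so store paths are fetched and built only once per machine.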

We only have a few thousand codebases, so a few big CI boxes should be fine, but if we ever want to autoscale down, it should be possible to convert such EC2 boxes into Kubernetes nodes, which would be a fun learning project for me. Maybe we could get wider sharing that way and stand up fewer runner VMs.

Somewhere on my backlog is experimenting with Cachix; that should get us per-derivation caching as well, which is finer-grained than Docker's layers.
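
The Cachix side should be pretty small too; something like this (cache name is made up):

    cachix use our-org-cache              # adds the substituter + public key to nix.conf
    nix build .#scanners                  # cache hits are per store path, i.e. per derivation
    cachix push our-org-cache ./result    # upload outputs so other runners get the same hits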


See also Hope:

In the previous sections, we first discussed the Continuum Memory System (CMS), which allows for more persistent storage of memories and defines memory as a spectrum of blocks with different update frequencies. Due to its larger capacity and the constraints on scaling parameters, CMS often requires a simple learning rule but higher capacity to store more persistent knowledge. On the other hand, in the previous section, we discussed the design of self-modifying Titans, which can generate their own keys and thus their own learning updates to better adapt to the context. Contrary to CMS, self-modifying Titans have a small capacity but use a complex and expressive learning rule. Accordingly, these two systems seem to be complementary, and their combination can enhance the model's expressiveness from different aspects.

To this end, we present the Hope architecture: a neural learning module that incorporates self-modifying Titans followed by the Continuum Memory System.

https://research.google/blog/introducing-nested-learning-a-n...


For most papers, the main idea can be described in 1-2 sentences, sort of "we did X using Y".

That doesn't work for HOPE - a short summary can't explain what it actually does besides "self-modifying" and "continuum memory".

So it seems to be an innovation of Transformers calibre, really big (if true). It's definitely not "transformer but with such-and-such modification".

Gemini came up with the following visual metaphor for the difference:

> Transformer is a series of frozen glass panes (the weights) and a scratchpad (the attention) where it writes notes about the current text.

> The HOPE architecture involves no scratchpad. Instead, the glass panes themselves are made of smart liquid. As the data flows through, the first pane reshapes itself instantly. The second pane reshapes itself slowly. And the mechanism deciding how to reshape them is itself a tiny, intelligent machine, not just a basic math rule.


+1 Insightful.

This comment was illuminating -- and IMHO an excellent example of why it's important to avoid rigid rules against posting any AI-generated content in HN comments. You gained insights by asking Gemini, and shared them, noting the source. Thank you!


I have sympathy for some of the GitHub complaints. otoh I just went to try to sign up for Codeberg and it's down ... 95% uptime over the last 2 weeks?

https://status.codeberg.org/status/codeberg


One can always self-host Forgejo if the service level has to be kept under control. With Github that's not even an option.

I would even consider that moving everything from one single point of failure to another is not the brightest move.


> With Github that’s not even an option.

Github does offer a self-hosted product: GitHub Enterprise Server


Forgejo is GPL 3. With the Github offering, apparently even running it on your own hardware is tied to a per-user, per-month bill, and I have no idea if the code is available and editable, just from having a look at https://azure.microsoft.com/en-us/pricing/details/githubente...


Yes, GitHub Enterprise Server is not free. And yes, you pay a license fee per user per month, billed annually, and the minimum license purchase is 10 users at something like $21/user/month. Any Microsoft discounts you qualify for will bring that down. You pay because you get support. You won't need it often, but when you do, you really need it.

It is easy to administer even for 15k users, and mostly it takes care of itself if you give it enough RAM and CPU for all the activity.

Downloading the virtual hard drive image from GitHub is easy and decrypting the code inside is borderline trivial, but I'm not going to help anyone do that. I've never had a need to do it.

As a server product it is good. I recommend it if you can afford it. It is not intended for private individuals or non-profits, though. It's for corporations who want their code on-premise, and for that it is quite good.


Commercial software support is not free. Contracting out for professional services or diverting internal developers to fix issues with open source software are also not free.


People's attention is not free. People's rights are not free. Thinking only through a money lens is not free of consequences.


There have been complaints about it on Reddit as well. I registered an account recently and to me the annoying thing is the constant "making sure you are not a bot" check. For now I see no reason to migrate, but I do admit Forgejo looks very interesting to self-host.


https://tangled.org/ is building on ATProto

1. Use git or jj

2. Pull-request-like data lives on the network

3. They have a UI, but anyone can also build one and the ecosystem is shared

I've been considering Gerrit for git-codereview, and Tangled will be interesting when private data / repos are a thing. Not trying to have multiple git hosts while I wait.


I, too, am extremely interested in development on Tangled, but I miss two features from GitHub - universal search and Releases. The web frontend of Tangled is so fast that I am still getting used to the speed, and jj-first features like stacked PRs are just awesome. Kinda reminds me of how Linux patch submission works.


It's fast because it lacks features

I'm more interested in gerrit/git-codereview for stacked commits than jj. A couple extra commands for new folks, not a completely new tool and lexicon


3 of the most exciting decentralized GitHub alternatives being developed today:

  Tangled (2024, ATP)
  Radicle (2019, IPFS) 
  Codeberg (2018, Gitea fork which supports decentralized protocols)


Which decentralized protocols does Codeberg support?


Codeberg doesn't currently support any, but Forgejo, the software it runs on, is implementing support for ActivityPub. Codeberg will likely enable it once support is stable.


> but I do admit Forgejo looks very interesting to self-host.

I've been self-hosting it for a few years now and can definitely recommend. It has been very reliable. I even have a runner running. Full tutorial at https://huijzer.xyz/posts/55/installing-forgejo-with-a-separ....
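
If anyone wants a quick taste before following a full tutorial, the server part is basically one container, something like this (tag and data path are just examples; check the current release):

    docker run -d --name forgejo \
      -p 3000:3000 -p 2222:22 \
      -v /srv/forgejo:/data \
      codeberg.org/forgejo/forgejo:11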


I moved (from selfhost gitlab) to forgejo recently, and for my needs it's a lot better, with a lot less hassle. It also seems a lot more performant (again probably because I don't need a lot of the advanced features of gitlab).


I've been contemplating this for almost two years. Gitlab has gotten very bloated and despite disabling a number of services in the config, it continues to require increasingly more compute and RAM; we don't even use the integrated Postgres database.

There are a few things that keep me on Gitlab, but the main one is the quality of the CI/CD system and the gitlab runners.

I looked at Woodpecker, but it seems so docker-centric and we are, uh, not.

The other big gulf is issues and issue management. Gitlab CE is terrible: weird limitations (no epics unless you pay), broken features, UX nightmares, but from the looks of it Forgejo is even more lacking in this area? Despite this seeming disdain, the other feature we regularly use is referencing issue numbers in commits to tie work together easily. On this one, I can see the answer as "be the change - contribute this to Forgejo" and I'm certainly willing. Still, it's currently a blocker.

But my hopes in putting this comment out there is that perhaps others have suggestions or insight I'm missing?


I mean, they're battling with DDoS all the time. I follow their account on Mastodon, and they're pretty open about it.

I believe the correct question is "Why are they getting DDoSed this much if they are not something important?"

For anyone who wants to follow: https://social.anoxinon.de/@Codeberg

Even their status page is under attack. Pardon my French, but WTF?


Crazy. Who would have an incentive to spend resources on DDoS'ing Codeberg? The only party I can think of would be Github. I know that the normalization of ruthlessness and winner-takes-all mentality made crime mandatory for large parts of the economy, but still cannot wrap my mind around it.


Not just them. For example, Qt's self-hosted cgit got DDoSed just two weeks ago. No idea why random open source projects are getting attacked.

> in the past 48 hours, code.qt.io has been under a persistent DDoS attack. The attackers utilize a highly distributed network of IP addresses, attempting to obstruct services and network bandwidth.

https://lists.qt-project.org/pipermail/development/2025-Nove...


Sounds like the good old AI-scraper DDoS - for which, by the way, there is no actual evidence that it is AI related


Probably some little script kiddie fucks who think they are elite mega haxors and use their mommy's credit card to pay for one of the readily accessible DDoS services.


DDoS attacks are crazy cheap now; it could be a random person doing it for the lulz, or just as a test or demo (though I suspect Codeberg isn't a big enough target to be impressive there).


Is it because the S in IoT stands for security? I'm asking genuinely. Where are these requests coming from?


I would put it down to 4 things:

- the internet's a lot bigger nowadays

- there are a lot of crappily secured iot devices

- the average household internet connection has gotten a lot faster, especially on upload bandwidth.

- there's a pile of amplification techniques which can multiply the bandwidth of an attack by using poorly-configured services.


Search for “residential proxy”.


This seems like a synonym for botnet.


Also a good synonym for "anonymized and deceiving army of AI crawlers circumventing controls for their own benefit".


What is cheap, and what are the risks of getting caught? I can understand that for a 15 yo it might be for the lulz, but I am having a hard time imagining that this would give street cred, and why be persistent about it? AI bots would make more sense, but those can be dealt with.


Big tech would be far more interested in slurping data than DDoS'ing them.

An issue with comments, linked to a PR with review comments, the commit stack implementing the feature, and further commits addressing comments is probably valuable data to train a coding agent.

Serving all that data is not just a matter of cloning the repo. It means hitting their (public, documented) API end points, that are likely more costly to run.

And if they rate limit the scrapers, the unscrupulous bunch will start spreading requests across the whole internet.


> Who would have an incentive to spend resources

That's not how threat analysis works. That's a conspiracy theory. You need to consider the difficulty of achieving it.

Otherwise I could start speculating which large NAS provider is trying to DDoS me, when in fact it's a script kiddie.

As for who would have the most incentives? Unscrupulous AI scrapers. Every unprotected site experiences a flood of AI scrapers/bots.


I think the goal is unclear, but the effect will be that Codeberg will be perceived as less of a real, stable alternative. Breaking in hadn't crossed my mind, but that would have the same effect, maybe even more damaging. Now, if that turns out to be the intended effect, I hope I won't have to believe it.

Story time:

I remember that back in the day I had a domain name for a pretty hot keyword with a great, organic position in Google rankings. Then one day it suddenly got a serious boost from black-hat SEO, with a bazillion links from all kinds of unrelated websites. My domain got penalized and dropped off the front page.


Actually I think that's roughly how threat analysis works though.


For threat analysis, you need to know how hard you are to break into, what the incentives are, and who your potential adversaries are.

For each potential adversary, you list the risk strategy; that's threat analysis 101.

E.g. you have a locked door, some valuables, and your opponent is a state-level actor. Risk strategy: ignore; no door you can afford will be able to stop a state-level actor.


I concur that the question, "Who would have an incentive to spend resources on DDoS'ing Codeberg?", is a bit convoluted in mixing incentive and resources. But it's still, exactly, threat analysis, just not very useful threat analysis.


Wouldn't an AI scraper working for a huge firm have more incentive to scrape your code than a competitor?


>The only party I can think of would be Github.

I think it's not malice, but stupidity. IoT has made even a script kiddie capable of running a huge botnet that can DDoS anything but Cloudflare.


It's easier for MS to buy Codeberg and close it than to spend time and money to DDoS things


How do you buy an e.V.?



This only works in countries with questionable rule of law


You go to a BYD dealership???


I said e.V., not EV. Codeberg is an e.V., i.e. a "registered association" in Germany. I am not actually sure if you could technically buy an e.V., but I am 100% certain that all of the Codeberg e.V. members would not take kindly to an attempt at a hostile takeover from Microsoft. So no, buying Codeberg is not easier than DDoSing them.


they can't buy the orgs but they can buy the codeberg or its member

which is basically the same thing


What do you mean by "orgs", and what do you mean by "the codeberg"?

Sure, they could try to bribe the active members of Codeberg e.V. into changing its mission or disbanding the association entirely, but they would need to get a 2/3 majority at a general assembly, while only the people actively involved in the e.V. and/or one of its projects can get voting rights. I find that highly unlikely to succeed.


Like how you buy a standards committee.

Just research the Office formats' ISO standardization process.

I'm not insinuating Microsoft will buy Codeberg, but I just wanted to say that they are not strangers to the process itself.


Are there standards committees with 786 voting members, of which you would have to convince at least 2/3 to betray the ideals of the association they chose to actively take part in to get the association to disband or otherwise stop it from pursuing its mission?

I don't think your comparison works out.


~800 members? That's great to hear actually. I like Codeberg and want them to succeed and be protected from outside influence.

That said, I believe my comparison checks out. Having ~800 members is a useful moat, and will deter actors from harming Codeberg.

OTOH, the mechanism can still theoretically work. Of course Microsoft won't try something that blatant, but if the e.V. loses this moat, there are mechanisms which Microsoft can and would like to use as Codeberg gets more popular.


I took the number from here: https://blog.codeberg.org/letter-from-codeberg-onwards-and-u...

I think another big "moat" is actually that Codeberg is composed of natural persons only (those with voting rights, anyway). Real people have values, and since they have to actively participate in Codeberg in some way to get voting rights, those values are probably aligned with Codeberg's mission. I don't actually know the details of the standardization process you cite, but I think this is a big difference from it.

Additionally, from skimming the bylaws of Codeberg I'd say they have multiple fail-safes built in as additional protection. For one, you can't just pay ~1600 people to sign up and crash a general assembly; every membership application has to be approved first. They also ask for "support [for] the association and its purpose in an adequate fashion" from its members, and include mechanisms to kick out people who violate this or are otherwise acting against Codeberg's interests, which such a hostile attack would surely qualify as.

Of course it's something to stay vigilant about, but I think Codeberg is well positioned with regard to protecting against a hostile takeover and shutdown situation, to the point that DDoS is the much easier attack against them (as was the initial topic).


Part of the problem is that Codeberg/Gitea's API endpoints are well documented and there are bots that scan for Gitea instances. It's similar to running SSH on port 22 or hosting popular PHP forum software: there are always automated attacks by different entities simply because they recognize the API.


That's rough ... it is a bad, bad world out there.


Try exposing a passwordless SSH server to the outside to see what happens. It'll be tried immediately, non-stop.

Now, all the servers I run have no public SSH ports anymore. This is also why I don't expose home servers to the internet. I don't want that chaos at my doorstep.


Expose it on port 22 on IPv6 and it might as well be invisible. Cleanest logs ever.


Yeah, I have been thinking about hosting a small internet facing service on my home server, but I’m just not willing to take the risk. I’d do it on a separate internet connection, but not on my main one.


You can always use a small Hetzner server (or a free Oracle Cloud one if you are in a pinch) and install Tailscale on all of your servers to create a P2P yet invisible network between your hosts. You need to protect the internet-facing one properly, and set ACLs at the Tailscale level if you're storing anything personal on that network, though.
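
Setup is only a couple of commands per host (hostname is just an example; --ssh enables Tailscale SSH so sshd never has to listen on a public interface):

    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up --ssh --hostname hetzner-gateway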


I would probably just ssh into the Hetzner box and not connect it to my tailnet.


Would Tailscale or Cloudflare do the trick? Let them connect to the server.


Yeah, no need for public SSH. Or if you do, pick a random port and use fail2ban, or better, just whitelist the one IP you are using for the duration of that session.
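
With ufw, the whitelist version is just a couple of rules (placeholder IP):

    sudo ufw default deny incoming
    sudo ufw allow from 203.0.113.7 to any port 22 proto tcp
    sudo ufw enable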

To avoid needing SSH at all, just ship your logs and metrics out and set up something to autodeploy securely; then you rarely need to be in. Or use k8s :)


Whitelisting a single IP (preferably a static one) sounds plausible.

Kubernetes for personal infrastructure is akin to getting an aircraft carrier for fishing trips.

For simple systems, snapshots and backups are good enough. If you're managing a thousand-machine fleet, then things are of course different.

I manage both, so I don't yearn to use big-stack software on my small hosts. :D


This is just FUD. There is nothing dangerous in having an SSH server open to the internet that only allows key authentication. Sure, scanners will keep pinging it, but nobody is ever going to burn an ssh 0day on your home server.
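
For reference, key-only auth is just a few lines in /etc/ssh/sshd_config (from memory, check your distro's defaults; some of these are already the default):

    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin prohibit-password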


A few years ago a vulnerable compression library almost got pushed out that major Linux distros linked their OpenSSH implementations to. That was caught by blind luck. I'm confident there's a lot more shit out there that we don't know about.


> This is just FUD.

No, it's just opsec.

> Sure, scanners will keep pinging it, but nobody is ever going to burn an ssh 0day on your home server.

I wouldn't be so sure about it, considering the things I have seen.

I'd rather be safe than sorry. You can expose your SSH if you prefer to do so. Just don't connect your server to my network.


"opsec" includes well defined things like threat modeling, risk factors, and such. "Things I have seen" and vague "better safe than sorry" is not part of that.


There are two golden rules of opsec:

    1. Never tell everything you know and have seen.
    2. 
For what I do, you can refer to my profile.


This can be fixed by just using a random SSH port.

All my services are always exposed for convenience, but never on a standard port (except HTTP).
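
e.g. something like this, with a made-up port and hostname:

    # /etc/ssh/sshd_config on the server
    Port 47122

    # ~/.ssh/config on the client, so you can still just "ssh homebox"
    Host homebox
        HostName home.example.org
        Port 47122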


It reduces the noise, yes, but doesn't stop a determined attacker.

After managing a fleet for a long time, I'd never do that. Tailscale or any other VPN is mandatory for me to be able to access "login" ports.


As a customer of GitHub Actions, anecdotally it feels like GitHub experiences issues frequently enough to make this not a problem.


GitHub uptime isn't perfect either. You will notice these outages from time to time if your employer is using it for more than just "store some git repos", e.g. using GHA for builds and deploys, packages etc.


Just a reminder, Codeberg is for open source projects only, and maybe some dotfiles and such. It's on their front page and in their TOS.


99.95% from something I use to do work is non-negotiable.


You probably wouldn't use it for work anyway; Codeberg is for OSS only.


What? It says it's been up 98.56% over the last 2 weeks.


That's probably the average across their services. But when Codeberg Translate shines with 99.58%, that's an unnecessary entry which obscures the "92.42% Codeberg.org" reality.


Average big tech alternative. Doesn’t solve your problems, doesn’t scale, terrible UX, but at least it’s run by fanatics.


Forgejo does solve my problems, doesn't scale yet (I am really looking forward to ForgeFed), has fine UX, and at least it's run by people who care.


Because they are Codeberg, I'm betting they have a philosophical aversion to using a cloud-based DDoS protection service like Cloudflare. Sadly, the problem is that no one has come up with any other type of solution that actually works.


How well can Cloudflare protect against malicious account creation, where the attackers are set up to respond to the verification email?


Substitute “perpetual motion machines” for “datacenters in space”.

This is an absurd strawman. A datacenter in space doesn't violate any fundamental physical laws. Science would not be "disrupted" if engineers made it economically feasible for certain use-cases.

It's totally reasonable to doubt that e.g. >1% of Vera Rubins are going to wind up deployed in space, but fundamentally this is a discussion about large profitable companies investing in (one possible) future of business and technology, not a small group of crackpot visionaries intending to upend physics.

Starlink sounded fairly nuts when it was first proposed, but now there's thousands of routers in space.


It does theoretically look like a useful project. At the same time I'm starting to feel like we're slipping into the Matrix. I check a GitHub issue questioning the architecture.md doc:

> I appreciate that this is a very new project, but what’s missing is an architectural overview of the data model.

Response:

You're right to call me out on this. :)

Then I check the latest commit on architecture.md, which looks like a total rewrite in response to a beads.jsonl issue logged for this.

> JSONL for git: One entity per line means git diffs are readable and merges usually succeed automatically.

Hmm, ok. So readme says:

> .beads/beads.jsonl - Issue data in JSONL format (source of truth, synced via git)

But the beads.jsonl for that commit to fix architecture.md still has the issue to fix architecture.md in the beads.jsonl? So I wonder, does that line get removed now that it's fixed ... so I check master, but now beads.jsonl is gone?

But the readme still references beads.jsonl as source of truth? But there is no beads.jsonl in the dogfooded repo, and there's like ~hundreds of commits in the past few days, so I'm not clear how I'm supposed to understand what's going on with the repo. beads.jsonl is the spoon, but there is no spoon.

I'll check back later, or have my beads-superpowered agent check back for me. Agents report that they enjoy this.

https://github.com/steveyegge/beads/issues/376#issuecomment-...

https://github.com/steveyegge/beads/commit/c3e4172be7b97effa...

https://github.com/steveyegge/beads/tree/main/.beads


lmao, agent powered development at its finest.

Reminds me of the guy who recently spammed PRs to the OCaml compiler, but this time the script is flipped and all the confusion is self-inflicted.

I wonder how long it will take us to see a vibe-coded, slop-covered OS or database or whatever (I guess the "braveness" of these slop creators will be (is?) directly proportional to the quality of the SOTA coding LLMs).

Do we have a term for this yet? I mean the person, not the product (slop)


Slorchestrator.


You cannot go along like “I’m writing a cold path high-level code, I don’t need performance, I don’t need to go deeper into lifetime handling, I just want to write a high level logic”. You will be forced into the low level nuances every time you write a single line of Rust. There is no garbage collector for Rust and will never be — you will have to semi-manually pack all your data into a tree of ownership. You have to be fluent in ownership, borrowing, traits to write just a few lines of code.

It's still quite rough around the edges, but Crystal is a fun choice for this type of thing. If you want a readable high level language and sane package manager that compiles to reasonably performant machine code, it's worth a look.


The issue with Crystal, Nim, and Zig is that they have zero chance of getting bigger.


Crystal and Nim, probably not.

Zig... is surprisingly used a lot given how rough the state of the language is. It makes me think that if it ever reaches v1.0, it has a very good chance of being at least a "Kotlin", probably an "Elixir"/"Haskell", and a decent enough shot at being a "TypeScript".


I'm trying to understand this. The languages mentioned will never grow because others didn't give them a chance, so you won't give them a chance either. Sir, I think I've diagnosed an infinite loop here. I recently started learning Nim. I'm only at the beginning of my journey, and there are things I don't like, but overall, Nim is a very nice language :-) A significant portion of the mistakes one can make in C/C++ can be avoided by writing idiomatic Nim, which in most cases is super easy to do :-) EDIT: you=>one :-)


Feel like I've been reading Year of the Linux Desktop™'ers writing this stuff for the last 20 years.

A bit of a backstory. I’ve been using GNUplusSlashLinux for more than fifteen years. Most of the time, I used GNOME, starting from GNOME2, moving to Unity maybe for two years, then GNOME Shell, then KDE Plasma 5 for another two years, and switched back to GNOME Shell again. I’m not mentioning some of my at most month-long endeavors to other DEs, like XFCE, or tiling WMs, because they never stuck with me. So I’ve been there for most releases of GNOME Shell, followed them closely, even used to run Ubuntu GNOME when GNOME Shell became a thing, until it became the default in Ubuntu once again. Though by that time, I had already moved from Ubuntu to a different distribution for a variety of reasons.

I did, however, run Unity on my older PCs, as it was far less taxing on resources than early versions of GNOME3, but then it was discontinued, and long-awaited Unity 8 with Mir never became a thing. So, when I was fed up with GNOME being a resource hog, often crashing, and moving towards Wayland, which didn’t work as good as it was advertised, I decided to try KDE somewhere around 2018...

My backstory: I've been using MacOS X for more than fifteen years. Most of the time, I used MacOS X. Actually, all of the time. The end.


You can skim through the wikis for some color, but tldr Turkey is generally playing amoral "middle power dilemma" politics rather than the Marvel universe fan fiction version:

In June 2016, Turkish President Recep Tayyip Erdoğan sent a letter, on the recommendation of Farkhad Akhmedov[123] to Russian President Vladimir Putin expressing sympathy and 'deep condolences' to the family of the victims. An investigation was also reopened into the suspected Turkish military personnel involved in the incident.[124] Three weeks later (in the meantime, there had been a coup d'état attempt against him), Erdoğan announced in an interview that the two Turkish pilots who downed Russian aircraft were arrested on suspicion that they have links to the Gülen movement, and that a court should find out "the truth"

On 12 September 2017, Turkey announced that it had signed a deal to purchase the Russian S-400 surface-to-air missile system; the deal was characterised by American press as ″the clearest sign of [Recep Erdoğan]′s pivot toward Russia and away from NATO and the West" that ″cements a recent rapprochement with Russia″.[109] Despite pressure to cancel the deal on the part of the Trump administration, in April 2018 the scheduled delivery of the S-400 batteries had been brought forward from the first quarter of 2020 to July 2019.[110]

In September 2019, Russia sent the Sukhoi Su-35S and the 5th Generation stealth fighter Su-57 to Turkey for Technofest Istanbul 2019. The jets landed at Turkey's Atatürk Airport, weeks after Recep Tayyip Erdoğan went to Moscow and discussed stealth fighter with Vladimir Putin.[111]

In November 2021, Russia offered assistance to Turkey in developing new-generation fighter jet to Turkey.[112][113] Some Turkish officials have also shown interest to buy Russian jets if the US F-16 deal fails.[114][115][116][117][118]

In 2024, Washington warned Turkey of potential consequences if it did not reduce exports of US military-linked hardware to Russia, critical for Moscow's war efforts. Assistant Commerce Secretary Matthew Axelrod met Turkish officials to halt this trade, emphasizing the need to curb the flow of American-origin components vital to Russia's military. The issue strained NATO relations, as Turkey increased trade with Russia despite US and EU sanctions since Russia's 2022 invasion of Ukraine. Axelrod urged Turkey to enforce a ban on transshipping US items to Russia, warning that Moscow was exploiting Turkey's trade policy. Despite a rise in Turkey's exports of military-linked goods to Russia and intermediaries, there was no corresponding increase in reported imports in those destinations, suggesting a "ghost trade."[119]

https://en.wikipedia.org/wiki/2015_Russian_Sukhoi_Su-24_shoo...

https://en.wikipedia.org/wiki/Russia%E2%80%93Turkey_relation...


Just saw Gemini 2.5 with a little thinking: https://imgur.com/a/nypRD7x


It is infested with the cult of immutability

Immutability is like violence: if it doesn't solve your problem, you aren't using enough of it.

