Hacker News

IMO the "better" attack here is to use something like Return Oriented Programming (ROP) to build the nefarious string. I'm not going to demonstrate with a real payload; for the example, let's assume the malicious string is "foobar". You create a list of strings that contain the information somewhere:

    const dictionary = ["barcode", "moon", "fart"];
    // each triple: [word index, start offset, length]
    const payload = [ [2, 0, 1], [1, 1, 2], [0, 0, 3] ];
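Reading each payload triple as [word index, start offset, length] (my interpretation of the snippet above, which does reconstruct "foobar"), the decoder is only a few lines and looks fairly innocuous on its own:

```javascript
// Reassemble a hidden string from innocent-looking dictionary words.
// Each triple is assumed to be [word index, start offset, length].
const dictionary = ["barcode", "moon", "fart"];
const payload = [ [2, 0, 1], [1, 1, 2], [0, 0, 3] ];

function decode(dict, triples) {
  return triples
    .map(([word, start, len]) => dict[word].slice(start, start + len))
    .join("");
}

console.log(decode(dictionary, payload)); // → "foobar"
```

Here "fart"[0:1] gives "f", "moon"[1:3] gives "oo", and "barcode"[0:3] gives "bar".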


Very interesting idea. You could even take it a step further and include multiple layers of string mixing. Though I imagine after a certain point the obfuscation-to-suspicion ratio shifts firmly in the direction of suspicion. I wonder what the sweet spot is there.
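A hypothetical two-layer version of the scheme (all names here are invented for illustration): the dictionary words are themselves glued together from fragments first, and only then are the index triples applied:

```javascript
// Layer 1: innocuous fragments that get glued into dictionary words.
const fragments = ["bar", "code", "mo", "on", "fa", "rt"];
const wordPlan = [ [0, 1], [2, 3], [4, 5] ]; // → ["barcode", "moon", "fart"]

// Layer 2: the same [word index, start offset, length] trick as before.
const payload = [ [2, 0, 1], [1, 1, 2], [0, 0, 3] ];

function buildDictionary(frags, plan) {
  return plan.map((idxs) => idxs.map((i) => frags[i]).join(""));
}

function decode(dict, triples) {
  return triples
    .map(([w, start, len]) => dict[w].slice(start, start + len))
    .join("");
}

console.log(decode(buildDictionary(fragments, wordPlan), payload)); // → "foobar"
```

Each extra layer is cheap to write, but the stack of index tables is exactly the kind of pattern that starts to look suspicious.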


Yeah, my thinking here is to find some problem that involves a list of words or some other basic string-building task. For example, you are assembling the "ingredients" of a "recipe". I think if you gave it the specific context of "hey, this seems to be malicious, why?" it might figure that out, but if you just point it at the code and ask "what is this?" it will get tricked into thinking it's a basic recipe function.
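A sketch of that framing (every name here is invented for illustration): the decoder reads like a harmless cooking helper, and only the data makes it malicious:

```javascript
// Reads like a cooking helper; actually the same index-based string builder.
const pantry = ["barcode", "moon", "fart"]; // "ingredient" names
const recipe = [ [2, 0, 1], [1, 1, 2], [0, 0, 3] ]; // "measurements"

// "Combine a pinch of each ingredient according to the recipe card."
function assembleDish(ingredients, steps) {
  return steps
    .map(([item, from, amount]) => ingredients[item].slice(from, from + amount))
    .join("");
}

console.log(assembleDish(pantry, recipe)); // → "foobar"
```

Nothing in the function body hints at its purpose; the cover story lives entirely in the variable names and comments.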


Based on a number pulled completely out of my behind, I'd say something like 99.9999% of successful hacks I read about use one level of abstraction or less. Heavy emphasis on the less.

So I think one layer of abstraction will get you pretty far with most targets.


If anything, the pattern of the obfuscated code is a red flag for both human and LLM readers (although of course the LLM will read much faster). You don't have to figure out what it does to know it's suspicious (although LLMs are better at that than I would have expected, and humans have a variety of techniques available to them).



