Hacker News | vjerancrnjak's comments

It changes. Previously, I wouldn’t watch h264 videos that had wrong encoding settings (bad deblock, 2pass, bad crf). Now I don’t really care.

When I cared, I cared about movies as well. It’s just the energy of caring. Now I don’t find movies interesting at all.


It must be the tokenizer. Figuring out words from an image is harder (edges, shapes, letters, words, ...), yet internal representations are more efficient.

I always found it strange that tokens can't just be characters, and instead there's a vocabulary of ~500k tokens, completely removing low-level information from language (rhythm, syllables, etc.). Side effects include the famous "two r's in strawberry" edge case, and no way to generate predefined rhyming patterns (without constrained sampling). There's an understandable reason for these big token dictionaries, but it feels like a hack.
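To make the complaint concrete, here's a toy greedy tokenizer (the vocabulary is made up; real BPE vocabularies are learned and much larger, but the effect is the same): once a word becomes token IDs, character-level facts are no longer directly visible.

```python
# Toy illustration with a hypothetical vocabulary -- not a real BPE tokenizer.
vocab = {"straw": 0, "berry": 1, "st": 2, "raw": 3}

def tokenize(word, vocab):
    """Greedy longest-match tokenization over a tiny toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to single characters
            i += 1
    return tokens

word = "strawberry"
print(tokenize(word, vocab))  # ['straw', 'berry']
print(word.count("r"))        # 3 -- trivial on characters
# A model that only sees the two token IDs has to memorize that
# 'straw' + 'berry' contains three r's; the letters themselves are gone.
```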


How will you deal with the lack of 3 AZs, or with FI-to-DE latency?


I like how Serbian is completely fine with Njujork or Majkl Džekson.

Phonemic orthography should win and destroy all spelling bees.


This seems homonymphobic.

Edit: homophonephobic, technically.


A recurring problem: it's somehow very easy to write code that deals with 1 thing from the start, and when the time comes, somehow hard to change it to deal with N things.

I wonder how different the code would look if it was just written to deal with N things from the start.

I’m also not sure how far this code can go. If I have queries that depend on responses of preceding queries, how will runAp_ give me this? It probably won’t.

I always wondered where the HTTP frameworks are that just give me a batch of requests to deal with from the start.


> I have queries that depend on responses of preceding queries, how will my runAp_ give me this? It probably won’t.

It definitely won't, which is what I was trying to get at with the discussion of monads and data dependencies. Applicatives by definition cannot have one "effectful" computation depend on the result of another. You could do a large bunch of parallel work until you need to pass a result into a function that decides what additional work to perform, at which point you need a monad. More advanced frameworks like Haxl apparently make this distinction explicit, so your computation proceeds as a sequence of batched parallel operations, combining as much work as possible.
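The shape of that idea can be sketched outside Haskell. In this hypothetical Python sketch (all names invented, no real framework), independent fetches go into one batch, while a fetch whose key depends on a previous result forces a second round:

```python
# One 'round trip': resolve many independent keys at once (applicative style).
def fetch_batch(keys, db):
    return {k: db[k] for k in keys}

# Toy data store standing in for a remote database.
db = {"user:1": {"best_friend": "user:2"}, "user:2": {"name": "Ada"}}

# Round 1: everything independent -- can be merged into a single batch.
round1 = fetch_batch(["user:1"], db)

# Round 2: this key is computed *from* round 1's result (monadic bind),
# so it cannot be merged into the first batch -- a new round is forced.
friend_key = round1["user:1"]["best_friend"]
round2 = fetch_batch([friend_key], db)
print(round2[friend_key]["name"])  # Ada
```

The computation proceeds as rounds: batch everything you can, then bind, then batch again.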


I guess we just need pipelining systems that batch.

These patterns constantly appear yet we continue writing code step by step.


But a Cloudflare Workers or AWS Lambda setup would not work with any DB anyway, would it?

* spawning 1000 workers all opening a connection to a db,

* solved by service/proxy in front of db,

* proxy knows how to reach the db anyway, so let's do a private network and not care about auth.


DBs like DynamoDB work great with these kinds of runtimes; they don't need a connection pool in front.


Cloudflare has its own DB (D1, SQLite-based), but you can also connect to PostgreSQL using their adapter (Hyperdrive). I have used both; they're okay.


I wouldn't recommend D1 for now due to its harsh storage limitations (10 GB).


Exactly, sounds like misuse of unions.

Although Python type hints are not expressive enough.
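As a hedged illustration of the union complaint (all names here are invented): a union-typed parameter pushes `isinstance` branching into the function body, whereas splitting the cases keeps each function total over its own input type and checkable at call sites:

```python
from typing import Union

# Union-heavy style: the function must re-discover at runtime what it was given.
def describe(x: Union[int, str]) -> str:
    if isinstance(x, int):
        return f"number {x}"
    return f"text {x!r}"

# Split style: each function is total over its own input type, and the
# type checker verifies call sites instead of relying on runtime branches.
def describe_int(x: int) -> str:
    return f"number {x}"

def describe_str(x: str) -> str:
    return f"text {x!r}"

print(describe(3))       # number 3
print(describe_str("a")) # text 'a'
```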


There’s no exponential improvement in Go or chess agents, or car-driving agents. Not even tiny mouse racing.

If there were, it would be such nice low-hanging fruit.

Maybe all of that happens all at once.

I’d just be honest and say most of it is completely fuzzy tinkering disguised as intellectual activity (yes, some of it is actual intellectual activity, and yes, we should continue tinkering).

There are rare individuals who have spent decades building up good intuition, and even that does not help much.


I felt similar with lenses. The problem lenses solve is horrible. You don’t even want that problem.

FP can be pragmatic as well. You’re going to glue up monad transformers, use lenses like there’s no runtime cost, and compute whatever you need in days, but at least you know it works. Maybe there’s accidentally quadratic behavior in lifting or lenses, but that’s by design. The goal is to just throw software at things as fast as possible, as correctly as possible.


> I felt similar with lenses. The problem lenses solve is horrible. You don’t even want that problem.

Lenses abstract properties in a composable manner. How is this problem horrible?

> FP can be pragmatic as well. You’re going to glue up monad transformers, use lenses like there’s no runtime cost, and compute whatever you need in days, but at least you know it works. Maybe there’s accidentally quadratic behavior in lifting or lenses, but that’s by design. The goal is to just throw software at things as fast as possible, as correctly as possible.

Any abstraction can be used inappropriately. Slavish adherence to an approach in spite of empirical evidence is a statement about those making decisions, not the approach itself.

In other words:

  A poor craftsman blames his tools.


Lenses: they solve a problem elegantly (if you can hide the boilerplate) but inefficiently. It's a self-caused problem, created by having extremely nested records. How did you get to a point where you have a structure that's this hard to work with?

Lenses are exactly glue for throwing software at things as fast as possible as correctly as possible. A poor tool.

The very need for lenses often indicates that the data model has been designed in a way that's hostile to direct, ergonomic manipulation. A glued up steampunk contraption, a side-effect of throwing software at everything as fast as possible. Invented in a language environment where they can't be efficient.

Monad transformers: lift has quadratic complexity and runtime cost. Not really composable. Similar to effect systems in other languages, control flow becomes very unclear, depending on the order of application.

Lenses and monad transformers are just a nice trick that you shouldn't ever learn.

But I agree with your last statement: many of these libraries are just poor craftsmen giving us new tools that they made for problems we never want to have.

It's similar to dependency injection: why would anyone need a topological sort over dependencies and automatic construction of those dependencies? Is it so hard to invoke functions in the right sequence? Sounds like you've made a program with too many functions and too many arguments (or, in OOP, too many classes with too much nesting and too many constructor args).

These tools are "pragmatic". Given the mess that will naturally arise due to poor craftsmanship, you'll have these nice tools to swim well in an ocean overflowing with your own poop.


> Lenses: solve a problem elegantly (if you can hide the boilerplate) but inefficiently. A self caused problem by having extremely nested records. How did you get to a point where you have a structure that's hard to play with?

I see the value of lenses from a different perspective, in that they can generalize algorithms by abstracting property position within an AST such that manipulation does not require ad hoc polymorphism. For example, if there exists an algorithm which calculates the subtotal of a collection of product line items, lenses can be used to enable its use with both a "wish list" and a "purchase order."

Another thing they cleanly solve is properly representing a property value change with copy-on-write types. This can get really ugly without lenses in some languages.
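For what it's worth, the copy-on-write point can be shown with a hand-rolled sketch (this is not the Haskell `lens` library, just the minimal get/set-pair idea): composed lenses rebuild the spine of a nested immutable structure while leaving the original untouched.

```python
from typing import Any, Callable, NamedTuple

class Lens(NamedTuple):
    get: Callable[[Any], Any]
    set: Callable[[Any, Any], Any]  # (whole, new_part) -> new_whole

def key(k):
    """Lens focusing on one key of an immutable-by-convention dict."""
    return Lens(
        get=lambda d: d[k],
        set=lambda d, v: {**d, k: v},  # copy-on-write: shallow copy + replace
    )

def compose(outer: Lens, inner: Lens) -> Lens:
    """Lens composition: focus through outer, then through inner."""
    return Lens(
        get=lambda w: inner.get(outer.get(w)),
        set=lambda w, v: outer.set(w, inner.set(outer.get(w), v)),
    )

order = {"customer": {"address": {"city": "Zagreb"}}}
city = compose(compose(key("customer"), key("address")), key("city"))

updated = city.set(order, "Split")
print(city.get(updated))                       # Split
print(order["customer"]["address"]["city"])    # Zagreb -- original untouched
```

Each `set` allocates new dicts along the focused path, which is exactly the cost the parent comment objects to.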

I respect your take on them though and agree their definitions can be cumbersome if having to be done manually.


I understand your examples, but I'd say ASTs are rare and they're already a convenience, not a performance or efficiency choice. You're using ASTs because you'll be able to write other code quickly.

For the wishlist and purchase order, just think of the boilerplate you have to write to get one computation over variable data shapes, compared to simply writing what you want twice.

Copy-on-write types are easy if they are shallow, I'd question why they're so deep that you need inefficient lens composition to modify a deep value. We've already invented relational structures to deal with this. I'm assuming you care about history so copy-on-write is important and is not purely an exercise in wasteful immutability.


> For the wishlist and purchase order, just think of the boilerplate you have to write to get 1 computation for variable data shapes, compared to doing what you want 2 times.

This example is simple enough to not have to use lenses for sure. Another example which may better exemplify appropriate lens usage is having properties within REST endpoint payloads used to enforce system-specific security concerns. Things like verifying an `AccountId` is allowed to perform the operation or that domain entities under consideration belong to the requestor.

> Copy-on-write types are easy if they are shallow, I'd question why they're so deep that you need inefficient lens composition to modify a deep value. We've already invented relational structures to deal with this. I'm assuming you care about history so copy-on-write is important and is not purely an exercise in wasteful immutability.

While being able to track historical changes can be quite valuable, using immutable types in a multi-threaded system eliminates having to synchronize mutations (thus eliminating the possibility of deadlocks) and the potential of race conditions. This greatly simplifies implementation logic (plus verification of same) while also increasing system performance.

The implication of using immutable types which must be able to reflect change over time is most easily solved with copy-on-write semantics. Lenses provide a generalization of this functionality in a composable manner. They also enable propagation of nested property mutations in these situations such that the result of a desired property change is a new root immutable instance containing same. Add to this the ability to generalize common functionality as described above and robust logic can be achieved with minimal duplication.

It is for these and other reasons I often find making solutions with immutable types and lenses very useful.


How do you reuse a `readFile`/`writeFile` program with this module trick?

Assuming `IO.readFile` and `IO.writeFile` are replaced by HTTP requests: I can define `writeFile` and `readFile` in a type class and then implement the effect for an HTTP variant, hiding the HTTP client beneath.

Is it just wiring it up in mixins, or in the cabal file?

I think the general conclusion is that there's no need for dependency injection, environment objects, or similar tricks if the module system is rich enough.

For a long time I questioned why Python needs anything but `async def` or `def` (async should be achievable through module or `yield` usage) and `import` statements to achieve maximal reuse, given the dynamic nature of the language and its modules. We could ignore all object-oriented features and decorators and replace them with modules. It would be flatter and more readable compared to the bloated feature set.
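As a sketch of the substitution being discussed, here's a Python analogue using a `Protocol` in place of a type class (the `InMemoryFiles` class and all names are invented; an HTTP-backed implementation would satisfy the same interface):

```python
from typing import Protocol

class FileOps(Protocol):
    def read_file(self, path: str) -> str: ...
    def write_file(self, path: str, data: str) -> None: ...

class InMemoryFiles:
    """Local implementation; an HTTP-backed one would satisfy the same protocol."""
    def __init__(self) -> None:
        self.store: dict[str, str] = {}
    def read_file(self, path: str) -> str:
        return self.store[path]
    def write_file(self, path: str, data: str) -> None:
        self.store[path] = data

def copy(ops: FileOps, src: str, dst: str) -> None:
    """The reusable program: written once, against the interface only."""
    ops.write_file(dst, ops.read_file(src))

fs = InMemoryFiles()
fs.write_file("a.txt", "hello")
copy(fs, "a.txt", "b.txt")
print(fs.read_file("b.txt"))  # hello
```

Swapping the implementation is just passing a different object; no framework, container, or environment object needed.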

