https://xelly.games/

Users post small games to social feeds.

Scroll like a social network, jump into and play any game by tapping on it.

Games are served into fully locked-down, sandboxed iframes for security.



https://xelly.games/

Twitter but for games instead of tweets.


https://xelly.games/

Social media network where users post microgames!


It’s a least-common-denominator effect.

I.e., most people don’t care.

Local-first is optimal for creative and productivity apps. (Conversely, non-local-first apps are terrible for these.)

But most people are neither creative nor optimally productive (nor care to be).


> most people don’t care.

it's not that they "don't care", but that they don't know this is an issue that needs to be cared about. Like privacy: they don't think they need it until they do, and by then it's too late.


Wouldn’t it depend on use case?

If the app confirms to me that my crypto transaction has been reliably queued, I probably don’t want to hear later that it was unqueued because a SQLite-backed node in the cluster died at a particularly inconvenient time.


If you had a power failure between when the transaction was queued and the SQLite transaction was committed, no amount of fsync will save you.

If that is the threat you want to defend against, this is not the right setting. Maybe it would reduce the window a little bit, but power failures are basically a nonexistent threat anyway; does a solution that mildly reduces, but doesn't eliminate, the risk really matter when the risk is negligible?


> but power failures are basically a nonexistent threat anyway

Not in the contexts where sqlite3 is often used. Remember, this is an embedded database, not a fat MySQL server sitting in a comfy datacenter with redundant power backups, RAID 6, and AC regulated to the millidegree. It's more likely to be in embedded systems with unreliable or no power backup. Like curl, you can find it in unexpected places.


I think in that context, durability is even less expected.


A better example is probably

1. I generate a keypair and commit it.

2. I send the public key to someone.

I *really* want to be sure that step 1 is persisted, because if they then send me, for example, $1M worth of crypto, it will really suck if I don't have the key anymore. There are definitely cases where it is critical to know that data has been persisted.

This is also assuming that what you are syncing to is more than one local disc; ideally you are running the fsync on multiple geographically distant discs. But there are also cryptography-related applications where you must never reuse state, otherwise very bad things happen. This can apply even for one local disc (like a laptop). In this case, suppose you did something like:

1. Encrypt some data.

2. Commit that this nonce, key, OTP, whatever has been used.

3. Send that data somewhere.

Then you want to be sure that either that commit was durable or the disc was permanently destroyed (or at least somehow wouldn't accidentally be used to encrypt more data).
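
To make the keypair scenario concrete, here is a minimal sketch of the commit-before-send ordering. It is only a sketch, assuming Java 15+ (for Ed25519), the org.xerial:sqlite-jdbc driver on the classpath, and a hypothetical sendPublicKey() transport:

    // Sketch: persist the keypair durably *before* sharing the public key.
    // Assumes org.xerial:sqlite-jdbc; sendPublicKey() is a hypothetical transport.
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class DurableKeys {
        public static void main(String[] args) throws Exception {
            KeyPair pair = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();

            try (Connection db = DriverManager.getConnection("jdbc:sqlite:keys.db")) {
                try (Statement s = db.createStatement()) {
                    s.execute("PRAGMA synchronous = FULL"); // fsync before commit returns
                    s.execute("CREATE TABLE IF NOT EXISTS keys (pub BLOB, priv BLOB)");
                }
                db.setAutoCommit(false);
                try (PreparedStatement ins =
                         db.prepareStatement("INSERT INTO keys (pub, priv) VALUES (?, ?)")) {
                    ins.setBytes(1, pair.getPublic().getEncoded());
                    ins.setBytes(2, pair.getPrivate().getEncoded());
                    ins.executeUpdate();
                }
                db.commit(); // step 1: the private key is on disc
            }
            // Step 2: only now is it safe to publish the public key.
            sendPublicKey(pair.getPublic().getEncoded());
        }

        static void sendPublicKey(byte[] pub) { /* hypothetical transport */ }
    }

With synchronous = FULL, SQLite syncs to disc before the commit returns, so a power cut after the commit still leaves the private key recoverable, and a power cut before it just means the public key was never sent.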


Of course it will because same programmers don’t ack their customers until their (distributed, replicated) db says ack.


sane*


I believe in that comment they're referring to the "crypto transaction", not the SQLite transaction.


if you are doing crypto you really ought to have a different way of checking that your tx has gone through that is the actual source of truth, like, for example, the blockchain.


I knew I shouldn’t have said crypto, but it is why I said queued. I knew a pedant was going to nitpick; I was probably subconsciously inviting it. I think my point still stands.


I had the same reaction.

I’m eager to find someone who has managed to abstract expertise.

I’m 25 years into repeatedly building large software systems, and I was thinking: look here, maybe someone’s managed to codify what I do.

Nope.

To your point, everything is contextual. If you try to apply a suite of rules, the systems (which extend far outside the bounds of a software subsystem deployment) will mock you, burying your efforts and leaving no intentional outcome.

Everything is systems thinking, and every seat & context is different. You have to navigate mostly in real time, based on long- and hard-earned muscle memory.


The majority of people work in that world. Even the majority of startups, where you’d expect to find the novel pursuits, are building glorified spreadsheets.


Startups are glorified spreadsheets in the same sense that you and I are glorified collections of atoms. Technically correct (spreadsheets are Turing complete, after all) but totally useless as a model of how they work.


I’m just making the point that most software dev work is not novel.

You’re either making a productivity app, where CRUD and UX are pretty well-known patterns.

Or a scalable web system - also very well tried territory.

Or analytics and data processing - again well trod.

If you’re not a good pattern matcher, you might think every UI framework, or your next API abstraction, is the next general theory of relativity.

But otherwise the major novelty in most software project pursuits is going to be the context, people, and industry you’re building it into, not the tech.


Right, fair enough, I agree with that.


This is great. I've been working on a social site where users post and scroll through microgames (<500kb). My kids are my biggest users so far. My daughter loves this pill-sorting game, for example:

https://xelly.games/game/b7eba4db-dc4b-4cfe-8420-3e42e494e52...


I’m working on a social site where users post (and “fork”) “micro games” (<500kb) instead of tweets.

https://xelly.games


I love this


Thank you!


I agree 100%.

I've been around for a while and have used many different build systems for JVM-based builds (Java, hybrid Java/Clojure, Scala), and Maven is by far the simplest and most solid.

The basic reason is its commitment to being declarative.

I understand why programmers want imperative (we're programmers) but it's just the wrong choice for a build system.

I've worked on many OSS projects. I've never pulled a Maven-based project that didn't immediately build and immediately load into my IDE. However, for imperative-based build systems (Gradle, Ant, now Mill), it's almost inevitable that you won't be able to get the build to work right away or pull it into your IDE in a sensible way (as IDEs cannot read imperative statements).

I've created many, many builds with Maven, with many different demands (polyglot codebases, weird deployment artifacts, weird testing runtime needs, etc.), and Maven has never let me down. Yes, in some cases I've had to write my own plugin, but it was good that I had to do that; it forced me to ensure I really needed to -- the Maven plugin ecosystem is already great and covers 90+% of build use cases.

I've met a lot of Maven naysayers, and the disdain almost always comes from some weird aversion to XML (such a trivial reason to choose a worse build system) and/or from the programmer never taking the time to understand the rather simple Maven runtime semantics and architecture.
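
To show what I mean by declarative, here is a hypothetical minimal POM; the coordinates and versions are placeholders, not from any real project. The whole build is plain data that an IDE can read without executing anything:

    <!-- Hypothetical minimal pom.xml: the build is described, not scripted. -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>   <!-- placeholder coordinates -->
      <artifactId>demo</artifactId>
      <version>1.0.0</version>

      <properties>
        <maven.compiler.release>17</maven.compiler.release>
      </properties>

      <dependencies>
        <dependency>
          <groupId>org.junit.jupiter</groupId>
          <artifactId>junit-jupiter</artifactId>
          <version>5.10.2</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </project>

Everything here declares *what* the build is; the *how* lives in versioned plugins, which is exactly why an IDE can import it instantly.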


> imperative-based build systems (Gradle, Ant, now Mill)

Build code in Mill is pretty declarative. You're using the word to mean "not 'pure, serialized data'".

> IDEs cannot read imperative statements

They can, however, run the code to dump the structure.

It's easy for code to embed pure data; on the flip side, it's hard to encode behaviour in serialized data. More often than custom Maven plugins, I see people just drop down to using shell.

