
I asked myself the same question two years ago when my team was deciding whether we needed all of that JavaScript framework madness. In the end we built the whole product on Dart (http://www.dartlang.org), which compiles down to JavaScript. We used only the Dart language and no frameworks, and the best thing that came out of this is that we don't feel bound by the restrictions of some framework but add things as we see fit. What we built is an email marketing app, https://www.listshine.com, something like Mailchimp but faster and with less clutter.


CRM - Streak CRM (http://www.streak.crm), a nice CRM that integrates with Gmail; used to categorize email communication. Email marketing - ListShine (http://www.listshine.com), a cheap alternative to Mailchimp, used to regularly email new and existing customers.


What we do at ListShine is run the code in Dartium in checked mode. When it works 100%, we build the JS version in staging, and after that is confirmed to work we do the production build. Debugging Dart code is faster and better in Dartium.


OK, but what if I am using JS interop code? Dartium still won't help me, because I still need to go through the layer that 'glues' my Dart code to the JS code.


That is true, but the part that is in Dart will be easier to debug.


Exactly why we picked Dart for our app.


I think strong typing generates the same code as optional typing if you are not using the Dart dev compiler.


When you say the "dev compiler", are you referring to the Dart VM? While that (embedded in the Dartium browser) is used for dev convenience on the front end, it's also (without the browser, obviously) the main target for back-end Dart.


The dev compiler translates Dart code into human-readable JS code. So you can write a library in Dart and still provide JS code someone could realistically adapt to their environment, and if Dart were to go away it would give you an easy path of escape.


That's true today, but we're integrating strong mode into the VM and dart2js right now. Soon, the whole platform will have full support for it and all of the benefits you get from a sound type system.


I think they mentioned some gains in the size of the compiled JS. I don't remember, though.


We are writing the SPA frontend for an email marketing service in Dart. No Angular, no React, just lots of plain Dart. The strong analyzer mode is used and Dartium is always in checked mode. Dart shined the most when we created a drag-and-drop HTML editor. We like using it, and if anyone wants to know why we used Dart, ask for details. Check our app at listshine.com


Yea, I found just using plain Dart to work really well. I preferred it over Angular. I had an entire functioning web app in about 80 KB, including all images, HTML, CSS, and JS. That was a while back; I'm sure it's only gotten better.


What I personally liked is that we write once and in 99.9% of cases the generated JS is compatible with all browsers. Also, type checking and the strong analyzer mode helped catch a ton of bugs. I'm sure if you bet your next project on this tech you won't be sorry. Fast code and fast development, with web-dev batteries included.


Yea, actually I forgot about how slow it was to compile to JS. They are saying it should take around 100 ms now to do a dev compile, so it's feasible to use other browsers actively during development. That is probably the biggest thing for me that they announced at the Dart Con.


Looks useful; just my 2 cents.


Alt-Tab on OS X closes the current chat, so you have to click it again. Counterproductive; deleting/not using until it's fixed.


So based on this, if someone wanted the best compression program, they would choose PAQ?


By the nature of things, anything that compresses some input data must necessarily lengthen other input data, since there are only so many inputs that can be represented in a given number of output bits. In fact, it will almost certainly lengthen many more of the possible inputs than it shortens.

I once heard someone describe compression programs as 'expansion programs with interesting failure cases', and so, of course, the best compression program to use depends on exactly which failure cases you're interested in.
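That counting (pigeonhole) argument can be checked directly. A small Python sketch: there are 2**n bit strings of length n, but only 2**n - 1 strings strictly shorter than n bits (counting the empty string), so no lossless (injective) coder can shrink every n-bit input.

```python
def shorter_strings(n):
    """Number of bit strings strictly shorter than n bits
    (including the empty string)."""
    return sum(2 ** k for k in range(n))  # geometric sum = 2**n - 1

# For every length n there is at least one n-bit string with no
# shorter output left to map it to.
for n in range(1, 16):
    assert shorter_strings(n) == 2 ** n - 1 < 2 ** n
```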


While true, this doesn't seem to be a practical issue. Any incompressible data can be encoded using only one bit of overhead, where the bit is a flag indicating whether the rest of the data is compressed. In practice, there is a header with a field indicating which compression method to use, and you pay for the size of the header. Adding support for another compression method is nearly free as far as space is concerned; one byte can switch between 256 of them. (Time is another matter.)
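A minimal sketch of that framing scheme in Python, using zlib as the single compression method. The `MAGIC_RAW`/`MAGIC_DEFLATE` method ids are made up for illustration, not any real container format; worst-case overhead is the one header byte.

```python
import zlib

MAGIC_RAW, MAGIC_DEFLATE = 0x00, 0x01  # hypothetical method ids

def pack(data: bytes) -> bytes:
    """Prefix a one-byte method id; fall back to storing the data
    raw whenever compression would expand it."""
    compressed = zlib.compress(data, 9)
    if len(compressed) < len(data):
        return bytes([MAGIC_DEFLATE]) + compressed
    return bytes([MAGIC_RAW]) + data

def unpack(blob: bytes) -> bytes:
    """Dispatch on the method byte and recover the original data."""
    method, payload = blob[0], blob[1:]
    if method == MAGIC_DEFLATE:
        return zlib.decompress(payload)
    return payload
```

Extending this to 256 methods is just more branches on the method byte, which is the "nearly free" part; each new method costs decode-side support, not space.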


Depending on how many times you're going to decompress the same data and the bandwidth you'll use to transmit the compressed data, different things will be "best". http://fastcompression.blogspot.com/p/compression-benchmark....


It depends on what sort of data you're compressing.


PAQ (particularly ZPAQ) is pretty good at most things because it selects the model that works best. Variants of PAQ tend to be the Hutter Prize winners (paqh) - PPM derivatives are generally excellent at text, source code, HTML, and things with that kind of word-like symbol distribution (which is why PPMd.H in particular is used by RAR and 7-Zip for text compression, although RAR selects it automatically and unfortunately 7-Zip doesn't seem to, probably because of the extra RAM overhead for decompression that PPMd.H introduces).

However, PAQ tends to be really slow, largely because it tries more things. It's highly tunable, but people who aren't compression geeks tend not to want to tune their compression. Presets are available.

That's pretty much why it hasn't caught on - speed. There may be some hybrid approaches that deliver a better compromise between context mixing's effectiveness and dictionary coding's speed and memory usage: I guess you could argue LZMA, bringing a Markov-chain algorithm into the mix, is one such, in a way. Sort of.

I'm also a little antsy about the ZPAQ format containing bytecode descriptions of the decompression algorithm, which are, broadly speaking, executable. That seems like the kind of thing that may invite security problems if approached without due caution.
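To make "context mixing" concrete, here is a toy Python sketch of the basic shape of PAQ's predictor: several context models each predict the next bit, their predictions are mixed in the log-odds domain with weights trained online on the prediction error, and the mixed probability determines the (ideal) code length. This is an illustration of the idea only, not PAQ's actual models, mixer, or coder.

```python
import math

def squash(x):
    """Logistic: map a log-odds value back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def stretch(p):
    """Log-odds of a probability, clamped away from 0 and 1."""
    p = min(max(p, 1e-6), 1.0 - 1e-6)
    return math.log(p / (1.0 - p))

class CountModel:
    """Predict the next bit from 0/1 counts seen under a context."""
    def __init__(self):
        self.counts = {}
    def predict(self, ctx):
        n0, n1 = self.counts.get(ctx, (0, 0))
        return (n1 + 1) / (n0 + n1 + 2)      # Laplace-smoothed P(bit=1)
    def update(self, ctx, bit):
        n0, n1 = self.counts.get(ctx, (0, 0))
        self.counts[ctx] = (n0 + (bit == 0), n1 + (bit == 1))

def estimate_bits(bits, lr=0.02):
    """Estimated compressed size in bits for a 0/1 sequence, mixing an
    order-0 and an order-1 context model."""
    order0, order1 = CountModel(), CountModel()
    w = [0.0, 0.0]                            # mixer weights
    total, prev = 0.0, 0
    for bit in bits:
        s = [stretch(order0.predict(())),
             stretch(order1.predict((prev,)))]
        p = squash(w[0] * s[0] + w[1] * s[1])  # mixed P(bit=1)
        pc = min(max(p, 1e-6), 1.0 - 1e-6)
        total += -math.log2(pc if bit else 1.0 - pc)  # ideal code length
        err = bit - p                          # online weight update
        for i in range(2):
            w[i] += lr * err * s[i]
        order0.update((), bit)
        order1.update((prev,), bit)
        prev = bit
    return total
```

On a predictable sequence (e.g. alternating bits) the mixer quickly learns to trust the order-1 model and the estimate drops well below one bit per symbol, while random bits stay near one bit each; PAQ's slowness comes from doing this with many large models per bit.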


Thanks, I'll stick with independently funded OpenSSL.

