Hacker News: m0nastic's comments

Without knowing what his next project is, maybe I'm misreading the statement, but I took that as more of a condemnation of the web than of JavaScript specifically.


I've worked implementing both, and Bromium is basically as good a solution to this problem as you're going to get, in the sense that it requires the least modification of user behavior (the user's Windows machine mostly behaves like a normal one).

Even Bromium was pretty upfront about the use case for their product though (high-value targets like executives who travel to China). They were very honest about it being overkill for an entire enterprise.

I think securing endpoints is basically a lost cause, though (I'm happy to consider that a minority opinion). My company spent many years trying to get TPMs to be the solution to this problem, and I'm pretty sure that ship has now sailed, with the only two sectors of the industry that are continuing to grow being completely unsuited to TPMs (virtualization and mobile).

I think we'll eventually realize that, much like networks, devices have to be assumed to be untrustworthy, and we have to route accordingly.


> the only 2 sectors of the industry that are continuing to grow being completely unsuited to TPMs (virtualization and mobile)

A counterpoint is that mobile platforms often have some form of secure enclave, but sadly not standardized. Even AMD's low cost x86 CPUs are adding an ARM coprocessor, which could in theory be used for functionality similar to TPM, DRM, or AMT. Some of those are more useful than others. On the Intel side, SGX will add more enclave options, and complexity, but hopefully will be open and well documented.


I take issue with "often", as the vast majority of mobile phones don't have anything (even if there exist specific models which could have them).

There was a brief window in time when you had to go out of your way to buy an Intel laptop "without" a TPM (even Macs had them for a time, even if Apple never made use of them). The Trusted Computing Group failed to capitalize on that timeframe by providing both a "reason" and decent solutions to that problem.

There are a lot of reasons why that was; if I've been drinking, I'll happily go into many of them.

On the mobile side, I agree, it's a hodgepodge. Apple has their secure enclave (which doesn't quite act like a TPM, even though it theoretically could), and there exist vendors who could theoretically include a TEE in their phones (right now they're almost entirely limited to special "government-specific" use cases).

And I'm ignoring Samsung's solution (which is basically snake oil).

Intel's SGX would be great, provided the industry suddenly switches to x86 for mobile (which I don't think is going to happen).

The mobile industry is way too fragmented from a hardware perspective for any type of trusted computing platform to achieve even a modicum of install base. That might change in the future, but I wouldn't bet on it.


Intel is slowly inching their way onto smaller devices (compute stick, 7" fanless tablets with TPM & TXT). While Google's Project Ara may look like a lab experiment, the Panasonic FZ-M1 is shipping with multiple peripheral "modules", so there's at least one proof point for modular devices with a radio.

If modular mobile architectures succeed, there will be a better chance of combining one's preferred hardware TCB with one's preferred sensors. Sometimes it only takes one counterexample to move entire markets; look at the time interval between the first Galaxy Note and the Apple iPhone 6.


Secure enclaves are very useful tools for OS design, but that's not the kind of security we're talking about here. Enterprises can't easily exploit processor protected VMs and address spaces to, say, prevent PII from leaking. By and large, companies aren't losing data to VMWare jailbreaks; they're losing it to much, much more prosaic attacks.


If every endpoint could support at least two isolated enclaves, it would be feasible for enterprises to isolate some high-value info assets to an internal VPN that is isolated to one of the enclaves, with the other exposed to risky public channels and attacks.


I feel like you're conflating "purity" with algorithms.

Just to clarify, in Haskell there is no issue with algorithms being contained inside functions.

Here's a naive implementation of quicksort:

  qsort [] = []
  qsort (x:xs) = qsort smaller ++ [x] ++ qsort larger
                 where
                 smaller = [a | a <- xs, a <= x]
                 larger = [b | b <- xs, b > x]
Nothing about this algorithm requires side-effects: it doesn't mutate any state, and it doesn't make use of counters. You can implement it anywhere inside of a Haskell program.

People may charitably point out that this is a pretty bad naive implementation, because it builds new lists rather than mutating values in place, which is inefficient. People uncharitably will take issue with even calling it quicksort, as some believe that in-place mutation is an integral part of it being quicksort. I'm not smart enough to have an opinion about whether or not it's a "true" quicksort.

But the point is that you can absolutely have algorithms in Haskell that are not restricted to only run in a Monad.

Monads exist in Haskell as a way to reason about side-effects. Whether or not an algorithm (or any function) needs to be executed inside a monad is a function of its use of side-effects, it's not fundamental.
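To make that distinction concrete, here is a minimal sketch (the function names are mine, not from the thread): a pure function needs no monad at all, while anything that performs I/O is tagged with the IO type.

```haskell
-- Pure: no side-effects, so no monad is required.
-- Same input always yields the same output.
double :: Int -> Int
double x = x * 2

-- Side-effecting: reading input can't be done purely,
-- so the result is wrapped in the IO monad.
promptDouble :: IO Int
promptDouble = do
  line <- getLine
  return (double (read line))
```

The type system is what enforces the split: `double` can be called anywhere, while `promptDouble` can only be sequenced inside other IO actions.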


Thanks. As I said in my comment, I know my ignorance and I'm looking for enlightenment (my confession didn't stop the downvotes, though).

I do not "see" any clear algorithm in the code (no step-by-step procedure). I see recursion only.

Can you explain in words what this code is doing?

Can you implement this without using recursion?


Ok, let's step through what this function is doing with a simple example (sorting a list that looks like this: [3, 5, 1, 4, 2]):

  qsort [] = []
This means that if you hand the qsort function an empty list, you get an empty list ([] is the empty list in Haskell).

    qsort (x:xs) = qsort smaller ++ [x] ++ qsort larger
Ok, so this means that we are making a list out of three parts: (qsort smaller, x, and qsort larger). The '++' operator is just concatenation.

                  where
                  smaller = [a | a <- xs, a <= x]
                  larger = [b | b <- xs, b > x]
Here we're defining what "smaller" and "larger" are. They're list comprehensions: "smaller" is the list of all the numbers in xs that are less than or equal to x, and "larger" is the list of all the numbers in xs that are greater than x. Then qsort is applied recursively to each of those sublists.
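For the example list, you can evaluate the two comprehensions directly (a small sketch; here x = 3 and xs = [5, 1, 4, 2], the tail of the example list):

```haskell
-- The "smaller" comprehension: keep everything <= 3, in original order.
smaller :: [Int]
smaller = [a | a <- [5, 1, 4, 2], a <= 3]  -- [1, 2]

-- The "larger" comprehension: keep everything > 3, in original order.
larger :: [Int]
larger = [b | b <- [5, 1, 4, 2], b > 3]    -- [5, 4]
```

Note that both preserve the relative order of the elements they keep, which is why the worked example below produces [1, 2] and [5, 4] at the first step.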

Using that example list above ([3, 5, 1, 4, 2]), here's if we did it by hand:

  qsort [3, 5, 1, 4, 2]
   = {applying qsort}
  qsort [1, 2] ++ [3] ++ [5, 4]
   = {applying qsort}
  (qsort [] ++ [1] ++ qsort [2]) ++ [3] ++ (qsort [4] ++ [5] ++ qsort [])
   = {applying qsort}
  ([] ++ [1] ++ [2]) ++ [3] ++ ([4] ++ [5] ++ [])
   = {applying ++}
  [1,2] ++ [3] ++ [4,5]
   = {applying ++}
  [1,2,3,4,5]
That "step-by-step procedure" is an algorithm.

Is it the absence of loops that makes you not see how it's an algorithm? It is definitely true that some algorithms are very obvious to implement using loops (quicksort is actually one such algorithm), and can in fact be a pain in the neck to implement in some functional languages.
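As a hedged illustration of how a loop maps onto the functional style (my own example, not from the thread): an imperative running-total loop corresponds to a left fold, which threads the accumulator through the list the way a loop variable would.

```haskell
-- Imperative version: total = 0; for x in xs: total += x
-- foldl plays the role of the loop, (+) the loop body,
-- and 0 the initial value of the loop variable.
sumLoop :: [Int] -> Int
sumLoop = foldl (+) 0
```

The recursion is still there, but it's hidden inside `foldl`, which is often the closest functional analogue to "a loop over a list".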


Thank you very much m0nastic, it's a great explanation.

Now I can see the mathematical elegance of the haskell code.

An explicit algorithm of the same (pseudo)code (using a mutable list) could be:

   function qsort(list)
     if list is [], return []
     let x = extract_first_element(list) // car
     let rest = rest_of_list(list)       // cdr
     let smaller = [a | a <- rest, a <= x]
     let larger = [b | b <- rest, b > x]
     return qsort(smaller) ++ [x] ++ qsort(larger)
And by writing this pseudo-code, I can see that in the Haskell code "smaller" and "larger" are clearly parallelizable, while in the imperative "explicit algorithm" pseudo-code the compiler cannot do such an optimization as easily...

And then, I'm enlightened. Thank you.


(You made a small mistake qsort [1, 2] ++ [3] ++ [5, 4] should be qsort [1, 2] ++ [3] ++ qsort [5, 4])

Also, quicksort is a pretty bad example of a pure algorithm, since it loses a lot of its performance benefits when not done in place.


Ah, you're correct. (It's too late for me to fix my comment).

I went over the issues with this version of quicksort in the original comment, but I still think it was a good example, because most people are familiar with quicksort, and I was trying to address the original poster's claim that algorithms in Haskell could only be used in monads.

Also, quicksort is one of the few I can do from memory.


I hesitate to recommend my process to other people, because I don't think I'm a very good programmer.

But for the past year or so, I find that I program best by actually writing out my program in a notebook (in my case a quad-ruled lab notebook). I don't even start typing until I have it laid out pretty much in its entirety on paper.

This sounds ridiculous (and I can imagine it's not practical for all types of programming), but I've found that it's been tremendously helpful in getting me to understand what all the code that I'm writing does.

Most of the code I've written this past year has been in Haskell, so that helps somewhat by not having a lot of syntax to write down, but I'm sure I'd be doing the same thing even if I was writing in Java.


I find it very hard to convince my students to pull out a pencil and a notebook before they start coding.

"Draw some pictures. Make some notes. Write a function that you suspect will be tricky. Make a flowchart. List the methods that will have to be in that class. Write out how the user will interact with it. Try to list all the pain points for the user. You can write it any way that helps you think. Make up a notation if you want."

Yes, the assignment is due and they need to get code on the screen. But one of these days I'm going to give them a programming assignment and tell them that I only want to see their notes.


Paper rules.

I had to add a new feature to a decades-old user-facing codebase, which had to interact with all of the existing system. The implementation of the core of the new feature took only a few hundred lines of code, since it was basically just a bunch of graph operations. Interfacing with the rest of the system was hell.

I began by just investigating the requirements and designing on paper. I quickly realized from my paper designs that a few graph operations would be sufficient. Understanding how it fit with the rest of the system was only achievable by documenting for myself, on paper, how the bits and pieces I had to touch worked.

Took half a year to get the sucker ready.

Worked just like it was supposed to and my colleague praised how easy the codebase was to extend later on.

Paper and careful design. It just rules.


As a beginner I recall asking other programmers "is it cool to use pen and paper before writing any actual code?" and they giggled before nodding in agreement.

I had this false expectation in my head that when programmers tackle a new problem (be it writing a small program or solving a challenge) they're able to think about it for a few minutes and then straight up write code, and that if I can't, it's because I suck and this job isn't for me.

Then I realized that what I expected was unrealistic and the "no pen and paper" stage only occurs to people who already met the same or similar challenge before and can recall even the slightest hint of what they did back then.


I think most people would agree that writing things on paper first is a total necessity if you are writing mathematical code (I mean, are you going to do the equations in your head and type them out?), and I think the benefits to paper increase in proportion with the mathiness.

On the non-mathy side, I find it useful to take the concept of "rubber duck" debugging (which is totally applicable to original design and creation, and not just debugging) but write a "duck" document rather than talk out loud. Aside from not looking crazy, I find writing easier for thought organization.

Sometimes preparing slides can also be a nice way to really force your thoughts into simplest and clearest form. And hey, when you're done, you can present it to your co-workers.


Paper is my favorite programming tool. The beauty of paper is that it's easy to find out when you're wrong, when you're pursuing the wrong path, without going deeply in and getting distracted by the small-scale details of the code.

This gets right back to the design thing.


To put it another way, it's far less time-consuming to edit a Word document than it is to update an entire production system.


No, I need to get away from the computer completely. Word docs (or any other computer typing) are still bound by the computer's structure and the computer's distractions.

With paper, I can draw graphs of relationships, write tables, add notes to things I drew earlier, etc. It's far more expressive, and far faster, than any computer tool I've tried.


This is the way I work too. I don't handwrite pseudocode, but I draw data structures and relationships in varying degrees of granularity at least until I'm sure I understand the problem. Only then do I begin writing tests and coding.


Paper is awesome. I find I rarely visit my notes afterwards. I just need a medium to spit out my thoughts, or I end up going in circles. As soon as I flush to paper, then my mind is unblocked to go solve the next problem. Most of the time, I remember most of what I've written down, so the paper is write-only.

My best work I do while half-napping with a notebook by my side.


No matter how you do it, taking notes on paper is very good.

One advantage, I think, is that we can type faster than we can write with pen and paper. Thus paper is a more deliberate medium than a keyboard is.


I'm not the OP, but I am also currently taking the FP101x MOOC with Erik Meijer. So far, I'm enjoying it tremendously. I've signed up for several MOOCs over the past couple of years, but have never been able to stick with any. This is the furthest I've ever been able to stay with one, which is probably indicative of something.

The lectures are basically laid out 1:1 with Hutton's Programming in Haskell book, so if you're familiar with that book, you're familiar with the way the course is structured. I find the actual experience of doing all the homework questions very helpful, even if so far nothing has been particularly difficult.


RootBSD (https://www.rootbsd.net/services/virtual-servers-vps/) has OpenBSD VPSs (I'm guessing it'll be a little while before they have 5.6 available).


You can install it yourself actually, even if they don't offer it right away.


I think you're grossly underestimating the number of people for whom an electric car is a complete non-starter.

It's great that electric cars exist now (and maybe someday in the future I'd even be interested in buying one, once there exists infrastructure that makes it feasible), but I think your timeline for the obsolescence of internal combustion engines is laughable.


Per the article, the Volvo engine here will be rolled out in "5 to 10 years", so yes, I think that in 10 years the demise of the internal combustion engine will be well under way.


It doesn't predict my account correctly:

   PROBABILITY FEMALE: 0.997 
   PROBABILITY MALE: 0.569 
I wonder if the fact that I mostly just post pictures with no text accompanying them skews things.


By 1998 (when I was there), I don't remember ever seeing anything like that (we used normal Unix 'talk' to communicate with people logged into the Digital Unix servers).

Go Engineers!


There was a burst of hacking because of the final transition to UNIX. I was privileged to experience the much more diverse before time: a DECSYSTEM-20 was the main campus computer (it used a custom Z-80/S-100 bus terminal multiplexer), the OS course involved writing an OS in PDP-11/23 assembly language, you could write your documents on the Wang word-processing computer, of course there were VAXen, and there was even an IBM mainframe. There were some UNIX machines (3B2s...), but everything changed when the DEC-20 was replaced with an Encore Multimax and DECstation 2100s (I remember "xtank" was a popular multiplayer game on them).

I could see the vestiges of the previous burst of hacking in the DEC-20's student written software library.

The popularity of the messaging programs should have been a big hint to us..


> a Z-80/S100 bus

What used to be the heart of a microcomputer.

A Serious Business Micro, that is, not a glorified game system like a Commodore-64.

> xtank

There's documentation of an effort to port it to modern systems:

http://documentation.wikia.com/wiki/Xtank

Also something on Freecode:

http://freecode.com/projects/xtank


It sounds awesome but I was there in 1990 and worked in the computer lab and never saw it.


The DEC-20 was retired in 1988, I think. Also, this was when the computer lab (WACCC) was in the library, before it moved to the CS building.


Also, the hackers were big users of the wpi.* USENET groups (which were a replacement for the TOPS-20 MM groups). Actually, one of the first student-written UNIX programs was 'bboard', because they didn't have news installed on the Encore at first.


Strictly speaking, being on the GSA schedule and being able to offer products/services for sale to the federal government requires that you not sell that product/service anywhere else for less than that GSA price.

This is actually one of the reasons that doing business with the government is less lucrative than commercial business (at least for products and services which are comparable; many are things sold only to the government, so there is no hesitation about pricing them through the roof).

As you can imagine, one of the ways that duplicitous federal contractors get around the idea of having to sell to the government for less than their normal prices is to structure their products/services as different (and therefore not apples to apples comparable).

I still would bet that the price that they charge Goldman Sachs (just as an example, I don't actually know which investment banks are using their product) is higher than what they charge for any individual government client.

Also, I think it's probably worth noting that the maximum addressable customer base among federal agencies is way smaller than the equivalent number of financial customers. There's really only a small number of agencies that have this capability (I'd guess more than a handful, but probably fewer than a dozen).

