Hacker News | TwistedWeasel's comments

plumbers, electricians, civil engineers, architects, structural engineers, etc. All these occupations have licensing requirements. Why is software different?


Chances of people dying from my web app are super slim. Chances of people dying because I badly wired their house and it caught fire are a lot higher.

Chances of me dying while developing my web app are super slim. Chances of someone frying themselves while wiring something up when they don't know how are a lot higher.


Chances of being miserable in an administrative job^W^Wordinary life, because some software product you have to use is making your job worse rather than better, are close to 100%.

If you only count deaths, yeah, bad programming has negligible impact, maybe. If you extend it to general suffering, it's quite a drag on everyone, actually. And incidentally, good programming can make a world of difference, too.

So wanting to select for good programming, even just by setting a good minimal standard, is a reasonable goal.

The problem is that we're not even sure what makes good programmers and how to spot them, as evidenced by the continuous stream of "I think..." and "Well actually" stories & comments here on HN.


Is bad programming a net negative? I'm not convinced (and it's not just because I'm a bad programmer, I swear!). I think if you only had good programming, you'd have very little programming, and it would be concentrated in the areas that the powers that be deem most important: military, finance, police, factories.

Having bad programming gets you a lot of programming. I'd rather have a million people who can each build a house a day that will stand reasonably reliably for ten years than have a thousand people who can each build a house a day that will stand for a hundred years.


> Having bad programming gets you a lot of programming

This is true. I'll add that machine learning is arguably the computer doing a lot of bad programming.


I will leave this right here for your education - https://www.bugsnag.com/blog/bug-day-ariane-5-disaster
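
For readers who don't click through: the Ariane 5 failure traced back to an unguarded conversion of a 64-bit floating-point value (the horizontal bias) into a 16-bit signed integer, in code reused from Ariane 4, where the value could never get that large. The flight code was Ada and raised an unhandled exception rather than wrapping, but here is a rough Python sketch of that general class of bug (made-up value, purely illustrative):

    import ctypes

    horizontal_bias = 40000.0                          # larger than the old 16-bit assumption allows
    as_int16 = ctypes.c_int16(int(horizontal_bias)).value
    print(as_int16)                                    # -25536: the value is silently mangled here;
                                                       # the real Ada code aborted with an exception instead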


GP doesn't say it's never happened, just that the typical programmer isn't going to kill someone with a buggy password complexity validator. By and large, the standard programmer does not hold life and death in their hands when navigating callback hell.
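
To make that concrete, a toy sketch of the kind of bug most of us actually ship (a made-up validator, not anything from the thread):

    import re

    def is_complex_enough(password: str) -> bool:
        # Intended rule: at least 8 characters, with at least one letter and one digit.
        # Off-by-one bug: "> 8" quietly rejects perfectly fine 8-character passwords.
        return (
            len(password) > 8
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"\d", password) is not None
        )

Annoying for users, bad for sign-up conversion, fatal for nobody.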


Tell that to citizens who can't register for unemployment or file their taxes because the callback hell doesn't work as it should.


Again, the typical programmer doesn't kill someone when they write a bug. Judging from the backlogs of each company I've worked at, not a single PaaS, SaaS, BaaS, CaaS, DaaS, FaaS, GaaS, HaaS, JaaS, KaaS, LaaS, MaaS, NaaS, QaaS, RaaS, TaaS, VaaS, WaaS, XaaS, YaaS, ZaaS, or other would have a living customer base if one bug == one death.

There are edge cases and there are certainly plenty of times when software bugs can kill people. However, to say that the typical programmer holds life and death in their hands with every keystroke is an extreme exaggeration, and I think you know that.


To GP's point, nobody died in that disaster. A much better example would have been the Therac-25: https://en.wikipedia.org/wiki/Therac-25


These two examples are interesting. They're both cases where what was being created was a system where software was an important component, as opposed to the software written by the vast majority of us where the hardware components of the system are always the same (monitor, keyboard, etc.) This is the same distinction in Diamond v Diehr for when software might be included in a patent. I always thought the US Supreme Court made a good decision there. Unfortunately they were later overruled by lower courts. (For legal experts out there about to correct me and say that lower courts can't overrule higher courts, I wish you were right.)


sorry, it doesn't matter


web apps are a small fraction of the software development world. Software Engineers are responsible for code that runs in hospitals, aircraft, power switching stations, and many many other safety critical systems. In many cases code that was never written for safety critical work is deployed in those environments. What OS and software runs the elevator controls in a hospital or military base? We never know the real impact of our work.


That's really not true. At NASA for example there are standards that need to be followed when designing a system, implementing the code for it, reviewing and testing it, and releasing it. [1]

Yes there will always be bugs but no practice or method is invulnerable to this.

Software in general, in these high risk environments, has been extraordinarily successful in terms of reliability and safety.

[1] https://sma.nasa.gov/sma-disciplines/software-assurance/2019...


At NASA, sure. You can't say that with any certainty for all the other systems in the world where software has a huge impact on daily life and human wellbeing. We can't know for sure because there is no regulation or independent monitoring.


Medical tech has similar standards, as do flight control and many other mission-critical code bases: static analysis requirements, restrictions to certain trusted compilers, libs, etc.

I think you may need more time in the field and observing the reality here. There are unbelievably high standards and practices in many places. Maybe CRUD codebases for a consumer website have critical failures, but that doesn't really matter. People will stop using the site if it's too large a problem.

Software is different from many technical and engineering fields. Codebases change over time as new requirements come in to extend functionality. Things can be patched. Where standard engineering practices are required, they are implemented. Yes, mistakes happen too, but bridges fall down on occasion as well.


The whole point of my last comment was that the impact of bad software cannot be fully understood if we don't have ways to monitor and measure it. You are correct that many industries have high standards and many other industries have no need for any standards as market forces will decide, but there's likely a huge grey area in between that we don't know much about.


Unless, for example, you're writing software for civil engineers ...


> plumbers, electricians, civil engineers, architects, structural engineers, etc. All these occupations have licensing requirements.

These are almost exclusively local regulations, not US national requirements, and certainly not international.

So even if there were licensing, whose jurisdiction applies? What if you have distributed software development teams?


This is a solved problem per our tax laws. Just because you write code that runs on a server in another state doesn't mean you pay taxes in that state.


Different states have dramatically different taxes. Some states have no income taxes.

So in one sense, it's a "solved problem", with the solution being that licensing requirements would be dramatically different based on locality, but in the sense that people want — a uniform standard for hiring software developers — it's not solved at all.


> but in the sense that people want — a uniform standard for hiring software developers — it's not solved at all.

Exact same thing can be said about the hiring process in general. Taxes and employment legal contracts vary from state to state. I am sure employers would love a single contract, no matter where they hire an individual, but that is not currently the case.


It's complicated so we should just not care?


I think "Why is software different?" is the wrong question. Why is software the same? Most professions don't need professional licensing, and it's not clear that the economy would be served well by adding licensing requirements.


If you look at it economically, maybe. It's a complex problem. The economy doesn't benefit from contractor licensing either, but the consistency, reliability, and quality of our infrastructure do.


The reliability of licensed contractors is... questionable. ;-) In some cases it seems pretty silly. For example, beauticians need to be licensed, even though they make very low average salaries, so it's not particularly helpful to them, and it doesn't stop some of them from giving bad haircuts.

The mention of contractors brings up an important point though. Many people support software developer licensing because they believe (mistakenly IMO) that it would make hiring easier, but not all software development is done by employees. In a licensed profession, you cannot legally practice the trade, not even as an entrepreneur, unless you have a license. Are we to apply this same standard to software development? Nobody can write software without a license from the state? Is that even possible? What about the people writing consumer software alone at home? Can nobody even publish a web site with HTML and JavaScript without a license? A web site is essentially all you need to create a billion dollar business, so either licensing prevents that from happening, or licensing won't really be a uniform standard for the software industry.

Mark Zuckerberg was a college dropout. Thus, he wouldn't have a license. No Facebook. Maybe you're ok with that, if you hate Facebook, but nobody really thinks the problem with Facebook is that Zuck was an incompetent programmer. In any case, software development licensing would put up a major barrier to entrepreneurship in the tech industry.


All these roles have certain code they have to work to. A plumber can't just go into a home and install something outside of code.

Software is generally rather ephemeral and without a code to write or build to. We have "best practices".

Yes, some software is built under engineering assumptions - airplane software, etc. NASA has a standard they write code to and software engineers are expected to work within these confines. Part of the code requires reviews of written software, etc. [1]

[1] https://sma.nasa.gov/sma-disciplines/software-assurance/2019...


>A plumber can't just go into a home and install something outside of code.

That is....not true at all. A plumber/carpenter/electrician works to code under threat of losing their license. Additionally your building could be exempt from code for numerous reasons (grandfathered/historical, outside of city limits and no county building codes/etc.) Developers and Software Engineers don't have this threat.


Right, if they continue to do things that fall short of code (when required, etc.) they'd lose their license. They have to work within the confines of the code, where applicable.

Software engineers will certainly lose their job if they don't ship code to standards set by the company they work for. Someone writing code for an airplane is going to have a different type of standard than someone writing code for a video game, however.

Why would it make sense for a government agency to set a code for software? It literally makes no sense as different problems have different requirements.


Left up to private enterprise, companies will set their standards to the lowest they can get away with, to maximize profits and reduce the time to market.


So the makers of Candy Krush should ban heap allocation and dynamic memory allocation and perform rigid static analysis on all the code they ship?


In an ideal world, yes, at least to a certain extent. How many times has the security of entire phones been compromised because of an app?

Obviously software written for use in medical devices and banking systems should be held to a higher standard, but the same can be said for other licensed professions. A plumber installing a fire suppression system in a 50 story high-rise would similarly have their work held to a higher standard than someone setting up a rain barrel for their garden.

Personally I find the quality and vetting process of software designed to be installed on the same device that most people use to manage their online banking and carry on their most intimate conversations (often using said software) to be so low as to be considered criminally negligent by the standards of any other respectable industry.


> plumbers, electricians, civil engineers, architects, structural engineers, etc. All these occupations have licensing requirements. Why is software different?

There are plenty of journeyman carpenters who do not have a license; this is not a problem as long as they work for a company that is licensed.


You're making an argument in favor of software engineering licensing, just not that every individual involved needs to be licensed.


Because their fields are mature. Practitioner fundamentals haven't changed in decades, often not even in centuries.


How do you define mature? Is it about stability? There are newer and more modern methods of construction and engineering, and the field is always evolving. Maturity in those fields is about consistency and agreement on best practices, enforced by standards and regulations. Software is not mature because it's not regulated. If you want software engineering to mature and stabilize, it needs more regulation.


Yes, construction etc. is evolving, but it's doing that at such a slow pace compared to software engineering.


Are you sure? Construction materials are evolving and changing every few years (engineered lumber, modern environmentally friendly methods, etc) whilst we still use base operating systems designed in the 60's for most of our services.


Take a look at a house built [100, 50, 25] years ago. Compare it to a house built today.

Then do the same exercise with some piece of software.


Yep, both are significantly different in many ways; houses built 50 years ago have a lot of differences from modern houses. Software from 50 years ago is also significantly different, but the fundamentals of both are the same: foundations, walls, roof trusses, siding; filesystems, operating systems, processes, threads, data structures, etc. We may use Go instead of C, and we may use engineered lumber instead of Douglas Fir for door headers.


Go vs C: That's 50 years.

Wooden house building fundamentals: That's like 200-500 years.


The last 50 years of change include modern nail guns, engineered lumber, hurricane anchors, plywood roof sheathing, vapor barriers, and countless other differences in how homes are built and the tools we use to do it. There is nothing special about software; other industries also evolve and grow their methods and tools. We just change software with reckless abandon and little regard for the reasons we started down any given path in the first place.


And scope-limited in comparison.


exactly - it is time for the computer science and software engineering professions to have trade orgs and unions so those who are qualified can practice the profession and get paid and respected like other professions that require years of college education.


I'm with you: we should approach problems with optimism, not fear. However, teaching that approach to kids is a long-term strategy, and helping them adapt to a new routine is a slow process; it's less about the difficulty and more about the emotional cost you have to pay to succeed.

I have three kids, all elementary school age, and I am prepping to start the school year all remote. I'm not afraid of the challenge, but I'm honest with myself that it will take a toll on my kids and my marriage that I cannot avoid. No matter how hard I work, it will take time, and getting into the routine cannot be done overnight.

It's okay to be daunted by that prospect.


My results show 1.3s for HTTP/1.1 and 3.0s for HTTP/2 using Chrome on OS X. So, this demo wasn't very impressive for me.


Same for me. I tried numerous times and couldn't get HTTP/2 to be faster than HTTP/1.1. I had one time where it was close, but the vast majority of the time it's between 2x and 4x slower than HTTP/1.1.


That means your internet connection is fast enough that the latency problem HTTP/2 fixes barely shows up for you.

On mobile the difference is much bigger, like 100s vs 5s.
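
If you want to reproduce the comparison outside the demo page, here is a rough sketch using Python's httpx (installed with its optional h2 extra; the tile URLs are placeholders, not the actual demo's):

    import asyncio
    import time
    import httpx

    # Hypothetical tile URLs; demo pages like this load hundreds of small images concurrently.
    TILES = [f"https://example.com/tiles/{i}.png" for i in range(100)]

    async def timed_fetch_all(http2: bool) -> float:
        # max_connections=6 roughly mimics a browser's per-host HTTP/1.1 connection cap,
        # which is the bottleneck HTTP/2's multiplexing is meant to remove.
        limits = httpx.Limits(max_connections=6)
        async with httpx.AsyncClient(http2=http2, limits=limits) as client:
            start = time.perf_counter()
            await asyncio.gather(*(client.get(url) for url in TILES))
            return time.perf_counter() - start

    async def main() -> None:
        print("HTTP/1.1:", await timed_fetch_all(http2=False))
        print("HTTP/2:  ", await timed_fetch_all(http2=True))

    asyncio.run(main())

On a fast, low-latency connection the two runs come out close, which matches the results above; the gap should widen as round-trip time goes up.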


Same bug for Utah...

In 2014 the Utah software industry gave jobs to 1 foreign software managers, Infinity times more than the year before. Most of them made between $NaN and $NaN per year. The best city for software managers was South Jordan with an average salary of $165,000.


Once you scale your worker pool up beyond a couple of machines you need some sort of config management with Celery. We use SaltStack to manage a large pool of celery workers and it does a pretty good job.


Indeed. I use Ansible myself.


I used Redis for Celery in production with great success for a year, but then we started running some long-running jobs that needed the ACKS_LATE setting, and the Redis delivery timeout kept hurting us by resending the task to another worker. It's configurable, but in the end we just switched to RabbitMQ. I found it quite painless to set up and migrate to.
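
For anyone hitting the same thing, the relevant knobs look roughly like this (a sketch of Celery 4+ configuration; the broker URLs and the timeout value are placeholders):

    from celery import Celery

    app = Celery("tasks", broker="amqp://guest@localhost//")  # was: "redis://localhost:6379/0"

    # Acknowledge tasks only after they finish, so a crashed worker's task gets redelivered.
    app.conf.task_acks_late = True

    # With the Redis broker, an unacknowledged task is handed to another worker once the
    # visibility timeout expires -- the redelivery behaviour described above. You can raise it,
    # but it has to outlast your longest job:
    # app.conf.broker_transport_options = {"visibility_timeout": 6 * 60 * 60}  # seconds

RabbitMQ doesn't redeliver on a timer in the same way, which is presumably why the switch made the problem go away.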


Convincing people to make the long trip to Mars in the future will be much easier if they can be sure there will be coffee served on the trip.


It's admirable that MS is responding to criticism of their device and working for a solution.

However, all the issues he experienced seem like they would be quite obvious to anyone testing the device with any kind of drawing application (which, for a device with a stylus, seems like a common enough use case to be testing). Perhaps Microsoft needs to spend more time and effort on their QA process for the next round of the Surface instead of playing catch-up after release.


Seemingly pedantic point, but this is a UX design problem not a QA problem. QA is largely something that you want to automate, minimize, and reduce the cost of. QA is making sure things work as designed. UX design is something that you want to maximize. That's figuring out what you want to make.

The tools of the UX design trade are very different. I think the best UX is done with mockups that go into customer feedback sessions. The middle ground is lots of static drawings of UX, which leads to a lot of bikeshedding and not a lot of real data. Obviously the worst is no UX design at all.


I somewhat disagree; it depends on how QA fits into your organization and how you scope its role. UX needs Quality Assurance too, and sometimes things get past the UX team and into a product - at that point new issues come to light during testing and should be fed back up to the responsible teams.

In general I view QA as the last line of defense before the customer; if your QA doesn't speak up about ANY issues with the product - technical, UX, or otherwise - then who will?

Of course, my view of the role of QA in product design may be different from others.


The best QA people I've worked with have all had this view.


>QA is making sure things work as designed.

That is why I don't like describing the Test discipline as QA. It's not QA. When I worked in Test, I viewed my role as making sure things work well for the user, which includes 'testing' the specs to make sure there are no scenarios with glaring pain points. I think there are some parallels with the Ombudsman role in acting as the internal customer advocate.


> all the issues he experienced seem like they would be quite obvious

It's the classic QA paradox. The typical hacker worldview doesn't prioritize such feel issues enough. Even in today's UX-focused world, stylus-specific issues aren't widely understood. Combine that with the dynamics of the Asch conformity experiment, and it makes perfect sense.

https://www.youtube.com/watch?v=FnT2FcuZaYI


The typical hacker shouldn't be testing this device, then. I'm sure Microsoft has someone with the skills on their teams.


Yes, but did the management of the team realize the above, and did that manager make sure the team would work to produce good results in this area?


> The typical hacker shouldn't be testing this device, then.

Like, say, Gabe? :)


>It's admirable that MS is responding to criticism of their device and working for a solution.

It is admirable, but their work towards a solution will be fruitless. N-Trig's pressure-sensitive stylus technology is vastly inferior to Wacom's. Sure, most people can't tell the difference, but most artists can. Sure, artists can still produce professional-quality work with an N-Trig stylus, but the experience of using an N-Trig stylus is substandard.

Microsoft improved nearly every aspect of the Surface Pro 3, but their switch to N-Trig was extraordinarily stupid. I'd rather have a slightly thicker, slightly more expensive device than one with a less than perfect stylus. I hope they continue to improve things by switching back to Wacom in the future.


On the other hand, the N-trig stylus works accurately at the edge of the screen. You can't say the same about Wacom's, and the issue is made worse by the fact that most desktop software puts toolbars full of tiny mouse targets around the edge of the screen.

With their Bamboo/Intuos/Cintiq products, Wacom can avoid the accuracy falloff problems by including an enormous margin around the edge of the active area. On a portable device like the Surface Pro 2, they weren't able to do that and it showed.

There are certainly tradeoffs in going to N-trig, but I don't think it's fair to portray it as inferior to Wacom's tech in every area but cost.


The SP2 Wacom setup was not perfect by any stretch of the imagination, in my experience -- accuracy was terrible in the corners / near the edges, and parallax was a serious issue: if you did not write/draw with the pen perpendicular to the surface of the screen, there was a significant offset from the tip of the pen to where the line was actually drawn. These were bad enough to turn me (and at least one professional artist that I know) off from buying one.

The move to N-Trig hypothetically fixes both of these. Early reviews/videos say that corner accuracy is greatly improved, and the lack of a separate digitizer layer allows a thinner optical stack, reducing parallax (and allowing the device overall to be thinner).

Driver support has historically been an issue, but msft seems to be improving things significantly.

There are fewer levels of sensitivity and hovering doesn't work quite as well, but I am overall reasonably optimistic about the switch.


Website is having some stylesheet trouble for me, but Surface Pro Artist says they have a decent handle on the driver compatibility. The update isn't generally released yet, but it's a significant fix.

http://surfaceproartist.com/blog/2014/6/9/n-trig-closing-win...


The possibility also remains that Wacom did not want the kind of business that the Surface brings to them. The Surface is the first product that does a reasonably good job of cannibalizing Cintiq sales. I would expect their margins on the Cintiq to be higher than selling a digitizer component to Microsoft. I think Wacom is in a really tough place here - they have to balance limiting access to their crown jewels against making sure that N-Trig doesn't make too many inroads as a legitimate alternative.


Absolutely nothing to do with that. Wacom would love to be in the SP3. But weight, thickness, cooling, display quality (and writing feel due to extra layers between the glass and display panel), and battery life all conspired against them. The N-Trig solution trades off some drawing precision (at a degree very few will notice) and the requirement of a battery in the pen for improvements to all of those things. Seems like a no-brainer.

When I was at MS folks were working super closely with Atmel (touch panel vendor for most early Win8/RT tablets) and really pushed the limits of their technology. There was a tight feedback loop there and I personally had found issues which eventually were solved via iterations of back-and-forth with Atmel and with software tweaks to work around hardware limitations.

It may not happen overnight, but I suspect they're doing the same thing with N-Trig and pushing them to improve the experience in a way which other PC/tablet vendors never have or would. So don't assume that just because other OEMs haven't cared enough to get the most out of N-Trig's tech that Surface doesn't have a shot at doing better.


[deleted]


Less parallax when drawing too.


>It is admirable, but their work towards a solution will be fruitless.

Any Microsoft employees listening: Go ahead and pack it up. Someone on the internet has told you all you need to hear. It's fruitless. Literally nothing you do will work, as this post has clearly pointed out. Look to this non sequitur full of opinion presented as fact for all the info you need: Until you switch to the hardware that OP knows is superior, your work will be for naught. Sorry.


It is a fact that both Wacom and N-Trig digitizers have different sets of advantages and disadvantages. It is a fact that the Surface Pro 3 can't completely eliminate its chosen technology's disadvantages. I never claimed that the entire Surface product line was doomed to failure, after all, the world doesn't revolve around digital artists.

As a couple of people pointed out, Wacom has a major disadvantage as well. It really sucks that it loses its accuracy at the edges of the screen, but in spite of that I still find the experience far superior to using an N-Trig stylus. I have come to this conclusion after using many different devices with different kinds of digitizers over the last decade. I also frequent a few of the major online digital art communities, and they all seem to agree with me. Perhaps the N-Trig fans are just quiet, but I'm inclined to think the lack of representation is due to the fact that artists enjoy using Wacom products.

I'm not sure why you feel the need to be so childishly hostile. I wasn't aware that expressing an opinion in an anonymous online community where people gather to have casual conversations was frowned upon.


Yeah, I'm baffled as to how the button placement made it past any kind of initial hands-on testing. It's hard to imagine a person using the pen on the screen for ten or fifteen minutes not encountering that problem.


From the article, it sounds like the Microsoft folks were kicking themselves for that oversight.

To their credit, I've made similar mistakes in my career.


Actually the beginning of the first paragraph (repeated below) really made me think 'wtf, apart from the number of people, this is exactly how some meetings with users work at our tiny startup'. Then I realized every engineer/designer/... probably makes such mistakes, maybe because of losing sight of the bigger picture, and MS is no different.

> I ended up in a conference room with about half a dozen people from the Surface team. More rotated in and out as I worked. I drew and talked for two and half hours while they watched and took notes. Within the first thirty seconds they realised how frustrating the home button placement was.


Yes, me too. However, I'd hope that for a product as big and important to MS as this, there would be more checks and balances in place to make sure that such things don't get overlooked.

It's understandable for one engineer to overlook it, maybe even a whole team, but an entire division of design, engineering, QA, marketing, etc.? Something is rotten in their process.


The developers were probably using non-final hardware and/or sharing engineering samples, if they had samples at all, rather than developing drivers to the datasheet. By the time the marketing team got their hands on them, the advertising may already be booked and pre-orders received from retailers (if they ever actually get their hands on products, rather than just arranging final mock-ups or shipment of production trial run results for photo shoots, etc.).

QA should have had a short window to study this type of issue but is it something that they would delay shipment over?

Basically I would expect the product to ship on schedule unless there was a really critical problem, and that there wouldn't be slack in the schedule for weeks of refinement. Component orders may be placed 6 months ahead to secure supply, so it is hard to flex the schedule without causing inventory problems, not to mention messing customers about.

Based on my experience in a CE company that wasn't Microsoft.


People make mistakes. Every piece of software ever written has bugs: I don't think that means everyone's process is rotten. It just means they are human.


Ok, but large corporations are supposed to be able to avoid human centric mistakes with good operations workflows that provide checks and balances in their pipeline.

I would assume MS is not shipping products directly from the engineering lab to the factory, so for something glaring to get all the way to the customer, there must be something wrong in their process that failed to correct for human error.


> Ok, but large corporations are supposed to be able to avoid human centric mistakes

They are not supposed to be worse?


> To their credit, I've made similar mistakes in my career.

What did you mean? You don't mean that if X makes a mistake that you made first, that mistake is a credit to X, do you?


Not at all. Just that mistakes happen to everyone, even the biggest corporations.

I could have phrased that a little better. Perhaps "I've made similar types of mistakes that were obvious in retrospect, so I have a hard time faulting them" might be a better way to say it.


Oh, OK. That makes sense.


They probably didn't have an artist use the device prior to release. They were probably testing the app in OneNote where you could just move the page to center it if you needed more room.


A lot of QA departments these days put their effort behind automated testing, which is very valuable but will never replace actual human usage of a device.

Automation should catch one class of problems but real world usage is necessary to catch a whole different type of issue.


I've worked in a testing lab for a product somewhat like the Surface. They had an automated testing platform, but it was pretty much useless; they didn't put enough effort into it to make it worthwhile. As a result, basic sanity testing had to be done by hand. It took forever, leaving no real time for 'actual human usage'.

I found the experience invaluable in helping me understand why I found virtually every hardware product, and the vast majority of software products, extremely painful to use. And how much I appreciate Steve Jobs for showing us how it's done.


I have that problem with every laptop with the touchpad square in front of the keyboard. I am always brushing against it with my palm while typing, which produces unintended input.

I'm not the only one, I've noticed other people having the same problem.

I have no idea how this design became ubiquitous. It does not work for me at all.


There's a comment from another sketcher in here talking about how it might be a peculiar grip not used by all artists.


> It's admirable that MS is responding to criticism of their device and working for a solution.

It is, but it isn't some trade secret; they could open source this and have 1,000s of people converging toward the solution. Why are they so hung up on being The Ones Who Deliver The Software And The Hardware?


money? "optimal experience"

I'm all for supporting open source. But seriously, it took me about 20-30 hours to get my FreeNAS setup working (the way I wanted) (getting the hardware together, configuring... lots and lots of configuring). And I'm definitely not going to say that that project is "end user friendly".

I really, really love open source projects, but I'm willing to say that they don't always meet the end user in an appreciable way without a serious and committed company behind the product making it end-user applicable (Android, et al.). Otherwise the open source project will fit the needs of the people that work on it (engineers) and that's about it.


Pretty sure chillingeffect's idea is to have Microsoft behind it, so I'm not sure how that applies.


Sorry, I didn't really understand the parents as that. I guess a more on topic reason might be:

They can barely manage a team of their own engineers to get this product out. Throwing more people at it isn't necessarily going to help, not without significantly more overhead and more managers. I think The Mythical Man-Month definitely still applies to open source projects as well.

That, and Microsoft is just not good at managing open source. And I can't imagine anyone would want to work on the Windows kernel without being paid to do so >_>


> Why are they so hung up and being The Ones Who Deliver The Software And The Hardware?

For the same reason that Ubuntu and Firefox are?


Firefox isn't delivering the hardware. And the software is open source.


I want to applaud MS, but this whole thing stinks of a PR move. Wacom has always had a customizable pressure curve in their software. This should be a bedrock standard, but instead this catch-up is spun as MS' dedication to artists. To think they are just getting around to doing this (not even a user-adjustable slider scale, but just a few presets) speaks volumes about how much they actually care about artists -- it wasn't a goal they had in mind until they sent out their demo units.

Personally, I think the SP3 is probably the best PC for digital artists, but I'm disappointed in how little they worked on making the stylus user friendly.
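
For what it's worth, the 'pressure curve' in question is just a remapping of raw stylus pressure before it reaches the brush engine. A minimal sketch of the idea (a gamma-style curve; the function and parameter names are made up, not Wacom's or Microsoft's API):

    def apply_pressure_curve(raw_pressure: float, gamma: float = 1.0) -> float:
        """Map raw stylus pressure in [0, 1] to brush pressure in [0, 1].

        gamma > 1 demands a firmer touch before strokes register heavily;
        gamma < 1 makes light strokes come through more strongly.
        """
        clamped = min(max(raw_pressure, 0.0), 1.0)
        return clamped ** gamma

A user-adjustable slider would expose something like gamma directly; presets just pick a few fixed values of it.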


Would you have preferred they did not address the concerns of Krahulik?

SP2 is likely the best portable computer for digital illustrators. You can still have an SP2, but if you're like me, you're bummed out that SP3 means that the accessory ecosystem is stuck as-is for the SP2.


Free (with purchased subscription)


"... but this language, once released, will be fixed for a decade at least. Something with that lifespan should be great from day one."

I'm not sure why the author believes that once the language ships it cannot change. Surely all languages evolve and change over time; it would be foolish to think Swift as it ships in September will not change for a decade after that.


A language that is constantly changing in fundamental ways is not usable for serious software. Languages are largely constrained by the choices that were made when they were released. See Python 3 for an example of what happens when a language tries to make breaking changes — and Python's were relatively minor! No language could thrive while going through that sort of strife constantly.


Objective-C has added several features in recent years; things like bindings, dot notation, etc. are relatively new, but pretty substantial changes in terms of code clarity.

As for Python 3, the Python community never intended for people to move over to Python 3 immediately, nor was it intended for people to move their Python projects from Python 2 to Python 3 unless they had some reason to (e.g. Django, public libraries, etc).


Objective-C has added things, but it has largely not changed existing things. As I said in another comment, languages generally do not change — they just accrete.

If you feel like a language is missing something, that might be fixable — but if you feel like a language either added the wrong thing or did something the wrong way (which is the OP's concern), that is a much more difficult problem, because after release you can't take away what's already there without making people angry.


Languages generally don't change -- they get added to and very rarely removed from, but they don't often change. For example, it's usually very hard to add new keywords. So C++, for example, overloads every possible keyword and symbol to provide new features, to the point of ridiculousness. Java is on the same road. Python changed a few things, and the split between 2.x and 3.x is still ongoing.


This was about the pace of language development once upon a time. C++ changed glacially for its first 20 years, for example. Perl, Ruby, and Python also took pretty conservative approaches to language evolution (but kudos to Python for finding a way to encode versioning information into the code itself). I think Objective-C didn't really change all that much until the mid-2000s, probably in prep for iOS?

So the expectation seems like it's probably based on real history, but things seem different now. Languages are evolving faster than before, even some of the ones that previously moved very slowly.


By and large, languages don't really change — they just accrete. It is very rare for something that used to work one way to later work another way. For example, languages generally do not go from being statement-based to being expression-based, or go from something being mutable to immutable, or eliminate operators. (MzScheme did the second one — it went from mutable to immutable defaults — and it was considered so significant that they stopped calling their language Scheme and renamed it Racket to avoid confusion!)


I agree this has historically been true, but again, I think this is changing. Ruby 1.9 and Python 3 both did more than accrete, they actively broke existing code in quite significant ways.

C++ has so far avoided completely breaking changes, but with all the accretion it's doing now it's probably only a matter of time before some significant breaking changes happen lest it become even more ridiculously complex than it is now.

Go has had breaking changes as well, I believe, but they have a smart upgrade tool to help with it. This is probably something that will catch on for other rapidly evolving languages.

I think we'll see a lot more of this kind of thing in the future.


Ruby and Python each did one release where they were willing to do significant changes. That's it — they're not doing it again for a good long while now. It's an isolated incident, not a trend in those languages' development practices that we can project into the future.

Go did breaking changes pre-1.0, but they are now committed to providing a stable platform that only accretes features (http://golang.org/doc/go1compat).


I don't expect either of them to do it for a while either. The trends I'm talking about are in PL development in general. More breaking evolution is taking place post-initial-development than ever has before in languages both new and old.

Note that the Go compat wiki you link to acknowledges a future Go 2 that may break compatibility. That's actually a pretty strongly pro-evolution statement compared to past languages.


The statement you were questioning is that Swift will be "fixed for a decade at least" after release. If you agree that it will be at least four years before Python makes any more changes like Python 3, then you are agreeing with the OP.


I'm noting an acceleration in recent years and believing in the possibility of further acceleration going forward. Part of that is newer languages being more willing to undergo breaking changes sooner in their evolution than older ones were. Thus, I agree that Python might stay at a big change every ten years (which would still be faster than historical language evolution!) while still believing that Swift or Go might go faster than that.

It's also worth noting that Swift isn't even at 1.0 yet, and they've said there will be changes before release. So I also disagree with a somewhat hysterical "we'll be stuck with this!!11!!" right now.

It's all just guesses, though. We'll see.


Python 3 is also not meant to run Python 2.x code. It's pretty trivial to port most code from Python 2.x to Python 3, but there's no real reason to do so unless you're writing a library or other project for other people to take advantage of.

It was intentionally done as a 'clean break' release; 2.x keeps doing the same stuff it used to do, and Python 3 changes a bunch of stuff which, in hindsight, makes sense (such as a distinction between 'stream of bytes' and 'string of text', vs. 'stream of bytes which may or may not be ascii text' and 'string of unicode text').

That said, there's no real benefit to moving an existing project/codebase from 2.x to 3.x, and it was never intended that there would be one. Python 3 is for new projects; Python 2.x is for existing projects, or new projects which need deployment in older environments.
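
To illustrate the bytes/text split mentioned above, a minimal Python 3 example:

    raw = b"caf\xc3\xa9"          # a stream of bytes, e.g. read from a socket or a file
    text = raw.decode("utf-8")    # the conversion to a string of text is explicit
    assert text == "café"
    assert isinstance(raw, bytes) and isinstance(text, str)

    # In Python 2, str played both roles, and implicit conversions between byte strings
    # and unicode strings were a common source of surprise UnicodeDecodeErrors.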


Which is one of the reasons I think it's a sign of acceleration. People are developing strategies to deal with language evolution. This, "from __future__" imports, and gofmt are all tooling that helps you deal with a language that's still willing to evolve.
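
As a concrete example of that tooling (nothing beyond the standard library), a Python 2 module can opt in to Python 3 behaviour ahead of the migration:

    from __future__ import print_function, division, unicode_literals

    print("ratio:", 1 / 2)   # 0.5 -- true division, even when run under Python 2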

That said, I don't think this part of the Python 3 effort entirely succeeded. Unfortunately they share a package source, so a lot of projects do need to support both. But whatever failings there have been in the Python jump to 3.0, they're nothing compared to the disaster that was Ruby 1.9, even though I think 1.8.7 is truly relegated to legacy now.

As long as they learn from those issues, though, I think the future is bright for non-stagnating languages.


This isn't right about the renaming. The switch from mutable pairs to immutable pairs happened in release 4.0 of PLT Scheme, in June 2008. The name change was in May 2010.

