
I am a firm believer that if the web had working authoring support as originally planned in the proposal document[1] (phase 2), these early web building services wouldn't have been nearly as popular, or might even have been entirely unnecessary. The web would've grown in a distributed way, with people having full control over the data they share. Centralized services might not have evolved into the commercial behemoths they are today. Social media platforms wouldn't exist as we know them, or possibly at all. ISPs would've been forced to offer symmetric connections from the start to meet the demand for home servers.

So I see this as an early mistake that snowballed into the cesspool that is the modern web. Things would've been very different, possibly for the better, if we had web publishing tools that were as user friendly as the web browser was in those early days.

The original WorldWideWeb browser released in 1991 did have support for WYSIWYG editing of documents, but was quickly overtaken by Mosaic, which was read-only. It would be interesting to know why this feature was abandoned and not iterated upon. The Wikipedia article[2] mentions this:

> The team [at CERN] created so called "passive browsers" which do not have the ability to edit because it was hard to port this feature from the NeXT system to other operating systems.

That could be a hint, but it doesn't explain why NCSA didn't implement it in Mosaic.

Work on WebDAV started a few years later, in 1996 (the first RFC wasn't published until 1999), but it never really took off. It was seen more as an alternative to FTP than as a native web feature. Why did it fail? Why wasn't it adopted by web browsers? Konqueror seems to have been the only one.
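For those unfamiliar: the authoring half of WebDAV builds on plain HTTP verbs, so publishing a page is conceptually just a PUT to the same URL it's served from (DAV adds PROPFIND, LOCK, and friends on top). A rough sketch, in modern terms, of what browser-native publishing could have looked like against a hypothetical DAV-enabled server (the URL and credentials are invented for illustration):

    // Upload a page to the URL it will be served from, via HTTP PUT.
    await fetch('https://example.com/~me/index.html', {
      method: 'PUT',
      headers: {
        'Content-Type': 'text/html',
        'Authorization': 'Basic ' + btoa('me:secret'),
      },
      body: '<h1>My home page</h1>',
    });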

Much later, in 2009, Opera launched Unite, a web server inside the browser, which seemed really promising at the time. But it was also quickly discontinued. At that point it might've been too little, too late.

And now we have Web3 and the decentralized movement, and even TBL is trying to undo the damage with Solid. But I have little hope any of these projects will see mainstream adoption. The modern web has too much traction, and the average web user doesn't care enough about their data to change their habits, even if the tools were simple to use.

Anyway, this is possibly too tangential for a thread on GeoCities, but I'm curious if anyone here has more information about this early history of the web. I would love to know TBL's perspective about all of this.

[1]: https://www.w3.org/Proposal.html

[2]: https://en.wikipedia.org/wiki/WorldWideWeb



Thank you for the interesting bits of history. I never knew about the planned phase 2, or the other attempts at a more distributed web.

I suspect the reasons why it never caught on were much simpler. Most of the early adopters were very enthusiastic and wanted to contribute, and there were many niches where one could contribute something new. Then there is the whole issue of design, which makes publishing on the web a relatively high-effort proposition. Even with GUI editors, it is much closer to desktop publishing than word processing. The former never really caught on outside of professional work because it is high effort. Blogs and wikis are more popular, with the effort being closer to word processing. Yet even that is a bit much, given that very-low-effort social media seems to be dominant these days.

Then there are things like hosting. Hosting has always been easy to find, but it is harder to find something stable across time. And discoverability, which has always been an issue, but at least centralized services mitigate some of that. And the whole social angle, which instantly makes it more complex to set up and manage.


> Then there is the whole issue of design, which makes publishing on the web a relatively high-effort proposition.

Sure, but early web pages were quite rudimentary. A simple tool that allowed adding links and images would have been enough to start with. Later, when scripting and styling were standardized, it could've evolved to support those features as well. WYSIWYG HTML editors did this relatively well, but they never solved the more difficult task of actually publishing and serving the content. The best they could do was offer file syncing over FTP, while actually setting up the web server, DNS, etc., was left to the user.

An entire industry of web builders and hosting providers appeared to fill this void. Services like GeoCities and Tripod were early iterations, and today we have Wix, Squarespace, WordPress, and countless others. All social media platforms are essentially an offshoot of this, enabling non-technical users to publish content on the web. This proves that it can be done in a user-friendly way, but the early web tooling simply didn't exist to empower users to do this for themselves.

Imagine if instead of using a web browser, consuming web content consisted of a command-line tool to download files from a server, and then opening them up separately in other tools to view them. I love cURL and tools like it, but the reality is that this experiment would never have become mainstream if we hadn't had a single tool that offered a cohesive and user-friendly experience. This is a large reason why Mosaic was as popular as it was. It really brought the web to the masses.

> Then there are things like hosting. Hosting has always been easy to find, but it is harder to find something stable across time. And discoverability, which has always been an issue, but at least centralized services mitigate some of that. And the whole social angle, which instantly makes it more complex to set up and manage.

Sure, but I think those problems would've been solved over time. We see them as difficult today because we always relied on large companies to solve them for us. Instead of search indexers that crawled the web, why couldn't we have relied on peer-to-peer protocols? DNS was an established protocol at the time, and it was already highly distributed. The internet itself is distributed. Why couldn't discoverability on the web work in a similar way?

The WWW proposal mentions this point:

> The automatic notification of a reader when new material of interest to him/her has become available. This is essential for news articles, but is very useful for any other material.

This sounds very similar to RSS. So there were early ideas in this direction, but they never solidified. Or if they did, it was too late to gain any traction.
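For reference, this is roughly what that idea eventually solidified into; a minimal RSS 2.0 feed looks like this (titles and URLs invented for illustration):

    <?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>My Home Page</title>
        <link>https://example.com/</link>
        <description>New material of interest to readers</description>
        <item>
          <title>A new article</title>
          <link>https://example.com/articles/1.html</link>
          <pubDate>Mon, 06 May 2024 00:00:00 GMT</pubDate>
        </item>
      </channel>
    </rss>

A feed reader simply polls this file for new items, which is exactly the "automatic notification of a reader" the proposal describes, just decades later and bolted on rather than native.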

Today the decentralized/federated movement is proof that this _can_ work. Imagine if we had protocols and tools like that from the very beginning. My argument is that the reason we have the highly centralized web of today is that we lacked simple web authoring early on. Non-technical users would have learned to use these tools just as they learned how to use the web browser. Our collective mindset of what the web is and how to participate in it would be built on collaborative instead of consumerist ideals.

We would still need some centralized services, of course (e-commerce would still exist), but not for simple things such as publishing content. I even think the grip of the advertising industry would be far weaker, since it wouldn't be able to profit as much from our personal data. Users would have far more control over their data and privacy. Propaganda and mass psychological manipulation in general wouldn't be as prevalent as they are today.

But maybe all of this is wishful thinking by a jaded mind. :)


> Sure, but early web pages were quite rudimentary. A simple tool that allowed adding links and images would have been enough to start with.

I'm still not convinced that would have helped maintain a decentralized web. For one thing, such tools existed very early on. I seem to recall WYSIWYG HTML editors with integrated FTP support being common. Many ISPs also included web hosting, which would have taken care of the web server and DNS part. While such a setup wasn't decentralized, it was certainly less centralized than the web today. I simply think the number of people who are interested in typing more than a paragraph or two at a time is quite limited. In that case, the effort is minimal and the tools required to support it can be quite simple (e.g. there is very little need to deal with formatting, creating links to other pages, etc.).

> Imagine if instead of using a web browser, consuming web content consisted of a command-line tool to download files from a server, and then opening them up separately in other tools to view them.

That pretty much reminds me of Gopher. And you're right. Just look at how quickly the web took over.

>> Hosting has always been easy to find, but it is harder to find something stable across time.

> Sure, but I think those problems would've been solved over time. We see them as difficult today because we always relied on large companies to solve them for us.

In a sense, you're right. Self-hosting wasn't an option for many people in the early days since they had ephemeral dial-up connections. So you had to rely upon someone else, which meant there was a good chance you would have to "move" (e.g. changing ISPs when using ISP-provided web hosting, or simply changing web hosting providers). Aside from reliability, security, dynamic IPs, and cranky ISPs, there are no barriers to self-hosting today. Most of those can be overcome with existing software. I simply don't think there is much demand for such software these days, which is why it is uncommon.

> But maybe all of this is wishful thinking by a jaded mind. :)

There is nothing wrong with hopeful thinking. Your suggestions are even valuable in the current context, since there are people who are interested in building and hosting webpages in a decentralized manner. While a simple tool would revolutionize the lives of those people, I'm going to stick by my doubts about it revolutionizing the web.


> I'm still not convinced that would have helped maintain a decentralized web. For one thing, such tools existed very early on.

HTML editors with FTP support were quite common, yes. As was free hosting, though my memory is spotty regarding how user-friendly ISP hosting was specifically. I came online in the late 90s, so I missed this early era, and I don't remember my ISP offering this service.

Regardless, these services and software came after the functionality of the web was already established (Prodigy, 1994; GeoCities, 1994; FrontPage, 1995; Dreamweaver, 1997). They were essentially a response to demand for features the web didn't natively have. By that point Mosaic was the most popular browser, and Netscape and IE were starting to dominate. The idea that the web was a read-only medium meant for consumption had already gained momentum. Even back then, relatively few people bothered to figure out how to build their own site. Some non-technical people did, and these services and software helped, but by and large most users were consumers.

> I simply think the number of people who are interested in typing more than a paragraph or two at a time is quite limited.

The current state of the web contradicts this. Most people do indeed want to share their thoughts and ideas. Social media platforms were created essentially to enable everyone to communicate, to publish and discover content, regardless of their technical knowledge. A _ton_ of content is created by non-technical users, and it now vastly dominates the content available on the web. Think only of the millions of hours of video uploaded to YouTube, Instagram, TikTok, etc.

So it's not that people aren't interested in creating content, but that the only places where they can easily do so are platforms created and maintained by large companies.

I'm not saying we wouldn't have needed centralized services of this kind even if we'd had native web publishing from the start. But imagine if the software used to access the web had the same simple UI as Twitter or Facebook, but instead of being a centralized service, it was available locally, directly in the browser (or "publisher", or whatever we would've called it). That would've certainly reduced the friction to contribute, lowering the need for third-party services and software to fill that void, and the web would've evolved much differently.

Perhaps we would've invented P2P protocols like BitTorrent much earlier, to address the need to share large volumes of data. Perhaps the demand for higher connection speeds would've accelerated the migration to broadband. Who knows; we can only speculate at this point.

> Aside from reliability, security, dynamic IPs, and cranky ISPs, there are no barriers to self-hosting today.

I disagree. It's still largely a task for technical users. A non-technical person would still struggle to set up their own web server that serves their own content, despite how ubiquitous cloud hosting providers are and how user friendly web servers have become (Caddy et al.). Doing that on their home network is even more challenging, because of the issues you mentioned. User-friendly web serving software and network infrastructure is uncommon precisely because there's no demand for it, because the problem is solved by large companies. It's a chicken-and-egg scenario.
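To illustrate how close the software side has gotten: a complete Caddyfile for serving a static site with automatic HTTPS is only a few lines (the domain and path are placeholders):

    example.com

    root * /home/me/site
    file_server

The hard parts that remain are everything around it: DNS records, port forwarding, dynamic IPs, and an ISP that tolerates inbound traffic. That's exactly where non-technical users give up.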

> While a simple tool would revolutionize the lives of those people, I'm going to stick by my doubts about it revolutionizing the web.

Yeah, I don't think there's a chance anything will change now. That ship has already sailed. The decentralized movement is much larger today than it's ever been, yet decentralized services are only used by a very niche group of internet users. I don't think that architecture will ever replace the existing web. I just wish we had enabled and educated users from the start to see the web as a collaborative medium, not one controlled by corporate interests. At least I think that was TBL's vision, which he's now trying to realize with Solid. I wish it and projects like it the best of luck, but I think it's too little, too late to undo the mistake.

Cheers for the discussion! :)


Browsers still support an editing mode:

    // makes the entire document editable in place
    document.designMode = 'on'
But including a web server in every web browser never caught on, sadly enough.
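That said, you can get halfway to round-trip authoring with no server at all. A quick client-side sketch (untested) that saves the edited page back out as a file:

    // Serialize the live, edited DOM and trigger a download of it.
    const html = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
    const blob = new Blob([html], { type: 'text/html' });
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'page.html';
    link.click();

The missing half is what this thread laments: a standard way to push that file back to the URL it came from.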


Didn't early versions of Netscape have a WYSIWYG HTML editor?

I think the bigger challenge was highly available hosting. There were authoring tools aplenty.


There were plenty of HTML _editors_, but the process of actually publishing something on the web required technical knowledge about networking and system administration. This was the domain of tech enthusiasts (and still is today), which evolved into the web hosting industry and later enabled the proliferation of centralized web platforms like social media. My argument is that if tools had existed from the very start that made publishing content as easy as web browsers made consuming it, the web would look very different today, and for the better. Clearly this was planned in the early WWW proposals, but it never caught on for some reason, and I'm trying to understand why.


No, the earliest I remember was Netscape Communicator, which included the whole suite of Navigator (browser), Composer (WYSIWYG editor), Mail, and News (Usenet). Those were basically the last versions of Netscape before its downfall, not the earliest.



