benbalter's comments | Hacker News

TL;DR: When working as a distributed team, be mindful of cultural differences and time zones, encourage breaks between meetings, and make time to connect as humans.


TL;DR: When authoring a pull request, use the body as an opportunity to document the proposed change, especially the “why”, and cross-link any related issues or other PRs to create a trail of breadcrumbs for future contributors.


Default to transferring context asynchronously. Hold colleagues accountable for being async first. If you receive a meeting invite without context, an agenda, or a read-ahead doc, consider politely declining.


PiHole is what first introduced me to the DNS sinkholing concept and was more mature when I was first researching options. AdGuard home has come a long way since then and I’m planning on giving it a closer look when I’m looking for my next project.
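
For anyone unfamiliar with the concept: DNS sinkholing just means answering queries for unwanted domains with an unroutable address instead of the real one. A minimal sketch of the underlying idea using a plain dnsmasq rule (Pi-hole builds on dnsmasq/FTL; the domain and file path here are purely illustrative):

  # Illustrative only: sinkhole a single ad domain at the DNS level.
  # Pi-hole and AdGuard Home automate this for entire blocklists.
  echo 'address=/ads.example.com/0.0.0.0' | sudo tee /etc/dnsmasq.d/99-sinkhole.conf
  sudo systemctl restart dnsmasq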


> A lack of knowledge hoarding, healthier knowledge transfer and decision making, and reduced waste.

Author here, +100 to this. The role of the PM should be to drive consensus around the problem, not to decree the problem (or requirements) by fiat.

For me, that process is highly collaborative, and engineers (and design, support, etc.) should 100% be involved from the beginning. The amount of definition will vary from team to team and even engineer to engineer, but if a PM is hoarding knowledge, their understanding of their own role is the opposite of what it should be.

To paraphrase a famous product manager at Initech, "I talk to the customers so the engineers don't have to". That's not to say the engineers shouldn't talk to customers, but generally speaking, PMs should be the ones conducting qualitative and quantitative research day-to-day so that everyone can focus on what they do best.

At least on my teams, for example, user interviews are recorded and shared with the entire team, along with their raw notes, as are the high-level takeaways, allowing everyone to opt-in to as much or as little context as they'd like. We treat quantitative research the same way, sharing the underlying query, raw data, etc.

While the PM may ultimately drive the discussion, problem definition should be collaborative so that the entire team is aligned around a shared product vision. The PM's role should be to gather, organize, and share knowledge to build consensus, not to hoard it.


Agreed. It's one of the biggest things that drew me to SAFe as well because it fosters more communication and collaboration, empowers more people to make decisions and specifically de-emphasizes the PM as a focal point of all things.


What’s SAFe?


Scaled Agile Framework


When you request an archive of your data, we send the download link to your primary email address (the required token is not available via the web UI). Once you click that link, you'll be asked to re-enter your password. So for this particular feature, an attacker would need both your GitHub password (and your 2FA seed or an active session if 2FA is enabled) and access to your email.


There's also the API (https://developer.github.com/v3/migrations/users/), which doesn't involve the web UI or email.

The docs say it's "only available to authenticated account owners"; I hope that means you can't use a token for it, but I'm not sure.
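
For reference, starting an export through that API looks roughly like the sketch below (illustrative only; check the linked docs for the exact preview media type and parameters — and whether a personal access token is actually accepted here is exactly the open question above):

  # Rough sketch of the user migrations endpoint; headers/fields per the linked docs
  curl -X POST https://api.github.com/user/migrations \
    -H "Authorization: token YOUR_PERSONAL_ACCESS_TOKEN" \
    -H "Accept: application/vnd.github.wyandotte-preview+json" \
    -d '{"repositories": ["octocat/Hello-World"]}'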


Did I miss something? Is there not a way to use 1Password 7 without it automatically uploading your 1Password 6 vault to their cloud as part of the setup flow (as it did for me)? Unless I did something wrong, it looks like a my.1password.com account is _required_ in 1Password 7.


I've been testing the 1PW7 Windows beta against a local vault. It would surprise me if the Mac version only supported 1password.com cloud vaults.


I am the Product Manager for GitHub Pages. As has been mentioned multiple times here, the usage limits were not in response to a specific external event. The limits have been an internal policy (in one form or another) for as long as I've been involved (nearly 4 years now), and we chose to publicize them in a series of updates beginning early this summer.

This is a classic case of "this is why we can't have nice things". If you're using GitHub Pages for your personal site or to document/talk about the work you're doing on GitHub, in general, you should be fine, even if you get HN-level traffic every once in a while.

The problem comes when a small handful of users use GitHub Pages for things like automated version checks or configuration distribution, as a makeshift ad CDN for for-profit sites, to push an automated build every minute, or to distribute large assets (for which things like Releases are a better fit).

When a user (ab)uses GitHub Pages in a way that threatens our ability to build or serve other users' sites, technically or practically, we need to step in, and those posted limits are intended to help set expectations as to what you should and shouldn't use GitHub Pages for. But again, the vast majority of the nearly 1M users that use GitHub Pages will never hear from us (and in the cases where users have, we proactively reached out, provided ample warning, and offered to help).


> Additionally, GitHub Pages sites must refrain from:

> Pornographic content

How strict is this rule?

There have been some interesting open source AI projects related to NSFW images (e.g. Yahoo_NSFW, Open_NSFW, MilesDeep). What is GitHub's policy regarding these projects? Could a GitHub Pages site present results? What about a link to download a training dataset?

I also just noticed that Open_NSFW's web page is hosted on GitLab (https://open_nsfw.gitlab.io/). Would a page like this (which might be considered pornographic, depending on your interpretation) be allowed on GitHub pages?


See the section on "Sexually obscene content" in the GitHub Community Guidelines (https://help.github.com/articles/github-community-guidelines...). We purposely chose the word "obscene" and not "explicit" to allow for explicit but educational, scientific, or artistic content like this.


Clever, but respectful. I did not expect this response, but thinking about it again in the context of the company it is coming from, I can see why you chose it.


Thank you. Seriously. GitHub Pages is a great service. It's saved me on a number of smaller projects. The Usage Limits are surprisingly liberal considering it's a complimentary service! So, thank you and your team!


Do the "requests" mean page views or http requests? A single page view almost always has multiple http requests.

Github really needs to add https support for custom domains. It's 2016, https should be the default.

[Reposting my comment]


CloudFlare supports HTTPS for GitHub Pages, and I'd definitely recommend it: although Namecheap is pretty good, CloudFlare makes everything DNS, CDN, and security related so easy for $0. (I'm not affiliated with them in any way :P )

https://blog.cloudflare.com/secure-and-fast-github-pages-wit...


The route from the user's browser to Cloudflare is encrypted (https), but the route between Cloudflare's servers and GitHub Pages is only http, as GitHub does not support https for custom domains.

User <---https---> Cloudflare <---http---> Github pages


As long as your GitHub Pages site has https (it does), Cloudflare can do full HTTPS all the way through, and even strict mode to require a valid SSL cert (which GitHub has).


I don't think this is correct for GitHub Page sites that use custom domains. See [1] and [2].

[1]: https://konklone.com/post/github-pages-now-supports-https-so...

[2]: https://github.com/isaacs/github/issues/156
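
A quick way to see what [1] and [2] describe (hypothetical domain; assumes the DNS record points straight at GitHub Pages with no proxy in front):

  # Illustrative: with no proxy in front, a TLS request to a custom domain served
  # by GitHub Pages gets answered with the shared *.github.io certificate, so
  # certificate verification fails on the hostname mismatch
  curl -I https://www.example.com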


Any tips on how to configure this? I'm pretty sure my setup has the problem that ploggingdev talked about.

I acknowledged the issue given, but considered it better that the content the user is accessing be hidden for their privacy; the link between Cloudflare and GitHub is backbone-of-the-internet stuff and has a whole different set of risks. It would be nice to plug it, though.



We may make things more flexible down the line, but for now, it was motivated by two primary reasons:

1. The overwhelming majority of users use one of those three design patterns. For example, I can tell you that more than 98% of GitHub Pages sites use either `master` or `gh-pages` as their primary branch (with only about one tenth of one percent of sites using the `stable` branch).

2. Our experience tells us that every option we add to GitHub Pages increases the learning curve for newer developers. With only those three options, if you're just learning HTML, you don't need to understand Git's branching model before you can create your first website.

We chose these options to start because we thought they struck a good balance between supporting collaborative documentation workflows for open source projects and encouraging "hello world" experimentation among new developers, but as with most features at GitHub, this is just the start.
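
For anyone who wants to try the `/docs` option, the flow is roughly the following (illustrative file names; the publishing source itself is selected in the repository's settings):

  # Illustrative: publish a site from the /docs folder on master
  mkdir docs
  echo '<h1>Hello, world</h1>' > docs/index.html
  git add docs
  git commit -m "Add GitHub Pages site under /docs"
  git push origin master
  # then choose "master branch /docs folder" as the source in the repository settings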


That's what defaults are for.

You have two choices to make: which branch and which path within that branch. The defaults are "master" and "docs," but allowing maintainers to make different choices there doesn't need to impact the onboarding experience for new developers. The newbies use the defaults - people who need customization can customize. Software development is a broad field and projects work within various constraints and architectures. Asserting a convention here feels a bit misguided.

You already allow the default branch to be customized. It's a bit weird that I can specify a default branch (like "develop") and delete the master branch, but then won't be able to publish docs without recreating master or maintaining a gh-pages branch.


> Asserting a convention here feels a bit misguided.

Expecting GitHub to spend real money to cater to a tiny minority of users of their web service is misguided. The business models behind public internet services are simply not structured to cater to anything but the lowest/simplest common denominator users.


I expect a reasonable amount of flexibility in developer tools, especially since the difference between the current solution and the customizable one is most likely two inputs in a web form.

It's great to see new features shipping on GitHub.com (especially since it felt stagnant for so long). I have no idea what their codebase looks like, but prepopulating a text field with the contents of a template.md file and allowing the name of the publishing branch to be specified feel like they should have been fairly minor changes. Especially now that they clearly have the architecture to choose repo-specific publishing locations, allowing those to be user-specified should be a very low-effort way to make the tools useful for more projects.


> Expecting GitHub to spend real money to cater to a tiny minority of users

You must be new here...


Great! Please don't give in to the demand for added flexibility without strong justification. Just as it did with README.md, GitHub has an opportunity to drive successful conventions and global simplifications that virtually no other organization in open source can.


I'm not sure if you're being sarcastic, but the README file was literally a convention for decades.


Not sarcastic at all. First, I said README.md. Markdown in readmes was popularized, if not invented, by GitHub. Furthermore, the top billing of README.md files in the GitHub UI took over from more traditional meta-level documentation that used to live in separate places, such as an "About" field/page on SourceForge or a separate web page. I much prefer this documentation in the repository. Even if projects had READMEs, they might not have included certain information there previously.


All these things are just GitHub's implementation and UX details. The README file is the traditional meta-level documentation (if I correctly understand what you mean by that: a top-level document that gives a more or less comprehensive overview of the project). READMEs always included this information, sometimes offloading details of particular topics into INSTALL, AUTHORS, LICENCE, etc. files. GitHub just figured they might as well use the thing as intended and actually render it on a web page. As for markup, people were already using email-like markup in those files, so this is again just following established practice to its logical conclusion with a modern implementation. GH's sole innovation in convention here is tacking on the very annoying file type extension in order to not have to figure out the format automatically.


Thank you for making a sane default based on actual usage data. BTW, for people who need something a bit different, I have been using this for one of my sites that is hosted on GitHub:

  $ git subtree push --prefix=public origin gh-pages
The `public` directory is where the HTML/CSS/JS generated by my static site generator lives, and I push that subtree to the gh-pages branch.
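
One gotcha worth noting: the push can be rejected as a non-fast-forward if the remote gh-pages history has diverged (e.g. after regenerating the site). A force-push variant along these lines is a common workaround (sketch only; same `public` prefix assumed):

  $ git push origin $(git subtree split --prefix=public master):gh-pages --force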


Favorited for later. Thanks, that's useful!


Personally speaking, I think this is a great idea. Great job!


Why /docs instead of something more encompassing like /site or /pages?


> Is there a discussion somewhere that outlines some of the reasoning behind this pick?

Yes. The decision was actually made on the open source GitHub Pages Gem repo, based on both user feedback and actual usage stats: https://github.com/github/pages-gem/issues/179.

