Libaws: A simpler way to declare AWS infrastructure (github.com/nathants)
87 points by ithkuil on May 24, 2022 | hide | past | favorite | 89 comments


Why should it be?

Is Linux easy? Is Windows?

AWS (like GCP and Azure) can basically be thought of as an entire operating system, with extremely low-level concepts (VPCs, EC2 machines), mid-level ones (Lambda, StepFunctions, EventBridge), and high-level ones (Translate, Comprehend, Rekognition, etc.).

I am not sure if this library helps. It presents its own opinionated flavour of AWS but doesn't really hide enough of the details.

If you want to use AWS and not think of underlying infra like networks and security groups, build on high-level services like EventBridge and StepFunctions.

Alternatively, if you don't want to think about services at all, use CDK constructs: https://docs.aws.amazon.com/cdk/v2/guide/constructs.html

For example, here is a NetworkLoadBalancedFargateService: https://docs.aws.amazon.com/cdk/api/v1/docs/@aws-cdk_aws-ecs...


My anecdotal evidence is developers struggle big time to deploy simple stuff to AWS. "Easy things should be easy" isn't part of their philosophy.

Not complaining though, I make good money understanding AWS so others don't need to.

> Why should it be?

Because if they don't focus on developer experience, they might end up being treated as a commodity. Eventually someone will eat their lunch.


> isn't part of their philosophy.

There is no one AWS philosophy. The philosophy of foundational networking services is going to be different from that of higher-abstraction services.

They are for different customers with different needs.

> Because if they don't focus on developer experience, they might end up being treated as a commodity. Eventually someone will eat their lunch.

That's certainly true.


> Why should it be?

There exists a cohort of internal customers at medium-to-large sized software companies that need to ship new applications without being bogged down by the decisions of how to get that thing running and playing nicely with the rest of the ecosystem. For the same reason they don't give their end users CRUD APIs to use their products, it doesn't always make sense to give our developers the entire suite of AWS and say "figure it out".

There's need here but I'm not sure we've figured out what the right answer is.


> a cohort of internal customers at medium-to-large sized software companies that need to ship new applications without being bogged down by the decisions

Well, if it's a medium-to-large sized company they will be bogged down by governance processes anyway. Who approves the budget for this? Who signs off the cybersecurity and data protection compliance policies?


I consider that something that would need to be abstracted away from individual contributors who aren't part of the infrastructure team(s) and baked into any self-service solution. These are the things that slow teams down, and we want them sorted before anyone even thinks about spinning up a new app.


Right, and I think that strikes at the core of the issue; AWS allows you to configure things in insecure, poorly architected ways. Then they'll tell you "oh this is bad, you should fix that", as if they weren't the ones who built it to do that!

The classic example is public-read S3 buckets. My favorite-er example is actually a piece of security guidance which reads: nothing should use the default VPC (usually phrased from the perspective of, all the SGs on the default VPC should restrict all traffic). Follow that line of logic: why did you give me a default VPC then? Well, they'd argue, it makes onboarding easier, you can just deploy all these cool resources without worrying about the complexities of the networking. Wait hold on. All of this is within their power to control: they made the networking insanely complex, they made the networking layer integral to the Cool Product, they created the idea of a default VPC, they created the security rule that says it's bad, and it really paints this picture of a house of cards where no one is in control, no one has vision; it's just a bunch of heads arguing with each other about their worldview on how things should be.

Of course, one head yells "backwards compatibility, legacy, more knobs more knobs" and because it's Amazon (and Microsoft) that head gets a megaphone.

Because of this, as the OP wishes, AWS accounts are an extraordinarily useless thing to hand over to a development team. They're nothing like a Heroku or DigitalOcean account; at best, they're so complex that you need a domain expert to understand what to do, so all that ends up living in the CloudOps Org. At worst, the team will shoot themselves in the foot, and maybe take the company's bank account (or worse, customer data) with them.

Thus, the New Hotness in BigBoyTech is to build a PaaS on top of AWS. Because Amazon isn't capable of doing it! Lightsail is a joke. Lambda is probably the closest they've gotten. This PaaS asserts the security, compliance, performance, architecture, etc opinions the company wants, and dev teams can build on it.

Fine, whatever, but at some point you have to question why we're so obsessed with inefficiency that we tolerate knobs we won't use, or we'll miss turning to Secure, when alternatives do exist. Do you know what a compliance official's wet dream is? "Hey man I need somewhere to store this file in the cloud" "Use Google Drive" DONE! They bought Google Drive. Maybe tweaked a couple settings. Secure by default. AWS is nothing like that, and it can take millions in engineering to get an active AWS account to that state (let alone delegated management of multiple accounts across an org).

It's actually collective insanity that we put up with it, when the only half-reason why it is so is because JP Morgan has a COBOL service built in 1987 that can only run if it's hardwired with a telephone jack directly to an ISP interchange in Ashburn, and they're too cheap to modernize it so they just tax the rest of the world by abusing Amazon's "never say No to a paying customer" policy. I'm being dramatic, but that describes 98% of the knobs on AWS.


As an indie dev who is NOT a DevOps person, I'd love more tooling that enables me to configure AWS with good, safe defaults preset for me. Right now you really need to be a pro, or spend a lot of time learning the nitty-gritty configuration of services you need but don't actually care about.


Any decent DevOps / SRE / cloud ops type person has been striving to build something like this for a long time and has been frustrated at the kind of experience developers get out of the investment they put into all these crazy tools.

The problem I’ve seen is that, like most software in general, there is no one-size-fits-all approach to scaffolding and managing infrastructure in AWS or any other cloud, especially when we add in the likelihood of the business pivoting.

On the other hand, there are offerings such as AWS Landing Zones that sorta help if you’re running a big ol’ sprawling company but it’s totally overkill and costs more than anyone really wants to pay for if they’re just getting started.

IMO, the AWS journey is probably better off for most folks using Lightsail and scaling with several of those instances making sure they can be containerized and have a brain dead easy deployment workflow that can withstand instance failures before getting more complicated. A lot of people are pushing serverless at first but it’s cumbersome enough for application developers that it’s distracting from the most important thing early on - building your dang product, silly.

Otherwise I’d suggest going hard into containers and using a PaaS until it really, really hurts to where you can afford to pay the eye watering compensation of most decent SREs (expect $15k+ USD / month total spend if you have few advantages in terms of access to the market).

I am not a fan of over-planning infrastructure at all, but I have also seen what happens when developers take too long to hire someone specialized in infrastructure, and the common mistakes that can gut company growth hard as they’re trying to scale up and out.


Or make a ton of mistakes and scramble to recover from them...


That was sort of the idea with the various PaaS implementations out there. But invariably they either go complicated (basically transitioning to a Kubernetes-based container platform), or they're missing some capabilities, making them less interesting to enterprises in particular.


Is there a PaaS out there that enables some guardrails? An example would be "we only want to use Fargate, don't manually provision EC2" and that option just doesn't show up as something you can self-serve.


At my old job, SecDevOps built a metric fuckton of monitoring of all AWS accounts in the Organization, as well as some pretty complex pipelines that checked the types of infrastructure being built.

Everything built in AWS was done with Terraform (minus some bootstrapping of accounts and orgs). Custom Terraform modules were built to act as guardrails around the underlying raw Terraform providers.
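A minimal sketch of the kind of guardrail check such pipelines run, assuming policy is expressed as a denylist of resource types. The `resource_changes[].type` and `.address` fields are real parts of the JSON that `terraform show -json` emits, but the plan below is a trimmed stand-in:

```python
import json

DENIED_TYPES = {"aws_instance"}  # e.g. "use Fargate, don't manually provision EC2"

def violations(plan_json: str):
    """Return addresses of planned resources whose type is denied."""
    plan = json.loads(plan_json)
    return [
        rc["address"]
        for rc in plan.get("resource_changes", [])
        if rc["type"] in DENIED_TYPES
    ]

# A trimmed-down stand-in for `terraform show -json tfplan` output.
plan = json.dumps({
    "resource_changes": [
        {"address": "aws_instance.rogue", "type": "aws_instance"},
        {"address": "aws_ecs_service.app", "type": "aws_ecs_service"},
    ]
})
print(violations(plan))  # ['aws_instance.rogue']
```

In the real pipeline this ran against the full plan before apply, failing the build on any violation.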

I'm not saying it's ideal, I'm just throwing it out there.


I'm not really familiar with terraform, I'll check it out.


100% agreed

"Piloting a passenger plane should be easy"

"Performing brain surgery should be easy"

None of that should be easy. AWS is not meant to be focused on ease of use in the first place. It offers brutal power to those who know how to operate it and brutal ways to fuck up if you don’t.

The shipping of applications mentioned by the comment in this thread is not a valid claim either: there are countless services by now attempting exactly this (e.g. Firebase). Yet demanding that AWS or GCP be child's play to use shows a complete disregard for what they actually offer. I feel like this is similar to saying "we want to build a spacecraft to transport satellites into space, but we'd love to just have two buttons to start and land the rocket"

And then, even if you manage to abstract everything away to just those two buttons, someone will press one at the wrong moment, the rocket will hit a plane flying above, and you'll realize the expertise is always required; and someone who has that expertise doesn't need all of the abstraction in the first place.


> "Piloting a passenger plane should be easy"

> "Performing brain surgery should be easy"

> None of that should be easy.

Why shouldn't those things be easy? Wouldn't it be better if they were? It would mean more people could do it well (I'm not saying let anyone do it!) and it would presumably have come about as a result of better understanding and tooling. Imagine if people didn't die on the waitlist for the best neurosurgeon in the country, because we'd made brain surgery easier and so now all the neurosurgeons could do that one formerly-super-difficult surgery.


If surgery was made "easy" then it would be a one size fits all and anyone with an outlier condition would not get medical care. We train doctors to have a very deep knowledge of their discipline because when they do encounter that edge case, they need to know immediately what to do.

Medical care is sometimes extremely time sensitive and you don't want the doc to have to ask his superior how to stop your bleeding.


> If surgery was made "easy" then it would be a one size fits all and anyone with an outlier condition would not get medical care.

On the contrary, I think it would be a lot easier to treat an outlier condition because doctors would have a lot more time to expand their skills and study uncommon conditions.


> If surgery was made "easy" then it would be a one size fits all and anyone with an outlier condition would not get medical care.

Why would that necessarily be the case? Why wouldn't we train doctors to still know what to do, and also make it a lot easier to do it well?


.. we are.


Because anyone trying to perform brain surgery still needs to have phenomenal understanding of the (brain) matter at hand. Sure, super-accurate robots to do the cuts and whatever are really cool, but I'd reckon they still have to be operated by someone who knows what to do.....


> if you don't want to think about services at all, use CDK constructs

This is the level at which it makes sense to use AWS if you're a small team. Ten lines of code and five minutes, and you have a deployment of HTTPS-terminated, load-balanced containers that scale with your usage. It just works, it scales, and for small teams it saves a huge amount of time (and dev effort: instead of focusing on the correct nginx or haproxy settings I can write my app).


Because they're literally wasting your existence making it overly-complicated to either justify their own egos, or, to steal your/your company's money for an inferior product.

Linux is easy/well-designed/well-documented. Windows is a hackjob mess. There's a difference.


We need AWS Distributions, like Linux distributions, and pre-made Apps that work with that distro.


There is no tool in the world that will make you good at something you don't know.

If you'll allow me a story: I sail a kind of boat called a Squib. The foresail, i.e. the sail in front of the mast, has the regular sheets, a Cunningham, a halyard tensioner, and barber haulers. The Squib is a very easy boat to sail, but notoriously hard to sail fast.

Complaining about AWS being complicated is like complaining the Squib is complicated just because it gives you options. I don't have to use all those controls; I can still get from A to B without them. But if I want to win the race, I have to understand what they do and use them appropriately.


Sometimes the race is "not burning your whole raise".


> X should be easy

> proposes solution that involves YAML

Am I crazy, am I ignorant, or is YAML the most tedious and error-prone format to edit?


I'm really fascinated about why so many infrastructure-as-code tools are actually infrastructure-as-markup. Many of them seem to "evolve" until they're awkward attempts at bolting full programming language semantics onto that markup language. I imagine it started out as wanting to keep a simple configuration-style format for infrastructure but fell apart when people's stacks grew large and complex.

Projects like Pulumi and CDK seem to be much better approaches, but don't seem to have much traction compared to TF, Cloudformation, etc.


In the early days, people would write imperative scripts to provision infrastructure, but once it's out there you can't just delete resources that you no longer want by deleting the relevant blocks from your script and re-running it--you had to delete them manually or write a "migration script". This was untenable.

Then some tools came out which let you use YAML or JSON to describe the desired state of the world, and some tool would diff that against the current state of the world to determine which resources to create, which to delete, and which to update. People started to conflate "YAML" with this sort of diffing tool and they conflated imperative programming languages with the legacy imperative scripting approach. Early infra-as-code vendors capitalized on the YAML=good/programs=bad myth and re-emphasized it by showing cute toy examples and passing off the simplicity as an effect of the YAML/etc rather than the inherent simplicity of the example.

Unfortunately, pure YAML/HCL/etc doesn't scale--you end up needing to reference resources from other resources (and/or attributes on those resources), and you end up needing to DRY up many repetitious blocks and so on. What you naturally want is to generate those static YAML configs from a programming language; however, IaC vendors had committed themselves to the "YAML = simple" brand so instead they started building half-assed programming language features into their YAML dialect (effectively using YAML to represent abstract syntax trees for the world's crappiest programming languages). Terraform and CloudFormation fall into this bucket.
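What "generate the static config from a programming language" looks like in practice can be sketched in plain Python. The `s3_bucket` helper and the bucket names are made up for illustration, and it emits CloudFormation-style JSON rather than YAML only to stay inside the standard library:

```python
import json

def s3_bucket(name, versioned=False):
    """Build one CloudFormation-style resource as a plain dict."""
    props = {"BucketName": name}
    if versioned:
        props["VersioningConfiguration"] = {"Status": "Enabled"}
    return {"Type": "AWS::S3::Bucket", "Properties": props}

# DRY: one comprehension replaces N near-identical YAML blocks,
# and references/conditions are just ordinary expressions.
envs = ["dev", "staging", "prod"]
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        f"{env.capitalize()}Bucket": s3_bucket(f"logs-{env}", versioned=(env == "prod"))
        for env in envs
    },
}

print(json.dumps(template, indent=2))
```

The output is still a static document the diff engine can consume; the host language only exists at generation time.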

I guess Helm looked at the landscape and decided it wasn't easy enough to generate syntactically invalid YAML blobs, and consequently decided to use text templates.

Eventually the industry caught on to the scam and demanded programming languages back, so we got CDKs which misunderstand the assignment in a different way. Rather than emitting YAML/etc that can be passed into a diff engine, you get some weird bindings that call (and are called from) a node process (I can't tell what this node process actually does even after reading the docs).


CDK outputs CloudFormation templates that you can pass to any other tool if you wish. CDK is technically just a glorified shim over CFN (a 1000x better one, though you do get caught in the awkwardness that is CFN many times).


It does, but very indirectly, which defeats the point of "just emit YAML". I regret phrasing it that way, as it seems to have caused a lot of confusion. See my responses to sibling comments for more information.


> Eventually the industry caught on to the scam and demanded programming languages back, so we got CDKs which misunderstand the assignment in a different way. Rather than emitting YAML…

You’re in for a shock when you realise all they do is emit YAML


Maybe they do, but if you have to tack on a bunch of inheritance and run everything through a node process just to emit YAML then you've lost all of the benefits of "just emit YAML".


Amazon's CDK emits YAML.

Troposphere emits YAML.

You can diff the outputs if you wish, or did I misunderstand your last paragraph.


"Just emit YAML" is about avoiding all of the inheritance and javascript IPC in favor of writing straightforward YAML-builder code in the host language (more like Troposphere, but Troposphere also has poor ergonomics).

These are two examples I found online. The first is more complicated but it doesn't do any JavaScript IPC, no inheritance, no mutation, etc. You write it just like you want to write YAML, and it straightforwardly emits YAML (rather than the CDK version which is more opaque/magical). I prefer this:

    def main():
        bucket_name_parameter = ParameterString(Description="The name of the bucket")
        key_arn_parameter = ParameterString(
            Description="The ARN of the KMS key used to encrypt the bucket",
        )
        bucket = Bucket(BucketName=bucket_name_parameter)
        t = Template(
            description="S3 Bucket Template",
            parameters={
                "BucketName": bucket_name_parameter,
                "KMSKeyARN": key_arn_parameter,
            },
            resources={
                "Bucket": bucket,
                "BucketPolicy": ManagedPolicy(
                    PolicyDocument={
                        "Version": "2012-10-17",
                        "Statement": [
                            {
                                "Sid": "AllowFullAccessToBucket",
                                "Action": "s3:*",
                                "Effect": "Allow",
                                "Resource": Sub(
                                    f"${{BucketARN}}/*", BucketARN=bucket.GetArn()
                                ),
                            },
                            {
                                "Sid": "AllowUseOfTheKey",
                                "Effect": "Allow",
                                "Action": [
                                    "kms:Encrypt",
                                    "kms:Decrypt",
                                    "kms:ReEncrypt*",
                                    "kms:GenerateDataKey*",
                                    "kms:DescribeKey",
                                ],
                                "Resource": key_arn_parameter,
                            },
                            {
                                "Sid": "AllowAttachmentOfPersistentResources",
                                "Effect": "Allow",
                                "Action": [
                                    "kms:CreateGrant",
                                    "kms:ListGrants",
                                    "kms:RevokeGrant",
                                ],
                                "Resource": key_arn_parameter,
                                "Condition": {"Bool": {"kms:GrantIsForAWSResource": True}},
                            },
                        ],
                    },
                ),
            },
        )
        print(json.dumps(t.template_to_cloudformation(), indent=4))
Rather than this:

    class S3Stack(Stack):
        def __init__(self, app: App, id: str) -> None:
            super().__init__(app, id)
            self.access_point = f"arn:aws:s3:{Aws.REGION}:{Aws.ACCOUNT_ID}:accesspoint/" \
                f"{S3_ACCESS_POINT_NAME}"
    
            # Set up a bucket
            bucket = s3.Bucket(
               self,
               "example-bucket",
               access_control=s3.BucketAccessControl.BUCKET_OWNER_FULL_CONTROL,
               encryption=s3.BucketEncryption.S3_MANAGED,
               block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )
            # Delegating access control to access points
            # https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-policies.html
            bucket.add_to_resource_policy(
                iam.PolicyStatement(
                    actions=["*"],
                    principals=[iam.AnyPrincipal()],
                    resources=[
                        bucket.bucket_arn,
                        bucket.arn_for_objects('*')
                    ],
                    conditions={
                        "StringEquals":
                            {
                                "s3:DataAccessPointAccount": f"{Aws.ACCOUNT_ID}"
                            }
                    }
                ),
            )
Note: Compared to Troposphere, the bindings in the first example are completely generated from a spec published by AWS, so they never fall behind. They're also type-annotated so you can use those bindings with type safety. Sadly, the project has been abandoned because it's CloudFormation-specific and the world has moved away from CloudFormation.


I am fond of Starlark, the Python-like configuration language of Bazel. It's code, but they've removed most of the footguns that you encounter when using Python as configuration (e.g. you can't mutate global state).

I think it's a pleasant middle ground between config and code, it's definitely code, but limited.
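The kind of footgun Starlark removes can be shown in ordinary Python. The stack/config shapes below are hypothetical, but the aliasing bug is the classic one:

```python
# A shared "default" in a config module -- looks innocent, is a landmine.
DEFAULT_TAGS = {"team": "platform"}

def make_stack(name, tags=DEFAULT_TAGS):
    return {"name": name, "tags": tags}

a = make_stack("api")
a["tags"]["team"] = "payments"  # intended to affect only `a`...

b = make_stack("worker")
# ...but every stack shares the one dict, so `b` changed too.
print(b["tags"]["team"])  # payments, not platform

# Starlark-style discipline: values are frozen, so consumers must copy.
def make_stack_safe(name, tags=None):
    return {"name": name, "tags": dict(DEFAULT_TAGS if tags is None else tags)}
```

Starlark makes the first version impossible by freezing globals after module load, so configs stay order-independent and reproducible.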


I have grown to loathe YAML over the years. It is a giant footgun of a mess of syntax.

I like Python, I like significant whitespace; I hate how YAML does it.


Yes! YAML is great at visualizing data structures or producing small configs. The way it’s being used today in devops is terrible


Agreed, trial & error with YAML sucks. People hated it so much that Amazon created CDK, a tool that basically generates/executes Cloudformation templates.


Using a proper programming language seems to be the obvious end-game of configuration. Everything as code. Not quite sure why we keep trying and failing to use non-Turing complete configuration languages.


The worst offense is perhaps that we've evolved quite intuitive languages that would impeccably serve most con/prosumer needs as far as "mostly static but smartly architected" configurations are concerned.

Python, Go, even Scratch (for a 100% GUX)… And among the many pros, error handling is enough to warrant the change from markup.

But IMHO here's the one barrier: people need to stop thinking of an Information System (IS) as a collection of machines, and move one level of abstraction above to consider said IS as the unit, and machines as subs of sort (subsystems, components, modules, call it what you want). Conceptually, your "main" space is the IS, everything else is modules down the namespace hierarchy. We need to think of machines as we used to think of programs, and think of IS's as we used to think of machines. In the UNIX philosophy, any application is a collection of programs, just as an IS-as-code is a collection of machines-as-software-modules (basically feature libraries for "main" to use).

When that paradigm comes, you'll be able to simply "import" some subsystem (say Elastic Search, whatever machines in the IS) and add that (as a class, method, whatever) to all your existing objects, because the concept of "interfaces" (quite literally the basis of interoperability, dating back forever in computing) will be _native_ to your IS model in a Turing-complete paradigm.

Not sure my wording makes sense to non-programmers (cue: "quotes" are technical terms, not general meaning), but I'm too lazy to write a layman's exposé.


The issue isn't that they're Turing complete or not, but that people are hacking programming language features on top of YAML in the worst conceivable way. Like if you were regular sadistic, you would say "I'm not going to write Python, I'm going to write Python's AST in YAML", but if you're advanced sadistic you say "I'm not going to write Python's AST in YAML, I'm going to make up an AST for a programming language that doesn't exist with bizarre and inconsistent semantics in YAML (YAML has lists, but I'm going to make people pass lists as comma-delimited strings--it'll be a riot)".
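A plain-Python sketch of why the lists-as-comma-delimited-strings dialect bites (the config shape here is hypothetical):

```python
# What the tool's YAML dialect forces you to write:
config = {"subnets": "subnet-a, subnet-b,subnet-c"}  # a "list"

# Every consumer now re-implements parsing -- and must remember to strip
# whitespace, drop empties, and pray no value ever contains a comma.
subnets = [s.strip() for s in config["subnets"].split(",") if s.strip()]
print(subnets)  # ['subnet-a', 'subnet-b', 'subnet-c']

# What YAML could have expressed natively all along:
native = {"subnets": ["subnet-a", "subnet-b", "subnet-c"]}
assert subnets == native["subnets"]
```

The native form needs no ad-hoc parser and fails loudly on a type mismatch instead of silently mis-splitting.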


...and doing that allows one to interface to existing sources of truth for inventory and whatnot.


Feels like it needs to be stated it also introduces additional attack vectors. It’s not without consequence.


agree


Just like with every other tool, right? Ansible dynamic inventories, Terraform plugins, etc.


YAML is bad, but not that bad.

The problem is more with the software that uses it; a tool's choice of YAML is just a high-quality proxy for the signal you actually want.

YAML is a bit error prone, but it's those tools that have an incredibly complex format.


To be fair, I find editing YAML files not that bad, especially with VSCode checking the syntax and pointing out anomalies.

But it can be tricky to read other people's YAML files on GitHub. It's easy to miss subtle mistakes.


The worst YAML is when they use the templating and anchors for repetition. It ends up completely unreadable.

Though better than when Jinja is used on YAML.


YAML was not created to be edited by hand. It is a serialization data format; programs are supposed to serialize objects into it, not have humans type into it.

Do you edit Python Pickle files by hand?


I didn’t even get that far. I simply can’t deal with anything that does indentation with 2 spaces.

PS: I forgot that I wrote a cloud init file merely hours ago. Yuck.


Terraform modules? In particular, https://github.com/terraform-aws-modules/ is a better solution for anyone looking for this IMO.

(Yes there is a comparison to 'terraform' in the Readme - I'm not convinced it's considering modules, versus just using the AWS provider directly. Another bonus of using tf modules over this: you can eventually realise you need to graduate to using the AWS provider directly, and do so quite easily (cf. `terraform state mv`).)


We've had a lot of success using the AWS CDK (our code[0]). We wrote a tonnn of stuff with Terraform back when we were building a low-code platform, but it was a ton of work to manage.

The AWS CDK has higher level "constructs"[1] that are really nice to use.

How does that compare to the state of the world for things like Pulumi or Terraform these days?

0: https://github.com/lunasec-io/lunasec/tree/master/lunatrace/...

Any of these files represents the infrastructure. The "bin" folder is the actual entry point.

1: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_...

We're using this as well as the Fargate SQS worker one to scale up backend job processing. Lambda is a pain and this works quite magically.


You were building a low-code platform, so it makes sense. I wouldn't promote that to every other DevOps team out there, because they'd shoot their feet regularly.




Or even better, Crossplane for auto-reconciling infra: https://aws.amazon.com/blogs/opensource/introducing-aws-blue...


Why change language when you can do the same with Cloudformation?

https://github.com/cfn-modules/docs

I would personally switch to CDK, but if you wanted to add predefined templates you wouldn't need to switch your existing language.


Sure, if you're already using it. Greenfield personally I'd prefer Terraform, but each to their own.


IMO, this is seeing everything as a nail when you're a hammer.

AWS provides extreme flexibility. This really only becomes valuable once you hit a scale at which you can optimize your infrastructure around the shape of your needs.

* Need some heavy workload to run in the most cost effective data center at the most cost effective time. Yep, AWS is great for that.

* Need a specialized GPU set. Yep, AWS is great for that.

* Need some crazy storage or memory setup. Yep, AWS is great for that.

For everything else, you should really consider using a PaaS with a good ol' Docker container.


> AWS provides extreme flexibility. This really only becomes valuable once you hit a scale that you can optimize your infrastructure around the shape of your needs.

This is upside down? When you hit the scale where you get value from modelling your infra around your business use cases that's when you go on-prem or use services that let you rent bare metal. Before that you use cloud.


It's interesting to see everyone's different views on Docker. Here you (at least to my reading) suggest that people who need simplicity should just chuck their app in a Docker container, but elsewhere I've seen it portrayed as something you shouldn't touch until you have an infrastructure team.

Maybe I'm interpreting this wrong and the HN-consensus path is something like bare VPS => Docker => use AWS directly and tweak everything how you want it?


Docker containers are pretty simple, and I've gotten AWS App Runner / Fargate / ECS / ECS Anywhere to work pretty well with them.

You get something working in a Docker container, push to your AWS repo, and App Runner grabs it and goes.


Yes because Kubernetes is simple…


I have used the AWS CDK for IaC and, for the most part, it works great. It also has escape hatches for constructs that are not yet implemented in the CDK API, so you can fall back on CloudFormation templates or whip up a custom resource if you are so inclined.

For the most part, the constructs provided out of the box have good defaults, and the documentation helps. The aws-cdk support team is also responsive on reported issues/bugs.

This is just based on my experience and I started off as a total noob on AWS.


Some feedback on this idea from someone who works at AWS and has worked on similar tools to make AWS easier, such as AWS Copilot (https://aws.amazon.com/blogs/containers/introducing-aws-copi...):

The hard part isn't making AWS easier; it is making a tool which is both easy and flexible enough that a user can do what they need (within reason), then outgrow your tool and move on. Specifically, any tool-based abstraction has the downside that if the only way to interact with your infrastructure is through the tool, then you severely limit the options for a user to make customizations. AWS will add features faster than you can keep up, and there will always be someone who can't use your tool unless you have feature X, Y, and Z.

At that point you can either add more and more features until your tool becomes as complex to use as all of AWS itself, or you can draw a line in the sand and say "you have outgrown my tool, it is time to move on". When they reach that point you should try to make it easy for them to avoid having to recreate everything from scratch themselves.

For AWS Copilot we tackled this problem in part by adding a "copilot svc package" command which spits out a full CloudFormation stack for your application and all its infrastructure, so that you don't have to restart from scratch if you decide you have outgrown Copilot. Copilot will help you deploy simple Docker-based container applications quickly and easily, but then if you have more complex needs you can "graduate" to the next experience by exporting things into a stable, productionized CloudFormation template that lets you customize the full range of AWS settings and resources to your heart's content.

I think your tooling will need something similar. The "outgrowing" section (https://github.com/nathants/libaws#outgrowing) is a bit weak right now, and that's what you need to really nail down for people to have the trust to start adopting a tool for serious usage. They need to know that once they outgrow the tool they will have a viable path that won't be a huge headache.

Otherwise I like your ideas here, and I think the YAML based DSL here is a fascinating abstraction of many common, simple use cases. Great job!


With cloud providers, it's quite often the case that Terraform providers are updated faster by dedicated teams than the clouds' own solutions.

CloudFormation and AWS CDK, for example, are updated more slowly than the corresponding Terraform providers.

So this argument is moot.


It's a mixed bag. From what I've seen it's 50/50. Sometimes Terraform providers are way behind, and sometimes they are ahead. Additionally, the low-level providers, which are easiest to keep up to date, don't really abstract much. It is mostly a one-to-one API mapping, so it doesn't make the AWS experience any "easier".

I'm specifically speaking about tooling that attempts to abstract and simplify the AWS experience. This is where the trickiest decisions have to be made about which properties and options to surface, how to surface them if so, and which ones should just be left out and require the user to drop down a level to the lower level provider as a fallback.


That's called Terraform modules. You don't need to understand what they deploy, but don't be surprised when the default values are often not the best for your use case.

Managers have long been looking for ways to hire cheap labor. The issue is that cheap labor lacks the knowledge required to build specialized things. And thankfully that will never change; no tool can provide that.


AWS employees are contributors to Terraform.


I am convinced that AWS is intentionally obtuse in order to promote job security for engineers and thus be the preferred platform


My experience as well; over the years I had to become really intimate with AWS, because I noticed more and more that the consultants we hire (we don't need full-time) for AWS or other cloud setups pull the wool over your eyes: throwing out terms you generally wouldn't know outside these setups, saying the word 'security' a lot, and showing extremely complex deployment charts which look great but contain many things you didn't know you needed and probably do not need. I cannot count the times I found $1,000-10,000/month of unneeded stuff in these setups at clients. At first it was just EC2 ('in the beginning'), which was more expensive than our VPS/bare-metal setups, but the price jumped up for only marginally more complex setups, so I decided to learn it all and see what was happening. Almost never justified, in my experience.


A lot of effort is put into the customer experience, however often it's by the feature devs. So it's good once you grok it. But obtuse to start and not good across products.

It really needs some love from a higher level thinker who's not steeped in the solution, but in the problem. Presumably this is a PM but there's just not a lot of good ones in the org to be frank. They're almost not worth consulting.


Same for devops in general and often, programming too. A few smart, cunning personal-life-optimizers drove this and a vast majority of innocent workers profit from this system. Personally, I profit from this and I am very scared the people that pay us eventually realize how easy everything could be.


As someone who has worked within AWS, and also used AWS as a customer at multiple companies, I totally understand that sentiment. But I would rather use their services than solve some of those problems myself.


> But I would rather use their services than solve some of those problems myself.

Sure, but vanilla setups should be trivial and optimised for cost. That is not to their or their consultants’ benefit.

I mean, whenever I asked someone 'how do I do this in AWS', they would tell me 'oh, that is a bog-standard blah setup, very simple'. So if that is so simple, why is it so hard to set up as a novice (with 20 years of rack-server bare-metal hosting experience)? I had to learn everything about AWS to figure out that all the experts calling things 'simple' were still not providing optimal solutions for my vanilla hosting wishes, unchanged for decades across many different applications. If you don't know literally everything about AWS and hired a consultant or a team to set it up, you are probably (massively) overpaying. And often, as I see at clients, for vanilla stuff that should be a one-click setup. But yes, I guess tool providers should provide this on top, not AWS itself.


I don't think AWS should be easy, any more than a compiler should be simple. It's a different layer of the stack. I do think most people should use Heroku/Render/etc on top of AWS, and I bet AWS agrees... because either way, AWS is making money.

AWS is a low-level tool, and that's okay.


It's funny to me that people aren't aware of CDK. It's literally no different from writing code. You don't have to depend on anyone, you can customize it to no end, and it's part of your codebase.

    from aws_cdk import aws_s3 as s3, aws_iam as iam, core

    class S3Template(core.Stack):
        def __init__(self, app: core.App, id: str, **kwargs) -> None:
            super().__init__(app, id, **kwargs)

            my_bucket = s3.Bucket(self, 'MyFirstBucket',
                                  bucket_name='balkaran-aws-cdk-s3-demo-bucket')
            my_bucket.add_to_resource_policy(
                iam.PolicyStatement(
                    actions=['s3:GetObject'],
                    resources=[my_bucket.arn_for_objects('*')],
                    principals=[iam.AccountPrincipal(account_id=core.Aws.ACCOUNT_ID)]
                )
            )
            my_user = iam.User(self, 's3User')
            my_bucket.grant_write(my_user)

    app = core.App()
    S3Template(app, "S3Template", env={'region': 'us-east-1'})
    app.synth()


Careful, sarcasm is hard to detect online


are you sure you can detect it offline?


I'm a bit lost as to how learning a new infrastructure-as-code tool makes AWS easier and more "fun." At best, you're still exposed to all the same complexities of the different AWS services, but now you gotta learn this new tool.

I do sympathize with the idea though, as I was recently trying to deploy a container for a small project, and didn't want to deal with all the complexities a fully featured cloud provider has. I found AWS Lightsail, GCP Cloud Run, and later settled on fly.io. It's hard for me to imagine using this tool from the beginning, or even graduating to this tool instead of upgrading to plain AWS and maybe Terraform.


There's a lot of Go code here using the Go AWS SDK that's useful as a non-trivial example in and of itself.

Is this IaC "light" for purposes of setting up and tearing down small applications for experiments or demos?

Obviously a lot of effort went into this. Of course, by the time you know AWS well enough to "make it easy", you've found the need for tooling which supports the harder stuff. And, frankly, someone new to AWS needs a lot of context just to understand the canonical ways to wire stuff together in AWS, to even comprehend which pattern will be useful.

For example, S3 events are really backed by SQS under the covers, with implied retries on failure, but no DLQ support, and a lambda that triggers on an arriving S3 event may need to account for "duplicate" events, etc. That's just one wrinkle that even a simplified configuration might require more complex understanding of the underlying AWS ecosystem.
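For instance, a handler for S3 events typically needs to be idempotent. A minimal sketch of deduplicating on the event's bucket/key/sequencer (the dedup store here is an in-memory set purely for illustration; a real handler would need something durable like DynamoDB conditional writes, since Lambda containers are recycled):

```python
import hashlib

# In-memory dedup store for illustration only; real handlers need a
# durable store because this set does not survive container recycling.
_seen = set()

def _dedup_key(record):
    # S3 event records carry the bucket name, object key, and a
    # "sequencer" value that orders events for the same key.
    s3 = record["s3"]
    raw = "{}/{}#{}".format(
        s3["bucket"]["name"],
        s3["object"]["key"],
        s3["object"].get("sequencer", ""),
    )
    return hashlib.sha256(raw.encode()).hexdigest()

def handler(event, context=None):
    """Process S3 event records, skipping duplicate deliveries."""
    processed = 0
    for record in event.get("Records", []):
        key = _dedup_key(record)
        if key in _seen:
            continue  # duplicate delivery; already handled
        _seen.add(key)
        # ... do the actual work here ...
        processed += 1
    return processed
```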


I think AWS is getting there, whether by hiding complexity with tools like AWS CDK, or with services like AWS Lightsail, designed for people who don't care about the building blocks (EC2, S3, Route53, SQS, SNS, etc...)


Saying AWS should be easy is like saying a Lego parts bin should be easy. Easy for what? Easy to build a robot monkey with? Even if legos are really easy to snap together, you still have to do the work of putting them together in the shape of a robot monkey.

This project seems to be a collection of pre-assembled lego parts, that can eventually be put together in the form of a robot monkey. But I think all of us actually just want the robot monkey, not several clumps of legos. We should have more feature-complete projects that we can push a button and it pops up for you. But that's very different than saying AWS should be easy.

I think we should stop obsessing over "code", and start focusing on robot monkeys. In other words, stop thinking you need more and more and more abstractions around calling AWS APIs, and focus more on publishing a complete working solution for a specific problem.

I don't give a shit about SQS queues and CloudWatch alerts and Lambdas and IAM role policies! I give a shit about a file-upload-website that automatically triggers a python program to process a file and deliver the result into an S3 bucket or Postgres database. That complete thing is the robot monkey. That's what we should be publishing; not a million lines of boilerplate that I still have to spend time and energy cobbling together into a robot monkey. I have built the same robot monkey over and over and over again in 5 different abstractions. I don't care what the abstraction is, I just want the robot monkey!

I think the reason we don't have more robot monkeys is that they're built within companies out of "code", and so we aren't allowed to open-source them, because it's "intellectual property". We need a new abstraction that is not code, and is not considered super secret proprietary information, so we can publish it as open source without thinking.

For example: what if you could point a tool at some part of your AWS account, and it exports a declarative configuration file that is a snapshot of how everything is plumbed together? The tool could strip out any "literals" and anonymize all strings, so that there is nothing proprietary in the end result except a lot of API calls to functions with random names. The end result would essentially be an unlabeled architectural diagram that you could execute. You could then publish that on the Robot Monkey Database with a description of what it does.
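The anonymization step could be sketched like this, assuming the export is already an ordinary nested dict: string values are replaced with stable pseudonyms, so the structure (the "unlabeled diagram") survives but nothing proprietary does. All names here are hypothetical:

```python
import hashlib

def anonymize(node, salt="robot-monkey"):
    """Recursively replace string literals in an exported config with
    stable pseudonyms; identical inputs map to identical pseudonyms,
    so references between resources stay consistent."""
    if isinstance(node, dict):
        # Keep the keys (resource/property names) so the diagram is
        # still readable; only the values are proprietary literals.
        return {k: anonymize(v, salt) for k, v in node.items()}
    if isinstance(node, list):
        return [anonymize(v, salt) for v in node]
    if isinstance(node, str):
        return "anon-" + hashlib.sha256((salt + node).encode()).hexdigest()[:8]
    return node
```

Because the mapping is deterministic, two resources that referenced the same literal still reference the same pseudonym after export, which is what keeps the "diagram" executable.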


As stated earlier this week, AWS's CF is your opportunity. Seriously, there is a business opportunity here. Make a multi-cloud interface that is easy to use, prevents lock-in, has reasonable defaults ... please?


Orchestrations that promise to be "cloud-agnostic", like Kubernetes, are in fact still bound to vendor lock-in through the services they run on.

This makes a pretty sound argument that a truly "cloud-agnostic" solution is never going to be built.

It's bad for cloud providers' business, so they will never unify.


You'll always be beholden to egress costs


is this a new infrastructure-as-code/config attempt?


AWS is very easy for me when all I have to do is launch ec2 instances ^.^



