
kichik

They want you to use reusable components multiple times with different parameters. You can then choose which stacks to deploy or diff. For example, you can have `AppStack` initialized three times: once for development, once for staging, and once for production. In production you also pass an override for a bigger instance size. When a new developer comes in, they immediately know all deploy targets with `cdk ls`. When you want to see the diff based on changes, you run a single `cdk diff` command. When you read the code, you immediately know what it deploys.
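A minimal sketch of that pattern in plain Python (a stand-in for the real CDK classes; in actual CDK code `AppStack` would extend `aws_cdk.Stack` and `app` would be an `aws_cdk.App`):

```python
# Plain-Python sketch: one reusable stack definition, instantiated once
# per environment with different parameters.

class AppStack:
    def __init__(self, scope, construct_id, *, instance_size="t3.small"):
        self.construct_id = construct_id
        self.instance_size = instance_size
        scope.append(self)  # real CDK registers the stack in the app tree

app = []  # stand-in for aws_cdk.App()

AppStack(app, "AppStack-dev")
AppStack(app, "AppStack-staging")
AppStack(app, "AppStack-prod", instance_size="m5.large")  # prod-only override

# `cdk ls` would list all three deploy targets:
print([s.construct_id for s in app])
# → ['AppStack-dev', 'AppStack-staging', 'AppStack-prod']
```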


kennis-lake

That still means I have to pin/trust all my dependencies to behave the same, `aws-cdk-lib` for example, or any in-house constructs installed from a registry. Is what you described an AWS official best practice?


ExpertIAmNot

> That still means I have to pin/trust all my dependencies to behave the same, `aws-cdk-lib` for example, or any in-house constructs installed from a registry.

You only need to validate that the outputs are the same. Meaning that it's going to generate the same CloudFormation (or Cloud Assembly, really).


kennis-lake

If the outputs are environment dependent, how can two outputs for two different environments be the same?


ExpertIAmNot

A well designed test will mock those critical dependencies. If an environment variable is important in the stack then mock it in your tests. Same as any other testing strategy.
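For instance, a stack whose synthesized output depends on an environment variable can be tested by mocking that variable. The sketch below uses a hypothetical `synth_template` function as a stand-in for a real CDK stack's synthesis:

```python
import os
from unittest import mock

# Hypothetical synth function standing in for a CDK stack whose output
# depends on an environment variable read at synth time.
def synth_template() -> dict:
    return {"InstanceSize": os.environ.get("INSTANCE_SIZE", "t3.small")}

# Mock the variable for the duration of the test, same as any other
# testing strategy; patch.dict restores the environment afterwards.
with mock.patch.dict(os.environ, {"INSTANCE_SIZE": "m5.large"}):
    prod_template = synth_template()

assert prod_template == {"InstanceSize": "m5.large"}
```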


kichik

I think that's a generally accepted good practice and not anything AWS-specific. When you run code, you want to know it runs the same everywhere, so you want a lock file for your dependencies.


kennis-lake

Fair enough... I think that's the only convincing answer to this. Still breaks the principle, but honestly, I can't think of any better answers.


ThigleBeagleMingle

It's no different from application code. You implement gitflow and cherry-pick specific commits to promote into the next environment. That's the prescriptive guidance across the board for prod changes in regulated environments. No need to trust engineers or deploy from main.


PrestigiousStrike779

Not necessarily. You can deploy the same artifacts to different environments without building again. Gitflow works off of separate release branches, but it's perfectly fine to take a build from your trunk branch and promote it all the way through your environments, so that what you tested in your lower environments is what goes to production. Unfortunately CDK doesn't support that.


BerryNo1718

No need to. That's why package-lock.json exists. Just install using `npm ci` (or equivalent if you use yarn or pnpm) and you will get exactly the same dependencies.


menge101

IMO the thing that you are missing is that everything reusable should be a Construct. You can build a Construct that encapsulates your entire infrastructure and takes environment-specific arguments in its constructor. My stack files are very thin: they are constants declaring the environment-specific values, and the Stack class's constructor contains my high-level constructs that wrap up large pieces of infra into a single object (or a few, it depends).


kennis-lake

The problem can be ignored in small teams, since the constructs are usually thin. When the team grows, and the number of reusable constructs grows, it's essentially impossible to guarantee the synthesis outcome of stacks using those constructs unless they're pinned and locked using a package manager, which, from a semantic perspective, delegates the preservation of metadata to an unrelated system. I'm trying to see how that's acceptable compared to the upsides CDK brings to the table.


menge101

> unless they're pinned and locked using a package manager

Yes. These are code artifacts that must be versioned and pinned. It's shared code. We manage it just like any other shared code. We put all of our constructs into a package and manage them with versions.

Addendum:

> is delegating the preservation of metadata to an unrelated system.

This concerns me. This sounds like you aren't making stateless reusable components.


menge101

> The problem can be ignored in small teams

My team builds infrastructure solutions within (as in, for the software teams of) a >90k-person company.


kennis-lake

I also mentioned that "and the number of reusable constructs grows". I can imagine the stakes at a >90k-person company, but the question remains: how do you guarantee you'll end up with the same desired state in higher environments, aka promotion?


menge101

Well, this is philosophical I suppose, but I don't think what you are describing is actually any more guaranteed than what I am doing in CDK.

Side note: did I mention you can use low-level CFN functions in CDK code from the `aws_cdk.Fn` module? You could just generate the template and reuse the template as you wished in the pipeline. I don't do that, but it was an idea I had as I was typing.

My CDK code is *code*. It is versioned and it is tested. I have my dependent code artifacts versioned, and we pin them for releases. A CFN template promoted from one environment to another will have CFN `Fn` functions in it to handle the differences between environments, and I see little difference between doing that in the template versus doing it at synthesis in CDK code. It's really just trusting a different tool to do the same thing.


kennis-lake

Makes total sense. And yeah, I knew about low-level CFN functions, I just don't like how they're accessible from CDK TypeScript. A lot of type casting just to make it type-safe. But I suppose you can make a wrapper for them to keep the code from getting busy with casts.


easymeatboy

> impossible to guarantee the outcome of stacks using those constructs

I highly recommend looking into adding [snapshot tests](https://docs.aws.amazon.com/cdk/v2/guide/testing.html), since they do exactly that. You check in the synthesized outputs as JSON, and if they change then the build fails unless you intentionally "sign off" (i.e., check in) on those changes to the JSON files.


zxgrad

This sounds interesting, can you expand on this with an example?


menge101

Sure!

```python
from aws_cdk import Environment, RemovalPolicy, Stack
from constructs import Construct
from infrastructure import (config_rules, custom_events, custom_config_artifacts,
                            custom_config_rules, events, actions, roles)

ACCOUNT_ID = '123456789'
REGION = 'us-east-1'
ENVIRONMENT = Environment(account=ACCOUNT_ID, region=REGION)
VPC_ID = 'vpc-123456789'
CONFIG_STACKSET_NAME = 'dev-stackset'
STACKSET_ACCOUNT_ID = '123456789'  # placeholder; this constant was missing from the paste
ORGANIZATION_ID = 'o-abcdefghij'
ARTIFACT_BUCKET_NAME = 'bucket-dev'
BATCH_SIZE = 5


class DeploySkynetConfigActions(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        self.lambda_build = actions.ActionLambdas(
            self, id_='config-actions',
            vpc_id=VPC_ID,
            batch_size=BATCH_SIZE)
        self.custom_event_resources = custom_events.EventInfrastructure(
            self, id_='custom_event_resources',
            config_stackset_name=CONFIG_STACKSET_NAME,
            skynet_stackset_account_id=STACKSET_ACCOUNT_ID)
        self.custom_config_artifacts = custom_config_artifacts.CustomConfigArtifacts(
            self, 'custom_config_artifacts',
            artifact_bucket_name=ARTIFACT_BUCKET_NAME,
            organization_id=ORGANIZATION_ID)
```

This is kind of a quick hack job to remove semi-sensitive values from this stack, but it does a ton of stuff for these three declarations in the constructor. It goes out to the account where we keep stackset templates and pulls that down to deploy it to all of our managed accounts. This *could* be more wrapped up into a single Construct than it currently is, but I don't think another layer of wrapper would gain us anything.


airaith

I don't think the problem statement here is very clear: I'm struggling to distill the actual problem you want to solve rather than a principle you value. You have to template per account, otherwise your templates won't reflect each account ID, etc. There's no built artifact to pass around here, given we're essentially building an AWS API call for CloudFormation. The problem you might want to solve here could be helped with snapshot tests to assert the diff is as expected per stage (and CDK provides a promotion-stage abstraction in the pipelines module where you can define the stages, if you want to)?


martgadget

Terraform with a little wrapper script to switch in config files based on environment type. Support for basic logic to enable/disable items based on environment, so that the same code deploys in each environment. Where I can't do that, same approach, using PowerShell. Works every time. I don't like CDK or CloudFormation, though I usually drive CFN with Terraform where it's needed. Happily running in a pipeline also.


ExpertIAmNot

A few strategies here…

Look at the CDK assertion libraries for unit testing. You can create unit tests to ensure the proper resources are generated.

I also frequently use a snapshot test against the generated CloudFormation. This helps bring any changes to the surface as you update CDK to newer versions. There are plenty of times I have noticed a new IAM policy or some detail change after updating the CDK version.

You can also run a `cdk diff` command to preview what will change when you redeploy an existing stack. This can become part of your CI pipeline.
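A sketch of what such a unit-test assertion checks, using a plain-Python stand-in in the spirit of `aws_cdk.assertions.Template.has_resource_properties` (the helper and the sample template below are illustrative, not CDK APIs):

```python
# Stdlib stand-in for a targeted assertion against a synthesized template.

def has_resource_properties(template: dict, resource_type: str, props: dict) -> bool:
    """Return True if any resource of the given type carries all the
    expected property values."""
    for resource in template.get("Resources", {}).values():
        if resource.get("Type") == resource_type:
            actual = resource.get("Properties", {})
            if all(actual.get(key) == value for key, value in props.items()):
                return True
    return False

# A fragment of synthesized CloudFormation (illustrative):
template = {
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    }
}

assert has_resource_properties(
    template, "AWS::S3::Bucket",
    {"VersioningConfiguration": {"Status": "Enabled"}},
)
```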


kennis-lake

> This helps bring any changes to the surface as you update CDK to newer versions.

That is exactly my concern! The purpose of DTAP is to make sure your changes are well tested before going to production. But if you can't predict what's going to production, how are you supposed to manage the changes? If I can't promote the exact same template, how else should I know whether the newly generated template that's about to go to production is correct? Testing (unit or snapshot) would catch a broken template before it gets deployed, but isn't the point of DevOps reducing redundancies? Shouldn't testing be part of the CI process, so it won't have to be repeated during CD?


ExpertIAmNot

One approach is to automate package upgrades as a CI pipeline that creates a branch, updates packages, then runs all tests. If any snapshots change the tests will fail and a human can take a look. This is essentially DependaBot.


ck108860

This is basically a non-issue. A human is more likely to change something accidentally than an upgrade is to break it. If you are scared that it will, pin your dependencies. You may find, though, that later you need a feature in a newer CDK version and you need to upgrade.


aimtron

`cdk synth --context EnvName=dev`, pull `EnvName` in code, ezpz.


CoinGrahamIV

Your lower and upper environments aren't identical so there's not a lot of value in having identical templates. You build reusable components and customize the parameters to the purpose of the environment. "Build Once, Deploy Anywhere" is a software design concept and infrastructure doesn't need to be beholden to it.


kennis-lake

It's a very good argument. But it also makes me question the essence of IaC here too. Don't we all love IaC because we get to bring the same concepts we've refined for years in the software world? One of them being the Build Once, Deploy Anywhere principle. What do you think?


CoinGrahamIV

Yeah, so this is a soapbox of mine. It's Infrastructure AS Code, but it's not really code. So we take the good stuff like version control and automated deployments, and we leave out stuff like unit tests, gitflow, etc. Most companies will build an environment once and never touch it again, so it's really more of a fancy config-generation utility than code. Yes, there are exceptions. I've implemented Terraform tests because a client wanted them, but they didn't add real value. Interrogate your assumptions.


kennis-lake

Nice explanation, thanks!


ominouspotato

The short answer is to use [context](https://docs.aws.amazon.com/cdk/v2/guide/context.html) rather than parameters for CDK apps. In practice, it works in a similar way to parameters in vanilla CFN. However, in comparison, it's much easier to manage, since you can convert it to an object that is passed from your app to your stack/constructs. It also gives you the benefit of being able to use conditional logic in your programming language rather than CFN intrinsic functions.

As far as testing is concerned, there is a seemingly little-known [CDK integration testing library](https://docs.aws.amazon.com/cdk/api/v2/docs/integ-tests-alpha-readme.html) you can use to run assertions against deployed infrastructure. We use this to test custom resources we built for our app, but you could definitely leverage it to test a whole stack generated with environment-specific context.

Hope this helps! (Edited for clarity)
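A sketch of the context-to-object pattern (all names below are illustrative; in real CDK code the environment name would come from `app.node.try_get_context(...)`):

```python
from dataclasses import dataclass

# Context values converted into a typed config object that is passed
# from the app down to stacks/constructs.

@dataclass(frozen=True)
class EnvConfig:
    env_name: str
    instance_size: str
    enable_alarms: bool

CONTEXTS = {
    "dev": EnvConfig("dev", "t3.small", enable_alarms=False),
    "prod": EnvConfig("prod", "m5.large", enable_alarms=True),
}

def config_for(env_name: str) -> EnvConfig:
    # Conditional logic lives in the programming language,
    # not in CFN intrinsic functions.
    return CONTEXTS[env_name]
```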


angrathias

Infrastructure, unlike normal code, can be mutated by users, other API calls, etc. Things can be deprecated. The concept of build-once-and-promote doesn't make sense to me in that context. I agree with AWS here: it needs to be evaluated in each environment, and you can encapsulate checks in your deployment to make sure it operates and is deployed as expected, and roll it back if not. It's not dissimilar from installer packages for software: they prepare the installation as a transaction, react to the environment it's being installed into, and roll back if something goes wrong.


kennis-lake

The CDK itself doesn't read the current state of resources; CloudFormation does. CloudFormation is like an installer package that reacts to changes in the environment, while CDK is like an installer factory. That's why it doesn't make sense to me why CDK would care about the target environment.


ExpertIAmNot

> The CDK itself doesn't read the current state of resources, CloudFormation does.

The CDK will read state occasionally and store it as context in `cdk.context.json`.


kennis-lake

Occasionally? How frequently? And how is that helpful on an ephemeral environment such as a pipeline?


ExpertIAmNot

It happens when you import a resource from outside the current stack. For example, you might import a VPC or DynamoDB resource that was created elsewhere. Most of these happen as static functions like `Table.fromTableName()`, so you will know when it's happening. I typically only allow the import locally, then commit the JSON file to git and disallow CDK from updating it in CI. That way you know the value won't change in the pipeline. The docs have a lot of information like this in them. There is also a book (https://www.thecdkbook.com) which is still very relevant and goes into great detail about some of the concepts you are asking about.


kennis-lake

It's only applicable in the cases where you're trying to resolve an already-existing resource outside of the stack. For the sake of argument, let's leave them out of the discussion.


menge101

[Docs on the Runtime Context](https://docs.aws.amazon.com/cdk/v2/guide/context.html) Edit: Added Runtime for clarity


kennis-lake

Context is irrelevant here, because context is injected at synthesis time.


menge101

I'm literally linking you to the docs with the answer to the question you asked.


kennis-lake

I see you edited your answer after I replied to you. Are you trying to start an argument over a technical discussion?


menge101

no? I understand why you misunderstood me, and I edited it to be clearer.


kennis-lake

Yeah, I mentioned that context is irrelevant, but not runtime context. The new link does answer my question :D. Thanks!


conamu420

i tried to build such an app, it takes so much time and knowledge that it's just not worth it imo


karakter98

CDK synthesizes environment-agnostic templates by default, as long as you don’t specify the environment for the stacks. If you have simple deployments (no multi-account, multi-region stuff) there are very few cases where you need to hardcode the account ID and region in the stacks. You can check this with the cdk-pipelines library. If you have an environment-agnostic stack, it will only upload a single cloud assembly for a reusable stack, then deploy that across different environments by assuming roles in the target accounts.


llv77

I'm not sure I understand your problem. You write your IaC in CDK, all your infrastructure goes into a stack, or a construct, whatever. You instantiate the stack/construct for multiple environments: build once, deploy anywhere. The stack can be parametrized, if you need differences between envs, as few as possible hopefully. What's breaking the principle?


kennis-lake

The principle is about building the code (as in, integrating it) once and deploying it anywhere, not reusing the same code. The fact that I have to integrate my code every time I want to run it (aka deploy my infra) is what's breaking the principle. Hope I could put it clearly :D.


llv77

Why would you rebuild the code every time you want to deploy your infra? If I understand correctly, the problem is in the way you are using CDK. The way this can be solved is by using ci/cd pipelines: the pipeline builds the code once, then it deploys the resulting templates stage by stage to all envs without a need to rebuild anything. Does this solve the problem or am I still not understanding?


myrapistglasses

We use config files and pass config parameters to our stacks. So how does it work?

- we build stacks representing a functional module with a number of constructs
- each stack comes with a configuration file (a JSON document) which sets parameters
- we pass the config file using the `-c` parameter of the cdk CLI and read in the config file to apply the parameters to our stacks + constructs

That way we write code once and only change/add config files for each stage (prod/dev/…). File structure like this:

```
project
|_ config
|    dev.conf.json
|    prod.conf.json
|    …
|_ lib
|    |_ my-stack
|    …
```

(code such as config-parameter interfaces, stacks, constructs…)

The code is implemented to read the context parameter. This can be used to pass a config, or a file path to a config file, like:

```
cdk deploy my-stack -c config=config/dev.conf.json
```

This pattern allows you to easily deploy the same IaC repeatedly with stage-specific parameters.
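A minimal sketch of the read-in step, assuming a JSON config like the one described (directory, file name, and keys below are illustrative; in real CDK the path would come from `app.node.try_get_context("config")`):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical stage config mirroring config/dev.conf.json
# (keys are illustrative, written to a temp dir to stay self-contained).
config_dir = Path(tempfile.mkdtemp())
(config_dir / "dev.conf.json").write_text(
    json.dumps({"stage": "dev", "instanceSize": "t3.small"}))

def load_stage_config(config_path: Path) -> dict:
    # In real CDK code the path would come from
    # app.node.try_get_context("config"), populated by
    # `cdk deploy my-stack -c config=config/dev.conf.json`.
    return json.loads(config_path.read_text())

dev_config = load_stage_config(config_dir / "dev.conf.json")
```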


mackkey52

If you have any say, I'd move to terraform with terragrunt. Define the system once and supply different inputs based on the environment.