AWS CDK Lambda Layer Merge Order

September 16, 2025

Today I lost a few hours to something that looked simple: Lambda layers in AWS CDK. On our team we use layers to bundle private packages that we reuse across functions. One of those bundled packages depends on Pydantic (a Python data-validation library). On top of that, we're big advocates of AWS Lambda Powertools for logging and observability, so we attach the Powertools layer to every Lambda function through a common construct.

Here's a rough example of how we add the Powertools layer in CDK:

from aws_cdk import Aws, aws_lambda as lambda_

# Pick the public Powertools layer matching the function's architecture.
power_tools_layer_name = (
    "AWSLambdaPowertoolsPythonV3-python312-x86:5"
    if kwargs["architecture"] == lambda_.Architecture.X86_64
    else "AWSLambdaPowertoolsPythonV3-python312-arm64:5"
)

# 017000801446 is the AWS-owned account that publishes the Powertools layers.
kwargs["layers"].append(
    lambda_.LayerVersion.from_layer_version_arn(
        scope,
        f"{construct_id}PowerToolsPython",
        layer_version_arn=f"arn:aws:lambda:{Aws.REGION}:017000801446:layer:{power_tools_layer_name}",
    ),
)

You can find the official docs for AWS Lambda Powertools here.

So far so good. But here's where it got weird: our PrivateLayer implementation started to fail with an error that clearly didn't come from our codebase. The culprit turned out to be Pydantic. The AWS Lambda Powertools layer also ships Pydantic for its Parser utility, so we had a version conflict. To resolve it, we 'simply' had to change the order of the layers, so that the PrivateLayer with Pydantic was listed before the Powertools layer and took precedence when the layers were merged. But no matter what order I listed the layers in CDK, the final CloudFormation template kept them in exactly the same order, which I first assumed was due to dependencies and build order. The Layers array in the generated template.json had its own idea:

"Layers": [
 {
  "Fn::Join": [
   "",
   [
    "arn:aws:lambda:",
    { "Ref": "AWS::Region" },
    ":017000801446:layer:AWSLambdaPowertoolsPythonV3-python312-arm64:5"
   ]
  ]
 },
 { "Ref": "SecretFunctionRequirementsLayer65704973" },
 "arn:aws:lambda:eu-west-2:1234567890:layer:private-project:1"
]

Note the different kinds of layer references here: an Fn::Join building the region-aware Powertools ARN, a Ref to a layer defined in the same stack, and a hard-coded ARN for the private layer.
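For context, here's a minimal sketch of how we tried to force the ordering on the CDK side before synthesis. Plain strings stand in for the ILayerVersion objects our construct actually builds; the ARNs are illustrative:

```python
# Stand-ins for the ILayerVersion objects we build in the construct
# (assumption: plain strings used purely for illustration).
private_layer = "arn:aws:lambda:eu-west-2:1234567890:layer:private-project:1"
powertools_layer = (
    "arn:aws:lambda:eu-west-2:017000801446:layer:"
    "AWSLambdaPowertoolsPythonV3-python312-arm64:5"
)

# Force the private layer to the front of the list before handing it to CDK.
layers = [powertools_layer]
layers.insert(0, private_layer)

print(layers[0])  # the private layer comes first... until CDK re-sorts it
```

No matter how we arranged this list, the synthesized template came out the same.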

Turns out this is intentional. CDK sorts layers internally to guarantee the same hash if you add the same layers in different orders. That's nice for consistency, but not so nice when the order matters - and it does!

"a reason for sorting layers is to ensure the same hash is calculated for Lambda functions which register the same layers in a different order, but are otherwise identical." - source

The fix? It's hiding in cdk.json. There's a feature flag called @aws-cdk/aws-lambda:recognizeLayerVersion. In new CDK v2 projects it's enabled by default, which means CDK takes control of layer sorting. If you want to keep the order you specify, set it to false:

"context": {
  "@aws-cdk/aws-lambda:recognizeLayerVersion": false
}

That's it. Once I flipped this flag, CDK stopped reordering my layers and finally respected the merge order I needed.

So yeah, that's my lesson of the day: if you ever run into weird Lambda layer order issues in CDK, remember this one flag: recognizeLayerVersion.