I recently deployed an app to AWS ECS, and since it is in its early stages, I wanted to pay as little as possible for storage. So instead of a dedicated MongoDB instance, I ran MongoDB as a service in my ECS cluster.
I also opted for Fargate Spot capacity, since I could live with a container restarting once a day or so. But this meant I had to set up persistent storage so that I wouldn’t lose data when the MongoDB containers cycled.
Using EFS, I created a mount point for the MongoDB container but encountered the following error:
chown: changing ownership of '/data/db': Operation not permitted
In order for MongoDB to set itself up properly, it needs the right permissions. Looking at the official MongoDB Docker image, you can see that it uses UID/GID 999 (user/group mongodb) to create the directory /data/db, where it stores its data.
This means the EFS access point should grant access to this specific user and group. Using Terraform, I defined my EFS file system and an access point for MongoDB:
# EFS
resource "aws_efs_file_system" "efs_storage" {
  creation_token = "${var.resource_prefix}-storage-fs"
  tags           = var.root_tags
}

# Mount EFS to the same subnet MongoDB is in
resource "aws_efs_mount_target" "efs_mount_private" {
  file_system_id  = aws_efs_file_system.efs_storage.id
  subnet_id       = var.subnet_id_private
  security_groups = [var.security_group_id_private]
}

# Access point for MongoDB
resource "aws_efs_access_point" "efs_access_storage_mongodb" {
  file_system_id = aws_efs_file_system.efs_storage.id

  posix_user {
    gid = 999 # 999 is the user/group ID used by the mongodb Docker image
    uid = 999
  }

  root_directory {
    creation_info {
      owner_gid   = 999
      owner_uid   = 999
      permissions = "755"
    }
    path = "/mongodb-data"
  }

  tags = var.root_tags
}
This gives the user and group specified in the MongoDB image access to the path /mongodb-data on the EFS drive.
MongoDB task definitions
Now that EFS is ready to be used by MongoDB, the MongoDB ECS service needs to mount it correctly. This means that the container definition of the MongoDB task definition needs to:
- Specify the mount point
- Specify the volume
Mount point
In Terraform, the mount point is defined in an aws_ecs_task_definition’s container_definitions. It is equivalent to mounting a local MongoDB container with a named volume to a path on your local machine: -v mongodb-volume:/data/db.
container_definitions = jsonencode(
  [
    {
      essential = true
      name      = "${var.resource_prefix}-mongodb"
      image     = "mongo:noble"
      ...
      mountPoints = [
        {
          sourceVolume  = "mongodb-volume"
          containerPath = "/data/db"
        }
      ]
    },
  ]
)
Volume
In Terraform, the volume configuration has its own section. Note that in my case, the efs_volume_configuration does not need to specify the root path, because the authorization_config specifies an access point, which already defines a path.
volume {
  name = "mongodb-volume"

  efs_volume_configuration {
    file_system_id          = aws_efs_file_system.efs_storage.id
    transit_encryption      = "ENABLED"
    transit_encryption_port = 2999

    authorization_config {
      access_point_id = aws_efs_access_point.efs_access_storage_mongodb.id
    }
  }
}
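For context, this volume block lives inside the same aws_ecs_task_definition resource as the container_definitions above. A rough sketch of how the pieces fit together (the resource name and the cpu/memory values here are assumptions, not from my actual config):

```hcl
resource "aws_ecs_task_definition" "mongodb" {
  family                   = "${var.resource_prefix}-mongodb"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  # Container definition with the mountPoints entry, as shown earlier
  container_definitions = jsonencode([
    # ...
  ])

  # Volume configuration pointing at the EFS access point, as shown earlier
  volume {
    name = "mongodb-volume"
    # efs_volume_configuration { ... }
  }
}
```

The key detail is that mountPoints.sourceVolume in the container definition must match the volume block’s name.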
Gotchas
Depending on how the security groups in your infrastructure are set up, this may or may not apply to you. In my case, the following needed to be true:
- The security group attached to the ECS tasks needs to allow outbound traffic on port 2049 for NFS
- The security group attached to the EFS mount target needs to allow inbound traffic on port 2049 for the same traffic
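As a sketch, those two rules might look like the following in Terraform. Here, var.security_group_id_ecs_tasks is an assumed variable for the ECS tasks’ security group; var.security_group_id_private is the one attached to the EFS mount target above:

```hcl
# Allow the ECS tasks to reach EFS over NFS (outbound)
resource "aws_security_group_rule" "ecs_to_efs" {
  type                     = "egress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = var.security_group_id_ecs_tasks # assumed variable
  source_security_group_id = var.security_group_id_private
}

# Allow the EFS mount target to accept NFS traffic from the ECS tasks (inbound)
resource "aws_security_group_rule" "efs_from_ecs" {
  type                     = "ingress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = var.security_group_id_private
  source_security_group_id = var.security_group_id_ecs_tasks # assumed variable
}
```

Scoping the rules to the peer security group, rather than a CIDR range, keeps the NFS port closed to everything except the tasks that actually need it.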