Written by Thierry Delran, DevOps Engineer

AWS Thinkbox Deadline 

AWS Thinkbox Deadline is a powerful rendering management software used in the film, animation, and visual effects industries. It manages the rendering of complex computer graphics projects through the automation of rendering pipelines and optimization of resources.

One of the key features of AWS Thinkbox Deadline is its ability to scale up or down to meet the demands of any rendering workload. This makes it an ideal solution for studios that sometimes require a high level of rendering capacity but don't want to invest in their own infrastructure. Deadline also gives studios the ability to select more cost-effective EC2 Spot instances to address rendering requirements. TrackIt has recently published a blog post titled 'Using the Spot Plugin to Burst Render Farms' that explores this topic in detail.

Thinkbox Render Farm Deployment Kit

AWS Thinkbox Deadline can be deployed in various environments including on-premises, cloud, and hybrid setups. When deploying Deadline in a studio in the cloud (SIC) environment, AWS recommends using a deployment kit. 

The Thinkbox RFDK (Render Farm Deployment Kit) is a collection of tools that allow the deployment of Deadline in an SIC environment quickly and easily. The kit is a set of CloudFormation templates that automate the deployment of the required AWS infrastructure for running Deadline. This includes the setup of Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic File System (EFS), and other necessary resources.

Once the infrastructure is deployed, the Thinkbox Deadline installer can be used to install the software on EC2 instances. The installer can be configured to automatically connect to the repository and allow users to manage their render farm from a single location.

The following tutorial details the steps required to deploy Deadline for a studio in the cloud environment with the Thinkbox RFDK (Render Farm Deployment Kit) v1.3.0.


Specific Integrations RFDK adds to a Studio in the Cloud (SIC) Environment

Prerequisites

The RFDK script can be launched from any computer, but to follow Thinkbox RFDK best practices, it should be hosted in CodeCommit and launched from the Cloud9 integrated development environment (IDE).

Users must have the AWS CLI (configured for programmatic access) and Git installed on their machines before proceeding with the rest of the tutorial.

Creating a CodeCommit Environment

The deployment of a CodeCommit environment facilitates the maintenance and deployment of Deadline render farms. CodeCommit was chosen for simplicity, but the script can also be hosted in other version control platforms such as GitLab or GitHub. This and the following step can be skipped if choosing to run the RFDK on a different machine. 

Prior to creating the repository, it is important to ensure that the AWS CLI is configured to access the AWS account programmatically. The following command can be used to create the repository:

aws codecommit create-repository --repository-name deadline-rfdk --repository-description "Deadline rfdk repository" --tags Team=Trackit

Take note of the repository ID and name in the command output since they will be required in later steps of the tutorial.
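The repository name and ID can be pulled out of the create-repository response without hunting through the JSON. The snippet below is a hypothetical illustration against a sample payload (the real JSON comes from the AWS CLI command above, and jq must be installed):

```shell
# Sample create-repository response (placeholder values, not a real repository ID).
RESPONSE='{"repositoryMetadata":{"repositoryName":"deadline-rfdk","repositoryId":"12345678-abcd-ef01-2345-67890abcdef0"}}'
# jq -r extracts the fields as raw strings.
echo "$RESPONSE" | jq -r '.repositoryMetadata.repositoryName'   # deadline-rfdk
echo "$RESPONSE" | jq -r '.repositoryMetadata.repositoryId'    # 12345678-abcd-ef01-2345-67890abcdef0
```

The same `jq -r` pattern is used later in the tutorial to extract the key material from the create-key-pair output.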

Creating a Cloud9 Environment

A Cloud9 environment enables the modification of the RFDK script using an integrated development environment (IDE). 

Access the AWS console and ensure the appropriate region is selected. Navigate to the Cloud9 service and select “Create environment”. In the Cloud9 create environment window, fill in the fields as follows:

  • Name: “DeadlineStack”
  • Description: “Deadline deploy & maintenance environment”
  • Environment type: “New EC2 instance”
  • Instance type: “t3.small”
  • Platform: “Amazon Linux 2”
  • Timeout: "4 hours" in accordance with Thinkbox best practices, with the option to reduce it for cost savings as needed.
  • In Network settings, use SSM (AWS Systems Manager Agent) to enable users to manage their AWS resources and applications through a unified interface. 
  • For VPC settings, select the studio VPC and its associated public subnet. 
  • Click Create to complete the process. 
  • Once the environment is created, access the Cloud9 environment list and select “Open” from the “DeadlineStack” line.


Upon launching the IDE, open the terminal and execute the following commands to install the Long Term Support (LTS) version of Node.js, which is recommended by AWS for the RFDK:

nvm install 14
echo "nvm use 14" >> ~/.bash_profile

The next step is to expand the attached EBS volume to 40 GiB. This is achieved by creating a resize.sh file in the IDE and copying in the script provided below:

#!/bin/bash
# Specify the desired volume size in GiB as a command line argument. If not specified, default to 20 GiB.
SIZE=${1:-20}
# Get the ID of the environment host Amazon EC2 instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
INSTANCEID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/instance-id 2> /dev/null)
REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/placement/region 2> /dev/null)
# Get the ID of the Amazon EBS volume associated with the instance.
VOLUMEID=$(aws ec2 describe-instances \
--instance-ids $INSTANCEID \
--query "Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId" \
--output text \
--region $REGION)
# Resize the EBS volume.
aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE
# Wait for the resize to finish.
while [ \
"$(aws ec2 describe-volumes-modifications \
  --volume-ids $VOLUMEID \
  --filters Name=modification-state,Values="optimizing","completed" \
  --query "length(VolumesModifications)" \
  --output text)" != "1" ]; do
sleep 1
done
# Check if we're on an NVMe filesystem
if [[ -e "/dev/xvda" && $(readlink -f /dev/xvda) = "/dev/xvda" ]]
then
# Rewrite the partition table so that the partition takes up all the space that it can.
sudo growpart /dev/xvda 1
# Expand the size of the file system.
# Check if we're on AL2
STR=$(cat /etc/os-release)
SUB="VERSION_ID=\"2\""
if [[ "$STR" == *"$SUB"* ]]
then
  sudo xfs_growfs -d /
else
  sudo resize2fs /dev/xvda1
fi
else
# Rewrite the partition table so that the partition takes up all the space that it can.
sudo growpart /dev/nvme0n1 1
# Expand the size of the file system.
# Check if we're on AL2
STR=$(cat /etc/os-release)
SUB="VERSION_ID=\"2\""
if [[ "$STR" == *"$SUB"* ]]
then
  sudo xfs_growfs -d /
else
  sudo resize2fs /dev/nvme0n1p1
fi
fi
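One detail of the script above worth noting: the SIZE=${1:-20} line uses shell default-value parameter expansion, which is why the desired size is passed as a command-line argument in the next step. A minimal illustration of the idiom:

```shell
# ${1:-20} evaluates to the first positional argument if one is given,
# otherwise it falls back to the default value 20.
size_of() { echo "${1:-20}"; }
size_of       # prints 20 (no argument, default applies)
size_of 40    # prints 40 (argument overrides the default)
```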

Open a terminal and run "bash resize.sh 40" (40 GiB is the size recommended by Thinkbox).

Next, upgrade pip, install virtualenv, and create a Python virtual environment:

python -m ensurepip --upgrade
python -m pip install --upgrade pip
python -m pip install --upgrade virtualenv
python -m venv env

To activate the Python environment, execute the command source env/bin/activate. This command needs to be executed every time the Cloud9 IDE environment is accessed.

Next, share the environment with the account administrator or any user who will maintain the render farm. To do this, click the Share option in the top-right section of the IDE.


Click on the cog next to the Share option to enter Preferences:

  • On the left panel, go to "AWS Settings" and uncheck "AWS managed temporary credentials". This causes the Cloud9 environment to fall back to the Cloud9 instance's IAM role.

Now go to the IAM service in the AWS console and look for the role "AWSCloud9SSMAccessRole":

  • Select it to view its currently associated permissions policies
  • Select "Add permissions" > "Attach policies"
  • Add the "AdministratorAccess" policy

Important: Anyone who connects to the Cloud9 environment will now have administrator access. However, this is currently the only solution that allows shared users to maintain the deployment. Using a CLI access key/secret key instead would also mean sharing those credentials with the other users.

Now clone the repository previously created using the following commands:

CC_REPO_NAME=deadline-rfdk
aws codecommit get-repository \
 --repository-name ${CC_REPO_NAME} \
 --query repositoryMetadata.cloneUrlHttp \
 --output text \
 | xargs git clone

Copy the content from TrackIt's public repository and move it into the freshly cloned deadline-rfdk folder:

git clone https://github.com/trackit/Deadline-rfdk-public
cd Deadline-rfdk-public
git checkout SIC-Independant-deployment
cd ..
mv Deadline-rfdk-public/DeadlineStack ./deadline-rfdk
rm -rf Deadline-rfdk-public
cd deadline-rfdk/

Install AWS CDK:

npm update -g aws-cdk

Bootstrap the environment:

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
ACCOUNTID=$(aws sts get-caller-identity --query "Account" --output text)
REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/region 2> /dev/null)
cdk bootstrap aws://$ACCOUNTID/$REGION

The Cloud9 environment is now ready for use.

Creating a Key Pair for the EC2 Test Instance

  1. Create the EC2 Key Pair:
    • From the Cloud9 instance
    • Install jq: sudo yum install jq
    • Command: aws --region <REGION> ec2 create-key-pair --key-name "deadlinetest" | jq -r ".KeyMaterial" > ./deadlinetest.pem
    • Note: <REGION> is the region in which to create the key pair
    • This command creates a new key pair and saves the private key in a file named deadlinetest.pem.
  2. Store the Key Pair in AWS Secrets Manager:
    • Store the private key in Secrets Manager.
    • Command: aws secretsmanager create-secret --name DeadlineTestSecret --secret-string file://deadlinetest.pem
    • This creates a new secret named DeadlineTestSecret containing the private key.
  3. Retrieve the Key Pair When Needed:
    • To access the key pair, use Secrets Manager.
    • Command: aws secretsmanager get-secret-value --secret-id DeadlineTestSecret --query SecretString --output text > deadlinetest.pem
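After retrieving the .pem file, it is good practice to restrict its permissions, since SSH/key-based tooling typically refuses private keys that are readable by other users. A quick illustration (run against a placeholder file here; apply it to the real deadlinetest.pem after retrieval):

```shell
# Restrict the private key to owner read-only.
touch deadlinetest.pem        # placeholder; normally produced by the retrieval step
chmod 400 deadlinetest.pem
stat -c '%a' deadlinetest.pem # prints 400 on Linux
```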

Configuring the RFDK Deployment

Once you have your environment set up, repo data available, and the Python env activated:

  • Navigate to the "DeadlineStack" folder
  • Set up the environment variables with the following commands. The latest compatible versions are listed at https://github.com/aws/aws-rfdk/releases, though that page is not always kept up to date; until it is, it is preferable to check the Nimble Studio | us-west-2 (amazon.com) page:
RFDK_VERSION=1.3.0
CDK_VERSION=$(npm view aws-rfdk@$RFDK_VERSION 'dependencies.aws-cdk-lib')
DEADLINE_VERSION=10.3.1.4
echo "Using RFDK version ${RFDK_VERSION}"
echo "Using CDK version ${CDK_VERSION}"
echo "Using DEADLINE version ${DEADLINE_VERSION}"

  • Before installing dependencies, check that the RFDK value in the setup.py file (within the DeadlineStack folder) matches the RFDK version you want to deploy, e.g.:
install_requires=[
       "aws-cdk-lib==2.114.1",
       "aws-rfdk==1.3.0"
   ],

You might also need to update the aws-cdk-lib version; if so, you will get an error message when installing the requirements.
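A quick sanity check of the version pins can save a failed install. The snippet below is a hypothetical helper that greps the pins out of setup.py and compares them to the versions selected earlier; it is demonstrated against a sample file, so point SETUP_FILE at the real setup.py in practice:

```shell
RFDK_VERSION=1.3.0
CDK_VERSION=2.114.1
SETUP_FILE=sample_setup.py
# Sample install_requires block standing in for the real setup.py.
cat > "$SETUP_FILE" <<'EOF'
install_requires=[
    "aws-cdk-lib==2.114.1",
    "aws-rfdk==1.3.0"
],
EOF
# grep -q exits non-zero if a pin does not match, so the echo only runs on success.
grep -q "aws-rfdk==${RFDK_VERSION}" "$SETUP_FILE" && echo "RFDK pin OK"
grep -q "aws-cdk-lib==${CDK_VERSION}" "$SETUP_FILE" && echo "CDK pin OK"
```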

  • Install the dependencies of the sample app:
pip install -r requirements.txt
  • Stage the Docker recipes for the RenderQueue:
npx --package=aws-rfdk@$RFDK_VERSION stage-deadline --output stage $DEADLINE_VERSION
  • Create an S3 bucket named "deadline-workers-scripts-{studio}", where {studio} is the name of the studio and {region} is the region in which the bucket will be hosted:
aws s3 mb s3://deadline-workers-scripts-{studio} --region {region}
  • Change the values of the variables in package/config.py according to the customer's needs. This is a critical step.
  • Deploy all the stacks with this command:
cdk deploy "*"
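One caveat on the bucket-creation step above: S3 bucket names must be globally unique, lowercase, and free of spaces. A hypothetical helper for normalizing a studio name before substituting it into the {studio} placeholder (the studio name shown is an example value):

```shell
# Normalize an example studio name into a valid S3 bucket-name component:
# lowercase everything and turn spaces into hyphens.
STUDIO="My Studio"
STUDIO_SLUG=$(echo "$STUDIO" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
echo "deadline-workers-scripts-${STUDIO_SLUG}"   # deadline-workers-scripts-my-studio
```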

Connecting to Deadline from EC2 Test Instance

  • Generate a certificate file with this command using the AWS CLI (with programmatic access to the account):
aws secretsmanager get-secret-value --secret-id {secret-id} --query SecretString --output text > ca.crt

where:

  • secret-id is the ARN of the secret (in the Secrets Manager service) named "DeadlineStack/RootCA-X.509-Certificate…"
  • Upload this ca.crt file to the S3 worker script bucket, in a deadline folder:
aws s3 cp ca.crt s3://<S3workerscriptbucket>/deadline/ca.crt
  • Restart the test EC2 instance so that it downloads the crt file
  • To connect to the EC2 instance, select it in the EC2 service through the AWS console.
  • Click Connect and select the "RDP client" tab
  • For connection type, select "Connect using Fleet Manager"
  • Click the "Fleet manager Remote Desktop" link
  • In the Fleet Manager – Remote Desktop window, select Key pair as the authentication type
  • Either browse for the key pair file on your computer or paste its content from the secret created earlier
  • Click Connect.

Configuring Spot Instance Deadline Workers

If you want to use Spot instance Deadline workers, you will need to configure them to access a storage endpoint.

In addition to allowing the workers to reach the desired storage endpoint, access to it must also be configured in the worker user data.

In the Python folder of the TrackIt RFDK repository, the following example scripts can be found for Linux and Windows respectively: "workers_linux.sh" and "workers_windows.ps1":

Modify the values declared in each script, then upload the scripts to the bucket designated in the "config.py" file, under a "deadline" folder, using the following commands:

aws s3 cp workers_linux.sh s3://[worker_bucket_name]/deadline/workers_linux.sh
aws s3 cp workers_windows.ps1 s3://[worker_bucket_name]/deadline/workers_windows.ps1

The repository can now be accessed through the EC2 test instance, and rendering tests can be initiated.


Deadline Rendering Job

Conclusion

AWS Thinkbox Deadline is a scalable and effective solution for managing rendering workloads in a studio in the cloud (SIC) environment. The Render Farm Deployment Kit (RFDK) enables studios to quickly and easily deploy a resource-optimized rendering solution that helps reduce costs and improve project timelines. 

About TrackIt

TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.

About Thierry Delran


A DevOps engineer at TrackIt since 2022, with a master's degree in computer science, Thierry has deep expertise in building and managing AWS Studio in the Cloud environments as well as setting up large-scale Deadline render farms.

Thierry thrives in situations where chaos needs order and automation (across fields).