AWS Thinkbox Deadline 

AWS Thinkbox Deadline is a powerful rendering management software used in the film, animation, and visual effects industries. It manages the rendering of complex computer graphics projects through the automation of rendering pipelines and optimization of resources.

One of the key features of AWS Thinkbox Deadline is its ability to scale up or down to meet the demands of any rendering workload. This makes it an ideal solution for studios that periodically require a high level of rendering capacity but don’t want to invest in their own infrastructure. Deadline also gives studios the ability to select more cost-effective EC2 Spot Instances to address rendering requirements. TrackIt has recently published a blog post titled ‘Using the Spot Plugin to Burst Render Farms’ that explores this topic in detail.

Thinkbox Render Farm Deployment Kit

AWS Thinkbox Deadline can be deployed in various environments including on-premises, cloud, and hybrid setups. When deploying Deadline in a studio in the cloud (SIC) environment, AWS recommends using a deployment kit. 

The Thinkbox RFDK (Render Farm Deployment Kit) is a collection of tools that allow the deployment of Deadline in an SIC environment quickly and easily. The kit is a set of CloudFormation templates that automate the deployment of the required AWS infrastructure for running Deadline. This includes the setup of Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic File System (EFS), and other necessary resources.

Once the infrastructure is deployed, the Thinkbox Deadline installer can be used to install the software on EC2 instances. The installer can be configured to automatically connect to the repository and allow users to manage their render farm from a single location.

The following tutorial details the steps required to deploy Deadline in a studio in the cloud (SIC) environment with the Thinkbox RFDK v1.2.0.


Specific Integrations RFDK adds to a Studio in the Cloud (SIC) Environment

Prerequisites

The RFDK script can be launched from any computer, but to follow Thinkbox RFDK best practices, it should be versioned in CodeCommit and run from the Cloud9 integrated development environment (IDE).

Users must have the following installed on their virtual machines before proceeding with the rest of the tutorial: 

Creating a CodeCommit Environment

The deployment of a CodeCommit environment facilitates the maintenance and deployment of Deadline render farms. CodeCommit was chosen for simplicity, but the script can also be hosted in other version control platforms such as GitLab or GitHub. This and the following step can be skipped if choosing to run the RFDK on a different machine. 

Prior to creating the repository, it is important to ensure that the AWS CLI is configured to access the AWS account programmatically. The following command can be used to create the repository:

aws codecommit create-repository --repository-name deadline-rfdk --repository-description "Deadline rfdk repository" --tags Team=Trackit

Take note of the repository ID and name in the command output since they will be required in later steps of the tutorial.

Creating a Cloud9 Environment

A Cloud9 environment enables the modification of the RFDK script using an integrated development environment (IDE). 

Access the AWS console and ensure the appropriate region is selected. Navigate to the Cloud9 service and select “Create environment”. In the Cloud9 create environment window, fill in the fields as follows:

  • Name: “DeadlineStack”
  • Description: “Deadline deploy & maintenance environment”
  • Environment type: “New EC2 instance”
  • Instance type: “t3.small”
  • Platform: “Amazon Linux 2”
  • Timeout: “4 hours”, in accordance with Thinkbox best practices, with the option to reduce it for cost savings as needed.
  • In Network settings, select the SSM (AWS Systems Manager) connection type so the instance can be accessed without opening inbound SSH ports.
  • For VPC settings, select the studio VPC and its associated public subnet. 
  • Click Create to complete the process. 
  • Once the environment is created, access the Cloud9 environment list and select “Open” from the “DeadlineStack” line.


Upon launching the IDE, open the terminal and execute the following commands to install the Long Term Support (LTS) version of Node.js, which is recommended by AWS for the RFDK:

nvm install 14
echo "nvm use 14" >> ~/.bash_profile

The next step is to expand the attached EBS volume to 40 GB. This is achieved by creating a resize.sh file in the IDE and copying in the script provided below:

#!/bin/bash
# Specify the desired volume size in GiB as a command line argument. If not specified, default to 20 GiB.
SIZE=${1:-20}
# Get the ID of the environment host Amazon EC2 instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
INSTANCEID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/instance-id 2> /dev/null)
REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/placement/region 2> /dev/null)
# Get the ID of the Amazon EBS volume associated with the instance.
VOLUMEID=$(aws ec2 describe-instances \
  --instance-id $INSTANCEID \
  --query "Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId" \
  --output text \
  --region $REGION)
# Resize the EBS volume.
aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE
# Wait for the resize to finish.
while [ \
  "$(aws ec2 describe-volumes-modifications \
    --volume-id $VOLUMEID \
    --filters Name=modification-state,Values="optimizing","completed" \
    --query "length(VolumesModifications)" \
    --output text)" != "1" ]; do
  sleep 1
done
# Check if we're on an NVMe filesystem
if [[ -e "/dev/xvda" && $(readlink -f /dev/xvda) = "/dev/xvda" ]]
then
  # Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/xvda 1
  # Expand the size of the file system.
  # Check if we're on AL2
  STR=$(cat /etc/os-release)
  SUB="VERSION_ID=\"2\""
  if [[ "$STR" == *"$SUB"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/xvda1
  fi
else
  # Rewrite the partition table so that the partition takes up all the space that it can.
  sudo growpart /dev/nvme0n1 1
  # Expand the size of the file system.
  # Check if we're on AL2
  STR=$(cat /etc/os-release)
  SUB="VERSION_ID=\"2\""
  if [[ "$STR" == *"$SUB"* ]]
  then
    sudo xfs_growfs -d /
  else
    sudo resize2fs /dev/nvme0n1p1
  fi
fi

Execute the command bash resize.sh 40 to initiate the expansion process (40 GB is the size recommended by Thinkbox). 
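Once the script finishes, the expansion can be confirmed from the same terminal (a quick check; the reported size depends on the instance):

```shell
# Print the total size of the root filesystem to confirm the expansion took effect
df -h / | tail -1 | awk '{print $2}'
```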

Next, clone the Deadline RFDK repository to the Cloud9 EBS volume and create a Python virtual environment using the following commands:

pip install virtualenv
python -m venv env

To activate the Python environment, execute the command source env/bin/activate. This command needs to be executed every time the Cloud9 IDE environment is accessed. The Cloud9 environment is now ready for use.
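To confirm the virtual environment is active, the interpreter prefix can be checked. This sketch reuses the env name from the commands above; the printed path should end in /env:

```shell
# Create and activate the virtual environment, then verify that the
# active interpreter lives inside it (sys.prefix should point at env/)
python3 -m venv env
source env/bin/activate
python -c "import sys; print(sys.prefix)"
```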

Configuring the RFDK deployment

After activating the Python environment on the Cloud9 IDE and copying the repository data, proceed to configure the RFDK: 

  1. Navigate to “examples/deadline/SIC-deployment/python/”
  2. Install the dependencies by running the following command:

pip install -r requirements.txt

  3. Modify the values of variables in the package/config.py file as per your requirements.
  4. Stage the Docker recipes (a set of instructions that define how to build a Docker image) for the deployment using the following commands:

# Set this value to the version of RFDK your application targets
RFDK_VERSION=<version_of_RFDK>
# Set this value to the version of AWS Thinkbox Deadline you'd like to deploy to your farm. Deadline 10.1.12 and up are supported.
RFDK_DEADLINE_VERSION=<version_of_deadline>
npx --package=aws-rfdk@${RFDK_VERSION} stage-deadline --output stage ${RFDK_DEADLINE_VERSION}

The latest Deadline version supported by RFDK can be accessed here.

Deploy all the stacks using the command: cdk deploy "*"

Configuring the Route Table on the SIC VPC to Access the RFDK VPC

Upon successful deployment of the stacks, the Studio in the Cloud (SIC) VPC must be configured to ensure that the SIC workstations can access the Remote Connection Server (RCS). Launch the following AWS CLI commands:
VPC_PEERING_CONNECTION_ID={deadline_vpc_peering_connection_id}
WORKSTATION_SUBNET_ID={sic_workstation_subnet_id}
AD_SUBNET_ID={sic_ad_subnet_id}
FSX_ROUTE_TABLE={sic_fsx_route_table_id}
PEER_VPC_REGION={aws_region}
RFDK_VPC_CIDR_RANGE={deadline-vpc-cidr-range}
ROUTING_TABLE_WORKSTATION=$(aws --region $PEER_VPC_REGION ec2 describe-route-tables --query "RouteTables[*].Associations[?SubnetId=='$WORKSTATION_SUBNET_ID'].RouteTableId" --output text)
aws --region $PEER_VPC_REGION ec2 create-route --route-table-id $ROUTING_TABLE_WORKSTATION --destination-cidr-block $RFDK_VPC_CIDR_RANGE --vpc-peering-connection-id $VPC_PEERING_CONNECTION_ID
ROUTING_TABLE_AD=$(aws --region $PEER_VPC_REGION ec2 describe-route-tables --query "RouteTables[*].Associations[?SubnetId=='$AD_SUBNET_ID'].RouteTableId" --output text)
aws --region $PEER_VPC_REGION ec2 create-route --route-table-id $ROUTING_TABLE_AD --destination-cidr-block $RFDK_VPC_CIDR_RANGE --vpc-peering-connection-id $VPC_PEERING_CONNECTION_ID
aws --region $PEER_VPC_REGION ec2 create-route --route-table-id $FSX_ROUTE_TABLE --destination-cidr-block $RFDK_VPC_CIDR_RANGE --vpc-peering-connection-id $VPC_PEERING_CONNECTION_ID

In the code above:

  • deadline_vpc_peering_connection_id refers to the Deadline VPC peering connection ID (VPC/Peering Connections)
  • sic_workstation_subnet_id refers to the SIC workstation subnet ID
  • sic_ad_subnet_id refers to the SIC Active Directory subnet ID
  • sic_fsx_route_table_id refers to the SIC FSx route table ID
  • aws_region refers to the region the studio and Deadline are deployed into
  • deadline-vpc-cidr-range refers to the Deadline VPC CIDR range
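Because a malformed CIDR range causes create-route to fail, the value can be sanity-checked before launching the commands above. The helper below (is_valid_cidr) is hypothetical, not part of the RFDK, and only validates the format, not whether the range actually matches the Deadline VPC:

```shell
# Hypothetical helper: check that a string is shaped like an IPv4 CIDR
# block before passing it to "aws ec2 create-route"
is_valid_cidr() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/([0-9]|[12][0-9]|3[0-2])$'
}

is_valid_cidr "10.1.0.0/16" && echo "CIDR looks valid"
```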

Updating the FSx Security Group

To enable render workers to access/mount FSx, the related security group for each FSx drive needs to be updated. Follow the steps below:

  1. In the EC2 service, navigate to “Security Groups”.
  2. Locate the security group named [studio-name]_userprofiles_storage. Update its inbound rule to allow Server Message Block (SMB) from the RFDK VPC CIDR.

Render workers will now be able to mount FSx.
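The console steps above can also be scripted. The sketch below is a dry run that only prints the CLI call (remove the echo to execute it); the security group ID and CIDR are placeholders, and SMB runs over TCP port 445:

```shell
# Dry-run sketch: print the CLI call that would open SMB (TCP 445)
# from the RFDK VPC CIDR on an FSx security group
allow_smb_from_rfdk() {
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$1" --protocol tcp --port 445 --cidr "$2"
}

allow_smb_from_rfdk "sg-0123456789abcdef0" "10.1.0.0/16"
```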

Connecting to Deadline from the SIC Workstation

The SIC workstations need to be configured to connect to the RCS server. To achieve this, follow the steps below:

  1. Update the workstation AMI by installing the Deadline client on it. Do not modify any of the default settings except the ‘Launch worker at start’ setting, which should be disabled.
  2. Generate a certificate file using the AWS CLI (with programmatic access to the account) with the following command:
aws secretsmanager get-secret-value --secret-id {secret-id} --query SecretString --output text > ca.crt

In the code snippet above, secret-id refers to the ID of the secret on AWS Secrets Manager named “DeadlineStack/RootCA-X.509-Certificate…”

Place the generated certificate file into a Deadline folder on the FSx shared drive.
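Before placing the file on the shared drive, the certificate can be sanity-checked with OpenSSL. inspect_cert is a hypothetical helper, not part of the deployment:

```shell
# Hypothetical helper: print the subject and expiry date of a PEM
# certificate, e.g. the ca.crt retrieved from Secrets Manager
# Usage: inspect_cert ca.crt
inspect_cert() {
  openssl x509 -in "$1" -noout -subject -enddate
}
```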

Getting the RCS Server URL

On AWS CloudFormation, select the Deadline stack and go to the Outputs tab.

To get the RCS URL, locate the load balancer output whose key starts with RenderQueueAlbEc2ServicePatternLoadBalancer and copy its value (“….elb.amazonaws.com”).

Modifying Workstation User Data to Automatically Connect to Deadline

Add the following code to SIC workstation user data:

Write-Output "configuring deadline connection"
$DEADLINE_PATH = "C:\Program Files\Thinkbox\Deadline10\bin"
pushd $DEADLINE_PATH
.\deadlinecommand.exe -SetIniFileSetting ConnectionType Remote
.\deadlinecommand.exe -SetIniFileSetting ProxyUseSSL True
.\deadlinecommand.exe -SetIniFileSetting LaunchSlaveAtStartup 0
.\deadlinecommand.exe -SetIniFileSetting ProxySSLCA "\\<fsx-dns-name>\share\deadline\ca.crt"
.\deadlinecommand.exe -SetIniFileSetting ClientSSLAuthentication NotRequired
.\deadlinecommand.exe -SetIniFileSetting ProxyRoot <rcs-url>:4433

In the code above: 

  • fsx-dns-name refers to the FSx DNS name, which can be found in the FSx service
  • rcs-url can be found in the “Getting the RCS Server URL” section of this document

Configuring Deadline Workers

The Deadline workers need to be able to access the SIC Active Directory and mount the shared drives.

In the Python folder of the TrackIt RFDK repository, the following scripts are provided for Linux and Windows workers respectively: “workers_linux.sh” and “workers_windows.ps1”.

Modify the values declared in each script, then upload them to the bucket designated in the “config.py” file, under a “deadline” folder, using the following commands:

aws s3 cp workers_linux.sh s3://[worker_bucket_name]/deadline/workers_linux.sh
aws s3 cp workers_windows.ps1 s3://[worker_bucket_name]/deadline/workers_windows.ps1

The repository can now be accessed from a SIC workstation, and users can initiate rendering tests.


Deadline Rendering Job

Conclusion

AWS Thinkbox Deadline is a scalable and effective solution for managing rendering workloads in a studio in the cloud (SIC) environment. The Render Farm Deployment Kit (RFDK) enables studios to quickly and easily deploy a resource-optimized rendering solution that helps reduce costs and improve project timelines. 

About TrackIt

TrackIt is an Amazon Web Services Advanced Tier Services Partner based in Marina del Rey, CA, specializing in cloud management, consulting, and software development solutions.

TrackIt specializes in Modern Software Development, DevOps, Infrastructure-As-Code, Serverless, CI/CD, and Containerization with specialized expertise in Media & Entertainment workflows, High-Performance Computing environments, and data storage.

In addition to providing cloud management, consulting, and modern software development services, TrackIt also provides an open-source AWS cost management tool that allows users to optimize their costs and resources on AWS.