Streamlining VCL Migrations: Moving from Fastly to Amazon CloudFront
https://trackit.io/streamlining-vcl-migrations-fastly-to-cloudfront/ (Mon, 13 Oct 2025)
Written by Lucas Marsala, DevOps Engineer at TrackIt

Content delivery networks power modern digital experiences, yet each platform takes a different approach to configuration and edge logic. Fastly relies on Varnish Configuration Language (VCL) files to define edge logic, caching rules, and request handling. Amazon CloudFront, by contrast, uses a combination of cache behaviors, origin configurations, and AWS-native services such as CloudFront Functions and Lambda@Edge to achieve similar results. CloudFront configurations can also be managed declaratively through Infrastructure-as-Code (IaC) frameworks such as AWS CloudFormation or Terraform, giving teams flexibility in how they define and maintain delivery infrastructure.

Fastly is also more flexible at the edge, with extensive conditional logic capabilities that must be accounted for during migration. At the same time, VCL can be less intuitive to read and interpret compared to declarative IaC approaches such as Terraform, adding another layer of complexity.

Reducing Risk in CDN Migrations: Fastly VCL to CloudFront

The sections below highlight the key points and challenges of migrating from Fastly to CloudFront. They explain how VCL-based rules can be translated into CloudFront behaviors, how AWS services such as CloudFront Functions and Lambda@Edge can be leveraged, and how to ensure a smooth cutover with minimal impact on performance or availability. 

Understanding the Challenges of CDN Migration

Migrating from Fastly VCL to Amazon CloudFront involves translating Fastly’s flexible edge logic and caching rules into CloudFront’s configuration model. This is challenging due to architectural and capability differences. Fastly allows granular control over request handling, header manipulation, and caching behaviors directly at the edge using VCL. CloudFront, on the other hand, relies on cache behaviors, origin configurations, and Lambda@Edge functions to achieve similar outcomes.

To ensure a successful migration, the first step is to gain a clear understanding of the current environment. This involves taking stock of caching rules, endpoints, SSL certificates, custom headers, DNS records, origin configurations, and delivery requirements. Once the baseline is established, the features and limitations of the target CDN should be reviewed to determine how existing rules and behaviors can be mapped. This process helps identify potential gaps and adjustments early.

Why Planning Matters

A CDN migration demands careful, detailed planning. Even small oversights can result in broken links, degraded performance, or unexpected downtime once live traffic is introduced. The effort often spans multiple environments, requires coordination across teams, and demands a clear understanding of how traffic will behave during and after the switch.

Translating VCL Logic Manually 

Manual migration from Fastly to Amazon CloudFront begins with a detailed audit of existing VCL files. The objective is to document every functional component—caching rules, conditional logic, header rewrites, redirects, and routing flows—before making any changes.

Once the baseline is established, each rule can be mapped to CloudFront’s configuration model. Cache Behaviors handle path- and header-based conditions, while CloudFront Functions and Lambda@Edge enable advanced logic and header manipulation directly at the edge. Origin settings and associated policies allow fine-tuning of caching, TTLs, compression, and redirect handling.

Not every VCL feature has a direct equivalent in CloudFront. In such cases, additional AWS services such as API Gateway or AWS WAF can be integrated to replicate the desired behavior.

The following examples illustrate some of the most common Fastly rules and how they can be translated for CloudFront. These examples provide a practical reference for teams performing manual migrations or seeking a deeper understanding of the mapping process.

The most common rule categories include:

  • Cache Control: Defines what content is cached, for how long, and under which conditions.
  • Header Manipulation: Adds, modifies, or removes HTTP headers in requests and responses.
  • Redirect Rules: Direct users to alternative URLs or domains based on specific conditions.
  • Access Control: Restricts or permits access to certain content according to criteria such as IP address, geographic location, or authentication status.

Cache Control Rule

[Image: Fastly VCL cache control rule]

In the example provided above, the VCL configuration defines a simple caching rule. All requests for files ending with “.html” are cached for ten minutes, while any requests beginning with “/api/” are never cached.
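Since the original screenshot is not reproduced here, a minimal VCL sketch of such a rule (paths and TTL taken from the description above; not the article's exact file) might look like:

```vcl
sub vcl_recv {
  # Never cache API requests.
  if (req.url ~ "^/api/") {
    return(pass);
  }
}

sub vcl_fetch {
  # Cache HTML files for ten minutes.
  if (req.url ~ "\.html$") {
    set beresp.ttl = 10m;
  }
}
```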

To replicate this behavior in Amazon CloudFront, an ordered cache behavior must be created for the /api/ path to ensure those requests bypass caching. The following snippet illustrates how both rules can be implemented in CloudFront using Terraform.
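As a hedged sketch of what that Terraform might look like (the origin ID, the custom cache policy, and the distribution context are assumptions, not the article's exact snippet):

```hcl
# Look up the AWS managed policy that disables caching.
data "aws_cloudfront_cache_policy" "caching_disabled" {
  name = "Managed-CachingDisabled"
}

# Custom policy giving .html objects a ten-minute TTL.
resource "aws_cloudfront_cache_policy" "html_ten_minutes" {
  name        = "html-ten-minutes"
  min_ttl     = 0
  default_ttl = 600
  max_ttl     = 600

  parameters_in_cache_key_and_forwarded_to_origin {
    cookies_config {
      cookie_behavior = "none"
    }
    headers_config {
      header_behavior = "none"
    }
    query_strings_config {
      query_string_behavior = "none"
    }
  }
}

resource "aws_cloudfront_distribution" "site" {
  # ... origins, default behavior, certificate, etc. omitted ...

  # /api/* requests bypass the cache entirely.
  ordered_cache_behavior {
    path_pattern           = "/api/*"
    target_origin_id       = "s3Origin"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
    cache_policy_id        = data.aws_cloudfront_cache_policy.caching_disabled.id
  }

  # *.html requests are cached for ten minutes.
  ordered_cache_behavior {
    path_pattern           = "*.html"
    target_origin_id       = "s3Origin"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    cache_policy_id        = aws_cloudfront_cache_policy.html_ten_minutes.id
  }
}
```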

[Images: Terraform cache behavior configuration for the /api/ and .html rules]

Both rules are nearly identical; the main difference lies in the cache_policy_id and TTL settings. For the /api/ path, caching is disabled, while the .html files are configured to be cached.

For those unfamiliar with Terraform, the same configuration can be achieved directly through the AWS Management Console. Within the Behaviors section of a CloudFront distribution, a rule can be created for the pattern /api/ with the cache policy set to CachingDisabled. The remaining configuration primarily involves defining the Origin and Origin Groups, which in most cases will point to an Amazon S3 bucket.

[Image: CloudFront console, Behaviors configuration]

Geographic Rule

CloudFront cannot natively replicate every rule supported in Fastly. In certain cases, additional AWS services are required to achieve equivalent functionality. One common example involves geographic rules, where content availability must vary by region. In some parts of the world, it may be necessary to restrict or modify specific content based on local regulations. In such scenarios, the AWS Web Application Firewall (WAF) can be used to manage geographic restrictions.

For instance, if a website contains information that cannot be displayed in countries such as Russia or China, WAF can be configured to block access from those regions. Below is an example of the corresponding VCL file.

If a client request originates from Russia or China, it will be blocked. To identify the client’s location, the configuration relies on ISO country codes.
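For reference, the Fastly side of such a rule can be sketched in VCL along these lines (illustrative, not the article's exact file):

```vcl
sub vcl_recv {
  # Block clients whose IP geolocates to Russia or China
  # (ISO 3166-1 alpha-2 country codes).
  if (client.geo.country_code == "RU" || client.geo.country_code == "CN") {
    error 403 "Forbidden";
  }
}
```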

[Image: Fastly VCL geographic blocking rule]

When working with AWS WAF, several key parameters must be defined. In this example, the rule is named BlockSpecificCountries. The rule name serves primarily for identification and documentation purposes—it does not affect the configuration’s functionality.

There are three main components in a WAF rule:

  • Priority: Defines the order in which rules are evaluated. Rules are processed from the lowest priority number to the highest.
  • Statement: Specifies the condition or logic that determines when the rule applies.
  • Action: Defines what WAF should do when a request matches the rule condition.

In this simple WAF configuration, the priority is set to 1. The statement uses a GeoMatchStatement, which matches requests originating from Russia or China using their ISO country codes. Finally, the action section instructs WAF to Block the request and return a 403 response with custom HTML content.
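Expressed in Terraform, such a rule might be sketched as follows (resource and metric names are assumptions; a CloudFront-scoped web ACL must be created in us-east-1):

```hcl
resource "aws_wafv2_web_acl" "geo_block" {
  name  = "geo-block-acl"
  scope = "CLOUDFRONT" # CloudFront-scoped ACLs live in us-east-1

  default_action {
    allow {}
  }

  rule {
    name     = "BlockSpecificCountries"
    priority = 1

    statement {
      geo_match_statement {
        country_codes = ["RU", "CN"] # ISO 3166-1 alpha-2 codes
      }
    }

    action {
      block {
        custom_response {
          response_code = 403
        }
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "BlockSpecificCountries"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "geo-block-acl"
    sampled_requests_enabled   = true
  }
}
```

The web ACL is then associated with the CloudFront distribution via its web_acl_id argument.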

[Image: AWS WAF geographic blocking rule configuration]

Compared to VCL, implementing this type of rule in CloudFront requires additional services, which can make the setup slightly more time-consuming. While maintaining separate services may seem complex, it actually simplifies debugging and troubleshooting in large infrastructures. By isolating logic across dedicated components, issues become easier to identify and resolve without sifting through excessive configuration details.

Header Manipulation Rule

[Image: Fastly VCL header manipulation rule]

This next example focuses on header manipulation rules, which involve removing or modifying headers to enhance security and protect sensitive information.

The goal is to minimize the exposure of implementation details. Headers that reveal server technology or version—such as Server—can allow attackers to target known vulnerabilities (CVEs). The same applies to X-Powered-By, which may disclose the framework or platform used to build the site. Similarly, the Via header can expose proxy or caching layers, potentially helping attackers infer request flows and attempt evasion.
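On the Fastly side, stripping these headers is a few lines of VCL in the delivery phase (an illustrative sketch):

```vcl
sub vcl_deliver {
  # Strip headers that reveal implementation details before
  # the response is returned to the client.
  unset resp.http.Server;
  unset resp.http.X-Powered-By;
  unset resp.http.Via;
}
```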

On CloudFront, header removal is commonly implemented with a lightweight edge function. Lambda@Edge or CloudFront Functions can strip or rewrite response headers before they reach the client, thereby reducing the attack surface without modifying backend logic.
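A minimal sketch of such a CloudFront Function, attached to the viewer-response event (the header list is illustrative; adapt it to whatever the origin actually emits):

```javascript
// CloudFront Functions viewer-response handler that removes
// headers revealing implementation details. Header keys in the
// CloudFront Functions event object are lowercase.
function handler(event) {
    var response = event.response;
    var sensitive = ['server', 'x-powered-by', 'via'];
    sensitive.forEach(function (name) {
        delete response.headers[name];
    });
    return response;
}
```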

[Image: CloudFront edge function removing response headers]

It is important to consider that introducing additional services can affect overall infrastructure costs. Although Lambda functions are not among the most expensive AWS services, their usage still contributes to the total operational spend. Each added component—whether for logic, monitoring, or automation—should be evaluated as part of the broader cost analysis before initiating the migration.

Conclusion

Migrating from Fastly VCL to Amazon CloudFront requires translating custom edge logic into CloudFront’s configuration model. While VCL scripts provide direct control over caching, routing, and headers, CloudFront achieves similar functionality through cache behaviors, origin settings, and edge compute options such as CloudFront Functions and Lambda@Edge.

Successful migration depends on accurately mapping each VCL rule to its CloudFront equivalent and bridging any gaps with supporting AWS services like AWS WAF for security and API Gateway for routing. Managing the deployment through Terraform or AWS CloudFormation ensures automation, consistency, and version control across environments.

Ultimately, the objective is to preserve Fastly’s flexibility while leveraging CloudFront’s scalability, reliability, and deep integration within the AWS ecosystem.

About TrackIt

TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.

Amazon Connect: A Modern Tool for AI-Powered Contact Centers
https://trackit.io/amazon-connect-a-modern-tool-for-ai-powered-contact-centers/ (Mon, 29 Sep 2025)
Written by Tiago Valenca, Solutions Architect at TrackIt

Customer expectations are evolving rapidly, with seamless, personalized experiences across multiple channels becoming the standard. Traditional contact centers often struggle with fragmented systems, lengthy onboarding, and high operational costs, making it challenging to deliver consistent service. Cloud-native solutions address these challenges by providing flexibility, scalability, and faster access to new features.

For organizations seeking a modern contact center solution, Amazon Connect offers an omnichannel, cloud-native platform built on AWS. It enables rapid deployment of voice, chat, and task routing while providing the flexibility to integrate analytics and AI capabilities. Its pay-as-you-go pricing, browser-based agent workspace, and continuously evolving feature set support reduced handle times, improved customer satisfaction, and lower operational overhead.

What is Amazon Connect?

Amazon Connect is a fully managed contact center service. It allows configuration of flows, such as IVRs and chatbots, routing contacts to the appropriate agents across channels, and managing operations entirely through a web console, eliminating the need for on-prem PBXs or manual patching. Native integration with AWS services like Lambda, S3, and Kinesis makes extending logic or analytics seamless. As a Contact Center as a Service (CCaaS), it allows enterprises to scale up or down in minutes rather than months.

Key Benefits

  • Faster time-to-value: launch pilots from the browser and iterate quickly.
  • Omnichannel by design: unify voice, chat, and tasks in a single workspace with a consistent workflow model.
  • Lower TCO: no long-term commitments or seat licenses—costs are strictly usage-based.
  • Built-in analytics & QA: Connect Contact Lens provides transcription, sentiment analysis, search, and evaluation tools.

Step-by-Step Guides: “In-Desktop” Playbooks for Agents

Step-by-Step Guides are interactive workflows built directly into the Amazon Connect Agent Workspace. They allow agents to follow guided procedures for common tasks—such as reservations, returns, identity verification, or payments—without needing to memorize steps or switch between systems. Each guide enforces the correct order of actions and surfaces the relevant data at the right time, making it easier to standardize processes and reduce training requirements.

These guides function like dynamic checklists, with forms, prompts, and customer context embedded within the workspace, helping agents complete tasks efficiently while minimizing errors. The example below illustrates how a Step-by-Step Guide can streamline appointment cancellations. 

[Animation: Step-by-Step Guide in the Amazon Connect Agent Workspace]

When a customer starts a chat, the Agent Workspace automatically opens a guide in Cards view. The form is pre-populated with information from Amazon Connect Customer Profiles. The agent confirms the customer’s intent, and a single action triggers AWS Lambda to send both an SMS via Amazon SNS (Simple Notification Service) and an email via Amazon SES (Simple Email Service). The cancellation is completed, confirmations are sent, and the agent remains fully within Connect, without manual copy-pasting or typing boilerplate responses.
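A hypothetical sketch of the Lambda behind such a guide is shown below. The function, field names, and payload shape are assumptions for illustration, not the article's code; the actual SNS/SES calls are indicated in comments rather than executed.

```python
import json


def build_confirmation_messages(profile: dict) -> dict:
    """Build SMS and email bodies from a Customer Profiles record.

    Field names here are illustrative assumptions.
    """
    name = profile.get("FirstName", "Customer")
    when = profile.get("AppointmentDate", "your appointment")
    body = f"Hi {name}, your appointment on {when} has been cancelled."
    return {
        "sms": body,
        "email_subject": "Appointment cancellation confirmed",
        "email_body": body,
    }


def lambda_handler(event, context):
    messages = build_confirmation_messages(event.get("profile", {}))
    # Sending is sketched only; a real deployment would use boto3, e.g.:
    #   boto3.client("sns").publish(PhoneNumber=..., Message=messages["sms"])
    #   boto3.client("ses").send_email(...)
    return {"statusCode": 200, "body": json.dumps(messages)}
```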

Generative AI in Connect: Meet Amazon Q in Connect

[Image: Amazon Q in Connect]

Amazon Q in Connect represents the latest advancement in agent assist and customer self-service. It analyzes live conversations (voice or chat), identifies the customer’s issue, searches knowledge bases and web content, and suggests responses or actions. This allows agents to resolve interactions more quickly and confidently, while also enabling self-service that can both answer questions and complete tasks, such as checking orders or processing returns.

  • Real-time agent assist: provides recommended answers, step guidance, and links based on enterprise knowledge sources.
  • Personalized help: suggestions incorporate customer info and business rules to ensure relevance.
  • Secure by design: content imported from sources such as S3, SharePoint, Salesforce, ServiceNow, or Zendesk can be encrypted with AWS KMS (Key Management Service) keys, and search indices remain encrypted at rest by AWS.

Contact Lens + AI: Supervisor superpowers

Contact Lens for Amazon Connect transforms every interaction (voice and chat) into structured, searchable insights. It provides automatic transcription, sentiment and theme detection, sensitive-data redaction, and robust search capabilities, both in real time and after the interaction. Supervisors can define rules and alerts to act immediately on important signals, improving oversight and responsiveness.

  • Real-time & historical analytics: transcripts, sentiment, issue detection, categorization, and rule-based alerts to support agent training or escalation.
  • QA at scale with GenAI: automated agent evaluations and AI-generated post-contact summaries enable more interactions to be assessed while reducing after-call workload.
  • Built-in data & pricing model: dashboards and an analytics data lake are included, with optional services such as conversational analytics, performance evaluations, and screen recording billed based on usage.
[Image: Contact Lens analytics dashboard]

Why Choose Amazon Connect in 2025?

With its mature capabilities and strong AWS integrations, Amazon Connect is recognized by industry analysts as a leading Contact Center as a Service (CCaaS), maintaining momentum through 2024–2025. That validation mirrors what many teams experience post-migration: lower costs, faster agent onboarding, and better customer outcomes. 

How TrackIt Can Help

TrackIt is an AWS Advanced Consulting Partner and software integrator focused on building scalable, production-ready cloud architectures for ISVs and enterprises. We help customers design, pilot, and scale Amazon Connect deployments, providing end-to-end support across all stages.

Our offerings for contact center clients include:

  • Discovery & ROI modeling: total cost of ownership analysis and migration roadmap planning
  • Pilot builds: flows, routing, Agent Workspace setup, and Step-by-Step Guides
  • Knowledge strategy and Amazon Q configuration: integration of sources, prompt creation, guardrails, and KMS setup
  • QA & analytics with Contact Lens: dashboards, alerts, post-contact summaries, and automated evaluations
  • Data pipelines and AWS integrations: Lambda, Bedrock, Kinesis, S3, Redshift, QuickSight, and other related services
  • SecOps and compliance alignment: adherence to AWS best practices
  • Change management, training, and playbooks: workflow documentation and agent training support


Deploying AWS Media2Cloud with Terraform
https://trackit.io/deploying-aws-media2cloud-with-terraform/ (Fri, 26 Sep 2025)
Written by Clarisse Eynard, Software Engineer at TrackIt

Media organizations today face the challenge of handling vast amounts of video, image, and audio content efficiently. From ingesting raw assets to enriching them with metadata and distributing them to global audiences, the workflows involved are complex and resource-intensive. To address these challenges, AWS provides Media2Cloud, a reference solution designed to automate and streamline media workflows at scale.

[Image: Media2Cloud architecture overview]

Media2Cloud ingests content, applies AI/ML services for analysis and enrichment, and prepares assets for efficient distribution. It is a turnkey solution that reduces the time needed to build a robust media supply chain, while offering flexibility to integrate with existing pipelines. Traditionally, AWS distributes Media2Cloud as a ready-to-use CloudFormation template, which sets up the required components in a single deployment.

While CloudFormation is powerful and fully supported by AWS, many organizations either already rely on Terraform for infrastructure as code (IaC) and want to keep their environments uniform, or choose Terraform because of its broad provider ecosystem, multi-cloud capabilities, and seamless integration with CI/CD workflows.

Instead of rewriting the entire Media2Cloud architecture natively in Terraform, a more pragmatic approach is to wrap the CloudFormation template in Terraform. This is possible using the aws_cloudformation_stack resource, which allows Terraform to manage the lifecycle of a CloudFormation stack as part of a broader Terraform-managed environment. This ensures consistency, visibility, and control, without waiting for a native Terraform module for Media2Cloud.

Two Ways to Customize Media2Cloud with Terraform

When it comes to integrating Media2Cloud into a Terraform-based environment, there are two possible approaches, each with distinct trade-offs:

1. Full Terraform Translation

This approach involves rewriting the entire Media2Cloud CloudFormation template in pure Terraform code, utilizing native Terraform resources.

Strengths:

  • Provides full infrastructure transparency, with every AWS resource explicitly defined in Terraform.
  • Enables native state management for improved dependency tracking and drift detection.
  • Ensures a single Infrastructure-as-Code (IaC) language across all components, simplifying team workflows and CI/CD integration.
  • Offers fine-grained customization of individual components and smooth integration with existing Terraform modules.

Weaknesses:

  • Requires significant initial effort to accurately translate hundreds of CloudFormation resources.
  • Involves continuous maintenance, as AWS updates to Media2Cloud must be manually replicated to maintain parity.
  • Carries a risk of divergence from the official AWS implementation, potentially missing key fixes or optimizations.
  • May receive limited AWS support due to deviation from the officially supported deployment model.
  • Necessitates extensive testing to ensure complete functional equivalence with the original CloudFormation template.

Below is an example of the deployment folder structure:

[Image: Example deployment folder structure]

2. Terraform Wrapper (Recommended Approach)

This approach uses the aws_cloudformation_stack resource to deploy the official Media2Cloud CloudFormation template from within Terraform.

Strengths:

  • Enables minimal deployment effort using the official, ready-to-use CloudFormation template.
  • Automatically benefits from AWS updates, bug fixes, and optimizations without manual maintenance.
  • Simplifies version management by allowing quick upgrades through template URL updates.
  • Provides full Terraform state integration, ensuring lifecycle management (create, update, destroy) alongside other infrastructure.
  • Eliminates translation errors and guarantees feature parity with the official AWS implementation.

Weaknesses:

  • Offers limited control over individual AWS resources within the CloudFormation stack.
  • Keeps internal configurations abstracted, functioning as a black box.
  • Requires forking and modifying the CloudFormation template for deep customization.
  • Increases debugging complexity due to nested Terraform and CloudFormation contexts.

Which Approach Should You Choose?

For most organizations, the Terraform wrapper approach is the pragmatic choice. It provides rapid deployment, maintains alignment with AWS’s official solution, and integrates seamlessly into Terraform workflows. This is the approach demonstrated in this guide.

The full Terraform translation approach should only be pursued in specific scenarios, for example when:

  • Compliance requirements mandate pure Terraform infrastructure
  • Customization needs go beyond what parameter configuration allows
  • Dedicated engineering resources are available to maintain the translated implementation over time

For detailed guidance on implementing Media2Cloud with native Terraform resources, refer to our comprehensive guide: AWS CloudFormation to Terraform Translation.

Solution Overview

Prerequisites

  • An AWS account with the necessary privileges to create IAM roles and policies, access S3, and deploy resources via CloudFormation.
  • AWS CLI installed and authenticated with sufficient permissions.
  • Anthropic Claude 3 Haiku or Sonnet enabled via the Amazon Bedrock console under Manage model access.
  • Terraform, jq, Docker, and Node.js 20.x installed and properly configured on the local environment.
  • Docker daemon running locally to allow containerized operations.

Deployment Steps

Step 1: Building the Media2Cloud V4 Deployment Package

Media2Cloud requires building Lambda function packages and other artifacts before deployment. This step prepares all necessary files and uploads them to an S3 bucket.

Follow the steps from the AWS Solutions Library Samples repository:

1.1 Create an S3 Bucket for Artifacts

First, create an S3 bucket to store the Media2Cloud deployment artifacts. Skip this step if the plan is to reuse an existing bucket.

aws s3api create-bucket --bucket yourname-artefact-bucket --region us-east-1

Note: If deploying to a region other than us-east-1, the --create-bucket-configuration LocationConstraint=your-region parameter must be added.

1.2 Clone and Build Media2Cloud

Clone the official AWS Solutions Library repository and build the deployment package:

# Clone the repository
git clone git@github.com:aws-solutions-library-samples/guidance-for-media2cloud-on-aws.git
# Navigate to the deployment directory
cd guidance-for-media2cloud-on-aws/deployment
# Build the distribution (this may take 10-15 minutes)
bash build-s3-dist.sh --bucket yourname-artefact-bucket --version vexemple --single-region > build.log 2>&1 &
# Monitor the build progress
tail -f build.log

What this does:

  • Installs Node.js dependencies
  • Packages Lambda functions
  • Prepares CloudFormation templates
  • Creates deployment artifacts

Parameters explained:

  • --bucket: S3 bucket name (without the s3:// prefix)
  • --version: Version tag for this deployment (use a meaningful version like v4.0.0)
  • --single-region: Optimizes for deployment in a single AWS region

1.3 Upload Artifacts to S3

Once the build completes successfully, upload all artifacts to the S3 bucket:

bash deploy-s3-dist.sh --bucket yourname-artefact-bucket --version vexemple --single-region

This uploads:

  • CloudFormation templates
  • Lambda function ZIP files
  • Web application assets
  • Configuration files

1.4 Locate the CloudFormation Template URL

Once uploaded, the template will be available at:

https://yourname-artefact-bucket.s3.amazonaws.com/media2cloud/vexemple/media2cloud.template

Save this URL – it will be required for the Terraform configuration outlined in Step 2.

[Image: CloudFormation template in the artifacts S3 bucket]

Troubleshooting

Build fails with Node.js errors:

  • Ensure Node.js 20.x is installed: node --version
  • Install nvm if needed: curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
  • Switch to Node 20: nvm install 20 && nvm use 20

Permission errors during S3 upload:

  • Verify AWS credentials: aws sts get-caller-identity
  • Ensure IAM user/role has S3 write permissions

Build takes too long:

  • The first build can take 15-20 minutes due to npm package downloads
  • Subsequent builds are faster due to caching

Step 2: Configure Parameters

In a separate directory, create a main.tf file to define the CloudFormation stack resource with the appropriate parameters:

resource "aws_cloudformation_stack" "m2c" {
  name         = "media2cloud-terraform-deploy"
  template_url = "https://yourname-artefact-bucket.s3.amazonaws.com/media2cloud/vexemple/media2cloud.template"

  parameters = {
    VersionCompatibilityStatement = "Yes, I understand and proceed"
    Email                         = "mail@exemple.com"
    DefaultAIOptions              = "Recommended V4 features (v4.default)"
    OpenSearchCluster             = "Development and Testing (t3.medium=0,m5.large=1,gp2=10,az=1)"
    PriceClass                    = "Use Only U.S., Canada and Europe (PriceClass_100)"
    StartOnObjectCreation         = "NO"
    BedrockSecondaryRegionAccess  = "North Virginia [US East] (us-east-1)"
    BedrockModel                  = "Anthropic Claude 3 Haiku"
  }

  capabilities = ["CAPABILITY_IAM"]
}

Understanding the Configuration Parameters

2.1. The Media2Cloud CloudFormation template accepts several parameters that control the behavior and features of the deployment. Below is a detailed explanation of each parameter:

Mandatory Parameters

  • VersionCompatibilityStatement: "Yes, I understand and proceed". The version compatibility statement must be read and acknowledged before proceeding.
  • Email: your@email.com. Email address used to register with the Amazon Cognito UserPool and to receive an invitation to the Media2Cloud web portal.

Core Configuration Parameters

  • DefaultAIOptions: e.g. "Recommended V4 features (v4.default)". Controls which AI/ML features are enabled by default; can be modified later via the Media2Cloud web portal Settings page.
  • OpenSearchCluster: e.g. "Development and Testing (t3.medium=0,m5.large=1,gp2=10,az=1)". For testing, use a single-instance configuration; for production, use a multi-AZ configuration with appropriate instance types.
  • PriceClass: e.g. "Use Only U.S., Canada and Europe (PriceClass_100)". Amazon CloudFront price class, chosen based on target audience geography: PriceClass_100 covers the US, Canada, and Europe; PriceClass_200 adds Asia, the Middle East, and Africa; PriceClass_All uses all edge locations worldwide.

Ingestion & Storage Parameters

  • StartOnObjectCreation: YES or NO. YES automatically processes files when they are uploaded to the ingest bucket; NO requires a manual ingestion trigger.
  • UserDefinedIngestBucket: leave blank or specify a bucket name. Leave blank to have Media2Cloud create a new S3 bucket, or specify a name to connect an existing S3 bucket for ingestion.

Advanced AI/ML Parameters

  • BedrockSecondaryRegionAccess: e.g. "North Virginia [US East] (us-east-1)". Required for Generative AI features; choose us-east-1 (North Virginia) or us-west-2 (Oregon).
  • BedrockModel: e.g. "Anthropic Claude 3 Haiku". Generative AI model for content analysis: Claude 3 Haiku is faster and cost-effective for basic tasks, while Claude 3 Sonnet is more capable for complex analysis. Both models support text and image inputs.

Optional Advanced Features

  • EnableKnowledgeGraph: default NO. YES enables an Amazon Neptune graph database for visualizing relationships between content assets; NO deploys without graph capabilities.
  • CidrBlock: default 172.31.0.0/16. Only applicable when EnableKnowledgeGraph is set to YES; defines the VPC CIDR block for the Neptune deployment.

Configuration Tips

  • For development/testing: Use the default values shown above with a single OpenSearch instance
  • For production:
    • Upgrade to a Multi-AZ OpenSearch configuration
    • Consider enabling Knowledge Graph if relationship visualization is required
    • Set StartOnObjectCreation to YES for automated workflows
    • Choose an appropriate CloudFront price class based on audience location
  • Cost optimization: Start with Claude 3 Haiku and upgrade to Sonnet only if needed
  • Email validation: Ensure the email address is valid and accessible, as it’s required for portal access

Important Notes

Template URL: Replace "https://yourname-artefact-bucket.s3.amazonaws.com/media2cloud/vexemple/media2cloud.template" with the actual S3 URL from Step 1.

Bedrock Model Access: Before deployment, verify that the chosen Bedrock model (Claude 3 Haiku or Sonnet) is enabled in the Amazon Bedrock console under “Manage model access”

[Image: Amazon Bedrock console, Manage model access]

IAM Capabilities: The CAPABILITY_IAM capability is required because Media2Cloud creates IAM roles and policies

2.2. In the same folder, create a provider.tf file.

provider "aws" {
  region = "us-east-1"
}

Step 3: Deploy with Terraform

Run the following commands in the project directory containing main.tf:

    terraform init
    terraform apply -auto-approve

    Terraform provisions the CloudFormation stack, which in turn deploys all Media2Cloud resources. From the Terraform perspective, this deployment becomes part of the managed infrastructure state.


    Expected outcome: Media2Cloud is deployed and operational, with Terraform maintaining the lifecycle of the CloudFormation stack.


    Conclusion

    Deploying Media2Cloud with Terraform is a practical way to bring AWS’s official solution into a broader Terraform workflow. It reuses the CloudFormation template while ensuring consistency, automation, and visibility in infrastructure management. This method bridges the gap until a native Terraform module becomes available, which would provide even greater control and integration.

    About TrackIt

    TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

    We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

    Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.

    TrackIt TV: A Fully Functional Live Streaming and Monetization Pipeline https://trackit.io/trackit-tv-live-streaming-and-monetization-pipeline/ Tue, 16 Sep 2025 07:26:59 +0000 https://trackit.io/?p=15387 Written by Mathis Lorenzo, Software Engineer at TrackIt

    The media and entertainment industry continues to demand reliable, scalable, and monetizable live streaming solutions. Broadcasters, event organizers, and enterprises need systems that not only deliver high-quality video but also provide opportunities for dynamic revenue generation through advertising.

    TrackIt TV is a demonstration pipeline designed to highlight how professional broadcast standards can be preserved while extending reach and monetization globally. By combining AWS Elemental MediaLive Anywhere and AWS Elemental MediaTailor, the pipeline offers a fully functional live streaming workflow with advanced ad insertion, showcasing how on-premises video encoding can integrate seamlessly with AWS cloud services.

    What is AWS Elemental MediaLive Anywhere?

    AWS Elemental MediaLive Anywhere is a feature of AWS Elemental MediaLive that enables live video encoding on on-premises hardware while retaining management, monitoring, and orchestration in the AWS Cloud. This hybrid approach combines the flexibility of cloud-based operations with the control and performance of local infrastructure.

    Key Benefits

• Multicast support: Efficiently delivers a single stream to multiple internal destinations, ideal for broadcast setups such as studios or stadiums.
    • Uncompressed signal: Preserves full video quality end-to-end before processing.
    • AWS as control plane (plug & play): Simplifies on-premises deployment through centralized, cloud-based orchestration and monitoring.
    • SMPTE 2110 support: Standards-based IP transport for video, audio, and data, compatible with modern broadcast workflows.
    • SDI support: Ensures smooth integration with existing production equipment for backward compatibility.

    The TrackIt TV Streaming Pipeline


    Solution Architecture

    The TrackIt TV pipeline is structured around three main stages that together form a complete end-to-end streaming workflow:

    1. On-Premises Encoding

    The workflow begins with video capture and local encoding:

    • A camera with SDI output provides the feed.
    • An SDI acquisition card ingests the signal into a dedicated local server.
    • AWS MediaLive Anywhere, running on the server, encodes the live video stream.

    This ensures that the video is processed locally in compliance with professional broadcast standards before being handed off to the cloud. For example, sports stadiums or live concert venues can encode feeds directly on-site, taking advantage of multicast distribution and SMPTE 2110 support for seamless integration with IP-based broadcast infrastructures.

    2. Server-Side Processing & Ad Insertion

    Once encoded, the video stream is transferred to AWS for adaptive processing, ad insertion, and global delivery:

    • AWS Elemental MediaLive converts the stream into adaptive bitrate formats (HLS/DASH), ensuring consistent playback across devices.
    • AWS Elemental MediaTailor manages server-side ad insertion (SSAI), seamlessly stitching ads into the stream. Ads can be linear (directly embedded) or non-linear (independently displayed, such as overlays).
    • Amazon CloudFront distributes the stream worldwide with low latency, supporting large audiences.

    Non-linear ad insertion is enabled through scheduled triggers:

    • Amazon EventBridge Scheduler initiates an AWS Lambda function at regular intervals.
    • The function injects SCTE-35 ad markers into the MediaLive channel.
    • MediaTailor processes these markers to determine which ads to serve.

    This model enables broadcasters to monetize live streams dynamically without disrupting continuity.
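The EventBridge-triggered Lambda function described above can be sketched as follows. This is a minimal illustration rather than the pipeline's actual code: the CHANNEL_ID environment variable, the event ID, and the 30-second break length are assumptions, while the boto3 MediaLive `batch_update_schedule` call is the standard way to inject an immediate splice_insert action.

```python
import os
import time

def build_splice_insert_action(event_id: int, duration_seconds: int) -> dict:
    """Build a MediaLive schedule action that injects an immediate
    SCTE-35 splice_insert marker (Duration is in 90 kHz clock ticks)."""
    return {
        "ActionName": f"ad-break-{event_id}-{int(time.time())}",
        "ScheduleActionStartSettings": {
            "ImmediateModeScheduleActionStartSettings": {}
        },
        "ScheduleActionSettings": {
            "Scte35SpliceInsertSettings": {
                "SpliceEventId": event_id,
                "Duration": duration_seconds * 90_000,
            }
        },
    }

def lambda_handler(event, context):
    # The channel ID and 30-second break length are illustrative assumptions.
    import boto3  # imported lazily so the builder above stays unit-testable
    medialive = boto3.client("medialive")
    action = build_splice_insert_action(event_id=1, duration_seconds=30)
    medialive.batch_update_schedule(
        ChannelId=os.environ["CHANNEL_ID"],
        Creates={"ScheduleActions": [action]},
    )
    return {"inserted": action["ActionName"]}
```

Scheduling this function on a fixed EventBridge rate (e.g., every few minutes) produces the recurring non-linear ad opportunities described above.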

    3. Client-Side Experience

    On the viewer’s side, the stream is rendered by a media player across devices such as desktops, mobile phones, or smart TVs. For non-linear ads, the client communicates with MediaTailor’s Client-Side Ad Tracking API:

    • Every 30 seconds, the client polls MediaTailor.
    • MediaTailor responds with ad payloads, including timing, format, and creative details.
    • The ad is displayed to the viewer as an overlay or interactive element.

    The result is a hybrid monetization model:

    • Linear ads – Traditional stitched-in ads within the live stream.
    • Non-linear ads – Interactive elements complementing the viewing experience.

    This combination provides personalized, relevant ads to viewers while enabling broadcasters to maximize monetization opportunities.
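The client-side polling step can be sketched as below. This assumes a tracking URL obtained from session initialization; the payload keys shown (nonLinearAvails, nonLinearAdsList, staticResource) are illustrative of the tracking response shape and should be checked against the actual MediaTailor response for a given configuration.

```python
import json
from urllib.request import urlopen

def parse_nonlinear_avails(payload: dict) -> list:
    """Extract non-linear ad avails from a client-side tracking response
    (key names simplified for illustration)."""
    overlays = []
    for avail in payload.get("nonLinearAvails", []):
        for ad in avail.get("nonLinearAdsList", []):
            overlays.append({
                "start": avail.get("startTimeInSeconds"),
                "duration": avail.get("durationInSeconds"),
                "creative": ad.get("staticResource"),
            })
    return overlays

def poll_tracking_endpoint(tracking_url: str) -> list:
    # The tracking URL comes from the session-initialization response;
    # a real player would call this roughly every 30 seconds.
    with urlopen(tracking_url) as resp:
        return parse_nonlinear_avails(json.load(resp))
```

Each returned entry gives the player enough information to render an overlay at the right time for the right duration.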

    Applications & Business Impact

    TrackIt TV highlights how a demonstration pipeline can deliver both technical reliability and business value. By combining on-premises encoding, cloud-based orchestration, and advanced ad insertion, the workflow demonstrates a flexible and scalable model suitable for:

    • Live sports streaming
    • Concerts and festivals
    • News and broadcast television
    • Corporate events and conferences

    Organizations can benefit by:

    • Reducing operational costs through simplified infrastructure management.
    • Expanding revenue streams with SSAI and non-linear ad formats.
    • Delivering live content globally with minimal latency using Amazon CloudFront.

    Conclusion

    TrackIt TV demonstrates how a fully functional live streaming workflow can be achieved by integrating AWS Elemental MediaLive Anywhere and AWS Elemental MediaTailor. The pipeline preserves broadcast-grade quality, leverages the scalability of the cloud, and enables advanced, personalized ad monetization. This model provides a future-ready framework for broadcasters and enterprises seeking to deliver premium live streaming experiences while optimizing revenue potential.


    How AWS Elemental MediaTailor Uses SCTE-35 for Server-Side Ad Insertion https://trackit.io/how-aws-mediatailor-uses-scte-35-markers-for-ssai/ Mon, 08 Sep 2025 13:23:17 +0000 https://trackit.io/?p=15348 Written by Nathan de Balthasar, Software Engineer at TrackIt

    Digital ad insertion in streaming workflows relies on precise signals that indicate when and where ads should play. Without these markers, platforms cannot synchronize breaks, resulting in poor viewer experiences and ineffective monetization. SCTE-35 has become the industry standard for defining such ad opportunities, ensuring compatibility across both broadcast and OTT environments.

    MediaTailor’s Role in Translating SCTE-35 for Seamless Ad Insertion

    The first article in this series, SCTE-35 Standard: Core Signaling for Digital Ad Insertion, introduced SCTE-35 and explained how it defines cue messages for marking ad breaks, segments, and content boundaries across broadcast and streaming workflows. It also set the stage by showing how services like AWS Elemental MediaTailor leverage SCTE-35 to enable server-side ad insertion (SSAI).

    This second article focuses on MediaTailor itself. It explores how the service ingests SCTE-35 markers, interprets splice_insert and time_signal messages, and translates them into HLS and DASH-compatible ad markers. Practical considerations for choosing between message types, configuring Channel Assembly, and validating ad break behavior will also be covered.

    Inserting Ads

    There are two primary ways to insert ads in AWS Elemental MediaTailor: using time_signal messages or splice_insert messages. Both originate from SCTE-35 but differ in configuration requirements, signaling style, and downstream playback implications. The MessageType property in Channel Assembly determines which type MediaTailor applies for an ad break.

    splice_insert

    A splice_insert message explicitly marks the beginning of an ad break, either immediately or at a scheduled splice point, and may also define an end or duration. This approach is common in legacy broadcast and cable workflows, where deterministic start/stop signals are required to switch between linear content and ads.

    In MediaTailor Channel Assembly, setting the ad break’s MessageType to SPLICE_INSERT ensures that the SCTE-35 payload is translated into enhanced HLS or DASH ad markers.

    time_signal

    A time_signal message is more typical in modern OTT SSAI workflows. Rather than relying on broadcast-style splice commands, it associates a scheduled event with a presentation timestamp. When paired with segmentation descriptors, time_signal messages carry richer metadata, such as:

    • segmentation_type_id (e.g., program start, provider ad, distributor ad)
    • segmentation_duration (length of the ad break)
    • content identifiers (for tracking or uniqueness)

    In HLS, MediaTailor writes this metadata into #EXT-X-DATERANGE tags. In DASH, it becomes part of the EventStream signaling.

    HLS Ad Markup Types in MediaTailor

    Depending on whether time_signal or splice_insert is used, MediaTailor generates different HLS ad markers.

    Daterange Markup (time_signal style)

    When time_signal is configured, MediaTailor emits #EXT-X-DATERANGE tags in the manifest. These tags include attributes for start time, duration, SCTE-35 payload, and optional identifiers.
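To make the tag structure concrete, the small parser below extracts the attribute list from a DATERANGE line. The sample tag and its SCTE35-OUT payload are fabricated for the example; real MediaTailor output carries additional attributes.

```python
import re

def parse_daterange(tag: str) -> dict:
    """Parse the attribute list of an #EXT-X-DATERANGE tag into a dict.
    Handles quoted string values and bare numeric/hex values."""
    attrs = {}
    body = tag.split(":", 1)[1]  # drop the "#EXT-X-DATERANGE" prefix
    for key, quoted, bare in re.findall(r'([A-Z0-9-]+)=(?:"([^"]*)"|([^,]+))', body):
        attrs[key] = quoted if quoted else bare
    return attrs

# Illustrative tag modeled on time_signal-style output (payload fabricated):
sample = ('#EXT-X-DATERANGE:ID="1",START-DATE="2025-01-01T00:00:00Z",'
          'PLANNED-DURATION=30.0,SCTE35-OUT=0xFC302500')
```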


    SCTE-35 Enhanced Markup (splice_insert style)

    When SCTE-35 Enhanced is selected in MediaTailor (based on splice_insert), the manifest includes a richer set of tags:

    • #EXT-X-CUE-OUT – Marks the start of an ad break, with an optional duration
    • #EXT-X-ASSET – Appears alongside #EXT-X-CUE-OUT. It carries metadata about the ad break, including the Conditional Access Identifier (CAID)
    • #EXT-OATCLS-SCTE35 – Contains the raw SCTE-35 payload in base64 form
    • #EXT-X-CUE-OUT-CONT – Appears on each media segment during the break, showing elapsed and remaining time
    • #EXT-X-CUE-IN – Marks the return to program content immediately after the break

    This style mirrors legacy broadcast behavior and is often preferred when interoperability with third-party players or legacy workflows is required.
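A minimal sketch of how a downstream tool might pair these cue tags is shown below. The manifest excerpt is fabricated, and the exact duration syntax after #EXT-X-CUE-OUT: can vary by configuration.

```python
def find_ad_breaks(manifest_lines: list) -> list:
    """Pair #EXT-X-CUE-OUT / #EXT-X-CUE-IN tags, counting the media
    segments that fall inside each ad break."""
    breaks, current = [], None
    for line in manifest_lines:
        if line.startswith("#EXT-X-CUE-OUT:"):
            current = {"duration": float(line.split(":", 1)[1]), "segments": 0}
        elif line.startswith("#EXT-X-CUE-IN"):
            if current:
                breaks.append(current)
                current = None
        elif current and not line.startswith("#"):
            current["segments"] += 1  # a segment URI inside the break
    return breaks

# Illustrative SCTE-35 Enhanced style excerpt (segment names fabricated):
excerpt = [
    "#EXT-X-CUE-OUT:30.0",
    "ad_seg_1.ts",
    "#EXT-X-CUE-OUT-CONT:ElapsedTime=6,Duration=30",
    "ad_seg_2.ts",
    "#EXT-X-CUE-IN",
    "content_seg_9.ts",
]
```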

    Practical Considerations

    • When to use time_signal: Best for OTT-native SSAI, where richer metadata (e.g., segmentation descriptors) improves ad decisioning and analytics.
    • When to use splice_insert: Useful for compatibility with older systems, or when deterministic cue-out/cue-in markers are expected by downstream players.
    • Player support: Not all players interpret #EXT-X-DATERANGE consistently, while #EXT-X-CUE-OUT/#EXT-X-CUE-IN enjoy wider historical support. Cross-environment testing is essential.

    How to Configure Ad Breaks in MediaTailor

    Ad scheduling in MediaTailor begins by creating a Channel that provides linear playback from live or VOD sources. A channel stitches together programs in sequence, and ad breaks are configured at the program level.

    Step 1: Create a Channel

    Define a MediaTailor Channel using the AWS Management Console, AWS CLI, or CloudFormation. A channel serves as the container for programs and ad scheduling.

    Step 2: Add Programs

    Once the channel exists, add programs, which represent the primary content (live feeds, VOD assets, or other media). Each program can have one or more ad breaks configured within its timeline.

    Step 3: Define Ad Breaks


    For each program, configure ad breaks by providing:

    • Ad slate (ad source): A VOD asset or fallback content that plays if no ad is returned by the ad decision server (ADS).
    • Start time: The offset within the program where the break should occur.
    • Duration (optional): Specifies how long the break should run. If not set, MediaTailor uses the break duration provided by the SCTE-35 marker.

    Step 4: Choose Message Type


    The MessageType property determines how SCTE-35 markers are expressed in the channel’s manifests:

    • SPLICE_INSERT → MediaTailor emits SCTE-35 Enhanced markup (#EXT-X-CUE-OUT, #EXT-X-CUE-IN, etc.).
    • TIME_SIGNAL → MediaTailor emits #EXT-X-DATERANGE tags in HLS or EventStream entries in DASH, with richer metadata support.

    Example (console configuration excerpt):

[Screenshot: console ad break configuration]

    This configuration schedules a time_signal ad break 10 minutes into the program, with a slate asset used if no ads are available.
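The same ad break can be expressed programmatically. The sketch below uses the boto3 MediaTailor `create_program` call; the channel, program, source-location, and slate names are placeholders rather than values from the article.

```python
def build_time_signal_ad_break(offset_minutes: int, slate_source_location: str,
                               slate_vod_source: str) -> dict:
    """AdBreaks entry for MediaTailor create_program: a TIME_SIGNAL break
    with a slate fallback if the ADS returns no ads."""
    return {
        "MessageType": "TIME_SIGNAL",
        "OffsetMillis": offset_minutes * 60_000,
        "Slate": {
            "SourceLocationName": slate_source_location,
            "VodSourceName": slate_vod_source,
        },
    }

def schedule_program(channel: str, program: str, ad_break: dict):
    import boto3  # lazy import keeps the builder above unit-testable
    client = boto3.client("mediatailor")
    # Source-location and VOD-source names below are placeholders.
    client.create_program(
        ChannelName=channel,
        ProgramName=program,
        SourceLocationName="my-source-location",
        VodSourceName="my-program-asset",
        AdBreaks=[ad_break],
        ScheduleConfiguration={"Transition": {"Type": "RELATIVE",
                                              "RelativePosition": "AFTER_PROGRAM"}},
    )
```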

    Step 5: Validate and Test

    • Manifests: Inspect HLS or DASH outputs to confirm correct ad marker insertion.
    • Fallbacks: Verify the slate asset plays when no ad decision is returned.
    • Monitoring: Use Amazon CloudWatch metrics and logs to confirm ad break signaling and playback behavior.

    Conclusion

    AWS Elemental MediaTailor acts as the bridge between traditional SCTE-35 signaling and modern OTT delivery. By translating splice_insert and time_signal messages into HLS and DASH ad markers, it enables standards-compliant SSAI that functions reliably across a wide range of players.

    The distinction is clear: splice_insert provides deterministic start/stop cues, while time_signal delivers richer metadata for advanced ad decisioning and analytics. Selecting the appropriate approach and configuring ad breaks effectively ensures smooth and personalized ad experiences at scale.

For implementation details, see the AWS documentation: Using AWS Elemental MediaTailor to create linear assembled streams.


    Building Cyber Resilience Through Capture the Flag (CTF) Events https://trackit.io/capture-the-flag-ctf-events/ Tue, 02 Sep 2025 06:48:40 +0000 https://trackit.io/?p=15313 Capture the Flag (CTF) events are hands-on cybersecurity exercises where participants are challenged to identify and exploit vulnerabilities in simulated environments. Unlike traditional training, a CTF places participants in the attacker’s seat, helping them understand how systems are breached and what defensive practices are needed to prevent real-world incidents.

    For enterprises, CTFs deliver tangible benefits:

    • Improved security awareness across engineering and IT teams
    • Exposure to common attack vectors in controlled settings
    • Team-building through collaborative problem-solving
    • Insights into weaknesses that may exist within internal practices

    How a TrackIt CTF Event is Organized

    TrackIt designs and facilitates tailored CTF events, delivered on-site but run against a private cloud-based platform. Each event follows a structured flow:

    1. Kickoff Briefing: Introducing the event, rules, and objectives.
    2. Live Hacking Session: Participants exploit vulnerable assets in the sandbox environment.
    3. Debrief & Retrospective: Review of vulnerabilities, remediation strategies, and best practices.

    Events can run from 2–3 hours for software teams to 5–6 hours for security-focused groups. Competitive modes with scoring and prizes are also available.

    Challenge Types

TrackIt’s CTF events offer hands-on, practical exposure to a wide range of enterprise technologies and attack scenarios. The challenges are tailored to reflect real-world systems, giving participants a realistic environment to sharpen their skills from both attack and defense perspectives. Challenges span multiple domains:

    • AWS Cloud Environments: Participants engage directly with services including EC2, S3, RDS, IAM, API Gateway, EKS, FSx, and more, facing scenarios such as privilege escalation, misconfigured storage, insecure APIs, container vulnerabilities, and data exfiltration.
    • Windows Systems: Exercises often focus on Active Directory exploitation, credential harvesting, and privilege escalation in enterprise-style Windows environments.
    • Linux Systems: Challenges include enumeration, service exploitation, lateral movement, and privilege escalation on hardened Linux machines.
    • Forensics: Participants analyze compromised environments, logs, and binaries to trace attacker activity, reconstruct timelines, and extract indicators of compromise.

    Customizable Events

    A key advantage of TrackIt’s CTF is the customizability of challenge levels based on participants’ expertise:

    • For Software Engineers (Introductory Level): Challenges are designed to be accessible and require only lightweight tools such as Nmap and the browser inspector. The goal is to help participants grasp the fundamentals of scanning, probing, and identifying vulnerabilities without needing deep security expertise.
    • For Security Engineers (Advanced Level): Challenges are more complex and require specialized tools, including Nmap, Burp Suite, Postman, Subfinder, AWS CLI, boto3, Scout Suite, Trufflehog, strings, and others. Participants apply advanced penetration testing techniques, exploit chain building, and cloud security analysis in a controlled environment.

    How Vulnerabilities Are Exploited

    CTF challenges cover a wide spectrum of attack types and misconfigurations commonly encountered in enterprise environments. Examples include:

    • SQL Injection: Exploiting unsanitized database queries
    • SSRF (Server Side Request Forgery): Gaining access to internal resources via manipulated requests
    • XXE (XML External Entity Injection): Extracting data through insecure XML parsing
    • Privilege Escalation: Elevating access from a standard user to administrative control
    • Misconfigured IAM or Policies: Abusing excessive permissions for unauthorized access
    • Insecure Data Exposure: Exploiting weak data storage or transfer mechanisms:
      • Misconfigured S3 buckets that unintentionally allow public access
      • Unencrypted database backups or snapshots (e.g., RDS, DynamoDB) exposed to unauthorized users
      • Overly broad access controls (IAM policies, role assumptions) granting unnecessary access to sensitive assets
      • Media-specific transfer points like open Aspera watchfolders or poorly secured FTP servers

    Each challenge mirrors real-world risks while staying fully contained in a secure environment.
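As a self-contained taste of the SQL injection challenge class, the snippet below contrasts a vulnerable string-interpolated query with its parameterized fix. It uses an in-memory sqlite3 database and is not code from the actual CTF platform.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def vulnerable_lookup(name: str):
    # Unsanitized string interpolation: the classic SQL injection target.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def safe_lookup(name: str):
    # Parameterized query: the driver escapes the value, defeating injection.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"  # dumps every row through the vulnerable path
```

The same tautology-based payload that leaks the whole table through `vulnerable_lookup` returns nothing through `safe_lookup`, which is exactly the remediation discussed in the retrospectives.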

    CTF Event Delivery Specifics

    • The Platform: The event runs on ctf.trackit.io, TrackIt’s private, purpose-built platform designed to simulate realistic attack surfaces in a secure environment. It provides the infrastructure, scenarios, and tracking needed to deliver an engaging and controlled hacking experience.
    • Participant Onboarding: All participants are invited through email and can join the event directly without installing any additional software. This frictionless process allows teams to focus on the challenges themselves rather than technical setup.
    • Customizable Scenarios: Each event is adapted to reflect the client’s own technology stack, whether it involves AWS services, containerized applications, databases, or operating systems. This ensures that the vulnerabilities explored are directly relevant to the participants’ day-to-day environments.
    • Flexible Formats: The CTF can be organized as a competitive exercise with scoring and prizes or as a collaborative session where teams work together. This flexibility makes it equally suitable for team-building, internal training, or focused security workshops.
    • Educational Retrospective: Once the challenges are completed, TrackIt provides a detailed debrief. The platform integrates with AWS-native monitoring tools such as GuardDuty, CloudWatch, and VPC Flow Logs, ensuring participants see both the offensive and defensive perspectives. Each session concludes with step-by-step remediation strategies that are practical, field-tested, and directly applicable to real-world production environments.

    Strasbourg Capture the Flag Event

    TrackIt recently organized a short-format CTF in Strasbourg, France, designed for students and junior software engineers. The objective was to introduce participants to the basics of ethical hacking, demonstrate how real exploitation techniques are carried out, and highlight the importance of secure coding and proper configuration practices.

    The event was built on a stack that included AWS resources and common web application components, providing a realistic playground for both reconnaissance and exploitation.

    Vulnerabilities Explored


    Participants were guided through a series of challenges that reflected common security flaws. They began with a basic Nmap scan to perform network reconnaissance, identifying open ports and potential attack vectors. The session then moved to SQL injection, where unvalidated input could be manipulated to expose or compromise data. Another challenge highlighted XML External Entity (XXE) attacks, showing how insecure XML parsing can be abused to extract sensitive information. Finally, the participants explored metadata server abuse within AWS, learning how improperly secured metadata endpoints can provide access to critical credentials.

    Remediation Methods

    The retrospective session connected each exploit to its corresponding defense strategy. For network scans, the discussion centered on blocking ICMP pings, restricting port access, and whitelisting trusted IPs. In the case of SQL injection, best practices included the use of ORM frameworks, application-layer validation, and AWS WAF SQL protection rules. For XXE attacks, the remediation involved disabling DTDs, enforcing secure XML parsing, and applying AWS WAF managed rule sets. Finally, to prevent metadata server abuse, the team demonstrated how enabling IMDSv2 on EC2 instances effectively mitigates the risk.
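The IMDSv2 remediation can be applied with a few lines of boto3. The instance ID below is a placeholder, and this sketch mirrors, rather than reproduces, what was demonstrated at the event.

```python
def imdsv2_enforcement_kwargs(instance_id: str) -> dict:
    """Arguments for EC2 ModifyInstanceMetadataOptions that require
    session tokens (IMDSv2) while keeping the metadata endpoint enabled."""
    return {
        "InstanceId": instance_id,
        "HttpTokens": "required",   # reject token-less IMDSv1 requests
        "HttpEndpoint": "enabled",
    }

def enforce_imdsv2(instance_id: str):
    import boto3  # lazy import keeps the helper above unit-testable
    ec2 = boto3.client("ec2")
    ec2.modify_instance_metadata_options(**imdsv2_enforcement_kwargs(instance_id))
```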

    Outcome

    By the end of the event, participants had gained practical, first-hand experience of how attackers approach and exploit vulnerable systems. The retrospective reinforced the lessons, bridging the gap between offensive techniques and defensive practices, and equipping attendees with actionable knowledge they could apply in their future engineering work.

    Conclusion

    Capture the Flag events are far more than simulated exercises—they are immersive, hands-on experiences that bridge the gap between theoretical knowledge and real-world security practice. CTFs equip participants with a first-hand understanding of how attackers think and operate, helping them identify vulnerabilities, anticipate potential exploits, and strengthen both individual and organizational defenses.

    By combining realistic environments, customizable challenge levels, and in-depth post-event retrospectives, TrackIt transforms learning into actionable insight. Teams leave not only with enhanced technical skills but also with a stronger culture of security awareness, making them better prepared to safeguard critical systems and data. 


    SCTE-35 Standard: Core Signaling for Digital Ad Insertion https://trackit.io/scte-35-standard-core-signaling-for-digital-ad-insertion/ Wed, 13 Aug 2025 15:13:20 +0000 https://trackit.io/?p=15302 Written by Nathan de Balthasar, Software Engineer at TrackIt

    Digital advertising in broadcast and streaming environments depends on precise signaling to identify when and where ads should be inserted into video content. Whether for traditional cable television, over-the-top (OTT) platforms, or hybrid delivery models, this signaling must be accurate, standardized, and interoperable across multiple systems and workflows. The SCTE-35 standard is one of the core technologies enabling this functionality. It defines how cue messages are embedded within video streams to mark ad breaks, program segments, and other content boundaries. 

    This first article in the series on digital ad insertion technologies introduces SCTE-35, explains its role within broader ad workflows, and sets the stage for understanding how services like AWS Elemental MediaTailor leverage it for server-side ad insertion (SSAI).

    What is SCTE-35?

    The SCTE-35 standard, formally known as the Digital Program Insertion Cueing Message, specifies how content providers and distributors can signal opportunities for advertising, content replacement, and program control. It is widely applied across various delivery formats, including QAM/IP, TV Everywhere (e.g., HBO Max, FOX Sports), and both live and time-shifted delivery (e.g., DVR, VOD services like MyCanal and FreeBox TV).

    SCTE-35 messages can signal:

    • Start and end points for ad breaks.
    • Placement of promotional or alternative content.
    • Program chapter markers and segment boundaries.
    • Content blackout instructions for regulatory compliance.

    Related Standards


    SCTE-35 works alongside other standards that address specific aspects of digital ad insertion:

    • SCTE-30: Splicing advertising into live QAM MPEG-2 Transport Streams.
    • SCTE-130-3: Enabling alternate content decisions for live and time-shifted delivery.
    • SCTE-214-1: Defining how SCTE-35 is carried in MPEG-DASH.
    • SCTE-224: Communicating event and policy information for distribution control.

    The SCTE-67 (2024) document outlines recommended practices for implementing SCTE-35 in digital program insertion workflows for cable.

    SCTE-35 Official standard: Digital Program Insertion Cueing Message

SCTE-35 Usage in HLS

    In HTTP Live Streaming (HLS) workflows, SCTE-35 messages can be used to trigger seamless ad insertion through playlist (m3u8 manifest) manipulation. Before content reaches the player, the manifest can be updated to include targeted ad segments, enabling smooth playback without buffering.

    Two primary HLS approaches for carrying SCTE-35 messages are:

    1. EXT-X-DATERANGE tags
    2. EXT-X-SCTE35 tags

    These methods can also be adapted for MPEG-DASH using SCTE-214-1. In practice, ad insertion can be done client-side (CSAI) or server-side (SSAI). CSAI offers granular control but can be complex to implement seamlessly, while SSAI, such as that provided by AWS Elemental MediaTailor, centralizes the process on the server side and simplifies integration with live or VOD workflows.
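To make the manifest-manipulation idea concrete, here is a deliberately simplified stitching sketch. A real SSAI service such as MediaTailor also rewrites timing, sequence numbers, and tracking, none of which is handled here; the segment names are fabricated.

```python
def stitch_ad(manifest_lines: list, ad_lines: list, after_segment: str) -> list:
    """Insert an ad block (bracketed by DISCONTINUITY tags) into an HLS
    media playlist after a given segment URI."""
    out = []
    for line in manifest_lines:
        out.append(line)
        if line == after_segment:
            out.append("#EXT-X-DISCONTINUITY")  # codec/timing reset into the ad
            out.extend(ad_lines)
            out.append("#EXT-X-DISCONTINUITY")  # reset back into program content
    return out

playlist = ["#EXTM3U", "#EXTINF:6.0,", "seg1.ts", "#EXTINF:6.0,", "seg2.ts"]
ad = ["#EXTINF:6.0,", "ad1.ts"]
```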

    The following documentation describes the ad insertion process using AWS Elemental MediaTailor.

    Server-Side Ad Insertion using MediaTailor

    AWS Elemental MediaTailor is a managed SSAI service that personalizes video streams by dynamically inserting ads based on viewer data and campaign targeting rules.

    How Ad Servers Fit In

    An ad server (ADS) determines which ads to deliver based on audience data, campaign parameters, and business rules. Based on a multitude of factors, including audience segments, budget, and timeline, the ad server calculates in real-time the best ads to load for a specific audience. Examples of Ad servers include Google Ad Manager (GAM), FreeWheel, or SpringServe.

    The following illustration describes the process of serving ads using an ADS. 


    Source: The Video Ad Serving Process 

    • The user visits a site with a video player, which sends a request to the publisher’s web server to retrieve the video content.
    • The server responds with code that tells the browser where to fetch the main video content and how to format it in the player window. The video player should support HTML5 video and VAST tags for communication with ad servers.
    • After the video content is fetched, the video player sends a VAST request to the publisher’s ad server to retrieve a video ad (or at least the advertiser’s ad markup). The publisher’s ad server also counts an impression.
    • The publisher’s ad server programmatically decides which ad to display in the ad space and sends back the chosen ad markup.
    • The ad markup loads in the video player and sends a request to the advertiser’s ad server to retrieve the video ad.
    • The advertiser’s ad server counts an impression and sends back a link to the video ad’s location so it can be displayed to the user in the video player. Most of the time, the video ad is hosted on a content delivery network (CDN).
    • The video player sends a request to the CDN, which returns the video ad file, and the video ad is shown to the user.

    VAST Requests

    For a video player to play an ad, it must send a VAST (Video Ad Serving Template) request to an ad server, specifying which ad should be played, how it should be played, and what information should be tracked during playback (e.g., CTR, exit or drop-off data, impressions).

    A VAST request is a simple HTTP request with a query string such as:

    http://www.example.com/?LR_PUBLISHER_ID=1331&LR_CAMPAIGN_ID=229&LR_SCHEMA=vast2-vpaid

    The server response includes details like creative type, ID, dimensions, asset location, and tracking URLs.
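Since a VAST request is just an HTTP GET, it can be sketched in a few lines of Python. The LR_* parameter names below mirror the example URL above and are not a universal ad-server API.

```python
# Sketch of building a VAST ad request: a plain HTTP GET whose query string
# carries publisher and campaign identifiers. Parameter names follow the
# example URL in the text, not a standardized ad-server interface.
from urllib.parse import urlencode, urlparse, parse_qs

def build_vast_request(base_url: str, publisher_id: int, campaign_id: int, schema: str) -> str:
    params = {
        "LR_PUBLISHER_ID": publisher_id,
        "LR_CAMPAIGN_ID": campaign_id,
        "LR_SCHEMA": schema,
    }
    return f"{base_url}?{urlencode(params)}"

url = build_vast_request("http://www.example.com/", 1331, 229, "vast2-vpaid")
query = parse_qs(urlparse(url).query)
print(url)
```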

    More information can be found here: What Are VAST, VPAID, SIMID, OM, VMAP, and MRAID?

    SSAI on MediaTailor

    The following diagram illustrates the end-to-end workflow of server-side ad insertion with MediaTailor, from the video player to the ad decision server (ADS).


    Source: How AWS Elemental MediaTailor Ad Insertion Works

    1. The player or CDN requests HLS or DASH content from MediaTailor, including parameters with viewer-specific information for ad personalization.
    2. MediaTailor forwards the request to the ADS, which selects an ad based on the viewer information and active campaigns. The ADS responds with URLs to the ad creatives in VAST or VMAP format.
    3. MediaTailor updates the content manifest to insert the selected ad URLs, transcoded to match the encoding characteristics of the original content.
    4. The personalized manifest is returned to the requesting player or CDN.
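The player side of this exchange can be sketched as follows. This is an illustrative helper assuming a manifest endpoint prefix obtained from the playback configuration; the prefix, origin path, and query parameter names here are hypothetical.

```python
# Sketch of how a player might address MediaTailor: the manifest endpoint
# prefix comes from the playback configuration, the origin manifest path is
# appended, and viewer-specific query parameters are forwarded for ad
# personalization. Prefix, path, and parameter names are hypothetical.
from urllib.parse import urlencode

def mediatailor_manifest_url(endpoint_prefix: str, origin_path: str, viewer_params=None) -> str:
    base = endpoint_prefix.rstrip("/") + "/" + origin_path.lstrip("/")
    return f"{base}?{urlencode(viewer_params)}" if viewer_params else base

url = mediatailor_manifest_url(
    "https://example.mediatailor.us-west-2.amazonaws.com/v1/master/my-config",
    "live/index.m3u8",
    {"deviceType": "ctv", "userId": "u-42"},  # hypothetical targeting parameters
)
print(url)
```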

    Conclusion

    SCTE-35 is a foundational signaling standard for modern video advertising workflows, enabling precise ad insertion and content control across broadcast, cable, and OTT platforms. Understanding its role is critical for designing scalable, interoperable ad-supported streaming architectures.

    Organizations looking to implement or optimize SSAI solutions can benefit from leveraging AWS Elemental MediaTailor’s native SCTE-35 integration, streamlining the delivery of personalized, high-quality ad experiences. TrackIt offers expertise in configuring and integrating these workflows, ensuring that technical execution aligns with both operational goals and monetization strategies.

    Coming Next – Part 2: How AWS MediaTailor Handles SCTE-35 Markers

    The next article in this series will take a deep dive into how AWS MediaTailor ingests SCTE-35 markers, interprets cue messages, and manipulates manifests to enable real-time server-side ad insertion. It will include architectural diagrams, example workflows, and best-practice recommendations for reliable, scalable deployments.

    About TrackIt

    TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

    We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

    Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.

    ]]>
    Live Video Encoding with AWS: Elemental MediaLive vs. MediaLive Anywhere https://trackit.io/live-video-encoding-elemental-medialive-anywhere/ Mon, 11 Aug 2025 08:18:52 +0000 https://trackit.io/?p=15264 Written by Jules Klakosz, DevOps Engineer & Mathis Lorenzo, Software Engineer at TrackIt

    Live video streaming workflows can vary widely depending on production environments, connectivity constraints, and latency requirements. At the encoding stage, AWS offers two primary approaches: AWS Elemental MediaLive, a fully cloud-based service, and AWS Elemental MediaLive Anywhere, a hybrid model that combines on-premises encoding with cloud-based control and monitoring. Understanding the differences between these options is essential for selecting the right architecture for specific broadcast and streaming needs.

    MediaLive vs. MediaLive Anywhere

    AWS Elemental MediaLive is a cloud-based live video processing service that enables broadcasters, content creators, and streaming platforms to encode, package, and deliver high-quality live video streams at scale. Its primary function is to transform incoming video feeds (from cameras, encoders, or other sources) into multiple output formats and bitrates suitable for distribution across different devices and networks.

    AWS Elemental MediaLive Anywhere is a feature of AWS Elemental MediaLive that enables live video encoding to be performed on-premises while using the AWS Cloud for configuration, control, and monitoring. With MediaLive Anywhere, video is processed locally on dedicated appliances, typically provided by AWS Partners, allowing broadcasters to reduce video transport costs, lower latency, and maintain direct access to on-premises video sources such as SDI. This hybrid architecture is ideal for workflows where bandwidth is limited or where real-time processing is critical. While the encoding happens at the edge, all management tasks are handled in the cloud, ensuring centralized control and seamless integration with other AWS Media Services.

    Feature / Capability | AWS Elemental MediaLive | AWS Elemental MediaLive Anywhere
    Encoding Location | Fully cloud-based | On-premises (via certified appliance)
    Management & Control | Cloud (AWS Console or API) | Cloud (AWS Console or API)
    Hardware Required | No | Yes (AWS Partner appliance or approved hardware)
    Source Integration | IP sources (RTMP, RTP, HLS, MediaConnect, etc.) | Local sources (SDI)
    Ideal Use Case | Cloud-native live streaming workflows | Hybrid workflows, remote production, low-bandwidth environments
    Latency & Bandwidth | Dependent on internet connection | Lower latency, reduced bandwidth usage for video contribution
    Scalability | Highly scalable via AWS infrastructure | Scalable with hybrid deployments
    Best For | OTT platforms, broadcasters with cloud-first strategies | Production teams requiring on-prem control with cloud flexibility

    Core Components of a MediaLive Anywhere Deployment

    A MediaLive Anywhere deployment is built around several key infrastructure components that work together to run and manage live video channels on-premises. These components (networks, clusters, and nodes) form the core structure of the system:

    • Networks represent local network environments, such as studios or production facilities, where MediaLive Anywhere appliances are physically installed and connected. They define the communication boundaries between the on-premises hardware and AWS.
    • Clusters are logical containers that group encoding appliances (nodes) and the channels they run. They organize deployments, enable efficient resource management, and support high availability.
    • Nodes are the physical on-premises appliances—servers or devices—that perform the live video encoding. Each node runs one or more MediaLive channels and connects to AWS for centralized control and monitoring.

    As illustrated in the diagram above, an AWS Elemental MediaLive Anywhere deployment includes:

    • Networks (bright blue), representing the local environments where appliances are installed.
    • Clusters (blue) group channel placement groups, nodes, and channels to streamline organization and resource allocation. 
    • Nodes (green) are the physical encoding appliances, typically provisioned to meet peak channel demand with additional units for resiliency. 
    • Channel placement groups (yellow) organize channels within a cluster, while channels (orange) are the individual MediaLive channels running on MediaLive Anywhere nodes.
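The containment relationships above can be modeled as plain data structures for illustration. This mirrors the resource hierarchy described here, not the MediaLive API itself: a network hosts clusters, a cluster contains nodes and channel placement groups, and placement groups hold channels.

```python
# Illustrative model of the MediaLive Anywhere resource hierarchy
# (not the MediaLive API): network > cluster > nodes / placement groups > channels.
from dataclasses import dataclass, field

@dataclass
class PlacementGroup:
    name: str
    channels: list = field(default_factory=list)  # MediaLive channels

@dataclass
class Cluster:
    name: str
    nodes: list = field(default_factory=list)             # encoding appliances
    placement_groups: list = field(default_factory=list)

@dataclass
class Network:
    name: str
    clusters: list = field(default_factory=list)

studio = Network("studio-lan")
cluster = Cluster("prod-cluster", nodes=["node-1", "node-2"])
cluster.placement_groups.append(PlacementGroup("live-events", channels=["channel-a"]))
studio.clusters.append(cluster)
print(studio.clusters[0].placement_groups[0].channels)
```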

    Use Cases for AWS Elemental MediaLive Anywhere

    While the feature comparison provided above highlights the architectural differences between MediaLive and MediaLive Anywhere, understanding when to choose one over the other depends heavily on the context of your live video workflows. MediaLive Anywhere is particularly well-suited for hybrid production environments where factors such as limited connectivity, local signal acquisition, or latency-sensitive delivery come into play. Below are several concrete use cases where MediaLive Anywhere offers clear operational and technical advantages over a fully cloud-based MediaLive deployment.

    Use Case #1: Remote Broadcast Production with Limited Bandwidth

    Example: A live sports event is being produced in a rural area or temporary venue (e.g., a marathon, cycling race, or mountaintop ski event) with limited or unreliable internet.

    Why MediaLive Anywhere: Encoding occurs locally on-site, sending only the compressed, broadcast-ready stream to the cloud. This significantly reduces the required upstream bandwidth compared to transmitting raw video to MediaLive in the cloud.

    Use Case #2: Studio or Control Room with Existing SDI Infrastructure

    Example: A traditional broadcast studio or control room that already uses SDI (Serial Digital Interface) video routing systems.

    Why MediaLive Anywhere: Local appliances can ingest SDI directly without needing to convert to IP for cloud upload. This reduces complexity and lets teams keep using existing infrastructure.

    Use Case #3: Live Event Production with Ultra-Low Latency Needs

    Example: A financial news outlet producing a real-time live stream of market reactions and trading commentary.

    Why MediaLive Anywhere: Local encoding eliminates round-trip latency to the cloud for initial video processing, providing faster stream turnaround and lower latency (around 5 seconds).

    Use Case #4: Pop-Up or Mobile Production Units

    Example: A mobile news van or production trailer covering live events.

    Why MediaLive Anywhere: Appliances inside the van can handle encoding on-site with minimal cloud reliance, perfect for mobile or pop-up workflows.

    Hardware and Software Setup for MediaLive Anywhere

    Hardware Setup

    To establish a fully functional MediaLive Anywhere node, a dedicated physical server must be assembled according to AWS’s published hardware requirements. Specifications and compatibility details are outlined in the official AWS documentation, ensuring optimal performance and support for high-quality video encoding.

    For this implementation, the Minis Forum MS-01 was selected as the host platform.

    By design, the MS-01 lacks native support for SDI capture. As noted in the AWS MediaLive Anywhere compatibility matrix, only AJA PCI-Express cards are currently certified. Accordingly, an AJA Kona 1-S card—the sole option validated for use with the Minis Forum MS-01—was procured.

    The Kona 1-S card was installed into the MS-01’s PCI-Express slot to complete the hardware configuration.

    With the capture node assembled, an SDI-capable camera was acquired to provide video input.

    An accompanying SDI cable was also obtained to complete the signal-chain setup.

    With these components in place, the system was ready for the installation of the MediaLive Anywhere software stack and end-to-end video ingestion validation, forming the basis for a scalable, cloud-connected live streaming workflow compliant with AWS guidelines.

    Software Setup

    An appropriate operating system must be installed on the newly assembled node. Rocky Linux 9.5 was selected due to its compatibility with the MediaLive Anywhere software stack. Although Red Hat Enterprise Linux (RHEL) 9.5 enjoys official support, its licensing requirements led to the choice of Rocky Linux as a cost-effective alternative.

    To support the installation of AJA drivers, Secure Boot was disabled in the BIOS. After booting into Rocky Linux, essential development packages—kernel-headers and kernel-devel—were installed to meet driver build requirements. A system reboot completed the kernel module installation, enabling the node to host the MediaLive Anywhere components.

    Internet connectivity was verified to allow integration with AWS services. In the AWS Management Console, the node was registered to a dedicated MediaLive Anywhere cluster. The AWS-provided setup script was executed on the node, automatically deploying all required packages and services.


    A MediaLive channel was then created, configured to ingest SDI input and deliver an RTMP output to YouTube. Once the channel was successfully initialized, a live stream from the TrackIt Strasbourg office was established and verified as operational.


    Conclusion & Next Steps

    Deploying AWS Elemental MediaLive Anywhere requires careful planning across both hardware and software domains, from selecting compatible appliances and SDI capture cards to configuring operating systems, drivers, and AWS service integrations. When implemented correctly, this hybrid architecture enables broadcasters and content providers to optimize bandwidth usage, reduce latency, and maintain direct access to on-premises video sources while leveraging the scalability and flexibility of AWS Media Services.

    TrackIt specializes in designing, deploying, and optimizing MediaLive Anywhere workflows. With expertise in broadcast infrastructure, cloud integration, and AWS Media Services, TrackIt can guide organizations through every stage of the process—from hardware selection and network configuration to channel setup and performance tuning—ensuring a seamless transition to a reliable, production-ready live streaming environment.


    ]]>
    How to Deploy Avid Software on AWS: Installation Guide https://trackit.io/avid-on-aws-installation-guide/ Tue, 05 Aug 2025 05:38:53 +0000 https://trackit.io/?p=15244 Written by Chris Koh, DevOps Engineer at TrackIt

    As editorial and post-production teams continue moving to the cloud, Avid software, including NEXIS, Media Composer, and Pro Tools, offers scalable, high-performance solutions for collaborative media workflows. AWS provides a platform to deploy these tools quickly and securely, eliminating the need for on-premises hardware.

    This guide outlines the practical steps required to deploy Avid software on AWS using the official 2025.5.0 release and AWS CloudFormation. It covers how to select the appropriate deployment mode, configure system components, and install Avid clients across platforms.

    Choosing a Deployment Mode

    Avid NEXIS Cloud Storage supports three deployment modes, each tailored to different networking and security requirements:

    Public Mode

    • Deployed into a public subnet with direct internet access
    • Simplest to set up and ideal for quick testing or demos
    • No S3 endpoints or NAT required

    Online (NAT) Mode

    • Deployed into a private subnet with outbound internet via a NAT Gateway
    • Offers better security posture than public mode
    • Requires NAT and proper routing configuration
    • Suitable for most production environments

    Private (Offline) Mode

    • Fully isolated private subnet with no internet access
    • Requires VPC endpoints for S3 and other AWS services
    • Highest security, but more complex setup
    • Ideal for compliance-sensitive workloads

    Each mode affects network design, licensing method, and deployment automation. Avid provides preconfigured CloudFormation templates to support each.

    Prerequisites and Planning

    Before launching the CloudFormation stacks, prepare the AWS environment and gather required resources.

    IAM Permissions

    The user launching the templates should have full permissions for:

    • EC2 Instances and Elastic IPs
    • IAM roles and instance profiles
    • S3 buckets and endpoints
    • VPC components (subnets, security groups, endpoints)
    • CloudFormation and Systems Manager

    AWS Region Support

    Avid software supports deployment in these AWS regions:

    us-east-1, us-east-2, us-west-1, us-west-2, ca-central-1, ca-west-1,
    mx-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1,
    eu-north-1, eu-south-1, eu-south-2, me-central-1, ap-northeast-1,
    ap-northeast-2, ap-southeast-1, ap-southeast-2, ap-south-1, ap-east-1

    Requirements:

    • A VPC with at least one /24 subnet
    • Internet or NAT Gateway (for public/online modes)
    • VPC endpoints for S3, SSM (for offline/private mode)

    Licensing and Entitlements

    Requirements:

    • Avid NEXIS Cloud license
    • Your Entitlement ID (Ent ID)

    Optional licenses (if using additional components):

    • Media Composer
    • Pro Tools
    • Leostream
    • HP Anyware

    These tools provide editing, audio production, and remote access capabilities and require their own license activation.

    CloudFormation Planning

    Prepare the following:

    • VPC and subnet IDs
    • EC2 instance types
    • Optional software components
    • Security groups
    • Key pair (for SSH or RDP access)

    Deploying the NEXIS System

    The NEXIS “System” refers to the System Director, Media Packs, and underlying infrastructure.

    Launch the System Stack

    Use the Avid CloudFormation template. Configure:

    • Deployment mode
    • Subnet and VPC IDs
    • System Director instance type (c5.4xlarge recommended)
    • Entitlement ID
    • Storage backend:
      • EBS (up to 500 Gbps throughput, usable in all AZs)
      • S3 Express (800 MBps, limited region/AZ support)
      • S3 Standard (90 MBps, available in all regions)

    Networking Requirements

    • Public mode: Requires Internet Gateway
    • Online mode: Requires NAT Gateway
    • Private mode: Requires VPC endpoints (S3, SSM, EC2 messages)

    Once launched, the System Director’s IP is available in CloudFormation outputs.

    🔗 View the latest Avid CloudFormation templates and AMIs

    Deploying the NEXIS Client

    Use the Avid-provided CloudFormation template to deploy a client in the same region and VPC.

    Client Configuration

    • EC2 instance type: t3.xlarge or GPU (e.g., g4dn.xlarge)
    • OS: Windows, Amazon Linux 2, or AL2023
    • Optional installs:
      • Media Composer
      • Leostream or HP Anyware
    • Key pair for SSH or RDP access
    • Same subnet and security group as the System Director

    Clients can connect over NICE DCV or a remote desktop broker (Leostream or HP Anyware).

    System Setup

    Once the System Director is live:

    Access the NEXIS Management Console

    • Open the browser at the System Director’s IP
    • Log in with admin or your designated admin account

    Activate Licensing

    • Go to System Settings → License
    • Enter the Entitlement ID
    • Activation may use AWS SSM or the internet, depending on the selected deployment mode

    Create File System

    • Select a Media Pack and create a file system
    • Bind the file system to the Media Pack
    • Set file system name and storage type

    Create Workspaces

    • Create logical volumes for projects or departments
    • Assign quotas and access permissions
    • Mount workspaces from connected clients

    Installing Avid Client Software

    Client software enables Media Composer, Pro Tools, and NEXIS storage access.

    Windows Clients

    • Automatically installed if selected during stack launch
    • Includes NEXIS Client Manager, and optionally Media Composer or Pro Tools

    macOS Clients

    • Download the installer manually from Avid
    • Install Media Composer, Pro Tools, or NEXIS Client Manager
    • Connect to System Director using IP or DNS

    Linux Clients (Amazon Linux 2 / AL2023)

    Install via RPM:

        # Amazon Linux 2
        sudo yum install ./avidnexisclient-<version>.rpm

        # AL2023
        sudo dnf install ./avidnexisclient-<version>.rpm

    Then start the client service:

        sudo systemctl start avidnexisclient

    Edit /etc/avidnexis/nexis_client.conf to set your System Director address.

    Conclusion

    This guide outlined the key steps involved in deploying Avid software—specifically NEXIS, Media Composer, and Pro Tools—on AWS using the 2025.5.0 release and CloudFormation automation. From selecting the appropriate deployment mode to configuring network infrastructure and installing client components across platforms, each section is designed to support a streamlined and secure transition to cloud-based media workflows.

    By running Avid in the cloud, editorial and post-production teams gain flexibility, scalability, and remote collaboration capabilities that are difficult to achieve with traditional on-premises systems. AWS infrastructure enables rapid provisioning, centralized management, and access to high-performance resources on demand. This shift not only accelerates creative workflows but also reduces operational complexity and capital expenditure—marking a significant evolution in how media environments are deployed and maintained.

    Next Steps

    TrackIt, an AWS Advanced Tier Services Partner with the Media & Entertainment Competency, specializes in architecting and deploying cloud-based media workflows tailored to editorial, post-production, and broadcast teams. With deep expertise in Avid deployments, TrackIt can help ensure that environments are optimized for performance, cost, and long-term scalability.


    ]]>
    Avid in the Cloud: How AWS is Powering the Future of Post Production https://trackit.io/launching-avid-on-aws/ Tue, 29 Jul 2025 09:40:06 +0000 https://trackit.io/?p=15229 Written by Chris Koh, DevOps Engineer at TrackIt

    The media and entertainment industry is rapidly shifting toward the cloud, and post-production workflows are evolving with it. As production timelines tighten and creative teams spread across the globe, traditional on-premises edit bays are becoming too rigid and expensive to scale. Cloud-native editorial is quickly becoming the new standard.

    Avid, long regarded as the industry standard for video and audio post production, has now fully embraced the cloud. Its flagship tools—Media Composer, Pro Tools, and Avid NEXIS—are available as scalable, cloud-based solutions on AWS. For post-production houses, broadcasters, and studios, this opens the door to faster deployment, global collaboration, and simpler infrastructure management, all while maintaining the performance editors expect.

    What’s New: Avid on AWS

    Media Composer in the Cloud: Media Composer can now be launched directly from the AWS Marketplace as a managed SaaS solution. Users can spin up GPU-powered virtual workstations on demand and only pay for what they use.

    Pro Tools Integration: Pro Tools workflows are now better integrated with Media Composer in the cloud. Audio teams can work in sync with editors by sharing mix metadata and edit markers, reducing the need for manual roundtrips and relinking.

    Avid NEXIS Cloud Storage: NEXIS, traditionally a hardware-based shared storage system, is now cloud-enabled and runs on AWS infrastructure. This allows globally distributed teams to access high-performance shared storage without relying on on-premises hardware.

    Storage Backend Options

    There are multiple storage backends available for NEXIS in the cloud, each suited to different workloads and deployment scenarios:

    • Amazon EBS (Elastic Block Store): High performance and low latency, ideal for editorial workflows. Supports up to 500 Gbps throughput and can be deployed in Local Zones like Los Angeles to keep infrastructure close to creative teams.
    • Amazon S3 Express One Zone: Designed for fast access to large datasets with performance up to 800 MBps. Best for latency-sensitive workflows, but currently limited to a few regions: us-west-2, us-east-1, and us-east-2. Not available in every Availability Zone.
    • Amazon S3 Standard: Best for nearline or archive tier storage. Available in all regions and zones, but offers lower throughput at around 90 MBps. This is a great fit for projects that need broad geographic coverage without real-time performance demands.

    Benefits of Avid on AWS

    Faster Setup: Full editorial environments can be launched in minutes without the delays of shipping equipment or configuring workstations.

    Seamless Collaboration: Editors and audio teams can work from anywhere, accessing shared storage and project files as if they were in the same facility.

    Cost Efficiency: Cloud-based resources are used only when needed. Teams avoid the overhead of idle equipment during downtime.

    High Performance Editing: Media Composer runs on AWS instances equipped with NVIDIA GPUs, making real-time playback and high-resolution editing smooth and reliable.

    Remote Review and Approval: With tools like ClearView Flex, editors can share edits in real-time with directors, producers, or clients, regardless of their location.

    Technical Architecture

    A successful Avid deployment in AWS depends on proper network planning, scalable compute, fast storage, and secure access. The following outlines key architectural considerations.

    Example Architecture


    Networking and Subnet Design

    Prior to deploying Avid components in AWS, it is critical to structure the network appropriately.

    Subnet Sizing
    Use at least a /24 subnet for each Availability Zone. Avid components like Media Composer, Pro Tools, and NEXIS often require multiple IP addresses per workstation for control services, networked storage, and supporting agents. A /24 provides 256 IPs, offering enough room to scale.

    For deployments requiring environment separation by role (e.g., editors, storage, management), a /24 can be further divided into smaller /25 or /26 subnets within the same VPC.
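The subnet math can be verified with Python's standard ipaddress module: a /24 yields 256 addresses and splits cleanly into two /25s or four /26s. (AWS additionally reserves five addresses per subnet, so usable capacity is slightly lower than the raw count.)

```python
# Verify the /24 subnet sizing described above using the stdlib ipaddress
# module. The CIDR block is an example value.
import ipaddress

block = ipaddress.ip_network("10.0.1.0/24")
halves = list(block.subnets(new_prefix=25))    # two /25 subnets
quarters = list(block.subnets(new_prefix=26))  # four /26 subnets
print(block.num_addresses, len(halves), len(quarters))
```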

    Private Subnet Support
    When deploying into private subnets without internet access, it is necessary to provision VPC endpoints. These endpoints enable instances to reach essential AWS services such as Amazon S3 and AWS Systems Manager.

    Required endpoints include:

    • S3 Gateway Endpoint: Enables access to S3 buckets without a NAT Gateway
      • com.amazonaws.region.s3
    • Systems Manager Interface Endpoints: Used for connecting through Session Manager, patching, and automation
      • com.amazonaws.region.ssm
      • com.amazonaws.region.ssmmessages
      • com.amazonaws.region.ec2messages

    This allows secure communication with AWS services while keeping Avid workstations isolated from the public internet.
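As a small illustration, the endpoint service names listed above can be expanded for a given region with a helper like the following; the region value is an example.

```python
# Expand the VPC endpoint service names required for a private-subnet Avid
# deployment (per the list above) for a given region. The region is an example.
REQUIRED_ENDPOINTS = {
    "s3": "Gateway",
    "ssm": "Interface",
    "ssmmessages": "Interface",
    "ec2messages": "Interface",
}

def endpoint_service_names(region: str) -> dict:
    """Map fully qualified endpoint service names to their endpoint type."""
    return {f"com.amazonaws.{region}.{svc}": kind
            for svc, kind in REQUIRED_ENDPOINTS.items()}

print(endpoint_service_names("us-west-2"))
```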

    Compute

    Virtual Workstations
    Media Composer and Pro Tools run on Amazon EC2 G4dn or G5 instances with NVIDIA GPUs. These instances support real-time playback, multicam editing, and audio mixing. Editors typically connect using NICE DCV or Amazon WorkSpaces, depending on performance and cost needs.

    Storage

    Avid NEXIS in the cloud requires fast, shared storage. High-performance editorial work benefits from block-based storage with low latency, while nearline or archival needs can be met with object storage options. As outlined above, the choice of backend—whether EBS, S3 Express One Zone, or S3 Standard—should align with performance requirements, latency sensitivity, and regional availability.

    Remote Access

    Editors and audio engineers connect to their virtual workstations using one of two main options:

    • Amazon DCV (previously known as NICE DCV): A high-performance remote display protocol developed by AWS. Ideal for direct workstation access, it supports multi-monitor setups, 4K playback, USB passthrough, and low-latency streaming, and works well in tightly managed VPC environments with minimal overhead.
    • Studio-in-the-Cloud Solutions (e.g., Leostream, HP Anyware): These platforms provide full virtual studio orchestration, including connection brokering, user entitlement management, and cross-region scaling. They are well-suited for larger post-production environments where admins need to manage multiple users and projects at scale. These solutions also support hybrid deployments and can integrate with identity providers like Okta or Active Directory.

    Both options offer secure, high-fidelity remote access to GPU-powered Avid workstations, and the right choice depends on the level of control, flexibility, and automation required by the production environment.

    Collaboration and Review

    ClearView Flex can be integrated for real-time, frame-accurate remote review and approval sessions. Editors can stream content to clients or producers over secure, low-latency channels.

    Assets are often exchanged via Amazon S3 buckets configured for cross-team access or external sharing.
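One common pattern for external sharing is a bucket policy granting read access to a partner AWS account. The bucket name and account ID below are hypothetical placeholders, and in practice the principal would usually be scoped to a specific role rather than the account root:

```shell
# Hypothetical bucket and external account -- replace with real values
BUCKET="example-editorial-exchange"
EXTERNAL_ACCOUNT="111122223333"

# Grant the external account read/list access to shared assets
aws s3api put-bucket-policy --bucket "$BUCKET" --policy "{
  \"Version\": \"2012-10-17\",
  \"Statement\": [{
    \"Sid\": \"AllowExternalTeamRead\",
    \"Effect\": \"Allow\",
    \"Principal\": {\"AWS\": \"arn:aws:iam::${EXTERNAL_ACCOUNT}:root\"},
    \"Action\": [\"s3:GetObject\", \"s3:ListBucket\"],
    \"Resource\": [
      \"arn:aws:s3:::${BUCKET}\",
      \"arn:aws:s3:::${BUCKET}/*\"
    ]
  }]
}"
```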

    Management and Security

    • AWS Systems Manager: Enables patching, script automation, inventory, and Session Manager access
    • Amazon CloudWatch: Used for monitoring system health and performance
    • AWS IAM: Manages fine-grained access control to both infrastructure and editorial environments
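In day-to-day operations, Session Manager and patch automation from the list above can be driven from the AWS CLI. The instance ID below is a placeholder for a managed Avid workstation:

```shell
# Hypothetical instance ID -- replace with a managed Avid workstation
INSTANCE_ID="i-0123456789abcdef0"

# Open an interactive shell via Session Manager -- no SSH keys,
# open inbound ports, or public IPs required
aws ssm start-session --target "$INSTANCE_ID"

# Scan for missing patches using the AWS-managed patch baseline document
# (set Operation=Install to apply them during a maintenance window)
aws ssm send-command \
  --document-name "AWS-RunPatchBaseline" \
  --parameters 'Operation=Scan' \
  --targets "Key=InstanceIds,Values=${INSTANCE_ID}"
```

Because traffic flows through the SSM interface endpoints described earlier, these commands work even for workstations with no internet connectivity.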

    Conclusion

    Running Avid in AWS represents a significant shift in how media production environments are architected, deployed, and scaled. By moving traditionally on-premises tools like Media Composer, Pro Tools, and NEXIS into the cloud, broadcasters, post-production houses, and creative teams gain access to on-demand resources, global collaboration, and operational flexibility that were previously difficult to achieve.

    This evolution aligns with a broader industry trend toward cloud-based workflows, driven by the need for remote access, rapid scalability, and integration with modern services such as AI/ML, content distribution, and media supply chain automation. As cloud adoption accelerates, the ability to deploy Avid infrastructure in AWS positions creative teams to be more agile, cost-efficient, and future-ready in an increasingly digital-first media landscape.

    About TrackIt

    TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

    We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

    Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code, and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings, which provide 24/7 cloud infrastructure maintenance and support, we deliver complete solutions for the media industry.
