Written by Timothe Coloniel, Junior Software Engineer, TrackIt

Object detection is a fundamental task in computer vision that involves identifying and locating objects within an image. This technology has a wide range of applications, including autonomous driving, facial recognition, and surveillance. It enables applications to interpret visual data, allowing them to understand and interact with the physical world in a meaningful manner.

Recent advancements in machine learning and deep learning have greatly enhanced the accuracy and efficiency of object detection models. The following sections will guide readers through the process of deploying an object detection model, from structuring the data to setting up and using the endpoint.

1. Deploying the Endpoint

Deploying the endpoint involves structuring the necessary data, including the model intended for hosting and a "code" folder. The code folder should contain a requirements.txt file for installing the required Python libraries and an inference.py script to handle endpoint requests and utilize the model. All of this is then packaged as a tar.gz archive and stored in an Amazon S3 bucket.
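For reference, the files used in the commands below are expected to be laid out as follows (your_model.pt is a placeholder for the actual model weights):

your_model.pt
code/
├── inference.py
└── requirements.txt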

import subprocess

import boto3
from sagemaker import s3

# Package the model weights and the code/ folder into a single archive
bashCommand = "tar -cpzf model.tar.gz your_model.pt code/"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
process.wait()

sm_client = boto3.client(service_name="sagemaker")
runtime_sm_client = boto3.client(service_name="sagemaker-runtime")
account_id = boto3.client("sts").get_caller_identity()["Account"]
region = boto3.Session().region_name

# Upload the archive to S3; the returned S3 URI is used when creating the model
model_data = s3.S3Uploader.upload("model.tar.gz", "s3://your-bucket-name")

2. Creating the Model Object

The PyTorchModel is initialized from the sagemaker.pytorch module, which provides a preconfigured container specifically designed for deploying PyTorch models on SageMaker. During initialization, several key parameters are specified:

  1. Python Version: The desired version of Python is selected to ensure compatibility with the code and libraries used in the model.
  2. Environment Variables: These variables are set to configure the runtime environment for inference.py, allowing the script to operate correctly within the container.
  3. Entrypoint: The inference.py script is designated as the entrypoint, meaning it will be executed when the model is deployed. This script handles incoming requests to the endpoint and processes them using the model.
  4. Model Data: The path to the model_data is provided, which is the tar.gz file stored in the S3 bucket containing the model and any other required assets. This data is loaded into the PyTorchModel for deployment.

import sagemaker
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorchModel

sess = sagemaker.Session(default_bucket='your-bucket-name')
role = get_execution_role()

model = PyTorchModel(entry_point='inference.py',
                     model_data=model_data,
                     framework_version='2.1',
                     py_version='py310',
                     role=role,
                     env={'YOUR_ENV_VARIABLES': 'value'},
                     sagemaker_session=sess)


3. Setting up the Endpoint

Setting up the endpoint involves using the model's deploy method, where a name is provided, the desired instance type is selected, and a deserializer (a component that converts the endpoint's response into a usable format, such as JSON) is specified. The deploy method returns a predictor, which is then used to send requests to the endpoint. The endpoint will be running and ready for use once this setup is complete.

from sagemaker.deserializers import JSONDeserializer

predictor = model.deploy(initial_instance_count=1,
                         instance_type='instance_type',
                         endpoint_name='endpoint_name',
                         deserializer=JSONDeserializer())

Using the endpoint involves sending requests through the predictor.predict method, which requires only the payload as an argument.

[Image: CPU metrics of requests sent to the endpoint]

Sending images to the endpoint requires preprocessing them first. The cv2 library is used to encode the image into a byte buffer with cv2.imencode; the buffer is converted to bytes with .tobytes() and the result is transmitted to the endpoint.
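The snippet below is a minimal client-side sketch of this flow. It assumes the endpoint name used earlier, a placeholder image file (sample.jpg), and that the payload is serialized with np.save so that the np.load call in the input_fn shown in the next section can recover it:

import io

import boto3
import cv2
import numpy as np

# Encode the image as JPEG, then serialize the byte buffer with NumPy
# so the endpoint's input_fn can recover it with np.load()
img = cv2.imread('sample.jpg')
_, encoded = cv2.imencode('.jpg', img)
buffer = io.BytesIO()
np.save(buffer, encoded.tobytes())

runtime_sm_client = boto3.client('sagemaker-runtime')
response = runtime_sm_client.invoke_endpoint(EndpointName='endpoint_name',
                                             ContentType='application/x-image',
                                             Body=buffer.getvalue())
print(response['Body'].read())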

How the Object Detection Endpoint Works

Inside the endpoint, the inference.py script contains four main functions that are automatically called in sequence:

  1. The model_fn(model_dir) function is responsible for loading and returning the model. For instance, it could use YOLO (You Only Look Once, a real-time object detection system) by Ultralytics when working with a YOLO model.

import os

from ultralytics import YOLO

def model_fn(model_dir):
    # Load the model weights packaged inside model.tar.gz
    model = YOLO(os.path.join(model_dir, 'your_model_name'))
    return model
  2. The input_fn(request_body, request_content_type) function parses the byte array sent to the endpoint and returns the parsed data.

import io

import cv2
import numpy as np

def input_fn(request_body, request_content_type):
    if request_content_type == 'application/x-image':
        # Recover the encoded image bytes from the NumPy-saved payload
        jpg_original = np.load(io.BytesIO(request_body), allow_pickle=True)
        jpg_as_np = np.frombuffer(jpg_original, dtype=np.uint8)
        # Decode the image as-is (flags=-1 keeps the original channels)
        img = cv2.imdecode(jpg_as_np, flags=-1)
        return img
  3. The predict_fn(input_object, model) function takes the parsed data and model as arguments and returns the model's prediction.

import torch

def predict_fn(input_object, model):
    # Inference only: disable gradient tracking
    with torch.no_grad():
        results = model(input_object, imgsz=[1920, 1080])
    return results
  4. The output_fn(predictions, response_content_type) function formats the prediction results and returns the final output of the endpoint.

import json

def output_fn(predictions, response_content_type):
    if response_content_type == 'application/json':
        # predictions must be JSON-serializable at this point
        return json.dumps(predictions)
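Note that raw Ultralytics Results objects are not directly JSON-serializable, so they typically need to be flattened into plain Python structures before being passed to output_fn. The helper below is a minimal sketch, assuming the Ultralytics Results API (boxes.xyxy, boxes.conf, boxes.cls); it is not part of the original script:

def results_to_dicts(results):
    # Flatten Ultralytics Results into dictionaries that json.dumps() accepts
    detections = []
    for result in results:
        for box, conf, cls in zip(result.boxes.xyxy.tolist(),
                                  result.boxes.conf.tolist(),
                                  result.boxes.cls.tolist()):
            detections.append({'box': box,  # [x1, y1, x2, y2]
                               'confidence': conf,
                               'class_id': int(cls)})
    return detections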

Any additional Python packages required can be specified in the requirements.txt file, which is loaded during the creation of the endpoint. Errors occurring in the inference script, as well as default logs from the model, will appear in Amazon CloudWatch logs.
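For the YOLO example above, the requirements.txt might contain entries along these lines (the exact packages are illustrative and depend on the model and inference code; torch is already provided by the PyTorch container):

ultralytics
opencv-python-headless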


Closing Thoughts

This article has provided a comprehensive guide to deploying an object detection model, covering each step from data structuring to endpoint usage. By following the outlined process, an efficient and reliable object detection system can be established. Looking ahead, object detection will continue to play a crucial role in various fields, driven by ongoing advancements in AI and machine learning. As these technologies evolve, the accuracy, speed, and applicability of object detection models are expected to improve.

AWS is one of the key players in this movement, offering robust services that meet the demands of modern AI models. With tools such as Amazon SageMaker, AWS enables developers to build, train, and deploy machine learning models at scale. These services streamline the implementation of sophisticated AI solutions, making them accessible and effective for a wide range of applications.

About TrackIt

TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.

About Timothe Coloniel

Timothe is a junior software engineer at TrackIt, working remotely. With a few months of experience at the company, he is specializing in AWS services and developing scalable media solutions.

Timothe is passionate about mobile applications and web development.