Amazon OpenSearch, derived from Elasticsearch 7.10, is a highly scalable, fully managed search and analytics service that has become a cornerstone for businesses seeking to extract valuable insights from their data. With the advent of serverless computing, the landscape of deploying and managing OpenSearch has evolved significantly. One of the most exciting developments in this space is the integration of vector engines. The sections that follow introduce vector engines and shed light on how they enhance the serverless experience of Amazon OpenSearch.

Vector Engines and Amazon OpenSearch

Vector Engine

A vector engine is a powerful computational tool that excels at processing high-dimensional data used for machine learning, data analysis, and similarity searching. When using Amazon OpenSearch, vector engines help perform complex operations on data such as similarity searches and recommendation systems.

Vector engines work by representing data points as vectors in a multi-dimensional space, with each dimension corresponding to a specific feature or attribute of the data. This representation assists in measuring the similarity between data points and can be particularly useful in applications such as content recommendation, fraud detection, and image analysis.
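As a toy illustration of this idea, the sketch below (with made-up three-dimensional embeddings) measures similarity as the cosine of the angle between vectors, a common choice for comparing data points in such a space:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional embeddings for three documents
doc_a = [0.9, 0.1, 0.3]
doc_b = [0.8, 0.2, 0.4]
doc_c = [-0.5, 0.9, -0.2]

# doc_a points in nearly the same direction as doc_b, but away from doc_c
print(cosine_similarity(doc_a, doc_b))  # close to 1.0
print(cosine_similarity(doc_a, doc_c))  # negative
```

Real embeddings produced by machine learning models have hundreds or thousands of dimensions, but the similarity computation is the same.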

Amazon OpenSearch Serverless

Amazon OpenSearch Serverless is a serverless deployment option for OpenSearch that eliminates the need to provision and manage infrastructure. It automatically scales resources based on workloads for cost-efficiency and simplified operations. 

Benefits of Vector Engines for Amazon OpenSearch Serverless

  1. Increased developer productivity

Vectors, text, and other types of data can be colocated, making it possible to query embeddings, metadata, and descriptive text within a single call. This increases search accuracy and reduces system complexity.

  2. Highly relevant and accurate responses

Customer experiences can be improved with highly relevant and accurate responses generated from search results based on vector embeddings trained on business data.

  3. Update vector embeddings in a production-ready search application

Add, update, and delete vector embeddings in near real time without impacting query performance or re-indexing data.

  4. Scalability and efficiency

Extensive collections of vector embeddings (each consisting of thousands of dimensions) can be managed and swiftly retrieved within milliseconds, all within a user-friendly and high-performance serverless environment.

Vector Engines in Amazon OpenSearch: Lucene, Faiss, and nmslib

Amazon OpenSearch offers three vector engines to choose from, each catering to different use cases:

| Vector Engine | Purpose | Key Features |
|---------------|---------|--------------|
| Lucene | Full-text search and retrieval | Text-based searching, high customization |
| Faiss | Vector similarity search | Efficient for high-dimensional vectors, various indexing methods |
| nmslib | Non-metric similarity search | Flexible, supports non-metric spaces and unconventional data |

Use Cases for Vector Engines in Amazon OpenSearch Serverless

  1. Semantic Search (Document Retrieval) 

Engine Recommendation: Lucene

Vector embeddings can be leveraged to enhance semantic search capabilities in OpenSearch. This allows users to search for documents based on their meaning and context, improving search accuracy and relevance.

  2. Retrieval Augmented Generation with Language Models

Engine Recommendation: Faiss (for similarity search) combined with Lucene (for text retrieval)

Vector embeddings combined with advanced language models, such as Amazon Titan Text or the models behind ChatGPT, can be used to build chatbots and text generation systems. These models can understand and generate natural language text, making them valuable for interactive applications.

  3. Recommendation Engines

Engine Recommendation: Faiss

Vector embeddings can be used to power recommendation engines in OpenSearch. By representing users and items as vectors, personalized product or content recommendations can be provided to enhance user experience and engagement.

  4. Media Search (Rich Media Query Engine)

Engine Recommendation: Faiss (for images), nmslib (for unconventional media data)

The capabilities of OpenSearch can be extended to include rich media search such as images, audio, and video. Media data can be converted into vectors using techniques such as CNNs (Convolutional Neural Networks) for images. Vector embeddings enable efficient searching and retrieval of multimedia content, enhancing the overall search experience.

How to use Vector Embeddings in Amazon OpenSearch

Step 1: Configure permissions

Proper configuration of IAM permissions is essential to ensure the effective use of OpenSearch Serverless. For example, if the task is to create a collection, upload and search data, and later delete the collection, an IAM user or role must have an identity-based policy attached with the following minimum permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "aoss:CreateCollection",
        "aoss:ListCollections",
        "aoss:BatchGetCollection",
        "aoss:DeleteCollection",
        "aoss:CreateAccessPolicy",
        "aoss:ListAccessPolicies",
        "aoss:UpdateAccessPolicy",
        "aoss:CreateSecurityPolicy",
        "iam:ListUsers",
        "iam:ListRoles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
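The same policy can be attached programmatically with the AWS SDK for Python. This is a sketch under assumptions: AWS credentials are configured, the role name is a placeholder, and the inline policy name (`aoss-collection-lifecycle`) is invented for illustration.

```python
import json

# The same minimum permissions as the JSON policy above, as a Python dict
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "aoss:CreateCollection",
                "aoss:ListCollections",
                "aoss:BatchGetCollection",
                "aoss:DeleteCollection",
                "aoss:CreateAccessPolicy",
                "aoss:ListAccessPolicies",
                "aoss:UpdateAccessPolicy",
                "aoss:CreateSecurityPolicy",
                "iam:ListUsers",
                "iam:ListRoles",
            ],
            "Effect": "Allow",
            "Resource": "*",
        }
    ],
}

def attach_policy(role_name: str) -> None:
    """Attach the policy inline to an existing IAM role (requires AWS credentials)."""
    import boto3  # imported lazily so the snippet loads without the AWS SDK installed

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName=role_name,                      # placeholder: an existing role
        PolicyName="aoss-collection-lifecycle",  # any inline policy name works
        PolicyDocument=json.dumps(policy_document),
    )
```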

Step 2: Create a collection

A collection is a group of OpenSearch indexes that work together to support a specific workload or use case.

To create an OpenSearch Serverless collection

  1. Open the Amazon OpenSearch Service console. 
  2. Choose Collections in the left navigation pane and choose Create collection.
  3. Name the collection housing.
  4. For collection type, choose Vector search. (For more information on choosing a collection type, readers can refer to the Amazon OpenSearch Developer Guide.)

  5. Under Security, select Easy create to streamline the security configuration. All the data in the vector engine is encrypted in transit and at rest by default. The vector engine supports fine-grained AWS Identity and Access Management (IAM) permissions to help define who can create, update, and delete encryption settings, networks, collections, and indexes.
  6. Choose Next.
  7. Review the collection settings and choose Submit. Note that it may take several minutes for the collection status to become Active.
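For automation, the collection can also be created with the AWS SDK for Python. This is a sketch under assumptions: AWS credentials are configured, and (unlike the console's Easy create option, which handles security setup) an encryption security policy covering the collection name must already exist when using the API.

```python
# Arguments for the 'housing' vector search collection from the walkthrough
collection_args = {
    "name": "housing",
    "type": "VECTORSEARCH",  # other collection types: SEARCH, TIMESERIES
}

def create_collection():
    """Create the collection and return its ID.

    Requires AWS credentials and a pre-existing encryption policy
    that covers the collection name.
    """
    import boto3  # imported lazily so the snippet loads without the AWS SDK installed

    aoss = boto3.client("opensearchserverless")
    response = aoss.create_collection(**collection_args)
    return response["createCollectionDetail"]["id"]
```

After the call, poll `batch_get_collection` until the collection status becomes Active before indexing data.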

Step 3: Upload and search data

An index is a collection of documents with a common data schema that provides a means to store, search, and retrieve vector embeddings and other fields. At this stage of the process, the API or the Console can be used to create an index.

The vector index supports up to 16,000 dimensions and three types of distance metrics: 

  • Euclidean
  • Cosine
  • Dot product
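As a quick illustration, the three metrics can be computed for a pair of small sample vectors in plain Python (the vectors are made up; OpenSearch computes these inside the engine):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot_product(a, b):
    """Sum of element-wise products; larger means more aligned."""
    return sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    """1 minus the cosine of the angle between the vectors."""
    norm_a = math.sqrt(dot_product(a, a))
    norm_b = math.sqrt(dot_product(b, b))
    return 1.0 - dot_product(a, b) / (norm_a * norm_b)

u, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(euclidean(u, v))        # sqrt(27) ≈ 5.196
print(dot_product(u, v))      # 32.0
print(cosine_distance(u, v))  # small: the vectors point in similar directions
```

Which metric fits best depends on how the embeddings were trained; cosine is common for normalized text embeddings, while Euclidean and dot product suit other model families.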

To index and search data in the housing collection

  1. To create a single index for a new collection, send the following request with an HTTP client such as Postman. By default, this creates an index with the nmslib engine and Euclidean distance.

PUT housing-index
{
   "settings": {
      "index.knn": true
   },
   "mappings": {
      "properties": {
         "housing-vector": {
            "type": "knn_vector",
            "dimension": 3
         },
         "title": {
            "type": "text"
         },
         "price": {
            "type": "long"
         },
         "location": {
            "type": "geo_point"
         }
      }
   }
}

2. To search for properties similar to the ones in the housing-index, send the following query:

GET housing-index/_search
{
    "size": 5,
    "query": {
        "knn": {
            "housing-vector": {
                "vector": [
                    10,
                    20,
                    30
                ],
                "k": 5
            }
        }
    }
}
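The two requests above can also be issued from Python with the opensearch-py client. This is a sketch under assumptions: the collection endpoint and region are placeholders, SigV4 signing uses the `aoss` service name, and the calls require an Active collection, valid AWS credentials, and a data access policy granting the caller index permissions.

```python
# Request bodies mirroring the PUT and GET examples above
index_body = {
    "settings": {"index.knn": True},
    "mappings": {
        "properties": {
            "housing-vector": {"type": "knn_vector", "dimension": 3},
            "title": {"type": "text"},
            "price": {"type": "long"},
            "location": {"type": "geo_point"},
        }
    },
}

knn_query = {
    "size": 5,
    "query": {"knn": {"housing-vector": {"vector": [10, 20, 30], "k": 5}}},
}

def index_and_search(endpoint: str, region: str):
    """Create housing-index and run the k-NN query against a collection endpoint."""
    # Imported lazily so the snippet loads without these packages installed
    import boto3
    from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

    auth = AWSV4SignerAuth(boto3.Session().get_credentials(), region, "aoss")
    client = OpenSearch(
        hosts=[{"host": endpoint, "port": 443}],  # endpoint without the https:// prefix
        http_auth=auth,
        use_ssl=True,
        connection_class=RequestsHttpConnection,
    )
    client.indices.create(index="housing-index", body=index_body)
    return client.search(index="housing-index", body=knn_query)
```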

Now that the data is indexed in Amazon OpenSearch, vector searches can be performed to find nearest neighbors. There are three main approaches for this:

Approximate k-NN (ANN)

The approximate k-NN method uses one of several algorithms to return the approximate k nearest neighbors to a query vector. These algorithms typically sacrifice indexing speed and search accuracy in return for performance benefits such as lower latency, smaller memory footprints, and more scalable search.

Approximate k-NN is the ideal choice when searching large indexes (100,000 vectors or more). However, it should not be used when applying a filter on the index to reduce the number of vectors to be searched. The appropriate choice in such scenarios is to employ the Script Score or Painless Extensions methods.

Script Score k-NN

Script Score is a brute force, exact k-NN search. With brute force, the algorithm conducts an exhaustive search that examines all possible solutions. This approach is more accurate than Approximate k-NN. However, it results in high latencies when dealing with large datasets. Script Score supports binary fields and also allows users to perform searches on a subset of vectors, also known as pre-filter searches. 

Painless Extensions

This method adds the distance functions as painless extensions that can be used in more complex combinations. Similar to the k-NN Script Score, this method can be leveraged to perform a brute force, exact k-NN search across an index which also supports pre-filtering. This approach has slightly slower query performance compared to the k-NN score script. If the use case requires more customization over the final score, this approach can be prioritized over Script Score k-NN.
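To make the trade-offs concrete, here is a minimal pure-Python model of what an exact, pre-filtered k-NN search does: score every vector that passes the filter and keep the k closest. The listing data and price filter are invented for illustration; in OpenSearch this runs inside the engine via a script, not client-side.

```python
import math

def exact_knn(query, documents, k, pre_filter=None):
    """Brute-force exact k-NN: rank every (optionally pre-filtered) vector."""
    candidates = documents if pre_filter is None else [d for d in documents if pre_filter(d)]
    # Exhaustive scan: compute the Euclidean distance to every candidate
    ranked = sorted(candidates, key=lambda d: math.dist(query, d["vector"]))
    return ranked[:k]

# Made-up listings with 3-dimensional vectors, matching the housing-index mapping
listings = [
    {"title": "loft",   "price": 900,  "vector": [10, 21, 29]},
    {"title": "condo",  "price": 1500, "vector": [50, 60, 70]},
    {"title": "studio", "price": 700,  "vector": [11, 19, 31]},
]

# Exact 2-NN, pre-filtered to listings under $1,000
top = exact_knn([10, 20, 30], listings, k=2, pre_filter=lambda d: d["price"] < 1000)
print([d["title"] for d in top])  # ['loft', 'studio']
```

Because every candidate is scored, the result is exact, but the cost grows linearly with the number of vectors, which is why this approach is reserved for small indexes or heavily filtered searches.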

Step 4: Delete the collection

Because the housing collection is for test purposes, it is important to delete it once the experimentation is complete.

How to Delete an OpenSearch Serverless collection

  1. Go to the Amazon OpenSearch Service console.
  2. Choose Collections in the left navigation pane and select the housing collection.
  3. Choose Delete and confirm deletion.

Conclusion

Generative AI, a burgeoning field in artificial intelligence, aims to produce novel content through advanced modeling techniques. OpenSearch, as a versatile vector database, offers a powerful resource for multiple applications. These applications may include developing generative AI systems, conducting rich media and audio searches, or enhancing semantic search capabilities in existing search-centric applications. OpenSearch provides diverse engine options, algorithmic support, and distance metrics for tailored solutions. Additionally, its scalability and efficient vector search capabilities, even at a massive scale of billions of vectors, make it an invaluable tool for data-intensive tasks.

About TrackIt

TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.