
similarity-vector


Other

AutoTokenizer

type AutoTokenizer = AutoTokenizer;

Defined in: similarity/similarity-vector.js:2

calculateCosineSimilarity()

function calculateCosineSimilarity(vectorA: number[], vectorB: number[]): number;

Defined in: similarity/similarity-vector.js:168

Cosine similarity measures how similar two vectors are by the angle between them: vectors pointing in the same direction are similar, while vectors pointing in opposite directions are dissimilar. It is often used with text representations to compare how similar two documents or sentences are. The output ranges from -1 to 1, where -1 means the two vectors are completely dissimilar and 1 indicates maximum similarity.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| vectorA | number[] | |
| vectorB | number[] | |

Returns

number

-1 to 1 similarity score
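The formula is the dot product of the two vectors divided by the product of their magnitudes. A minimal sketch in plain JavaScript (a reference implementation, not necessarily the module's exact code):

```javascript
// Cosine similarity: dot(A, B) / (|A| * |B|).
// Assumes both vectors have the same length and non-zero magnitude.
function cosineSimilarity(vectorA, vectorB) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < vectorA.length; i++) {
    dot += vectorA[i] * vectorB[i];
    normA += vectorA[i] * vectorA[i];
    normB += vectorB[i] * vectorB[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0]));  // 1  (same direction)
console.log(cosineSimilarity([1, 0], [-1, 0])); // -1 (opposite direction)
console.log(cosineSimilarity([1, 0], [0, 1]));  // 0  (orthogonal)
```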

Similarity

addEmbeddingVectorsToIndex()

function addEmbeddingVectorsToIndex(documentVectors: string[], options?: object): Promise<HierarchicalNSW>;

Defined in: similarity/similarity-vector.js:97

VSEARCH: Vector Similarity Embedding Approximation in RAM-Limited Cluster Hierarchy

  1. Compile the hnswlib-node or NGT C++ similarity-search algorithm to WASM/JS for efficient similarity search.
  2. The vector index is split by K-means into regional clusters, each sized to fit in RAM. This avoids the cost of popular vector engines, which load all vectors at once and can require servers with 100+ GB of RAM.
  3. The centroid vector of each cluster is stored in a list in SQL; each cluster's binary-quantized data is exported as a base64 string to SQL, S3, etc.
  4. Search: embed the query, compare it to each cluster centroid to pick the top clusters, download the base64 strings for those clusters, load each into WASM, find the top neighbors per cluster, and merge the results sorted by distance.
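The routing and merging in step 4 can be sketched in plain JavaScript. The function names below are illustrative, not part of the module's API:

```javascript
// Step 4 sketch: route the query to the most similar cluster centroids,
// then merge per-cluster neighbor lists by ascending distance.
function pickTopClusters(queryVector, centroids, k) {
  return centroids
    .map((centroid, id) => ({ id, score: cosine(queryVector, centroid) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

function mergeNeighbors(perClusterResults, numNeighbors) {
  return perClusterResults
    .flat()
    .sort((a, b) => a.distance - b.distance)
    .slice(0, numNeighbors);
}

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Pick the 2 nearest of 3 centroids, then merge two clusters' results.
pickTopClusters([1, 0], [[1, 0], [0, 1], [-1, 0]], 2); // clusters 0 and 1
mergeNeighbors(
  [[{ id: 3, distance: 0.1 }, { id: 5, distance: 0.4 }],
   [{ id: 7, distance: 0.2 }]],
  2
); // ids 3 and 7
```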

References: NGT Algorithm, NGT Cluster, Qdrant on memory consumption (https://qdrant.tech/articles/memory-consumption/), LanceDB, USearch, Benchmark.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| documentVectors | string[] | An array of document texts to be vectorized. |
| options? | { maxElements: number; numDimensions: number; } | Optional parameters for vector generation and indexing. |
| options.maxElements? | number | The maximum number of data points. |
| options.numDimensions? | number | The length of each data-point vector to be indexed. |

Returns

Promise<HierarchicalNSW>

The created HNSW index.

Author

Malkov et al. (2016)
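The cluster-export step described above (binary quantization to a base64 string) can be sketched as follows. The helper names are illustrative, not part of the module's API; this assumes a sign-based 1-bit-per-dimension quantization:

```javascript
// Quantize a float vector to 1 bit per dimension (positive => 1, else 0),
// pack the bits into bytes, and encode as base64 for storage in SQL/S3.
function quantizeToBase64(vector) {
  const bytes = Buffer.alloc(Math.ceil(vector.length / 8));
  vector.forEach((v, i) => {
    if (v > 0) bytes[i >> 3] |= 1 << (i & 7);
  });
  return bytes.toString('base64');
}

// Decode the base64 string back into an array of 0/1 bits.
function base64ToBits(b64, numDimensions) {
  const bytes = Buffer.from(b64, 'base64');
  return Array.from({ length: numDimensions }, (_, i) => (bytes[i >> 3] >> (i & 7)) & 1);
}

const packed = quantizeToBase64([0.5, -0.2, 0.1, -0.9]);
base64ToBits(packed, 4); // [1, 0, 1, 0]
```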


convertTextToEmbedding()

function convertTextToEmbedding(text: string, options?: object): Promise<{
embedding: number[];
embeddingsDict: {
};
}>;

Defined in: similarity/similarity-vector.js:26

Text embeddings convert words or phrases into numerical vectors in a high-dimensional space, where each dimension represents a semantic feature extracted by a model such as MiniLM-L6-v2. In this concept space, words with similar meanings have vectors that are close together, allowing quantitative comparison of semantic similarity. These vector representations enable powerful natural language processing applications, including semantic search, text classification, and clustering, by leveraging the geometric properties of the embedding space to capture relationships between words and concepts. See: Text Embeddings, Classification, and Semantic Search (YouTube).

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| text | string | The text to embed. |
| options? | { pipeline: AutoTokenizer; precision: number; } | |
| options.pipeline? | AutoTokenizer | The pipeline to use for embedding. |
| options.precision? | number | default=4 - The number of decimal places to round to. |

Returns

Promise<{ embedding: number[]; embeddingsDict: { }; }>
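Conceptually, a sentence embedding is produced by mean-pooling the model's per-token vectors into one vector, which is then rounded to `precision` decimal places. A sketch of those two post-processing steps (illustrative only, not the module's actual source):

```javascript
// Average per-token vectors (one array per token) into a single vector.
function meanPool(tokenVectors) {
  const dims = tokenVectors[0].length;
  const pooled = new Array(dims).fill(0);
  for (const vec of tokenVectors) {
    for (let d = 0; d < dims; d++) pooled[d] += vec[d];
  }
  return pooled.map((sum) => sum / tokenVectors.length);
}

// Round each component to `precision` decimal places (default 4),
// mirroring the options.precision parameter above.
function roundVector(vector, precision = 4) {
  const factor = 10 ** precision;
  return vector.map((v) => Math.round(v * factor) / factor);
}

// Pool two 2-d token vectors, then round.
console.log(roundVector(meanPool([[1, 3], [3, 5]]))); // [2, 4]
```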


getAllEmbeddings()

function getAllEmbeddings(index: HierarchicalNSW, precision: number): number[][];

Defined in: similarity/similarity-vector.js:148

Retrieves all embeddings from the HNSW index.

Parameters

| Parameter | Type | Default value | Description |
| --- | --- | --- | --- |
| index | HierarchicalNSW | undefined | The HNSW index containing the embeddings. |
| precision | number | 8 | The number of decimal places to round to. |

Returns

number[][]

An array of embedding vectors.


getEmbeddingModel()

function getEmbeddingModel(options?: object): Promise<AutoTokenizer>;

Defined in: similarity/similarity-vector.js:50

Initialize HuggingFace Transformers pipeline for embedding text.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| options? | { modelName: string; pipelineName: string; } | |
| options.modelName? | string | default="Xenova/all-MiniLM-L6-v2" - The name of the model to use. |
| options.pipelineName? | string | default="feature-extraction" - The name of the pipeline task to use. |

Returns

Promise<AutoTokenizer>

The pipeline.
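Because the model weights are a sizable download, callers typically initialize the pipeline once and reuse it across calls. A minimal memoization sketch; `loadPipeline` stands in for the real Transformers.js `pipeline()` call, and none of these names are the module's actual API:

```javascript
// Wrap a pipeline factory so the (expensive) load happens at most once.
// The first call starts the load; later calls reuse the same promise.
function makeModelCache(loadPipeline) {
  let cached = null;
  return async function getModel(options = {}) {
    if (!cached) cached = loadPipeline(options.modelName ?? 'Xenova/all-MiniLM-L6-v2');
    return cached;
  };
}

// Usage with a stub loader (the real loader would download the model):
const getModel = makeModelCache(async (modelName) => ({ modelName }));
```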


searchVectorIndex()

function searchVectorIndex(
index: HierarchicalNSW,
query: string,
options?: object): Promise<object[]>;

Defined in: similarity/similarity-vector.js:132

Searches the vector index for the nearest neighbors of a given query.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| index | HierarchicalNSW | The HNSW index to search. |
| query | string | The query string to search for. |
| options? | { numNeighbors: number; } | Optional parameters for the search. |
| options.numNeighbors? | number | The number of nearest neighbors to return. |

Returns

Promise<object[]>

A promise that resolves to an array of nearest neighbors, each with an id and distance.

Throws

If there's an error during the search process.

Example

const index = await addEmbeddingVectorsToIndex(documentVectors);
const results = await searchVectorIndex(index, 'example query');
console.log(results); // [{id: 3, distance: 0.1}, {id: 7, distance: 0.2}, ...]

weighRelevanceConceptVector()

function weighRelevanceConceptVector(
documents: string[],
query: string,
options?: any): Promise<object[]>;

Defined in: similarity/similarity-vector.js:189

Rerank documents' chunks by relevance to the query, based on cosine similarity of their concept vectors, which are generated by a ~20 MB MiniLM transformer model downloaded locally.

A Complete Overview of Word Embeddings

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| documents | string[] | |
| query | string | |
| options? | any | |

Returns

Promise<object[]>
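Conceptually, the rerank amounts to scoring each document's concept vector against the query's concept vector with cosine similarity and sorting in descending order. A sketch with precomputed stand-in vectors (the real function embeds the texts first; names here are illustrative):

```javascript
// Rank document vectors by cosine similarity to a query vector, best first.
function rankByCosine(queryVector, documentVectors) {
  const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (a) => Math.sqrt(dot(a, a));
  return documentVectors
    .map((vec, id) => ({
      id,
      similarity: dot(queryVector, vec) / (norm(queryVector) * norm(vec)),
    }))
    .sort((a, b) => b.similarity - a.similarity);
}

// Document 1 points the same way as the query, so it ranks first.
rankByCosine([1, 0], [[0, 1], [1, 0], [0.5, 0.5]]);
```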