similarity-vector
Other
AutoTokenizer
```ts
type AutoTokenizer: AutoTokenizer;
```
calculateCosineSimilarity()
```ts
function calculateCosineSimilarity(vectorA, vectorB): number
```
Cosine similarity measures how similar two vectors are by the angle between them: vectors pointing in the same direction are similar, while vectors pointing in opposite directions are poles apart. It is often used with text representations to compare how similar two documents or sentences are to each other. The output ranges from -1 to 1, where -1 means the two vectors point in opposite directions (completely dissimilar) and 1 indicates maximum similarity.
Parameters
Parameter | Type | Description |
---|---|---|
`vectorA` | `number`[] | First vector to compare. |
`vectorB` | `number`[] | Second vector to compare. |
Returns
number
-1 to 1 similarity score
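For reference, a minimal standalone implementation of the same formula (an illustrative sketch, not this module's internal code):

```ts
// Cosine similarity: dot(A, B) / (|A| * |B|).
function cosineSimilarity(vectorA: number[], vectorB: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < vectorA.length; i++) {
    dot += vectorA[i] * vectorB[i];
    normA += vectorA[i] * vectorA[i];
    normB += vectorB[i] * vectorB[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineSimilarity([1, 2, 3], [2, 4, 6]); // 1  (same direction)
cosineSimilarity([1, 0], [0, 1]);       // 0  (orthogonal)
cosineSimilarity([1, 0], [-1, 0]);      // -1 (opposite directions)
```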
Similarity
addEmbeddingVectorsToIndex()
```ts
function addEmbeddingVectorsToIndex(documentVectors, options?): Promise<HierarchicalNSW>
```
VSEARCH: Vector Similarity Embedding Approximation in RAM-Limited Cluster Hierarchy
- Compiles the hnswlib-node or NGT C++ algorithms to WASM/JS for efficient similarity search.
- The vector index is split by K-means into regional clusters, each sized to fit in RAM. This avoids the costly 100 GB+ RAM servers that popular vector engines require because they load all vectors at once.
- The centroid vector of each cluster is stored in a list in SQL, and each cluster's binary-quantized data is exported as a base64 string to SQL, S3, etc.
- Search: embed the query, compare it to each cluster centroid to pick the top clusters, download the base64 strings for those clusters, load each into WASM, find the top neighbors per cluster, and merge the results sorted by distance (see the sketch below).

References: NGT Algorithm, NGT Cluster, Qdrant on memory consumption (https://qdrant.tech/articles/memory-consumption/), LanceDB, USearch
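The following is a minimal sketch of that search flow. The `ClusterRecord` shape and the `embedQuery` and `loadClusterIndex` helpers are assumptions made for illustration, not part of this module's API; only `calculateCosineSimilarity` (above) and hnswlib-node's `searchKnn` come from the documented surface.

```ts
// Illustrative sketch of the VSEARCH query flow. embedQuery, loadClusterIndex,
// and ClusterRecord are hypothetical; swap in the real embedding and storage code.
import { HierarchicalNSW } from "hnswlib-node";

interface ClusterRecord {
  centroid: number[];   // centroid vector kept in a SQL list
  base64Index: string;  // binary-quantized cluster index exported as base64
}

async function vsearchQuery(
  query: string,
  clusters: ClusterRecord[],
  embedQuery: (text: string) => Promise<number[]>,
  loadClusterIndex: (base64: string) => Promise<HierarchicalNSW>,
  topClusters = 3,
  neighborsPerCluster = 5
): Promise<{ clusterId: number; label: number; distance: number }[]> {
  // 1. Embed the query text.
  const queryVector = await embedQuery(query);

  // 2. Rank clusters by centroid similarity and keep only the top few.
  const ranked = clusters
    .map((cluster, clusterId) => ({
      clusterId,
      cluster,
      score: calculateCosineSimilarity(queryVector, cluster.centroid),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topClusters);

  // 3. Load each selected cluster's index into WASM, search it, and
  //    merge the per-cluster neighbors sorted by distance.
  const results: { clusterId: number; label: number; distance: number }[] = [];
  for (const { clusterId, cluster } of ranked) {
    const index = await loadClusterIndex(cluster.base64Index);
    const { neighbors, distances } = index.searchKnn(queryVector, neighborsPerCluster);
    neighbors.forEach((label, i) =>
      results.push({ clusterId, label, distance: distances[i] })
    );
  }
  return results.sort((a, b) => a.distance - b.distance);
}
```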
Parameters
Parameter | Type | Description |
---|---|---|
`documentVectors` | `string`[] | An array of document texts to be vectorized. |
`options`? | `object` | Optional parameters for vector generation and indexing: the maximum number of data points and the length of the data point vectors that will be indexed. |
Returns
`Promise<HierarchicalNSW>`
The created HNSW index.
convertTextToEmbedding()
```ts
function convertTextToEmbedding(text, options?): Promise<{
  embedding: number[];
  embeddingsDict: {};
}>
```
Text embeddings convert words or phrases into numerical vectors in a high-dimensional space, where each dimension represents a semantic feature extracted by a model like MiniLM-L6-v2. In this concept space, words with similar meanings have vectors that are close together, allowing for quantitative comparisons of semantic similarity. These vector representations enable powerful applications in natural language processing, including semantic search, text classification, and clustering, by leveraging the geometric properties of the embedding space to capture and analyze the relationships between words and concepts.

Text Embeddings, Classification, and Semantic Search (YouTube)
Parameters
Parameter | Type | Description |
---|---|---|
`text` | `string` | The text to embed. |
`options`? | `object` | Optional: the pipeline to use for embedding, and the number of decimal places to round to (default 4). |
Returns
`Promise<{ embedding: number[]; embeddingsDict: {} }>`
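A brief usage sketch based on the return shape above (the sample sentence is illustrative):

```ts
// Embed a sentence and inspect the resulting vector.
const { embedding } = await convertTextToEmbedding(
  "Text embeddings map sentences into a semantic vector space."
);
console.log(embedding.length);      // embedding dimensionality (384 for all-MiniLM-L6-v2)
console.log(embedding.slice(0, 5)); // first few components, rounded to the configured decimal places
```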
getAllEmbeddings()
```ts
function getAllEmbeddings(index, precision): number[][]
```
Retrieves all embeddings from the HNSW index.
Parameters
Parameter | Type | Default value | Description |
---|---|---|---|
`index` | `HierarchicalNSW` | `undefined` | The HNSW index containing the embeddings. |
`precision` | `number` | `8` | The number of decimal places to round to. |
Returns
`number`[][]

An array of embedding vectors.
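A brief usage sketch, assuming `index` is an index built with `addEmbeddingVectorsToIndex`:

```ts
// Dump every stored vector from the index, rounded to 4 decimal places.
const vectors = getAllEmbeddings(index, 4);
console.log(vectors.length);    // number of indexed items
console.log(vectors[0].length); // embedding dimensionality
```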
getEmbeddingModel()
```ts
function getEmbeddingModel(options?): Promise<AutoTokenizer>
```
Initializes a Hugging Face Transformers pipeline for embedding text.
Parameters
Parameter | Type | Description |
---|---|---|
`options`? | `object` | Optional: the name of the model to use (default `"Xenova/all-MiniLM-L6-v2"`) and the pipeline task (default `"feature-extraction"`). |
Returns
`Promise<AutoTokenizer>`

The pipeline.
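A brief usage sketch; loading the pipeline once and reusing it avoids re-downloading the model on every call:

```ts
// Load the default Xenova/all-MiniLM-L6-v2 feature-extraction pipeline once;
// the model (~20 MB) is downloaded locally on first use.
const embeddingPipeline = await getEmbeddingModel();
```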
searchVectorIndex()
```ts
function searchVectorIndex(index, query, options?): Promise<object[]>
```
Searches the vector index for the nearest neighbors of a given query.
Parameters
Parameter | Type | Description |
---|---|---|
`index` | `HierarchicalNSW` | The HNSW index to search. |
`query` | `string` | The query string to search for. |
`options`? | `object` | Optional parameters for the search: the number of nearest neighbors to return. |
Returns
`Promise<object[]>`
A promise that resolves to an array of nearest neighbors, each with an id and distance.
Throws
If there's an error during the search process.
Example
```ts
const index = await addEmbeddingVectorsToIndex(documentVectors);
const results = await searchVectorIndex(index, 'example query');
console.log(results); // [{id: 3, distance: 0.1}, {id: 7, distance: 0.2}, ...]
```
weighRelevanceConceptVector()
```ts
function weighRelevanceConceptVector(documents, query, options?): Promise<object[]>
```
Reranks document chunks based on their relevance to a query, using the cosine similarity of their concept vectors, which are generated by a 20 MB MiniLM transformer model downloaded locally.
A Complete Overview of Word Embeddings
Parameters
Parameter | Type | Description |
---|---|---|
`documents` | `string`[] | The document chunks to rerank. |
`query` | `string` | The query to compare against. |
`options`? | `object` | Optional parameters. |
Returns
`Promise<object[]>`
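A usage sketch; the sample chunks are illustrative:

```ts
// Rerank chunks by cosine similarity of their concept vectors to the query's.
const chunks = [
  "Transformers compute attention over token embeddings.",
  "K-means clustering groups vectors around centroids.",
  "MiniLM is a small sentence-embedding model.",
];
const reranked = await weighRelevanceConceptVector(chunks, "small embedding models");
console.log(reranked); // chunks sorted from most to least relevant
```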