
search: add cross-script name matching using neural embeddings #696

Merged
adamdecaf merged 27 commits into moov-io:master from MorganaFuture:MorganaFuture/feat/search_ad_cross-script_name_matching_using_neural_embeddings on Feb 9, 2026

Conversation

@MorganaFuture (Contributor) commented on Jan 8, 2026:

Adds support for matching names across different writing systems (Arabic, Cyrillic, Chinese, etc.) using multilingual embeddings via API providers.

When a non-Latin query comes in, we convert it to a vector using an external embeddings API and find similar vectors in the index. This lets us match "محمد علي" to "Mohamed Ali" across scripts.

Key changes:

  • New internal/embeddings package with OpenAI-compatible API integration
  • Supports multiple providers: Ollama (local), OpenAI, OpenRouter, Azure
  • Hybrid search: embeddings for non-Latin queries, Jaro-Winkler for Latin (see the sketch after this list)
  • No default model - users must configure based on their accuracy requirements
  • Build with -tags embeddings to enable
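
The hybrid dispatch above comes down to a script check on the query. A minimal, self-contained sketch of the idea (this is not the PR's internal/embeddings/script_detect.go or search code; the function names and the dispatch shown here are illustrative only):

package main

import (
	"fmt"
	"unicode"
)

// containsNonLatin reports whether the query contains any letter outside the
// Latin script. Digits, spaces, and punctuation are ignored, so a mixed query
// like "Ali 123" still takes the Latin path.
func containsNonLatin(query string) bool {
	for _, r := range query {
		if unicode.IsLetter(r) && !unicode.Is(unicode.Latin, r) {
			return true
		}
	}
	return false
}

func main() {
	for _, q := range []string{"Mohamed Ali", "محمد علي", "Пётр Иванов"} {
		if containsNonLatin(q) {
			fmt.Printf("%q -> embedding-based search\n", q)
		} else {
			fmt.Printf("%q -> Jaro-Winkler search\n", q)
		}
	}
}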

Adds support for matching names across different writing systems
(Arabic, Cyrillic, Chinese, etc.) using multilingual embeddings.

When a non-Latin query comes in, we convert it to a 384-dim vector
and find similar vectors in the index. This lets us match
"محمد علي" to "Mohamed Ali" with 97% similarity.

Key changes:
- New internal/embeddings package with ONNX model integration
- Hybrid search: embeddings for non-Latin, Jaro-Winkler for Latin
- LRU cache for embeddings (~5µs after warm-up)
- Build with -tags embeddings to enable
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @MorganaFuture, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the search capabilities by introducing neural embeddings to enable cross-script name matching. This allows the system to semantically understand and match names written in different languages and scripts, such as Arabic or Cyrillic, against Latin-script names in sanctions lists. The implementation uses a hybrid approach, leveraging embeddings for non-Latin queries and retaining existing string matching for Latin queries, ensuring both accuracy for diverse inputs and performance efficiency. This feature aims to drastically improve the precision and recall for international name screening.

Highlights

  • Cross-Script Name Matching: Introduced support for matching names across different writing systems (e.g., Arabic, Cyrillic, Chinese to Latin) using neural embeddings, enabling matches like 'محمد علي' to 'Mohamed Ali' with high similarity.
  • New Embeddings Package: Added a new internal/embeddings package which integrates ONNX models for generating 384-dimensional multilingual vectors, handling model loading, inference, and L2 normalization.
  • Hybrid Search Approach: Implemented a hybrid search strategy where non-Latin queries utilize the new embedding-based search, while Latin queries continue to use the faster Jaro-Winkler algorithm for efficiency.
  • Performance Optimizations: Incorporated an LRU cache for embedding vectors to significantly reduce inference latency for repeated queries (down to ~5µs after warm-up) and optimized batch processing for encoding multiple texts (a minimal cache sketch follows this list).
  • Configurable and Opt-in Feature: The embeddings feature is opt-in, requiring a -tags embeddings build flag and configuration via environment variables or YAML, allowing users to enable it based on their needs.
  • Comprehensive Documentation and Testing: Provided detailed documentation on setup, configuration, API usage, and limitations, alongside extensive unit, integration, accuracy, and performance benchmarks for the new embeddings functionality.
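
The LRU cache highlighted above is what keeps repeated queries cheap: a vector is computed once and then served from memory. A minimal, standard-library-only sketch of such a cache (not the PR's actual cache, not concurrency-safe; key and value shapes are illustrative):

package main

import (
	"container/list"
	"fmt"
)

// entry pairs the cached text with its embedding vector.
type entry struct {
	key string
	vec []float64
}

// lruCache is a tiny fixed-capacity LRU keyed by the input text.
type lruCache struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // text -> list element holding an *entry
}

func newLRUCache(capacity int) *lruCache {
	return &lruCache{cap: capacity, order: list.New(), items: make(map[string]*list.Element)}
}

// Get returns a cached vector and marks it as recently used.
func (c *lruCache) Get(key string) ([]float64, bool) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).vec, true
	}
	return nil, false
}

// Put stores a vector, evicting the least recently used entry when full.
func (c *lruCache) Put(key string, vec []float64) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).vec = vec
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&entry{key: key, vec: vec})
	if c.order.Len() > c.cap {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func main() {
	cache := newLRUCache(2)
	cache.Put("محمد علي", []float64{0.1, 0.2, 0.3}) // would come from the embeddings provider
	if vec, ok := cache.Get("محمد علي"); ok {
		fmt.Println("cache hit, inference skipped:", vec)
	}
}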


@gemini-code-assist (Bot) left a comment:

Code Review

This pull request introduces a significant and well-implemented feature for cross-script name matching using neural embeddings. The changes are comprehensive, covering the core embedding logic, integration with the search service, configuration, extensive testing, and excellent documentation. The use of build tags to make this an optional feature is a great approach. My feedback focuses on a few areas to improve performance and code clarity, particularly around caching and data lookups.

Collapsed comment threads:

  • internal/embeddings/service.go (outdated)
  • internal/search/service.go
  • internal/embeddings/script_detect.go
  • docs/cross-script-matching.md (outdated)
  • docs/search.md (outdated)
  • internal/embeddings/accuracy_test.go
  • tools/export_onnx/README.md (outdated)
@adamdecaf (Member) commented:

@MorganaFuture getting back to this. Thanks for the docker image - I'm running into a problem with the model.

ts=2026-01-23T15:52:55Z msg="failed to rebuild embedding index: building embedding index: embeddings: failed to encode batch 0: inference failed: unimplemented ONNX op \"ReduceSum\" in Node \"/1/ReduceSum\" [ReduceSum](/1/Mul_1_output_0, onnx::ReduceSum_1922) -> /1/ReduceSum_output_0 - attrs[keepdims (INT)]" app=watchman level=warn version=

This is after running the docker image to output the ~500MB model.

Replace local ONNX model inference with API-based embedding providers.
Simplifies deployment and enables flexibility in choosing providers.

Breaking change: No default model configured. When enabling embeddings,
users must explicitly set Model, Dimension, and BaseURL.

Supported providers (via OpenAI-compatible API):
- Ollama (local)
- OpenAI (recommended for production)
- OpenRouter
- Azure OpenAI
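
A hedged sketch of what the required configuration could look like in Go against a local Ollama instance. Only Model, Dimension, and BaseURL are named by this commit message; the struct itself, the other field names, and the example values (bge-m3, 1024, the Ollama URL) are assumptions for illustration:

package main

import "fmt"

// embeddingsConfig is an illustrative stand-in for the real config type.
// Field names other than Model, Dimension, and BaseURL are assumptions.
type embeddingsConfig struct {
	Provider  string // e.g. "ollama", "openai", "openrouter", "azure"
	Model     string // no default; must be set explicitly
	Dimension int    // must match the chosen model's output size
	BaseURL   string // OpenAI-compatible embeddings endpoint
	APIKey    string // empty for a local Ollama instance
}

func main() {
	// Example values only: bge-m3 is one multilingual embedding model Ollama can
	// serve, and Ollama commonly exposes an OpenAI-compatible API under /v1.
	cfg := embeddingsConfig{
		Provider:  "ollama",
		Model:     "bge-m3",
		Dimension: 1024,
		BaseURL:   "http://localhost:11434/v1",
	}
	fmt.Printf("%+v\n", cfg)
}
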
@MorganaFuture marked this pull request as ready for review on February 5, 2026 08:53
@adamdecaf force-pushed the MorganaFuture/feat/search_ad_cross-script_name_matching_using_neural_embeddings branch from a0a762d to 5f88f4d on February 9, 2026 18:05
@adamdecaf (Member) commented:

/gemini review

@gemini-code-assist (Bot) left a comment:

Code Review

This pull request introduces a significant and valuable feature: cross-script name matching using neural embeddings. The implementation is well-designed, modular, and includes comprehensive testing and documentation. The use of a hybrid approach (embeddings for non-Latin, Jaro-Winkler for Latin) is a smart optimization. The code is generally high quality. I've identified one critical performance issue in the embedding provider implementation and a couple of minor issues in the documentation and tests, which I've detailed in the comments. Overall, this is an excellent contribution.

Comment on lines +82 to +155
func (p *OpenRouterProvider) Embed(ctx context.Context, texts []string) ([][]float64, error) {
	if len(texts) == 0 {
		return nil, nil
	}

	ctx, span := telemetry.StartSpan(ctx, "openai-embed", trace.WithAttributes(
		attribute.String("provider", p.Name()),
		attribute.Int("batch_size", len(texts)),
	))
	defer span.End()

	out := make([][]float64, len(texts))

	g, ctx := errgroup.WithContext(ctx)

	for i := range texts {
		i := i // capture for closure
		text := texts[i]
		g.Go(func() error {
			// rate limit check
			if err := p.limiter.Wait(ctx); err != nil {
				return fmt.Errorf("rate limit: %w", err)
			}

			req := operations.CreateEmbeddingsRequest{
				Input: operations.InputUnion{
					Str: openrouter.String(text),
				},
				Model: p.config.Model,
				// EncodingFormat: openrouter.Pointer(operations.EncodingFormatBase64),
			}

			resp, err := p.client.Embeddings.Generate(ctx, req)
			if err != nil {
				return fmt.Errorf("generating embeddings failed: %w", err)
			}

			// backoff := p.calculateBackoff(attempt) // TODO(adam): ??

			if body := resp.CreateEmbeddingsResponseBody; body != nil {
				if len(body.Data) > 0 {
					out[i] = body.Data[0].Embedding.ArrayOfNumber
				}

				// if body.Usage != nil {
				// 	fmt.Printf(" Tokens: Prompt=%.2f Total=%.2f", body.Usage.PromptTokens, body.Usage.TotalTokens)
				// 	if body.Usage.Cost != nil {
				// 		fmt.Printf(" Cost=%.2f", *body.Usage.Cost)
				// 	}
				// 	fmt.Printf("\n")
				// }
			}

			return nil
		})
	}

	err := g.Wait()
	if err != nil {
		span.RecordError(err) // TODO(adam): panics
		return nil, err
	}

	// Normalize if configured
	if p.normalize {
		out = normalizeL2Batch(out)
	}

	// span.SetAttributes(
	// 	attribute.Int("tokens_used", embResp.Usage.TotalTokens), // TODO(adam):
	// )

	return out, nil
}
@gemini-code-assist (Bot) commented:

critical

The current implementation of Embed makes a separate API call for each text in the input batch, which is highly inefficient and defeats the purpose of batching. OpenAI-compatible APIs support sending multiple strings in a single request. You should make one API call for the entire texts slice by using operations.InputUnion{ StrArray: texts }. This will significantly improve performance and reduce the number of API calls.

func (p *OpenRouterProvider) Embed(ctx context.Context, texts []string) ([][]float64, error) {
	if len(texts) == 0 {
		return nil, nil
	}

	ctx, span := telemetry.StartSpan(ctx, "openai-embed", trace.WithAttributes(
		attribute.String("provider", p.Name()),
		attribute.Int("batch_size", len(texts)),
	))
	defer span.End()

	// rate limit check
	if err := p.limiter.Wait(ctx); err != nil {
		return nil, fmt.Errorf("rate limit: %w", err)
	}

	req := operations.CreateEmbeddingsRequest{
		Input: operations.InputUnion{
			StrArray: texts,
		},
		Model: p.config.Model,
	}

	// TODO(adam): Add retry logic here.
	resp, err := p.client.Embeddings.Generate(ctx, req)
	if err != nil {
		span.RecordError(err)
		return nil, fmt.Errorf("generating embeddings failed: %w", err)
	}

	body := resp.CreateEmbeddingsResponseBody
	if body == nil || len(body.Data) != len(texts) {
		// Guard with a local count so a nil body can't be dereferenced while formatting the error.
		got := 0
		if body != nil {
			got = len(body.Data)
		}
		err = fmt.Errorf("%w: expected %d embeddings, got %d", ErrInvalidResponse, len(texts), got)
		span.RecordError(err)
		return nil, err
	}

	// Sort by index to ensure order matches input, as not all providers may guarantee it.
	sort.Slice(body.Data, func(i, j int) bool {
		return body.Data[i].Index < body.Data[j].Index
	})

	out := make([][]float64, len(texts))
	for i, data := range body.Data {
		out[i] = data.Embedding.ArrayOfNumber
	}

	// Normalize if configured
	if p.normalize {
		out = normalizeL2Batch(out)
	}

	// TODO(adam): Extract and record token usage from body.Usage.

	return out, nil
}

@adamdecaf (Member) replied:

Yea, will fix in a follow on PR.

Collapsed comment threads:

  • docs/cross-script-matching.md (outdated)
  • internal/embeddings/provider_open_router_test.go (outdated)

adamdecaf and others added 2 commits February 9, 2026 12:19
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@adamdecaf force-pushed the MorganaFuture/feat/search_ad_cross-script_name_matching_using_neural_embeddings branch from 2e0576e to a658785 on February 9, 2026 18:19
@adamdecaf merged commit d236c6e into moov-io:master on Feb 9, 2026
11 checks passed