search: add cross-script name matching using neural embeddings #696
Conversation
Adds support for matching names across different writing systems (Arabic, Cyrillic, Chinese, etc.) using multilingual embeddings. When a non-Latin query comes in, we convert it to a 384-dimensional vector and find similar vectors in the index. This lets us match "محمد علي" to "Mohamed Ali" with 97% similarity.

Key changes:
- New internal/embeddings package with ONNX model integration
- Hybrid search: embeddings for non-Latin queries, Jaro-Winkler for Latin
- LRU cache for embeddings (~5µs per lookup after warm-up)
- Build with -tags embeddings to enable
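Roughly, the routing decision looks like this (a minimal, self-contained sketch; containsNonLatin and the printed labels are illustrative, not the PR's actual identifiers):

package main

import (
	"fmt"
	"unicode"
)

// containsNonLatin reports whether any letter in s falls outside the
// Latin script (Arabic, Cyrillic, Han, etc.).
func containsNonLatin(s string) bool {
	for _, r := range s {
		if unicode.IsLetter(r) && !unicode.Is(unicode.Latin, r) {
			return true
		}
	}
	return false
}

func main() {
	for _, q := range []string{"Mohamed Ali", "محمد علي"} {
		if containsNonLatin(q) {
			fmt.Printf("%q -> embedding search\n", q)
		} else {
			fmt.Printf("%q -> Jaro-Winkler search\n", q)
		}
	}
}

Latin-script queries keep the existing Jaro-Winkler path untouched, so the embedding lookup only runs where it can actually help.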
Summary of Changes

This pull request significantly enhances the search capabilities by introducing neural embeddings to enable cross-script name matching. This allows the system to semantically understand and match names written in different languages and scripts, such as Arabic or Cyrillic, against Latin-script names in sanctions lists. The implementation uses a hybrid approach, leveraging embeddings for non-Latin queries and retaining existing string matching for Latin queries, ensuring both accuracy for diverse inputs and performance efficiency. This feature aims to drastically improve precision and recall for international name screening.
Code Review
This pull request introduces a significant and well-implemented feature for cross-script name matching using neural embeddings. The changes are comprehensive, covering the core embedding logic, integration with the search service, configuration, extensive testing, and excellent documentation. The use of build tags to make this an optional feature is a great approach. My feedback focuses on a few areas to improve performance and code clarity, particularly around caching and data lookups.
@MorganaFuture getting back to this. Thanks for the docker image - I'm running into a problem with the model. This is after running the docker image to output the ~500MB model.
Replace local ONNX model inference with API-based embedding providers. Simplifies deployment and enables flexibility in choosing providers.

Breaking change: No default model is configured. When enabling embeddings, users must explicitly set Model, Dimension, and BaseURL.

Supported providers (via OpenAI-compatible API):
- Ollama (local)
- OpenAI (recommended for production)
- OpenRouter
- Azure OpenAI
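As an illustration of the new required fields, a hypothetical configuration might look like the following. The EmbeddingsConfig struct is an assumed stand-in (the commit only specifies that Model, Dimension, and BaseURL must be set), and the Ollama values show one plausible local setup:

package main

import "fmt"

// EmbeddingsConfig is a hypothetical stand-in for the provider config;
// the commit message only tells us that Model, Dimension, and BaseURL
// must now be set explicitly when embeddings are enabled.
type EmbeddingsConfig struct {
	Provider  string
	BaseURL   string
	Model     string
	Dimension int
}

func main() {
	// Example: a local Ollama server exposing its OpenAI-compatible API.
	cfg := EmbeddingsConfig{
		Provider:  "ollama",
		BaseURL:   "http://localhost:11434/v1",
		Model:     "nomic-embed-text", // a 768-dimensional embedding model
		Dimension: 768,
	}
	fmt.Printf("%+v\n", cfg)
}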
…name_matching_using_neural_embeddings
force-pushed from a0a762d to 5f88f4d
/gemini review
Code Review
This pull request introduces a significant and valuable feature: cross-script name matching using neural embeddings. The implementation is well-designed, modular, and includes comprehensive testing and documentation. The use of a hybrid approach (embeddings for non-Latin, Jaro-Winkler for Latin) is a smart optimization. The code is generally high quality. I've identified one critical performance issue in the embedding provider implementation and a couple of minor issues in the documentation and tests, which I've detailed in the comments. Overall, this is an excellent contribution.
func (p *OpenRouterProvider) Embed(ctx context.Context, texts []string) ([][]float64, error) {
	if len(texts) == 0 {
		return nil, nil
	}

	ctx, span := telemetry.StartSpan(ctx, "openai-embed", trace.WithAttributes(
		attribute.String("provider", p.Name()),
		attribute.Int("batch_size", len(texts)),
	))
	defer span.End()

	out := make([][]float64, len(texts))

	g, ctx := errgroup.WithContext(ctx)

	for i := range texts {
		i := i // capture for closure
		text := texts[i]
		g.Go(func() error {
			// rate limit check
			if err := p.limiter.Wait(ctx); err != nil {
				return fmt.Errorf("rate limit: %w", err)
			}

			req := operations.CreateEmbeddingsRequest{
				Input: operations.InputUnion{
					Str: openrouter.String(text),
				},
				Model: p.config.Model,
				// EncodingFormat: openrouter.Pointer(operations.EncodingFormatBase64),
			}

			resp, err := p.client.Embeddings.Generate(ctx, req)
			if err != nil {
				return fmt.Errorf("generating embeddings failed: %w", err)
			}

			// backoff := p.calculateBackoff(attempt) // TODO(adam): ??

			if body := resp.CreateEmbeddingsResponseBody; body != nil {
				if len(body.Data) > 0 {
					out[i] = body.Data[0].Embedding.ArrayOfNumber
				}

				// if body.Usage != nil {
				// 	fmt.Printf(" Tokens: Prompt=%.2f Total=%.2f", body.Usage.PromptTokens, body.Usage.TotalTokens)
				// 	if body.Usage.Cost != nil {
				// 		fmt.Printf(" Cost=%.2f", *body.Usage.Cost)
				// 	}
				// 	fmt.Printf("\n")
				// }
			}

			return nil
		})
	}

	err := g.Wait()
	if err != nil {
		span.RecordError(err) // TODO(adam): panics
		return nil, err
	}

	// Normalize if configured
	if p.normalize {
		out = normalizeL2Batch(out)
	}

	// span.SetAttributes(
	// 	attribute.Int("tokens_used", embResp.Usage.TotalTokens), // TODO(adam):
	// )

	return out, nil
}
The current implementation of Embed makes a separate API call for each text in the input batch, which is highly inefficient and defeats the purpose of batching. OpenAI-compatible APIs support sending multiple strings in a single request. You should make one API call for the entire texts slice by using operations.InputUnion{ StrArray: texts }. This will significantly improve performance and reduce the number of API calls.
func (p *OpenRouterProvider) Embed(ctx context.Context, texts []string) ([][]float64, error) {
	if len(texts) == 0 {
		return nil, nil
	}

	ctx, span := telemetry.StartSpan(ctx, "openai-embed", trace.WithAttributes(
		attribute.String("provider", p.Name()),
		attribute.Int("batch_size", len(texts)),
	))
	defer span.End()

	// rate limit check
	if err := p.limiter.Wait(ctx); err != nil {
		return nil, fmt.Errorf("rate limit: %w", err)
	}

	req := operations.CreateEmbeddingsRequest{
		Input: operations.InputUnion{
			StrArray: texts,
		},
		Model: p.config.Model,
	}

	// TODO(adam): Add retry logic here.
	resp, err := p.client.Embeddings.Generate(ctx, req)
	if err != nil {
		span.RecordError(err)
		return nil, fmt.Errorf("generating embeddings failed: %w", err)
	}

	// Guard against a nil body before reading Data, so the error path
	// can't itself trigger a nil-pointer dereference.
	body := resp.CreateEmbeddingsResponseBody
	if body == nil || len(body.Data) != len(texts) {
		got := 0
		if body != nil {
			got = len(body.Data)
		}
		err = fmt.Errorf("%w: expected %d embeddings, got %d", ErrInvalidResponse, len(texts), got)
		span.RecordError(err)
		return nil, err
	}

	// Sort by index to ensure order matches input, as not all providers may guarantee it.
	sort.Slice(body.Data, func(i, j int) bool {
		return body.Data[i].Index < body.Data[j].Index
	})

	out := make([][]float64, len(texts))
	for i, data := range body.Data {
		out[i] = data.Embedding.ArrayOfNumber
	}

	// Normalize if configured
	if p.normalize {
		out = normalizeL2Batch(out)
	}

	// TODO(adam): Extract and record token usage from body.Usage.
	return out, nil
}
Yea, will fix in a follow-on PR.
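As an aside, both versions of Embed finish by calling normalizeL2Batch. The helper isn't shown in this thread, but an L2-normalization pass typically looks something like this sketch:

package embeddings

import "math"

// normalizeL2Batch scales each vector to unit length, so the dot product
// of two normalized vectors equals their cosine similarity. Zero vectors
// are left untouched to avoid dividing by zero. Sketch only; the PR's
// real implementation isn't shown here.
func normalizeL2Batch(vecs [][]float64) [][]float64 {
	for _, v := range vecs {
		var sumSq float64
		for _, x := range v {
			sumSq += x * x
		}
		if sumSq == 0 {
			continue
		}
		norm := math.Sqrt(sumSq)
		for i := range v {
			v[i] /= norm
		}
	}
	return vecs
}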
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
force-pushed from 2e0576e to a658785
Adds support for matching names across different writing systems (Arabic, Cyrillic, Chinese, etc.) using multilingual embeddings via API providers.
When a non-Latin query comes in, we convert it to a vector using an external embeddings API and find similar vectors in the index (see the sketch after the list below). This lets us match "محمد علي" to "Mohamed Ali" across scripts.
Key changes:
- Build with -tags embeddings to enable
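To make "find similar vectors in the index" concrete, here is a toy sketch of a cosine-similarity lookup over L2-normalized vectors; all names and values below are illustrative, not the PR's actual index:

package main

import (
	"fmt"
	"sort"
)

type indexedName struct {
	Name   string
	Vector []float64 // assumed L2-normalized
}

// dot computes the dot product, which equals cosine similarity
// when both vectors have unit length.
func dot(a, b []float64) float64 {
	var s float64
	for i := range a {
		s += a[i] * b[i]
	}
	return s
}

// topK ranks index entries by similarity to the query and returns the
// k best matches. A real index would avoid the full linear scan.
func topK(query []float64, index []indexedName, k int) []indexedName {
	sort.Slice(index, func(i, j int) bool {
		return dot(query, index[i].Vector) > dot(query, index[j].Vector)
	})
	if k > len(index) {
		k = len(index)
	}
	return index[:k]
}

func main() {
	index := []indexedName{
		{"Mohamed Ali", []float64{0.8, 0.6}},
		{"John Smith", []float64{0.0, 1.0}},
	}
	query := []float64{0.78, 0.62} // toy embedding of "محمد علي"
	fmt.Println(topK(query, index, 1)[0].Name) // prints "Mohamed Ali"
}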