PR Newswire: news distribution, targeting and monitoring

Qodo achieves best code embedding performance with small 1.5 billion parameter model


News provided by

Qodo

27 Feb, 2025, 16:00 IST


New model outperforms OpenAI and Salesforce with a more efficient solution

TEL AVIV, Israel, Feb. 27, 2025 /PRNewswire/ -- Qodo, the quality-first AI coding platform, today announced the release of Qodo-Embed-1-1.5B, a new code embedding model that outperforms OpenAI's models and is the best overall in its class at a fraction of the size: 1.5 billion parameters as opposed to 7 billion. The model sets a new standard for efficiency in code understanding, enabling AI systems to better process and work with code at any scale. Because it can run on low-cost GPUs, it makes advanced code search and retrieval capabilities accessible to development teams of all sizes.

Code embedding models are essential to how AI systems understand and work with large-scale codebases - they enable accurate code search, help AI assistants retrieve relevant context, and ultimately allow AI coding agents to understand existing complex codebases. While much attention has focused on code-generating AI, the ability to accurately search and understand existing code remains crucial for both AI systems and human developers. These embedding models power everything from finding similar code patterns to enabling retrieval-augmented generation (RAG) systems that ground AI responses in real codebases.
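The retrieval workflow described above can be pictured as a nearest-neighbor search over embedding vectors. The sketch below is purely illustrative: it uses tiny hand-written toy vectors in place of real model output (an actual system would obtain the vectors from an embedding model such as Qodo-Embed-1-1.5B), and the function names are hypothetical, not part of any Qodo API.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, snippet_vecs, snippets, top_k=2):
    """Rank code snippets by cosine similarity to the query embedding."""
    scored = sorted(
        zip(snippets, snippet_vecs),
        key=lambda pair: cosine_sim(query_vec, pair[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

# Toy 4-dimensional "embeddings" standing in for real model output.
snippets = ["parse_json()", "sort_list()", "read_file()"]
vecs = [np.array(v, dtype=float) for v in
        [[0.9, 0.1, 0.0, 0.1], [0.1, 0.8, 0.2, 0.0], [0.0, 0.2, 0.9, 0.1]]]
query = np.array([0.85, 0.15, 0.05, 0.1])  # embedding of "how do I parse JSON?"

print(search(query, vecs, snippets))  # parse_json() ranks first
```

In a RAG pipeline, the top-ranked snippets returned by such a search are what gets placed into the AI assistant's context window, which is why embedding quality directly determines answer quality.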

Qodo-Embed-1-1.5B stands out for its exceptional efficiency-to-performance ratio. Scoring 68.53 on CoIR (the Code Information Retrieval Benchmark), it surpasses larger competitors, including OpenAI's text-embedding-3-large model (65.17), as well as models of comparable size, such as Salesforce's SFR-Embedding-2_R (67.41). Qodo's larger model, Qodo-Embed-1-7B, also outperforms models of the same size, scoring 71.5. CoIR is the industry's most comprehensive benchmark for code retrieval capabilities across multiple programming languages and search tasks. The smaller model's efficiency is crucial for large-scale embedding tasks, enabling teams to process and search through extensive codebases without requiring massive computing resources.

"While powerful new LLMs like OpenAI-o3 and DeepSeek-R1 are making headlines for reasoning and thinking capabilities, real-world development tasks require more than just logic from AI—they need to retrieve, interpret, and contextualize code," said Itamar Friedman, CEO of Qodo. "By focusing on code understanding and developer workflows, we're creating AI that doesn't just suggest code, but understands the entire software engineering context."

The model's performance stems from the creation of high-quality synthetic training examples based on permissive open-source code that enable it to better represent the relationships between code and natural language descriptions. This allows for more accurate code search when users make queries in plain English, which has been a weak point for existing models.
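Training on code paired with natural-language descriptions is commonly framed as contrastive learning over (query, positive, negative) triples. The example below only illustrates that general shape; the field names and data are hypothetical and do not reflect Qodo's actual training schema or synthetic-data pipeline.

```python
# Hypothetical shape of one contrastive training example: a plain-English
# query, a matching code snippet (positive), and a plausible but wrong
# snippet (negative). Field names are illustrative, not Qodo's schema.
example = {
    "query": "open a file and return its contents as a string",
    "positive": (
        "def read_text(path):\n"
        "    with open(path) as f:\n"
        "        return f.read()"
    ),
    "negative": (
        "def write_text(path, data):\n"
        "    with open(path, 'w') as f:\n"
        "        f.write(data)"
    ),
}

print("triple fields:", sorted(example))
```

A model trained on many such triples learns to embed the English query closer to the positive snippet than to the negative one, which is exactly what makes plain-English code search work.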

Qodo's code embedding model is available on Hugging Face, with the 1.5B parameter version released under the OpenRAIL++-M license and additional model sizes under custom licensing terms. The model will also be available through NVIDIA's NIM platform and AWS SageMaker JumpStart, making it easily accessible to enterprise development teams.

Contact: 
Gavriel Cohen
Concrete Media for Qodo
[email protected] 

SOURCE Qodo


Copyright © 2025 Cision US Inc.