Product Updates

The most recent MongoDB product releases and updates

Announcing DirectQuery Support for the MongoDB Atlas Connector for Power BI

Last year, we introduced the MongoDB Atlas Power BI Connector, a certified solution that has transformed how businesses gain real-time insights from their MongoDB Atlas data using their familiar Microsoft Power BI interface. Today, we’re excited to announce a significant enhancement to this integration: the introduction of DirectQuery support.

DirectQuery mode provides a direct connection to your MongoDB Atlas database, allowing Power BI to query data in real time. This means that your Power BI visualizations and reports will always reflect the latest data without importing and storing data within Power BI. This is especially beneficial for analyzing large datasets where up-to-date information is crucial, ensuring decisions are made efficiently without losing performance to repetitive data imports and storage complexities.

How DirectQuery in the MongoDB Atlas Power BI Connector works

The Power BI Connector is supported through MongoDB’s Atlas SQL Interface, which is easily enabled from the Atlas console. Atlas SQL, powered by Atlas Data Federation, allows you to integrate data across sources and apply transformations directly, enhancing your analytics. Once enabled, you’ll receive a SQL endpoint or URL to input into your MongoDB Atlas SQL connection dialog within Power BI Desktop. Here, you can choose between two connectivity modes: Import or DirectQuery.

Once connected through DirectQuery, query folding takes place with Power Query, which is how retrieval and transformation of source data is optimized. You can also achieve data transformation using a SQL statement, either with the SQL Statement option in the Atlas SQL Interface or within the M Code script accessed via the Power Query Advanced Editor.

After your data is transformed and ready for analysis, start building reports with your Atlas data within Power BI Desktop. Then simply save, publish, and distribute within the Power BI online app, which is now part of the Microsoft Fabric platform.

Watch our comprehensive tutorial covering how to connect your Atlas data to Power BI, control SQL schemas in Atlas, and use DirectQuery to gain real-time access to your data for business insights.

The Power BI Connector for MongoDB Atlas is a Microsoft-certified solution. It not only supports the advanced capabilities of DirectQuery but also continues to offer Import mode for scenarios where data volume is manageable and detailed data modeling is preferred. Whether you’re analyzing real-time data streams or creating comprehensive reports, the Atlas Power BI Connector adapts to your needs, ensuring your business leverages the full power of MongoDB Atlas.

DirectQuery support is available now and can be accessed by updating your existing MongoDB Atlas Power BI Connector or downloading it here. Start transforming your data analysis and making more informed decisions with real-time Atlas data. Log in and activate the Atlas SQL Interface to try out the Atlas Power BI Connector! If you are new to Atlas or Power BI, get started for free today on Azure Marketplace or Power BI Desktop.

May 13, 2024
Updates

MongoDB Provider for Entity Framework Core Now Generally Available

We are pleased to announce that the MongoDB Provider for Entity Framework Core (EF Core) is now generally available. This allows developers using EF Core to build C# and .NET applications with MongoDB and to take advantage of our powerful developer data platform while continuing to use APIs and design patterns they already know and love.

Building for the C# and .NET communities

Nearly one-third of all developers use C# to build applications, with the population of C# developers reaching upwards of 10 million worldwide. What’s more, 39 percent of C# developers use EF Core, which is beloved as an abstraction layer that simplifies working with data during development. In the past, C# developers could use MongoDB’s C# driver but didn’t have first-party support for EF Core, so some turned to community-built projects that could be helpful but lacked official backing or ongoing support from MongoDB. With the official MongoDB Provider for EF Core now generally available, developers can confidently use C# and EF Core when building with MongoDB for production-grade workloads.

Gaurav Seth, Partner Director, Product Management at Microsoft, shared his excitement about the new integration, highlighting its importance for the .NET developer community:

“We are pleased to deepen the relationship between .NET developers and MongoDB through the new MongoDB Provider for Entity Framework Core. This advancement bridges the gap between MongoDB and Entity Framework Core, enabling .NET developers to leverage the full spectrum of MongoDB’s capabilities within the familiar EF environment. With this integration, .NET developers can now more easily incorporate MongoDB’s powerful features into their EF-based applications, further enhancing the robustness and scalability of their solutions.”

Gaurav Seth, Partner Director, Product Management at Microsoft

What's in the new Provider for EF Core

With the general availability release, the MongoDB Provider for EF Core offers developers the following capabilities, building upon the foundational features released in the public preview:

Compatibility with Entity Framework Core 8 & .NET 8: Fully compatible with the latest EF Core and .NET versions, ensuring your projects are up to date with the newest features and improvements.
Advanced querying and data operations: Provides a comprehensive suite of querying options, including complex operations and aggregates like Where, OrderBy, and ThenBy, enabling precise data retrieval and deeper analytical insights within your applications.
Mapping and configuration flexibility: Extended mapping capabilities for properties and entities, including support for various data types and composite keys, providing greater flexibility and precision in how data is structured and stored.
Array and list handling: Improved handling of arrays and lists, enabling more complex data structures to be easily managed and manipulated within your applications.
Logging: Enhanced logging for better visibility of operations.

We will continue to offer support for the following capabilities launched in the public preview:

Support for code-first workflows: Allows users to build without an initial database; you create the classes for your application and then match your data model to the classes, not the other way around.
Basic CRUD methods: Basic create, read, update, and delete (CRUD) operations are supported.
String and numeric type operators: String and numeric type operators needed for basic CRUD operations are supported. We anticipate supporting more complex operators in future iterations of the Provider.
Embedded documents: The Provider supports embedded documents, making it easier to store related information in the same database record.
Class mapping and serialization: Your classes in C# will map to MongoDB in a predictable way, including when working with IDs as well as date and/or time values.
LINQ query support: The Provider supports LINQ queries with fluent query syntax.
Change tracking: The Provider allows you to track and save changes made to entities with each DbContext instance back to your MongoDB database.

Benefits of using the Provider for EF Core

With the MongoDB Provider for EF Core, C# developers can unlock the full power of MongoDB's developer data platform to build modern applications while leveraging a familiar API interface, query paradigm (LINQ), and design patterns. Developers looking to modernize their data layer can do so with MongoDB while remaining free from cloud vendor lock-in, since MongoDB works with all major cloud providers and supports multi-cloud deployments.

How to get started with the MongoDB Provider for Entity Framework Core

All you need to do is download the MongoDB Provider for EF Core from the NuGet package manager and build a DbContext that points to a MongoDB instance. The Provider connects to MongoDB and handles the rest, so you can quickly harness the joint value of EF Core and MongoDB. Learn more by diving into our documentation.

After you try the new Provider for EF Core, leave us feedback. Your input is important for helping us continue to improve the product experience. Get started today to unleash the power of your data with MongoDB and EF Core.

May 3, 2024
Updates

Welcome to MongoDB.local NYC 2024!

AI promises to upend how enterprises operate and reach customers … if only they could first find the "On" button. Despite the tremendous promise of AI, most companies still find themselves in the experimentation phase, working through proofs of concept, hampered by unfamiliar technologies that don't work well together. But MongoDB is uniquely positioned to help developers turn all this AI noise into "signal" that benefits customers.

This week at MongoDB.local NYC, thousands of developers and executives—representing Fortune 500 companies and cutting-edge startups—have gathered to discuss and demonstrate the real-world successes they've had building on MongoDB's developer data platform. MongoDB is fast becoming the industry’s go-to memory database for retrieval-augmented generation (RAG) and agentic systems, offering a unified data model across the entire AI stack. But this isn’t just a technology story, as important as that is. MongoDB also now offers essential programs and services to make AI much more accessible. In short, MongoDB is taking developers from experimentation to impact, and advancing our long-standing mission of making it easy to work with data.

Demystifying AI

Businesses are eager to adopt generative AI, but they don’t know where to start. The AI landscape is incredibly complex—and seems to get more so by the minute. This complexity, coupled with limited in-house AI expertise and concerns about the performance and security risks of integrating disparate technologies, is keeping too many organizations on the sidelines. MongoDB can help.

To get organizations started, we’re announcing the MongoDB AI Applications Program (MAAP). With MAAP, we give customers the blueprints and reference architectures to easily understand how to build AI applications. We also take on the heavy lifting of integrating MongoDB's developer data platform with leading AI partners like Anthropic, Cohere, Fireworks AI, LangChain, LlamaIndex, Nomic, Anyscale, Credal.ai, and Together AI, all running on the cloud provider of your choice. MAAP will be available to customers in early access starting in July.

In addition to MAAP, we’re also introducing two new professional services engagements to help you build AI-powered apps quickly, safely, and cost-effectively:

An AI Strategy service that leverages experts to help customers identify the highest-impact AI opportunities and to create specific plans on how to pursue them.
For customers who have already identified use cases to pursue, an AI Accelerator service that brings expert consulting—from solution design through prototyping—to enable customers to execute their AI application roadmap from idea to production.

Once developers get to building AI apps, they’ll find that MongoDB allows them to speak the data “language” of AI. Our developer data platform unifies all different data types alongside your real-time operational data—including source data, vector embeddings, metadata, and generated data—and supports a broad range of use cases. Not only do we give developers the most intuitive way to work with their data, we also keep improving where they can do so. Many developers first experience MongoDB in a local environment before moving to a fully managed cloud service like MongoDB Atlas. So, I'm excited to share that we will be introducing full-text search and vector search in MongoDB Community Edition later this year, making it even easier for developers to quickly experiment with new features and streamlining end-to-end software development workflows when building AI applications. These new capabilities also enable support for customers who want to run AI-powered apps on devices or on-premises.

As customers begin to mature these applications, cost becomes an important consideration. Last year, we introduced dedicated nodes for Atlas Search on AWS. Using dedicated nodes, customers can isolate their vector search workloads and scale them up or down independently from operational workloads, improving performance and ensuring high availability. By giving customers workload isolation without data isolation, they can manage resources efficiently without additional complexity. Today, we’re announcing Atlas Search Nodes on all three cloud providers, which customers can configure programmatically using the Atlas CLI or our Infrastructure-as-Code integrations. Learn more about how MongoDB is the best solution to the challenges posed by the fast-moving generative AI landscape.

Real-time and highly performant

Though AI rightly claims center stage at MongoDB.local NYC this week, it's not the only way we're helping developers. From real-time fraud detection, to predictive maintenance, to content summarization, customers need to efficiently process large volumes of high-velocity data from multiple sources. Today, we’re also announcing the general availability of Atlas Stream Processing, the public preview of Atlas Edge Server, and improved performance of time series workloads with MongoDB 8.0. Together, these capabilities enable customers to design applications that solve virtually any business challenge. Learn more about how MongoDB powers modern application requirements.

These are just a few of the things we're announcing this week. Whether you’re just dipping your toes into the world of generative AI or are well on your way, MongoDB’s developer data platform, strong and diverse network of partners, and proven industry solutions will give you a competitive edge in a fast-moving market. Please take a minute to see what we've built for you, so that you can more easily build for your customers. Enjoy the conference, and we hope to see you soon!

To see more announcements and get the latest product updates, visit our What’s New page. And head to the MongoDB.local hub to see where we’re stopping along our 2024 world tour.

May 2, 2024
Updates

Top AI Announcements at MongoDB.local NYC

The AI landscape is evolving so quickly that it’s no surprise customers are overwhelmed by their choices. Between foundation models for everything from text to code, AI frameworks, and the steady stream of AI-related companies being founded daily, developers and organizations face a dizzying array of AI choices. MongoDB empowers customers through a developer data platform that helps them avoid vendor lock-in from cloud providers or AI vendors in this fast-moving space. This freedom allows customers to choose the large language model (LLM) that best suits their needs - now or in the future, whether it's open source or proprietary. Today at MongoDB.local NYC, we announced many new product capabilities, partner integrations, services, and solution offerings that enable development teams to get started and build customer-facing solutions with AI.

Run everywhere, with whatever technology you are using in your AI stack

MongoDB’s flexible document model is built on the ethos of “data that is accessed and used together is stored together.” Vectors are a natural extension of this capability, meaning customers can store their source data, metadata, and related vector embeddings in the same document. All of this is accessed and queried with a common Query API, making vector data easy to combine and work with other types of data stored within MongoDB.
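As a rough sketch of that pattern (the collection, index, and field names below are illustrative assumptions, not part of today's announcements), a single document can carry the source content, metadata, and its embedding, and one aggregation pipeline retrieves them together with $vectorSearch:

    import { MongoClient } from "mongodb";

    // One document holds the source text, metadata, and its vector embedding together, e.g.:
    //   { title: "Backpacking in Patagonia", text: "...", tags: ["travel"], embedding: [0.013, -0.241, ...] }
    const client = new MongoClient(process.env.MONGODB_ATLAS_URI);
    const articles = client.db("content").collection("articles");

    // The same Query API that serves operational reads also runs vector retrieval.
    // "vector_index", the field names, and questionEmbedding are assumptions for illustration.
    async function findRelated(questionEmbedding) {
      return articles.aggregate([
        {
          $vectorSearch: {
            index: "vector_index",
            path: "embedding",
            queryVector: questionEmbedding,
            numCandidates: 200,
            limit: 5
          }
        },
        // Project the source data and metadata back out alongside the relevance score.
        { $project: { title: 1, text: 1, tags: 1, score: { $meta: "vectorSearchScore" } } }
      ]).toArray();
    }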
MongoDB Atlas—our fully managed, multi-cloud developer data platform—makes it easy to build AI-powered applications and experiences, with the breadth and depth of MongoDB’s AI partnerships and integrations—no matter which language, application framework, foundation model, or technology partner is used or preferred by developers. This year, we’re continuing to focus on our AI partnerships and integrations to make it easier for developers to build innovative applications with generative AI, including:

Python and JavaScript support using the dedicated LangChain-MongoDB package
Python and C# Microsoft Semantic Kernel integration for Atlas Vector Search
AI models from Mistral and Cohere
AI models on the Fireworks AI platform
Addition of Atlas Vector Search as a knowledge base in Amazon Bedrock
Atlas as a datastore enabling storage, query, and retrieval using natural language in ChatGPT
Atlas Vector Search as a datastore on Haystack
Atlas Vector Search as a datastore on DocArray
Collaboration with Google Gemini Code Assist and Amazon Q to quickly prototype new features and accelerate application development
Google Vertex AI Extension to harness natural language with MongoDB queries

MongoDB integrates well with a rich ecosystem of AI developer frameworks, LLMs, and embedding providers. We continue investing in making the entire AI stack work seamlessly, enabling developers to easily take advantage of generative AI capabilities in their applications. MongoDB’s integrations and our industry-leading multi-cloud capabilities allow organizations to move quickly and avoid lock-in to any particular cloud provider or AI technology in a rapidly evolving space.

Build high-performance AI applications securely and at scale

Workload isolation, without data isolation, is critical for building performant, scalable AI applications. Search Nodes in MongoDB Atlas provide dedicated computing and enable users to isolate memory-intensive AI workloads for superior performance and higher availability. Users can optimize resource consumption for their use case, upsizing or downsizing the hardware for that specific node irrespective of the rest of the database cluster. Search Nodes make it easy to optimize performance for vector search queries without over- or under-provisioning an entire cluster. The IaC integrations with the HashiCorp Terraform Atlas Provider and CloudFormation enable developers to configure and programmatically deploy Search Nodes at scale.

Search Nodes are an integral part of Atlas - our fully managed, battle-tested, multi-cloud platform. Previously, we announced the availability of Search Nodes for our AWS and Google Cloud customers. We are excited to announce the preview of Search Nodes for our Azure customers at MongoDB.local NYC. Search Nodes on Atlas help developers move faster by removing the friction of integrating, securing, and maintaining the essential data components required to build and deploy modern AI applications.

Improve developer productivity with AI-powered experiences

Today, we also announced new and improved releases of our intelligent developer experiences in MongoDB Compass, MongoDB Relational Migrator, and MongoDB Atlas Charts, aiming to enhance developer productivity and velocity. With the updated releases, developers can use natural language to query their data using MongoDB Compass, troubleshoot common problems during development, perform SQL-to-Query API conversion right from within MongoDB Relational Migrator, and quickly build charts and dashboards using natural language prompts in MongoDB Atlas Charts. Collectively, these intelligent experiences will help developers build differentiated features with greater control and flexibility, making it easier than ever to build applications with MongoDB.

Enable development teams to get started and build customer-facing solutions faster and easier with AI

MongoDB makes it easy for companies of all sizes to build AI-powered applications. To provide customers with a straightforward way to get started with generative AI, MongoDB is announcing the MongoDB AI Applications Program (MAAP). Based on usage patterns for common AI use cases, customers receive a functioning application built on a reference architecture backed by MongoDB Atlas, vetted AI models and hosting solutions, technical support, and a full-service engagement led by our Professional Services team. We’re launching with an incredible group of industry-leading partners, including Anthropic, Anyscale, AWS, Cohere, Credal.ai, Fireworks.ai, Google Cloud, gravity9, LangChain, LlamaIndex, Microsoft Azure, Nomic, PeerIslands, Pureinsights, and Together AI. MongoDB is in a unique position in the market to be able to pull together such an impressive AI partner ecosystem in a single customer-focused program, and we’re excited to see how MAAP will help customers more easily go from ideation to fully functioning generative AI applications.

Last year, to further enable startups to build AI solutions with MongoDB Atlas, we launched the AI Innovators Program, an extension of MongoDB for Startups, which offers an additional $5,000 in Atlas credits to our AI startups. This year, we are expanding the program by introducing an AI Startup Hub, which features a curated guide for getting started with MongoDB and AI, quickstarts for MongoDB and select AI partners, and startup credit offerings from our AI partners.

We also provide two new AI Accelerator consulting packages for larger enterprise companies: AI Essentials and AI Implementation. While MAAP is aimed exclusively at building highly vetted reference architectures, these consulting packages allow customers to design, build, and deploy open-ended AI prototypes and solutions into their applications.

Data has always been a competitive advantage for organizations, and MongoDB makes it easy, fast, and flexible to innovate with data. We continue to invest in making all the other parts of the AI stack easy for organizations: vetting top partners to ensure compatibility with different parts of the application stack, building a managed service that spans multiple clouds in operation, and ensuring the openness that's always been a part of MongoDB, which avoids vendor lock-in.

How does MongoDB Atlas unify operational, analytical, and generative AI data services to streamline building AI-enriched applications? Check out our MongoDB for AI page to learn more.

May 2, 2024
Updates

MongoDB Introduces Workload Identity Federation for Database Access

MongoDB Atlas customers run workloads (applications) inside AWS, Azure, and Google Cloud. Today, to enable these workloads to authenticate with a MongoDB Atlas cluster, customers create and manage MongoDB Atlas database users using the natively supported SCRAM (password) and X.509 authentication mechanisms and configure them in their workloads. Customers have to manage the full identity lifecycle of these users in their applications, including frequently rotating secrets. To meet their evolving security and compliance requirements, our enterprise customers require database users to be managed within their existing identity providers or the cloud providers of their choice.

Workload Identity Federation will be generally available later this month and allows MongoDB Atlas database users to be managed with Azure Managed Identities, Azure Service Principals, Google Service Accounts, or an OAuth 2.0-compliant authorization service. This approach makes it easier for customers to manage, secure, and audit their MongoDB Atlas database users in their existing identity provider or a cloud provider of their choice and enables "passwordless" access to their MongoDB Atlas databases.

Along with Workload Identity Federation, Workforce Identity Federation, which was launched in public preview last year, will be generally available later this month. Workforce Identity Federation allows organizations to configure access to MongoDB clusters for their employees with single sign-on (SSO) using OpenID Connect. Both features complement each other and enable organizations to have complete control of database access for both application users and employees.

Workload Identity Federation support will be available in Atlas Dedicated Clusters on MongoDB 7.0 and above, and is supported by the Java, C#, Node.js, and Python drivers. Go driver support will be added soon.

Quick steps to get started with Workload Identity Federation:

1. Configure Atlas with your OAuth 2.0-compatible workload identity provider, such as Azure or Google Cloud.
2. Configure an Azure Service Principal or Google Cloud Service Account for the Azure or Google Cloud resource where your application runs.
3. Add the configured Azure Service Principal or Google Cloud Service Account as an Atlas database user with federated authentication.
4. Using Python or any supported driver inside your application, authenticate and authorize with your workload identity provider and Atlas clusters (see the sketch below for what this looks like in application code).

To learn more about Workload Identity Federation, please refer to the documentation. And to learn more about how MongoDB’s robust operational and security controls protect your data, read more about our security features.
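To illustrate step 4, here is a minimal, hypothetical Node.js sketch of a workload on Azure authenticating with its managed identity via MONGODB-OIDC; the cluster hostname and token audience are placeholders, and the exact connection options should be confirmed against your driver's documentation:

    import { MongoClient } from "mongodb";

    // No password or X.509 certificate in the application: the driver obtains a token
    // for the Azure managed identity attached to the compute resource at runtime.
    // The hostname and TOKEN_RESOURCE (token audience) below are placeholders.
    const uri =
      "mongodb+srv://mycluster.example.mongodb.net/?" +
      "authMechanism=MONGODB-OIDC" +
      "&authMechanismProperties=ENVIRONMENT:azure,TOKEN_RESOURCE:<azure-audience>";

    async function main() {
      const client = new MongoClient(uri);
      // Ordinary reads and writes once authenticated; namespace is made up for the example.
      const doc = await client.db("app").collection("orders").findOne();
      console.log(doc);
      await client.close();
    }

    main().catch(console.error);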

May 2, 2024
Updates

Atlas Stream Processing is Now Generally Available!

We're thrilled to announce that Atlas Stream Processing—the MongoDB-native way to process streaming data—is now generally available, empowering developers to quickly build responsive, event-driven applications!

Our team spent the last two years defining a vision and building a product that leans into MongoDB’s strengths to overcome the hard challenges in stream processing. After a decade of building stream processing products outside of MongoDB, we are using everything that makes MongoDB unique and differentiated—the Query API and powerful aggregation framework, as well as the document model and its schema flexibility—to create an awesome developer experience. It’s a new approach to stream processing, and based on the feedback of so many of you in our community, it’s the best way for most developers using MongoDB to do it. Let’s get into what’s new.

What's new in general availability?

Production Readiness: Ready to support your production workloads, ensuring reliable and scalable stream processing for your mission-critical applications.
Time Series Collection Support: Emit processor results into time series collections. Pre-process data continuously while saving it for historical access later in a collection type available in MongoDB Atlas built to efficiently store and query time series data.
Development and Production Tiers: Besides the SP30 cluster tier available during the public preview, we’re introducing an SP10 tier to provide flexibility and a cost-effective option for exploratory use cases and low-traffic stream processing workloads.
Improved Kafka Support: Added support for Kafka headers allows applications to provide additional metadata alongside event data. Headers are helpful for various stream processing use cases (e.g., routing messages, conditional processing, and more).
Least Privilege Access: Atlas database users can be granted access to individual Stream Processing Instances, enabling access for only those who need it. Read our tutorial for more information.
Stream Processor Alerting: Gain insight and visibility into the health of your stream processors by creating alerts for when a failure occurs. Supported methods for alerting include email, SMS, monitoring platforms like Datadog, and more.

Why Atlas Stream Processing?

Atlas Stream Processing brings the power and flexibility of MongoDB's document model and Query API to the challenging stream processing space. With Atlas Stream Processing, developers can:

Effortlessly handle complex and rapidly changing data structures
Use the familiar MongoDB Query API for processing streaming data
Seamlessly integrate with MongoDB Atlas
Benefit from a fully managed service that eliminates operational overhead
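To give a feel for that developer experience, here is a minimal sketch of defining and starting a stream processor in mongosh; the connection names, topic, and target namespace are placeholders for whatever is registered in your Stream Processing Instance's connection registry:

    // In mongosh, connected to an Atlas Stream Processing Instance.
    // "kafkaProd" and "atlasCluster" are hypothetical connection-registry names.
    const pipeline = [
      // Continuously read events from a Kafka topic
      { $source: { connectionName: "kafkaProd", topic: "sensor-readings" } },
      // Validate and filter with the familiar aggregation framework
      { $match: { "payload.temperature": { $gte: -50, $lte: 150 } } },
      // Write the processed events into an Atlas (time series) collection
      { $merge: { into: { connectionName: "atlasCluster", db: "factory", coll: "readings" } } }
    ];

    sp.createStreamProcessor("sensorReadings", pipeline);
    sp.sensorReadings.start();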
Customer highlights

Read what developers are saying about Atlas Stream Processing:

“At Acoustic, our key focus is to empower brands with behavioral insights that enable them to create engaging, personalized customer experiences. To do so, our Acoustic Connect platform must be able to efficiently process and manage millions of marketing, behavioral, and customer signals as they occur. With Atlas Stream Processing, our engineers can leverage the skills they already have from working with data in Atlas to process new data continuously, ensuring our customers have access to real-time customer insights.”

John Riewerts, EVP, Engineering at Acoustic

“Atlas Stream Processing enables us to process, validate, and transform data before sending it to our messaging architecture in AWS, powering event-driven updates throughout our platform. The reliability and performance of Atlas Stream Processing has increased our productivity, improved developer experience, and reduced infrastructure cost.”

Cody Perry, Software Engineer, Meltwater

What's ahead for Atlas Stream Processing?

We’re rapidly introducing new features and functionality to ensure MongoDB delivers a world-class stream processing experience for all development teams. Over the next few months, you can expect to see:

Advanced Networking Support: Support for VPC peering to Kafka clusters for teams requiring additional networking capabilities.
Expanded Cloud Region Support: Support for all cloud regions available in Atlas Data Federation.
Expanded Cloud Provider Support: Support for Microsoft Azure.
Expanded Data Source and Sink Support: We have plans to expand beyond Kafka and Atlas databases in the coming months. Let us know which sources and sinks you need, and we will factor that into our planning.
Richer Metrics & Observability: Support for expanded visibility into your stream processors to help simplify monitoring and troubleshooting.
Expanded Deployment Flexibility: Support for deploying stream processors with Terraform. This integration will help to enable a seamless CI/CD pipeline, enhancing operational efficiency with infrastructure as code. Look out for a dedicated blog in the near future on how to get started with Atlas Stream Processing and Terraform.

So whether you're looking to process high-velocity sensor data, continuously analyze customer data to deliver personalized experiences, or perform predictive maintenance to increase yields and reduce costs, Atlas Stream Processing has you covered. Join the hundreds of development teams already building with Atlas Stream Processing. Stay tuned to hear more from us soon, and good luck building!

Log in today or check out our introductory tutorial to get started.

May 2, 2024
Updates

Elevating Database Performance: Introducing Query Insights in MongoDB Atlas

Today, at .local NYC, MongoDB Atlas introduced the new Query Insights tab, enhancing how users monitor, manage, and optimize their database performance directly within the Atlas UI. This new feature offers developers deeper insights into their database’s performance, with a more powerful query analysis tool and detailed namespace-level metrics for faster issue resolution and enhanced performance.

Applications and workloads change over time, making it increasingly difficult to track inefficient queries that strain a database's resources. Metrics can spike for various reasons, and developers need the right tooling to determine the source of the problem so they can quickly identify and resolve the issue. MongoDB Atlas's Query Insights directly tackles these challenges by enhancing MongoDB's observability capabilities with two crucial features: Namespace Insights and an upgraded Query Profiler.

Query Insights delivers performance optimization through actionable intelligence

The introduction of MongoDB Atlas Query Insights demonstrates MongoDB’s commitment to advanced database management. The feature enhances our platform’s observability capabilities with detailed and actionable insights, integrating Namespace Insights and an upgraded Query Profiler within a new dynamic interface to help boost database performance by streamlining diagnostics and reducing troubleshooting times.

The newly added Namespace Insights provides users with collection-level latency statistics and a comprehensive view of how the hottest collections on a cluster perform over time. This enables developers to answer “Who or what is causing the problem?”, which is instrumental in identifying performance trends and prioritizing query optimizations.

The enhanced cluster-centric Query Profiler introduces a more comprehensive view of slow and inefficient queries over a broader period. Having an overall view of data across the entire cluster facilitates more straightforward navigation between nodes and a longer lookback period to identify trends. This ultimately reduces troubleshooting time, thereby enhancing developer productivity and improving overall database performance.

Key benefits of Query Insights

Query Insights brings MongoDB Atlas users several new benefits, including:

Granular telemetry: Faster identification and resolution of database issues with namespace-level latency statistics.
Improved observability: It is easier to spot performance trends, identify root causes, and debug applications.
Enhanced productivity: Reduced troubleshooting time thanks to a more comprehensive view of slow operations.

Try it out!

The Query Insights page offers more granular insights into database performance by surfacing collection- and operation-level details. The Namespace Insights page provides metrics for the top 20 collections by total latency. Hover over the charts to see how collections perform relative to each other over time. This information makes it easier to answer the question: “Who or what is causing the problem?”

Use the Query Profiler to view specific slow operations. Click on a point in the scatter plot to bring up additional metadata about each slow operation. Click on View More Details to see more metrics and metadata about each slow operation, including the app name, the operation, the plan summary, execution stats, and more.
Empowering users for peak performance

The launch of Query Insights in MongoDB Atlas underscores MongoDB’s commitment to enhancing our platform's observability capabilities. By providing users with the necessary tools and insights for optimal database performance, MongoDB enables developers to spend less time debugging and more time creating—lowering the total cost of ownership, maximizing efficiency, and adding significant value to our users' operations.

Sign up for MongoDB Atlas, our cloud database service, to see Query Insights in action, and for more information, see Monitor Query Performance.

May 2, 2024
Updates

Atlas Edge Server is Now in Public Preview

We’re excited to announce that Atlas Edge Server is now in public preview! Any developer on Atlas can now deploy Edge Server for their connected infrastructure. Learn more in our docs or get started today.

Developers value MongoDB’s developer data platform for the flexibility and ease of use of the document model, as well as for helpful tools like search and charts that simplify data management. As a crucial component of our Atlas for the Edge solution, Atlas Edge Server extends these capabilities to remote and network-constrained environments. First announced at MongoDB.local London 2023, Atlas for the Edge enables local data processing and management within edge environments and across edge devices, reducing latency, enhancing performance, and allowing for disconnection resilience.

What's new in public preview?

One of our top priorities is providing developers with a seamless experience when managing their data and applications. We continuously seek to enhance this experience, which is why, starting today, Atlas Edge Server can be directly downloaded, configured, and managed through the Atlas UI. Developers who deploy from the Atlas UI can choose between two onboarding flows to ensure that their configuration is tailored to their needs: one for developers who want to connect to their edge server with a MongoDB driver or client, and one for those who want to connect to the Edge Server via Device Sync.

Why Atlas Edge Server?

While edge computing brings data processing closer to end users and offers substantial benefits, such as network resilience and increased security, a number of challenges inherent to edge computing can make it difficult to fully leverage. These challenges include managing complex networks, handling large volumes of data, and addressing security concerns, any of which can deter organizations from adopting edge computing. Additionally, the costs associated with building, maintaining, and scaling edge computing systems can be significant. Atlas for the Edge and Atlas Edge Server alleviate these challenges.

Atlas Edge Server provides a MongoDB instance equipped with a synchronization server that can be deployed on local or remote infrastructure. It enables real-time synchronization, conflict resolution, and disconnection tolerance. This ensures that mission-critical applications and devices operate seamlessly, even with intermittent connectivity. Edge Server allows for selective synchronization of only modified fields, conserving bandwidth and prioritizing crucial data transfers to Atlas. It also maintains edge client functionality even with intermittent cloud connectivity, preventing disruptions to essential operations like inventory management and point-of-sale systems. Processing data locally reduces latency and enables rapid data insights, reducing dependency on central databases.

We'll meet you at the edge

The public preview of Atlas Edge Server underscores MongoDB’s ongoing commitment to enhancing our developer data platform for distributed infrastructures. As we continue to invest in Atlas for the Edge, MongoDB’s goal is to equip teams with a robust data solution that not only offers an exceptional developer experience but also empowers them to drive innovative solutions for their businesses and customers.

Get started today, or visit the Atlas for the Edge web page to learn more about how companies are benefiting from our edge solution.
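To illustrate the driver/client onboarding flow mentioned above, here is a minimal, hypothetical Node.js sketch. The hostname and port stand in for the wire-protocol endpoint your Edge Server exposes on the local network, and the namespace is made up for the example:

    import { MongoClient } from "mongodb";

    // Placeholder endpoint; use the connection string from your Edge Server configuration.
    const client = new MongoClient("mongodb://edge-server.local:27021");

    async function recordSale() {
      try {
        const sales = client.db("store").collection("sales");
        // The write lands on the Edge Server first, so the point-of-sale flow keeps working
        // even if the uplink to Atlas is down; changes sync to Atlas when connectivity allows.
        await sales.insertOne({ sku: "A-100", qty: 2, soldAt: new Date() });
      } finally {
        await client.close();
      }
    }

    recordSale().catch(console.error);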

May 2, 2024
Updates

Workload Isolation for More Scalability and Availability: Search Nodes Now on Google Cloud

May 2, 2024 update: Search Nodes are now also in preview on Microsoft Azure.

Today we’re excited to take the next step in bringing scalable, dedicated architecture to your search experiences with the introduction of Atlas Search Nodes, now in general availability for Google Cloud.

Since our initial announcement of Search Nodes in June of 2023, we’ve been rapidly accelerating access to the most scalable dedicated architecture, starting with general availability on AWS and now expanding to general availability on Google Cloud. We'd like to give you a bit more context on what Search Nodes are and why they're important to any search experience running at scale.

Search Nodes provide dedicated infrastructure for Atlas Search and Vector Search workloads, enabling even greater control over search workloads. They also allow you to isolate and optimize compute resources to scale search and database needs independently, delivering better performance at scale and higher availability.

One of the last things developers want to deal with when building and scaling apps is infrastructure problems. Any downtime or poor user experience can result in lost users or revenue, especially when it comes to your database and search experience. This is one of the reasons developers turn to MongoDB, given the ease of having one unified system for your database and search solution. With the introduction of Atlas Search Nodes, we’ve taken the next step in giving builders ultimate control: the ability to remain flexible by scaling search workloads without the need to over-provision the database.

By isolating your search and database workloads while automatically keeping your search cluster data synchronized with operational data, Atlas Search and Atlas Vector Search eliminate the need to run a separate ETL tool, which takes time and effort to set up and is yet another failure point for your scaling app. This provides superior performance and higher availability while reducing architectural complexity and wasted engineering time recovering from sync failures. In fact, we’ve seen a 40% to 60% decrease in query time for many complex queries, while eliminating the chances of any resource contention or downtime.

With just a quick button click, Search Nodes on Google Cloud offer our existing Atlas Search and Vector Search users the following benefits:

Higher availability
Increased scalability
Workload isolation
Better performance at scale
Improved query performance

We offer both compute-heavy search-specific nodes for relevance-based text search, as well as a memory-optimized option that is optimal for semantic and retrieval-augmented generation (RAG) production use cases with Atlas Vector Search. This makes resource contention and availability issues a thing of the past.

Search Nodes are easy to opt into and set up. To get started in the MongoDB Atlas UI:

1. Navigate to the “Database Deployments” section.
2. Click the green “+Create” button.
3. On the “Create New Cluster” page, select Google Cloud and enable “Multi-cloud, multi-region & workload isolation.”
4. Toggle “Search Nodes for workload isolation” to enable it.
5. Select the number of nodes in the text box.
6. Check the agreement box.
7. Click “Create cluster.”

For existing Atlas Search users, click “Edit Configuration” in the MongoDB Atlas Search UI and enable the toggle for workload isolation. From there, the steps are the same as above.

Jump straight into our docs to learn more!

March 28, 2024
Updates

AI-powered SQL Query Converter Tool is Now Available in Relational Migrator

When I traveled to Japan for the first time, it was shortly after translation apps on smartphones had really taken off. Even though I knew enough phrases to get by as a tourist, I was amazed at how empowered I was by being able to have smoother conversations and read signs more easily. The power of AI helped me understand a language I had only a passing familiarity with and drastically improved my experience in another country. I was able to spend more time enjoying myself and less time looking up common words and sentences in a phrase book.

So what does this have to do with application modernization? Transitioning from relational databases as part of a modernization effort is more than migrating data from a legacy database to a modern one. There is all the planning, designing, testing, refactoring, validating, and ongoing operation that makes modernization efforts a complex project to navigate successfully. MongoDB’s free Relational Migrator tool has helped with many of these tasks, including schema design, data migration, and code generation, but we know this is just the beginning.

One of the most common challenges of migrating legacy applications to MongoDB is working with SQL queries, triggers, and stored procedures that are often undocumented and must be manually converted to MongoDB Query API syntax. This requires deep knowledge of both SQL and the MongoDB Query API, which is rare if teams are used to using only one system or the other. In addition, teams often have hundreds, if not thousands, of queries, triggers, and stored procedures that must be converted, which is extremely time-consuming and tedious. Doing these conversions manually would be like traveling abroad and looking up each object one by one in a phrase book instead of using a translation app.

Thankfully, with generative AI, we are finally able to get the modern equivalent of the translation app on your phone. The latest release of Relational Migrator is able to use generative AI to help your developers quickly convert existing SQL queries, triggers, and stored procedures to work with MongoDB using your choice of programming language (JavaScript, C#, or Java). By automating the generation of development-ready MongoDB queries, your team can be more efficient by redirecting their time to more important testing and optimization efforts — accelerating your migration project. Teams that are familiar with SQL can also use the Query Converter to help close their MongoDB knowledge gap. The SQL objects they're familiar with are translated, making it easier to learn the new syntax by seeing the two side by side.

Let’s take a closer look at how Query Converter can convert a SQL Server stored procedure to work with MongoDB.

Figure 1: The MongoDB Query Converter dashboard

We’ll start by importing the stored procedure from the relational database into our Relational Migrator project. This particular stored procedure joins the results from two tables, performs some arithmetic on some of the columns, and filters the results based on an input parameter.
    CREATE PROCEDURE CustOrdersDetail @OrderID int
    AS
    SELECT ProductName,
        UnitPrice=ROUND(Od.UnitPrice, 2),
        Quantity,
        Discount=CONVERT(int, Discount * 100),
        ExtendedPrice=ROUND(CONVERT(money, Quantity * (1 - Discount) * Od.UnitPrice), 2)
    FROM Products P, [Order Details] Od
    WHERE Od.ProductID = P.ProductID and Od.OrderID = @OrderID

Developers who are experienced with the MongoDB aggregation framework would know that the equivalent method to join data from two collections is to use the $lookup stage. However, when migrating a relational database to MongoDB, it often makes sense to consolidate data from multiple tables into a single collection. In this example, we are doing exactly that, by combining data from the Orders, Order Details, and Products tables into a single orders collection. This means that, when considering the changes to the schema, we do not actually need a $lookup stage at all, as the data from each of the required tables has already been merged into a single collection. Relational Migrator’s Query Converter works alongside the schema mapping functionality and automatically adjusts the generated query to work against your chosen schema.

With JavaScript chosen as our target language, the converted query avoids the need for a costly join and includes MongoDB equivalents of our original SQL arithmetic functions. The query is now ready to test and include in our modernized app.

    const CustOrdersDetail = async (db, OrderID) => {
      return await db.collection('orders').aggregate([
        { $match: { orderId: OrderID } },
        { $unwind: '$lineItems' },
        {
          $project: {
            ProductName: '$product.productName',
            UnitPrice: { $round: ['$lineItems.unitPrice', 2] },
            Quantity: '$lineItems.quantity',
            Discount: { $multiply: ['$lineItems.discount', 100] },
            ExtendedPrice: {
              $round: [
                {
                  $multiply: [
                    '$lineItems.quantity',
                    { $subtract: [1, '$lineItems.discount'] },
                    '$lineItems.unitPrice'
                  ]
                },
                2
              ]
            }
          }
        }
      ]).toArray();
    };

Relational Migrator does more than query conversion; it also assists with app code generation, data modeling, and data migration, which drastically cuts down on the time and effort required to modernize your team's applications. Just like a language translation app while traveling abroad, it can drastically improve your experience converting and understanding a new language or technology.

The new Query Converter tool is now available for anyone to try for free as part of a public preview in the Relational Migrator tool. Download Relational Migrator and try converting your SQL queries and stored procedures today.
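As a quick usage sketch, the generated function above can then be called like any other async helper in your application code; the database name and order number here are hypothetical:

    import { MongoClient } from "mongodb";
    // CustOrdersDetail is the generated function shown above.

    const client = new MongoClient(process.env.MONGODB_URI);
    // Hypothetical database name and order number, purely to illustrate the call shape.
    const rows = await CustOrdersDetail(client.db("northwind"), 10248);
    console.log(rows);
    await client.close();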

March 25, 2024
Updates

Introducing Semantic Caching and a Dedicated MongoDB LangChain Package for Gen AI Apps

We are in an unprecedented time in history where developers can build transformative AI applications quickly, without being AI experts themselves. This ability is enabling new classes of applications that better serve customers with conversational AI for assistance and automation, advanced reasoning and analysis using AI-powered retrieval, and recommendation systems.

Behind this revolution are large language models (LLMs) that can be prompted to solve a wide range of use cases. However, LLMs have various limitations, like knowledge cutoffs and a tendency to hallucinate. To overcome these limitations, they must be integrated with proprietary enterprise data sources to build reliable, relevant, and high-quality generative AI applications. That’s where MongoDB plays a critical role in the modern generative AI stack. Developers use MongoDB Atlas Vector Search as a vital part of the generative AI technique known as retrieval-augmented generation (RAG). RAG is the process of feeding LLMs the supplementary data necessary to ground their responses, ensuring they're dependable and precise.

LangChain has been a critical part of this journey since the public launch of Atlas Vector Search, enabling developers to build better retriever systems powered by vector search and store conversation history in the operational database. Today, we are excited to announce support for two enhancements:

Semantic cache powered by Atlas Vector Search, which improves the performance of your apps
A dedicated LangChain-MongoDB package for Python and JS/TS developers, enabling them to build advanced applications even more efficiently

The MongoDB Atlas integration with LangChain can now power all the database requirements for building modern generative AI applications: vector search, semantic caching (currently only available in Python), and conversation history. Earlier, we announced the launch of MongoDB LangChain Templates, which enable developers to quickly deploy RAG applications, and provided a reference implementation of a basic RAG template using MongoDB Atlas Vector Search and OpenAI, as well as a more advanced parent-document retrieval RAG template using MongoDB Atlas Vector Search. We are excited about our partnership with LangChain and will continue innovating.

Improve LLM application performance with semantic cache

Semantic cache improves the performance of LLM applications by caching responses based on the semantic meaning or context within the queries themselves. This is different from a traditional cache, which works on exact keyword matching. In the era of LLMs, the value of semantic caching is increasing tremendously, enabling sophisticated user experiences that closely mimic human interactions. For example, if two different users enter the prompts “give me suggestions for a comedy movie” and “recommend a comedy movie,” the semantic cache can understand that the intent behind the queries is the same and return a similar response, even though different keywords are used, whereas a traditional cache would fail.

Figure 1: Semantic cache using MongoDB Atlas Vector Search

Check out our video walkthrough for the semantic cache.

Accelerate development with a dedicated package

With a dedicated LangChain-MongoDB package, MongoDB is even more deeply integrated with LangChain. The Python and JavaScript packages contain the following LangChain integrations: MongoDBAtlasVectorSearch (vector stores) and MongoDBChatMessageHistory (chat message memory). In addition, the Python package includes the MongoDBAtlasSemanticCache (LLM caching). The new package langchain-mongodb contains all the MongoDB-specific implementations and needs to be installed separately from langchain, which includes all the core abstractions. Previously, everything was in the same package, making it challenging to version correctly and to communicate which version should be used and whether any breaking changes were made.

Find out more about the langchain-mongodb package:

Python: Source code, LangChain docs, MongoDB docs
Javascript: Source code, LangChain.js docs, MongoDB docs

Get started today

Check out this accompanying tutorial and notebook on building advanced RAG with MongoDB and LangChain, which contains a walkthrough and use cases for using semantic cache, vector search, and chat message history.

Check out the “PDFtoChat” app to see langchain-mongodb JS in action. It allows you to have a conversation with your proprietary PDFs using AI and is built with MongoDB Atlas, LangChain.js, and TogetherAI. It’s an end-to-end SaaS-in-a-box app and includes user authentication, saving PDFs, and saving chats per PDF.

Read the excellent overview of semantic caching using LangChain and MongoDB.
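To give a flavor of the JavaScript side of the dedicated package, here is a minimal sketch of wiring up the vector store for retrieval; the namespace, index name, and choice of embedding provider are assumptions for illustration, so check the linked docs for current API details:

    import { MongoClient } from "mongodb";
    import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
    import { OpenAIEmbeddings } from "@langchain/openai";

    const client = new MongoClient(process.env.MONGODB_ATLAS_URI);
    const collection = client.db("docs").collection("embeddings");

    // Vector store backed by an Atlas Vector Search index; the index name,
    // namespace, and embedding provider here are assumptions for illustration.
    const vectorStore = new MongoDBAtlasVectorSearch(new OpenAIEmbeddings(), {
      collection,
      indexName: "vector_index",
    });

    // Semantic retrieval for a RAG chain: returns the k most similar documents.
    const results = await vectorStore.similaritySearch("How do I build a retriever?", 3);
    console.log(results.map((doc) => doc.pageContent));

    await client.close();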

March 20, 2024
Updates

Announcing Search Index Management in MongoDB Compass

You can now create and manage Atlas Search and Atlas Vector Search indexes in the interface many of you know and love: MongoDB Compass. Seamlessly build full-text and semantic search applications on top of your Atlas database, delivering swift and relevant results for a range of use cases including e-commerce sites, customer support chatbots, recommendation systems, and more. Gone are the days of juggling multiple tools to bring your search queries to fruition. And, with a variety of templates to choose from, Compass simplifies learning search index syntax so you can focus on what’s most important to you: building exceptional end-user experiences on top of your search queries.

Try it out

To get started, connect to an Atlas cluster from Compass. If you don’t have one, sign up. From there, simply navigate to Compass’ Indexes tab and select Create Search Index. It’s easy to build your first search index using one of our templates. Select either Search or Vector Search, and use the appropriate template. In this example, we’re going to create a Vector Search index. Once you're satisfied with your index definition, click Aggregate to start testing out your pipeline in Compass. Compass’ new search index experience leads you to results in just three guided steps, all without leaving the comfort of Compass.

To learn more about search indexing in Compass, visit our documentation. If you have feedback about Compass’ search index experience, let us know on our feedback forum. Happy indexing!
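For reference, the JSON definition behind a basic Vector Search index (the kind the Vector Search template scaffolds) has roughly the following shape. The field path and dimension count are assumptions; they must match the field that stores your embeddings and your embedding model's output size:

    {
      "fields": [
        {
          "type": "vector",
          "path": "plot_embedding",
          "numDimensions": 1536,
          "similarity": "cosine"
        }
      ]
    }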

March 18, 2024
Updates
