The new end-to-end AI monitoring solution is set to help engineers build and run applications using large language models (LLMs), using in-depth insights to optimise observability processes.
Users will be able to keep tabs on AI model performance, quality, cost, and compliance, allowing for more effective finance and risk management.
Over 50 integrations across the AI data stack will be made available to New Relic customers, allowing for more room for service customisation. These include:
- Azure, AWS, GCP and Kubernetes infrastructure;
- LangChain orchestration framework;
- PyTorch, Keras and TensorFlow machine learning libraries;
- Amazon SageMaker and AzureML machine learning platforms.
Software engineers managing an array of applications need to track and analyse large volumes of telemetry data. This calls for single-view access for troubleshooting, comparing and optimising LLM prompts and responses across performance, cost, security, and quality.
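To illustrate the kind of per-call telemetry such a single view depends on, here is a minimal, hypothetical sketch of capturing an LLM call's latency, prompt, response, and rough token counts in one record. The record structure, helper names, and the word-count token proxy are illustrative assumptions, not New Relic's actual instrumentation.

```python
import time
from dataclasses import dataclass

@dataclass
class LLMCallRecord:
    """One telemetry record per LLM call (hypothetical schema)."""
    model: str
    prompt: str
    response: str
    latency_ms: float
    prompt_tokens: int
    completion_tokens: int

def record_llm_call(model, prompt, call_fn):
    """Time an LLM call and bundle its telemetry into a single record."""
    start = time.perf_counter()
    response = call_fn(prompt)
    latency_ms = (time.perf_counter() - start) * 1000
    return LLMCallRecord(
        model=model,
        prompt=prompt,
        response=response,
        latency_ms=latency_ms,
        # Word counts stand in for a real tokeniser here
        prompt_tokens=len(prompt.split()),
        completion_tokens=len(response.split()),
    )

# Stub lambda standing in for a real LLM client call
record = record_llm_call(
    "example-model",
    "Summarise the incident report.",
    lambda p: "The incident was resolved in 12 minutes.",
)
```

Records like this, emitted from every application, are what make cross-service comparison of prompt cost and quality possible.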
“Almost every company is deciding how they are going to integrate AI into their operations and product offerings,” said Manav Khurana, chief product officer at New Relic.
“Observability is fundamental to the function and growth of AI. With AIM, we are giving engineers the necessary visibility and control needed to navigate the complexities of AI and build applications in a safe and cost-effective manner.”
Edo Liberty, founder and CEO of Pinecone, an early trialist of the product, commented: “By integrating the Pinecone vector database with New Relic AI Monitoring, we are helping organisations embrace next-generation search capabilities with generative AI.
“We have seen incredible demand for vector databases as companies build and deploy AI applications, since they are an essential part of the AI stack.
“Now, customers can build better search and generative AI solutions by ensuring relevant, accurate, and fast responses alongside their AI observability practice.”
LLM challenges to address
LLMs remain susceptible to output risks including bias and misinformation, the latter of which can be caused by hallucinations as well as insufficient information assets.
Meanwhile, operational costs for maintaining AI models such as GPT-4 and PaLM 2 can become unmanageable if data and finance teams aren't aligned on how usage and pricing may change.
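Keeping those costs visible usually comes down to simple token arithmetic. The sketch below shows one way to estimate per-call spend from token counts; the per-1,000-token rates are placeholder values for illustration, not real vendor pricing.

```python
def estimate_llm_cost(prompt_tokens, completion_tokens,
                      input_rate_per_1k, output_rate_per_1k):
    """Estimate the cost of one LLM call from its token counts.

    Rates are illustrative placeholders, not actual vendor pricing.
    """
    input_cost = (prompt_tokens / 1000) * input_rate_per_1k
    output_cost = (completion_tokens / 1000) * output_rate_per_1k
    return input_cost + output_cost

# A call with 1,500 prompt tokens and 500 completion tokens
# at placeholder rates of $0.01 and $0.03 per 1k tokens
cost = estimate_llm_cost(1500, 500,
                         input_rate_per_1k=0.01,
                         output_rate_per_1k=0.03)
```

Aggregating this figure per team or per application is what lets finance and engineering spot runaway spend before it becomes unmanageable.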
As well as helping to overcome these common challenges, New Relic customers can use insights into how their AI systems are operating to ensure AI application compliance across the stack.
AIM is now available in early access to New Relic users worldwide.