OpenAI is obviously hot right now. New Relic not only provides the only observability plugin for your organization's internal use of ChatGPT-type functions, but has also just integrated OpenAI's LLMs into our own platform to bring immediate answers to complex questions about the performance of your mission-critical applications.
🔥 Meet New Relic Grok, the world’s first GenAI assistant for observability!
It leverages OpenAI’s large language models (LLMs) so that any engineer can use everyday language and a familiar interface to ask questions and get insights, without prior observability experience.
Learn more here: https://lnkd.in/g4ajcTkT
Request early access:
We’ve just updated our Page with a more detailed description of our services and proven process!
Hoping to connect with other AI enthusiasts who see the immeasurable value this tech is delivering!
This is so cool, y'all! I use generative AI every single day, and now #NewRelic users can use everyday language to ask New Relic Grok to help them with all their observability and data needs.
This is going to be a game-changer. I'm so excited for it.
#ai #data #newrelic #observability #newrelicgrok
A video explaining New Relic Grok, the world’s first generative AI assistant for observability!
And now, observability is as simple as asking New Relic Grok, “What’s wrong with my cart service?” or “Generate my server health report” in plain language via a familiar in-product chat interface.
Request early access here: https://lnkd.in/gESCzNCM
NVIDIA Senior Research Manager & Lead of Embodied AI (GEAR Group). Stanford Ph.D. Building Humanoid robot and gaming foundation models. OpenAI's first intern. Sharing insights on the bleeding edge of AI.
This is an artist's visualization of a Transformer's computation. It's hilariously confusing, but eerily satisfying to watch! Happy Veterans Day weekend!
From Google DeepMind. Source: https://lnkd.in/gg-hDdWA
Thanks to Anton Troynikov for sharing!
This is exactly the problem with the various blogs, tutorials, animated cartoons, and self-styled LLM universities (totaling at least hundreds of millions) spread all over Medium posts, LinkedIn, and YouTube videos, peddling the view that Transformers are some emergent AI machines 😊
Many people have made cartoons illustrating how Transformers work without showing the basic math equations they use, or a simplified code structure to illustrate them.
In fact, it uses:
1. Simple math: multiple dot products across different words'/tokens' embeddings, followed by a weighted sum of them to create hidden representations for those words/tokens (which can be seen as highly discriminative features, in the sense that they are semantically rich). See the code sketch after this list.
2. The next level of the Transformer's power comes from decoder-encoder cross-attention, which creates an excellent semantic memory of the words output so far and/or the input text prompt, and then autoregressively outputs the "correct" tokens/words that make the "most sense" as a continuation of what has been seen/output so far. Of course, it's autograd that slowly makes these components learn from the training data as training progresses.
3. The "most sense" part comes from training this Transformer on roughly 10-1000 billion tokens collected from internet text articles or GitHub repos. In the former case, it can output a text answer/completion for a text prompt; in the latter, it becomes a code assistant capable of writing 30-100% legible code, depending on the problem statement and its complexity.
That's it! It's a great AI model, but it's not an emergent AI 😊
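For anyone who wants point 1 in code rather than cartoons, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. The toy shapes, random weights, and function names are my own assumptions for illustration, not minGPT's actual code:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings (hypothetical toy inputs)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project the embeddings
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # dot products between tokens
    weights = softmax(scores, axis=-1)        # normalized attention weights
    return weights @ V                        # weighted sum -> hidden representations

rng = np.random.default_rng(0)
d = 8                                         # toy embedding size
X = rng.normal(size=(5, d))                   # 5 toy "tokens"
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)

In a real model, those weight matrices are exactly the components that autograd slowly tunes on the training data, as described in point 2.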
Andrej Karpathy's minGPT and https://lnkd.in/ezPMFa-G are great code-level illustrations of the Transformer architecture behind ChatGPT!
And of course there is the Transformers library from HF, but it's too dense, with a complex code structure (which is totally unnecessary) for a single person to decipher alone. Andrej's minGPT repo is better-written code for building one's understanding, without the unnecessary complexity of HF Transformers.
Discover the unsung heroes of RAG: the retrievers. From BM25's keyword prowess to VectorStore's semantic insights, learn how these key components are shaping the future of intelligent search. Read on and join the conversation! #generativeai #rag #informationretrieval
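For the curious, here is a rough sketch of the difference between the two retriever styles (all data and names below are made-up stand-ins; the keyword score is a crude simplification of BM25 without its TF-IDF weighting and length normalization, and the random vectors stand in for a real embedding model):

import numpy as np

docs = ["the cart service is slow", "vector search finds similar meanings"]
query = "slow cart"

# Keyword-style retrieval: count query terms present in each document
# (a gross simplification of BM25's weighted term matching).
kw_scores = [sum(term in d.split() for term in query.split()) for d in docs]

# Semantic-style retrieval: cosine similarity between embeddings.
# Random vectors here for illustration; a real system would use an embedding model.
rng = np.random.default_rng(1)
emb = {text: rng.normal(size=16) for text in docs + [query]}
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
sem_scores = [cos(emb[query], emb[d]) for d in docs]

print(kw_scores)   # keyword overlap favors the first document
print(sem_scores)  # with real embeddings, this would rank documents by meaning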
Ever wondered when to use Super Synapse over Professor Synapse? 🤔 Let's break it down together! Discover the unique feature that sets Super Synapse apart and how it could transform your AI interactions in the future. 🚀💭
👀 Watch now to stay ahead in the AI game and share your thoughts below! ➡️
https://hubs.li/Q02h7Ypr0