Pinned post

As inference splits into prefill and decode, Nvidia's Groq deal could enable a "Rubin SRAM" variant optimized for ultra-low latency agentic reasoning workloads (Gavin Baker/@gavinsbaker)

8 July 2024

Hebbia nets $130M to build the go-to AI platform for knowledge retrieval - 2024-07-08 20:57:02Z

Title: Hebbia nets $130M to build the go-to AI platform for knowledge retrieval
Summary: Hebbia has 1,000+ use cases in production with organizations like Charlesbank, Centerview Partners, and the U.S. Air Force.
Link: Hebbia nets $130M to build the go-to AI platform for knowledge retrieval
