Sagar Jauhari
United States
1K followers 500+ connections
Activity
- Sagar Jauhari reposted this: Best product. Best team. Best rates. We showed up to GDC Festival of Gaming full force with one mission: make sure every studio has what they need to win at DTC. Stash x GDC 2026. The bar has been raised. 🕹️
- Sagar Jauhari posted this: This week we announced a major milestone for Stash: we’ve acquired Galleon, the Tel Aviv team that built the native D2C infrastructure behind SuperPlay’s Dice Dreams, Domino Dreams, and Disney Solitaire. Their technology has already proven itself at scale—powering millions of daily players and driving hundreds of millions in annual D2C revenue through fully native, in-game flows that convert higher, preserve immersion, and materially increase wallet share. This is the kind of DNA you want to add!
Galleon (acquired by Stash) built and iterated their platform inside live games, not as an external checkout bolted on later. Their approach to context-preserving commerce, deep integration with live-ops cadence, and rigor around latency, UX polish, and fault tolerance aligns directly with how we’ve been scaling Stash Pay. Combined with our global Merchant of Record infrastructure (supporting 100+ payment methods and 150+ currencies), loyalty systems, and analytics, we can now offer studios a technically unified path to achieving 40%+ D2C wallet share.
Super excited to work closely with Or Briga, Shai Arnon, and the entire Galleon team as we expand our engineering footprint. The future of game monetization is native, frictionless, and player-first. With this acquisition, we’re accelerating that future and raising the bar for what D2C performance should look like for top global studios.
Read more on our blog https://lnkd.in/gsDhmSbV
- Sagar Jauhari reposted this: Get ready to unlock more D2C revenue 🚀 Stash and Adyen are teaming up to provide enterprise-level payments to top game studios. Since rulings like Apple v Epic and soon the DMA, there’s no better time than now to scale D2C game revenue. But when high processing fees and checkout friction chip away at margins, even the best payments strategy falls short. That’s why we’ve partnered with Adyen. Now, your game can count on:
💰 Lower costs: No middlemen, only the lowest processing costs possible
💰 More revenue: Higher approval rates mean more revenue from players
💰 Premium player experience: Localized, flexible payments that feel effortless to players
💰 Global scalability: Process payments from around the world with zero downtime
💰 Enterprise-grade reliability: Power your D2C growth with Adyen, the payments infrastructure behind the biggest technology companies in the world
The news is already spreading 📣 Thanks Dean Takahashi and GamesBeat for the great interview with Justin Kan, Archie Stonehill, and Trevor Nies! Read the coverage here: https://lnkd.in/eAB9U9Jg Or check out the Stash blog: https://lnkd.in/eqF9NXkN
- Sagar Jauhari shared this, commenting: “I’m thrilled to join the amazing team at Stash!! So excited to listen, learn, and help shape what the future of D2C experiences can look like — for players, creators, and beyond. Let's build! 🚀” The original post: “Super excited to welcome Sagar Jauhari and Valerie Lo to the Stash team 👏 Big things ahead 👀 Want to join us, too? 👇 https://lnkd.in/dHE-K4nH”
- Sagar Jauhari reposted this: I'm #hiring a Senior Product Security Engineer to join a highly impactful team that builds usable #security solutions at scale. This position can be #remote or in-person based out of #Canada. If I know you and you are interested, reach out! Otherwise, follow the link below to apply. https://lnkd.in/eDqnWBQf (Senior Product Security Engineer at Instacart - come join our table)
- Sagar Jauhari reposted this: Over the past few months, we’ve been building Ava, an internal AI assistant powered by OpenAI models. What originally started as a Hackathon project is now leveraged by over half our peers at Instacart, every month. Proud of how quickly the team has moved on this! When it comes to realizing the true power of Generative AI, however, we are only just scratching the surface. Lots more in the works and we are super excited about the future! (Scaling Productivity with Ava — Instacart’s Internal AI Assistant)
- Sagar Jauhari shared this: The future of grocery is both online and in store, and AI has an important role to play in powering the experience — no matter how you choose to shop. I’m excited to share more on how AI can enhance the consumer shopping experience online and in store, bridge the gap between those shopping experiences, and help retailers optimize their in-store operations in today’s newsletter: AI Online and In-Store: Powering the Omnichannel Experience (by Fidji Simo)
- Sagar Jauhari shared this: Today Instacart is officially “open for business” with the launch of Instacart Business, a new offering that brings the best of our service to business owners. I’ve spent my career building my own businesses and helping others build theirs, and one of my favorite parts of this job is supporting the small businesses that use Instacart not only to grow, but to thrive. I’m so proud of the progress we’ve made in helping local and independent grocers and emerging brands find their audiences on Instacart, and Instacart Business marks our first steps toward helping everyday business owners reach their business goals, too.
We already fulfill millions of business orders every quarter, and we want to reach other business owners who may not already use us – many of whom likely deal with multiple vendors, lengthy onboarding processes, untenable order minimums, prohibitive shipping costs, and other challenges just to get what they need for their day-to-day operations. Inventory, for example, can often eat up as much as 25% of a business’s budget and drive disproportionate upfront start-up costs. And, with labor representing as much as 70% of a small business’ spending, businesses suffer when valuable employee time is spent making runs to the store for last-minute supplies.
We’re making procurement simpler by rolling out business onboarding today at instacart.com/business, which is an easy way to allow businesses to start identifying themselves as a business and get shopping on Instacart in just a few clicks. Instacart Business allows purchasers easy access to an affordable selection from retailers they already trust – with no monthly minimums, no additional contracts to shop, and same-day delivery that allows businesses to “skip the ship” on prohibitive delivery fees, bypass costly employee runs to the store, and stock only the inventory they need at a given time. We’re also building up our product experiences that will help businesses save time, costs, and valuable employee resources that would otherwise be spent running to the store for supplies.
Beyond our product commitments, we’ll also be partnering with the U.S. Black Chambers, U.S. Hispanic Chamber of Commerce, U.S. Pan Asian American Chambers of Commerce EF, Women’s Business Enterprise National Council, and Black Enterprise to demonstrate how Instacart Business can help small businesses save time and money, spotlight diverse SMBs to help them attract new customers and inspire other small businesses, sponsor events to help these businesses expand their network, and provide access to training and resources to help these businesses compete. Combined, these organizations represent hundreds of thousands of diverse businesses that make a difference in their communities, and this kind of support is a critical part of our approach to supporting local businesses.
Now - let’s get down to #business! https://lnkd.in/gkpb6qFn
- Sagar Jauhari reacted to this: Best product. Best team. Best rates. We showed up to GDC Festival of Gaming full force with one mission: make sure every studio has what they need to win at DTC. Stash x GDC 2026. The bar has been raised. 🕹️
- Sagar Jauhari reacted to this: 🛒 We analyzed 6 months of webshop checkout data. 🛒 One insight changed how we talk to studios about monetization. It's in the slides. Swipe through ↑ In our next dive, we're exploring the impact on conversion. What do you think — is conversion negatively impacted? Or will we see a neutral to positive effect? Comment below and stay tuned for deeper insights. 🎮 We'll be at GDC next week hosting The Vault Lounge: March 9–13, Montgomery 44, San Francisco. Come talk monetization with us in person: Spencer T. Henry Lowenfels Valerie A. Archie Stonehill Toby H. #GDC2026 #gaming #monetization #webshop #DTC #TheVault
- Sagar Jauhari reacted to this: DTC is table stakes. Monetization intelligence isn't. Every vendor can process your payments. Not every vendor can tell you why players convert, where they drop, and how to compound every decision in the funnel. Casual to midcore — we're seeing strong DTC share across every genre. Because we don't do one-size-fits-all. We build for how YOUR players actually play and buy. We'll be at GDC next week hosting The Vault Lounge: March 9–13, Montgomery 44, San Francisco. Spencer T. Henry Lowenfels Archie Stonehill Toby H. Valerie A. 🗓️ See you in SF 🎮 #GDC2026 #TheVault
- Sagar Jauhari reacted to this: P&L control got studios to go DTC. It won't be what keeps them winning there. Spencer T. built DTC monetization at Scopely and Glu (acquired by Electronic Arts (EA)) before the playbook existed. He knows the difference between escaping platform fees and actually compounding LTV off-platform. 🎥 👇 🔐 Meet Spencer and the Stash team at GDC: Montgomery 44, Mar 9–13. #TheVault #DTC #GDC2026
- Sagar Jauhari reacted to this: I'm extremely excited to share that I'll be joining Cursor as an AI deployment manager! 🚀 I've been a power user of Cursor for over a year and it's easily one of the best products I've ever used! (Check out https://devlog.tv or https://techsidedoor.com, which I built with the help of Cursor.) This moment in AI truly feels like the next technological inflection point and I'm extremely grateful to have the opportunity to contribute! 🤖
- Sagar Jauhari reacted to this: “We’re excited to welcome Jason Larsen as our Chief Technology Officer! Jason brings deep experience building platforms that help schools turn fragmented data into clear, actionable insight. He was previously CTO at Panorama Education and Everyday Speech, leading data, engineering, and product teams to develop tools used by millions of students and educators. At Simplicity, he’ll lead the evolution of our product as we help schools unify their data and drive impact.” With the comment: “I am thrilled to be joining the Simplicity team! Excited to help schools and educators drive impact through data!”
Experience & Education
-
Stash
**** ** ***********
-
*********
****** *********** *******
-
*****
***** ******** ******** * **** ****
-
***** ******** ***** **********
****** ** ******* **** ******** *******
-
**** ******
** ******** ******** *******
Explore more posts
-
Ilyes.T. M.
GM CAPITAL HOLDING • 10K followers
Google Antigravity exposes the critical flaw in autonomous AI agent architectures: trust based governance at scale. Antigravity represents a major shift in how developers work. Instead of writing code line by line, you delegate entire tasks to autonomous agents that can modify files, run tests, browse the web, and execute changes across your codebase in parallel. The problem? These agents operate on trust, not proof. One developer reported: On Day 3, an agent confidently refactored a utility function and silently deleted a critical edge case check. This is not a bug. This is the inevitable result of autonomous agents operating without cryptographic authority validation. When you have three agents working asynchronously across different files, two critical questions emerge: how do you enforce what each agent is authorized to do, and when multiple agents coordinate on shared resources, how do you maintain isolation between their operations? Policy based guardrails do not work at this scale. I have solved both problems through complementary cryptographic architectures. For individual agent authorization, my 13 layer cryptographic governance system validates AI agent authority mathematically before execution. For multi agent coordination, my YIN COLLAB architecture implements Agent Specific Compliance Tokens with per agent privacy isolation maintaining cryptographic boundaries preventing cross contamination. Every action carries immutable proof of authorization. Every decision boundary is pre validated cryptographically. 26 USPTO patents. 2,330 claims. Validated with 640x timing resistance and 500 plus concurrent agent support with sub 15ms latency. Mathematical proof, not policy promises. For developers using Antigravity, Cursor, or any autonomous agent platform: the question is whether you can prove mathematically that unauthorized operations are impossible and that multi agent coordination maintains isolation. Because as agent orchestration becomes the dominant development paradigm, the liability surface expands exponentially. One misconfigured agent with access to production systems is an organizational failure. One agent leaking sensitive data to another agent through shared context is an architectural vulnerability. Making it mathematically impossible for agents to violate boundaries is the only governance model that scales. Autonomous agents are the future of software development. Cryptographic governance is the only way to make that future safe. https://lnkd.in/ejPREk9D #GoogleAntigravity #AIGovernance #AutonomousAgents #Cybersecurity #AIAgents #DeveloperTools #CryptographicSecurity #ZeroTrust #AICompliance #SoftwareDevelopment #TechInnovation #AIEthics
2
-
Artem Mirzabekian
Sovcombank • 7K followers
When math draws a hard line for large language models
A recent research paper by Vishal Sikka and his son Varin Sikka takes a very different approach to evaluating large language models. Instead of benchmarks and demos, it uses mathematics.
The authors show that LLMs operate within a fixed computational limit. Once a task requires more computation than the model can perform during inference, two things happen: the model cannot reliably solve the task, and it cannot verify correctness either. Incorrect output becomes unavoidable. This applies directly to many “agentic AI” scenarios - long planning chains, multi-step decision making, global optimization, and autonomous workflows.
The paper does not argue that LLMs are useless. Quite the opposite. Within their computational domain, they are incredibly powerful tools. What it shows is that some classes of problems sit permanently beyond what transformer-based models can handle, no matter how much data or training we add.
AI can become a phenomenal accelerator - for code, analysis, automation, and knowledge work. At the same time, treating it as a universal reasoning engine or a fully autonomous problem solver creates real risk. The research is a good reminder that understanding the limits of a tool is just as important as admiring its strengths. You can read about it in more detail here: https://lnkd.in/dqQBQZxd
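One back-of-the-envelope way to see the fixed-budget point (my own sketch, not the paper's formalism): a transformer of fixed depth, width, and context window does a bounded amount of work per generated token, so the total compute available in one inference pass is fixed in advance.
```latex
% Sketch, not the paper's formalism: per-token compute of a transformer
% with L layers, hidden width d, and context length n is roughly
\[
  C_{\text{token}} = O\!\bigl(L\,(n\,d + d^{2})\bigr),
\]
% so generating T tokens spends at most
\[
  C_{\text{total}} = O\!\bigl(T \cdot L\,(n\,d + d^{2})\bigr).
\]
% Any task whose required computation grows past this budget cannot be
% reliably solved, or verified, within a single inference pass.
```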
12
-
Parth Upadhya
FlyByHire.ai • 1K followers
In my last blog, I covered Distributed Inference Serving for LLMs. This time, I’ve written about training, breaking down DDP vs FSDP and what to use when for large-scale deep learning. Link: https://lnkd.in/gcgtSZEk #AI #DeepLearning #DistributedTraining #DDP #FSDP #PyTorch #MLTraining #Scalability #MLOps #GPUComputing #AIInfrastructure
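Not from the linked post, but a minimal sketch of the two PyTorch wrappers it compares, assuming a process group already initialized (e.g. via torchrun): DDP keeps a full model replica per GPU and all-reduces gradients, while FSDP shards parameters, gradients, and optimizer state across ranks.
```python
# Minimal sketch (not from the linked post) of the two training wrappers.
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def wrap(model: nn.Module, strategy: str) -> nn.Module:
    # Assumes torch.distributed is already initialized (e.g. via torchrun).
    assert dist.is_initialized()
    if strategy == "ddp":
        return DDP(model.cuda())   # full replica per rank; gradient all-reduce
    if strategy == "fsdp":
        return FSDP(model.cuda())  # params/grads/optimizer state sharded
    raise ValueError(strategy)
```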
10
1 Comment
-
Samkit Kothari
THG • 2K followers
Is the "Search & Browse" era of E-commerce officially evolving into something entirely new? 🛒🤖 I just finished watching an incredible deep dive with Manav Garg (Together Fund) on Think School. As a developer in the e-commerce space, one takeaway really stood out: the transition to Agentic Commerce. For years, the industry has been obsessed with perfecting filters and recommendation engines. But the next big leap isn't a better UI, it's the "headless" interaction between AI agents. (Timestamp: 18:05)
The Shift: From UI to Infrastructure
In the near future, customers may not spend 20 minutes scrolling. Instead, personal AI agents will negotiate with brand agents to find the best deal based on a user's specific goals and budget. (Timestamp: 15:08)
What this means for the future of our craft:
Logistics is the new Front-end: If the "Browse" layer becomes automated, the real competitive edge moves to the physical layer, i.e. supply chain, speed, and fulfillment infrastructure. This is where vertically integrated platforms really shine. (Timestamp: 21:31)
The Rise of "Orchestration": With tools like Cursor and AI-assisted coding, we're moving away from boilerplate and toward building complex "Orchestration Layers" that allow these agents to talk to our platforms seamlessly. (Timestamp: 08:30)
Efficiency as a Feature: As we integrate LLMs, the focus moves to making them profitable through smarter token usage and caching, turning "hype" into sustainable margins. (Timestamp: 36:52)
We often focus on the 95% of time users spend browsing. (Timestamp: 17:52) But as that time shrinks, the platforms that win will be the ones built for agents, not just humans. Are we ready for Agentic Commerce, or are we still building for a world that's quickly moving past the search bar?
Video Source: https://lnkd.in/ghfmbqw3 #Ecommerce #SoftwareDevelopment #AI #AgenticCommerce #ThinkSchool #Innovation
22
-
Bryan Fordham
Self Financial, Inc. • 503 followers
Kagi has a translator that will turn English into McKinsey-speak. Also Gen Z. The avian asset’s presence facilitated a positive shift in emotional sentiment, driven by its high-gravity aesthetic and disciplined posture. I initiated a stakeholder inquiry: "Despite your suboptimal grooming and lack of plumage, you demonstrate significant risk appetite. As a legacy entity from the nocturnal ecosystem, what is your core brand identity on the Plutonian shore?" The Raven responded with a definitive, non-negotiable strategic pivot: "Nevermore." https://lnkd.in/ezKWuRxQ
1
-
Jan Brunia
265 followers
Anthropic, the maker of Claude, a family of LLMs, published a very interesting paper on the inner workings of LLMs: Tracing the thoughts of a large language model https://lnkd.in/gJHGiu5s
What was so interesting to me, besides how LLMs come to an answer, is that the makers of LLMs were unaware of the inner workings and analyzed their own models to get an idea of how they actually come to an answer, which was an eye-opener for me. You make something that works, but why and how it works? 🤔 That had to be researched.
This paper sheds some light on how LLMs come to an answer, discussing:
- How is Claude multilingual?
- Does Claude plan its rhymes?
- Mental math: Claude wasn't designed as a calculator—it was trained on text, not equipped with mathematical algorithms. Yet somehow, it can add numbers correctly "in its head". How does a system trained to predict the next word in a sequence learn to calculate, say, 36+59, without writing out each step?
- Are Claude’s explanations always faithful?
A very interesting paper, worth a read to get your head around the inner workings of LLMs 😎. Think I learned something again today! 😎 🥳🙂
#Anthropic #LLM #Claude #LLMThoughts #LLMThinking #LLMInnerworkings #LLMMath
1
-
Sameer Bhardwaj
Layrs • 48K followers
You are in a system design interview at Meta for an IC4 role. The interviewer leans in and asks: "If a photo is already on my phone, how can WhatsApp still delete it when the sender taps Delete for Everyone?" Here is how you should break it down 👇
A lot of people think “Delete for Everyone” means WhatsApp somehow reaches into your gallery and erases a file from your device. That is not really what is happening. Btw, if you’re preparing for system design/coding interviews, check out my free mock interview tool on Layrs. You can use it for free here: https://lnkd.in/gpCn7t2T
The real answer depends on where the photo is stored, how the chat app references it, and what exactly the app is allowed to delete.
1. First clarify the product behavior
Before jumping into architecture, say this clearly in the interview: there are two different cases here.
a) The photo exists only inside the chat app’s managed storage
b) The photo has already been saved to the user’s gallery / camera roll / filesystem
This distinction is everything. If the photo is only inside WhatsApp’s controlled storage, the app can remove the local file and remove the message reference from the chat UI. If the photo was exported or auto-saved to the gallery, WhatsApp usually cannot reliably delete that copy, because that file is now outside the app’s normal ownership boundary.
So “Delete for Everyone” is mostly about:
- deleting the message record
- deleting the media attachment reference
- deleting any app-managed cached copy
- syncing that deletion event to all participants
It is not magic remote file deletion across the whole phone.
2. High-level idea
The clean way to think about this is: a chat message is metadata plus optional media. For a photo message, the system usually has:
- message_id
- chat_id
- sender_id
- media_id
- timestamp
- status
- optional encryption metadata
And the actual photo itself may live in:
- encrypted object storage on the server for temporary delivery
- app sandbox / local database / media cache on device
- optionally the user’s gallery if exported
When the sender taps Delete for Everyone, the system does not chase every byte everywhere. It sends a deletion command tied to the message_id.
3. What happens when the photo is first sent
Here is the send flow:
1. User picks a photo
2. App encrypts media and prepares message metadata
3. App uploads encrypted media blob or sends it through the media pipeline
4. App sends message metadata referencing that media
5. Receiver downloads the encrypted media
6. Receiver stores it locally in app-managed storage
7. Chat UI shows the message by resolving message_id -> media_id -> local file
Read the rest of the post: https://lnkd.in/gpv-wGze
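A minimal sketch of the revoke flow described above (my own illustration; all names are hypothetical, not WhatsApp's actual schema): deletion is an event keyed by message_id, and each client removes only the copy it owns.
```python
# Hypothetical sketch, not WhatsApp's real schema: delete-for-everyone as a
# revoke event. The client drops the message record and its app-managed
# cached media, but cannot touch a copy already exported to the gallery.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Message:
    message_id: str
    chat_id: str
    sender_id: str
    media_path: Path | None  # app-managed cache, NOT the user's gallery

class ChatStore:
    def __init__(self) -> None:
        self.messages: dict[str, Message] = {}

    def apply_revoke(self, message_id: str) -> None:
        msg = self.messages.pop(message_id, None)  # drop the message record
        if msg and msg.media_path and msg.media_path.exists():
            msg.media_path.unlink()  # delete only the app-managed cached copy
        # A copy auto-saved to the camera roll is untouched: the app cannot
        # reliably delete files outside its ownership boundary.
```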
158
12 Comments
-
Debasish Ghosh
Conviva • 5K followers
An awesome interview, as part of the SIGPLAN blog, with Prof Ranjit Jhala, Professor at University of California, San Diego. The topics ranged from philosophical discussions, grad school experiences, and how he developed an interest in model checking and PL research, down to a cool demo of refinement types in Rust. Absolutely loved the demo and will surely be playing around with Flux. Some of my rough notes from the discussion:
- Model checking and Floyd-Hoare logic: FH logic is symbolic model checking, where you use assertions to represent sets of states and manipulate those sets symbolically (preconditions, postconditions, etc.)
- How lazy abstraction evolved from a talk that led to thoughts on how to reuse parts of a search space already explored in previous iterations. Frugality of abstractions - don’t do stuff unless you really need it. Did I hear “Laziness by default is not my cup of tea”?
- How liquid types evolved organically as a fusion of constraint solving (which type systems do), model checking, and lazy abstraction.
- Thoughts on type systems in Haskell and Rust - the Rust type system is extremely complex, yet the compiler produces very good error messages. And there is a compelling value proposition: you get the type checking, and since there is no GC, your code runs very fast.
- The cool demo of Flux, a language-integrated verifier for Rust that leverages liquid types, a form of dependent type system. It was also cool to see Copilot auto-completing Flux annotations ;-). Flux first runs the Rust type and borrow checker, then takes the type information and does liquid-types-style reasoning using SMT solvers. He also discussed the challenges of implementing refinement types in an imperative language, and Flux is the result.
- The philosophy of Verse as a unification of functional and logic programming. It has a very, very sophisticated type system - we expect to hear more in the near future.
- An interesting discussion on whether Copilot-like LLMs will stifle new programming language design and research (as we don’t have many examples in the wild to train on) ... and a lot more.
Well worth the one and a half hours. Link to the interview: https://lnkd.in/g-ygZxcw
10
-
Deepak Krishnamurthy
Lincoln Electric • 734 followers
Ragas serve as essential melodic frameworks in Carnatic music, a sub-genre of Indian classical music. For those trained in Carnatic music or well-acquainted with it, identifying a raga is straightforward. This made me curious if an ML model could be trained to do the same. In my latest article, I explore how a VGGish model, combined with a CNN classifier, can be utilized to identify ragas in music.
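As a rough illustration of the pipeline shape (my own sketch, not the article's code), assuming 128-dimensional VGGish frame embeddings have already been precomputed for each clip, a small 1-D CNN head can map a variable-length embedding sequence to raga logits:
```python
# Sketch only: classify precomputed VGGish embeddings (time, 128) into ragas.
import torch
import torch.nn as nn

class RagaClassifier(nn.Module):
    def __init__(self, num_ragas: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(128, 64, kernel_size=3, padding=1),  # convolve over time
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool variable-length clips to one step
            nn.Flatten(),
            nn.Linear(64, num_ragas),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 128) VGGish embeddings -> (batch, num_ragas) logits
        return self.net(x.transpose(1, 2))

logits = RagaClassifier(num_ragas=10)(torch.randn(4, 30, 128))
```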
14
-
Bhavna Negi
Mobile Premier League (MPL) • 5K followers
How I use multiple LLMs for interview prep — as a system, not a shortcut. As interviews become more senior and nuanced, relying on a single AI tool stopped making sense. Instead, I use different LLMs for what they do best—almost like a comparative matrix.
1. Gemini (Live + Deep Research): real-time mock interviews, STAR story stress tests, company and strategy decoding
2. NotebookLM: personal knowledge base—story banks, interview debriefs, pattern detection across feedback
3. ChatGPT: structuring responses, sharpening executive framing, testing logic and trade-offs
4. Claude: refining narrative clarity, tone, and leadership presence in written and spoken answers
Why this works: each model plays a distinct role—interviewer, editor, strategist, or mirror. Together, they surface blind spots, tighten signal, and turn preparation into a continuous improvement loop.
AI doesn’t replace preparation. It raises the floor and sharpens the ceiling—if used intentionally. For anyone preparing for senior TPM, Program Ops, or CoS-style roles, this multi-LLM approach has quietly worked in my favor. #lifeofTPM #AILearning #LLMs
5
-
Paramananda Ponnaiyan
Brivo • 767 followers
Reflecting on my recent experiments in reinforcement learning got me thinking deeply about how both traditional RL models and modern neural networks/attention networks fundamentally operate.
In reinforcement learning, the core interaction between an agent and its environment revolves around just three signals:
- The agent’s actions
- The current state of the environment
- The reward associated with each state
By using these signals, a learning agent figures out the optimal behavior needed to maximize its rewards. For instance, when I trained a model to play tic-tac-toe, the result was essentially a mapping of states to optimal actions. At first glance, the model seemed to exhibit a kind of “second order thinking”, but on closer examination, it was really just performing a sophisticated state-to-action lookup. There was no real “thinking”, just a pattern of responses encoded by experience.
I later studied Bellman’s Equation, the mathematical backbone of decision making in Markov Decision Processes. Through this, RL boils down to calculating the value associated with each state, allowing the agent to “look up” and select actions leading to states with the highest expected future reward. Again: state to action, via value lookup.
But what about neural networks and modern attention mechanisms? Don’t these architectures add something more? It can feel that way, especially with state-of-the-art models generating nuanced language and creative outputs. However, when you peel away the layers, neural networks perform a lossy compression of the immense state-action mapping. The action here is to choose the next word. The output is determined by the math, a complex “lookup” influenced by randomness or context injected as part of the state. That’s why models can return different, but semantically similar, answers to the same prompt. Still, it’s ultimately just a mapping.
This brings me to an intriguing thought: what, then, sets humans apart from these algorithms? For all their complexity, neural networks must always follow where the math leads them. They can’t “choose” a different path if it goes against the calculated optimum. Humans, on the other hand, can exercise choice, even acting against conditioning or immediate context. Of course, what we feel as free will is sometimes just deeply ingrained habits surfacing. In a way, our minds have been “trained” our entire lives. Overcoming these default pathways, really exercising freedom, takes conscious effort, which is at the heart of practices like yoga, meditation, and martial arts.
Ultimately, I’m fascinated by the similarities (and crucial differences) between artificial learners and the human mind. While machines look up what to do, we can (with practice) choose otherwise. True AGI, I feel, would need to be some other kind of technology than LLMs.
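For reference, the Bellman optimality equation behind the "value lookup" framing: a state's value is the best expected one-step reward plus the discounted value of the successor state, and the policy simply reads off the maximizing action.
```latex
\[
  V^{*}(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\,
             \bigl[ R(s, a, s') + \gamma\, V^{*}(s') \bigr]
\]
\[
  \pi^{*}(s) = \operatorname*{arg\,max}_{a} \sum_{s'} P(s' \mid s, a)\,
               \bigl[ R(s, a, s') + \gamma\, V^{*}(s') \bigr]
\]
```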
5
-
Antonio Mallia
Seltz • 4K followers
⚡ Exciting to see our Block-Max Pruning (BMP) technique in Infinity, an open-source AI-native database designed for LLM applications! In their latest VLDB paper, “Balancing the Blend: An Experimental Analysis of Trade-offs in Hybrid Search”, Hai Jin, Yingfeng Zhang, and co-authors present a rigorous evaluation of hybrid search architectures — combining full-text, sparse, dense, and tensor retrieval. To support efficient sparse vector search at scale, they’ve integrated BMP into Infinity’s SVS engine — a nice validation of our work on fast, top-k lexical retrieval. 🔗 BMP paper: https://lnkd.in/dsc33hGc 🔗 BMP code: https://lnkd.in/dxBxv225 🔗 Infinity: https://lnkd.in/ddRK5mbr 🔗 Hybrid Search paper: https://lnkd.in/dfBuDXmt Great to see ideas from traditional IR continuing to shape the next generation of retrieval infrastructure!
31
-
PRAKASH REVANNA
ace turtle • 496 followers
I got tired of manually comparing LLMs… so I published this: rankthellmmodel.info (Rank the LLM Model), which I built for my own use, based on my requirements.
While working with LLMs, I kept hitting the same problem:
• Checking multiple sites for benchmarks
• Looking up pricing separately
• Comparing everything manually
• Still guessing which model is actually “better”
It was slow and confusing.
📊 A simple leaderboard that ranks models based on:
✔ Quality ✔ Cost ✔ Speed
(Data sourced from artificialanalysis.ai; will keep adding more benchmarks)
⚙️ How it works:
Quality → GPQA, MMLU, coding scores
Cost → price per million tokens
Speed → tokens/sec, latency
Overall = 0.5 × Quality + 0.3 × Cost + 0.2 × Speed
👉 Good + Fast + Affordable
💡 What I realized:
• The smartest model is not always the most useful
• Faster models are often “good enough”
• Some top models are too expensive for production
🚧 Next:
• Custom weights
• Real API latency testing
• More benchmark data
Feel free to check it out and share your feedback
#AI #LLM #GenAI #MachineLearning #OpenAI #Gemini #Claude #Engineering
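The stated weighting is easy to sanity-check in a few lines (a sketch assuming each metric has already been normalized to [0, 1] with higher = better, so price and latency must be inverted before scoring):
```python
# Sketch of the post's stated 0.5/0.3/0.2 weighting; inputs are assumed to be
# pre-normalized to [0, 1] with higher = better.
def overall_score(quality: float, cost: float, speed: float) -> float:
    return 0.5 * quality + 0.3 * cost + 0.2 * speed

# e.g. a model with strong quality but middling price and speed:
print(overall_score(quality=0.9, cost=0.5, speed=0.6))  # -> 0.72
```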
54
1 Comment
-
Ian Kibandi
Dev Canary • 611 followers
Excited to share my latest blog post diving into cache eviction policies in storage engines. In this post, I explore various techniques like LRU, the Clock Algorithm, and dive deep into the innovative TinyLFU frequency-based policy. If you’re interested in how these policies impact performance and storage efficiency, definitely check it out and let me know your thoughts. Feel free to share it with your network. Read the full blog here: https://lnkd.in/defBuaeF
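For readers who want the baseline before the TinyLFU deep dive, here is a minimal LRU sketch (my illustration, not the blog's code): an OrderedDict keeps recency order, hits move a key to the back, and eviction pops the least-recently-used key from the front.
```python
# Minimal LRU cache sketch (not the blog's code).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str) -> bytes | None:
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key: str, value: bytes) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
```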
12
2 Comments
-
Yiming Chen
Meta • 1K followers
Thanks to this nice talk from Jitendra, I got a chance to re-exercise my control theory muscle (a little bit) after so many years. My key take-aways: - **From dynamics to world models**. The "world model" hyped in the AI community is essentially the "dynamic model" from the 1960s. Dynamic models (State Space Models, LQR, etc) are exactly the right ideas — we just have to generalize them from linear to nonlinear, which we now know how to do with multi-layer neural networks. - **System 1 + System 2**. For robotics it is very important to maintain both: (a) directly learned reactive policies (System 1) for situations where the robot has lots of experience from training data, and (b) a dynamics model and planning with it (System 2) for novel situations never seen during training. - **3D as the right latent space**. Prediction in pixel space is not a good idea (exactly as Yann has emphasized in JEPA). Jitendra's belief is that the right latent space is 3D — 3D hands, 3D objects, 3D human bodies — using explicit parametric models. This differs somewhat from Yann's preference for learned latent spaces via self-supervised methods. Personally I lean toward Jitendra's view here: explicitly decomposing the world into entities (objects, fields, etc.) should help both interpretability and compositional generalization.
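For reference, the textbook LQR setup that the "generalize from linear to nonlinear" point starts from (standard control theory, not from the talk itself): linear dynamics, quadratic cost, and a linear optimal feedback policy.
```latex
\[
  x_{t+1} = A x_t + B u_t, \qquad
  \min_{u_0,\dots,u_T} \; \sum_{t=0}^{T}
    \bigl( x_t^{\top} Q x_t + u_t^{\top} R u_t \bigr),
  \qquad u_t^{*} = -K_t x_t
\]
% A "world model" replaces the known linear map (A, B) with a learned
% nonlinear transition f_\theta(x_t, u_t); planning (System 2) then
% optimizes actions against that learned model.
```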
15
-
Vivek Nayyar
Qoala • 2K followers
Recently, I implemented Mixture of Experts (MoE) - the same concept powering models like Mixtral and DeepSeek from scratch in PyTorch. The idea is simple yet powerful: instead of activating the entire feed-forward network for every token, MoE splits it into multiple experts, and a router dynamically decides which ones should handle each input. This makes the model both efficient and scalable, as only a few experts are active per token. In this implementation, I’ve added detailed comments explaining every step: from how tokens are routed and dispatched to experts, to how outputs are aggregated back. If you’re curious about how MoE layers actually work under the hood, check out the code here 👇 https://lnkd.in/gPR2bFhF
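A condensed version of the routing idea (my own illustration, not the linked repo): a router scores experts per token, only the top-k experts run, and their outputs are blended using the renormalized router probabilities. Real MoE layers dispatch tokens in batches rather than looping, but the dense loop keeps the logic visible.
```python
# Minimal top-k MoE routing sketch in PyTorch (illustration only).
import torch
import torch.nn as nn

class MoE(nn.Module):
    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, dim). Router picks top-k experts per token.
        probs = self.router(x).softmax(dim=-1)             # (n_tokens, n_experts)
        weights, idx = probs.topk(self.k, dim=-1)          # (n_tokens, k) each
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                   # tokens routed to e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

y = MoE(dim=64)(torch.randn(10, 64))
```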
24
-
Satishkumar Dhule
Salesforce • 2K followers
Last Tuesday, Deliveroo's rider-switch Rails endpoint spiked to 4s latency and intermittent 503s. The flame graphs told the real story and became our speed playbook.
Technical context: flame graphs visualize time across call stacks using sampling profilers. In Rails, latency often stems from DB calls, serialization, or middleware. This makes root causes visible to newcomers. Recent tech context includes AI agents and RAG patterns (LangGraph, Claude 3.5, GPT-4o, Gemini 2.0) guiding performance analyses.
KEY INSIGHTS:
🔍 Flame graphs reveal hot stacks and where wall time sits.
⚡ Latency on the hot path fell from 4s to ~800ms after fixes.
─────────────────────────
🔗 Read the full article: https://lnkd.in/ghcREzpZ
🎯 Practice interview questions: https://lnkd.in/gmy5drNw
#performancetesting #cpuprofiling #memoryprofiling #flamegraphs #latency
1
1 Comment
-
Prabhakara Changala
Sri Venkateswara University • 1K followers
🇮🇳 India’s AI Moment is Here This month, India introduced powerful home-grown Large Language Models - built for Indic languages, scale, and real-world impact. From Sarvam AI’s 30B & 105B models, to BharatGen’s multilingual foundation models, and domain-focused LLMs for education, India is clearly moving toward sovereign, inclusive AI. This is not just innovation - it’s AI built for India, by India. The ecosystem is evolving fast, and the journey has just begun. #IndiaAI #LLM #ArtificialIntelligence #GenerativeAI #DataAndAI #MadeInIndia #AIForImpact
3