Greater Boston
294 followers 295 connections
Activity
- Sure do miss the Ghostred days! https://lnkd.in/evF5Ggj Zalyia Grillet Kyle Lehtinen Brian Marshburn Anthony Wiseman Rob Fuller Brett Csorba
Liked by Brett Csorba
- We’re honored to top the 2016 Great Place to Work list! Thank you to all of our employees for their contributions to making Google a great workplace…
Liked by Brett Csorba
Experience & Education
-
Google
***** ******** ********
-
**
*********** ******** ******** ********
-
********** ** ********
********** *********
-
********** ** ********
******** ** ******* ** ******** ************** ******* ***** *** ***** ******** *************** ********** ************** ********** *** **************** ***** 3.86
-
-
********** ** ********
******** ** ******* ***** ***** *** ***** ******** ******** *********** ******* ****
-
Patents
-
System and Method for Determining String Similarity
Issued US 9,269,028
Algorithm and defined metric to compute a string similarity metric for two input strings in O(m+n) time and O(m+n) space, where m and n are the lengths of the input strings. Csorba-Kurzer Similarity is more efficient than comparable edit distance algorithms, including Levenshtein, Damerau–Levenshtein, Needleman–Wunsch, and Smith–Waterman, all of which run in O(mn) time and O(mn) space. Our metric is also believed to satisfy the triangle inequality, allowing for the use of pruning models and space-partitioning data structures. Use cases for the algorithm include DNA sequencing, protein comparisons, network topology evaluation, phishing detection, and malware classification.
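The patented Csorba-Kurzer metric itself is not reproduced here. For contrast, a minimal sketch of the classical Levenshtein edit distance that the description benchmarks against — the function name and structure are illustrative, not taken from the patent:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance: O(m*n) time.

    This version keeps only two rows of the DP table at a time,
    so memory use is O(n) rather than the full O(m*n) table.
    """
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                 # deletion from a
                curr[j - 1] + 1,             # insertion into a
                prev[j - 1] + (ca != cb),    # substitution (free on match)
            ))
        prev = curr
    return prev[-1]

# Example: "kitten" -> "sitting" takes 3 edits.
print(levenshtein("kitten", "sitting"))  # → 3
```

The inner loop over both strings is exactly where the quadratic O(mn) cost the patent text criticizes comes from.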
Other inventors
Projects
Explore more posts
-
Siddharth Ramakrishnan
Scale Venture Partners • 2K followers
Cursor's new Composer model is (not so quietly) Chinese under the hood.

Cursor shipped the new version of Composer as its "first in-house frontier coding model," optimized through RL for real coding tasks. Within days, users noticed Composer's hidden thinking was full of Chinese characters. Industry press connected the dots: both Cursor's Composer and Windsurf's SWE-1.5 appear based on Chinese large models (likely Qwen bases), fine-tuned and wrapped in US products.

Community reaction is split. Some call it a major downgrade: slower and struggling with complexity. Others report speed gains and slightly better performance.

So why is the reception so mixed? Gavin Leech's AI Tigers post (link in comments) shows that on fresh benchmarks, Chinese models lose around 21% performance compared to roughly 10% for Western models. They look closer on leaderboards than they perform on novel problems. The cost story is similar: 3-6x cheaper per token, but often requiring 2-4x more tokens, which shrinks the cost advantage fast.

So why would Cursor make this trade? Because they're not trying to replace GPT or Claude. They're owning a narrow, high-volume behavior and squeezing it with fine-tuning. Cursor has massive amounts of product-specific data (traces, diffs, accept/reject signals) and can invest in the RL loop to extract value from a cheaper base.

This is where Chinese and open-source models actually fit in the stack. They're a strong fit for narrow problems with abundant training signal: tab complete, inline edits, code transforms. They're a weaker fit as primary reasoning for high-stakes, long-horizon workflows. Frontier US models still dominate the second category. Chinese open-source is taking over the first.

Composer's rollout is a case study in how powerful Chinese models have become, and a live demo of where their limits still show.
39
12 Comments
-
Yang Pei
Aven • 2K followers
Deep dive into RL for Recommendations: Lessons from OpenOneRec & verl. 🧠💻

I’ve been exploring the OpenOneRec project lately, specifically how they’ve integrated Reinforcement Learning into recommendation pipelines. It’s a fascinating look at how RecSys is moving toward a generative "Foundation Model" approach.

Key Technical Takeaways:
- The Power of verl: I was impressed by how they utilize the verl framework to manage the complexity of RL. It provides a clean abstraction for the Actor-Critic-Ref-Reward worker groups, making distributed RL training actually manageable.
- Hybrid Workers: One of the smartest design choices is the OneRecActorRolloutRefWorker. It’s a hybrid worker that handles the Actor, Rollout, and Reference roles, significantly reducing communication overhead during the PPO phases.
- vLLM Integration: Seeing how they use vLLM for 2-stage rollouts while maintaining FSDP sharding for training shows the level of optimization needed for industrial-scale RL.

If you are interested in how LLM-style training is merging with Recommendation Systems, this is a must-read OSS project.
Substack: https://lnkd.in/gnKz6diu
Blog: https://lnkd.in/g8fd3sJw
#RecSys #OpenSource #vLLM #ReinforcementLearning #RLInfra
130
-
Gunnar Morling
Confluent • 9K followers
Many folks underestimate the long-tail effects that good blog posts can have. It's easy to be discouraged when writing something and then there are no reactions to it at all. Yet, in some cases, I've received comments and feedback for posts months after publishing them. Some of them keep being shared years later. So focus on writing high-value pieces and have some patience; they'll find their readership eventually. Blogging is a long game.
213
10 Comments
-
Colleen Farrelly
Post Urban • 12K followers
Some recent, exciting news on compressed tokenizers and how individual tokens modify images in encoder-decoder models. I hope this opens up a new avenue of research with smaller models across frameworks (perhaps JEPA?). https://lnkd.in/evFw8WBu
69
2 Comments
-
Michael Serpico
Grubhub • 2K followers
Seeing Microsoft lay off thousands—especially in Xbox—while doubling down on AI is already troubling. Then I saw Mr. Turnbull’s LinkedIn, where an Xbox Games Studios Publishing executive suggested using AI tools (like ChatGPT/Copilot)—the very tech arguably linked to job losses—as a therapist after being laid off. That’s one of the most dystopian takes I’ve encountered.
8
-
Taylor Mullen
Google • 5K followers
Here is Gemini CLI’s September 22nd weekly update for v0.7.0
- 🎉 Build Your Own Gemini CLI IDE Plugin: We've published a spec for creating IDE plugins to enable rich context-aware experiences and native in-editor diffing in your IDE of choice. (skeshive)
- 🎉 Gemini CLI Extensions - Flutter: An early version to help you create, build, test, and run Flutter apps with Gemini CLI
- Nano banana: Integrate nano banana into Gemini CLI
- Telemetry Config via Environment: Manage telemetry settings using environment variables for a more flexible setup. (jerop)
- Experimental Todo’s: Track and display progress on complex tasks with a managed checklist. Off by default but can be enabled via "useWriteTodos": true (anj-s)
- Share Chat Support for Tools: Using /chat share will now also render function calls and responses in the final markdown file. (rramkumar1)
- Citations: Now enabled for all users (scidomino)
- Custom Commands in Headless Mode: Run custom slash commands directly from the command line in non-interactive mode: gemini "/joke Chuck Norris" (capachino)
https://lnkd.in/g3D4Y7-S 🧵
#GeminiCLI #Gemini #AI #OSS #Flutter #IDE
95
2 Comments
-
Zhoutong Fu
Hippocratic AI • 4K followers
🧪 Diffusion language modeling is becoming a new area of exploration in the LLM space. Early efforts like Mercury are now being joined by larger players, with Seed (ByteDance) and Gemini (Google) introducing diffusion-based alternatives for text generation. It’s still early, but exciting to see diffusion techniques being adapted beyond vision, potentially offering new modeling perspectives for language. 🔗 Learn more about Seed’s diffusion model: https://lnkd.in/d6VTUSe2
30
-
Akhil Reddy Danda
HHA Hospital Medicine • 6K followers
Andrew Ng released an "Agentic Reviewer" for research papers. It hit near human-level agreement after training on real ICLR 2025 reviews.

𝗧𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗶𝘁 𝘁𝗮𝗿𝗴𝗲𝘁𝘀
Paper review is slow. Each cycle takes around six months. One student saw six rejections over three years. Iteration speed, not ideas, became the bottleneck.

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀
The system learns from real conference feedback. It reads your paper, then searches arXiv for related work. The flow is simple:
- Analyze claims and structure
- Ground comments in published research
- Produce structured reviewer-style feedback
It works best in fields with open literature.

𝗛𝗼𝘄 𝗴𝗼𝗼𝗱 𝗶𝘁 𝗶𝘀
Human-to-human review correlation sits at 0.41. AI-to-human correlation reaches 0.42. That is near reviewer agreement today.

Link: http://paperreview.ai
12
2 Comments
-
Saranyan Vigraham
Meta • 5K followers
Spent a few hours with Anthropic's Claude 4. The jump isn't just incremental - this feels like a different category of tool entirely. These are qualities I think more teams should care about:

1. Handles Long Contexts Without Losing the Thread
Claude can work with 200K+ tokens. That’s more than just scale — it changes how you think about model interactions. You can feed in entire legal docs, policy drafts, even books, and it still keeps its bearings. Great for continuity across large, messy inputs.

2. It Knows When to Pause
One thing I appreciate: Claude is more comfortable saying “I’m not sure” or identifying ambiguity than most models. It doesn’t rush to fill in blanks. That kind of epistemic humility goes a long way in research and policy conversations.

3. Built-in Value Alignment
Claude’s responses often reflect its “constitutional” training — it tends to explain why it's taking a stance and does so in a transparent way. If you’re working on systems where safety, fairness, or ethics matter, this foundation makes it a strong candidate.

4. Less Performative, More Human
While GPT can sometimes default to polished, confident output, Claude feels... quieter. More thoughtful. Especially in sensitive settings: cross-cultural dialogue, mental health, collaborative ideation, etc. This tone can make all the difference.

5. Surprisingly Good at Reading the Room
I've not yet seen Claude overstep. It seems to wait for cues. In multi-turn conversations, I’ve seen it defer, ask for clarification, or slow down when things get fuzzy. That’s a different kind of intelligence — one tuned to collaboration, not control.

Best thing I appreciated was that it didn't feel like a model trying to “wow” me. One that’s trying to understand. That distinction might define the next era of AI systems.

#AI #Future #LLM #Dev #Claude
592
31 Comments
-
Obie Fernandez
ZAR • 6K followers
I just published a piece that helped me clarify something I’ve been feeling for a while about AI, LLMs, and why reactions to them are so polarized.

The backlash against AI isn’t really about truth, creativity, or jobs. It’s about identity. Some people experience meaning through isolation, effort, and visible struggle. Others experience it through coordination, delegation, and direction. LLMs collapse the cost of externalized thinking, which threatens identities built around suffering as proof of authenticity.

That’s why AI-assisted coding, music, or even dating can feel “soulless” to some people, even when it works better. This isn’t a new conflict. It echoes earlier panics around the printing press, automation, and other coordination tools. What’s new is that AI pushes this shift into cognition itself.

I wrote about this through the lens of two models of selfhood: Sisyphus versus the Conductor. If you’ve felt either excitement or disgust when using AI tools, this piece might help explain why.
👉 https://lnkd.in/e3xTccsp
#AI #LLMs #SoftwareEngineering #Creativity #FutureOfWork #Identity #Leadership
32
4 Comments
-
James Carr
Zapier • 6K followers
I've been away for a bit after my mother's passing, but I'm aiming to get back to a regular blogging schedule again. I wrote this post before things went sideways and never got around to sharing it.

I'd been exploring Temporal's workflow patterns in a blog series, and this last post in the series tackles a problem I think more teams are running into as they build AI agents: what happens when your agent fails mid-execution and loses all its context and completed work?

I built a multi-model intelligence system called NetWatch that uses Temporal's AI SDK integration to make every LLM call and tool invocation automatically durable and retryable. The most interesting piece is a scatter/gather pattern that queries Claude Haiku, Sonnet, and Opus in parallel, then aggregates results. Each model gets the same tools implemented as Temporal Activities, and if one model fails the others still deliver.

Temporal's Vercel AI SDK plugin means the code looks nearly identical to standard AI SDK usage. One line change gives you durable execution. API credentials only live on worker nodes, not your HTTP server. And every tool call, retry, and model response shows up in Temporal's event history automatically, which makes debugging agent behavior dramatically easier.

The obvious tradeoff with multi-model scatter/gather is cost since you're burning 3x the tokens, but it's a strong pattern during evaluation phases or for high-stakes decisions. Temporal makes it straightforward to dial back to a single model once you know which tier handles your workload.

This wraps up my Temporal series for now. Next I'll be looking at how these same patterns translate to other durable execution and workflow frameworks like Argo Workflow.

Full write-up with architecture diagrams and working code: https://lnkd.in/gaK82qdn
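The scatter/gather shape the post describes can be sketched without Temporal using plain threads. This is a generic illustration, not the post's actual Temporal/Vercel AI SDK code; `call_model` and the model names are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor


def scatter_gather(prompt, models, call_model):
    """Fan the same prompt out to every model in parallel, then gather.

    A model that raises is simply absent from the result dict, so one
    failing model does not sink the whole batch -- the property the
    post attributes to the pattern.
    """
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call_model, m, prompt) for m in models}
        results = {}
        for model, future in futures.items():
            try:
                results[model] = future.result()
            except Exception:
                pass  # log and continue; the surviving models still deliver
        return results


# Placeholder stand-in for a real LLM call.
def call_model(model, prompt):
    if model == "opus":
        raise RuntimeError("model unavailable")
    return f"{model} answered: {prompt}"


print(scatter_gather("summarize the incident", ["haiku", "sonnet", "opus"], call_model))
```

What Temporal adds on top of this sketch is durability: each fan-out call would be an Activity whose result survives worker crashes, instead of an in-memory future.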
17
4 Comments
-
Rehan Mulla
Splunk • 2K followers
🤖 What if AI became its own prompt engineer? That question just got very real.

Imagine an AI not just following prompts, but rewriting and refining its own instructions to work smarter. I just read how Anthropic’s latest multi-agent system does exactly that – with multiple Claude 4 agents not only performing tasks, but actually improving the tools and prompts they use. In essence, the AI becomes its own prompt engineer. The AIs collaborate, and one even acts as a tool-tester to make the whole system better.

For example, one Claude agent grabs a flawed scheduling tool and puts it through its paces. It runs the tool dozens of times, diagnoses the errors, then rewrites the tool’s instructions to prevent those failures. The outcome? Other agents using that tool later finished tasks faster and with far fewer mistakes.

These results (✅ 40% faster task completion, ✅ more reliable outputs) aren’t just stats – they hint at a new paradigm. We’re not just building AI solutions; we’re building systems that learn to improve themselves mid-flight. AI can be its own best coach. This kind of self-improvement loop could be a game-changer for how we build and trust production-grade AI systems going forward!

#MultiAgentAI #PromptEngineering #AIAgents #LLM
16
1 Comment