Vũ Châu, a Senior Android Engineer based in Pennsylvania, talks about problem-solving, classical music, and more.
What motivates you to come to work? I genuinely enjoy helping my team tackle problems every day and seeing our solutions come to life in unexpected ways. There’s always something new to learn, and I love applying my skills to deliver real impact.
Can you discuss a skill you’ve acquired or developed while working at LoopMe? LoopMe has given me the opportunity to develop my AI engineering skills in a hands-on, practical way. I’m not only becoming more efficient with AI tools every day, but also sharpening my development abilities in ways that benefit me, my team, and the broader organization.
Do you have any WFH routines or rituals? I tend to reach for my classical music playlists when tackling tough problems. A dose of Chopin really helps me focus.
Do you have any hobbies? I enjoy street photography, especially with my new Ricoh GR IV that I brought along on a recent trip to Vietnam. At home, I’m usually working through my enormous Audible collection—mostly WWII history titles.
If you could live anywhere in the world, where would you choose and why? Hands down, Vietnam. It’s where I was born and grew up, and the country I return to most often these days. The cuisine is top-notch, the culture is vibrant, and—last but not least—it’s where I met my wife.
If you could invite any public figure to dinner, who would you choose and why? Stephen Hough, the acclaimed English classical pianist. I actually had the chance to hang out with him once, and I’d love another opportunity to hear his perspectives on music and life.
Want to join the LoopMe team? Take a look at our open positions.
We’re excited to share that LoopMe has secured a new patent, the third for our Intelligent Marketplace, bringing LoopMe’s total patent count to six, with a further twelve pending.
Our newly issued patent, titled “Automated Hybrid, Optimized Advertising Auction System and Method,” covers how LoopMe connects supply and demand in programmatic auctions.
The patented technology plays a foundational role in the ability of LoopMe’s multidimensional bid optimization algorithm to dynamically set bid floors and margins. This enables advertisers to reach audiences with greater precision while maximizing performance and value across LoopMe’s exchange.
“This new patent is another strong validation of our long-term investment in proprietary, AI-driven advertising solutions that deliver real impact for partners across the digital ecosystem,” commented LoopMe Chief Data Scientist Leonard Newnham.
With dedicated data science and engineering teams, LoopMe is focused on developing cutting-edge AI that enables better real-time decision-making from the best available data sets.
Lessons from rolling out agentic AI across a real engineering organisation
Generative AI is reshaping software engineering—but not in the way most people expect. The narrative often goes like this: give developers an AI assistant, plug in a code agent, add GPT-5, and voilà—productivity skyrockets.
Reality is far more nuanced.
At LoopMe, our 20-person Data Science team sits inside a 400-person adtech company, and we’ve spent the last 18 months operationalising GenAI for real engineering work. We’ve lived through the hype cycle, the scepticism, the false starts, and the breakthroughs. And we’ve learned what truly boosts velocity—and what simply doesn’t.
This isn’t a theoretical article. It’s based on actual adoption, real code, real pull requests, real frustrations, and real wins.
Here’s what happened.
We started early—but usage was uneven
Our first experiments began with JetBrains AI Assistant inside PyCharm. Some developers used it constantly. Others ignored it. Most used it like a slightly cleverer StackOverflow: helpful, but not transformative.
Then agentic tools emerged, like JetBrains Junie, and access broadened to the frontier models: GPT-5, Claude 4.5, Gemini 2.5.
Surely now everyone would embrace AI coding? They didn’t. Better tools alone didn’t change behaviour. We needed something else.
We made AI usage visible—but safe
We introduced two simple conventions in every pull request:
##AI: <percentage>
##Junie: <percentage>
This showed how much of the code was AI-assisted. Then we built (using agentic AI!) a Python script to scrape all pull requests after each sprint so we could track usage trends over time.
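The parsing step of such a script can be sketched in a few lines. This is a minimal, illustrative version, not the actual LoopMe tool: the real script pulls pull-request bodies from the code-hosting API each sprint, whereas here we parse inline sample strings for the ##AI and ##Junie markers and average them.

```python
import re
from statistics import mean

# Hypothetical sketch: extract the ##AI / ##Junie markers from PR
# descriptions and average each marker across a sprint's PRs.
MARKER = re.compile(r"^##(AI|Junie):\s*(\d+)%?\s*$", re.MULTILINE)

def parse_markers(pr_body: str) -> dict:
    """Return e.g. {'AI': 60, 'Junie': 40} for the markers in one PR body."""
    return {tool: int(pct) for tool, pct in MARKER.findall(pr_body)}

def sprint_summary(pr_bodies: list) -> dict:
    """Average each marker over the PRs that declared it."""
    per_tool = {}
    for body in pr_bodies:
        for tool, pct in parse_markers(body).items():
            per_tool.setdefault(tool, []).append(pct)
    return {tool: mean(vals) for tool, vals in per_tool.items()}

prs = [
    "Refactor bidder config\n##AI: 60\n##Junie: 40",
    "Fix floor-price rounding\n##AI: 20",
    "Add sparse feature store\n##AI: 80\n##Junie: 70",
]
# Prints the sprint's average AI and Junie percentages.
print(sprint_summary(prs))
```

Appending one summary per sprint to a CSV is enough to plot the trendline shown at the bi-weekly meeting.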
Crucially, we didn’t use this for micromanagement. We didn’t publish a leaderboard of “low adopters.” We didn’t shame anyone.
Instead, at our bi-weekly Data Science meeting, we recognised the top three “AI wizards”. Nothing heavy. No pressure. Just positive reinforcement.
Did it increase adoption? Yes—significantly.
Making AI usage visible but not punitive was one of the biggest cultural unlocks.
We discovered something bigger: GenAI requires a different way of coding
The biggest surprise was this: using an AI assistant effectively changes how you think about coding, not just how fast you type.
Traditional coding: “What’s the function I need to write?”
AI-accelerated coding: “How do I frame the problem so the agent can solve 80% of it without derailing itself?”
Teams had to learn a new skillset:
How to define achievable, bounded tasks
How to avoid overly vague or overly ambitious prompts
How to detect when the AI is spiralling into over-engineering
How to guide the agent back to the real objective
How to evaluate its output as ruthlessly as you’d evaluate a human
The people who mastered this new mental model saw the biggest productivity gains—sometimes shaving weeks off tasks.
AI became more than boilerplate: it suggested entirely new ideas
Many assume coding agents are mainly good for scaffolding, refactoring, and tests. That’s not what we saw.
We repeatedly got suggestions like:
“This smoothing can be stabilised with a Laplace prior.”
“This optimisation resembles a convex projection problem—try XYZ.”
“Use sparse matrices here; complexity drops from O(n²) to O(n).”
These weren’t regurgitated snippets from StackExchange. They were genuinely creative algorithmic insights. The models, with their vast exposure to techniques and patterns, often offered ideas none of us had considered.
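To make the first suggestion concrete: "stabilise with a Laplace prior" amounts to add-alpha smoothing, which keeps a rate estimate away from 0 or 1 when counts are tiny. The sketch below is illustrative only, with made-up numbers, and is not LoopMe's production smoothing code.

```python
# Add-alpha (Laplace) smoothing: the posterior-mean rate under a
# symmetric prior with pseudo-count alpha on each outcome.
def smoothed_rate(successes: int, trials: int, alpha: float = 1.0) -> float:
    """Rate estimate that stays strictly between 0 and 1."""
    return (successes + alpha) / (trials + 2 * alpha)

# A raw estimate from 0 successes in 3 trials would be 0.0 and
# unstable; the smoothed estimate is pulled gently toward 0.5.
print(smoothed_rate(0, 3))     # (0 + 1) / (3 + 2) = 0.2
print(smoothed_rate(40, 100))  # 41 / 102, roughly 0.402
```

As trials grow, the pseudo-counts wash out and the estimate converges to the raw rate, which is exactly the stabilising behaviour the agent was pointing at.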
When that happened, the team started to trust AI as a thinking partner—not just a typing assistant.
We also learned what absolutely doesn’t work
1. Expecting usage to rise automatically as models get better
Switching to GPT-5 didn’t magically increase adoption. Nor did adding Claude 4.5 or Gemini 2.5. Tools don’t change behaviour. Rituals do.
2. Letting AI usage remain a private, individual habit
Without visibility, adoption stalls. People think “no one else is doing this,” or “maybe this isn’t allowed,” or simply forget to use it.
Making usage socially normal—without pressure—was essential.
3. Expecting AI to control its own scope
Agents happily generate complexity: elegant abstractions, nested class structures, entire architectures that solve a problem you don’t have.
Humans still need to:
keep the scope tight
prune complexity
recognise dead ends
apply context
GenAI accelerates everything—including going in the wrong direction.
The biggest surprise: junior team members adopted GenAI fastest
We’ve had several data analysts transition into full data scientists. They learned most of their heavy coding with GenAI tools. For them, “AI-first coding” isn’t a shift—it’s the default.
Meanwhile, some experienced engineers were slower to adapt. Not because they’re less capable, but because their muscle memory is stronger.
This mirrors what we hear across the industry: the next generation of engineers will expect AI-first workflows by default.
Gamification helped more than we expected
Every two weeks, we showed:
The trendline for AI usage
Sub-team progress
The top three AI power users
This turned adoption into something fun and social. Everyone improved, and usage rose naturally—without any of the politics or resentment that naming-and-shaming would have caused.
So… did productivity actually improve?
Early signs say yes. We’re seeing:
More high-quality, peer-reviewed code
Faster prototyping
Faster refactoring
Faster “first viable attempt” at new tasks
Significant speedups in complex optimisation work
Anecdotally, several multi-week tasks were completed in days. Quantifying pure productivity is tricky, but the qualitative evidence is strong.
GenAI isn’t just a tool—it’s a new engineering discipline
The organisations that treat it as such will move faster than those waiting for “the perfect agent” to arrive.
The teams that win will be the ones that:
Make usage visible and celebrated
Teach engineers how to think differently
Use AI for thinking, not just typing
We’re still early in this transition—but the velocity curve is bending in the right direction.
If you’re adopting GenAI in your engineering team, I’d love to hear what’s worked (and what hasn’t) for you.