WadoLabs

Xcode 26.3 Goes Agentic — Apple Keeps Developers Where It Wants Them

apple · ai · clips

Xcode 26.3 unlocks the power of agentic coding

Xcode 26.3 introduces support for agentic coding, a new way in Xcode for developers to build apps, powered by coding agents from Anthropic and OpenAI.

Apple Newsroom

Apple Newsroom:

Xcode 26.3 integrates Anthropic's Claude Agent and OpenAI's Codex, enabling developers to leverage AI agents directly within the IDE for autonomous task completion.

Over the past year, the biggest threat to Apple's developer ecosystem wasn't Flutter or React Native — it was Cursor. Developers were leaving Xcode not because they stopped building for Apple platforms, but because the AI tooling elsewhere was so much better that it was worth the tradeoff. Apple just closed that gap in a single release.

The real move here is MCP. By implementing a full Model Context Protocol server, Apple didn't just bolt on two AI providers — they made Xcode an open target for any MCP-compatible agent. Claude Code, Cursor, whatever comes next — they can all drive Xcode natively now. That's not how Apple usually plays it. They chose an open standard over a proprietary integration, which tells you how seriously they took the developer experience.
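To make the "open target" point concrete: MCP speaks JSON-RPC 2.0, and an agent drives a server's capabilities through the `tools/call` method (after discovering what's available via `tools/list`). Here's a minimal sketch of what such a request looks like on the wire — the tool name `build_project` and its arguments are made up for illustration, not anything Apple has documented:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for MCP's tools/call method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A hypothetical agent asking an IDE's MCP server to run a build.
# A real client would first discover available tools via tools/list.
request = make_tool_call(1, "build_project", {"scheme": "MyApp"})
print(request)
```

Because this envelope is standardized, any MCP-compatible client can target any MCP server — which is exactly why implementing one in Xcode opens the door to Claude Code, Cursor, and whatever comes next.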

Agents can autonomously search documentation, explore file structures, modify project settings, capture and iterate through Xcode Previews, execute builds, and implement fixes based on feedback.

This is the part that matters for day-to-day work. Xcode Previews have always been SwiftUI's killer feature but also its biggest friction point — constant rebuilds, layout tweaks, state wrangling. Having an agent that can capture a preview, evaluate it, and iterate without you touching the keyboard is genuinely new. It's the kind of integration you can't replicate in VS Code because VS Code doesn't have Xcode Previews.

Apple's playbook has always been the same: make the first-party experience good enough that leaving isn't worth it.

Claude Code Is the #1 AI Dev Tool After Just Eight Months

ai · claude · clips

AI Tooling for Software Engineers in 2026

Claude Code dominates tool usage, leaders are more positive about AI than engineers, staff+ engineers are the biggest users of AI agents, and more. Exclusive data and analysis from 900+ respondents.

newsletter.pragmaticengineer.com

Gergely Orosz and Elin Nilsson, The Pragmatic Engineer:

Claude Code has gone from zero to be the #1 tool in only eight months.

Not a huge surprise. Everyone I talk to uses Claude Code — it's become the de facto go-to tool. The reason is simple: Anthropic ships at a relentless pace. Day by day, they keep sherlocking community tools, absorbing their best ideas into the first-party experience. Remote Control, hooks, worktrees, MCP connectors — features that used to require third-party tools just keep getting built in. The agentic-by-default approach is the other half of it. Claude Code doesn't suggest — it does. It reads your files, runs your tests, makes commits, and chains actions together without you babysitting each step. That's a fundamentally different workflow than autocomplete, and once you experience it, everything else feels slow.
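Hooks are a good example of how deep this first-party absorption goes. They live in Claude Code's `.claude/settings.json`, and the general shape looks like the sketch below — a hook that runs a formatter after file edits. Treat the matcher and command as illustrative choices, not a recommendation:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
```

This is the kind of thing people used to build as wrapper scripts around the CLI; now it's a supported extension point.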

63.5% of Staff+ engineers use agents regularly, versus 49.7% for regular engineers.

This one is personal. A few years ago, I genuinely thought my coding career was winding down due to age. AI tools revived my passion completely — my brain feels like it's back to my 25-year-old self. It's like having five pairs of hands and five brains. I grew tired of boilerplate years ago, and the speed at which I can now automate that away and stay in flow state on high-impact features is intoxicating. I think a lot of Staff+ engineers in my generation feel the same — either they were being aged out, or they hit a ceiling on what one person could build. AI gave them superpowers, and superpowers are addictive.

As for why younger engineers have lower adoption — I think it's the classic "you don't know what you don't know." When your work is scoped to the things you already understand, you don't feel the friction that makes these tools indispensable. But once your eyes open to the full complexity of end-to-end systems — backend, frontend, DevOps, SQL, video processing, all of it — your creative juices start flowing and you naturally reach for tools that let you do everything at once. Experience breeds ambition, and ambition breeds adoption. There's probably a direct correlation between years of experience and AI agent usage, and this data backs that up.

MacBook Pro M5 Pro & M5 Max: AI Performance Is the New GHz

apple · clips

Apple introduces MacBook Pro with all-new M5 Pro and M5 Max

Apple announced the latest 14- and 16-inch MacBook Pro with the all-new M5 Pro and M5 Max.

Apple Newsroom

Apple Newsroom:

Up to 4x AI performance compared to the previous generation, and up to 8x AI performance compared to M1 models — with Neural Accelerators integrated into GPU cores enabling on-device LLM execution.

Love this — Apple is making AI performance the standard metric for how powerful a personal computer is. It's the new GHz.

There's a real distinction between running local models and using hosted ones like Claude or ChatGPT though. I can totally see the case for local compute — processing tons of video, running models offline, all that good stuff — but honestly, most of my workflow runs through Claude. So for 80% of my use cases, this is probably overkill.

Will I ultimately resist buying it? Probably not.

AI Adoption Is a Mile Wide and an Inch Deep — We Are Still Early

ai · clips

First — 35-year-olds are millennials, not boomers. Boomers are 60 to 80. And the data is shakier than the tweet implies. A Gallup poll from January 2026 found that 51% of US workers have used AI at work — but only 12% use it daily. ChatGPT alone hit 900 million weekly active users this month. People know AI exists.

But the sentiment is right — we are still early. Almost everyone I talk to outside of tech says they've used AI. They've asked ChatGPT a question, had it rewrite an email, maybe summarized a document. They see it as useful. But that's where it stops. Very few have tried to apply it to their day-to-day as a form of automation — chaining tasks together, building workflows, letting it handle things end to end. The jump from "I asked it a question once" to "this runs part of my job now" is massive, and almost nobody has made it yet.

That's the gap. Not awareness — adoption depth. When people start integrating AI into how they actually work, not just poking at a chatbot when they're curious, that's when it changes. I work in tech and only started using it in a meaningful way in the last few months. Even among my peers — early adopters by definition — most are just now reaching that turning point.

Claude Code Remote Control: Does This Replace OpenClaw?

claude · openclaw · clips

Continue local sessions from any device with Remote Control - Claude Code Docs

Continue a local Claude Code session from your phone, tablet, or any browser using Remote Control. Works with claude.ai/code and the Claude mobile app.

Claude Code Docs

Claude Code now has Remote Control — start a session at your desk, then pick it up from your phone or any browser. Your local environment stays fully intact: filesystem, MCP servers, tools, project config. Nothing moves to the cloud. I shut down my OpenClaw instance to try it.

It's a good start. The core idea — /remote-control from an existing session and you're connected from anywhere — is exactly how I want to work. But it needs some rough edges smoothed out before it replaces a dedicated setup.

For one, your machine can't sleep. I was writing this post from my iPad via Remote Control and had to reconnect and restart the session when my Mac dozed off. You'll want to tweak your energy settings or run a keep-alive if you plan to step away for real. The session also times out after about 10 minutes of lost connectivity, so there's no fire-and-forget.
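If you want to step away without the session dying, macOS's built-in `caffeinate` and `pmset` tools cover the keep-alive case. These are standard system commands, though the exact flags you want will depend on your setup:

```shell
# Prevent idle sleep for as long as this runs (Ctrl-C to stop):
caffeinate -i

# Or keep the machine awake only while a specific command runs:
caffeinate -i some-long-running-command

# Or change energy settings directly: never sleep while on AC power
# (requires admin; -c applies to the charger profile only):
sudo pmset -c sleep 0
```

The `caffeinate -i` approach is the least invasive since it doesn't touch your persistent energy settings.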

What's missing for a full OpenClaw replacement is orchestration. There's no persistent memory between sessions, no heartbeat or scheduler to kick off recurring tasks, and no way to queue up work while you're away. Right now it's a remote window into a live session — not a background agent. To get there, you'd either need Anthropic to build those primitives into the product, or roll your own orchestration layer on top.
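Rolling your own layer isn't that exotic, since Claude Code already has a headless print mode (`claude -p "prompt"`). Below is a bare-bones sketch of the scheduler/heartbeat idea — the task list and prompts are entirely made up, and a real setup would want logging, error handling, and persistence:

```python
import subprocess
import time

# Hypothetical recurring tasks; the prompts are illustrative only.
TASKS = [
    ("triage", "Summarize new issues in this repo and draft replies."),
    ("tests",  "Run the test suite and fix any failures you find."),
]

def build_cmd(prompt: str) -> list[str]:
    """Invoke Claude Code headlessly via its -p (print) mode."""
    return ["claude", "-p", prompt]

def run_forever(interval_s: int = 3600) -> None:
    """A minimal heartbeat: run each task, sleep, repeat."""
    while True:
        for name, prompt in TASKS:
            print(f"running task: {name}")
            subprocess.run(build_cmd(prompt), check=False)
        time.sleep(interval_s)

if __name__ == "__main__":
    run_forever()
```

That's roughly the primitive Remote Control lacks today: something that kicks off work on a schedule rather than waiting for you to open a window into a live session.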

Jony Ive's First OpenAI Device Will Be Smart Speaker With Camera, 2027 Launch Planned

openai · clips

Jony Ive's First OpenAI Device Will Be Smart Speaker With Camera, 2027 Launch Planned

OpenAI is working on several AI hardware devices in partnership with former Apple designer Jony Ive, and the first product that comes out could be a smart speaker. The company is developing a smart speaker, a smart lamp, and considering AI glasses, according to The Information, with the speaker set to come out in early 2027. OpenAI's smart speaker has an integrated camera and it is designed to learn information about who is using it and what's around them.

MacRumors

Juli Clover, MacRumors:

The speaker would observe users and suggest actions to help them achieve goals, such as suggesting an early bedtime ahead of a morning meeting.

OpenAI has 200+ people on hardware and engineers are already complaining about LoveFrom's pace. But the real story is Apple — behind on AI, they're infiltrating the home with a suite of Siri-powered products this year, and OpenAI is trying to beat them to it.

Apple's March 4 Launch: The Smart Home Gap Is the Story

apple · clips

Apple’s March 4 launch event: New products and what to expect - 9to5Mac

Apple is holding a special event March 4 for press around the world, and rumors indicate there are lots of new products that could launch.

9to5Mac

Ryan Christoffel, 9to5Mac:

Apple intends to roll out new products daily in the lead-up to March 4. That means we could get hardware launches via press release on Monday March 2, Tuesday March 3, and Wednesday March 4.

Nine products in three days with no keynote? Apple is speed-running the launch calendar. The real tell is what's missing — HomePod Touch and the new Apple TV are held back by Siri. That means Apple Intelligence still isn't ready for the living room. The Mac and iPhone lineup is the distraction. The smart home gap is the story.

Hello World

intro · meta

Welcome to WadoLabs! I'm Edward Chan — a software engineer based in Los Angeles. This is the first post on the blog, so I figured I'd introduce myself.

A Bit About Me

I've been building software professionally for over two decades. I've spent most of my career at the intersection of engineering and product — leading teams, designing systems, and shipping things people actually use. Currently, I work at Genius Sports.

I've always been the type to have a side project going. Whether it's a macOS utility, a web app, or something I just wanted to exist — I like making things. WadoLabs is the home for all of that.

When I'm not writing code, you might find me coaching youth basketball. I'm the head coach of the Super Saber Brothers — it's a blast and keeps me honest about communication and teamwork in ways that software never could.

Why This Site

I wanted a place to share what I'm working on outside of my day job — somewhere to document projects, write about technical decisions, and put things out into the world. No algorithms, no feeds, just a simple site with things I've built and things I've learned.

The first project up is KeyLime, a lightweight keyboard remapping app for macOS. It lets you remap keys, create layers, and use tap-hold gestures — all from the menu bar. If you've ever wished Caps Lock did something useful, give it a try.

What to Expect

I'll be writing about:

  • Side projects — walkthroughs, technical decisions, and lessons learned
  • Engineering practices — tools, workflows, and approaches I find useful
  • Technology deep dives — exploring interesting problems and how to solve them
  • Bite-size clips and takes — quick links and commentary on topics I find interesting

Stay Tuned

More posts coming soon. In the meantime, check out the projects page to see what I've been building.