Apple Rejected Me Again
So I submitted two apps to Apple last night. Journeyman — the voice calculator — and TAP, the screen time app. It was about 11 PM, I was in my massage chair with the laptop, which is where most of the important decisions at this company get made. We'd run the whole pre-submission pipeline on both of them, everything came back clean, and I closed the laptop and went to bed feeling like a guy who has his act together.
Woke up to two rejection emails.
Of course.
I should back up a little, because the "we" in that sentence is doing a lot of work. I build these apps with Claude — Anthropic's AI. I've tried explaining this to people and it always comes out wrong, like I'm either overstating it or underselling it. It's not a tool I use. We've co-authored fourteen hundred commits in two months. Claude writes code, runs tests, plans architecture, argues with me about naming conventions. And together — because I got burned enough times early on to know I needed one — we built a whole pre-submission system.
The provisioner comes first. It looks at everything Apple wants and makes sure we have it — description, keywords, privacy policy, support page, the works. The auditor is its twin. It goes back through every field, every URL, every checkbox and confirms it's actually there. Between the two of them they cover a lot of ground, and they've saved me from some genuinely embarrassing submissions before. Missing copyright field. Dead privacy policy URL. The kind of thing that's an instant rejection if it gets through.
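The auditor's metadata pass is easier to picture with a sketch. This is a hypothetical reconstruction, not the real system: the required-field list, the metadata dict, and the `audit_metadata` and `url_is_alive` helpers are all stand-ins, and the real thing reads from App Store Connect rather than taking a dict.

```python
from urllib.request import Request, urlopen
from urllib.error import URLError

# Fields Apple requires before a submission is complete. Illustrative subset.
REQUIRED_FIELDS = ("description", "keywords", "privacy_policy_url",
                   "support_url", "copyright")

def url_is_alive(url: str, timeout: float = 10.0) -> bool:
    """HEAD the URL and treat any status under 400 as alive."""
    try:
        req = Request(url, method="HEAD")
        return urlopen(req, timeout=timeout).status < 400
    except (URLError, ValueError):
        # HTTPError (4xx/5xx) subclasses URLError, so a 404 lands here too.
        return False

def audit_metadata(metadata: dict) -> list:
    """Return every problem found in the submission metadata."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not metadata.get(field, "").strip():
            problems.append(f"missing or empty: {field}")
    # A URL that is filled in but dead is an instant rejection too.
    for field in ("privacy_policy_url", "support_url"):
        url = metadata.get(field, "").strip()
        if url and not url_is_alive(url):
            problems.append(f"dead URL: {field} -> {url}")
    return problems
```

Notice what every one of those checks has in common: they all look at the submission record, not at the running app. That gap is the whole story of this post.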
They both said yes. Apple said no.
What Apple Said
Journeyman was flagged under Guideline 3.1.2 — subscription information. No EULA link in the purchase flow. The subscription price wasn't displayed the way Apple requires before someone commits to paying you. And the thing is, the provisioner had set all of that up in App Store Connect. It was there. But the app itself — the thing a human reviewer actually taps through on an iPad — wasn't showing it in the right screen. Our system checks the metadata. It doesn't pretend to be a person with fingers.
TAP got hit twice. First, Guideline 2.1 — App Completeness. Apple gives you a field in the submission form where you type in demo credentials so the reviewer can log in and look around. I typed them in. I did not test whether they worked.
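The fix for that one is to make the pipeline try the credentials instead of just recording them. A minimal sketch, under assumptions: the `/login` endpoint, the JSON payload shape, and the `demo_login_works` name are hypothetical stand-ins, not TAP's actual API.

```python
import json
from urllib.request import Request, urlopen
from urllib.error import URLError

def demo_login_works(base_url: str, username: str, password: str) -> bool:
    """POST the reviewer's demo credentials and confirm the server accepts them."""
    body = json.dumps({"username": username, "password": password}).encode()
    try:
        req = Request(f"{base_url}/login", data=body,
                      headers={"Content-Type": "application/json"})
        # HTTPError (e.g. a 401 for bad credentials) subclasses URLError,
        # so a rejected login falls through to the except and returns False.
        return urlopen(req, timeout=10).status == 200
    except (URLError, ValueError):
        return False
```

Run once before submitting. A False here is the rejection email you didn't get.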
There's this thing that happens when you're working late across multiple apps — and I know this sounds like an excuse, because it is one — where your brain starts pattern-matching instead of actually checking. The form has a field. You put something in the field. It looks right. You move on.
I've done this exact thing on a jobsite. Measured a door rough opening off the plans, cut the header, framed it in, and realized later I never checked whether the plans matched the door I'd actually ordered. The number was right somewhere. Just not where it mattered.
The second TAP violation — I mean, come on. Guideline 2.3.8, Accurate Metadata. The store listing said "TAP: Family Screen Time Pact." The name on the actual device said "Tech Activity Pact." I'd been going back and forth on the name for weeks. Full version felt too long for the home screen. And somewhere in the back-and-forth the two just stopped talking to each other. That one's not even on the system. That's me not finishing a decision.
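The frustrating part is that this mismatch is mechanically checkable, which is why it goes into the audit. Another hedged sketch: `names_match` is my invention here, and the real auditor would pull the listing name from App Store Connect rather than take it as an argument. `CFBundleDisplayName` is the genuine Info.plist key for the name shown on the home screen.

```python
import plistlib

def names_match(info_plist_path: str, store_listing_name: str) -> bool:
    """Compare the home-screen name against the store listing's base name.

    "TAP: Family Screen Time Pact" has the base name "TAP", so a device
    showing "Tech Activity Pact" fails while one showing "TAP" passes.
    """
    with open(info_plist_path, "rb") as f:
        plist = plistlib.load(f)
    device_name = plist.get("CFBundleDisplayName") or plist.get("CFBundleName", "")
    # Compare only the part before any ": subtitle" in the listing name.
    store_base = store_listing_name.split(":")[0].strip()
    return device_name.strip() == store_base
```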
Three violations. Three gaps in a system that said we were good to go.
What I'm Actually Thinking About
Here's what I've been sitting with all morning, and I don't think it's about the checklist.
People ask me about building with AI and I think they want one of two stories. The magic one — where the AI catches everything and you just point it in a direction and software comes out clean on the other end. Or the cautionary one, where I'm naive and this whole thing is going to come apart. And the real answer is so much more boring than either of those.
The real answer is I'm sitting in a massage chair at 11 PM watching my AI-built audit system come back clean on two apps, submitting with confidence, and waking up to rejection emails. The system is good. It works. And it missed three things in one night because it can only check what we've taught it to check. Same as the punch list on a remodel — catches everything except the one thing the homeowner notices when they walk through the door.
And that's not a flaw in the AI. That's what building things is. You make the system. The system misses something. You add a line. Next month there's a new thing it doesn't cover. I did this with houses for five years. I did it with treatment plans for fifteen. The medium changes. The pattern doesn't.
If you're waiting for AI to make building software clean — to make it a process where you follow the steps and the right thing comes out — I think you're going to be waiting a long time. It's better than working alone. It's dramatically better. Fourteen hundred commits in two months kind of better. But it doesn't fix the fundamental thing, which is that you can't check for what you haven't learned to check for yet. And the way you learn is the way you've always learned.
By shipping something and finding out what you missed.
The fixes take about an hour. EULA link, subscription display, demo credentials, name match. The audit gets three new lines it didn't have yesterday. We resubmit tonight.
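Spelled out, those three new lines look something like this — descriptions only, since the point is what the system now asks, not how it asks it. The tuple format is a hypothetical stand-in for the audit's real check registry.

```python
# Guideline number -> what the audit now verifies instead of assumes.
# Hypothetical sketch of tonight's three additions to the check registry.
NEW_CHECKS = [
    ("3.1.2", "EULA link and subscription price visible in the purchase screen itself"),
    ("2.1",   "demo credentials actually log in before they go in the form"),
    ("2.3.8", "store listing name agrees with CFBundleDisplayName on the device"),
]
```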
And then we see what Apple thinks of that.
