I Got Rejected from GSoC 2026 — But Here's Everything I Built Before That Email Arrived
A detailed reflection on my first open source contributions to Maglev, the systems work I shipped, and what I learned before the GSoC rejection email landed.
By Aditya Torgal · May 1, 2026 · 9:00 PM · ~12 min read
April 30th, 11:30 PM.
I'm staring at my screen and the GSoC results are live. I click. I read. Rejected.
The reason they gave me was actually kind of beautiful in a bittersweet way — "You're a great software developer, and I'm genuinely disappointed that I didn't have enough slots available to offer one to you. I really hope you continue to contribute to the project."
Not "your proposal was bad." Not "you weren't good enough." Slot limits. Two slots for the entire project. Multiple strong applicants. That's just how it goes sometimes.
But here's the thing — between February and March, I had merged PRs into a real production Go codebase, fixed actual performance bottlenecks, killed N+1 query bugs, and understood a system that powers public transit data for real cities. And these were my first ever open source contributions. Zero prior open source experience. Just me, GitHub, and a lot of `go test ./...`.
So this blog isn't a sob story. It's everything I did, how I did it, and what I'd tell myself if I could go back to February.
How I Even Found This Project
It was the third week of February. I was hunting for a GSoC org that used Go — not because I wanted an easy ride, but because Go's concurrency model genuinely fascinates me. Goroutines, channels, sync.Mutex, atomic.Value — I'd been building with these for over a year and wanted to find a project where they actually mattered in a real-world context.
Then I came across Open Transit Software Foundation (OTSF) and their project called Maglev.
The pitch was simple but kind of wild: Maglev is a Go rewrite of the OneBusAway Java API server — built to be faster, lighter, and production-ready for transit agencies. OneBusAway is a widely deployed platform that gives commuters real-time bus and train arrival data. Millions of people use apps powered by it every day. Maglev was OTSF's bet that they could do the same thing in Go, with better concurrency and way less memory overhead than the Java stack.
Two developer slots. 350 hours each. Advanced difficulty.
I read "Advanced difficulty" and thought: okay, this is the one.
OTSF was looking for people to:
- Implement missing API endpoints to reach parity with the existing OneBusAway API
- Build test coverage across all endpoints
- Optimize performance for high-traffic deployments
- Write deployment documentation for transit agencies
- Implement caching and database optimizations
This wasn't a "add a button to the UI" kind of project. This was systems work — real backend, real performance constraints, real transit agencies as eventual users.
I was in.
Starting Out: Mid-Semester Exams, Then Go
I didn't jump in immediately. I had mid-semester exams running through the last week of February, so I spent that time reading the codebase — understanding how Maglev served GTFS Static and GTFS-Realtime data, how the SQLite layer worked via sqlc-generated queries, how the REST handlers were structured, and where the gaps with the upstream OneBusAway API spec actually were.
By late February, exams done, I made my first PR.
It was embarrassingly small. The marketing logo was being excluded by `.dockerignore` — everything worked fine when you ran `go build` directly, but inside the Docker container the logo just wasn't there. I found it, fixed it, opened the PR. Merged.
Zero glory. 100% necessary.
That's the thing nobody tells you about open source: the boring PRs build your credibility faster than you'd think. Maintainers notice people who fix the small, unglamorous stuff because most people ignore it.
From there, the PRs got a lot more interesting.
The Contribution Timeline
Here's roughly how it went:
- Feb Week 3 → Found the org, started reading the codebase
- Feb Week 4 → Mid-sems done. First PR: the `.dockerignore` logo fix
- Early March → Ramp-up: API correctness fixes, performance work
- Mid March → Heavier PRs: N+1 elimination, realtime indexing, metrics
- March 21 → Last merged PR before shifting to proposal writing
- Late March → Wrote and submitted the proposal (March 31)
- April → Lab exams, quizzes, end-semester exams. Very low GitHub activity.
- April 30 → Results. Rejected.
Between late February and March 21st, I merged 15+ PRs across Maglev's REST API layer, GTFS database layer, metrics system, test suite, and deployment tooling. Let me walk you through the ones that actually mattered.
The PRs That Made Me Proud
#539 — Killing the N+1 Query Bug in a Critical Endpoint
This one hurt to find and felt great to fix.
One of Maglev's stop-level endpoints was fetching data for each stop in a loop — so if you had 50 stops, you were firing 50 separate SQL queries. Classic N+1. Under low traffic, you'd never notice. Under any kind of real load, it's a ticking time bomb.
I replaced the individual fetches with a single batched query that pulled all the required data in one shot. The fix sounds simple when you describe it that way, but getting the batching right in Go — making sure the result mapping was correct, that the query didn't grow unbounded, that it played nicely with sqlc's type generation — that took some actual thinking.
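If you've never seen the pattern, here's a minimal sketch of the before/after shape. The table and column names follow the GTFS static schema, but the function names are illustrative — not Maglev's actual sqlc-generated queries:

```go
package stops

import (
	"context"
	"database/sql"
	"strings"
)

// Before: one round-trip per stop — the N+1 shape.
func namesOneByOne(ctx context.Context, db *sql.DB, stopIDs []string) (map[string]string, error) {
	names := make(map[string]string, len(stopIDs))
	for _, id := range stopIDs {
		var name string
		if err := db.QueryRowContext(ctx,
			`SELECT stop_name FROM stops WHERE stop_id = ?`, id).Scan(&name); err != nil {
			return nil, err
		}
		names[id] = name
	}
	return names, nil
}

// After: a single batched query with one placeholder per ID.
func namesBatched(ctx context.Context, db *sql.DB, stopIDs []string) (map[string]string, error) {
	if len(stopIDs) == 0 {
		return map[string]string{}, nil
	}
	placeholders := strings.TrimSuffix(strings.Repeat("?,", len(stopIDs)), ",")
	args := make([]any, len(stopIDs))
	for i, id := range stopIDs {
		args[i] = id
	}
	rows, err := db.QueryContext(ctx,
		`SELECT stop_id, stop_name FROM stops WHERE stop_id IN (`+placeholders+`)`, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	names := make(map[string]string, len(stopIDs))
	for rows.Next() {
		var id, name string
		if err := rows.Scan(&id, &name); err != nil {
			return nil, err
		}
		names[id] = name
	}
	return names, rows.Err()
}
```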
This PR got a thorough review, and I remember the back-and-forth on edge cases. That review process taught me more about the codebase than the PR itself.
Why it matters: N+1 queries are silent killers. They look fine in development and destroy you in production. Fixing them is the kind of work that the transit agencies eventually running Maglev will directly benefit from.
#648 — Reproducible Latency Profiling, Slow-Query Logging, and Benchmarking
Before this PR, if you wanted to know which GTFS database queries were slow, you had to either guess or attach a profiler manually and hope for the best. Not great.
I introduced:
- A slow-query logger controlled by the `MAGLEV_SLOW_QUERY_THRESHOLD` environment variable — if a query exceeds the threshold, it logs the query name, duration, and context. Dead simple but incredibly useful.
- Reproducible latency benchmarks for the GTFS query layer using Go's `testing.B` framework, so you could run `go test -bench=.` and get consistent numbers to compare across commits.
- Integration with pprof profiling via the `MAGLEV_ENABLE_PPROF` and `MAGLEV_PROFILE_MUTEX` flags, so developers could get CPU and mutex contention profiles without touching production configs.
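To give you a feel for it, the slow-query logger's core logic fits in a dozen lines. This is a sketch, not the PR's exact plumbing — only the `MAGLEV_SLOW_QUERY_THRESHOLD` variable name is real:

```go
package dbutil

import (
	"context"
	"log/slog"
	"os"
	"time"
)

// Read once at startup; an unset or invalid value disables logging.
var slowThreshold, _ = time.ParseDuration(os.Getenv("MAGLEV_SLOW_QUERY_THRESHOLD"))

// logIfSlow wraps a named query and logs it only when it crosses the threshold.
func logIfSlow(ctx context.Context, name string, fn func(context.Context) error) error {
	start := time.Now()
	err := fn(ctx)
	if d := time.Since(start); slowThreshold > 0 && d > slowThreshold {
		slog.Warn("slow query", "query", name, "duration", d, "err", err)
	}
	return err
}
```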
The load testing I did for my proposal (running k6 at 2000 virtual users) is what made the case for this work. At 2000 VUs, Maglev stayed functional — no failures — but average latency was 108ms, p(90) hit 441ms, and p(95) was at 655ms. The system wasn't broken; it was just database-bound and we had no good way to see exactly where.
This PR was the foundation for making that visible.
#761 — Replacing Scan-Based Realtime Lookups with a Precomputed Route Index
This is the one I'm most proud of technically.
Maglev's realtime data layer was doing linear scans through trip and vehicle data every time a handler needed to answer "which trips are currently on this route?" Under low concurrency that's manageable. Under real load — say, a transit agency running Maglev in production with hundreds of simultaneous API requests — you're scanning the same data over and over again on every single request.
I replaced those scans with a precomputed route index — essentially a map built once when the realtime feed refreshes, keyed by route ID, that gives you instant O(1) lookup instead of O(n) scan. The index is rebuilt atomically on each feed refresh and held behind Maglev's existing staticMutex pattern, so it stays consistent.
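The core of the idea fits in a few lines. The Trip type here is a hypothetical stand-in for Maglev's actual realtime structures:

```go
package realtime

// Trip is a stand-in for Maglev's realtime trip data.
type Trip struct {
	ID      string
	RouteID string
}

// buildRouteIndex runs once per feed refresh: O(n) to build,
// then O(1) per handler lookup instead of an O(n) scan per request.
func buildRouteIndex(trips []Trip) map[string][]Trip {
	idx := make(map[string][]Trip, len(trips))
	for _, t := range trips {
		idx[t.RouteID] = append(idx[t.RouteID], t)
	}
	return idx
}
```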
The performance improvement was measurable. The code became simpler too — handlers that previously needed to iterate and filter could now just look up.
This PR also fed directly into my proposal's Feature 1 (GTFS Static In-Memory Indexing) and Feature 3 (Copy-On-Write snapshots for realtime mutex contention) — I was essentially prototyping those ideas in the codebase before writing them into the proposal.
#718 — Per-Query DB Prometheus Metrics
Maglev already had Prometheus metrics for HTTP request counts and latency. What it didn't have was any visibility into individual database query behavior — so if GetStopTimesForStopInWindow started taking 200ms under load, you had no metric to prove it.
I added DBQueryTotal and DBQueryDuration — Prometheus counters and histograms that track count and latency for each named query family. Now when you're running k6 load tests and Grafana is open on the side, you can actually see which query is causing the latency spike in real time.
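The wiring looks roughly like this — a sketch using prometheus/client_golang, with illustrative metric and function names:

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	dbQueryTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "db_query_total", Help: "DB queries by query family."},
		[]string{"query"},
	)
	dbQueryDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{Name: "db_query_duration_seconds", Help: "DB query latency by query family."},
		[]string{"query"},
	)
)

func init() {
	prometheus.MustRegister(dbQueryTotal, dbQueryDuration)
}

// instrument wraps a named query so every call feeds both metrics.
func instrument(name string, fn func() error) error {
	start := time.Now()
	err := fn()
	dbQueryTotal.WithLabelValues(name).Inc()
	dbQueryDuration.WithLabelValues(name).Observe(time.Since(start).Seconds())
	return err
}
```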
Small surface area change. Massive operational impact.
Other PRs Worth Mentioning
Beyond the deep dives above, I also shipped:
- #676 — Fixed stale DB pool metrics after GTFS feed reloads. The metrics were getting stuck at their pre-reload values, giving operators false confidence about connection pool health.
- #517 — Introduced SQLite-aware batching for safe and efficient bulk operations. SQLite caps the number of host parameters in a single query — this PR added logic to chunk operations so Maglev never hits that limit (see the sketch after this list).
- #639 — Aligned API response behavior with the OpenAPI spec and improved test coverage in the conformance suite.
- #627, #655, #682, #700, #704 — A run of API correctness fixes, aligning Maglev's response shapes with the upstream OneBusAway OpenAPI spec.
- #467, #522, #528 — Performance improvements across different parts of the stack.
- #684 — Security and maintenance work.
- #762 — An ongoing discussion for further realtime indexing improvements that I was still active on around the time I pivoted to proposal writing.
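For the #517 batching above, the chunking idea boils down to something like this — a sketch assuming SQLite's classic 999 host-parameter cap (newer builds raise it):

```go
package dbbatch

// maxHostParams is SQLite's classic per-statement limit; staying
// under 999 is safe on every build.
const maxHostParams = 999

// chunk splits items so no single statement binds more than
// maxHostParams parameters, given paramsPerItem placeholders per item.
func chunk[T any](items []T, paramsPerItem int) [][]T {
	size := maxHostParams / paramsPerItem
	if size < 1 {
		size = 1
	}
	var out [][]T
	for len(items) > 0 {
		n := min(size, len(items))
		out = append(out, items[:n:n])
		items = items[n:]
	}
	return out
}
```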
The Proposal: What I Was Planning to Build
My proposal was titled "Enhancing Maglev with a Production-Ready OneBusAway API" and it covered seven features across two phases:
Phase 1 (Features 1–4):
1. GTFS-Static in-memory indexing — replace repeated SQLite lookups with O(1) map access, targeting a 60–90% reduction in static-data DB queries on hot paths (~23MB of RAM, massive latency reduction)
2. Observability tooling — Prometheus + Grafana stack with cache hit/miss metrics, per-query family metrics, and endpoint-level business metrics
3. Copy-on-Write snapshots for realtime mutex contention — eliminate reader-writer lock contention during realtime feed refreshes using `atomic.Value` (see the sketch after this list)
4. Full API parity — use `testdata/openapi.yml` as the source of truth and close every gap with the upstream OneBusAway OpenAPI spec
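Feature 3 is the one worth sketching, since it's the idea my realtime indexing PRs were circling: readers load an immutable snapshot through `atomic.Value`, and the refresher builds a new one and swaps it in. The Snapshot type below is hypothetical, not Maglev's actual code:

```go
package realtime

import "sync/atomic"

// Snapshot is a hypothetical immutable view of the realtime state.
// Publish one at startup, before any reader calls Load.
type Snapshot struct {
	TripsByRoute map[string][]string
}

var current atomic.Value // always holds a *Snapshot

// Readers never take a lock — they just load the current snapshot.
func tripsForRoute(routeID string) []string {
	return current.Load().(*Snapshot).TripsByRoute[routeID]
}

// The feed refresher builds a fresh snapshot off to the side and
// publishes it with one atomic store; in-flight readers keep using
// the old snapshot until they finish.
func publish(tripsByRoute map[string][]string) {
	current.Store(&Snapshot{TripsByRoute: tripsByRoute})
}
```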
Phase 2 (Features 5–7):
5. Database optimizations — targeted indexes on calendar_dates, stop_times, and trips composite queries; EXPLAIN QUERY PLAN validation (see the sketch after this list)
6. Test coverage enforcement — coverage baselines per package, CI thresholds, focused tests on helper-heavy modules
7. Deployment and developer docs — docker-compose.prod.yml, onboarding guide, production-safe config documentation
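For Feature 5, the validation half is the part people skip, and you can do it straight from Go. A sketch against GTFS table names — the query itself is illustrative:

```go
package dbcheck

import (
	"database/sql"
	"fmt"
)

// planFor prints SQLite's plan for a query. With the right composite
// index you want "SEARCH stop_times USING INDEX ..." in the output,
// not "SCAN stop_times".
func planFor(db *sql.DB) error {
	rows, err := db.Query(`EXPLAIN QUERY PLAN
		SELECT trip_id FROM stop_times
		WHERE stop_id = ? ORDER BY departure_time`, "some-stop-id")
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var id, parent, notUsed int
		var detail string
		if err := rows.Scan(&id, &parent, &notUsed, &detail); err != nil {
			return err
		}
		fmt.Println(detail)
	}
	return rows.Err()
}
```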
My mentor's feedback on the proposal was something I'll remember for a while: "I don't really care a lot about proposal tbh with you. I care that you have a thought about what you're doing. I care that you have a plan. The rest of it doesn't matter." Coming from a maintainer who'd been reviewing my code for weeks, that felt like the most honest and grounding thing anyone could say.
Why I Got Rejected (My Honest Take)
Two slots. Multiple strong candidates. I'm pretty sure I was in the mix.
But April happened.
Lab exams. Quizzes. End-semester exams. I basically went dark on GitHub and Slack for the entire month. No new PRs. Minimal discussion participation. The community presence I'd worked so hard to build just... stopped.
I never had a formal meeting with my mentor. All our communication was through email threads and Slack messages. When results were 48 hours away and I still hadn't heard anything, I already knew it wasn't looking great.
In hindsight: the slot limits are real, but the month of silence probably didn't help. Open source orgs value consistent presence, not just burst contributions. The best contributors show up even when it's inconvenient — especially in the weeks right before results.
I'm not bitter about it. The rejection email was genuinely warm. And I got to spend two months shipping real code into a real Go codebase that will eventually serve real transit agencies. You can't buy that kind of learning.
What I'd Tell Myself in February
Start earlier. The students who get selected almost always have contribution history from December or January. February is survivable but tight.
Don't disappear before results. Your GitHub activity in March and April is still part of the evaluation. Slack lurking counts. Commenting on issues counts. Reviewing other people's PRs counts.
Go for the hard problems first. My best PRs — the N+1 fix, the realtime index, the metrics work — happened because I looked for the things that were genuinely hard to fix, not just easy to close. Mentors notice the quality of your thinking, not just the quantity of your PRs.
Talk to your mentor. I relied entirely on async communication. A single 30-minute call would have given me more context than a week of Slack messages. Don't be shy.
The proposal matters less than you think. Your contribution history is your real proposal. The document just connects the dots.
What's Next
As for GSoC 2027 — I'm heading into my final year and next summer is going to be about landing a job, not another application cycle. And honestly? I'm fine with that. GSoC was never really the point. The point was getting into a real production Go codebase, understanding how high-concurrency systems behave under load, and proving I could contribute meaningfully to something real. That happened. The rejection didn't undo any of it.
If you're reading this because you just got your own rejection email — hey. Read it again. If it says something like "I genuinely hope you continue to contribute," that's not a consolation prize. That's a maintainer telling you the door is still open.
Walk back through it.
The proposal, PR links, and load test data referenced in this post are all from my actual GSoC 2026 application to Open Transit Software Foundation's Maglev project. If you're curious about Maglev or want to contribute, the repo is at github.com/onebusaway/maglev.
Find me at adityatorgal.tech · github.com/Adityatorgal17 · linkedin.com/in/adityatorgal