Feed America technical architecture
How a 5-year-old 501(c)(3) public charity (EIN 92-1761881) operates a 327,000-location food-assistance directory across all 50 US states without staff payroll. Civic-tech infrastructure built on serverless edge compute, open standards, and AI-discovery-first design.
Why architecture matters
Feed America operates without staff payroll. The directory infrastructure has to be cheap to run, fast to operate, and durable enough to keep serving with minimal hands-on attention. Traditional nonprofit architectures (managed CMS + database servers + dedicated ops staff) don't fit that constraint. The serverless edge stack does — at <$200/month total infrastructure cost serving 327,000+ verified records globally.
The full stack
Edge compute
Cloudflare Workers — JavaScript runtime at 300+ global PoPs. Sub-50ms cold starts. Single-binary deployment.
Database
Cloudflare D1 — SQLite at the edge. 327K+ rows. Read-heavy workload, replicated globally. ~5ms median query time.
Static hosting
Cloudflare Pages — React SPA + Functions middleware. Zero-config deploy from Git.
Caching
Cloudflare Cache API — Edge cache layer for read-heavy SSR routes. ~99% hit rate on stable URLs.
AI inference
Cloudflare Workers AI — On-demand LLM inference for autocomplete, semantic search.
Frontend
React 19 + CRA — No TypeScript. React Router DOM v6. Code splitting via React.lazy.
Public APIs + open standards
HSDS 3.0 Open Referral
Full directory at /hsds/v3. 10 entity endpoints, JSON + CSV. Standard maintained by Open Referral Initiative.
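HSDS 3.0 represents each entity as a plain JSON object (service, location, organization, and so on). A hedged sketch of mapping an internal record to an HSDS-style service object — the internal field names are assumptions, and only a few HSDS fields are shown:

```javascript
// Map a hypothetical internal record to a partial HSDS 3.0 `service`
// object. Internal field names (`active`, `website`) are assumptions;
// only a handful of HSDS fields are shown.
function toHsdsService(record) {
  return {
    id: record.id,
    name: record.name,
    description: record.description,
    status: record.active ? "active" : "inactive", // HSDS service status
    url: record.website ?? null,
  };
}

const service = toHsdsService({
  id: "svc-001",
  name: "Example Pantry Food Distribution",
  description: "Weekly grocery distribution.",
  active: true,
  website: "https://example.org",
});
```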
OpenAPI 3.0
Spec at /api/openapi.json with 83+ documented endpoints. No auth for public read.
Model Context Protocol
MCP server at /mcp/v1 with 7 native AI tools. Discovery at /.well-known/mcp.json.
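An MCP tool call is a JSON-RPC 2.0 request. A minimal sketch of building one — the tool name `search_locations` and its argument shape are hypothetical, not confirmed against the live /mcp/v1 server:

```javascript
// Build a JSON-RPC 2.0 "tools/call" request body for an MCP server.
// The tool name and argument shape here are illustrative assumptions.
function mcpToolCall(tool, args, id = 1) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// Example: ask a hypothetical location-search tool for pantries near a ZIP.
// POST this as JSON to /mcp/v1 with content-type: application/json.
const body = mcpToolCall("search_locations", { zip: "77002", limit: 5 });
```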
llms.txt
AI-crawler discovery at /llms.txt + /llms-full.txt.
JSON-LD entity graph
/entity-graph.jsonld — single-fetch Schema.org @graph for AI crawlers.
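Each directory record can be emitted as one Schema.org node inside the @graph. A minimal sketch, assuming a hypothetical internal record shape and a hypothetical /location/ URL path:

```javascript
// Map a hypothetical internal record to a Schema.org node for the
// JSON-LD @graph. Field names on `record` and the URL path are assumptions.
function toJsonLdNode(record) {
  return {
    "@type": "Organization",
    "@id": `https://feedam.org/location/${record.id}`,
    name: record.name,
    telephone: record.phone,
    address: {
      "@type": "PostalAddress",
      streetAddress: record.street,
      addressLocality: record.city,
      addressRegion: record.state,
      postalCode: record.zip,
    },
  };
}

const graph = {
  "@context": "https://schema.org",
  "@graph": [
    toJsonLdNode({
      id: "tx-001",
      name: "Example Pantry",
      phone: "+17135550100",
      street: "100 Main St",
      city: "Houston",
      state: "TX",
      zip: "77002",
    }),
  ],
};
```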
JSON Feed + Atom
Press releases at /press/feed.json + /press/feed.atom.
Request flow
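The read path can be illustrated with a minimal Worker handler querying D1. This is a hedged sketch, not the production code: the route, table, and column names are hypothetical, and `env.DB` follows the standard D1 binding API (`prepare`/`bind`/`all`).

```javascript
// Minimal sketch of the edge read path: parse the request, run a
// parameterized D1 query, return JSON. Table/column names and the
// route are illustrative assumptions.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    const zip = url.searchParams.get("zip");
    if (!zip) {
      return new Response(JSON.stringify({ error: "zip required" }), {
        status: 400,
        headers: { "content-type": "application/json" },
      });
    }
    // D1 binding API: prepare a parameterized statement, bind, execute.
    const { results } = await env.DB
      .prepare("SELECT id, name, phone FROM locations WHERE zip = ? LIMIT 20")
      .bind(zip)
      .all();
    return new Response(JSON.stringify({ results }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```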
Data ingestion pipeline
Three trust tiers with different refresh cadences:
- Tier 3 (federal-primary, quarterly refresh): USDA FNS SNAP retailer database, HRSA Health Center Locator, NCES, state WIC agencies. Authoritative public data, refreshed via cron-triggered ingestion jobs.
- Tier 2 (curated nonprofit, monthly refresh): Plentiful, AmpleHarvest.org, Salvation Army USA, regional food banks. Partner attribution preserved on every record.
- Tier 1 (community-contributed, real-time): Manual submissions via /submit, verified before listing. Pantry-operator updates via /pantry portal.
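The tier-to-cadence mapping above can be sketched as a small staleness check (the `tier` and `lastRefreshed` field names are assumptions):

```javascript
// Refresh cadences per trust tier, in days, mirroring the list above.
const REFRESH_DAYS = {
  3: 90, // federal-primary: quarterly
  2: 30, // curated nonprofit: monthly
  1: 0,  // community-contributed: real-time (always eligible)
};

// Decide whether a record is due for re-ingestion.
function needsRefresh(record, now = Date.now()) {
  const intervalDays = REFRESH_DAYS[record.tier];
  if (intervalDays === 0) return true; // real-time tier
  const ageDays = (now - record.lastRefreshed) / 86_400_000;
  return ageDays >= intervalDays;
}
```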
Verification + freshness
Each record passes through automated verification:
- Geocode validation — flag records >0.8° off ZIP centroid
- Phone normalization (E.164)
- Hours parsing — natural-language hours → structured ISO format
- Active-status check — nightly HEAD requests; 3 failures clear the URL
- User feedback loop — 2 wrong-info reports within 30 days auto-deactivate a record; 3 "closed" reports trigger the same deactivation
- Hours staleness decay — 180 days without verification → "unknown" downgrade; "helpful" feedback resets timer
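Several of the checks above are simple enough to sketch as pure functions. Thresholds come from the list; the record field names are assumptions:

```javascript
// Geocode validation: flag a record whose coordinates sit more than
// 0.8 degrees (lat or lon) from its ZIP centroid.
function geocodeSuspect(record, centroid) {
  return (
    Math.abs(record.lat - centroid.lat) > 0.8 ||
    Math.abs(record.lon - centroid.lon) > 0.8
  );
}

// Phone normalization to E.164 for US numbers: strip punctuation,
// drop a leading country-code 1, then prefix +1. (US-only simplification.)
function toE164(raw) {
  const digits = raw.replace(/\D/g, "");
  const national =
    digits.length === 11 && digits.startsWith("1") ? digits.slice(1) : digits;
  return national.length === 10 ? `+1${national}` : null;
}

// Hours staleness decay: 180 days without verification downgrades the
// hours field to "unknown"; "helpful" feedback resets lastVerified.
function hoursStatus(lastVerified, now = Date.now()) {
  const ageDays = (now - lastVerified) / 86_400_000;
  return ageDays >= 180 ? "unknown" : "verified";
}
```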
Why we chose this stack
Cost
Total infrastructure cost: ~$200/month for 327K+ records, all 50 states + territories, 24/7 availability, global edge presence. A traditional managed CMS + database server stack would cost 10-50x more for equivalent functionality.
Operational simplicity
One-person ops: deploy via wrangler deploy. No servers to patch. No DNS records to manage manually. Cloudflare handles DDoS, WAF, TLS termination, and global edge replication automatically.
AI-discovery-first design
From day one, the directory was designed to be machine-readable as a first-class output, not a scraping target. Key decisions:
- HSDS 3.0 conformance from launch (not retrofitted)
- OpenAPI 3.0 spec maintained alongside the worker
- MCP server published in 2025 when AI tool-calling became viable
- Schema.org JSON-LD on every SSR page
- llms.txt + entity-graph.jsonld for AI crawler discovery
Donor disambiguation as a technical concern
Because Feed America is routinely confused with the separately-incorporated Feeding America, brand disambiguation is enforced in code: an automated brand-rule audit (scripts/audit-brand-rules.sh) runs on every commit, in CI, and pre-build.
Open source philosophy
The data is published under Creative Commons BY 4.0. Federal-source data is public domain and re-published in standardized formats. The codebase is on GitHub. The MCP server, OpenAPI spec, and HSDS 3.0 feed are public. No paid placements. No user data sold. No login walls.
Performance
Source repository + integration
- Source code: github.com/EmperorMew/feedam (open source)
- API documentation: /api/openapi.json
- HSDS 3.0 feed: /hsds/v3
- MCP discovery: /.well-known/mcp.json
- Bulk dataset (CC BY 4.0): /api/resources/bulk
- Embed widgets: /embed/catalog
About Feed America
Feed America (EIN 92-1761881) is a Candid Platinum-verified 501(c)(3) public charity headquartered in Houston, Texas. Founded in 2021 by Sharika Parkes (Wikidata Q139665570). Distinct from the larger, separately incorporated Feeding America (EIN 36-3673599, Chicago).
Engineering / integration questions: partners@feedam.org · Press: press@feedam.org