When a journalist or a curious shopper wants to know whether a given package of beef, pork, or chicken really matches the claims on its front-of-package label, the work is straightforward in concept and slow in practice: trace the label back. Find the USDA-approved labeling submission. Find the FSIS establishment number. Cross-reference the company’s enforcement history. Read the standard the certifying body actually requires. Compare what’s verified to what the package implies. That is Stream A — what most of FAT’s existing research does, and what most consumer-protection reporting does in general. It works because retail labels have a federal regulatory backstop. The text on the package is not a free-form marketing assertion. It’s a constrained set of claims that the USDA has reviewed.
Stream B is a different lens applied to a different evidence base. Instead of asking “is the package telling the truth,” it asks “what do producers themselves disclose about how they raise animals, when they speak about it on directory listings they wrote.” The data source isn’t a label submission. It’s a producer directory — a state agriculture department’s marketplace, a certifier’s roster, a regional network of farmers and ranchers — and, when the directory links to one, the producer’s own website.
This split matters because the two streams catch different things. Stream A is precise about a narrow surface (the package). Stream B is fuzzier but covers a much bigger surface (everything a producer is willing to publish about themselves). When the two are joined, you get something neither one delivers alone: a portrait of how transparency actually distributes across an industry that touches consumers in two different ways — what they buy at the store, and what they could buy if they went looking for the source.
What a directory listing actually contains
The starting evidence in Stream B is roughly the same set of fields, regardless of which directory you walk into:
- A producer name and a contact channel.
- A street address (or at least a city and state).
- A species or product category.
- A short narrative description, written by the producer or the directory operator.
- One or more certification logos or standards claimed.
- Frequently a link to the producer’s own website.
What’s not in a directory listing — and what Stream B has to look for elsewhere — is the supply chain. Where does the animal go after it leaves the farm? Which slaughter and processing facility handles it? What’s that facility’s enforcement record? Who, structurally, owns the operation: a family, a sole proprietor, an LLC quietly tied to an integrator? Directories rarely answer those questions, because the questions weren’t part of why the directory exists. Directories exist to help a customer find a farm. Stream B asks the questions a researcher would ask after the customer has already been pointed to one.
The 15-category rubric
Stream B scores each producer profile against fifteen disclosure categories. Eleven of them mirror what a transparency-conscious shopper would already think to ask: species, breed, origin, who runs the farm, what the animals eat, how they live, what’s done about pain and disease, whether hormones are used (where they’re legal), what environmental practices apply, what certifications they hold, and where you can buy the product. The other four are the harder structural questions that distinguish a research-grade audit from a marketing summary: who actually processes the animal, what the producer’s enforcement history is, who stands behind the product economically, and what level of farm-to-package traceability the producer’s disclosures support.
Each category gets one of four scores:
- Known — the disclosure is specific and, where applicable, third-party-verified.
- Partial — the disclosure exists but is generic, marketing-level, or lacks verification.
- Missing — the category is not addressed at all.
- N/A — the category does not apply (for instance, “hormones” for pork or poultry, where federal law already prohibits the practice and a “hormone-free” label is a non-claim).
The scale matters less than the discipline of applying it consistently across every producer in every directory.
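Applied consistently, the rubric amounts to a small data model: fifteen categories, each mapped to exactly one of the four scores. A minimal sketch in Python, where the category labels are my own shorthand for the descriptions above, not the project's actual field names:

```python
from enum import Enum

class Score(Enum):
    KNOWN = "known"      # specific and, where applicable, third-party-verified
    PARTIAL = "partial"  # present but generic, marketing-level, or unverified
    MISSING = "missing"  # not addressed at all
    NA = "n/a"           # category does not apply to this species

# Illustrative labels for the fifteen categories (assumed shorthand).
CATEGORIES = [
    "species", "breed", "origin", "operator", "feed", "housing",
    "pain_and_disease", "hormones", "environment", "certifications",
    "where_to_buy", "processor", "enforcement_history", "ownership",
    "traceability",
]

def score_profile(disclosures: dict) -> dict:
    """Apply the same rubric to every producer: any category the
    producer does not address defaults to MISSING, so gaps are explicit."""
    return {c: disclosures.get(c, Score.MISSING) for c in CATEGORIES}
```

Defaulting unaddressed categories to MISSING, rather than omitting them, is what makes per-producer matrices comparable across an entire directory.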
The verification ceiling
A score of “Known” alone does not tell a reader how trustworthy the disclosure actually is. A producer can self-attest to anything, and most do. So Stream B layers a second annotation onto every Known: a verification ceiling identifying who is asserting the claim. The four tiers, from weakest to strongest:
- Producer-attested — the producer says it on their listing or their website, and no third party has reviewed it.
- Directory-verified — the directory operator has audited the claim as part of its admission criteria. A state ag program that field-checks its members. A network that requires an application packet.
- Third-party-certified — a recognized certification program — Animal Welfare Approved, Certified Humane, USDA Organic, Regenerative Organic Certified, MSC, ASC — has audited the claim and issued a seal that’s verifiable in the certifier’s public roster.
- Regulator-confirmed — the claim is attached to a public regulatory record: an FSIS establishment number, a USDA Process Verified Program certificate, a state ag enforcement order, an SEC filing.
The point of recording the ceiling is to be honest about what each Known means. “Farm raises pasture-based hogs” with producer-attested ceiling is a different reading than the same claim with a third-party certification behind it. Stream B publishes both, and labels them.
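Because the tiers are ordered weakest to strongest and the ceiling only attaches to a Known, one natural representation is an ordered enum plus a pairing rule. A sketch under those assumptions, with names of my own choosing:

```python
from enum import IntEnum
from typing import Optional, Tuple

class Ceiling(IntEnum):
    """Who is asserting the claim, ordered weakest to strongest."""
    PRODUCER_ATTESTED = 1
    DIRECTORY_VERIFIED = 2
    THIRD_PARTY_CERTIFIED = 3
    REGULATOR_CONFIRMED = 4

def annotate(score: str, ceiling: Optional[Ceiling]) -> Tuple[str, Optional[Ceiling]]:
    """Pair a category score with its verification ceiling.
    Only a 'known' carries a ceiling; other scores publish without one."""
    if score == "known":
        if ceiling is None:
            raise ValueError("every Known must record who asserted it")
        return (score, ceiling)
    return (score, None)
```

Using an IntEnum means the comparison a reader makes informally, that a third-party certification outranks a producer's self-attestation, holds in code as well: `Ceiling.THIRD_PARTY_CERTIFIED > Ceiling.PRODUCER_ATTESTED`.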
Why directory work catches things label work misses
Three things reliably show up in a directory audit that a label audit can’t see.
First, structural visibility into who is in the system. A label tells you about one product on one shelf. A directory tells you about a population — and lets you see who’s not in it. The hogs-in-the-Northeast hole that turns up in some certifier rosters, for instance, is invisible to anyone working bottom-up from labels. You only see it if you’ve enumerated everybody who is in the program.
Second, the gap between marketing language and the audit standard. When a producer publishes a free-text description of their practices, they often say more than the certifier audits — and sometimes claim things the certifier doesn’t actually verify. Stream B can flag those gaps cleanly because the rubric is explicit about which categories the certification reaches and which it doesn’t. The reader sees both layers.
Third, the staleness problem. Labels change with each printing run. Directory listings often don’t. When 70% of a directory’s entries haven’t been touched in 24 months, that fact is itself a finding — about how much weight a consumer should put on the directory as a current source of truth. Stream B records the “last modified” date for every listing it reviews, and reports the freshness distribution with the cohort findings.
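The freshness report described above is simple to compute once last-modified dates are collected. A sketch, assuming a 30-day month approximation for the 24-month staleness cutoff:

```python
from datetime import date

def freshness_distribution(last_modified: list, today: date,
                           stale_months: int = 24) -> dict:
    """Share of listings untouched for longer than the staleness cutoff."""
    cutoff_days = stale_months * 30  # approximate month length; an assumption
    stale = sum(1 for d in last_modified if (today - d).days > cutoff_days)
    return {
        "total": len(last_modified),
        "stale": stale,
        "stale_share": stale / len(last_modified) if last_modified else 0.0,
    }
```

Run against a sample of three listings where two predate the cutoff, the stale share comes out at two thirds, which is the kind of cohort-level number Stream B reports alongside its findings.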
What to expect from the Stream B series
The first set of evaluations will work through five seeded directories: A Greener World’s certified-farm roster, Abundant Montana, Minnesota Grown, Kentucky Proud, and Georgia Grown. Each directory gets a stratified producer sample, a per-producer scoring matrix held back behind a 14-day right-of-reply window, and a public cohort-level findings page. The first of those — the AGW pilot — is already published; the others will follow as their notification windows complete.
The methodology stays constant across the five. The findings won’t. A certifier-operated roster like AGW will look very different from a state-government program like Kentucky Proud, and both will look different from a member-run regional network like Abundant Montana. That’s the point: directories are not interchangeable, and the rubric is the way to surface the differences instead of treating “small farmer” as a single undifferentiated category.
A note on what Stream B is not trying to do. It’s not building a consumer-facing directory of its own. The methodology is an audit overlay on existing platforms, not a competitor to them. The evaluations are descriptive — “here is what this directory does and doesn’t verify” — not prescriptive. Readers who want to find a farm to buy from should use the directories themselves; readers who want to know how much weight to put on a given directory’s claims should use Stream B.
This post introduces the Stream B methodology. The companion post — What “Animal Welfare Approved” Actually Verifies — walks through the first applied evaluation, on AGW’s certified-farm directory.
