Science and The Ugly Truth (Part One)

By Zach Harvey

When I first jumped into the second phase of my career in the fishery, the writing and editing side of the fishing world, in 2002, I ran across what has since proven to be uncommonly sound career advice (and what I now view as a best practice for all editorial writers): If you’re going to tackle sensitive subjects and dredge up public outrage, you should take responsibility for the fallout by providing readership with some appropriate avenue to make their opinions known. You should avoid the sometimes-considerable temptation to let rip on some topic, stirring up the coals into a raging inferno, then close up shop, go on vacation, and leave the mess for the next guy who ends up in your shitstorm’s path.

I want you to understand that to write about fisheries regulation for the better part of two decades without triggering a screaming, destructive rampage or two in a fishing community that employs an alarming number of your friends and neighbors is a demonstration of inhuman restraint, one made possible mostly by the number of fishery miles I’ve trudged in other guys’ boots, so to speak.

Over the last couple of years, during which my daughter, Kaya, has begun to take a real interest in fishing with me, the whole state of our fisheries has started to wear me down in the playing-nice department.

Really, it comes down to the line you’ve all heard in one context or another: “Insanity is doing the same thing over and over and over and expecting a different result.”

The question I have is how you could tell whether that behavior is insanity as opposed to greed or stupidity. My best guess is that the distinction would relate to the degree of surprise when the thing that is inevitable and yet somehow unthinkable happens. Again. For the third–nay, fourth–time.

The trouble in the world of fishing is that when the bad thing happens again, surprise is seldom the first response. It gives me the strong suspicion that we’re not crazy–just stupid and greedy.

To be clear, a good deal of blame belongs to the regulatory end of the equation, and, by extension, our elected officials, especially the ones in Congress, whose attention span for complex fishing/marine problems covers about one feel-good, symbolic bill–let’s declare striped bass Congressionally awesome!–per election year. Beyond that, their interest in solving problems with dire implications for what’s left of our domestic fleet can be calculated using the ratio of fishing-industry lobbying/campaign dollars to oil or beef or pharmaceutical or environmental industry lobbying/campaign dollars. It works out to roughly .23 hours of focused effort per legislative session.

Then again, I see what’s going on in the world. Fishermen are not, as an industry (if recreational and commercial fishermen and gear manufacturers can even be called a single industry), one of the larger special-interest groups, whether per capita or in economic terms, and in a legislative session that hasn’t managed to see healthcare or border walls or any other issue affecting much larger segments of the US population through to a vote, it’s unrealistic to think congressmen will have time to work on such niche legislation.

Even if they had the necessary time to understand the enormous complexity of the problems the fleet or regulators face under an already cumbersome load of statute, it would all still boil down to lack of funding. That’s a reckless oversimplification. Strike that from the record.

Let me rephrase.

Almost every major, immediate problem in fisheries management stems from science: the lack of it, the cost of it, the scope of it, the quality and credibility of it, the current management approach’s procedural hunger for it. Hell, even the word “science” is wreaking havoc on fish, on regulators and regulated, at this point.

Let me back up for a moment. In the first, say, 20 years of US fisheries management, a period that witnessed the rise of so-called fisheries science, the regional councils, established in 1976 with the passage of the Magnuson Act, were populated mainly by fishermen. Even as key commercial stocks like cod or winter flounder started to buckle under staggering commercial pressure, regulators used landings data mainly as trend indicators, and were under no formal pressure to get it right. If the fleet overshot the target, they’d tighten up a little the next fishing year.

It allowed managers to kick the really painful choices down the road indefinitely, and with a notable exception or two, a grossly overcapitalized domestic fleet, built on government-subsidized loans from the early years, hammered our fish stocks–high volume, low price–to pretty near the brink of biological oblivion. They beat on specific subpopulations of spawning fish, wiped some off the face of the earth, and absolutely flattened a great deal of once bountiful seabed. In short, as has been said many times, the “foxes ran the henhouse,” and fish lost in a big way.

The passage of the Sustainable Fisheries Act in 1996 marked a major turning point. That bill set rebuilding targets for depleted (overfished) stocks, along with a 10-year deadline by which the rebuilding needed to be complete, the latter the first so-called accountability measure in our regulatory regime.

The 2007 reauthorization of Magnuson added new accountability measures to ensure regulatory compliance with rebuilding goals, including quota “paybacks,” which required that one year’s quota overruns be subtracted from the following year’s catch targets.

Target quotas, historically set by a statistical process that compared population estimates (stock assessments) with the previous year’s landings (commercial figures from “hard” fish-house numbers, recreational catch from the output of the notoriously error-prone Marine Recreational Fishing Statistical Survey, or MRFSS), were now required to factor in the degree of scientific uncertainty as a percentage, which would come right off the top as a precaution.
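The uncertainty buffer and the payback scheme described above amount to simple arithmetic. A minimal sketch, with made-up numbers and a hypothetical function name (this is an illustration of the concept, not any agency's actual formula):

```python
# Illustrative only: made-up numbers, not from any actual stock assessment.

def annual_catch_limit(target_quota, uncertainty_pct, prior_overage=0.0):
    """Shave a scientific-uncertainty buffer off the top of a target quota,
    then subtract any quota overrun from the previous year (the 'payback')."""
    buffered = target_quota * (1 - uncertainty_pct / 100)
    return max(buffered - prior_overage, 0.0)

# A 1,000,000 lb target with a 20% uncertainty buffer and a
# 50,000 lb overage carried over from last season:
limit = annual_catch_limit(1_000_000, 20, prior_overage=50_000)
print(limit)  # 750000.0
```

Note how the two mechanisms compound: the fleet fishes against a number that starts below the target and can only be pushed lower by last year’s overrun.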

There were other accountability measures as well–some aimed in the other direction. One required a total overhaul of the MRFSS, a primitive data collection program instituted more than a decade earlier to give regulators an admittedly rickety indication of recreational catch trends, year to year. MRFSS punched a limited number of angler-reported catch stats and an even more limited number of randomly generated follow-up survey calls into a convoluted equation that contained all sorts of suspect statistical assumptions. The idea, right from the outset, was that the accuracy of its numeric yield wouldn’t matter, since a few years of it would create a frame of reference to show trend: increase/decrease in rec. fishing effort or success by species. It was explicitly not meant to calculate anything a fully-formed human would mistake for actual quota performance.

With the establishment of stock-rebuilding targets and payback schemes, managers were forced to use this comically unreliable “science” to calculate actual recreational quota performance. I used to sit at my desk, querying the data, howling at some of the numbers that system kicked out. One year, as I recall, Massachusetts shore-based tautog fishermen outfished the entire Rhode Island party- and charterboat fleet. Among other problems, I noted that one of my friends from Mass had personally landed more tautog along the Mass shoreline than over a hundred of the survey’s Mass shore tautog fishermen had landed. That same year, in two months of fishing, the charterboat on which I worked caught twice what the survey tallied up as the grand total for the state’s entire fleet. And that number, 200% of the MRFSS all-year, fleet-wide total, we’d landed in our first two trips.

The real problem was (and still is, despite a whole new survey program, MRIP, instituted under statutory pressure some years back) that these numbers fed every new set of numbers the Council or Commission or the state agencies (DEM, DEP, DEC) crunched, including stock assessments that were regulators’ baseline number for the population of a given species.

It all gets back to the most dangerous three words in fisheries management: “best available science.”

Don’t get me wrong: I’m a believer in the power of statistics, and I understand fully that accuracy climbs in proportion to the length of the data’s time series, the size of the sampling base, and the extent to which data collectors respect the sampling parameters.

But when five consecutive years of data collection and algorithmic number-crunching turn five years of provable real-world zeroes into five different numbers between 6 and 13, and offer a corresponding annual percent standard error (PSE) calculation ranging from 20% to 60%?

Not one number of the lot looks even slightly defensible or bears any relationship to fishing or biological reality, but the regulatory process is going to use these same numbers to tell me we overshot our allocation and will have to take a reduction next season.
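For readers unfamiliar with PSE: it expresses the standard error of an estimate as a percentage of the estimate itself, so a big PSE means a confidence interval wide enough to swallow the number whole. A minimal sketch using the textbook normal-approximation interval (not the actual MRFSS/MRIP estimation procedure), with made-up numbers echoing the ranges above:

```python
# Illustrative only: a textbook 95% confidence interval, not the
# actual MRFSS/MRIP methodology. Numbers are invented for illustration.

def confidence_interval(estimate, pse, z=1.96):
    """Turn a point estimate and its percent standard error (PSE)
    into a rough 95% confidence interval."""
    se = estimate * pse / 100   # PSE is the standard error as a % of the estimate
    return (estimate - z * se, estimate + z * se)

# An estimate of 10 (thousand fish, say) with a 60% PSE:
low, high = confidence_interval(10, 60)
print(round(low, 2), round(high, 2))  # -1.76 21.76
```

At a 60% PSE the interval spans zero, which is exactly the point: an estimate that loose is statistically indistinguishable from the provable real-world zeroes it replaced.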

And that’s not the last of those numbers. No, they’ll be dumped into a longer equation containing all sorts of different data points, including some yielded by the best kind of field science performed by pros I trust. Other numbers in the equation will come from a scientist-led trawl survey of thousands of random tows, Georges Bank to the Carolinas, conducted over two-week cruises, spring and fall, every year for the last 30. The old numbers contain at least one year when a 7-fathom difference in the length of the research vessel’s tow cables meant their net was coming across the bottom sideways and closed. The new numbers–10 years, maybe?–will have been calibrated by side-by-side tows with fisherman-operated gear. Any way you cut it, there will be apples-to-oranges comparisons, some suspect estimate numbers, some numeric assumptions that look dodgy at best against my own observations the last 20 years on the water.

None of the models in current management use can begin to account for even the most reliable, observed predator-prey relationships.

There’s no way to correct for the territorial shifts in stocks like fluke driven by climate change. We’ll probably see the shift, like a passing shadow, in the trawl survey reports, but it won’t fit the stock assessment. And if it can’t be packaged up as peer-reviewed science, as a number compatible with the models, we can’t find a legal means to use the information in the next round of regulatory measures.

Editor’s Note: While we strive to avoid continuing columns over multiple issues, there was simply too much to condense into a single entry on the way science—a vital part of effective fisheries regulation, but also an expensive, time-intensive part of the process that is broke and short-handed—has often hamstrung decision-makers and delayed critical management action. Next month, some points on science and political meddling, and thoughts on turning it around for fish and fleet…