FCC 2010 Quadrennial Review “Consumer Survey” gets preliminary go-ahead from OMB

Readers may recall a couple of posts here last November describing a “Consumer Survey” that the Commission had sent over to OMB for its approval. (Those posts may be found here and here.) The FCC had commissioned a survey that would generate data to be “used to examine the impact of local media market structure on consumer satisfaction with available broadcast radio and television service”. It’s all part of the Commission’s 2010 Quadrennial Review.

You may also recall that, in sending it over for OMB’s thumbs-up, the Commission urged that the survey be approved by November 22, 2010 because the results were needed for a study that was due to the Commission by January 31. (Less than two weeks after pleading for expedited treatment, the Commission advised OMB that, oops, it had misstated when the various studies were due to be completed – but it still wanted OMB to approve the survey by November 22.) According to the FCC, “[a]ny delay in administering the survey will make the contractors’ already tight deadlines unworkable.”

Good news!! OMB approved the survey . . . on January 13, not quite two months beyond the Commission’s outside deadline.

It appears that OMB had a number of concerns about the survey as initially presented. The materials available for review at the OMB website reflect a major-league overhaul of the Commission’s “Supporting Statement”, along with a number of tweaks to the survey itself.

The Supporting Statement now consists of 23 pages, split up into two separate sections. That’s in contrast to the original version of the Supporting Statement filed back in November – which weighed in at a meager four pages. (Oddly, the original version appears to have gone missing from the OMB website. No worries – we kept a copy; you can read it here.) The first section (presumably from somebody at the Commission, although it’s unsigned and unattributed) now waxes eloquent about “competition” and “diversity” and “localism”. According to the Commission, the survey will, among other things, “collect information on consumers’ perception of the quality and quantity of the local content provided.” 

But the only survey question focused on “localism” reads:

A media environment with low localism provides very little or no information on local news and events. With medium localism, there is some local information, and it reflects some of the interests of your community. With high localism, the information reflects many of the issues and interests of your community.

Consider the sources of information from your media environment. Please indicate their level of localism [on a three-level scale, i.e., “Low”, “Medium” or “High” localism].

Later questions solicit the respondent’s “satisfaction” (on a five-point scale) with the level of “localism” which he/she perceives to be available. The concept of “satisfaction” is not defined in any discernible way. Neither is “localism” or “local content” (other than through a reference to “examples” like reports on “school sporting results”, “city/county elections” or “neighborhood crime”). The words “quality” and “quantity” don’t appear in the survey at all.

With all due respect, it’s difficult to see how that survey question (even with follow-ups about the undefined notion of “satisfaction”) could possibly produce any useful information at all about “consumers’ perception of the quality and quantity of local content” available to them. 

And are consumers’ “perceptions” a meaningful consideration in any event? If “localism” really is a valid regulatory concern, shouldn’t the Commission be concerned about the nature, amount and source of “local” programming actually available? (With respect to the Commission’s self-serving description of the regulatory significance of “localism”, readers might want to take a look at these Comments (continued here and here) and this law review article for a different perspective on the subject of localism.)

The second part of the new-and-improved Supporting Statement appears to have been prepared by the non-FCC folks who drafted the survey. To say that it’s technically challenging is an understatement. Be sure to have a dictionary on hand if you try to read it. (Sample terms: “cross-sectional regression analysis”, “exogenously”, “collinearity”, “dichotomous”, “bivariate probit”, “computationally intractable”.)

It’s also got a boatload of stuff like:

A linear approximation to the household conditional utility function is:

U* = β1COST + β2ADVERTISING + β3DIVERSITY + β4LOCALISM + β5MULTICULTURALISM + e

where U* is (unobserved) utility, β1 is the marginal disutility of COST, β2, β3, β4 and β5 are the marginal utilities for the media environment features, ADVERTISING, DIVERSITY, LOCALISM and MULTICULTURALISM, and e is a random disturbance.
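
For the curious, here’s a minimal sketch – our own illustration, emphatically not the contractors’ actual code or data – of how a linear conditional-utility model like that one might be estimated from stated-choice survey responses. Everything in it (the variable codings, the sample size, the “true” betas) is a hypothetical assumption, written out in Python:

# A minimal, hypothetical sketch (not the contractors' code): simulate
# stated-choice responses under the utility function above and recover
# the betas with a simple probit. Every coding below is an assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000  # hypothetical number of survey responses

# Hypothetical media-environment features shown to each respondent.
X = np.column_stack([
    rng.uniform(0, 50, n),   # COST, in monthly dollars
    rng.integers(0, 2, n),   # ADVERTISING: low = 0, high = 1
    rng.integers(1, 4, n),   # DIVERSITY: Low/Medium/High coded 1/2/3
    rng.integers(1, 4, n),   # LOCALISM: Low/Medium/High coded 1/2/3
    rng.integers(1, 4, n),   # MULTICULTURALISM: same 1/2/3 coding
])

# Assumed "true" marginal utilities; the COST coefficient is a disutility.
beta = np.array([-0.05, -0.3, 0.4, 0.5, 0.2])
u_star = X @ beta + rng.normal(size=n)  # U* = Xb + e, never observed

# The analyst sees only a dichotomous choice, e.g. "would you accept
# this media environment?": yes whenever U* exceeds zero.
y = (u_star > 0).astype(int)

# A probit on the observed choices recovers the betas up to scale.
fit = sm.Probit(y, sm.add_constant(X)).fit(disp=0)
print(fit.params)  # constant plus five estimated marginal (dis)utilities

Note what the sketch takes for granted: respondents who all understand “Low”, “Medium” and “High” the same way. The regression machinery will happily churn out coefficients either way – which brings us to the data going in.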

We’ve established in previous posts that this particular blogger is not a probs/stats jock, so you won’t find me criticizing this end of things. But even if we assume the fancy math is all exactly right, doesn’t the validity of the results depend ultimately on the validity of the data being fed into all those fancy equations?

In addition to the Supporting Statement (Parts A and B), the OMB website now also includes a “Response to OMB Review”, apparently submitted by somebody on the survey design team in answer to a series of more-or-less specific OMB questions. The interesting point here is that it looks like OMB actually did critique the original version of the survey in some detail.

Not that it did much good.

Oh sure, at OMB’s suggestion, the original reference to USA Today (at Question 42, about “diversity”) has been changed to The Wall Street Journal. And where, in the same question, the original version parenthetically assigned political values to “CNN news (more liberal)” and “Fox news (more conservative)”, those parenthetical descriptions have gone away in the new version. Another change – in Question 48, the following “example of multiculturalism” has been deleted:

[N]ews outlets that, rather than only reporting negative news from African American or Hispanic neighborhoods, such as robberies and shootings, provide a balanced story of “what is going on” in these neighborhoods.

But when OMB questioned the “vagueness of the features of the media environments” or the use of “Low”, “Medium” and “High” as descriptors for levels of, e.g., “localism” and “multiculturalism”, the survey designers simply declined to make any changes.

And with those changes and non-changes, OMB has approved the survey. Kind of. The survey has been cleared only for “the focus group and pre-test portions”, meaning that once those preliminary hurdles have been cleared, any further changes will have to go back to OMB for additional review. OMB also imposed several technical conditions.

Notwithstanding the conditions and possible further review, though, it appears that OMB is not going to stand in the way of the deployment of the Consumer Survey. It will be interesting to see whether the FCC and its contractors ever get around to administering the survey. After all, back in November, the Commission took the position that any delay in administering the survey “will make the contractors’ already tight deadlines unworkable.” OMB still hasn’t fully approved the survey, we’re now halfway through January, and it’s hard to imagine that those deadlines that were “already tight” two months ago have become any more “workable”.

But where there’s a will, there generally turns out to be a way. Ostensible reliance on extended fact-finding is something agencies like to trot out when their rulemaking actions are appealed. Cynical observers might suggest that the survey is just an effort to generate a nice batch of seemingly scientific statistics to cite in support of whatever conclusions the Commission would like to reach. The near total absence of useful definitions of crucial terms, together with other obvious shortcomings (e.g., an apparent failure to clearly delineate “broadcast” from “nonbroadcast” sources of video programming), does nothing to dispel such musings. It’ll be interesting to see how this plays out – but don’t be surprised if you hear a lot more about the survey when the FCC tries to move forward with its Quadrennial Review.