New York City Film Permits – Coming to a Neighborhood Near You

19 Mar

Want to call a bit of New York real estate all your own? Just request a film permit from the city and, once granted, you’ll be able to arrogate, as the city puts it, “the exclusive use of city property, like a sidewalk, a street, or a park” – at least for a day or two.

Quite a bit of this sort of short-terming goes on in Manhattan these days, and in the other four boroughs too; and you can arrogate the dataset devoted to all that activity, and for free, on the New York City Open Data site via this link:

Once you’ve gotten there, just click the blue Export button holding down the center-right of the screen, and select CSV for Excel once the Download As menu appears.


The data commit to cell form (as opposed to celluloid), nearly 40,000 permits secured between January 1, 2012 and December 16, 2016 (as of this writing; the data are ongoingly updated) somewhere around the city, and not just of the cinematic oeuvre. Television, commercial shootings and what are termed theatrical Load Ins and Load Outs – presumably scenery and equipment installation work for on-stage productions – are likewise inventoried.

As per so many open data renderings, a quick autofit of the sheet’s columns is in order, and you’d also probably want to let the EventAgency and Country fields in F and M respectively fall to the cutting room floor; their cell entries are unvarying and as such, unneeded.

A first, obvious plan for the data would have us break out the number of permits by New York’s five boroughs. This straightforward pivot table should serve that end:

Rows: Borough

Values: Borough

Borough (again, show values as % of Column Total)

I get:


OK – the field headers need retitling, but the data speak clearly to us. What’s slightly surprising, at least to me, is the smallish majority accruing to Manhattan. This native New Yorker would have thought that the storied island’s mystique would have magnetized still more filmic attention to itself; but perhaps Brooklyn’s latter-day repute for hipness packs a measure of ballast into the weightings. A very quick read of the Brooklyn zip (or postal) codes (column N) attracting the cameras turns up neighborhoods in the borough that border the East River, e.g., the perennially photogenic Brooklyn Heights (i.e. lower Manhattan skyline looming large in the background) and trending Williamsburg.
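For those inclined to check the pivot’s arithmetic outside of Excel, the count-and-percentage logic sketches out in a few lines of Python (the borough values below are invented placeholders; the real ones populate the CSV’s Borough field):

```python
from collections import Counter

# Hypothetical stand-in for the Borough column of the permits CSV
boroughs = ["Manhattan"] * 5 + ["Brooklyn"] * 3 + ["Queens"] * 2

counts = Counter(boroughs)                          # Rows: Borough / Values: Count
total = sum(counts.values())
shares = {b: n / total for b, n in counts.items()}  # show values as % of Column Total

print(counts["Manhattan"], round(shares["Manhattan"], 2))
```

The same two-step – tally, then divide each tally by the grand total – is what the pivot table’s % of Column Total setting performs behind the scenes.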

We could then proceed with a natural follow-on, this a distribution of permits by category:

Rows: Category

Values: Category (Count)

Category (again, by % of Column Total)

I get:


(Note: for information on New York’s permit categories, look here.) Television permits rule decisively; presumably the repeated filming regimen demanded by weekly series explains the disparity.

Now let’s break out category by borough, something like this:

Row: Category

Column: Borough

Values: Category (% of Row Total)

And we should turn off grand totals for rows; they all add to 100%.

I get:


Remember that the percentages read across. I’m particularly surprised by Manhattan’s relatively small plurality of movie and television shoots, and by extension Brooklyn’s relative appeal. What’s needed here, however, are comparative data from previous years; for all I know, nothing’s changed among the borough distributions. (Remember as well that the above table plots percentages, not absolute figures. Exactly one Red Carpet/Premiere was shot during the five years recorded here.) Note at the same time Manhattan’s huge differential among Theater permits, a predictable concomitant of its concentration of Broadway and off-Broadway venues.

And what of seasonality? We can answer that question easily enough by installing StartDateTime into Rows – grouping by Months and Years – and pitching the same field (or really any field, provided all of its cells are populated with some sort of data) into values via Count.

I get (the screen shots have been segmented in the interests of concision):


Among other gleanings, October seems to present a particularly attractive month for filmmakers, though the reasons why could probably be searched out. Note in addition the spike in permit activity in 2015, and the fairly pronounced retraction in permits last year, at least through December 16.
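The month-and-year grouping the pivot performs amounts to a tally of (year, month) pairs, a logic sketched below in Python with invented StartDateTime stand-ins:

```python
from collections import Counter
from datetime import datetime

# Hypothetical StartDateTime values; the real ones come from the permit CSV
starts = [
    datetime(2015, 10, 3), datetime(2015, 10, 9),
    datetime(2015, 7, 4),  datetime(2016, 10, 1),
]

# Group by Years and Months, as the pivot table's Rows grouping does
by_month = Counter((d.year, d.month) for d in starts)
print(by_month[(2015, 10)])
```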

But permit counts don’t tell us about the duration of the shoots, which naturally vary. Those data are here, though, and are welcomingly precise. To calculate permit lengths, all we need do is fashion a new field in the next free column (its position depends on whether you’ve deleted the superfluous fields I identified above), call it Duration, and enter in row 2:


That paragon of simplicity yields a decimal result, quantifying the proportion of the day awarded the permit holder. Copy it down the Duration column and then try this, for starters:

Row: Category

Values: Duration (Average, rounded to two decimals).

I get:


Remember we’re working with fractions of days; if you wanted results expressed in hourly terms you’d need to supplement the formulas in Duration with a *24 multiple.

We see a notably consistent range of average permit time allotments across categories with the obvious exception of Theater, whose set-up needs appear to require substantial permit times. Remember that a duration of .63, for example, signifies about 15 hours.
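The Duration arithmetic – a date/time subtraction yielding a fraction of a day, times 24 for hours – can be mimicked outside the spreadsheet; the start and end stamps below are hypothetical:

```python
from datetime import datetime

# Hypothetical start/end times for a single permit
start = datetime(2016, 5, 2, 6, 0)
end   = datetime(2016, 5, 2, 21, 7)

# Excel's EndDateTime - StartDateTime: elapsed time as a fraction of a day
duration = (end - start).total_seconds() / 86400
hours = duration * 24          # the *24 supplement mentioned above

print(round(duration, 2), round(hours, 1))
```

A duration of .63 of a day thus unwinds to a touch over 15 hours, exactly as described above.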

And if you simply add the durations (at least through December 16), the aggregate permit day count evaluates to 32,843.97. Divide the result by an average 365.25-day year, and 89.92 years worth of permit time report themselves, across a jot less than five years. That’s a lot of New York that’s been declared temporarily off-limits to the public.

Now you may also want to distribute permit prevalence by New York’s zip codes, but here a pothole obstructs the process. Because the areas requisitioned by the permits often straddle multiple codes, that plurality is duly recorded by the ZipCode(s) field, e.g. 11101,11222,11378.

But a workaround is at hand, though not the one I’d first assumed would have done the deal. What does seem to work is this:  First, range-name the ZipCode(s) field Zip, and in an empty cell – say R1 (or somewhere on a blank sheet), enter a sample zip code, say 10001. Then, say in R2, enter


That’s an array formula, its entry requiring Ctrl-Shift-Enter, programming into its result those telltale, curly braces. The formula conducts a FIND of every cell in Zip for evidence of 10001; and when it finds it – somewhere in the cell – it posts a 1 to the formula, after which all the 1’s are counted. In this case I get 1950 permits having freed up a smidgen of acreage somewhere in the 10001 code, in the neighbourhood of the Empire State Building.
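Stripped of its Excel syntax, the array formula reduces to a substring search and a count, sketched here in Python over a few invented ZipCode(s) cells:

```python
# Hypothetical ZipCode(s) entries; each cell may bundle several codes
zips = ["10001", "11101,11222,11378", "10001,10018", "11222"]

# The array formula's logic: FIND "10001" anywhere in each cell, count the hits
hits = sum(1 for cell in zips if "10001" in cell)
print(hits)
```

One caveat the FIND approach carries in either idiom: a pure substring match could in principle snag a code embedded inside a longer digit run, though the comma-delimited five-digit codes here make that collision unlikely.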

You can next copy and paste all of New York’s zip code listings into a blank spreadsheet area from here, entering New York City in the search field. Enter the above formula with the appropriate cell reference in the first of the listed codes and copy all the way down. If you sort the results largest to smallest, these codes return more than 1000 hits (note I’ve retained the web site’s neighbourhood field):


That generic “Brooklyn” 11222 code topping the list points to the borough’s Greenpoint neighborhood, one of those districts hard by the East River, as is Queens’ Long Island City.

The formula that doesn’t work, contrary to my original surmise, is

=COUNTIF(Zip, "*"&R1&"*")

That’s in effect the one that had served us so well in my various key-word assays of Donald Trump’s tweets. It doesn’t work here because, for example, the zip code 11101 entered singly is vested with a numeric format; 11101, 11222,11378, on the other hand, stands as a textual datum, and COUNTIF likewise regards the two entries as different; and because it does, it won’t find both instances of the 11101s in the same formula. But FIND does.
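The numeric-versus-text mismatch is easy to demonstrate outside Excel as well; in this Python sketch (invented cells again), a wildcard-style test that respects the cells’ types misses the numeric entry, while coercing everything to text – FIND’s effective behavior here – catches both:

```python
# Single codes arrive as numbers; multi-code cells arrive as text
cells = [11101, "11101,11222,11378"]

# Mimics COUNTIF(Zip,"*11101*"): a text wildcard never matches a numeric cell
wildcard_matches = sum(
    1 for c in cells if isinstance(c, str) and "11101" in c
)

# Mimics the FIND approach: everything is treated as text first
find_matches = sum(1 for c in cells if "11101" in str(c))

print(wildcard_matches, find_matches)
```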

Got that? I think that’s a wrap.

Chicago Homicide Data: Two Months, and Sixteen Years

8 Mar

Even given crime’s perennial, dark-sided newsworthiness, the burgeoning toll of homicides in Chicago has come in for recent, concentrated scrutiny. The New York Times has subjected the recent spate of killings in the city to recurring coverage, and even the president has informed his followers that if the bloodshed isn’t staunched there he’ll send the “feds” in to enforce the peace.

While it’s not clear who the feds are, less uncertain data on the totals are available, and with them some perspective. A recent piece on the fivethirtyeight site reminds us that while Chicago’s homicide rates have indeed bolted upward in the past two years, the figures don’t approach Chicago’s deadly accumulations of the 90s – not exactly good news to be sure, but a measure of context, at least, that grounds the discussion.

And as could be expected these days Chicago’s Open Data portal makes the homicide data available to us, along with the records of other reported crimes. You can access the crime data by traveling to the portal and clicking this link on its home page:


(note that the different Crimes – 2001 to Present – Dashboard link on the page will take the user to a series of charts founded upon the data.)

I then filtered the crime data set for records beginning January 1, 2016:



If you’re doing the same, and have downloaded the filtered 300,000-plus records (again, these recall all reported crimes) you may want to disgorge some of their fields, ones you’re not likely to apply towards any analysis (e.g., the X and Y coordinates that in effect duplicate latitudes and longitudes, the Year field, whose information can be derived from Date, and the curious Location, whose records bundle crime-location latitudes and longitudes that already appear singly and more usefully in Latitude and Longitude).

And because our download request gathers records for 2017 as well we can begin to develop some inter-year comparisons between homicide rates.

It should be noted by way of additional preamble that Donald Trump’s tweeted augury about federal action dates back to January 25, at which point the Chicago homicide total for 2017 exceeded that as of the comparable date last year by 23.5%. But as we’ve since advanced more deeply into the year (my download runs through February 27), let us extend the comparison through this pivot table:

Rows: Date (Grouped by Month and Year)

Values: Primary Type

Slicer: Primary Type, and tick Homicide

(Note the deployment of the Primary Type field in both the Values and Slicer position – an allowable tack).

I get:


You’ll note that the 2017-2016 January/February differential has disappeared. Homicide totals are now nearly identical for the two months; again, the Trump tweet referenced a homicide total of 42 through January 25, and likened it to last year’s figure of 34 through the same day. Those numbers of course are relatively (and thankfully) small, and raise a corollary sample-size question. And of course it remains to be seen if the May-August surge in homicides last year will be duplicated across the same interval in 2017.
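The Slicer-plus-Values arrangement boils down to a filter followed by a month/year count; a minimal Python sketch, with invented records:

```python
from collections import Counter
from datetime import date

# Hypothetical (Date, Primary Type) records from the Chicago crime download
records = [
    (date(2016, 1, 5), "HOMICIDE"),
    (date(2016, 1, 9), "HOMICIDE"),
    (date(2017, 1, 7), "HOMICIDE"),
    (date(2017, 1, 3), "NARCOTICS"),
]

# Slicer: tick Homicide only; Rows: month/year; Values: count of Primary Type
homicides = Counter(
    (d.year, d.month) for d, t in records if t == "HOMICIDE"
)
print(homicides[(2016, 1)], homicides[(2017, 1)])
```

Swap the "HOMICIDE" filter for "NARCOTICS" or "WEAPONS VIOLATION" and you have the other Slicer selections described below.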

Of course, once the Slicer’s Primary Type field selection is in place, any and all of its items remain available for the analysis; and by clicking some of the other Primary Types the January/February inter-year comparisons don’t trend uniformly. For example, tick Narcotics in the Slicer (and I’ve turned off the pivot table subtotals) and I get:


Here the 2017 January/February totals fall far beneath the 2016 aggregates for those two months – by about 40%, and I don’t know why. The temptation is to ascribe the retraction to some rethink of the reporting protocol, but I doubt that’s the case, though my surmise is easily researched. Click Weapons Violation, on the other hand, and

Here the 2017 figures far outpace those of the preceding year, even as, again, the homicide totals for the January/February intervals stand as near-equivalent. Perhaps the publicity about weapon-inflicted murders in Chicago has spurred the city’s police to identify weapon wielders more concertedly, with that aggressive enforcement pre-empting still more homicides. But that is speculation.

It also occurred to me that, because the larger Chicago crime data set reaches back to 2001, we could download the homicide data for all the available years, and analyze these in a data set dedicated to that lethal offense. I thus returned to the Chicago set and filtered:


The records of 8,334 homicides, again dating from 2001, populate the rows. A simple first pivot table could confirm the yearly totals:

Rows: Date (Grouped by year; remember that the 2016 version of Excel will perform that grouping automatically)

Values:  Date (Count)

I get:


The precipitous ascent of the city’s murder rate, up 85% from 2013 to 2016, is confirmed (of course some control for population increase need be factored). But some deeper looks at the data also avail. (Note that columns E through G contain uniformly identical data down their rows, and as such could be deleted. Note as well that Chicago’s population may have actually declined slightly over the reported period above.) For example – have the distributions of the crime across the 24 hours of the day varied over the 2001-2016 span? By itself, that question presents what is normally a straightforward task for a pivot table, but in this case we need to call for a workaround. That’s because if we want to simultaneously break out the data by year and hour of day we’d have to derive those data from the same field – i.e., Date in two different Grouping modes – and install these in the Row and Column areas; but you can’t assign one and the same field to both Row and Column.

And because you can’t, I’d claim the next available column, call it Hour, and enter in row 2:


And copy down the column. Now we have a second, independent field that reports a time reading, enabling us to proceed:

Rows: Date (by Years)

Columns: Hour

Values: Hour (Count, % of Row Total).

Filtering out the incomplete 2017 data I get:


The table is dense but readable and worth reading, at least selectively. Keep in mind that the percentages read horizontally across the years, returning the proportions of homicides for any year that were perpetrated by hour (for example – a percentage beneath the number 7 records the percentage of all the year’s homicides committed between 7:00 and 7:59 am). Numerous notable variations are there to be considered, e.g., homicides during 2004 accounted for 3.96% of the year’s total during the 7:00 am time band, but in the following year the figure for that hour fell to .44%. In absolute terms the numbers were 18 and 2. The percentages at 21:00 for 2006 and 2012 come to 8.81% and 3.18% respectively; the actual totals stood at 42 and 16.
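The % of Row Total computation underlying the table can be replayed in miniature; the (year, hour) counts below are wholly invented:

```python
from collections import Counter

# Hypothetical (year, hour-of-day) pairs for homicide records
events = [(2004, 7)] * 18 + [(2004, 21)] * 42 + [(2005, 7)] * 2 + [(2005, 3)] * 8

by_year_hour = Counter(events)
year_totals = Counter(y for y, _ in events)

# % of Row Total: each cell divided by its own year's total
share = {
    (y, h): n / year_totals[y] for (y, h), n in by_year_hour.items()
}
print(round(share[(2004, 7)], 3))
```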

Are these fluctuations predictably “chance”-driven, or rather, statistically and sociologically significant?

For that question, I’m not confident about my confidence-level skills.

POTUS is an Anagram for POUTS: the President’s Tweets

27 Feb

Mr. President Trump comes at you now from two Twitter handles – his stalwart @realDonaldTrump identity, which thus appears to have gained security clearance, and the unimpeachably irreproachable POTUS id, or President of the United States, for acronym watchers worldwide.

And that very plurality – and the President knows something about pluralities, excepting perhaps the one by which he lost the popular vote – begs an obvious question for Trumpologists everywhere: namely, when does the incumbent decide to tweet from this account or that one?

It sounds downright sociological if you ask me, and even if you don’t.  The presidency is what those sociologists call a master status, with its relentless gravitas seeming to bear down upon its owner just about all the time, whether he wants it to or not; and as such, can the chief executive be said, or be allowed, to check his status at the door along with his shoes when he bowls a few frames down at Trump Lanes (I’m remaining mum about those gutter balls), or spills his popcorn again even as he thrills yet one more time to that Chuck Norris epic?

I don’t know those answers, but we can go some ways toward resolving my Twitter-authorial question at least, courtesy of the go-to twdocs site, which granted me nine bucks worth of recent presidential downloads (these as of February 25), both from the POTUS account and the last 3050 epistolary gleanings (excluding retweets) from the @realDonaldTrump alter ego. (I’m perpetrating the legal fiction that you’ve downloaded these data as well, as I’m not sure what liberties I can take with my paid-for copies.)

What’s most noteworthy about the POTUS download is the confining of its output to tweets post-dating the president’s January 20 inauguration, even though I was prepared to pay for the twdoc max of 3200. It’s clear the messages of the erstwhile president have been retweeted to some vast clandestine archive, or the Smithsonian, or both. (In fact the download reports that the current POTUS account was initiated during the evening on January 19, while Barack Obama was technically still in charge.) As a result the now-fledgling POTUS account divulges a mere 7 tweets bearing the inimitable authorship of the POTUS himself; that count is tipped by the president’s signature DJT capping each self-written tweet, which I captured and counted in Excel formulaic terms thusly (after I named the text-bearing field in column B Text, if you really are downloading along with me):


(We’ve seen this formula before.)

Yet as of February 25 Mr. Trump has fired off 210 tweets from his @realDonaldTrump id since his instatement, or about six a day; it is clear, then, master statuses notwithstanding, that the username signifies the…real Donald Trump.

So what is the president wanting to tell us these days? After having coined the range name incumbent for those 211 tweet-rows, I put this formula to the data:


Thus putting in a search order for one of Mr. Trump’s current adjectives of choice. 22 of the tweets, or 10.48% of all his presidential messages, return the term, and that result appears to be unassailably real, along with Mr. Trump’s fetching penchant for emblazoning the label FAKE NEWS (appearing 19 times in the range, and 28 times among all 3050 tweets) in all-caps. Indeed, Mr. Trump’s peculiar alacrity of expression is affirmed by his recurring fondness for the exclamation mark; an extraordinary 125, or 59.8% of all his post-January 20 tweets, sport the punctuation. But in view of the larger fact that 61.5% of all his 3050 tweets are so embellished, we can’t be too surprised by his enthusiasms, though I am not sure what his tweet of February 4, reprinted here in full – MAKE AMERICA GREAT AGAIN! – means to exclaim. I think we’ve heard it before.
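The search-term and exclamation-mark counts are, at bottom, substring tests across the Text field; a Python sketch with invented tweets:

```python
# Hypothetical tweet texts; the real ones live in the download's Text column
tweets = [
    "FAKE NEWS!",
    "MAKE AMERICA GREAT AGAIN!",
    "Thank you.",
    "The media won't report this!",
]

exclaimers = sum(1 for t in tweets if "!" in t)
share = exclaimers / len(tweets)
fake_news = sum(1 for t in tweets if "FAKE NEWS" in t)  # a FIND-style count

print(exclaimers, share, fake_news)
```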

For some other search-term frequency counts:


One could allow oneself to be struck by the paucity of references to “Islamic” or “Putin”, and the relatively prominent and decorous “thank you” and “Congratulations”. No one said Mr. Trump isn’t a nice guy. He does have a thing about “media”, though.

I was additionally interested in how Mr. Trump’s tweets spread themselves across time. That is a question we had explored in an earlier post, but there with a qualification: because twdocs’ Created At field details tweet times per French standard time, and because Trump could have been here or there in mid-campaign, calibrating his time of transmission, as opposed to the time recorded by twdocs’ French server, stood as something of a challenge. Trump the office seeker could have, after all, been in New York or Los Angeles, or somewhere in between. But now that he has entered the office we may be safe in assuming that the great preponderance – if not all – of his presidential tweets to date have been east coast-timed. And if so, then a constant six-hour decrement – the time difference between France and Washington – could be applied to each of the Created At times reported in column A. And again, that workaround can be realized by opening a new column to the immediate right of A, titling it EastCoast or something kindred and entering, in what is now B7 (the first row of data):


The decimal of course represents one-quarter of an elapsed day, or six hours. Thus the date/time in A7 – 2/25/2017 23:02 – retracts to 2/25/2017 17:02 in B7, once the -.25 is applied. Copy down the B column and then try out this pivot table:

Rows: EastCoast (Grouped by Hours only)

Values: EastCoast (Count)

EastCoast (again, by % of Column total)

I get:


We see that the President is an early-morning tweeter, squeezing 40% of his output into the 6 to 9AM stretch (note that the 8AM grouping registers tweets through 8:59), and not a chirp to be heard between 1 and 5AM.
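The six-hour decrement, and the hour band the pivot table then groups on, sketch like so (using the 23:02 stamp cited above):

```python
from datetime import datetime, timedelta

# The Created At stamp from column A, per twdocs' French server clock
created_fr = datetime(2017, 2, 25, 23, 2)

# Excel's -.25: subtract a quarter of a day, i.e. six hours
east_coast = created_fr - timedelta(hours=6)

print(east_coast.hour)   # the hour band the pivot groups on
```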

But doesn’t that measure of silence, at least, qualify as good news?

NHS Patient Waiting Times, Part 3: Weighting the Formulas

12 Feb

Welcome back to the next installment of spreadsheet cubism, in which the same data task is imaged from multiple formulaic vantages.

We’ve already submitted the NHS patient waiting data to two such looks, and here comes the third, an arrestingly different one; arresting, both because of its startling simplicity, and because its workings have been hidden in plain sight from Excel users for some time.

Again, our interest is in learning the number of patients waiting for an identified number of weeks for treatment in one of 19 different medical specialties (and we’re continuing to work with the IncompProv tab). Our vantage begins to come into view when we select the F3:BG22 range that rectangulates (it’s a word; I checked) the week-waiting numbers, bordered by medical specialty.

Then perform a pair of identical finds and replaces, first on F4:F22 and then on G3:BG3 – or the ranges that carry medical speciality and week-wait headers respectively:


That is, replace every instance of a space among the earmarked cells with an underscore – yes, very much a to-be-explained step.

Re-select F3:BG22 and turn next to an old Excel capability to which I’ve probably given rather short shrift, but have grown to appreciate of late – the Create Names from Selection feature, brought to you via the Defined Names button group in the Formula tab:


The default decisions above ascribe range names to every row and column in the selected range, the names coterminous with the labels in the range’s top row and left column – but you’ve probably figured that one out.

Our medical speciality dropdown-list menu remains in place in E1, and we’ll proceed to slot another dropdown in D1, this one comprising the week-wait labels in G3:BG3 (in fact the Treatment Functions Description in F3 likewise names its range, i.e. the medical speciality rubrics in F4:F22, but that name isn’t contributory to the process here). Then enter yet another dropdown in D2, referencing precisely the same range assigned to the menu in D1; and that apparent redundancy needs to be understood, of course (in fact you can simply copy D1 to D2; the menu will be duplicated).

And now, by the way – and this aside is far from incidental – we’ve learned why we needed to substitute the underscore for all those spaces a few paragraphs ago. Excel named ranges must comprise labels of contiguous text, and so when the spreadsheet meets up with a multi-worded header, it insists on exchanging a word-linking underscore for every space when it forges the range names. We thus had no choice but to do the same with the labels in F4:F22 and G3:BG3, in order to anticipate Excel’s range-naming mandate.
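The Create Names from Selection behavior – labels become names, spaces become underscores – can be mimicked with a dictionary; the headers below borrow the sheet’s naming pattern, but the values are invented:

```python
# Header labels as they appear on the sheet, spaces intact
headers = ["Gt 00 To 01 Weeks SUM", "Gt 05 To 06 Weeks SUM"]
values  = [[10, 12], [7, 9]]   # hypothetical per-week column data

# Excel's range-naming mandate: every space becomes an underscore
named_ranges = {
    h.replace(" ", "_"): col for h, col in zip(headers, values)
}
print(sorted(named_ranges))
```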

Now back to our budding formula. Say we want to learn the number of patients who have waited up to six weeks for treatment in oral surgery. Select that speciality in E1, and select Gt_00_To_01_Weeks_SUM in D1:


Tick Gt_05_To_06_Weeks_SUM in D2, because we’re counting patient numbers through week 6 for the oral surgery specialty.

Then, in a free cell, enter:


Which evaluates to 68,045, the number of oral surgery patients waiting up to six weeks for treatment.

That expression doubtless merits a to-be-explained as well. First, that is indeed a space insinuating itself between the first and second INDIRECT, and not a typo, begging in turn a rather pressing if rhetorical question: where does one find a space pushing its way inside an Excel formula?

You find it here. The space – and again, its functionality isn’t new to Excel – qualifies as an actual mathematical operator, of an operational piece with the traditional go-tos +, -, /, *. The space performs an act of identification: that is, in its base mode pinpoints a value standing at the intersection of two ranges. (And thanks to Jordan Goldmeier’s and Purnachandra Duggirala’s Dashboards for Excel, which promotes and explains the space operator approach.)

By way of a more straightforward introductory example, consider this assortment of test scores lined by student names and subjects, say in A1:G11:


(Note the collapsed space in the polisci entry, in anticipation of the Create from Selection’s name-underscoring practice.) Dubbing ranges again via the Create from Selection protocol, this unnervingly spare formula:


returns 72, the value that stands at the confluence of ranges Maureen and Art.

The solution starkly pares the standard INDEX/MATCH solution to what is in effect this lookup task; indeed, so lean is the space-operator prescription that one is bidden to ask why it doesn’t predominate among users and commentators. I’m asking, but I don’t have the answer.
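In Python terms, the space operator’s intersection amounts to a pair of keyed lookups; the Maureen/Art pairing and the 72 echo the demo above, and the rest of the scores are invented:

```python
# Row and column names keying into the grid of test scores
scores = {
    "Maureen": {"Art": 72, "Polisci": 85},
    "Derek":   {"Art": 64, "Polisci": 91},
}

# The effect of =Maureen Art: the value at the crossing of the two ranges
result = scores["Maureen"]["Art"]
print(result)
```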

Now of course our denser formula departs from the above demo expression. For one thing it packs several instances of INDIRECT into the mix (see our discussion of that function here), because our three contributory dropdown-menu selections have returned merely textual references to the ranges with which we’re working, and these thus need to be synergized into actual, working range citations.

And the


half of the formula empowers it to find all the values dotting the intersections of the oral surgery specialty and all the columned ranges between and including the two we’ve actually identified in the formula; the space operator can do that, too (and note the placement of the parentheses, by the way).

And because the formula has sighted the multiple data points crossing their multiple intersections, the SUM function must gather these into a single total – our answer. Note that the standard INDEX/MATCH lookup can’t carry out this additive step – at least I don’t think it can.
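The summed-intersection idea – one specialty row crossed with a run of week columns, the crossings totted up – sketches as follows (week labels and figures invented):

```python
# Hypothetical week-column labels and one specialty's per-week patient counts
weeks = ["Wk1", "Wk2", "Wk3", "Wk4"]
oral_surgery = {"Wk1": 20000, "Wk2": 18000, "Wk3": 16000, "Wk4": 14045}

# The two dropdown selections: the first and last week columns to include
start, stop = weeks.index("Wk1"), weeks.index("Wk3")

# SUM over every intersection of the specialty row and the selected columns
total = sum(oral_surgery[w] for w in weeks[start:stop + 1])
print(total)
```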

It’s probably worth your while, then, to learn more about the streamlining efficacy of the space operator. It’s been worth my while – and I’m lazier than you.

NHS Patient Waiting Times, Part 2: Weighting the Formulas

30 Jan

A certain type of spreadsheet bliss attaches to relative ignorance. Know just one formulaic way around a task, and your decision rule requires no deciding: write the formula.

But know at least a couple of ways to get where you want to go, and you’ll need to break out the map and plot your best-course scenario, or at least try to. Last week’s post described one means, driven in part by a teaming of the OFFSET and CELL functions, for totalling the number of patients having to wait up to a specified number of weeks for treatment in an identified medical speciality, those data emanating from the National Health Service spreadsheet upon which we drew in the post. But alternative means toward that end are available; and why you’d make use of this as opposed to that one stands as a good question, the answer to which has a lot to do with the confidence you can marshal toward the approach. Find one formula among the options the easiest to write and you’ll likely be magnetized in that direction – even if in some absolute, textbook-ish sense, some other formula comes best in show for elegance.

In any case we can, in the interests of informed choice-making, review two other formula possibilities, braced by the corollary concession that still others may be camouflaged in the brush. Remember we’re interested in learning the number of patients in a medical speciality who waited a maximum, stated number of weeks for treatment (we’re continuing to address the workbook introduced last week). We can begin to put our second option into play by retaining the dropdown menu of medical specialities in E1 (founded on cells F4:F22) we forged last week (look here for a precis of dropdown menu construction, if you’re new to the idea. You don’t really need to name the range, though, in spite of the linked discussion’s advisory – at least not in our case, as its contents won’t be augmented). Then enter a week-waited number in D2, say 12, and for illustration’s sake select ENT from the speciality dropdown in E1:


Thus we’ve declared an interest in discovering the number of individuals who had to wait up to 12 weeks before receiving ENT treatment. Then, enter this expression, say in H1:


This formula likewise calls for an exposition, needless to say. SUMPRODUCT, Excel’s quintessential off-the-shelf array formula, is perhaps the deepest of the application’s functions, its diffident, user-familiar tip (i.e., it multiplies pairs of values and proceeds to add them all) transmitting the weakest of signals about the iceberg immured beneath.

Here, SUMPRODUCT combs F4:F22 for the medical speciality – ENT – we ticked in E1.  And you’ll observe that the search and find for ENT is pressed without any syntactical resort to the standard, conditional IF; that word is nowhere to be found in the formula. When it finds ENT – in F7 – the formula moves to examine row 7 for the values running across its contiguous, number-bearing cells, in columns G through BG (note too that the F4:F22=E1 phrase is couched in parentheses).

And that sweep through the columns takes us to the *(COLUMN(G3:BG3)<=D2+6) piece of the formula. The star/asterisk reminds us that SUMPRODUCT continues to do what it’s been programmed to do – assign a value of 1 to its successful sighting of the requested medical speciality ENT (a sighting that first imputes the name TRUE to the finding in F7 – and in the Boolean language of array formulas, TRUE is next quantified into a 1). In turn, the other non-complying entries in F4:F22 are deemed FALSE, and incur a 0 as a result. COLUMN identifies the absolute column number of any cell reference; thus =COLUMN(X34) returns 24, for instance.

Befitting its multi-calculation, array formula character, COLUMN(G3:BG3) flags the column number of each of the G3:BG3 entries, asking if any of these equal or fall beneath the value 18, that number a resultant of the 12 we entered in D2 – plus 6, a necessary additive that squares our formula with the fact that the first column we’re inspecting – G – has a column value of 7. Adding the 6, then, to the 12 in D2 – the week wait figure – transports us to the 18th column of the worksheet, R. And R contains the Gt 11 to 12 Weeks data that marks the outer bound of our search. And any column that satisfies our criterion – any week equal to or less than 12 – likewise receives a TRUE/1 evaluation.

Remember again that, convolutions notwithstanding, SUMPRODUCT is about multiplication – and here the 1 assigned F7 is multiplied (remember the asterisk) by all columns in receipt of a 1. All other cells – that is, all the other medical specialties in the F column, and all the weeks in excess of 12 – receive a zero, and their multiplications yield zero and drop out of the formula. The remaining 1’s, so to speak, are multiplied by the ENT values for week 12 and before in cells G4:BG22, and ultimately added – because that’s what SUMPRODUCT does.
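SUMPRODUCT’s Boolean mechanics – TRUE/FALSE coerced to 1 and 0, the masks multiplied against the data and summed – can be replayed in miniature (three invented specialties and three invented week columns):

```python
specialties = ["General Surgery", "ENT", "Ophthalmology"]
# Hypothetical per-week patient counts, one row per specialty
data = [
    [100, 110, 120],
    [200, 210, 220],
    [300, 310, 330],
]
target, max_week = "ENT", 2     # count ENT patients through week 2

row_mask = [1 if s == target else 0 for s in specialties]     # (F4:F22=E1)
col_mask = [1 if w + 1 <= max_week else 0 for w in range(3)]  # COLUMN(...)<=D2+6

# Multiply the masks by the data and add everything, SUMPRODUCT-style;
# any cell failing either test is zeroed out of the total
total = sum(
    row_mask[r] * col_mask[c] * data[r][c]
    for r in range(3) for c in range(3)
)
print(total)
```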

Again, we’re counting the number of ENT patients needing to wait up to 12 weeks for treatment, and in this case I get 190,480. Type a different week number in D2 and/or select a different speciality from the dropdown menu in E1, and the sum should respond accordingly. And because SUMPRODUCT is a homegrown Excel function, it stores itself into its cell via a simple strike of the Enter key – and not the storied Ctrl-Shift-Enter in which customized array formulas are obliged.

Hope that’s halfway clear – though halfway probably won’t help you write the formula when and if you need to write something like it some other time. This application of SUMPRODUCT is a good deal more thought-provocative than its simpler implementations, but if it makes you feel any better I had to think about it, too.

The point again is that SUMPRODUCT has delivered us down an alternate formulaic route to our answer, and whether it’s to be preferred to the OFFSET-mobilized variant we explored last post remains a good question.

But there are still other possibilities.

NHS Patient Waiting Times, Part 1: Weighting the Formulas

22 Jan

Divide finite resources by a spiking demand and the quotient shrinks, a mathematical truism all too pertinent to the UK’s National Health Service nowadays. Evidence of a system in crisis commands the country’s media and political attention, and of course data contribute integrally to the debate. One testimony – an NHS cataloguing of hospital treatment waiting times distributed by hospital and medical specialty – is here:


The workbook’s front end comprises a Report tab, whose dropdown menus synopsize patient waiting time data for any user-selected hospital and treatment area. You may want to trace the contributory formulas, and note the lookups performed by the OFFSET function, about which we hope to say a bit more a bit later. Note as well the section topped by the Patients Waiting to Start Treatment reference data, vested in the IncompProv sheet; information about patients who completed what the NHS terms their pathway and started treatment is displayed by the two recaps that follow, the first keyed to the AdmProv sheet, the second to NonAdmProv.

Those sheets – that is, the ones bearing the Prov term – are all comparably organized, each earmarking and cross-tabbing its first 19 records by medical speciality aggregates and waiting times, these expressed in one-week intervals. The next, 20th record then vertically sums the per-week totals; the remaining records break out the data by individual hospitals.

Those 20 aggregating rows loose a familiar complication upon the data sets, at least for the pivot tablers among you. As per previous discussions, the rows in effect double-count the patient figures, by duplicating the hospital-by-hospital totals. And again, for the pivot-table minded, the weekly wait fields could have been more serviceably sown in a single field, for reasons we’ve recalled numerous times.

But those impediments don’t daunt all efforts at analysis. You’ll observe the GT 00 to 18 Weeks SUM fields posted to the far reaches of the PROV sheets, these tabulating the numbers of patients obliged to wait up to 18 weeks for the respective attentions described in the sheets. I began to wonder, then, if an efficient formulaic way to measure patient waits for any number of weeks could be drawn up, and I came up with two (and there are others).

That is, I want to be able to learn for example how many General Surgery patients had to wait up to 14 weeks for treatment, by empowering the user to enter any medical specialty and week-wait duration (I’m assuming here that the wait is to be invariably measured from the inception of the wait – 0 weeks).

Assuming I’m looking at the IncompProv sheet – though what we do there should transpose pretty effortlessly to the other Prov data – I’d begin by instituting a dropdown menu of medical specialties, say in E1. I’d continue with another dropdown, this one unrolling the wait headers set down in row 3 (that is, the dropdown data – and not the menu itself – are to be found in row 3).

I’d next select F3:BG22, thereby spanning all the specialty aggregate names along with the week-wait values data. While the selection is in force I can muster an old but valuable option, Formulas > Create from Selection in the Defined Names button group. That run-through takes me here:


Click OK and each row and column in the selection receives a range name coined by the label to its left or immediately above its column (note that labels themselves are not enrolled in the range, and Excel’s reference to values in the above may mislead you. The ranges here are all textual). And by way of a final, preparatory touch, I’d consecutively number the weeks right above their columns in row 2, assigning 1 to 0-1 Weeks, 2 to 1-2 Weeks, and so on.

Remember we’re attempting to tally the aggregate number of patients awaiting treatment in a selected speciality for from 0 to a specified number of weeks. Per that agenda, I’ll enter the number of weeks – that is, the maximum specified week wait – in F2. Let’s say I enter 15, and then proceed to click on Ophthalmology cell F8. I’m thus instructing the budding formula to calculate the number of patients who awaited treatment in that speciality for a period of up to 15 weeks. Next I’ll enter the formula, say in E2:


I sense an explanation is in order, so here goes. In the final analysis, of course, the above expression sums patient numbers for the given specialty across a variable number of weeks – in the example, 15; and the formulaic potential to confront those variables is realized here by the OFFSET function.

OFFSET offers itself in two syntactical varieties, one requiring three arguments, the other five. The lengthier version, applied here, serves to identify the coordinates of a range, which will in turn submit itself to the SUM. In our case, the first of OFFSET’s arguments gives itself over to the INDIRECT function, itself performing a cell-evaluational role upon the nested CELL(“address”) expression. CELL(“address”) will, upon a refresh of the spreadsheet (triggered by the F9 key or some data entry elsewhere on the worksheet), return the address of the cell in which the cellpointer currently finds itself. Thus, for example, if you enter =CELL(“address”) in A12 and proceed to enter a value in C32, the function back in A12 will return $C$32 – the cell reference, not the cell’s value. But $C$32 is a textual outcome, i.e., not =$C$32. INDIRECT then restores a bona fide cell-referring status to $C$32, and reports the value currently stored in that cell.

So what does this have to do with our task? This: click on any medical specialty in F3:F22 and refresh the sheet. The address of that specialty populates OFFSET. For example, click on the name Ophthalmology, refresh, and OFFSET will turn up F8, for our purposes the first cell in the range we’re striving to define. The two zeroes that follow tell OFFSET to in fact define the range precisely at the F8 inaugural point – that is, not to move any rows or any columns away from that F8 range anchor. The 1 next instructs the formula that the imminent range comprises but one row – the row on which the Ophthalmology data are resting; and the F2+1 measures the range’s width in number of columns – in our case 16, or 15 plus 1 – plus 1, because the data for week 15 stand in the 16th column from the medical speciality in which we clicked to initiate the formula.

The answer in this particular case, then, is 330,472, a count of the number of ophthalmology patients who had to wait up to 15 weeks to receive treatment; and you could divide that figure by the Ophthalmology patient total – VLOOKUP(CELL(“contents”),F4:BH22,55,FALSE) – to learn that about 86% of the patients were waiting up to 15 weeks for treatment. So again, by clicking on Ophthalmology in F8 and refreshing the spreadsheet, the formula is put to work.
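For what it’s worth, the OFFSET-and-SUM maneuver reduces to a slice-and-total in Python. The Ophthalmology row below is invented; in the sheet the anchor is the clicked name cell in column F, with the weekly counts running off to its right:

```python
# Hypothetical row mirroring the sheet: the anchor cell carries the specialty
# name, and the cells to its right carry the per-week patient counts.
row = ["Ophthalmology", 30000, 29000, 27500, 26000, 24500, 23000, 21500,
       20000, 18500, 17000, 15500, 14000, 12500, 11000, 9500]

def offset_sum(anchored_row, weeks):
    """OFFSET(anchor, 0, 0, 1, weeks + 1) then SUM: a range one row high and
    weeks + 1 columns wide, beginning at the anchor itself.  SUM ignores the
    text label, so only the weekly counts contribute."""
    window = anchored_row[:weeks + 1]          # the OFFSET-defined range
    return sum(v for v in window if isinstance(v, (int, float)))

print(offset_sum(row, 15))
```

Note that the window deliberately includes the text anchor – just as the Excel range does – and relies on SUM’s indifference to labels.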

But there are other formulas that will work, too.

The Guardian’s Top 100 Footballers: Rating the Rankings

8 Jan

You like to window shop, I like to Windows shop; and my curiosity-driven gaze through the virtual panes shone its beam on yet one more data set devoted to the ranking of athletes, this time of the football/soccer genus.

Wheeled into view by the Guardian in Google spreadsheet mode and attired anew here for you in Excel:


the data sort the results of the aggregated, columned (P through EI) judgements of 124 sportswriters, each of whom nominated up to 40 players, assigning each a point evaluation ranging from a maximum of 40 downwards (you may want to consult Sky Sports’ differently-ordered top 100 here).

Heading the list – which highlights the top 100, though 380 players were presented to the judges – and unsurprisingly so, is Portugal’s, and Real Madrid’s, Cristiano Ronaldo, putting 68 points worth of distance between himself and the no-less-unsurprising Lionel Messi, he of Argentina and Barcelona. That three of the rankings’ top five players populate the same Barcelona front line, even as that team merely holds down second place in its league, is the kind of sporting discrepancy sure to give the sages something to think about, but I’m a spreadsheet guy. And the fact that only Ronaldo and Messi appeared on every sportswriter’s ballot – and that, as a consequence, some rather formidable players found no place at all among some writers’ top 40 – is perhaps equally extraordinary. Indeed – the Algerian and Leicester City star Riyad Mahrez won one first-place vote – even as he was completely shunned by 16 other writers.

But what of the spreadsheet? Start at the beginning, with its header data walking on the sheet’s ceiling at A1:O1. Those identifiers need to be lowered into row 4, hard by their attendant field data; and by extension, the hyperlink to the judges’ names and rules for assigning rankings stretching across the merged cell P3 must be moved, or its contiguity to the dataset will commit an offside against any pivot table built from the records.

I’m not sure why the sheet’s numbers were left aligned, either, though I’m not pressing any charges for that formatting decision. And because it’s clear that the Guardian has player birth dates – otherwise, the Age at 20 Dec 2016 data could not have been furnished – exactitude might have been slightly better served, and pretty easily at that, via the YEARFRAC function, and an age calculation extended to a couple of decimal points, e.g. 27.83.

But if all that qualifies as a nit-pick, the rankings themselves in column A could be more justifiably questioned. Players with equivalent ratings, e.g. Alex Teixeira and Anthony Modeste at 156, were enumerated thusly: 156=, an expression that strips the number of its quantitative standing. Why not assign a 156 to each player instead, as do the women’s and men’s tennis rankings we reviewed in our immediately previous posts?

Moreover, some apparently identical scores appear in fact to have been differentiated. Raphaël Varane and Serge Aurier both check in with 34 points and five sportswriter votes cast, the latter a rankings disambiguator for the Guardian; yet Varane earns a ranking of 118 to Aurier’s 119. The same could be said about Omar Abdul Rahman and William Carvalho, invidiously niched at 133 and 134. In any event, if you do want to numerate all the rankings, say by figuring average ranking by country and/or team, then point a find-and-replace at the data, finding every = and replacing these with nothing.
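That find-and-replace can be mimicked in a line of Python; the ranking strings below are sample values only:

```python
# Strip the trailing "=" from tied rankings like "156=" so they can be
# averaged and sorted as genuine numbers (invented sample values).
rankings = ["118", "119", "156=", "156=", "133"]

cleaned = [int(r.replace("=", "")) for r in rankings]
print(cleaned)
```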

I’d also withdraw the blank, colored row 105 that hems the top 100 from the lesser-rated crew beneath; while the row means to delimit and frame the footballing elite, per the Guardian’s story, any analysis of the larger cohort of course would need to unify the data set – and that means dismissing its blank rows. In that spirit, then, you’d also want to disavow row 129; whatever informational service its text may perform, it isn’t a ranked player’s record. I also can’t explain why some of the country and club entries, e.g. cells G207 and H219, appear in blue, or why Antonio Candreva in row 213 is described as Italian when his countrymen are identified with Italy. In addition, Nani’s (row 83) nationality is ascribed to Portugal, but with a superfluous space, as is Paul Pogba’s France (row 21).

Also, the formulas cascading down the Up/down field in B that meter a player’s current movement through the rankings from his 2015 score could have simply read =C5-A5, for example, sans the SUM function and its parenthetical braces.

But note as well that players who went unranked in the preceding year received a hard-coded NEW classification for 2016 that in fact could have been subsumed formulaically, e.g.


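The formula itself isn’t reproduced here, but the logic it would embody is easy to sketch in Python – assuming, as this sketch does, that a blank 2015 cell (None below) marks a previously unranked player:

```python
# Movement through the rankings: 2015 rank minus 2016 rank (=C5-A5 in the
# sheet), with NEW substituted when no 2015 rank exists.  The None-for-blank
# convention is an assumption of this sketch.
def movement(rank_2015, rank_2016):
    return "NEW" if rank_2015 is None else rank_2015 - rank_2016

print(movement(10, 4), movement(None, 50))
```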
You’ll also note that the formulas in O compiling the number of a player’s first-place votes look like this:

=COUNTIF(P5:EI5,”40″)

The quotes are superfluous, and I confess to surprise that the formula works. The entries in P5:EI5 are values, after all, not labels. But work it does.
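In Python terms the column-O tally is a straight conditional count – the sample vote values below stand in for the real P5:EI5 entries:

```python
# Count a player's first-place votes: how many judges awarded the maximum 40
# points (the scores here are invented).
scores = [40, 38, 40, 35, 40, 29]

first_place_votes = sum(1 for s in scores if s == 40)
print(first_place_votes)
```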

And for another matter that warrants our scrutiny, consider the Highest Score Removed field in L. The Guardian determined that any player’s highest rank – or at least one instance thereof – be stricken from his final score as an outlier. That sort of decision rule isn’t unprecedented – figure skating and gymnastics judging protocols often drop highest and lowest scores – but in those sports the extremes at both the high and low end are ignored; the Guardian only points its thumb down at the high – again, just one high, even if others have been issued to the player. Ronaldo’s 63 first-place votes are thus contracted to 62, but those 62 of course exhibit precisely the same score as the ostensible outlier.

Moreover, and unlike other juried events, the number of judges who decided to score a given player here is very much a variable. Thus we need to account for the 76 players whose final score of 0 belies their receipt of an actual, if solitary, vote. That vote, of course, was barred as an outlier, leaving the player with nothing, so to speak. Along these exclusionary lines, it follows then that players named by exactly two sportswriters incurred the loss of their higher score – even though one could challenge the insistence that these are somehow more outlying than their other, lower score.
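The decision rule, whatever its merits, is simple to state in code; here’s a sketch of the scoring as described, with invented vote lists:

```python
# The Guardian's rule as described: strike one instance of a player's highest
# score, then total what remains.  A single-vote player thus finishes on zero.
def final_score(votes):
    if not votes:
        return 0
    return sum(sorted(votes)[:-1])   # sorted puts the (one) max last

print(final_score([40]), final_score([40, 40, 38]), final_score([30, 25]))
```

Note how a two-vote player loses the higher of the pair, and Ronaldo’s 63 maximums would shed exactly one of their number.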

The matter of how to dispose of score extremes has been disputed (see this mathematical exploration) – it would be difficult, for example, to imagine an even halfway-well-intentioned teacher dropping a student’s highest test score (though the classroom scenario features one judge of many test performances; in figure skating many judges arbitrate one performance) – but in the interests of pressing on we could, for example, learn something about the larger aggregate picture by approving a data set comprising the 254 players who received at least one Raw Total point, i.e. prior to the removal of their highest score. If we’re provisionally satisfied with the makeup of that cadre, we could for starters simply pivot table a count of players by country and country raw point total:

Row: Nationality

Values: Nationality (by Count, of course; the data are textual).


RAW TOTAL (again, by Average, formatted to two decimals and with a comma)

I get, in excerpt:


We see that Spanish players win appearance honors, but among the more productive countries Argentina claims the highest average player score, and by quite a margin.
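The pivot table’s arithmetic amounts to a count and a mean per nationality; a plain-Python sketch with invented records, not the Guardian’s figures:

```python
from collections import defaultdict

# Invented (nationality, raw total) pairs standing in for the 254-player set.
players = [
    ("Spain", 120), ("Spain", 95), ("Spain", 60),
    ("Argentina", 300), ("Argentina", 280),
    ("Brazil", 150),
]

counts = defaultdict(int)    # players per nationality
totals = defaultdict(int)    # raw points per nationality
for nationality, raw_total in players:
    counts[nationality] += 1
    totals[nationality] += raw_total

averages = {n: round(totals[n] / counts[n], 2) for n in counts}
print(counts["Spain"], averages["Argentina"])
```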

And if you’re wondering, there’s one American player in there, even as he didn’t make the screen shot cut above – Christian Pulisic, who plays for the German Borussia Dortmund squad and is ranked 138th, with 25 raw-totalled points. But he’s only 18 years old – and if we rank the players by age, he comes in at number 2.