
Philadelphia Police Complaints, Part 2: One Civilian’s Review

20 May

Now that I’ve spoken my piece about the putative redundancy of the two Philadelphia civilian police complaint workbooks, fairness insists that I actually take a look into the data that’s roosted inside worksheet number two – because there are findings to be found in there, in spite of it all.

That book, ppd_complaint_disciplines, concentrates its gaze on the officers against whom civilians preferred a grievance. The complaints are additionally parsed into the districts in which the alleged offenses were perpetrated, and the sheet classes the complaints along with their dispositions.

Once we’ve widened the fields via the usual auto-fit we see that a good many of the complaint incidents identify multiple officers, as well as different allegations. We could then move to determine, for starters, the average number of allegations lodged per complaint. But that simple-sounding intention isn’t realized quite so straightforwardly, because we need to isolate the number of unique complaint ids in column A before we divide them into all the complaint particulars; and the elegant way out would have us travel here, to this array formula:

=COUNTA(A2:A6313)/SUM(1/COUNTIF(A2:A6313,A2:A6313))

The denominator – or rather the pair of denominators commencing with the SUM function – exemplifies a well-known array formula for calculating unique values in a range. The COUNTIF element subjects the A2:A6313 range of complaint ids to what are in effect criteria furnished by each and every record. Thus each of the four instances of id 15-0001 is assessed against every id entry, four of which of course happen to present the selfsame 15-0001. Each instance here therefore evaluates to a count of 4, and the formula’s “1/” numerator reduces each to ¼ – and adding four ¼s returns a 1, tantamount to treating 15-0001 as a single instance. That reciprocal routine is applied to each and every value in column A and then summed – yielding in our case 2779. Divide that outcome into the field’s 6312 records and we wind up with an average of 2.24 allegations per complaint. (It should be added that Excel’s dynamic-array UNIQUE function would streamline the approach on which I’ve embarked here, but the dynamic arrays remain behind a locked door somewhere in Redmond, and I have no idea when the key will be found. Note as well that the dynamic arrays will only download to the Office 365 iteration of Excel.)
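For the record, a hedged sketch of the streamlined version – assuming your Excel build has actually received the UNIQUE function – would simply hand the distinct-counting over to it:

=COUNTA(A2:A6313)/COUNTA(UNIQUE(A2:A6313))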

But that average, however informative, doesn’t apprise us of the number of actual, discrete officers implicated by each complaint, because the same officer is often cited for multiple allegations laid to the same complaint. Again, for example, complaint 15-0001 and its four allegations actually identify but two different officers – and that is the number we’re seeking here, as it would go on to contribute to a real officer-per-case average.

One way – an inelegant one – of getting there would be to pour the data through the Remove Duplicates sieve, selecting the complaint_id and officer_id fields for the duplicate search. Click through and you’ll wind up with exactly 4700 unique, remaindered records – 313 of which bear blank officer ids, however; and we can’t know how many of those ciphers do, and do not, point to a given officer but once per complaint. On the other hand, because most officers are in fact identified we can acceptably assume that for those complaints directed at multiple officers the unknown party is likely not one of the officers who are named. That supposition can’t dispel all our questions, of course, but divide 4700 by the 2779 unique complaints we derived above, and we learn that 1.69 distinct officers fell under investigative scrutiny per case – although the real quotient is probably a bit smaller.
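Should you want to corroborate that 313 directly, by the way, a quick COUNTBLANK aimed at the deduplicated officer_id column – assuming it still occupies column B and now stops at row 4701 – should report the same figure:

=COUNTBLANK(B2:B4701)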

In any event, that 1.69 emerges at the cost of dismissing 1600 records from the data set, after which we can subject the stragglers to a formula, e.g.

{=COUNTA(B2:B4701)/SUM(1/(COUNTIF(A2:A4701,A2:A4701)))}

Inelegant indeed. For a sleeker alternative, we could first concatenate the complaint and officer ids in a new field in column I that I’m calling complaintofficer, e.g. in I2:

=A2&B2
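One small, optional precaution, and purely my suggestion: wedging a delimiter between the two ids – any character that can’t appear in either field – forecloses the remote chance that two different id pairs concatenate to the same string, e.g.

=A2&"|"&B2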

That step positions us to cull unique officer ids by case; by running the unique-record array formula at these data we should be able to emulate the 4700 total and divide it by the other unique-finding expression aimed at the complaint ids:

{=SUM(1/COUNTIF(I2:I6313,I2:I6313))/SUM(1/COUNTIF(A2:A6313,A2:A6313))}

Elegance is a relative term, mind you, but it works.

On the other hand, if you wanted to develop a racial breakout of the number of individual officers charged with at least one complaint, you may here want to mobilize a Remove Duplicates keyed to the officer_id field – the po_race values ride along on the surviving records – because the breakout comprises multiple items (i.e. “races”); and if you adopt this tack you could save the results under a different file name, thus conserving the original, complete data set. That sequence yields 2549 separate officers, and conduces toward this pivot table:

Rows: po_race

Values: officer_id (Count)

officer_id (again, % of Column Total)

I get:

blogphil1

Now of course the proportions tell us little, because we need to spread them atop the racial makeup of the Philadelphia police force before any exposition can rightly commence. Note by the way that only one UNKNOWN officer informs the count here, even as we earlier turned up 313 such blank records; that’s because, of course, all the unknowns have the same blank, “duplicate” id.

Returning to the data set in toto, we can distribute allegations by their varieties. Remember of course that the 2779 complaints have accumulated an average of 2.24 charges, but each charge is exactly that – a complaint in its own right. Thus this conventional pivot table:

Rows: allegations_investigated

Values: allegations_investigated

Allegations_investigated (again, % of Column Total)

reports:

blogphil2

Apart from the indefiniteness of the modal Departmental Violation type, we need to recall that our previous post numbered 2782 such allegations populating the ppd_complaints workbook we reviewed then. It seems clear that the count enumerated there imputed but one allegation per complaint, a coarser take on the data than the more detailed table we’ve just minted above. In the earlier sheet, for example, Departmental Violations contribute 24.05% of all complaints; here they amount to 31.07%.

We also need to explain why our array formula here totaled 2779 unique complaint ids, when the count in ppd_complaints came to 2782. In that connection I simply copied the already-unique ids in ppd_complaints to a new sheet, directed a Remove Duplicates at ppd_complaint_disciplines keyed to the same field there, and copied these as well to the new sheet. Scrolling about and doing some due diligence, I did find a few discrepancies, e.g. an absent 15-0176 among the ppd_complaint_disciplines ids.
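A formula could shoulder that due diligence, too. Assuming – and this is my layout, not Philadelphia’s – that the ppd_complaints ids sit in column A of the new sheet and the deduplicated ppd_complaint_disciplines ids in column C, something like this, copied down an empty column, flags any id missing from the disciplines side:

=IF(COUNTIF(C:C,A2)=0,"missing","")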

But what’s a mere three records between spreadsheets?

 


Philadelphia Police Complaints, Part 1: One Civilian’s Review

2 May

We’ve looked at civilian complaints about police conduct before – about three years ago, in fact, when I reviewed complaint data collected for the city of Indianapolis; and I’ve had to refresh my memory about that analytical encounter because a visit to Philadelphia’s open data site brought me to a similar, but not identical, record of allegations against the local constabulary. Indeed – a wider, cross-city study of how civilian complaints are conceived and organized might make for a most instructive, if subsidiary, comparative survey of spreadsheet construction.

But what of the Philadelphia complaints? In fact, two spreadsheets detail the histories here:

ppd_complaints

ppd_complaint_disciplines

The first, ppd_complaints, straightforwardly gathers the incidents into five fields via a neatly-sequenced id scheme, and its dates received in column B are genuinely quantified besides. You’ll want to auto-fit columns B and D, but probably not E, bearing the text-expressed summaries of complaints; because no field adjoins its right edge an auto-fit won’t realize any gain in visibility there. The data appear to span the complaints for 2015-18 in their entirety, and tack on complaints for the first month of this year as well (at least through January 30). Thus an obvious first reconnoiter would step us through complaint totals by year:

Row: date_received (Year)

Values: date_received (Count)

I get:

phil1

We see that civilian complaints have slackened steadily across the yearly bins, down 21% from their 2015 peak. Still, the January 2019 total of 31 seems low, projecting linearly for the year to about 360. But could it be, rather, that Januarys experience fewer complaints?

Of course we can answer the question by regrouping the complaint numbers both by year and month and shifting the month parameter (curiously and misleadingly holding fast to the date_received field name) into Columns:

phil2

We see then that January is something of a slow month for complaints, although 2019’s lowest reading suggests (but only suggests) that this year may drive the downward curve still further down its axis. Yet the figures for the contiguous December trend far higher – though a highly disproportionate accumulation of complaints dated the 31st of that month seems to prevail. Of the 267 December entries, 63 are time-stamped the 31st, even as chance would have projected a complaint total of around 9.

I arrived at the 63 by installing a temporary set of formulas in the next-available F column (don’t be fooled by the encroaching text in E – F is in fact free), starting with this one in F2:

=IF(AND(MONTH(B2)=12,DAY(B2)=31),1,0)

The formula asks if a given date evaluates both to the 12th month (December) and the month’s 31st day. Copy down, add the 1’s, and you get 63.
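If you’d rather dispense with the helper column altogether, a single SUMPRODUCT should return the same 63 – here I’ve simply let the range run generously past the data’s last row, since blank cells resolve harmlessly to month 1 and day 0 and so can’t satisfy the test:

=SUMPRODUCT((MONTH(B2:B10000)=12)*(DAY(B2:B10000)=31))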

Is the skew a precipitate of some bookkeeping maneuver designed to hem complaints into a year about to elapse? I don’t know, but someone’s story-seeking antenna should be wagging madly. Indeed – 31 of the 87 December complaints lodged in 2015 fell on the 31st, a clustering that should have you reaching for your probability tables, and 17 of the 88 2016 petitions were similarly dated. That the December 31st numbers slinked back to 8 and 7 in 2017 and 2018 suggests in turn that some sort of correction was introduced to the archiving routine, but that speculation is exactly that.

We could continue and cross-tab complaint type of incident – what the worksheet calls general_cap_classification – by year, and because the classes outnumber the years I’d slide the latter into Columns for presentational fitness, plant general_cap_classification into Rows, and reprise the latter into Values. I get:

phil3

The categories beat out a relatively constant distribution, by and large, though Civil Rights Complaints – their small numbers duly noted – seem to have spiked in 2018. It should be added that the free-text precis of complaints in the summary field incline toward the vague, e.g., a civil-rights submission in which “…they were treated unprofessionally by an officer assigned to the 18th District,” a manner of understatement that could do with some expository padding (note too that the district references for the filings in the district_occurrence field report multiples of the actual district number, e.g. 1800 for 18).

But remember there is a second complaint worksheet among the Philadelphia holdings, ppd_complaint_disciplines, one that historicizes the same complaints and bears identical ids, but per a different complement of defining parameters. Here the complaints identify the race and gender of the officers charged, along with the disposition of the allegation brought against them (investigate_findings). Thus two sheets instigate a next question: since the sheets recall the same incidents, could they not have been consolidated into a single workbook, with each record roping in the fields from each sheet?

That question is a fair one indeed, but I think the Philadelphia data compilers opted for two sheets over a single, unifying rendition because ppd_complaint_disciplines comprises multiple references of the same complaint id. Because a given complaint may have been levelled at several officers the data for each officer are set down distinctly, aggregating to 6312 records, about 3500 more than the ppd_complaints sheet in which the complaints are recorded uniquely. If each of these were to be matched with the complaints cited in the latter, those complaints would in many cases appear several times – once each for every officer charged in the incident, and those redundant citations might read awkwardly. But those reiterations aren’t necessarily superfluous, because some complaints triggered different charges.

The very first complaint entry, for example, 15-0001, is enumerated four times in the ppd_complaint_disciplines sheet, corresponding to the four actual complaint entries registered for the incident. But it gets a bit messier than that, because the four complaints in fact reference only two discrete officers, who are “named” in the officer_id field. It’s clear in this case that two charges were preferred against each, a recognition that uncovers another complication in turn: the allegations_investigated field discloses three Departmental Violation charges, and another brought for Verbal Abuse.

Yet 15-0001 is described in the ppd_complaints sheet as an instance of Departmental Violations only. For another example, 15-0005 is named in that sheet as a case of Physical Abuse, even as the one officer charged in ppd_complaint_disciplines incurred two complaints, one for Criminal Allegation, the other for Harassment.

It’s possible, then, that because each of a set of multiple charges for the same complaint could be regarded independently, one workbook might suffice, with perhaps the date_received and summary fields appended to the others in ppd_complaint_disciplines.

In that light it should also be noted that the Indianapolis data at which I looked earlier work with but one workbook, featuring multiple complaint records as well as date information. Perhaps Philadelphia could have done the same.

But I’m not complaining.

The Hockey Stick Effect: Wayne Gretzky’s Goals, Part 2

12 Apr

There’s another parameter-in-waiting pacing behind the Wayne Gretzky goal data, one that might be worth dragging in front of the footlights and placing into dialogue with the Date field in column B. National Hockey League seasons bridge two calendar years, generally strapping on their blades in October and unlacing them in April. For example, Gretzky’s last goal – time-stamped March 29, 1999 – belongs to the 1998-1999 season, encouraging us to ask how those yearly pairings might be sprung from the data, because they’re not there yet.

Of course, a catalogue of Gretzky’s season-by-season scoring accumulations is no gnostic secret; that bundle of information has been in orbit in cyberspace for some time (here, for example), and so developing those data won’t presume to teach us something we don’t already know. But the seasonal goal breakdowns could be joined to other, more putatively novel findings awaiting discovery among the data, and so the exercise could be justified.

So here’s my season-welding formula. Pull into next-available-column R, head it Season, and enter in R2:

=IF(MONTH(B2)>=5,YEAR(B2)&"-"&YEAR(B2)+1,YEAR(B2)-1&"-"&YEAR(B2))

We’re looking to concatenate two consecutive years, and so the formula asks if the month of any given entry in B equals or exceeds 5, or May, or falls beneath that value. If the former, the year in B is joined to the following year, via the +1 addendum. If the month instead precedes May, then the preceding year, operationalized by the -1, is concatenated with the year returned in the B column.
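To walk that last goal through the logic: it’s dated March 29, 1999, MONTH returns 3, the test fails, and the formula falls to its second branch, which in effect computes

=1998&"-"&1999

and so returns the text 1998-1999, just as the hockey calendar would have it.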

The formulas seemed to work, but as a precision check I rolled out this simple pivot table:

Row: Season

Values: Season (count, of necessity; the data are textual. The values should denote goal total by respective year).

I wound up with this, in excerpt:

Gretz1

Cross-referencing the results with the Gretzky goal data in the above hockey-reference.com link yielded a pleasing equivalence across the years.

Now for some looks in earnest at the data. Starting simply, we can juxtapose Gretzky’s goals scored at home to the ones he netted in away games:

Row: Home/Away

Values: Home/Away (count)

Home/Away (again, % of Column Total)

I get:

Gretz2

We learn that Gretzky scored a palpable majority of his goals at home, but we’d expect as much. As in nearly all team sports, NHL teams enjoy the proverbial home advantage, winning about 55% of the time – a near-equivalence to Gretzky’s ratio. That is, if home teams prevail disproportionately then their goal totals should exhibit a kindred disproportion, kind of. One difference with Gretzky, of course, is that he simply scored more of them.

And does the distribution of his goals by period pattern randomly? Let’s see:

Rows: Per (for Period)

Values: Per (Count)

Per (% of Column Totals)

I get:

Gretz3

Gretzky’s production appears to mount in games’ later stages (OT stands for the overtime period), but that finding needs to be qualified on a number of counts. We’d need first of all to track Gretzky’s average presence times on the ice; that is, was he deployed more often as games advanced toward their denouements and his valuable self was sent ice-bound at clutch time? And we’d also need to plot Gretzky’s goal timings against the league averages for such things; and while I haven’t seen those data, we can assume they’re out there somewhere.

Next, it occurred to me that a look at the winning percentages of games in which Gretzky scored might prove enlightening, once the task was properly conceived. Remember that, as a consequence of his numerous multi-goal flourishes, Gretzky’s goals scatter across far fewer than 894 games. The idea, then, is to fashion a discrete game count across which the goals were distributed; and that sounds like a call for the Distinct Count operation we’ve encountered elsewhere (here, for example). Once we isolate the actual-game totals – which should be associated uniquely with game dates – our answer should follow.

And this pivot table seems to do the job, enabled again by a tick of the Add this data to the Data Model box:

gretz4

Rows: Result

Values: Date (Distinct Count, % of Column Total)

I get:

gretz5

What have we learned? Apart from the up-front factoid that Gretzky scored in 638 of the 1487 games he played across his NHL career (638 is the numeric Grand Total above, before it was supplanted by the 100% figure in the pivot table; note Gretzky also appeared in 160 games in the World Hockey Association), we don’t know how his when-scoring 64.89% win percentage compares with his teams’ success rate when he didn’t score. I don’t have that information, and don’t know where to track it down. But it too is doubtless available.

For another analytical look-see, we can ask if Gretzky’s goals experienced some differential in the number of contributory assists that prefaced them. That is, players (up to two of them) whose passes to a teammate conduce toward the latter’s goal are rewarded with an assist; and the question could be asked, and answered here, if Gretzky’s assist-per-goal average fluctuated meaningfully.  We might seek to know, for example, if during Gretzky’s heyday his improvisatory acumen freed him to score more unaided goals than in his career dotage, when he may have been bidden to rely more concertedly on his mates.

Since two Assist fields, one for each of the two potential per-goal assists, accompany each goal, the simplest way perhaps to initiate our query would be to enter column S, title it something like AssistCount, and enter in S2:

=COUNTA(J2:K2)

And copy down. That insurgent field readies this straightforward pivot table:

Rows: Season

Values: AssistCount (average, formatted to two decimals)

I get:

gretz6

Not much pattern guiding the data, but if you want to group the seasons in, say, five-year bins, remember that because the season entries are purely textual you’ll have to mouse-select five seasons at a time and only then successively click the standard Group Selection command, ticking the collapse button as well if you wish:

gretz7

Even here, then, the variation is minute – strikingly so.

Now for a last question we could ask about those teammates who were literally Gretzky’s most reliable assistants – that is, the players whose assist counts on his goals run the highest. The problem here is the two-columned distribution of the assist names, one for the first assist on a goal, the other for the (possible) second. I don’t know how a pivot table can return a unique complement of names across two fields simultaneously, preparatory to a count. If you do, get back to me; but in the meantime I turned again to the Get & Transform Data button group in the Data ribbon and moved to unpivot the data set via Power Query, confining the operation to the assist fields, e.g.:

gretz8

By selecting Assist1 and Assist2 and advancing to Transform > Unpivot Columns and Home > Close and Load the result looked like this, in excerpt:

gretz9

And of course you can rename Attribute and Value, say to Assist and Player.

Once there, this pivot table beckons – after you click TableTools > Tools > Summarize with Pivot Table:

Rows: Player

Values: Player (Count, sort Highest to Lowest)

I got, in excerpt:

gretz10

Nearly 22% of Gretzky’s goals received a helping hand – at least one wrapped around a stick – from his erstwhile Edmonton Oiler and Los Angeles King colleague Jari Kurri, no scoring slouch either with 601 goals of his own – a great many doubtless the beneficiary of a Gretzky assist. Then slip the Assist field beneath Player in Rows and:

gretz11

Now we learn that more than 60% of Kurri’s assists were of the proximate kind; that is, he was the penultimate custodian of the puck, before he shipped it to Gretzky for delivery into the net.

Now that’s how you Kurri favor with the Great One.

 

 

 

The Hockey Stick Effect: Wayne Gretzky’s Goals, Part 1

1 Apr

What is the measure of greatness? How about 894 records, one for each of the goals driven home by the National Hockey League’s Wayne Gretzky, aka the Great One?

That spreadsheet is as large as it gets for NHL scorers, and Tableau ace Ben Jones has infused the goal count with lots of supplementary background about each and every one of the 894, archiving the data for download on the data.world site here.

In fact the workbook makes itself available in both Excel and CSV mode, the latter requiring a text-to-columns parsing that likens it to the former. Either way, a few organizational points need to be entered.

For one thing, you’ll note that what’s called the Rank field in column A numerically ids Gretzky’s goals, in effect sorting them by newest to oldest. That is, Gretzky’s first goal – scored on October 14, 1979 – has received id 894, with the numbers decrementing ahead in time until his final score – tallied almost exactly 20 years ago on March 29, 1999 – has bottomed out with the number 1. It seems to me – and I suspect you’ll share the opinion – that the enumeration should have pulled in the opposite direction, with Gretzky’s last goal more properly checking in at 894. With that determination in mind I reversed the sequence via a standard autofill, entering 894 in cell A2, 893 in A3, and copying down.

You’ll also be struck by the unremittingly monotonic entries in the Scorer field, comprising 894 iterations of the name Wayne Gretzky. We’ve seen this before in other data sets, of course, being dragged into the data set as a likely accessory to some generic download protocol. Again, you can either ignore the field or delete it. Either way, you’re not going to use it.

And your curiosity will be stirred anew by the blank column-heading cells idling atop columns D, F, and G. It’s difficult to believe that Ben Jones, who doubtless knows whereof he speaks, would allow these most rudimentary oversights to escape his notice, but alternative explanations notwithstanding, the headings aren’t there and must be supplied.

Column D reports a binary datum – whether a Gretzky goal was scored at his team’s arena or at the rink to which his team traveled for an away game. I’ll thus entitle the field Home/Away and proceed to do something about the data themselves, whose cells remain empty when signifying a home goal and register an @ for “at”, that is, a goal netted at someone else’s arena. A pair of finds and replaces – the first, substituting an H for the blank cells, with the second supplanting the @ signs with a companion, alphabetized A – should sharpen the field’s intelligibility.
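If a formulaic route appeals more than the finds and replaces – purely a matter of taste – a helper column could render the same verdict while the @/blank entries still sit in D, e.g.

=IF(D2="@","A","H")

whose results you could then paste back over D as values.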

The headless column F archives game outcomes, i.e. wins, losses, or ties, and so I’ll call the field Result, or something like it. Column G denotes the phase of a game when the goal was scored, either during regulation time or overtime – or so I assumed. But a second thought soon followed on the heels of that hunch, if I may mangle the metaphor: it occurred to me that the Regulation/Overtime opposition simply recalls whether or not the game itself swung into an overtime period, irrespective of the actual times at which Gretzky scored. Could that uncertainty be relieved?

I think so, and I played it this way: first, I named the doubtful field Reg/OT, and ran a find and replace at the G column, substituting Reg for any empty cell therein. I then moved toward a pivot table:

Row Labels: Date (ungrouped, in order to exhibit each date)

Columns: Reg/OT

Values: Date (Count)

What I found is that no game date featured a value for both a regulation and overtime goal, a discovery that goes quite some way toward clinching the second speculation – namely, that the Reg/OT field entries do no more than inform us if the games necessitated an overtime period.

After all, if we confine the analysis momentarily to the games that spilled into overtime, one could most reasonably imagine that a scorer with Gretzky’s gifts would have occasionally lodged a goal in both the regulation and overtime phases of the same game; but the pivot table uncovers no such evidence. For any given date, Gretzky’s score(s) appear in either the OT or the Reg column. Moreover, some of the games – for example, November 27, 1985 – record two overtime goals, a unicorn-like impossibility in a sport in which overtime ends when the first goal is scored. (You’ll note by the way that the overtime-column goals only begin to appear in 1983, when a five-minute overtime period was instituted.)

Thus I’d aver that the Reg/OT field conveys little understanding of Gretzky’s scoring proclivities; all it does is identify games that happened to have extended themselves into overtime, and in which he scored – some time.

The Strength field cites the demographic possibilities under which Gretzky accrued his goals: EV refers to even strength, when both teams’ numeric complements on ice were equal; PP, or power play, during which the scoring team temporarily outnumbered the other after a player was remanded to the penalty box; and SH, or shorthanded, the rarest eventuality – when Gretzky scored while his team was outnumbered.

I do not, however, know with certainty what the EN entry in the Other field represents even though I probably should, and I see nothing in Data World’s data dictionary that moves to define it. It may very well stand for end, as in end of game, however; each of its 56 instances is joined to a goal that was scored with fewer than two minutes left in its respective game. EN may then stand for scores achieved after the opposing goalie skated off in a losing cause and was replaced by an offensive player, in order to buttress a desperate try at equalizing the game. Indeed – all 56 of the EN goals were scored in wins by Gretzky’s team.

As a matter of fact, I think I’m right. Filter the Other field for its ENs and look leftward at the Goalie field in L. There’s nothing there.
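For a quick arithmetic corroboration of that eyeballing – assuming, and it is an assumption, that the Other field occupies column M; point the reference at wherever it actually sits – something like this should return 56, counting the EN goals whose Goalie cell in L stands empty:

=SUMPRODUCT((M2:M895="EN")*(L2:L895=""))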

U.S. State Dept. Travel Advisories: Getting There

17 Mar

First principles: before you subject a dataset to your imposing, if caffeinated, spreadsheet acumen you have to actually get the data. But that blitheringly obvious stipulation is, nevertheless, sometimes easier stated than achieved.

It’s true that most open data sites aim to please and affably release their holdings, via a tick of a well-placed and intelligible Download or Export button, or something like it. But there is a particular brand of spreadsheet manqué that issues its data in stages – that is, it mothballs them across several web pages, and not in a unitary place.

That segmented storage strategy might enhance the data’s readability, or might not, though if nothing else the multipage design spares viewers from scrolling relentlessly down the page. My question about these kinds of datasets is simple: can they be downloaded directly into a single spreadsheet without contortion?

And that question placed itself before me anew a short while ago when I met up with the US Department of State’s travel advisory dataset, brought to my attention by the Far and Wide site. That dataset looks like this, necessarily in part, given its multi-page distribution:

travel 1a

The set’s three fields seem perfectly limpid (save perhaps the Worldwide Caution entry, which appears to commend a global, and not a country-specific advisory); but their data are spread across five pages, each of which must be clicked separately:

travel5

Yes; as earlier indicated, it’s one of those kinds of datasets. And again – can I get it all into my spreadsheet without that most unbecoming of resorts: five copy-and-pastes, rife with pinched column widths and text that needs to be unwrapped? (Note by the way that as of this writing New Zealand’s Level 1 assessment in the data set hadn’t changed, having been last updated on November 15, 2018. However, an alert for the country has been entered here.)

When assailed by such questions, a hopeful right-click upon the data gives us nothing to lose and something, perhaps, to gain, e.g.

travel6

The third option from the bottom looks promising. Click there and we’re told that the data before us will find their way into my waiting spreadsheet. But I seem to recall having viewed that Export instruction elsewhere on other sites, with decidedly mixed results.

But why not? I gave it a click and to my surprise observed something actually happening on my blank worksheet, an ellipsis-freighted “External Data_1: Getting Data…” message that deliberated on screen for several minutes before finally giving way to an actual, unitary spreadsheet, e.g.

travel2

In other words, the export actually exported. Note that the advisory data in column A appear on site in a stream of hyperlinks that, when clicked, direct the viewer to a deeper background on the country in question; presumably the export routine thus executes a Paste Values protocol that strips the records of their more exotic contents. On the other hand, the download did introduce the Date Updated field to the spreadsheet in actual, numerically viable date mode.

So it worked, to my pleasant surprise, though a few significant qualifications of the process need be entered. First, a right-click on the first, Advisory field did not summon the Export to Microsoft Excel option; it was only when I attempted a click over the second or third field that the context menu disclosed the command. Second, and perhaps most importantly, the Export possibility only seems to make itself available when the sites are broached with Internet Explorer. Attempts to coax Export from the menu in the course of perusals conducted in Chrome or Firefox failed, and I am presently unaware of an enabling workaround for either browser, though I am happy to be reeducated on this count.

Now of course Excel and Internet Explorer spring from the same shop in Redmond, and so one could be left to draw one’s own conclusions on the matter. Is this what they mean by “seamless integration”? Seamless, but unseemly? I don’t know.

In any case, my success with the export fired up the obvious follow-on question: could the deed be replicated with similar datasets thronging the internet?

In search of an answer, I stopped off at the US News and World Report rankings of law schools and clicked its Table View button, assuming that something like a spreadsheet would eventuate as a consequence. Once in view I again right-clicked the data’s second, Tuition field, revisited Export to Microsoft Excel (remember, I’m back in Internet Explorer), and was delivered this compendium (in excerpt):

travel3

While the above tableau and its quiver of alternating blank rows won’t pass Spreadsheet Design 101 (and I’m an easy grader), the “fault”, if I may be so judgmental, lies with the site and not the export, which seems to have captured the data as they appear on site. In short, the export seems to have worked again.

In the interests of building up a scientifically workable sample I turned next (again in Explorer) to the Times Higher Education world university rankings, which look something like this in situ:

travel4

Again, a right click upon the data ushered Export to Microsoft Excel to view (though the command appeared after clicking either the data’s first or third fields, but not the second), but this time the spreadsheet registered nothing but a companionless header row for the rankings. I retried the export numerous times, but met with the identical result on each attempt.

I can’t explain the discrepancy, i.e. why some data sets comply with the export request and others resist. That’s not to say an explanation can’t be adduced, of course, but I’ll have to assign that accounting to a web programmer. Clearly, the kinds of data tropes we’ve reviewed here embody a different genotype from that inherited by the standard open-data-site collections we usually confront here, those designed in large measure to download immediately into spreadsheets; and while it’s true that these web-emplaced data are probably meant to facilitate searches for a particular item and little more, it might be a good idea for their designers to anticipate the prospect that someone out there might wish to analyze the whole lot – treating the data as a whole, and seeing to it that they find their target in a spreadsheet every time.

Screen Shots: Charting Movie Ratings

1 Mar

Granted; the line between an iconoclast and a smart aleck is but a few pixels wide, and so it’s not impossible to plant one foot on either side of the characterological divide. And while I’m straddling that gossamer border I’ll voice the notion, something at which I’ve hinted before: that Excel can do some things that other people do by mustering what feels like heavier artillery. That is, even as some practitioners of the programming arts unleash charts and vizzes of no mean quality upon their blogs, I’ll suggest in a whisper that highly reasonable facsimiles of the above can be emulated by Excel.

I’ve intimated as much before, e.g. my previous post, wherein I framed a scatter plot dotting NBA players’ offensive-to-defensive margins that successfully (if I do say so myself) imitated the chart synthesized by a corps of programmers applying a language or two (ggplot2) to the task. You may also recall my Excel-driven approximation of the celebrated charts of disease eradication here; and now I’m at it again.

My model this time is the chart shaped by the graphics adepts at the BBC that positions the respective film ratings of critics and viewers of 2017 Oscar-nominated films, e.g. in excerpt:

movie1

Audience evaluations are captured by the green dot, with the reddish speck symbolizing aggregated critics’ scores, and the interposing gray bar conveying the scope of the discrepancy between the two assessments.

It’s a neat representation, one redolent of the controls you’d slide across this or that hi-tech contraption, though of course – excuse the iconoclasm – you could ask if the above visualization delivers its message more crisply than an off-the-shelf, industry-standard column chart that treats each film to a pair of comparative bars.

But that judgement aside, a surmise followed on: could I do something similar in Excel? I suspect you know the answer.

First, the data, which were gleaned from the Metacritic site that compiles review scores. Not being able to figure out how to access these directly I simply transcribed the numbers from screen shots of the two sheets bearing the figures, e.g.

movie2

There are indeed two sheets – one reporting movies having received relatively more favorable critic ratings, the other enumerating the ones for which audience scores were the higher. In practice for our purposes, of course, only one sheet is required; there’s no operational reason for parting the sheets on the basis of the critic/audience margins.

Moreover and as you see, the screen shots feature the critic/audience rating gap for the movies, a differential that, for our charting intentions, is irrelevant. After all, the intervening gray bar should portray those differences as a matter of course (it appears by the way that some of the critic-preferred subtractive differences aren’t quite correct).

For the sake of demonstration I then simply transcribed the data onto a spreadsheet for 20 records (say A4:C24, reserving the top row for headers) – ten in which critics’ estimations topped the popular appraisals and ten in which the support trended the other way, and rounding off the numbers when required in the interests of demo simplicity.

I then selected D5:CY5 – thus bridging the 100 columns following C and so making room for any potential audience/critic rating – and proceeded to clip their widths to .42 (a measure on which you may want to experiment), and entered in D5:

=IF(OR($B5=COLUMN(D5)-3,$C5=COLUMN(D5)-3),CHAR(149),0)

What is this formula doing? First, it needs to subtract 3 from each column reference (COLUMN identifies the column number of a reference, e.g. =COLUMN(D4) returns 4) in order for D5:CY5 to span values 1 through 100. It then asks if a given cell’s column value equates to the film rating stored in either B5 or C5; if so, the qualifying cell(s) (one of which should satisfy the stipulation in B5, the other in C5) installs a dot, via the CHAR(149) expression, the character number for that symbol in Calibri. And that’s a dot, not a sentence-capping period. (Note that I’ve elected to size the dots to 13 pts, given the column widths, a reading you may want to tweak.) I then copied the formula across D5:CY5.

In view of the fact that my first film – the one I recorded in row 5, Warcraft – received an average 38 ranking from critics and a boffo 82 from filmgoers, the dots should instate themselves in positions 38 and 82 – in default black. A hip color, perhaps, but we want to emulate the green and red buttons the BBC set forth – the green for the audience rating, and red for the critics’ assay. That sounds like a job for a couple of conditional formats; and so after selecting D5:CY5 I can write

movie3

and

movie4

These expressions ask if any cell falling within D5 and CY5 equals the value in B5 – the critics’ rating – or C5, that of the general viewers. If the former condition is met the dot turns the critics’ red; fulfillment of the latter logical test colors the dot green. Note that because the CHAR(149) realizes a textual result – that is, the dot is just that and not a conditional formatting icon – the conditional format invokes a font color change.
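The screenshots hold the actual rules, but a plausible reconstruction – echoing the COLUMN arithmetic of the dot formula above, and offered as a sketch rather than a transcript of the originals – would apply these two formulas to D5:CY5:

=COLUMN(D5)-3=$B5

for the critics’ red font, and

=COLUMN(D5)-3=$C5

for the audience green.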

Now we next need to engineer the gray band that connects the dots, as it were. I selected D5:CY5, and fired up another conditional format formula:

=AND(COLUMN(D5)-3>MIN($B5:$C5),COLUMN(D5)-3<MAX($B5:$C5))

The AND statement looks for cells whose column values register a number between the respective critic and audience ratings. Cells that conform to the criteria receive a gray fill color.

And all that means that the charted scores for Warcraft look like this:

movie5

Not too bad, but I’m biased.

If you’re happy with that take, you can copy the contents of D5:CY5 down through the other films, a move that’ll bring the conditional formats along with them. Next, in the interests of presentational clarity, I’d insert a blank row between each pair of films. (If that stratagem sounds slightly tedious, see this alternative that might or might not meet with your approval.)

That chore attended to, the ratings start to look something like this:

movie6

I think it’s ready for release; I may submit it to Metacritic, in fact. I, for one, give it an 84.

NBA Field Goal Data, Part 2: More Than 3 Points to Make

11 Feb

Among the metrics figured and plotted by the Medium look at NBA shot-making (and missing) is a two-way-player analysis, a comparison of the points scored by a given player to the points surrendered in his defensive capacity. As the study authors allow, the measure’s validity need be qualified by several cautions, e.g. the fact that an offensive dynamo might be assigned to guard a scorer of lesser prowess, thus padding his points differential. In any case, we can ask how a spreadsheet might be applied to the task.

And that task is encouraged by the data’s CLOSEST_DEFENDER field, which identifies the player nearest a shooter at the point when he launched the shot. Thus by totalling a player’s points and subtracting the sum scored “against” him in his closest-defender role, the metric is realized. (Remember that the data issue from three-quarters of the games comprising the 2014-15 season.) But in view of the way in which the data present themselves, calculating that difference is far from straightforward.

It’s simple enough to drop this pivot table into the equation:

Rows: player_id

player_name

Values: PTS

That resultant – indubitably straightforward – apprises us of the number of points (scored via field goals, but not foul shots) credited to each player, and we’ve earmarked player_id here for inclusion in the table in order to play the standard defense against the prospect of multiple players with identical names. (Subsidiary point, one that’s been confronting my uncomprehending gaze for quite some time: fashioning a pivot table in tabular layout mode substitutes the actual data source field names in the header for those dull “Row Label” defaults. Thanks to Barbara and her How to Excel at Excel newsletter.)

But it turns out that another, data-set-specific requirement for player_id imposes itself on the process. In fact, the player names in CLOSEST_DEFENDER are ordered last name first, surname distanced from the first name by a comma. Yet the entries in player_name hew to the conventional first name/surname protocol, daubing a viscous blob between the two fields. Excuse the pun, but properly comparing the fields would call for a round of hoop-jumping that won’t propel me off my couch – not when I can make a far simpler resort to both player_id and CLOSEST_DEFENDER_PLAYER_ID, which should encourage a more useful match-up (but I can’t account for the mixed caps/lower-case usages spread across the field headings).

That understanding in tow, we can plot a second pivot table, one I’ve positioned on the same sheet as its predecessor, set down in the same row:

Rows: CLOSEST_DEFENDER_PLAYER_ID

Values:  PTS

The paired tables should look something like this, in excerpt:

nba21

Once you’ve gotten this far you may be lightly jarred by an additional curiosity scattered across the data: namely, that the closest defender outcomes comprise far more players – a few hundred more, in actuality. I looked at the stats for two of the players who appear in CLOSEST_DEFENDER_PLAYER_ID only – ids 1737 and 1882, i.e. Nazr Mohammed and Elton Brand, and learned that their offensive stats for 2014-15 were rather sparse (check out www.basketball-reference.com for the data); Mohammed averaged 1.3 shots per game that year, with Brand checking in at 2.6. It may be, then, that the data compilers decided to omit field goal stats for players falling beneath an operationalized threshold, but that conjecture is precisely that.

You may also wonder why two pivot tables need to be impressed into service, when both train their aggregating gaze at the same PTS field. It’s because we’ve directed different parameters to the respective Row Label areas – one identifying the scorers, the other naming what are in effect the same players but in their capacity of defender, there lined up with someone else’s points, so to speak.

In any case, once we’ve established the tables, we can dash off a column of relatively simple lookup formulas alongside the first pivot table, one that searches for the equivalent player id in the second. I’ve named the data range in the second table defense, and – assuming the first player id is stationed in A4 – can enter this formula alongside it, in a column I’m heading Points Surrendered (remember of course that the field is external to the actual pivot table):

=VLOOKUP(A4,defense,2,FALSE)

And copy down the column.

(The FALSE argument is probably unnecessary, as the ids in both pivot tables should have been sorted as a matter of course.)
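One more hedge, strictly precautionary on my part: should any scorer turn up who never figured as anyone’s closest defender, the VLOOKUP will throw a #N/A; an IFERROR wrapper would return a 0 instead:

=IFERROR(VLOOKUP(A4,defense,2,FALSE),0)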

The lookups track down the ids of the players listed in the first pivot table, and grab their points surrendered totals, culminating in a joint scenario resembling this shot in excerpt:

nba22

And once engineered you can, among other things, subject the lookup results to a simple subtractive relation with the sum of pts to develop the offense/defense differential on which the Medium piece reports. You could also divide players’ points by points surrendered instead, developing a ratio that would look past absolute point totals.
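By way of illustration only – the column assignments here are mine, not a given: if the pivot’s point totals occupy column C and the Points Surrendered lookups column D, the differential and the ratio reduce to

=C4-D4

and

=C4/D4

respectively, copied down alongside the lookups.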

Remember, however, that the Points Surrendered “field” and the suggested follow-on formulas are grafts alongside, but not concomitant to, the pivot table, and as such you could unify all the fields’ status by selecting the first pivot table and running a Copy > Paste Values upon the results, thereby sieving the pivot data into a simple data set now of a piece with Points Surrendered and kindred formulas.

If we go ahead and divide pts by points surrendered and sort the results highest to lowest we see, in excerpt:

nba23

The findings are both interesting and cautionary.  Dwayne Wade’s and Lebron James’ enormous differentials may have more to do with their offensive puissance than their preventive talents, offset by the understanding, on the other hand, that a good offense may well be the best defense. What’s really needed, however, is a finer scrutiny of the players to which they’ve been assigned – and the same could be said about those who cede far more points than they score on the other end of the sort.

Of course, with all those parameters there’s no shortage of looks you can cast at the data. For example, try this pivot table:

Rows:  CLOSEST_DEFENDER

CLOSEST_DEFENDER_PLAYER_ID

Values: PTS

ShotPct (the calculated field we hammered together in the previous post).

I get in excerpt:

nba24

The intent here is to compare players’ points surrendered – an absolute measure – and the shooting percentages of the players they’ve guarded. Scan the list and you’ll see that Lebron James “held” his shooters to a middling .442 percentage, but Dwayne Wade restricted his opponents to a .394 mark, suggesting his defensive goods are for real. But again – the numbers need to be checked against the overall percentages of shooters. It may be that Mr. Wade has been issued a light workload.

And for a concluding, graphical touch, the Medium piece offers a scatter plot pairing players’ points scored by and against:

nba25

I don’t know with what tool the authors plied the chart, but I very much doubt it was Excel. In any case I managed to achieve something very similar with that application:

nba26

How? Well first, recognize that Excel simply can’t put together a scatter plot from a pivot table. If you try, you’ll be told “Please select a different chart type, or copy the data outside the Pivot Table.”

Opting for the latter counsel, I copied these data, for example:

nba27

And pasted them into a blank sheet area via Copy > Paste Values. I then selected the two columns of data, and headed toward Insert > Insert Scatter (X, Y) or Bubble Chart (to add data labels, see this YouTube video).

I did all this stuff without a programming language in sight. Does that make me a philistine?