Best U. S. Cities: Letting the Data Decide

11 Nov

Thinking of a change of scenery? No, I’m not talking about that fetching screen saver you just downloaded. I’m talking about a real move, of the bricks-and-mortar kind – apocryphal floor plans, burly guys manhandling your Wedgwood and your pet, change of address forms, having two keys for three locks – in short, big fun.

But if you’re still planning on going ahead, take a look at and a download of Ben Jones’ Data World-sited ranking of what he terms 125 of the Best US Cities, a reworking of the listing assembled by US News and World Report.

It’s all a judgement call, of course, but US News has marshalled a set of parameters which, when judiciously factored, yield their rankings. City number 1, and for the third year in a row: Austin, Texas, the state’s capital and a college town, leading the pack for its “value for the money, strong job market, high quality of life and being a desirable place to live.”

OK – that last plaudit seems a touch tautological, but Austin it is. And if you’re wondering, New York City pulls in at 90, Los Angeles at 107, and San Juan, Puerto Rico – the latter situated in a not-quite-a-state – floors the list at 125. But the larger point, of course, is to make some aggregate sense of the ranking criteria dropped across the sheet’s columns. I see no master formula into which the criteria pour their evidence and out of which the rankings emerge, so we’ll have to make do with some homemade reads on the data.

To start with, we could draw up a series of correlations pairing the rankings with the numerically-driven parameters in columns D-Q, e.g., correlate ranking with Median Month Rent. Enough of those might deliver a crude but sharpened sense of city stature.

But you’ll respectfully demur. You’ll submit that the ranking data in column A is ordinal, that is, expressive of a position in a sequence without delineating the relative space between any two positions. Thus, for example, the grade average of a school valedictorian isn’t twice as high as that of the student holding down the second position, even as their respective class standings are quantified 1 and 2. On the other hand, a median monthly rent comprises interval data, whereby a rent of $2,000 is indeed twice as high as a charge of $1,000.

You’re right, and as such the correlation between city ranking and rents, for example, is assailable, but not, in my view, meaningless. In spite of the gainsaying, I’d allow that higher-ranking cities should in the main associate more favorably with the variables the sheet reports. But let’s see.

We can figure the correlations easily enough by aiming Excel’s CORREL function at the fields. For example, to correlate city ranking and Median Home Price in H:

=CORREL(A2:A126,H2:H126)

I get -.191, or a small negative association between ranking and home prices. With higher city rankings (denoted by lower numbers), then, comes a small but not overwhelming uptick in home prices, befitting those cities’ desirability. But again, the connection is small. (Note that if one of the paired values in a row features an NA the CORREL simply ignores the row.)

We can then apply the correlations to some of the other parameters (the formula pattern is sketched just after the list):

With Metro Population: .426

Average Annual Salary: -.382

Median Age: -.105

Unemployment Rate: .679

Median Monthly Rent: -.211

Percent Single: .515

Violent Crime: .329 (By the way – I’m assuming the crime rates are superimposed atop a per-100,000 denominator, which appears to serve as the standard.)

Property Crime: .251
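
Each of the figures above issues from the same formula pattern – only the second range changes. A hedged sketch, then, with E standing in for whichever column actually houses a given parameter (only H, for Median Home Price, and L, for Avg Commute Time, are confirmed above):

=CORREL($A$2:$A$126,E2:E126)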

You’ll note the emphatic, not terribly surprising correlation between ranking and unemployment rate (here both dimensions reward a lower score). A city’s plenteous job market should naturally magnetize people through its perimeters; and note in turn the trenchant, but less decisive, association between ranking and average salary. The fairly high correlation with Metro Population suggests that a city’s appeal heightens somewhat with its relative smallness.

More curious and decidedly less obvious is the impressive correlation of ranking and percent single. Here loftiness of a city’s standing comports with a smaller rate of unmarried residents (but no, I don’t know if live-in partners are identified as single). (Again, don’t get confused; a “smaller” ranking of course denotes the more desirable city.) That finding could stand a round of analysis or two; after all, city quality is stereotypically paired with young, hip, unattached urban types. And by the way – though there’s nothing revelatory here – the correlations between annual salary and median home price, and median monthly rent: .658 and .756.

But of course, it’s a trio of fields – Avg High/Low Temps, AVG Annual Rainfall, and Avg Commute Time – that we haven’t correlated, which begs the next question: namely, why not? The answer – and we’ve encountered this complication before – is that the values staking this collection of averages have been consigned to textual status. Austin’s average commute time of 26.8 minutes, expressed precisely in those terms, is hard to quantify, because it isn’t a quantity. But couldn’t the field have been headed Avg Commute Time in Minutes instead, with authentic values posted down the column?

But the field is what it is, and so in order to restore the commute times to numerical good standing we could select the Avg Commute Time cells and point this Find and Replace at the range:

City1

That’s a space fronting the word “minutes”, a necessary detail that obviates results of the 26.8(space) kind, which would remain textual.
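
Rendered textually, the dialog settings would look something like this (the quotes are for clarity only – don’t type them):

Find what: " minutes"
Replace with: (leave empty)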

Now with the textual contaminants purged the correlation can proceed, evaluating to a most indifferent .012.

But while we’re at it, this array formula can correlate average commute times and city ranking without removing the minutes term:

{=CORREL(A2:A126,VALUE(LEFT(L2:L126,4)))}

The formula extracts the four leftmost characters from each Avg Commute Time cell (all of whose first four characters are numerics), converts them into values, and gets on with the correlation. (But note, on the other hand, that the array option doesn’t appear to comport with the AVG Annual Rainfall data in I, because the NAs obstruct the array calculation. The Find and Replace alternative will work, however.)

And by the same token, analytical ease would have been served by allotting a field each to the average high and low temperatures, and without the degree symbol. Here you might have to make room for a couple of blank columns and enter something like

=IFERROR(VALUE(LEFT(F2,4)),"N/A")

 

For the average high temperature, and copy down. You need all that because a straight-ahead VALUE(LEFT) will turn up a good many #VALUE! errors, which will stymie the correlation. If you go ahead you’ll realize an association between city ranking and high average temperature of .1975 – something, but not much.

And for low temperatures, don’t ask. But if you do, try:

=IFERROR(VALUE(MID(F2,9,4)),"N/A")

Correlation, if you’re still with me: .2159.

Maybe not worth the effort.

 

Bicycle Race Data: Signs of the Times

5 Sep

The thing about formatting a cell value is that it’s never wrong – or at least, rather, never “wrong”. Those quotes point to an irony of sorts, of course, meaning to reaffirm the powerless effect of formatting touches upon the data they re-present. That is, enter 23712 in cell A6, retouch it into $23,712.00, 2.37E+04, or Tuesday, December 01, 1964, and you’re still left with nothing but 23712. Write =A6*2 somewhere and you get 47424 – no matter how that value is guised onscreen.

But a format that fails to comport with its data setting can seem like Donald Trump delivering a keynote address to a Mensa conference – that is, unthinkably inappropriate. Baseball aficionados don’t want to be told that Babe Ruth hit Saturday, December 14, 1901 home runs, after all, when they’re expecting to see 714 – and not even 714.00.

Thus data appearances matter, of course, even as they don’t change anything, as it were. And for a case in point in how they do and don’t matter, pedal over to the Velopace site, an address devoted to a showcasing of that company’s adroitness at timing sporting events with all due, competition-demanding precision. By way of exemplification Velopace has archived the results of a raft of races it’s been called upon to time, including the 2018 LVRC National Championships Race A, a contest bearing the aegis of the UK’s League of Veteran Cyclists. Those results come right at you in spreadsheet form here:

Velopace results

Apart from the usual obeisance to column auto-fitting, the data make a few interesting claims on our scrutiny. Consider, for example, the timings for the first five finishers, lined up in the spreadsheet:

format1

Then turn to the same quintet in the source web-site iteration:

format2

First, note that the four Lap readings (the Finish parameter incorporates the times for the fourth lap) are cumulative; that is, Lap 2 joins its time to that of Lap 1, and so on. Note in addition that the Total Race Time field seems to merely reiterate the Finish time result, and as such could be deemed organizationally redundant, and perhaps a touch confusing.

But it’s the spreadsheet version’s formatting that may force you to pull off the road for that jolt of Red Bull. Here, for starters, the timings have been rounded off to tenths of a second, in contradistinction to the web-versioned drill-down to thousandths – if nothing else, supporting testimony to Velopace’s skill at clocking exactitude. Now while that fine touch makes sense, Lap 2’s time for race victor Simon Laws in cell I2 reads 10:07.2. A click on that cell bares its formula-bar content of 1:10:07 AM – that is, Laws’ aggregated two-lap time, and expressed in the web version as 01:10:07.180. We need to ask first of all about the missing hour reference in the spreadsheet time in I2, which appears to you and me as 10-plus minutes. Remain in I2, right-click, select Format Cells, and you’ll be brought here:

format3

That customized format excludes the hour parameter; to restore the hour, it should properly read something like:

hh:mm:ss.0

Getting there asks you to click the bar immediately beneath the Type: caption and add hh: to the expression:

format4

The hour is thereby returned to view (note the sample above, returning the newly-formatted, actual time value in I2), and a mass application of the Format Painter will transmit the tweak to all the times recorded across the spreadsheet, including the sub-hour entries for lap 1, which will be fronted by two leading zeros. The 0 following the decimal point above instates a code that regulates the number of in-view decimals visiting the cell; thus hh:mm:ss.000 will replicate Laws’ 01:10:07.180.
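
And if you’d rather leave the cell formats alone and stage the full reading elsewhere, the TEXT function can render it as a string – a sketch of an alternative, that is, not something the Velopace sheet itself does:

=TEXT(I2,"hh:mm:ss.000")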

But the first question that need be directed at the data is why the above repairs had to be performed at all. Indeed, and by way of curious corroboration, two other race results I downloaded from the Velopace site in which cyclist times pushed past the hour threshold were likewise truncated, but it would be a reach of no small arm’s length to surmise that the spreadsheet architects had built the shortcoming into their designs. Could it be then that the peculiarly clipped formats facing us owe something to some shard of HTML code that went wrong? I don’t know, but after downloading the LVRC file in both CSV and Excel modes (the latter dragging along with it some additional formatting curiosities), I found the hours missing either way.

Now for one more formatting peccadillo, this one far more generic: enter any duration, say 2:23, and the cell in which you’ve posted it will report 2:23 AM, as if you’ve decided to record a time of day, e.g. 23 minutes after 2 in the morning (yes; type 15:36 anywhere and you’ll trigger a PM). I do not know how to eradicate the AM, though Excel is smart enough not to default it into view, consigning it to formula-bar visibility only. Indeed, if you want to see the AM in-cell, you’ll need to tick a custom format in order to make that happen.

But the quirks keep coming. If, for example, you enter 52:14 – that is, a time that bursts through the 24-hour threshold – Excel will faithfully replicate what you’ve typed in its cell (in actuality 52:14:00), but will at the same time deliver

1/2/1900 4:14 AM

to the formula bar. That is, once a time entry exceeds a day’s worth of duration, Excel begins to implement day-of-the-year data as well, commencing with the spreadsheet-standard January 1, 1900 baseline. But as you’ve likely observed, that inception point doesn’t quite start there. After all, once the dates are triggered by postings in excess of 24 hours, one might offer that 52:14 should take us to January 3, 1900 – the first 48 hours therein pacing off January 1 and 2, with the 4:14 remainder tipping us into the morning of the 3rd.

But we see that the expression doesn’t behave that way. It seems as if the first 24 of the above 52 hours devote themselves to an hourly reading alone, only after which the days begin to count off as well. Thus it seems that Excel parses 52:14 into an inaugural, day-less 24 hours – and only then does the 28:14 remainder kick off from January 1, 1900.

But still, format 52:14 as a number instead and the expression returns 2.18 – that is, the passage of 2.18 days – or 4:14 on January 3, 1900.
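
And if you want to corroborate that 2.18 with a bit of arithmetic, this formula – 52 hours plus 14 minutes, divided by the 24 hours in a day – lands on the same underlying value (2.18, once rounded to two decimals):

=(52+14/60)/24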

Because even when formatting looks wrong, it’s always right. Now why don’t they say that about my plaid tie and polka dot shirt?

 

The Grey Lady Learns Pivot Tables: NY Times J-Course Data, Part 2

15 Jul

The Intermediate tine of the three-pronged New York Times data journalistic syllabus casts its analytic lot with pivot tables, and the kinds of questions properly placed before that mighty aggregating tool. Among its several data offerings awaits a 2900-record gathering of New York (i.e. Manhattan) real estate data, naming property owners, their holdings and their assessed value, and a trove of additional metrics. Per the course’s pedagogical remit, the learners are prompted to pose a few questions of their own of the data – surely a useful heuristic – before they turn to the assignments at hand, which among other things ask “What are the things you notice about this dataset that shows it isn’t perfect?”

A good, important, and generic question, one that nevertheless isn’t advanced by the Census data sheet we reviewed in the previous post. In fact, worksheet data imperfections can assume at least two forms: a discernible scruffiness among the records, and/or design impediments that could constrain analysis, and I’m conjecturing the Times wants its staffers to concern themselves with flaws of the former stripe.

For example, if this qualifies as a blemish: once downloaded to Excel, both the start and end_date year entries present themselves in text form, thus obstructing any unmediated attempt to group those data. Yet the year_built data remain indubitably numeric, and I can’t account for the discrepancy. At the same time, however, these data in their native Google sheet mode appear appropriately numeric, and when I copied and pasted some of the start dates to a Google sheet of my own they behaved like good little values; and moreover, the left orientation imparted to the numbers in the end_date field suddenly righted itself (pun intended) via my paste. Another Google-Microsoft lost-in-translation flashpoint, perhaps, and not a species of data imperfection, if one remains in Sheets. (Another note: I’ve been heretofore unable to actually work with the Times sheets in their Google trappings, as access to them defaults to View Only, and seems to require permission from the sheet owners in order to actually work with them. My requests for permission have gone unrequited to date, but in fact you can copy and paste the data to a blank Google sheet and go ahead. The data are open-sourced, aren’t they?)

Far more problematic, however, and presumably one of the data failings over which the Times hoped its learners would puzzle, are the disparate spellings in the owner_name field of what appears to be one and the same New York City Department of Housing:

nyt1

(Note the last entry above is simply misspelled. The data were drawn from the coredata site, by the way, a project of New York University’s Furman Center.) And, while we’re at it:

nyt2

But the Times’ marching orders don’t oblige its learners to proceed and do something about the inconsistencies. Absent that determination, no accurate answer to the Times’ question (number 6) – “Which owner has the most buildings?” – can be enabled. Remember that the Intermediate unit is pivot-table-driven, and any table invoking the owner_name field is going to loose the untidy spate of spellings reported above.

Yet one more imperfection besetting the selfsame owner_name field is the formidable complement of cells – 381 of them, to be exact, or about 13% of all the records – that contain no owner name at all, a lacuna that likewise compromises the analysis. The Times asks its learners “Who are the biggest owners in each neighborhood based on the number of units? Limit your table to owners who have more than 1,000 units”, an exercise which would appear to call for a pivot table that looks something like this:

Rows:  Neighborhood

owner_name

Values: res_units (filtered in the Rows area for sum of res_units equal to or greater than 1000)

And that alignment of parts kicks out a set of results that, in excerpt, embody the problem:

nyt3

Indeed, both data shortcomings – the blanks and the variant spellings – degrade the findings prohibitively.

The Times also wants its learners to “Compare the average value per unit for different neighborhoods. Which is the most expensive and which is the cheapest?” That chore seems to call for a calculated field, e.g. in Excel:

nyt4
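
For anyone who can’t make out the screenshot, the calculated field would be defined along these lines – a sketch only, with total_value standing in for whatever the sheet actually titles its assessed-value field (res_units is quoted from the exercise):

=total_value/res_units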

I’m just wondering if the Times cohort learned the equivalent feature for Google Sheets; perhaps it did, after all. Its Data Training Skills List merely records the Pivot Tables rubric without elaboration. (Note in addition that the housing data workbook harbors an Income sheet from which the Neighborhood population, income, and diversity fields on the Housing sheet have been presumably drawn, probably through a series of VLOOKUPs whose yields have been subject to a mass Copy > Paste Special routine directed to the Housing sheet.)
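
If that surmise is right, the lookups would have read something like the following – a sketch only, with the Income sheet’s layout (neighborhood labels in its column A, population in its column B) and the A2 lookup cell wholly assumed:

=VLOOKUP(A2,Income!$A:$B,2,FALSE)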

Of course, that surmise points to a larger question: the breadth of spreadsheet capabilities spanned by the Times training. How, for example, were learners expected to apply themselves to this assignment: “Which neighborhoods will be most affected (in terms of number of units) in each of the next 10 years by expiring subsidies and which one is the most secure?” I’d try this:

Rows: Neighborhood

Columns: end_date (filtered for years 2019-2028)

Values: program_name (Count, necessarily; the data are textual)

And my table looks like this:

nyt5

Thus Central Harlem is the neighborhood most vulnerable to short-term expirations of program subsidies – by far – with the Stuyvesant Town/Turtle Bay district, really a mélange of sections on Manhattan’s East Side, the least exposed. But does my pivot table approximate the strategy the Times was seeking? Again, I don’t know, but a conversation with the paper’s syllabus architects about their intentions for the exercises would prove instructive – at least for me.

And that conduces toward the inexorable follow-on, then: I’m happy to offer my services to the Times, in the edifying form of a weekly column on spreadsheets, and for a magnanimously modest emolument; and I’d make myself available to help with the in-house training, too.

Just one question: will my press pass get me into all the Yankees games for free?


Addendum to the above: My previous post recounted my inability to access and edit the Times’ files in native Google Sheet mode. The paper’s Elaine Chen did get back to me yesterday (July 16), pointing to the File > Download alternative. One assumes, after all, that the Times doesn’t want to approve shared file access for the multitudes, and probably for good reason. I should add that if one downloads the data in CSV instead of Excel mode, the formatting discrepancies I described in Part 1 seem to disappear.

 

The Grey Lady Learns Pivot Tables: NY Times J-Course Data, Part 1

28 Jun

This just in: The newspaper of record is rebranding itself into the newspaper of records. The Times – the one from New York, that is – has moved to evangelize the data-journalistic thing among its staff, and towards that admirable end has crafted an extended in-house workshop, syllabus and practice files/exercises made available to all the rest of us in Google Sheets here and here, respectively (ok, ok, call me the Luddite; I’m downloading the files into Excel).

The latter link above points to a sheaf of workbooks classed Advanced, Intermediate and Beginner (these rubrics sorted alphabetically, thus interpolating Beginner between the two ostensibly more challenging data collections. And note the Times cautions that even as the data sets have been mined from real-world repositories they’ve been alloyed, the better to serve their instructional purposes), and it occurred to me that a look at some of the course contents might prove instructive in its own right.

We can begin with the Beginner Census_Characteristics of Older Americans (2016, 2019) workbook, whose associated Census: Worksheet exercise document asks us to unhide all its sequestered columns (about 65 of them in fact, most of which are massed at the far end of the data, something I missed repeatedly). Remember I’m downloading the data to Excel, an improvised recourse that bunches the field headers into ill-fitting Wrap Text mode. But by directing the download in Open Document mode instead the book nevertheless returns to Excel, but with the headers properly, visibly wrapped, though the columns could do with a bit of resizing (I don’t know if these little disjunctions bear traces of Google-Microsoft woofing).

The exercise text proceeds to let us know “We roughly color the group of categories. For example, the race and Hispanic stats are in light orange, and green columns are about marital status”. But no; these tinted ranges aren’t conditionally formatted, and to be fair can’t really lend themselves to those cellular ornamentations. What shared textual/numeric datum, for example, could encourage all the ethnic data cells in columns K through V to turn orange? On the other hand, the columns brandish their colors all the way down to row 999, Google Sheets’ default row allotment maximum, though the data come to a halt at row 52.

Next, among other questions the exercise puts to us, we’re asked to “Take the average of the state mean earnings [presumably for Americans age 60 and over] and then look up the mean average for the US. Why do these numbers differ?” Again, devoting ourselves to the 60-and-older data in the “states, 2016” sheet, and more particularly the 60+Pop; mean earnings field in column BB, that average is realized easily enough. But what mean average for the US does the Times want us to look up, and how? Of course, that very requisition may contribute to the exercise; and so after a bracing scroll-through across the 419 fields bulking up the “2016, US” sheet I stepped atop its cell K2, the one reporting mean household earnings for the 60+ demographic of $65,289 (sans currency format). But my lookup was eyeball-driven, and certainly not under the steam of any maneuver typically entrusted to the redoubtable V- or HLOOKUP function. Those instruments, after all, assume we know the identity of the pertinent lookup value – and we can’t know that the value reads “60 years and over; Estimate; INCOME IN THE PAST 12 MONTHS (IN 2016 INFLATION-ADJUSTED DOLLARS) – Households – With earnings – Mean earnings (dollars)”, the header entry in cell KL1:

Times1

And so by “look up” I’m constrained to assume that the Times is asking of us a simple, unmediated, visual hunt for that information. In other words: look up, not LOOKUP.

And with the respective means – the national average recorded unitarily in the “2016, US” sheet and the state-by-state average figured in “states, 2016” – in hand, we can propose an answer to the question the exercise puts to us: namely why the two averages differ. The answer, I hope: that the state average accords equivalent weight to each of the 51 (Washington DC appears in the list) income figures irrespective of population size, while the single national figure in effect tabulates the earnings of every American, thus flattening out any skew.

And speaking as a mere auditor of the Times workshop, I’d pose the same strategic conjecture about the exercise question “Which 3 states have had the largest percentage increase in their residents who are above 60 over that time?” That is, I’d wonder if the Times expects its tutees to simply do the basic math and literally look for the three most prominent state increases – or rather, filter out a top three, a la Excel’s filter option.

But the filtering alternatives in Google Sheets can pull users in two very different directions. One pathway transports them to a filter view resembling the standard Excel dropdown-menu mechanism – but I can’t find a Top 10 (or 3) possibility registered among its capabilities here. The other byway to a top 3 drops us off at the most cool FILTER function, a de facto progenitor of the yet-to-be-released Excel dynamic array function of the same name; but its workings demand an intricacy not likely to be broached in a beginner class. Thus, I suspect that the Times again wants its learners to resort to simple visual inspection in order for them to glean that top 3.

As for the actual line-item math here, should you or a Times staffer choose to proceed with the exercise, I’d hammer in a new column somewhere in the “states, 2016” sheet and slot this formula in for the District of Columbia, the first “state” in row 2:

=D2/C2-J2/I2

The column references – D, C, J, and I – offer up the base and 60+ population data for 2016 and 2009. (And yes, the formula can make do without parentheses: the order of operations takes care of itself.)
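
Skeptics can of course pin the intent down with parentheses; this equivalent returns the same result:

=(D2/C2)-(J2/I2)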

Copy down the ad hoc column, and the trio of states divulging the largest increments in the 60+ cohort will be in there, somewhere.

And if the filter isn’t working for you, why not sort the numbers largest to smallest, and check out rows 2 through 4?

 

Philadelphia Police Complaints, Part 2: One Civilian’s Review

20 May

Now that I’ve spoken my piece about the putative redundancy of the two Philadelphia civilian police complaint workbooks, fairness insists that I actually take a look into the data that’s roosted inside worksheet number two – because there are findings to be found in there, in spite of all.

That book, ppd_complaint_disciplines, concentrates its gaze on the officers against whom civilians preferred a grievance. The complaints are additionally parsed into the districts in which the alleged offenses were perpetrated, and classed by type along with their dispositions.

Once we’ve widened the fields via the usual auto-fit we see that a good many of the complaint incidents identify multiple officers, as well as different allegations. We could then move to determine, for starters, the average number of allegations lodged per complaint. But that simple-sounding intention isn’t realized quite so straightforwardly, because we need to isolate the number of unique complaint ids in column A before we divide them into all the complaint particulars; and the elegant way out would have us travel here, to this array formula:

{=COUNTA(A2:A6313)/SUM(1/COUNTIF(A2:A6313,A2:A6313))}

The denominator – or rather the pair of denominators commencing with the SUM function – exemplifies a well-known array formula for calculating unique values in a range. The COUNTIF element subjects the A2:A6313 range of complaint ids to what are in effect criteria furnished by each and every record. Thus each of the four instances of id 15-0001 are assessed against every id entry, four of which of course happen to present the selfsame 15-0001. Thus each instance here evaluates to a count of 4, and the formula’s “1/ ” numerator reduces each to ¼ – and by adding four 1/4s a 1 is returned – tantamount to treating 15-0001 as a single instance. That reciprocal routine is applied to each and every value in column A and then summed – yielding in our case 2779. Divide that outcome into the field’s 6312 records and we wind up with an average of 2.24 allegations per complaint. (It should be added that Excel’s dynamic-array UNIQUE function would streamline the approach on which I’ve embarked here, but the dynamic arrays remain behind a locked door somewhere in Redmond, and I have no idea when the key will be found. Note as well that the dynamic arrays will only download to the Office 365 iteration of Excel.)
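
For the record, that streamlined rendition – again, contingent on an Office 365 build with UNIQUE aboard – would collapse to something like:

=COUNTA(A2:A6313)/COUNTA(UNIQUE(A2:A6313))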

But that average, however informative, doesn’t apprise us of the number of actual, discrete officers implicated by each complaint, because the same officer is often cited for multiple allegations laid to the same complaint. Again, for example, complaint 15-0001 and its four allegations actually identify but two different officers – and that is the number we’re seeking here, as it would go on to contribute to a real officer-per-case average.

One way – an inelegant one – for getting there would be to pour the data through the Remove Duplicates sieve, selecting the complaint_id and officer_id fields for the duplicate search. Click through and you’ll wind up with exactly 4700 unique, remaindered records, of which 313 are blank, however; and we can’t know how many of those ciphers do, and do not, point to a given officer but once per complaint. On the other hand, because most officers are in fact identified we can acceptably assume that for those complaints directed at multiple officers the unknown party is likely not one of the officers who are named. That supposition can’t dispel all our questions, of course, but divide 4700 by the 2779 unique complaints we derived above, and we learn that 1.69 distinct officers fell under investigative scrutiny per case – although the real quotient is probably a bit smaller.

In any event, that figure emerges at the cost of dismissing 1600 records from the data set, after which we can subject the stragglers to a formula, e.g.

{=COUNTA(B2:B4701)/SUM(1/(COUNTIF(A2:A4701,A2:A4701)))}

Inelegant indeed. For a sleeker alternative, we could first concatenate the complaint and officer ids in a new field in column I that I’m calling complaintofficer, e.g. in I2:

=A2&B2
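
One small hedge worth considering: wedge a delimiter between the two ids, guarding against the (admittedly unlikely) case in which two different id pairs concatenate to the same string:

=A2&"|"&B2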

That step positions us to cull unique officer ids by case; by running the unique-record array formula at these data we should be able to emulate the 4700 total and divide it by the other unique-finding expression aimed at the complaint ids:

{=SUM(1/COUNTIF(I2:I6313,I2:I6313))/SUM(1/COUNTIF(A2:A6313,A2:A6313))}

Elegance is a relative term, mind you, but it works.

On the other hand, if you wanted to develop a racial breakout of the number of individual officers charged with at least one complaint, you may here want to mobilize a Remove Duplicates by the po_race variable, because the breakout comprises multiple items (i.e. “races”; and if you adopt this tack you could save the results under a different file name, thus conserving the original, complete data set). That sequence yields 2549 separate officers, and conduces toward this pivot table:

Rows: po_race

Value: officer_id

officer_id (again, by % of Column Total)

I get:

blogphil1

Now of course the proportions tell us little, because we need to spread them atop the racial makeup of the Philadelphia police force before any exposition can rightly commence. Note by the way that only one UNKNOWN officer informs the count here, even as we earlier turned up 313 such blank records; that’s because, of course, all the unknowns have the same blank, “duplicate” id.

Returning to the data set in toto, we can distribute allegations by their varieties. Remember of course that the 2779 complaints have accumulated an average of 2.24 charges, but each charge is exactly that – a complaint in its own right. Thus this conventional pivot table:

Rows: allegations_investigated

Values: allegations_investigated

Allegations_investigated (again, % of Column Total)

reports:

blogphil2

Apart from the indefiniteness of the modal Departmental Violation type, we need to recall that our previous post numbered 2782 such allegations populating the ppd_complaints workbook we reviewed then. It seems clear that the count enumerated there imputed but one allegation per complaint, a coarser take on the data than the more detailed table we’ve just minted above. In the earlier sheet, for example, Departmental Violations contribute 24.05% of all complaints; here they amount to 31.07%.

We also need to explain why our array formula here totaled 2779 unique complaint ids, when the count in ppd_complaints came to 2782.  In that connection I simply copied the already-unique ids in the ppd_complaints to a new sheet, and directed a Remove Duplicates to ppd_complaint_disciplines keyed to the same field there, and copied these as well to the new sheet. Scrolling about and doing some due diligence, I did find a few discrepancies, e.g. an absent 15-0176 among the ppd_complaint_disciplines ids.

But what’s a mere three records between spreadsheets?

 

Philadelphia Police Complaints, Part 1: One Civilian’s Review

2 May

We’ve looked at civilian complaints about police conduct before – about three years ago, in fact, when I reviewed complaint data collected for the city of Indianapolis; and I’ve had to refresh my memory about that analytical encounter because a visit to Philadelphia’s open data site brought me to a similar, but not identical, record of allegations against the local constabulary. Indeed – a wider, cross-city study of how civilian complaints are conceived and organized might make for a most instructive, if subsidiary, comparative survey of spreadsheet construction.

But what of the Philadelphia complaints? In fact, two spreadsheets detail the histories here:

ppd_complaints

ppd_complaint_disciplines

The first, ppd_complaints, straightforwardly gathers the incidents into five fields via a neatly-sequenced id scheme, and its dates received in column B are genuinely quantified besides. You’ll want to auto-fit columns B and D, but probably not E, bearing the text-expressed summaries of complaints; because no field adjoins its right edge an auto-fit won’t realize any gain in visibility there. The data appear to span the complaints for 2015-18 in their entirety, and tack on complaints for the first month of this year as well (at least through January 30). Thus an obvious first reconnoiter would step us through complaint totals by year:

Row: date_received (Year)

Values: date_received (Count)

I get:   phil1

We see that civilian complaints have slackened steadily across the yearly bins, down 21% from their 2015 peak. Still, the January 2019 total of 31 seems low, projecting linearly for the year to about 360. But could it be, rather, that Januarys experience fewer complaints?

Of course we can answer the question by regrouping the complaint numbers both by year and month and shifting the month parameter (curiously and misleadingly holding fast to the date_received field name) into Columns:

phil2

We see then that January is something of a slow month for complaints, although 2019’s lowest reading suggests (but only suggests) that this year may drive the downward curve still further down its axis. Yet the figures for the contiguous December trend far higher – though a highly disproportionate accumulation of complaints dated the 31st of that month seems to prevail. Of the 267 December entries, 63 are time-stamped the 31st, even as chance would have projected a complaint total of around 9.

I arrived at the 63 by installing a temporary set of formulas in the next-available F column (don’t be fooled by the encroaching text in E – F is in fact free), starting with this one in F2:

=IF(AND(MONTH(B2)=12,DAY(B2)=31),1,0)

The formula asks if a given date evaluates both to the 12th month (December) and the month’s 31st day. Copy down, add the 1’s, and you get 63.
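
A one-cell alternative would dispense with the helper column altogether; the 2783 endpoint here assumes the sheet’s 2782 records start in row 2, so adjust as needed:

=SUMPRODUCT((MONTH(B2:B2783)=12)*(DAY(B2:B2783)=31))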

Is the skew a precipitate of some bookkeeping maneuver designed to hem complaints into a year about to elapse? I don’t know, but someone’s story-seeking antenna should be wagging madly. Indeed – 31 of the 87 December complaints lodged in 2015 fell on the 31st, a clustering that should have you reaching for your probability tables, and 17 of the 88 2016 petitions were similarly dated. That the December 31st numbers slinked back to 8 and 7 in 2017 and 2018 suggests in turn that some sort of correction was introduced to the archiving routine, but that speculation is exactly that.

We could continue and cross-tab complaint type – what the worksheet calls general_cap_classification – by year, and because the classes outnumber the years I’d slide the years into Columns for presentational fitness, plant general_cap_classification into Rows, and reprise it in Values. I get:

phil3

The categories beat out a relatively constant distribution, by and large, though Civil Rights Complaints – their small numbers duly noted – seem to have spiked in 2018. It should be added that the free-text precis of complaints in the summary field incline toward the vague, e.g., a civil-rights submission in which “…they were treated unprofessionally by an officer assigned to the 18th District,” a manner of understatement that could do with some expository padding (note too that the district references for the filings in the district_occurrence field report multiples of the actual district number, e.g. 1800 for 18).

But remember there is a second complaint worksheet among the Philadelphia holdings, ppd_complaint_disciplines, one that historicizes the same complaints and bears identical ids, but per a different complement of defining parameters. Here the complaints identify the race and gender of the officers charged, along with the disposition of the allegation brought against them (investigate_findings). Thus two sheets instigate a next question: since the sheets recall the same incidents, could they not have been consolidated into a single workbook, with each record roping in the fields from each sheet?

That question is a fair one indeed, but I think the reason the Philadelphia data compilers opted for two sheets over a single, unifying rendition is that ppd_complaint_disciplines comprises multiple references to the same complaint id. Because a given complaint may have been levelled at several officers the data for each officer are set down distinctly, aggregating to 6312 records, about 3500 more than the ppd_complaints sheet in which the complaints are recorded uniquely. If each of these were to be matched with the complaints cited in the latter, those complaints would in many cases appear several times – once each for every officer charged in the incident, and those redundant citations might read awkwardly. But those reiterations aren’t necessarily superfluous, because some complaints triggered different charges.

The very first complaint entry, for example, 15-0001, is enumerated four times in the ppd_complaint_disciplines sheet, corresponding to the four actual complaint entries registered for the incident. But it gets a bit messier than that, because the four complaints in fact reference only two discrete officers, who are “named” in the officer_id field. It’s clear in this case that two charges were preferred against each, a recognition that uncovers another complication in turn: the allegations_investigated field discloses three Departmental Violation charges, and another brought for Verbal Abuse.

Yet 15-0001 is described in the ppd_complaints sheet as an instance of Departmental Violations only. For another example 15-0005 is named in that sheet as a case of Physical Abuse, even as the one officer charged in ppd_complaint_disciplines incurred two complaints, one for Criminal Allegation, the other for Harassment.

It’s possible, then, that because each of a set of multiple charges for the same complaint could be regarded independently, one workbook might suffice, with perhaps the date_received and summary fields appended to the others in ppd_complaint_disciplines.

In that light it should also be noted that the Indianapolis data at which I looked earlier works with but one workbook, featuring multiple complaint records as well as date information. Perhaps Philadelphia could have done the same.

But I’m not complaining.

The Hockey Stick Effect: Wayne Gretzky’s Goals, Part 2

12 Apr

There’s another parameter-in-waiting pacing behind the Wayne Gretzky goal data, one that might be worth dragging in front of the footlights and placing into dialogue with the Date field in column B. National Hockey League seasons bridge two calendar years, generally strapping on their blades in October and unlacing them in April. For example, Gretzky’s last goal – time-stamped March 29, 1999 – belongs to the 1998-1999 season, encouraging us to ask how those yearly pairings might be sprung from the data, because they’re not there yet.

Of course, a catalogue of Gretzky’s season-by-season scoring accumulations is no gnostic secret; that bundle of information has been in orbit in cyberspace for some time (here, for example), and so developing those data won’t presume to teach us something we don’t already know. But the seasonal goal breakdowns could be joined to other, more putatively novel findings awaiting discovery among the data, and so the exercise could be justified.

So here’s my season-welding formula. Pull into next-available-column R, head it Season, and enter in R2:

=IF(MONTH(B2)>=5,YEAR(B2)&"-"&YEAR(B2)+1,YEAR(B2)-1&"-"&YEAR(B2))

We’re looking to concatenate two consecutive years, and so the formula asks if the month of any given entry in B equals or exceeds 5, or May, or falls beneath that value. If the former, the year in B is joined to the following year, via the +1 addendum. If the month precedes May, then the preceding year, operationalized by the -1, is concatenated with the year returned in the B column. That last goal of March 29, 1999, for example, bears a month of 3, and so lands in 1998-1999.

The formulas seemed to work, but as a precision check I rolled out this simple pivot table:

Row: Season

Values: Season (count, of necessity; the data are textual. The values should denote goal totals by respective season).

I wound up with this, in excerpt:

Gretz1

Cross-referencing the results with the Gretzky goal data in the above hockey-reference.com link yielded a pleasing equivalence across the years.

Now for some looks in earnest at the data. Starting simply, we can juxtapose Gretzky’s goals scored at home to the ones he netted in away games:

Row: Home/Away

Values: Home/Away (count)

Home/Away (again, % of Column Total)

I get:

Gretz2

We learn that Gretzky scored a palpable majority of his goals at home, but we’d expect as much. As in nearly all team sports, NHL teams enjoy the proverbial home advantage, winning about 55% of the time – a near-equivalence to Gretzky’s ratio. That is, if home teams prevail disproportionately then their goal totals should exhibit a kindred disproportion, kind of. One difference with Gretzky, of course, is that he simply scored more of them.

And does the distribution of his goals by period pattern randomly? Let’s see:

Rows: Per (for Period)

Values: Per (Count)

Per (% of Column Totals)

I get:

Gretz3

Gretzky’s production appears to mount in games’ later stages (OT stands for the overtime period), but that finding needs to be qualified on a number of counts. We’d need first of all to track Gretzky’s average presence times on the ice; that is, was he deployed more often as games advanced toward their denouements and his valuable self was sent ice-bound at clutch time? And we’d also need to plot Gretzky’s goal timings against the league averages for such things; and while I haven’t seen those data, we can assume they’re out there somewhere.

Next, it occurred to me that a look at the winning percentages of games in which Gretzky scored might prove enlightening, once the task was properly conceived. Remember that, as a consequence of his numerous multi-goal flourishes, Gretzky’s goals scatter across far fewer than 894 games. The idea, then, is to fashion a discrete game count across which the goals were distributed; and that sounds like a call for the Distinct Count operation we’ve encountered elsewhere (here, for example). Once we isolate the actual-game totals – which should be associated uniquely with game dates – our answer should follow.

And this pivot table seems to do the job, enabled again by a tick of the Add this data to the Data Model box:

gretz4

Rows: Result

Values: Date (Distinct Count, % of Column Total)

I get:

gretz5

What have we learned? Apart from the up-front factoid that Gretzky scored in 638 of the 1487 games he played across his NHL career (638 is the numeric Grand Total above, before it was supplanted by the 100% figure in the pivot table; note Gretzky also appeared in 160 games in the World Hockey Association), we don’t know how his when-scoring 64.89% win percentage compares with his teams’ success rate when he didn’t score. I don’t have that information, and don’t know where to track it down. But it too is doubtless available.

For another analytical look-see, we can ask if Gretzky’s goals experienced some differential in the number of contributory assists that prefaced them. That is, players (up to two of them) whose passes to a teammate conduce toward the latter’s goal are rewarded with an assist; and the question could be asked, and answered here, if Gretzky’s assist-per-goal average fluctuated meaningfully. We might seek to know, for example, if during Gretzky’s heyday his improvisatory acumen freed him to score more unaided goals than in his career dotage, when he may have been bidden to rely more concertedly on his mates.

Since two Assist fields, one for each of the two potential per-goal assists, accompany each goal, the simplest way perhaps to initiate our query would be to enter column S, title it something like AssistCount, and enter in S2:

=COUNTA(J2:K2)

And copy down. That insurgent field readies this straightforward pivot table:

Rows: Season

Values: AssistCount (average, formatted to two decimals)

I get:

gretz6

Not much pattern guiding the data, but if you want to group the seasons in say, five-year bins, remember that because the season entries are purely textual you’ll have to mouse-select five seasons at a time and only then successively click the standard Group Selection command, ticking the collapse button as well if you wish:

gretz7

Even here, then, the variation is minute – strikingly so.

Now for a last question, about those teammates who were literally Gretzky’s most reliable assistants – that is, the players who assisted on his goals most often. The problem here is the two-columned distribution of the assist names, one for the first assist on a goal, the other for the (possible) second. I don’t know how a pivot table can return a unique complement of names across two fields simultaneously, preparatory to a count. If you do, get back to me; but in the meantime I turned again to the Get & Transform Data button group in the Data ribbon and moved to unpivot the data set via Power Query, by merging only the assist fields, e.g.:

gretz8

By selecting Assist1 and Assist2 and advancing to Transform > Unpivot Columns and Home > Close and Load the result looked like this, in excerpt:

gretz9

And of course you can rename Attribute and Value, say to Assist and Player.
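
As an aside: if all you’re after is a one-off count for a single teammate, a pair of COUNTIFs aimed at the two original assist columns will do, no unpivoting required (the 895 endpoint assumes the 894 goals occupy rows 2 through 895, and the name’s exact spelling in the data is likewise an assumption):

=COUNTIF(J2:J895,"Jari Kurri")+COUNTIF(K2:K895,"Jari Kurri")

But the pivot route scales the count to every teammate at once, so let’s press on.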

Once there, this pivot table beckons – after you click TableTools > Tools > Summarize with Pivot Table:

Rows: Player

Values: Player (Count, sort Highest to Lowest)

I got, in excerpt:

gretz10

Nearly 22% of Gretzky’s goals received a helping hand – at least one wrapped around a stick – from his erstwhile Edmonton Oiler and Los Angeles King colleague Jari Kurri, no scoring slouch either with 601 goals of his own – a great many doubtless the beneficiary of a Gretzky assist. Then slip the Assist field beneath Player in Rows and:

gretz11

Now we learn that more than 60% of Kurri’s assists were of the proximate kind; that is, he was the penultimate custodian of the puck, before he shipped it to Gretzky for delivery into the net.

Now that’s how you Kurri favor with the Great One.