As I wrote about in a recent blog post, I’m interested in the idea of building and fostering a data-driven culture within government and non-profit organizations. My inspiration is the work of Carl Anderson and his book Creating a Data-Driven Organization. While I’d been intending to let these ideas ferment ahead of the NYC DCAS Management Academy class I’m scheduled to teach in mid-January, I got the opportunity to take a first stab at presenting this during a recent Data Analytics for Managers class I taught as part of the professional development program for the City University of New York (CUNY).
While not a government agency, the CUNY system was chartered by the New York State Legislature, combining several colleges founded and run by the City of New York. The system is funded by both the State of New York and the City of New York, allowing employees to enjoy many of the same benefits as public sector workers. Researchers at the various CUNY schools also have a much closer relationship with city and state agencies thanks to this quasi-governmental status.
The participants in this class came from a diverse cross-section of the CUNY system, including facilities, administration, marketing, grants, and human resources across a number of the CUNY campuses around New York City. Almost from the start, the participants were raising key issues in building a data-driven culture in their offices and schools: concerns with the unreliable databases they use, the lack of support they received for using reliable analysis to guide decision making, and the general lack of comfort with data they noticed in those they worked with. These are concerns I’ve heard voiced before in my classes, and they aren’t unique to New York City.
Among their key struggles was getting buy-in from their supervisors and fellow employees for a more data-driven approach to their work. They were trying to better utilize the tools that would enable them to do more analysis, supporting their decision making with more and deeper insights gleaned from the data they collected or otherwise had available. Several specifically mentioned this in their introductions, expressing a hope they would learn techniques to deal with these issues.
Hearing this, I offered them the opportunity to forgo my usual afternoon open data discussion in favor of exploring Anderson’s ideas on building a data-driven culture, with the caveat that this would be impromptu and borrowed heavily from Anderson with little preparation or gestation. They agreed, and what followed was an interesting exploration of the key elements of Anderson’s thesis on how data-driven cultures are built.
Ultimately, I felt I was evangelizing the evangelizers. Some were already providing data leadership in their offices, demonstrating the value of more and better data analysis in successfully meeting (and exceeding) the expectations of their managers. Some were working with the near impossible task of wrangling large amounts of data into actionable insights. Others were just trying to improve the data work they were already doing. They were all keenly aware of how important the “central source of truth” (as Anderson would say) was for their job, particularly because their central source of truth (or “system of record” as I like to call it) was unreliable. Individual offices were still managing their own data in order to do their work and many could sympathize with the third of executives who make decisions on unreliable data.
The idea of “goals first” also seemed to resonate with the group. They were fully aware of what happens when data and metrics are used to put a positive spin on something after it’s been done, rather than having the “success metric” (as Anderson calls it) defined in advance and widely understood by everyone in the organization. They intuitively understood the value of broad data literacy, having taken it upon themselves to develop their own data literacy and that of their co-workers. We discussed “p-hacking” and its analogues in government and public institutions like CUNY. People at various levels want to show impact and will often select metrics that show the impact they want to promote, rather than the metrics that tell the whole story.
Less familiar to them was the idea of iterative development, which I explained in the context of software development. The idea of starting small and developing toward the eventual goal was something they seemed to quickly grasp, especially when I contrasted it with the approach of trying to plan out everything in advance in the waterfall method. There are too many examples to list of failed waterfall-style projects (an iterative approach certainly would’ve helped Healthcare.gov). It didn’t take much discussion for the basic concept to sink in.
While they were amused by the idea of making decisions based on the HiPPO (Highest Paid Person’s Opinion) in the room and intrigued by the idea of building a testing culture, I think these are key areas where it gets more challenging to import the experience of building data-driven companies to government. By their nature, governments are structured hierarchically. There are (ideally) clear lines of responsibility for the essential work that a government agency is tasked with doing. This is essential in a democracy such as ours that puts a value on accountability, particularly for dereliction or abuse of the public trust. The civil service is also embedded in a political system where the fortunes of elected officials rise and fall on the public’s perception of how well (or poorly) the government does the work of governing. The highest paid person is often the one who will suffer the most when things go wrong and has a keen interest in seeing everything go well, or at least be perceived as such.
Beyond just the perception problem, Google or Facebook can “move fast and break things”, but government agencies are responsible to their citizens to provide critical services today, tomorrow, the next week, month, year, decade, century, etc. As a private company, Facebook can go out of business tomorrow without causing a massive disruption in critical services people rely on for their survival (despite what they might think of this possibility).
Government is too important to fail at the tasks expected of it and therefore must be naturally cautious about how it goes about delivering those services. While there are plenty of examples of government successfully innovating in how it delivers services, the process of innovation is necessarily more conservative in government than in the private sector. Building a testing culture inside government will require building a tolerance for a testing culture outside of government.
While in an ideal world, we as citizens would all cheer an agency that tries new things and gladly admits when it makes a mistake, we don’t live in that world. Rather, we live in a world where criticism for faults, real or perceived, is swift, punishing, and merciless in the court of public opinion. It takes a different kind of leadership in government to accept the risks of trying something new and innovative, where failure is not only possible but welcomed as a learning experience. It means the scope of innovation is necessarily smaller, the consequences of failure lower, and the chances of success greater. That different kind of leadership is one that accepts challenges to closely-held opinions when they’re not backed up with data. The HiPPO should be a data-informed opinion, blending both critical analysis and experience. As Anderson quotes:
“Do you have data to back that up?” should be a question that no one is afraid to ask and everyone is prepared to answer.
This, along with the inquisitive, questioning culture Anderson advocates, is among the biggest challenges I see in bringing this to government, particularly to those agencies that have a history of very top-down decision making based on a limited or anecdotal understanding of the issues. It will also be a challenge for agencies with particularly technical tasks requiring a higher degree of specialization and potential liability concerns (the NYC Department of Transportation, for example). That’s not to say this can’t be done, merely that the lessons learned from private industry must be more thoughtfully adapted to the public sector to be useful.
Seeing these challenges, my students struggled with how they would be able to do this in their offices. I agree with them that this isn’t easy. I think the key is to pick low-hanging fruit. Like Michael Flowers did as Chief Analytics Officer with the NYC Mayor’s Office of Data Analytics, focusing on important but relatively easy tasks first makes it possible to introduce the benefits of data-driven decision making. Making clear the benefits of a data-driven culture can help build the support and infrastructure necessary for more complicated challenges. This helps build momentum and broaden support based on results.
One student shared her experience doing exactly this when she used a data-driven approach to turn around a project that had been plagued with problems. She was not only able to ultimately solve these problems, but to go above and beyond the expectations of those involved with respect to what the project could achieve. The demonstrable value of her approach helped build support, or at least tacit acceptance, among her colleagues. She didn’t do this on her own, however, and made clear her achievements were only possible with the support of her supervisor, but it showed the way for the other students seeking to do something similar in their offices. The challenge then becomes institutionalizing the practice so that when they move on, the people left behind continue to build and sustain the data-driven culture they’ve started.
I’m fully aware that this class of 15 people doesn’t represent the CUNY system as a whole, but is a self-selected group of people already drinking the Kool-Aid of data. I’m encouraged to continue this discussion with my students in future classes and to better develop the ideas we discussed. Anderson lays out a lofty vision of the data-driven organization, one I’m sure he still finds challenging to build at Warby Parker. There are benefits along the path of evolving organizations to be more data-driven, even if the progress is slow and the challenges daunting. These include the time, money, and potential lives saved through more and better use of data in decision making, as well as the increase in accountability and transparency in the delivery of services when decisions can be explained clearly using the data at hand.
Ultimately, this is what it means to govern in the 21st century, making cities not just smarter, but more responsive to the needs of both their citizens and their employees. I look forward to the insights that come and sharing them in future blog posts.