Last week, fourteen groups filed a public comment asking the Centers for Medicare & Medicaid Services (CMS) to disclose Medicare payments to providers. Should the recommendation be implemented, it would bring more transparency to health care costs in a system that needs it.

“We urge CMS to uphold its stated commitment to transparency and adopt a policy to promptly disclose, in an open format, payment data, with as much detail as practicable while protecting patient privacy,” recommended the organizations.

The Medicare provider charge data for hospitals showed significant variation, both within communities and across the country, in what hospitals charge for the same procedures. Providing more transparency into Medicare payments could help fight fraud and strengthen Medicare itself, argues Gavin Baker, an open government policy analyst at the Center for Effective Government, which signed the public comment.

The public has a fundamental right to know how government spends public funds. Medicare’s tremendous size and impact – $555 billion in expenditures, covering 49 million beneficiaries – make it a prime target for increased transparency. In fact, improper payments from Medicare alone are estimated at a whopping $44 billion – more than the entire budget of the Justice Department.

Releasing payment data would allow members of the public, including journalists and watchdogs, to help detect fraud or improper payments. That increased scrutiny could deter fraudsters, as happened with spending under the 2009 Recovery Act. This, in turn, could strengthen Medicare and help ensure its ability to continue playing its vital role in securing health care for America’s seniors.

The signatories to the public comment are a roll call of good government advocates, journalism organizations, think tanks and media outlets in the United States, demonstrating widespread interest in the data and a hint of the organizations that stand ready to make use of it.

Such a data release has a recent precedent: in May, the United States Department of Health and Human Services released open data comparing billing for the 100 most common treatments and procedures performed at more than 3,000 hospitals in the U.S. No patient privacy violations related to this release have been reported or demonstrated to date.
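
For readers who want to explore that variation themselves, here is a minimal sketch of summarizing the inpatient charge file. The file name and column names (“DRG Definition”, “Average Covered Charges”) are assumptions based on the 2013 release; verify them against the copy you download.

```python
# A sketch of measuring charge variation in the CMS inpatient charge data.
# Assumes the layout of the May 2013 release; column names may differ
# across versions, so adjust them to match the file you actually download.
import pandas as pd

df = pd.read_csv("Medicare_Provider_Charge_Inpatient_DRG100.csv")

spread = (
    df.groupby("DRG Definition")["Average Covered Charges"]
      .agg(["min", "median", "max"])
)
spread["max_to_min"] = spread["max"] / spread["min"]

# Procedures with the widest gap between the cheapest and most
# expensive hospitals:
print(spread.sort_values("max_to_min", ascending=False).head(10))
```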

Should CMS choose to publish Medicare payments to providers, it would make 2013 a watershed year for increased data-driven transparency into health care costs.

Image Credit: Images of Money

A recent update on how the Safecast project is tracking radiation remediation in Japan in the aftermath of the nuclear disaster in Fukushima got Ethan Zuckerman thinking.

After trying his hand at radiation monitoring himself, the director of MIT’s Center for Civic Media connected citizen science and data collection with civic participation in a long essay at his blog:

“If the straightforward motivation for citizen science and citizen monitoring is the hope of making a great discovery, maybe we need to think about how to make these activities routine, an ongoing civic ritual that’s as much a public duty as voting.”

Stepping back from the specific instance, the big idea here — citizens monitoring infrastructure — is not a novel one, at least in the world of development, where people have tried to apply information and communication technology (ICT) for years. It just hasn’t been particularly well executed in many places yet.

Ethan points to “Shovelwatch,” a collaborative effort by ProPublica and WNYC to track federal stimulus projects in the United States.

While the (now defunct) Recovery tracker enabled people to query data, there was no place for people to share pictures of progress on reports like the one below.
[Image: Recovery Act project report]

The new opportunities afforded by the increasing penetration of mobile devices, Internet connectivity and sensors are balanced by challenges to making these systems work well, from data quality to cultural context.

First, citizen engagement matters for transparency initiatives, as Lee Drutman observed at the Sunlight Foundation this summer. People won’t become involved in monitoring infrastructure or projects unless they know such a need or opportunity exists.

Second, the incentive structure matters. Why should people contribute pictures or data? Are there monetary rewards? A lottery ticket? Public recognition for participation by government or media organizations?

Third, who participates matters, in terms of project design. Leveraging “citizens as sensors” doesn’t work as well in places where the men have the cellphones and the women get the water. Pilots that asked people to send text messages about whether water pumps were working have not fared well in Africa.

Fourth, there needs to be a low barrier to participation. If people are going to be involved, bandwidth constraints and user-centric software design matter.
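
To make that barrier concrete, here is a minimal sketch of how a server might parse the kind of text-message pump report those pilots relied on. The “PUMP <id> OK|BROKEN” message format is hypothetical; each deployment defines its own keywords and conventions.

```python
# A sketch of server-side parsing for citizen pump reports sent by SMS.
# The message format is hypothetical, not any real deployment's protocol.
import re

REPORT = re.compile(r"^PUMP\s+(\w+)\s+(OK|BROKEN)\s*$", re.IGNORECASE)

def parse_report(sms_text):
    """Return (pump_id, status) or None if the message doesn't match."""
    match = REPORT.match(sms_text.strip())
    if match is None:
        return None  # unreadable reports need human follow-up
    pump_id, status = match.groups()
    return pump_id.upper(), status.upper()

print(parse_report("pump a17 broken"))   # ('A17', 'BROKEN')
print(parse_report("is the pump ok?"))   # None
```

The forgiving regex (any casing, stray whitespace) is the point: the lower the formatting burden on the person texting, the more reports actually make it into the system.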

Those considerations don’t inherently mean that mobile monitoring projects like the USAID-funded mWater won’t work, but they’re worth watching closely as case studies.

All of those factors have led to increased interest in the development of inexpensive sensors that automatically collect data, as opposed to depending upon people to monitor.

Beyond Safecast, which is now developing the capacity to do air quality monitoring in Los Angeles, there are a growing number of projects focused upon environmental data collection.

For instance, the “Water Canary” uses inexpensive hardware to help monitor contamination in water supplies.

A new nonprofit, WellDone, is pursuing a similar approach to water pump monitoring, combining open source hardware, software and mobile messaging. A prototype of their monitor is pictured below:
[Photo: prototype of WellDone’s water pump monitor]
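
As an illustration of what such a monitor’s software side might look like, here is a minimal sketch of a reporting loop. The endpoint URL, payload fields and sensor stub are all hypothetical, not WellDone’s actual design; on real hardware the reading would come from a flow meter.

```python
# A sketch of an automatic pump monitor's hourly reporting loop.
# URL and payload are placeholders; the sensor read is stubbed out.
import time
import random
import requests

REPORT_URL = "https://example.org/api/pump-readings"  # placeholder

def read_flow_liters_per_minute():
    return random.uniform(0.0, 20.0)  # stand-in for a real sensor read

while True:
    reading = {
        "pump_id": "A17",
        "flow_lpm": read_flow_liters_per_minute(),
        "timestamp": int(time.time()),
    }
    try:
        requests.post(REPORT_URL, json=reading, timeout=10)
    except requests.RequestException:
        pass  # connectivity is intermittent in the field; retry next cycle
    time.sleep(3600)  # hourly reports keep power and bandwidth use low
```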

A new generation of data-driven journalists is also working on “sensoring the news.”

Although there’s a long way to go before sensor journalism experiments go mainstream, the success of the Associated Press independently measuring the air quality in Beijing during the 2008 Olympics shows how data collection has already provided an alternative narrative to government reports.

[Chart: Beijing air quality days compared]

Earlier this year, computer science professor David Culler created a prototype for “conscious clothing,” winning a $100,000 grant from the National Institutes of Health, the Office of the National Coordinator for Health IT, and the Environmental Protection Agency.

These wearable air quality sensors could be an important clinical tool for physicians and researchers to analyze changes in environmental conditions.

In the photo below, taken at the 2013 Health Datapalooza in Washington, DC, Culler shows off the prototype to Bryan Sivak, chief technology officer at the Department of Health and Human Services.

[Photo: Culler and Sivak at the 2013 Health Datapalooza]

If you have other examples to share of projects that are collecting or creating data that’s used to hold government or corporations accountable, please share them in the comments or let us know on Twitter or email.

“Transparency” has become a rallying cry across industries and governments, as consumers and citizens look for more information about what they are eating, buying, breathing and drinking, and how they are being governed. But what does “transparency” mean, and for whom?

Targeted transparency can have unexpected outcomes, particularly as humans and societies adopt and adapt novel technologies to suit their needs or goals. For instance, research at Northwestern found that hospital report cards could actually decrease patient welfare. The dynamics of transparency are complex: systems for reporting may reveal corruption, fraud or abuse by powerful interests, but they can also expose people to retribution or discrimination.

[Radiation plumes in Japan]

“The challenge is to create and design transparency policies that actually work for people and don’t just waste time or create a bunch of information that’s difficult to understand or make organizations go through the fairly expensive processes of collecting information that nobody then goes on to use,” said professor Archon Fung, in a recent interview. “The policy challenge has to do with designing transparency policies so that they produce information that is actually highly valuable to people and that people can take action on.”

Many of the perils and promise of transparency have been explored at length in “Full Disclosure,” by the directors of the Ash Center’s Transparency Policy Project, from calorie counts to restaurant inspections. As is so often the case, such research raises as many questions as it answers. When and how do consumers respond to new information? What factors influence whether private companies respond to disclosure mandates by reducing the risks posed to consumers or improving practices? Where and when should policy makers apply disclosure versus other policy tools?

The answers to all of these questions are further complicated by the introduction of networked systems for communication and disclosure, particularly the emergence of powerful mobile devices and social media. The Ash Center is actively looking for examples of socially networked transparency systems that reduce risks and provide new tools for citizens and consumers to navigate the world. Examples of networked transparency include:

  • hospital ratings by individuals, which may help other patients avoid the risk of medical error or infection
  • websites like PatientsLikeMe, which may provide earlier warning of drug side effects or problems with safety and effectiveness
  • civic media services like Safecast, which leverage citizen science and public data to inform the public about radiation risks
  • data sources like Google Flu Trends, where the actions of individuals provide tacit information that can augment existing systems for early warning of outbreaks

In each of these examples, information from many individuals – reports based on firsthand experience, aggregations of reliable reports, or sensor data – is collected and then disseminated in a way that makes the associated risks to the public more visible and transparent. Such networked transparency systems can then be adapted and used to inform individual choices or to change the behavior of the entities creating the risks, saving lives or reducing harm.
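
As a minimal sketch of what that aggregation step can look like in code, assume readings arrive as (latitude, longitude, value) tuples, such as radiation counts per minute. The grid size and median summary here are illustrative choices, not any project’s actual method.

```python
# A sketch of aggregating point readings into a coarse grid so scattered
# individual reports become a visible picture of risk.
from collections import defaultdict
from statistics import median

def grid_key(lat, lon, cell_deg=0.1):
    """Snap a coordinate to the corner of its grid cell."""
    return (round(lat / cell_deg) * cell_deg,
            round(lon / cell_deg) * cell_deg)

def aggregate(readings, cell_deg=0.1):
    cells = defaultdict(list)
    for lat, lon, value in readings:
        cells[grid_key(lat, lon, cell_deg)].append(value)
    # the median damps outliers from miscalibrated or mislabeled sensors
    return {cell: median(values) for cell, values in cells.items()}

sample = [(37.42, 140.97, 310), (37.44, 141.02, 290), (35.68, 139.77, 38)]
print(aggregate(sample))
```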

Over the next several months, the Transparency Policy Project will be looking for more examples of networked transparency, from grassroots efforts created by public laboratories to reporting systems created by governments.

As you’d imagine, the people collaborating on the research (including the fellow writing this post) will be looking for tips, feedback, comments and links from you. Please email your ideas or pointers to alexanderbhoward [at] gmail.com, or reply to @digiphile or @sunshinepolicy on Twitter. Each week, we’ll gather together what we’ve learned to date and share a digest at this blog.

The Transparency Policy Project’s dual role as an organizer and participant at the Bridging Transparency and Technology workshop in Glen Cove, New York gave us an opportunity to reflect on this emerging field at many different levels.

Prior to the workshop, our conversations with the NGOs that participated revealed that their organizations were wrestling with similar tensions in their advocacy work. Common themes included:

  • Sustaining public interest in tech-transparency projects beyond the novelty of a first website visit.
  • Targeting tech-transparency projects to citizens, but finding that the media and other NGOs were the primary consumers of information.
  • Weighing the strategic advantages of open data approaches versus embargoing information to generate maximum accountability outcomes.
  • Linking online transparency efforts with offline accountability effects.
  • Finding the right metrics to capture the impact of their work.

These themes helped us shape the “arc” of the Glen Cove event. As Allen Gunn of Aspiration Tech noted in “How we are designing the agenda for our Bridging Sessions,” we aimed to “match needs to knowledge.”

At the workshop we saw a lot of cross-pollination take place between the advocacy strategies of groups working in natural resources governance and the extractive industries, and all the tech tools that already exist for collecting, displaying and disseminating information. We were inspired by the passion, skill and ingenuity everyone brought to the table, and we are motivated by the potential of the collaborations between NGOs and technologists that emerged at the workshop.

Based on workshop discussions and post-event conversations with participants, we put forth three lessons to inform the Bridging effort going forward:

  1. Articulate your strategy. Advocacy groups and technologists alike gained an understanding of the challenges and opportunities that exist in the growing technology for transparency space. A desire remains to delve deeper into deconstructing different types of strategies for transparency advocacy, and understanding how technology can be a lever in achieving accountability.
  2. Context matters. A lot. Discussions reinforced the importance of understanding the political environment and context within which technology approaches are implemented and advocacy groups operate. A greater diversity of perspectives – particularly from the developing world – would enrich this discussion and help in evaluating the impact of technology for transparency.
  3. The data exists, so hack! The hands-on opportunity to “hack” transparency projects and demonstrate how existing technology tools and approaches can be implemented quickly and effectively was a valuable experience for participants. Interactions between technologists and NGOs that lead to concrete projects and outcomes must be supported and sustained.

As we move forward with the Bridging initiative, the Transparency Policy Project will engage the Glen Cove groups in reflecting together on how to implement transparency and accountability projects. We are also keen to develop more innovative and tailored approaches to measuring the impact this work is having in advancing transparency and accountability. By articulating strategies, understanding context, and hacking projects, we hope to sharpen our collective understanding of how best to leverage technology tools to improve outcomes for some of the most wicked problems on the planet.

Francisca Rojas, research director, Transparency Policy Project (original post)

A much talked-about innovation in public policy has been the push to achieve greater transparency and accountability through open government strategies, where the public has access to government information and can participate in co-producing public services. At the Transparency Policy Project we have been investigating the dynamics behind one of the most successful implementations of open government: the disclosure of data by public transit agencies in the United States. In just a few years, a rich community has developed around this data, with visionary champions for disclosure inside transit agencies collaborating with eager software developers to deliver multiple ways for riders to access real-time information about transit.

Transit agencies have long used intelligent systems for scheduling and monitoring the location of their vehicles. However, this real-time information had previously been available only to engineers inside agencies, leaving riders with printed timetables and maps that, at best, represent the stated intentions of a complex system that can be disrupted by traffic, weather, personnel issues and even riders themselves.

Recognizing riders’ need to access this information on the go and in digital format, Bibiana McHugh of Portland’s TriMet agency worked with Google in 2006 to integrate timetable data into Google Maps, an effort that eventually became Google Transit. McHugh went further, publicly releasing TriMet’s operations data: first the static timetables, and eventually real-time, dynamic feeds of vehicle locations and arrival predictions. Local programmers have responded with great ingenuity, building 44 different consumer-facing applications for the TriMet system, at no cost to the agency.
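
The static timetable format that grew out of that collaboration became the General Transit Feed Specification (GTFS), and a companion real-time format followed. As a minimal sketch, here is how a developer might read a GTFS-realtime vehicle positions feed today; the feed URL is a placeholder, since each agency documents its own endpoint (TriMet’s, for instance, requires a free developer key).

```python
# A sketch of reading a GTFS-realtime vehicle positions feed.
# Requires: pip install requests gtfs-realtime-bindings
import requests
from google.transit import gtfs_realtime_pb2

FEED_URL = "https://example.org/gtfs-realtime/VehiclePositions.pb"  # placeholder

feed = gtfs_realtime_pb2.FeedMessage()
feed.ParseFromString(requests.get(FEED_URL, timeout=10).content)

# Print each vehicle's ID and current position.
for entity in feed.entity:
    if entity.HasField("vehicle"):
        vehicle = entity.vehicle
        print(vehicle.vehicle.id,
              vehicle.position.latitude,
              vehicle.position.longitude)
```

A few lines like these are the raw material behind every arrival-countdown app built on an agency’s feed, which is why the quality and openness of the feed matters so much.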

[Table: Transit Apps and Ridership by City]

Other transit agencies have adopted this open data approach with varying outcomes. The most successful agencies work closely with local programmers to understand which data is in demand and to troubleshoot and improve the quality of the data feeds. Programmers also make the link between end users and transit agencies by filtering up comments from app users. This iterative feedback loop relies on a champion within the agency who builds strong relationships with the local developer community. Of the five transit agencies we studied, Portland’s TriMet and Boston’s MBTA exemplify this approach and have generated the highest ratio of apps per transit rider (see table). Meanwhile, the agency most reluctant to adopt open data, Washington, DC’s WMATA, has only eleven applications serving its customers.

The number of apps built by independent developers is important, indicating the variety of options riders have in selecting which interfaces (mobile, desktop, map-based, text, audio) and platforms best fit their needs to access transit information. As we have learned from our research on what makes transparency effective, simply providing information is not enough. Format and content matter, and should address the needs of a targeted audience. What we have seen in our study of transit transparency is that local programmers have been the critical intermediaries, taking raw data and generating a variety of information tools that transit agencies could not have imagined on their own. For other open government initiatives to spark this level of innovation and public benefit, they must identify their audience of information intermediaries and foster those relationships.

Francisca Rojas, research director – Transparency Policy Project
Original post on Google’s Policy by the Numbers blog (January 27, 2012)