
[Image: before and after a JAEA remediation pilot, via Safecast]

A recent update on how the Safecast project is tracking radiation remediation in Japan in the aftermath of the nuclear disaster in Fukushima got Ethan Zuckerman thinking.

After trying his hand at radiation monitoring himself, the director of MIT’s Center for Civic Media connected citizen science and data collection with civic participation in a long essay at his blog:

“If the straightforward motivation for citizen science and citizen monitoring is the hope of making a great discovery, maybe we need to think about how to make these activities routine, an ongoing civic ritual that’s as much a public duty as voting.”

Stepping back from the specific instance, the big idea here — citizens monitoring infrastructure — is not a novel one, at least in the world of development, where people have tried to apply information and communication technology (ICT) to the problem for years. It just hasn’t been particularly well executed in many places yet.

Ethan points to “Shovelwatch,” a collaborative effort by ProPublica and WNYC to track federal stimulus projects in the United States.

While the (now defunct) Recovery tracker enabled people to query data, it offered no way for people to share pictures of progress alongside reports like the one below.
[Image: report on an ARRA-funded reflecting pool project]

The new opportunities afforded by the increasing penetration of mobile devices, Internet connectivity and sensors are balanced by challenges to making them work well, from data quality to cultural context.

First, citizen engagement matters for transparency initiatives, as Lee Drutman observed at the Sunlight Foundation this summer. People won’t become involved in monitoring infrastructure or projects unless they know such a need or opportunity exists.

Second, the incentive structure matters. Why should people contribute pictures or data? Are there monetary rewards? A lottery ticket? Public recognition for participation by government or media organizations?

Third, who participates matters, in terms of project design. Leveraging “citizens as sensors” doesn’t work as well in places where the men have the cellphones and the women fetch the water. Pilots in Africa that asked people to send text messages about whether water pumps were working haven’t fared well.

Fourth, there needs to be a low barrier to participation. If people are going to be involved, bandwidth constraints and user-centric software design matter.

Those considerations don’t inherently mean that mobile monitoring projects like the USAID-funded mWater won’t work, but they’re worth watching closely as case studies.

All of those factors have led to increased interest in the development of inexpensive sensors that automatically collect data, as opposed to depending upon people to monitor.

Beyond Safecast, which is now developing the capacity to do air quality monitoring in Los Angeles, there are a growing number of projects focused upon environmental data collection.

For instance, the “Water Canary” uses inexpensive hardware to help monitor contamination in water supplies.

A new nonprofit, WellDone, is pursuing a similar approach to water pump monitoring, combining open source hardware, software and mobile messaging. A prototype of their monitor is pictured below:
[Photo: prototype of WellDone’s pump monitor]
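
The back end of this kind of SMS-based monitoring can start very simply: parse each inbound message into a structured report that can be stored and flagged. The Python sketch below is purely illustrative; the message format, status codes and field names are assumptions for the sake of the example, not WellDone’s actual protocol.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical inbound SMS format: "PUMP <id> <OK|DOWN> [liters-per-hour]"
# e.g. "PUMP 042 OK 310" or "PUMP 017 DOWN". This is an illustrative
# convention, not WellDone's real message schema.
MESSAGE_PATTERN = re.compile(r"^PUMP\s+(\d+)\s+(OK|DOWN)(?:\s+(\d+))?$", re.IGNORECASE)

@dataclass
class PumpReport:
    pump_id: str
    status: str
    flow_lph: Optional[int]   # liters per hour, if reported
    received_at: datetime

def parse_sms(body: str) -> Optional[PumpReport]:
    """Turn one raw SMS body into a structured pump report, or None if malformed."""
    match = MESSAGE_PATTERN.match(body.strip())
    if not match:
        return None
    pump_id, status, flow = match.groups()
    return PumpReport(
        pump_id=pump_id,
        status=status.upper(),
        flow_lph=int(flow) if flow else None,
        received_at=datetime.now(timezone.utc),
    )

if __name__ == "__main__":
    for sms in ["PUMP 042 OK 310", "PUMP 017 DOWN", "hello?"]:
        print(repr(sms), "->", parse_sms(sms))
```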

A new generation of data-driven journalists is also working on “sensoring the news.”

Although there’s a long way to go before sensor journalism experiments go mainstream, the success of the Associated Press independently measuring the air quality in Beijing during the 2008 Olympics shows how data collection has already provided an alternative narrative to government reports.

[Chart: Beijing air quality days compared]

Earlier this year, computer science professor David Culler created a prototype for “conscious clothing,” winning a $100,000 grant from the National Institutes of Health, the Office of the National Coordinator for Health IT, and the Environmental Protection Agency.

These wearable air quality sensors could be an important clinical tool for physicians and researchers to analyze changes in environmental conditions.

In the photo below, taken at the 2013 Health Datapalooza in Washington, DC, Culler shows off the prototype to Bryan Sivak, chief technology officer at the Department of Health and Human Services.

If you have other examples to share of projects that are collecting or creating data that’s used to hold government or corporations accountable, please share them in the comments or let us know on Twitter or email.

As more and more people become connected using social media, researchers, the media and public health officials are naturally becoming more interested in what their updates can tell us about the world. According to the Pew Internet & American Life Project, 18% of online American adults use Twitter, and the service now counts some 200 million active users globally sending 400 million tweets every day. That amount of data is catnip for researchers interested in everything from sentiment analysis to food security to emerging pandemics.

[Image: sentiment analysis visualization]

When people share what they’re seeing on social media, city managers and public health agencies have increasingly learned to listen, in the hope of responding more effectively to fires, floods, odd smells, tornadoes, crimes and other public emergencies.

Some of the reported results are genuinely exciting, too: according to Robert Kirkpatrick, the director of UN Global Pulse, Twitter data accurately predicted the cholera outbreak in Haiti two weeks earlier than official records did. Kirkpatrick’s team is now examining the predictive value of millions of tweets sent in Jakarta, Indonesia, for assessing food security.

There are caveats to crunching Twitter data as a proxy for what’s happening in a given region or industry. As Pew Research showed in March 2013, what’s happening on Twitter is not necessarily representative of public opinion, nor of a given population as a whole.

The perils of polling Twitter are particularly worth noting for media, as New York Times interactive developer Jacob Harris demonstrated this July.

That said, a growing number of projects are exploring the potential of Twitter for socially networked transparency. In Chicago, health authorities are seeking out Chicagoans who tweet about feeling ill and asking them to share which restaurants they ate at most recently. Chicago’s health department told the Chicago Tribune that 150 Chicagoans had been contacted since the “Foodborne Chicago” initiative began, triggering 33 restaurant inspections in the first month, some of which found health code violations.

This kind of “high touch, high engagement” human approach requires a lot of humans, however, whose time is hard to scale over an entire city.

Further to the east, a research group at the University of Rochester analyzed millions of tweets in New York City to develop a system to monitor food-poisoning outbreaks at restaurants. The research crunched 3.8 million tweets, traced 23,000 restaurant visitors and found 480 reports of likely food poisoning.
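
Neither the Chicago system nor the Rochester research is reproduced here, but the first pass of such a pipeline is conceptually simple: screen a stream of tweets for language suggesting food-poisoning symptoms and route the hits to a human for follow-up. The Python sketch below is only an illustration of that idea; real systems use trained classifiers, location tracing and far richer features, and the keyword list and tweet format here are invented.

```python
# Minimal illustration of the screening step in a tweet-based food-poisoning
# monitor. Production systems rely on trained classifiers and restaurant
# tracing; the keywords and tweet structure below are illustrative only.

SYMPTOM_KEYWORDS = {
    "food poisoning", "stomach ache", "stomachache", "threw up",
    "vomiting", "nausea", "diarrhea",
}

def looks_like_food_poisoning(tweet_text: str) -> bool:
    """Crude keyword screen; a real system would score text with a classifier."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in SYMPTOM_KEYWORDS)

def screen_tweets(tweets):
    """Return the subset of tweets that merit a human follow-up."""
    return [t for t in tweets if looks_like_food_poisoning(t.get("text", ""))]

if __name__ == "__main__":
    sample = [
        {"user": "@diner1", "text": "Pretty sure that taco gave me food poisoning"},
        {"user": "@runner2", "text": "Beautiful day along the lakefront"},
    ]
    for hit in screen_tweets(sample):
        print("Follow up with", hit["user"], "->", hit["text"])
```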

As Henry Kautz highlighted in his column on public health and social data in the New York Times, there’s considerable interest in what can be gleaned from what people are sharing online:

“Groups at Brigham Young University and the University of Iowa have done extensive work on influenza monitoring via Twitter posts. Researchers at Microsoft are helping to identify women who are at risk of severe postpartum depression by analyzing changes in their online behavior. And researchers at Cornell are mining the social media stream to gather data for urban planning and environmental conservation.”

As always, anyone making public policy decisions based upon such data will have to take into account who is represented in the data or who is not.

That’s true of efforts like Lungisa in South Africa, too, where a project is encouraging residents to hold government accountable using Twitter and Facebook. The issues raised on social media reflect the needs of the connected, not necessarily those of the poor; the powerful, for their part, have their own channels to influence policy.


We’re looking for more examples of socially networked transparency, so please keep them coming. Some digging has already turned up rich veins for inquiry. For instance, in the most recent installment of his “week in civic innovation,” David Sasaki shared two helpful resources, which in turn link to many more projects and initiatives:

1) The Crowdsourced International Civic, Democratic and Transparency Website List, maintained by mySociety, has a long list of global efforts.

2) This list of Transparency and Accountability Resources & Initiatives, from the Engine Room, has a more US-centric collection.

If there’s something important happening in your town, state or industry, please let us know.

Calls for increased “transparency” have become a rallying cry across industries and governments, as consumers and citizens look for more information about what they’re eating, buying, breathing and drinking, and about how they are being governed. But what does “transparency” mean, and for whom?

Targeting transparency can have unexpected outcomes, particularly as humans and societies adopt and adapt novel technologies to suit their needs or goals. For instance, research at Northwestern found hospital report cards could actually decrease patient welfare. The dynamics of transparency are complex, given that systems for reporting may reveal corruption, fraud or abuse by powerful interests but can also expose people to retribution or discrimination.

[Radiation plumes in Japan]

“The challenge is to create and design transparency policies that actually work for people and don’t just waste time or create a bunch of information that’s difficult to understand or make organizations go through the fairly expensive processes of collecting information that nobody then goes on to use,” said professor Archon Fung, in a recent interview. “The policy challenge has to do with designing transparency policies so that they produce information that is actually highly valuable to people and that people can take action on.”

Many of the perils and promise of transparency have been explored at length in “Full Disclosure,” by the directors of the Ash Center’s Transparency Policy Project, from calorie counts to restaurant inspections. As is so often the case, such research raises as many questions as it answers. When and how do consumers respond to new information? What factors influence whether private companies respond to disclosure mandates by reducing the risks posed to consumers or improving practices? Where and when should policy makers apply disclosure versus other policy tools?

The answers to all of these questions are further complicated by the introduction of networked systems for communication and disclosure, particularly the emergence of powerful mobile devices and social media. The Ash Center is actively looking for examples of socially networked transparency systems that reduce risks and provide new tools for citizens and consumers to navigate the world. Examples of networked transparency include:

  • Hospital ratings by individuals that may help other patients avoid the risk of medical error or infections
  • Websites like PatientsLikeMe that may provide earlier warning of drug side effects, safety or effectiveness problems
  • Civic media services like Safecast, which leverage citizen science and public data to inform the public about radiation risks
  • Data sources like Google Flu Trends, where the actions of individuals provide tacit information that can augment existing systems for early warning of outbreaks

In each of these examples, information reported by many individuals based on their experience, aggregated from reliable reports, or gathered by sensors is collected and then disseminated in a way that makes the associated risks to the public more visible and transparent. Such networked transparency systems can then be adapted and used to inform individual choices or change the behavior of the entities creating the risks, saving lives or reducing harm.

Over the next several months, the Transparency Policy Project will be looking for more examples of networked transparency, from grassroots efforts created by public laboratories to reporting systems created by governments.

As you’d imagine, the people collaborating on the research (including the fellow writing this post) will be looking for tips, feedback, comments and links from you. Please email your ideas or pointers to alexanderbhoward [at] gmail.com, or reply to @digiphile or @sunshinepolicy on Twitter. Each week, we’ll gather together what we’ve learned to date and share a digest at this blog.

The Transparency Policy Project’s dual role as an organizer and participant at the Bridging Transparency and Technology workshop in Glen Cove, New York gave us an opportunity to reflect on this emerging field at many different levels.

Prior to the workshop, our conversations with the NGOs that participated revealed that their organizations were wrestling with similar tensions in their advocacy work. Common themes included:

  • Sustaining public interest in tech-transparency projects beyond the novelty of a first website visit.
  • Targeting tech-transparency projects to citizens, but finding that the media and other NGOs were the primary consumers of information.
  • Weighing the strategic advantages of open data approaches versus embargoing information to generate maximum accountability outcomes.
  • Linking online transparency efforts with offline accountability effects.
  • Finding the right metrics to capture the impact of their work.

These themes helped us shape the “arc” of the Glen Cove event. As Allen Gunn of Aspiration Tech noted in How we are designing the agenda for our Bridging Sessions, we aimed to “match needs to knowledge”.

At the workshop we saw a lot of cross-pollination take place between the advocacy strategies of groups working in natural resources governance and the extractive industries, and all the amazing tech tools that already exist for collecting, displaying and disseminating information. We were inspired by the passion, skill and ingenuity everyone brought to the table, and we are motivated by the potential harbored in proposed collaborations between NGOs and technologists that emerged at the workshop.

Based on workshop discussions and post-event conversations with participants, we put forth three lessons to inform the Bridging effort going forward:

  1. Articulate your strategy. Advocacy groups and technologists alike gained an understanding of the challenges and opportunities that exist in the growing technology for transparency space. A desire remains to delve deeper into deconstructing different types of strategies for transparency advocacy, and understanding how technology can be a lever in achieving accountability.
  2. Context matters. A lot. Discussions reinforced the importance of understanding the political environment and context within which technology approaches are implemented and advocacy groups operate. A greater diversity of perspectives – particularly from the developing world – would enrich this discussion and help in evaluating the impact of technology for transparency.
  3. The data exists, so hack! The hands-on opportunity to “hack” transparency projects and demonstrate how existing technology tools and approaches can be implemented quickly and effectively was a valuable experience for participants. Interactions between technologists and NGOs that lead to concrete projects and outcomes must be supported and sustained.

As we move forward with the Bridging initiative, the Transparency Policy Project will engage the Glen Cove groups in reflecting together on how to implement transparency and accountability projects. We are also keen to develop more innovative and tailored approaches to measuring the impact that this work is having in advancing transparency and accountability. By articulating strategies, understanding context, and hacking projects, we hope to sharpen our collective understanding of how best to leverage technology tools to improve outcomes for arguably some of the most wicked problems on this planet.

Francisca Rojas, research director, Transparency Policy Project (original post)

Huge and impenetrable government databases – technically public but inaccessible in practice – have long hidden critical health and safety information. Consumers and patients need emerging knowledge about product defects, drug side effects and service flaws to choose safe cars, cribs, doctors, medicines and much else. Government has long collected this kind of information from us: manufacturers, retailers, medical experts and consumers send in millions of stories every year about unexpected problems that cause deaths or injuries. But, despite efforts toward more open government, shoppers and patients often can’t get access to this developing knowledge to make smart choices.

Now software developers are filling the gap – making important clues about health and safety risks hidden in government-gathered information easily available to consumers – and hoping to make a profit, of course. Two examples to watch: AdverseEvents Inc. and Clarimed LLC are firms that translate dense data about unexpected drug side effects and medical device malfunctions into usable information for patients and doctors. Can such transparency save lives and improve product safety? The CEO of Clarimed told Melinda Beck of the Wall Street Journal: “The best way to drive quality improvements is to make things crystal clear and transparent as possible.”

Mary Graham, co-director — Transparency Policy Project

A much talked-about innovation in public policy has been the push to achieve greater transparency and accountability through open government strategies, where the public has access to government information and can participate in co-producing public services. At the Transparency Policy Project we have been investigating the dynamics behind one of the most successful implementations of open government: the disclosure of data by public transit agencies in the United States. In just a few years, a rich community has developed around this data, with visionary champions for disclosure inside transit agencies collaborating with eager software developers to deliver multiple ways for riders to access real-time information about transit.

Transit agencies have long used intelligent systems for scheduling and monitoring the location of their vehicles. However, this real-time information had previously been available only to engineers inside agencies, leaving riders with printed timetables and maps that, at best, represent the stated intentions of a complex system that can be disrupted by traffic, weather, personnel issues and even riders themselves.

Recognizing riders’ need to access this information on the go and in digital form, Bibiana McHugh of Portland’s TriMet agency worked with Google in 2006 to integrate timetable data into Google Maps, an effort that eventually became Google Transit. McHugh went further, publicly releasing TriMet’s operations data: first the static timetables, and eventually real-time, dynamic data feeds of vehicle locations and arrival predictions. Local programmers have responded with great ingenuity, building 44 different consumer-facing applications for the TriMet system at no cost to the agency.
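
Schedule data of this kind is typically published in the open GTFS format, with live vehicle locations and arrival predictions in the companion GTFS-realtime format. As a rough sketch of what developers do with such feeds, the Python below reads vehicle positions using the open gtfs-realtime-bindings library; the feed URL is a placeholder rather than a real endpoint, and many agencies also require an API key.

```python
# Sketch: read live vehicle positions from a GTFS-realtime feed.
# Requires: pip install gtfs-realtime-bindings requests
# The feed URL is a placeholder; substitute the endpoint your agency publishes.
import requests
from google.transit import gtfs_realtime_pb2

FEED_URL = "https://transit.example.gov/gtfs-realtime/vehicle-positions"  # placeholder

def fetch_vehicle_positions(url=FEED_URL):
    """Return a list of dicts with trip id, position and timestamp from one fetch."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    feed = gtfs_realtime_pb2.FeedMessage()
    feed.ParseFromString(response.content)  # GTFS-realtime payloads are protocol buffers
    vehicles = []
    for entity in feed.entity:
        if entity.HasField("vehicle"):
            v = entity.vehicle
            vehicles.append({
                "trip_id": v.trip.trip_id,
                "lat": v.position.latitude,
                "lon": v.position.longitude,
                "timestamp": v.timestamp,
            })
    return vehicles

if __name__ == "__main__":
    for vehicle in fetch_vehicle_positions():
        print(vehicle)
```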

[Table: Transit Apps and Ridership by City]

Other transit agencies have adopted this open data approach with varying outcomes. The most successful agencies work closely with local programmers to understand which data is in demand and to troubleshoot and improve the quality of the data feeds. Programmers also make the link between end users and transit agencies by filtering up comments from app users. This iterative feedback loop relies on a champion within the agency to build strong relationships with the local developer community. Of the five transit agencies we studied, Portland’s TriMet and Boston’s MBTA exemplify this approach and have generated the highest ratio of apps per transit rider (see table). Meanwhile, the agency most reluctant to adopt open data, Washington, DC’s WMATA, has only eleven applications serving its customers.

The number of apps built by independent developers is important, indicating the variety of options riders have in selecting which interfaces (mobile, desktop, map-based, text, audio) and platforms best fit their needs to access transit information. As we have learned from our research on what makes transparency effective, simply providing information is not enough. Format and content matter, and should address the needs of a targeted audience. What we have seen in our study of transit transparency is that local programmers have been the critical intermediaries, taking raw data and generating a variety of information tools that transit agencies could not have imagined on their own. For other open government initiatives to spark this level of innovation and public benefit, they must identify their audience of information intermediaries and foster those relationships.

Francisca Rojas, research director – Transparency Policy Project
Original post on Google’s Policy by the Numbers blog (January 27, 2012)

Pedro Daire is in charge of technology for Chile’s Fundación Ciudadano Inteligente (“Smart Citizen”), a leader in employing technology for transparency in Latin America. Notable projects include Vota Inteligente to monitor the Chilean parliament, the Freedom of Information portal Acceso Inteligente, and most recently, a conflict of interest tracker that shines a light on parliamentarians, the legislation they support, and their personal investments. I spoke with Pedro via Skype on December 16, 2011 to capture his reflections on the Bridging Transparency and Technology (TABridge) workshop (which TPP participated in as well) held in Glen Cove, New York in early December. Pedro’s answers to my questions were so insightful on the matter of bridging tech and transparency that I decided to share a condensed version of our interview through this blog post.

Francisca: What was most valuable about participating in the TABridge event to the work you are doing and want to do in the future?

Pedro: The most useful insight from the event was learning from the experiences of Kert Davies of Greenpeace and Heather White from the Environmental Working Group (EWG). They have been working with information to advance their advocacy work since before technology became such a powerful tool, and as a result, they really know how to use information in strategic ways. They use technology as an amplifier, as a tool, and not an end in itself. In contrast, for younger people like me, technology is content, something in and of itself.

That insight, combined with the imperative to have an explicit theory of change behind our projects, an idea which Archon Fung guided us through at the workshop, is very useful for us. We are planning to analyze every project we do through the theory of change framework to guide our strategy and planning.

Did the event give you new ideas as to the role of technology in transparency efforts?

We have a very creative and innovative team at Ciudadano Inteligente, which means that we often have too many creative ideas! Our challenge is implementing those ideas and thinking about the impact chain of each project. We are very adept at developing tools for transparency, but don’t think about how that tool will be used to change or promote our end goals. We are too often focused on the tool and assume people will react to the information in impactful ways. But that’s not true. What I learned in the event is to attach a cause behind each tool.

We try to actively engage people with the information we provide, but we haven’t made a point to teach them what the next step is for them in terms of mobilizing around the information. With the conflict of interest tool for example, we should have considered incorporating an element to mobilize people to hold their legislators accountable from the beginning, but instead, we improvised a last-minute online petition when we realized this could be useful. I’ve come to see that facilitating these interactions is very important because otherwise, we have this citizen, but she’s only a spectator of the information. Only the most motivated people will react to the information. We need to say “hey, you’ve seen the info, now you should share this with five people,” and that’s much more than what we’ve done so far.

What should the TABridge program focus on as we work to support this network of practice?

The group that gathered at Glen Cove is like a ‘dream team’ of NGOs! Everyone who was there is very capable, enthusiastic and smart. The shared sense of purpose that we felt there – knowing that we are all concerned about the same issues and looking for the best ways to impact change – is very powerful. But there is a risk that we’ll forget that we’ve been together thinking about how to use tech for transparency in strategic ways.

We need the organizers to keep reminding us that we have common issues, challenges and problems. I don’t think they should be helping me to implement projects, however. I didn’t expect to return to Chile from the workshop with solutions. Rather, I expected to come home with doubts. We have already been successful with our work here in Chile, but it’s risky for us to keep holding those minor successes as triumphs. The main value of TABridge is to help us reflect on our work.

Francisca Rojas, research director – Transparency Policy Project (original post)

As part of our research with the Transparency and Accountability Initiative on how groups are using technology to enable more effective forms of transparency and accountability, we have come across the growing field of “citizen science” (also sometimes called “civic science” or “open science”). This do-it-yourself (DIY), grassroots approach to data collection is facilitated by the increasing availability of everyday electronic devices – particularly cell phones – which today can be easily equipped with cheap sensors to capture data on ambient conditions.

Here I highlight two groups that are developing tools and methodologies for communities to use in monitoring their environments: the Public Laboratory for Open Technology and Science (PLOTS) in Somerville, Massachusetts and the Living Environments Lab at Carnegie Mellon.

PLOTS is known for their grassroots mapping initiatives, in which they produce low-cost, high-resolution aerial imagery by building and deploying digital cameras attached to balloons and kites. In the Gulf Coast, for instance, this simple and inexpensive approach has facilitated community-led monitoring of the Deepwater Horizon oil spill that is more detailed and frequent than what is captured by satellites. This independent dataset of images is intended to track the environmental impact of the oil spill over time as a means to hold BP accountable in its restoration efforts. The grassroots mapping approach to impact transparency could also be applied to monitoring other types of threats like deforestation and land disputes. More recently, PLOTS has been deploying balloons to document and contest the size of public protests in Santiago, Chile (in partnership with the Fundacion Ciudadano Inteligente) and at #OccupyWallStreet. All of their work is open source, and on their website you’ll find step-by-step instructions on how to build your own tools for civic monitoring.

While PLOTS has mostly focused on documenting the view from above, the Living Environments Lab works on arming citizen scientists with cheap sensors to create a distributed network of ambient data collectors. An early experiment in Accra, Ghana, aimed to measure air pollution from automobiles by recruiting taxi drivers to collect this data with their mobile phones. The data not only revealed that air quality varies greatly throughout the city by time of day and location, but also showed that when citizens have access to this type of dynamic information, it can effect behavioral change to minimize exposure to harm. Researchers were surprised to learn that the taxi drivers who participated in the experiment were altering their routes so as to minimize their exposure to areas of the city with high levels of pollution.
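
A core step in this kind of distributed sensing is aggregating GPS-tagged readings by place and time of day so that spatial and temporal variation becomes visible. The Python sketch below shows one minimal way to do that; the reading format, pollutant field and grid size are assumptions for illustration, not the Accra study’s actual schema.

```python
from collections import defaultdict
from statistics import mean

# Each reading is assumed to carry a GPS fix, an hour of day, and a pollutant
# value (e.g. a particulate concentration). The format is illustrative only.
readings = [
    {"lat": 5.6037, "lon": -0.1870, "hour": 8, "pm": 61.0},
    {"lat": 5.6040, "lon": -0.1865, "hour": 8, "pm": 57.5},
    {"lat": 5.5600, "lon": -0.2057, "hour": 14, "pm": 23.0},
]

def grid_cell(lat: float, lon: float, cell_deg: float = 0.01):
    """Snap a GPS fix to a coarse grid cell (0.01 degrees is roughly 1 km of latitude)."""
    return (int(lat / cell_deg), int(lon / cell_deg))

def average_by_cell_and_hour(rows):
    """Average pollutant readings grouped by grid cell and hour of day."""
    buckets = defaultdict(list)
    for r in rows:
        buckets[(grid_cell(r["lat"], r["lon"]), r["hour"])].append(r["pm"])
    return {key: mean(values) for key, values in buckets.items()}

if __name__ == "__main__":
    for (cell, hour), avg in sorted(average_by_cell_and_hour(readings).items()):
        print(f"cell {cell} hour {hour:02d}: avg PM {avg:.1f}")
```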

The value for transparency and accountability in the approaches discussed above lies in their ability to 1) distribute and democratize data collection in affordable and scalable ways and 2) close the feedback loops of information. A challenge nevertheless exists in determining how representative these sources of data are in capturing problems or risks, and by extension, to what extent this community-generated data is actionable.

Francisca Rojas, Postdoctoral Fellow – Transparency Policy Project (original post)