the-compiler
56 posts
[ARCHIVED] The Compiler was made up of curated research picks from around the web, brought to you by The Engine Room.
the-compiler · 8 years ago
A Médecins Sans Frontières ethics framework for humanitarian innovation, plus worked case studies
Published July 2016 by Médecins Sans Frontières
To address the lack of guidance for how to apply ethics frameworks in innovation projects without hampering the dynamic processes of innovation, Médecins Sans Frontières (MSF) has produced an ethics framework for humanitarian innovation, which it applies to three case studies.
Key points from the recommendations include:
Clearly identify the problem you are seeking to address, and what you expect to gain from the project.
Innovation projects have practical implications, and the simplest or most direct solutions may not be ethically appropriate.
Clarify how you will involve the end user from the start of the process.
Identify and weigh harms and benefits.
Ensure that the risk of harm is not carried by those who will not benefit.
Plan (and carry out) an evaluation that delivers the information needed to decide whether to continue or scale up the project.
the-compiler · 9 years ago
What do people working in the humanitarian sector think of drones?
Drones in Humanitarian Action - A survey on perceptions and applications. Published September 2016
The Swiss Foundation for Mine Action (FSD) has released a survey on how drones are viewed by people in the humanitarian sector - the first of its kind. It’s part of a wider initiative by the European Commission to study the use of drones in humanitarian crises.
Only one in ten respondents had actual experience with drones in humanitarian work. Those who had experienced them were more likely to see them as a good thing.
Overall, the majority (61%) of respondents viewed the use of drones in humanitarian work positively. The key benefit cited was that drones can provide faster, better access in dangerous situations or hard-to-reach areas. Interestingly, though, opinions were split (40% to 41%) on whether drones should be used in conflict settings.
Many emphasised that drones can improve – but not replace – the work of ground teams.
A significant minority (22%) viewed the use of drones unfavourably for three main reasons:
The technology creates distance between beneficiaries and aid workers.
Potential association with military applications.
Lack of added value delivered by the use of drones.
Most respondents agreed that clear guidance and rules for using drones in humanitarian work were needed, as well as better coordination and more experience with the technology. This is vital to ensure that the data collected with drones is handled safely and responsibly by humanitarian organisations. This was summed up well by one respondent:
“I have no doubt that there’s potential in the use of drones for humanitarian activities [but] basically, I don’t trust the humanitarian industry to use them responsibly at this point in time.”
the-compiler · 9 years ago
Are fact-checking sites making any impact? This research says yes (sometimes).
The Rise of Fact-Checking Sites in Europe. Published November 2016
A new report by Lucas Graves and Federica Cherubini surveys fact-checking sites in Europe - “a landscape which is remarkably diverse and fast-changing.” The survey raises questions about the role of fact-checking sites as a democratic institution: what counts as reliable data? Who has the authority to assess public truth? And how is accuracy balanced with other democratic principles such as openness and pluralism?
The survey found that fact-checkers have significantly different missions and identities.
Three-quarters of fact-checkers see themselves as journalists, but only 40% accept the label of “activist” or “policy expert”. Even fewer would use the labels “academic” or “technologist”.
One-third believe the most important goal is to inform citizens, while 23% believe it’s about holding politicians to account. 
A small group (10%) see themselves as reformers, pushing for policy change.
It also found that in Western Europe fact-checking is more frequently based in newsrooms, while it is usually NGO-based in Eastern Europe. This variation in structure, identity and mission can result in slightly different methodologies and approaches.
Most interesting is the survey’s findings on impact. As concerns grow about the dawn of a ‘post-fact’ or ‘post-truth’ age, what can realistically be achieved by holding public figures to account for their false statements?
Political impacts: 
“Politicians generally ignore fact-checkers”. Cases in which a politician abandoned a false claim or acknowledged the error publicly are rare.
“Public figures seem to word their arguments with slightly more care once fact-checking becomes established”. 
On rare occasions a fact-check can influence public discourse if one politician uses it against the other.
Use of media to increase impact: 
Half of fact-checking sites relied heavily on the news media to increase the reach and impact of their fact-checking. 
“Several fact-checkers complained that their work is sensationalised or misrepresented by journalists”, or they are not fully credited for the material. 
To maintain some control over their “media footprint,” most sites enter into media partnerships. However, these partnerships can be vulnerable to political pressure, particularly when the partner is a public media outlet.
the-compiler · 9 years ago
The Ethics of Big Data as a Public Good: Which Public? Whose Good?
Published August 2016
Linnet Taylor has written a useful paper on how corporations approach ‘donating’ their data to non-profit organisations. To do so, she looks at two case studies in which mobile network operators (Telenor and Orange) have shared Call Detail Records (CDRs) for nonprofit research projects.
She suggests that corporations are not refusing to share data to simply avoid risk, but are instead “navigating challenges...in nuanced, reflexive ways.” Moreover, she finds that they are helping to develop “new ethical approaches to data-sharing infrastructures and practices.” However, she warns against adopting a one-size-fits-all approach for sharing data:
You can’t standardise procedures for preventing harm: to make an accurate assessment, you need social and political knowledge of the scenario in which the data was collected. The procedures that are necessary also depend on whom the data is shared with, and in what form.
Existing procedures vary widely: Companies typically share data on a case-by-case basis with a variety of people, ranging from government security services to the company’s customers. Researchers, on the other hand, design projects that are tightly focused on one specific problem.
Taylor proposes that we should ultimately leave mobile operators to decide what to do with the mobile data they collect, because they have the best access to, and understanding of, the context in which they are operating - and because they will be inclined to protect their customers.
But this suggestion is based on the actions of two mobile network operators - would all mobile operators always act in the same way as Telenor and Orange? What about operators that are part-owned by governments, or are simply unaware of risks that the most marginalised groups are facing?  She also notes that companies would prefer to control the data-sharing process, as it increases the data’s scarcity value and focuses any good publicity on them.
the-compiler · 9 years ago
The Digital Divide Among Twitter Users and Its Implications for Social Research
Published October 2016
When Twitter data is being used for forecasting elections and disease outbreaks, it’s important to question how representative that data is. As using Twitter data becomes increasingly popular, Grant Blank conducted a multivariate empirical analysis of British and US Twitter users, finding that:
In both countries, Twitter users are disproportionately members of elite groups.
British users are younger, wealthier, and better educated than other Internet users - who are in turn younger, wealthier, and better educated than the offline British population.
American Twitter users are also younger and wealthier than the rest of the population, but they are not better educated.
He also notes a number of problems with research that uses Twitter data:
Because accessing all tweets is too expensive, researchers only use a subset of tweets.
Because available data does not include private tweets, researchers find it difficult to make claims about the quality of the data.
The rates of tweeting are extremely skewed among Twitter users.
Twitter is used differently from other social media platforms. As such, the Twitter data needs to be treated distinctly.
All this underlines that researchers shouldn’t be using Twitter data as a proxy for research on the population as a whole, or even the subset of the population that is online. 
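If Twitter data must be used anyway, one standard partial remedy (a general survey-statistics technique, not something from Blank's paper) is post-stratification: reweight the sample so its demographic mix matches the target population's before computing an estimate. A minimal sketch with made-up illustrative numbers:

```python
# Post-stratification sketch: correct a demographically skewed Twitter
# sample by reweighting to population shares. All numbers are illustrative,
# not taken from Blank's study.

# Share of each age group in the Twitter sample vs the target population.
sample_share = {"18-29": 0.50, "30-49": 0.35, "50+": 0.15}
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Mean outcome observed per group in the sample (e.g. support for a policy).
group_mean = {"18-29": 0.70, "30-49": 0.55, "50+": 0.40}

# Naive estimate: each group weighted by its (skewed) sample share.
naive = sum(sample_share[g] * group_mean[g] for g in group_mean)

# Post-stratified estimate: each group weighted by its population share.
weighted = sum(population_share[g] * group_mean[g] for g in group_mean)

print(round(naive, 4))     # biased toward the over-represented young group
print(round(weighted, 4))  # closer to what the full population would say
```

Note that reweighting only fixes skew along variables you can observe and match to population figures; it cannot correct for the private-tweet and sampling problems listed above.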
the-compiler · 9 years ago
Digital technology for health sector governance in low and middle income countries: a scoping review
September 2016 
Isaac Holeman, Tara Patricia Cookson and Claudia Pagliari have published an interesting review of how tech has been used to strengthen health sector governance in low- and middle-income countries (LMICs). It’s a useful piece of research, in part because the use of technology in this sector hasn’t been explored in depth before.
The review identified 15 initiatives, grouping them into four categories:
Gathering and verifying information
Data aggregation and visualisation
Mobilisation
Automation and auditing
Some interesting examples include a tool that allows purchasers to verify that their medicines are not counterfeit using SMS, and a Peruvian interactive documentary that uses mobile phones and radio to tell the stories of indigenous people affected by a government policy of sterilisation.
The review finds the initiatives’ results “undeniably mixed”, and raises three overarching points:
Projects tend to emphasise data or transparency alone, often to the disadvantage of the project. The report suggests that initiatives should focus on driving reform by establishing “feedback loops” between citizens and the government, rather than settling on transparency or better information as the end goal.
Projects should pay more attention to human-centred and participatory approaches that involve end-users in design processes and continually adapt to feedback. This could help digital interventions adapt to local complexities and, more importantly, it would reflect the “democratic view that ordinary people should have a say in matters that affect them”.
“Understanding local patterns of technology use is… vital when determining which components of good governance interventions should be digitised and which are better left offline.” The authors found that lower rates of internet access and mobile phone ownership among women and vulnerable groups has made some interventions less effective.
the-compiler · 9 years ago
Data Ethics - investing wisely in data at scale
September 2016
David Robinson and Miranda Bogen at Upturn have mapped ways in which “data at scale” may pose risks to beneficiaries and funders. (By “data at scale,” they mean collecting, storing, analysing or using digital information in ways that have recently become possible.)
They found that grantmakers didn’t know enough about the benefits and risks of using data because of: the rapid growth in the use of technology; grantmakers and beneficiaries’ lack of data science expertise; limited training and resources; and a lack of clear guidelines for tackling new risks.
They identify three main challenges in data-oriented grantmaking:
1. The collection and use of data from the public sector isn’t carefully regulated, meaning that such data can now be used in a growing variety of potentially harmful ways.
2. Although there are many benefits related to the use of data at scale in decision-making, the use of automation may also risk reinforcing longstanding social biases.
3. Expertise for using data at scale is concentrated in certain large companies and government organisations. The nonprofit sector and academic researchers need to be prepared to harness these powerful new methods and shape how they are used across society by investing in training and fostering expertise.
They suggest that major foundations have a unique role to play in shaping norms around how this data is treated, and include a set of key questions for program staff and grantees to consider in data-intensive work. Finally, they recommend that funders incorporate data ethics into the grantmaking process and create data ethics checklists for grantees and program staff. For a report with similar themes based on research conducted in 2013-14, see this link.
the-compiler · 9 years ago
Transforming governance: What role for technologies?
Read the report here
Duncan Edwards has published a summary of some key themes from a February event run by Making All Voices Count, on learning about where tech is useful in projects that try to lead to more accountable governance.
It’s worth a read because it summarises some important recent research about what makes projects more likely to succeed (which we'll post on The Compiler soon). We also like that they put the report on a microsite as well as a PDF. We've summarised the summaries and highlighted a couple of questions from the report here:
'Thick and thin engagement': thick engagement describes small on- or offline groups where people discuss a range of views on an issue, then decide how they want to solve problems; thin engagement covers mainly online activities where people express their opinions, make choices or affiliate themselves with a particular group or cause.
Question to look into: Do too many tech-for-governance projects focus on thin engagement (which is quicker and easier), without trying hard enough to combine it with thick engagement?
Vertically integrated action means strategically engaging at many different pressure points within different structures and levels of civil society and government. For example, it could involve projects that combine policy advocacy with citizen-based protests, all acting upon the same issue in different ways.
Looking at accountability ecosystems means understanding how different groups looking at an issue interact with each other. Those groups could include multiple actors from grassroots groups to government reformers and multiple levels of government (from district- to national-level). Then, the idea is that you select multiple tools and approaches (including advocacy, monitoring, legal empowerment and investigative journalism) depending on which groups are strongest in particular areas.
Question to look into: Both approaches seem to put a lot of importance on working in coalitions or alliances, but this requires a lot of time and resources. What's best for organisations who can't afford that time, or who can't form links with best-placed actors (whether because of limited civic space, political differences or otherwise)?
the-compiler · 9 years ago
Civil Society Watch report
Download the report here
CIVICUS, the global alliance of civil society organisations, has published its Civil Society Watch report for 2015.
They find that in 109 countries, civil society organisations were denied at least one of the following:
freedom of association
freedom of expression
freedom of peaceful assembly
The country-by-country breakdowns of developments over the last year are essential reading. A few things that we picked out:
There was a [year-on-year] increase in harassment and physical violence targeting civil society activists, professional journalists and citizen journalists, as well as a rise in targeted assassinations. "More and more states are failing their commitments under international law and reneging on their duty to protect and enable civil society."
There are fewer and fewer spaces for civil society organisations to meet or assemble. The most common violations were of the right "to set up, join or operate a formal or informal group to take collective action" (in 85% of the countries assessed) and the right to "gather publicly or privately to collectively express, promote, pursue and defend common ideas or interests" (two-thirds of countries).
In response to this, more activism is taking place online, meaning that activists are increasingly exposed to digital security risks. Many states monitored and blocked websites; and threatened, detained and sometimes imprisoned bloggers and online activists. Other states are implementing or actively considering social media and internet regulations.
the-compiler · 9 years ago
Data visualisation: Contributions to evidence-based decision-making
Read the whole report here
Imogen Robinson at SciDev has just published a report for researchers thinking about whether data visualisations could get more people to read and act on their research.
They've talked to a lot of people from different countries and different types of organisations, as well as including some interesting analytics from their own articles that show how much higher interest and engagement is when an article includes a visualisation.
We've picked out four key questions that the report suggests anyone should ask before deciding to create a visualisation to summarise their research:
Is high-quality, interesting data available?
Does your target audience have the capacity to access and understand data visualisations? This can include skills related to things such as language, statistics, visual literacy, computers and critical thinking, all of which earlier research suggests influence how people interact with visualisations. It also means thinking about whether the audience has the bandwidth and technology to view the visualisation.
Do you have the production skills to make visualisations on your team, or can you access them? Priority skills are data analysis, visual design, digital skills, and storytelling or journalism skills.
Do you have organisational buy-in to build a team to work on visualisations and invest in developing these skills?
They also suggest some hypotheses for what makes visualisations most effective:
Data visualisations are most effective when they are based on topical issues.
Data visualisations should be tailored to the target audiences’ interests and needs to be most effective.
[Data visualisations] that follow good design principles and provide clear links to the original data are likely to be most effective.
It would be great to test these empirically in future.
the-compiler · 9 years ago
Using mobile phones to collect feedback
Download report here
Between 2013 and 2016, World Vision UK managed a pilot on the best ways of designing and implementing beneficiary feedback mechanisms, with INTRAC, CDA Collaborative Learning Projects and SIMLab.
One of the approaches they tested was using mobile phones (specifically SMS and voice calls) to collect and monitor feedback from beneficiaries in two projects in Hargeisa (Somaliland) and Iringa (Tanzania); both pilots also offered suggestion boxes as an alternative channel. The pilot throws up some insights on designing projects around a particular technology tool, and some pitfalls to avoid.
What did they find?
"Take-up was much lower, mobile phone ownership much less common, and female access to phones much more constrained than had been suspected." For example:
In the Tanzania case study, 76% of all feedback was received through suggestion boxes, with only 16.5% and 13.5% coming from SMS and voice calls, respectively.
"Predominantly male ownership of technology had a significant chilling effect on the use of the mobile [feedback mechanism] by women." For example, in cases relating to gender-based violence...67% of SMS respondents were male, compared with 58% of those who used the suggestion box.
So, what does this mean for other organisations?
Don't rely on technology alone. The researchers suggest that without suggestion boxes, feedback levels could have been significantly lower, and highlight that:
"multichannel approaches are critical not only to reach different people, but for the same people to turn to for different purposes."
They suggest that "the BFM might have been less distorted by gender and more heavily used" if it had included radio and automated interactive voice response systems.
They also pointed to two potential reasons why the project's expectations about mobile usage were wrong, highlighting things that others could watch out for:
when people were asked about technology usage during the planning period, participants could have responded overly positively to ensure that the project still came to their community.
staff were more familiar with technology than the community they were trying to reach, and did not fully understand "the true politics of technology use by women in these communities."
the-compiler · 9 years ago
Monitoring Conflict to Reduce Violence: Evidence from a Satellite Intervention in Darfur
Download paper here
Grant M. Gordon’s research investigates the impact of Amnesty International USA’s “Eyes on Darfur”, the first satellite intervention that aimed to reduce violence in the Darfur region of Sudan. It highlights some important points about unintended consequences of using technology in projects that monitor human rights violations or election irregularities.
Gordon estimated the impact of Eyes on Darfur by using a dataset from the US State Department’s Humanitarian Information Unit on village population in Darfur, and the year that they experienced conflict in the period 2003-10 - as well as interviews with staff who worked at Amnesty during the programme.
What did he find? Areas monitored by Eyes on Darfur were associated with a 15 to 20 percentage point increase in violence, compared with unmonitored areas. This wasn’t just during the period when Amnesty was monitoring the areas - violence continued there in subsequent years. (As Gordon points out, this could be because Amnesty didn’t inform the Government of Sudan that monitoring had stopped - but in any case, it doesn’t diminish the overall trend.)
Monitoring also didn't seem to simply displace violence elsewhere - but neither did it seem to protect the area where it was implemented: there was no increase or decrease in violence in villages that neighboured the villages monitored by Eyes on Darfur.
Gordon suggests some reasons why this might be the case (including that Eyes on Darfur may have offered the Government of Sudan a low-cost way of assessing whether the Janjaweed had performed the tasks they had been assigned), but suggests that it was most likely that the Government was attempting to retaliate against Amnesty, dissuade it from monitoring areas and deter other human rights organizations from engaging in the same behaviour. (The Government of Sudan had prohibited most organizations from operating in Darfur, targeted their workers or made it very costly for them to operate - but could not otherwise halt the satellite monitoring.)
Retaliating against Amnesty could only be done by targeting the communities they aimed to protect.
So, what does this mean for human rights organisations? It's important to note that Amnesty started by monitoring a small number of villages because they were concerned about this possibility, and that they decided not to expand the program after getting anecdotal evidence that it had increased violence. More generally, Gordon suggests that the results show that satellite monitoring did have an impact, and that they:
speak to the underlying potential for limited external interventions conducted by human rights organizations to change the strategic calculus of actors involved in genocide, even if they cause violence.
He then suggests “two scope conditions under which advocacy-driven monitoring may fail”:
“Human rights interventions may be more successful in the early days of conflict when actors are still invested in their reputations and believe negotiated settlements are still in sight.” [During the period when Eyes on Darfur was in place, the Sudanese Government was already set on its path.]
Monitoring might be less likely to reduce violence in contexts where “actors have strong beliefs and are unlikely to care about international audience costs that fall short of inciting a military intervention.”
the-compiler · 9 years ago
Designing for Data Literacy Learners
Download paper here
We really like this bit of research from Rahul Bhargava and Catherine D'Ignazio because it pins down some key practical elements that anyone trying to help people learn about using data should remember. They point out that though more and more tools are now available to help people start to work with data, they rarely help people learn more deeply: "a tool that is easy to learn is not necessarily designed to support rich learning.” As they put it, many tools “introduce themselves as ‘magic’, explicitly hiding the mental models and software operations that they run through to produce their outputs."
They recommend that tool designers and educators “design from the start with...strong pedagogical principles in mind,” referring to Paulo Freire and Seymour Papert in particular. If you’re building a tool, aim to do so according to four design principles:
Focused: A focused tool does one thing well - giving enough space for a learner to experiment, but not so many options that they get lost.
Guided: guiding a learner through an example activity as soon as they start (potentially creating example outputs too).
Inviting: Making tools appealing and non-intimidating to learners by using relevant information, or funny and playful approaches that encourage learners to experiment.
Expandable: Pitching a tool at the right level for the learner's abilities, while also giving them a path to find out more about how the tool works and learn more deeply about the issue.
They also come up with a helpful definition of data literacy as the ability to read, work with, analyse and argue with data:
Reading data: understanding what data is, and what aspects of the world it represents.
Working with data: creating, acquiring, cleaning, and managing it.
Analysing data: filtering, sorting, aggregating, comparing, and performing other such analytic operations on it.
Arguing with data: using data to support a larger narrative intended to communicate some message to a particular audience.
the-compiler · 9 years ago
Data Visualization for Human Rights Advocacy
View the article
A team from New York University���s schools of Law and Engineering (Katharina Rall, Margaret L. Satterthwaite, Anshul Vikram Pandey, John Emerson, Jeremy Boy, Oded Nov and Enrico Bertini) has published some research into how human rights organisations are adopting data visualisations.    
Amid large amounts of anecdotal evidence in this area, the research collects some useful, rigorous findings. Here are three we wanted to highlight:
1. What did they do? Content coding of Amnesty International and Human Rights Watch fact-finding reports published in English as PDFs in the years 2006, 2010, and 2014.
What did they find?
The number of visual features included in reports almost tripled between 2006 and 2014, and organisations are using data visualisations in both old and new areas of their work. 
However, major human rights organisations were only using data visualizations relatively rarely in comparison with photographs, which were the most common visual features used. [Engine Room note: it would be interesting to see if this trend persists in 2016.] 
2. What did they do? An experimental user study on whether data visualisation can make a message more persuasive, comparing viewers’ responses to data presented through bar charts and line charts with their responses to data in tables.
What did they find? 
Graphical information can be more persuasive than text...in some situations. 
Human rights advocates have an opportunity to “use experimental findings to refine their strategic choices about using different types of charts and tables when targeting communications at different types of audiences.”
"There is a consistent trend that graphical information (data presented through charts) is more persuasive than textual information (data presented through tables) under certain conditions...
"For people who did not have a strong prior opinion on the issue, charts were more persuasive than tables. Tables...seemed to outperform charts when the participants...had a strong initial attitude against the persuasive message." [Statistical significance could only be confirmed for the former of these two findings].
3. What did they do? An experimental user study on how visualisation techniques could mislead users. They chose common distortion techniques - including some that they’d found in the human rights reports from point 1 - and presented viewers with deceptive visualisations using synthetic data based on real-life human rights issues.
What did they find?
There’s evidence that data visualisations can and do mislead viewers:  
Participants who were presented with a deceptive visualization which intentionally exaggerated the message to be drawn from the data did perceive the underlying message in its exaggerated form at a statistically significant rate. 
Similarly, participants who were presented with a visualization that suggested a reversal of the message to be drawn from the underlying data were deceived at a very high rate.
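The classic distortion of this kind is the truncated y-axis (our illustration of the general technique, not necessarily one of the specific distortions the study tested): starting the axis above zero inflates the apparent ratio between two bars. A quick sketch of the arithmetic, with made-up numbers:

```python
# How a truncated y-axis exaggerates the difference between two values.
# Illustrative numbers only; not data from the NYU study.

a, b = 100.0, 110.0  # underlying values: b is really only 10% larger than a

def visual_ratio(a, b, baseline):
    """Ratio of drawn bar heights when the y-axis starts at `baseline`."""
    return (b - baseline) / (a - baseline)

honest = visual_ratio(a, b, baseline=0)      # bars drawn from zero
deceptive = visual_ratio(a, b, baseline=95)  # axis truncated to start at 95

# With the truncated axis, b's bar is drawn three times as tall as a's,
# even though the underlying difference is 10%.
print(honest, deceptive)
```

The same arithmetic explains why the study's participants read the exaggerated message off the chart: the drawn heights, not the underlying values, carry the visual impression.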
the-compiler · 9 years ago
OpenData.Innovation: an international journey to discover innovative uses of open government data
Download paper here
Mor Rubinstein (Open Knowledge International) and Josh Cowls and Corinne Cath (Oxford Internet Institute) have looked at how open government data is being used in five countries - Chile, Argentina, Uruguay, Israel, and Denmark - through interviews with 'social hackers', open data practitioners and experts in those countries.
They make various recommendations, as well as coming up with four elements that are crucial for open data to be used in innovative ways:
1. availability: an established sustainable system for the publication of open data 
- as seen in the contrast between the Buenos Aires local government's positive attitude to open data and that of the national-level government.  
2. literacy: the ability of users to acquire and use this data
- as shown by the fact that “Chile's open data ecosystem is lacking key actors to mediate between the open data community and other stakeholders in society” [including data journalists].
3. urgency: a culture encouraging fast-paced problem-solving 
- as shown by Denmark, whose "seeming lack of pressing problems" and high levels of trust in the state may have contributed to a more limited focus on civil society usage of open data.
4. community: strong cross-national and cross-sectoral networks to support these uses of data
 - as with the collaboration between Uruguay's digital government agency and DATAUY, a volunteer-run civil society group, or the LATAM community in Latin America. In Israel, by contrast, “political circumstances make regional cooperation impossible”.
the-compiler · 9 years ago
Mapping and Comparing Responsible Data Approaches
Download paper here [PDF]
by Jos Berens, Ulrich Mans, and Stefaan Verhulst
This paper, released in July 2016, looks at the scope and reach of various policies which address elements of responsible data. Interestingly, only one of the 17 policies chosen for inclusion here is actually labelled as a “responsible data policy” (from Oxfam) - the others are labelled as addressing Data Protection, Privacy, or Data Sharing. 
The paper goes through various elements of these policies, looking at similarities and differences between them. The takeaways from this analysis centre around implementation of policies (leadership, accountability, feedback loops), accessibility (clear language), and learning (what works, what doesn’t, and iterating.) 
The paper seems to be aimed at those working in the humanitarian space rather than more broadly, but has useful and interesting lessons for people looking at creating privacy/data sharing policies. 
the-compiler · 9 years ago
Mapping Refugee Media Journeys
The "Mapping Refugee Media Journeys" project, by the Open University and France Médias Monde, looks at how Syrian and Iraqi refugees use technology and suggests ways of using this to help the most vulnerable refugees.
It’s based on interviews with more than 50 Syrian and Iraqi refugees in France and the UK, analysis of refugee social networks (Facebook and Twitter), and interviews with the European Commission, international media and NGOs.
Here are some of the key points we picked out:
Unsurprisingly, responsible data was a big concern: 
Refugees will not share personal information online, preferring to remain anonymous for fear of reprisals, surveillance, detention and/or deportation.
The digital tools and resources that help, guide and comfort refugees are also used to exploit, monitor and track them.
For example, researchers found that interviewees followed news on Whatsapp mainly because they trusted that the service (unlike Twitter and Facebook) wasn’t under surveillance.
Refugees also rarely used resources produced by national or state-funded organisations, and were often driven towards “unofficial, potentially dangerous and exploitative resources” as a result.
The report gives practical recommendations to deal with this (informed by Aspiration Tech guidelines and Oxfam’s Responsible Data policy, among others). They include designing tools that don’t ask refugees to disclose any information about themselves, and recommending that any information sources include warnings about the known risks of financial exploitation by taxi/private drivers or smuggling networks.
Highlighting the daily changing nature of the situation in border areas, the researchers criticised the plethora of hackathons to create tools, saying:
any resource must be frequently updated in order to avoid it doing more harm than good with misinformation....there is a real danger that quick tech fix initiatives are not viable or sustainable. A sustainable resource of the kind that international news organisations might provide would offer a more viable alternative.
They argue that any digital outreach project should be “highly personal” and ensure that "trusted individuals” have a continuous physical and digital presence at key sites. In particular, they recommend drawing on sustainable existing networks (such as by encouraging refugees to ask for help and counselling from local NGO staff) and understanding shifting local conditions through on-the-ground experience. They also found that, despite the range of tools available, there was little content available in key areas, notably "relevant high quality legal information...and sources of information about language learning facilities”.
Download paper here