Making Sense of Syria: Day 1
It’s been more than a year since the original “Making Sense of Syria” workshop. In that time the conflict has only become more complex, and I feel there’s no better time to bring the group back together and re-contextualize the use of data and its implications for conflict mediation. Richard Tyson and the now-official Special Projects Office organized this event, and they've done an incredible job of gathering a large group of thoughtful and talented participants. It’s only been one day, and I am already amazed at the diversity of perspectives, the quality of the ethical questions being raised, and the imagination behind potential uses for the data we've gathered.
This year the workshop will focus mainly on the city of Aleppo. For a large part of the conflict Aleppo was not involved in the fighting. Aleppo’s disinterest in the revolution was rooted largely in the presence of growing businesses and investments in the city, as well as its diverse population (in 2011 Aleppo had the largest Christian population in the Middle East). Currently the city is in a stalemate: after struggling through takeovers by the Islamic State of Iraq and Syria (ISIS), Aleppo is now divided down the middle between the regime and the opposition. Checkpoints across the region restrict access and travel, and there is only a single supply route through the city, which dramatically affects the local economy depending on who controls it.
This year we have the opportunity to work with four different data sets. Each offers insight at a different level of specificity, ranging from global rhetoric about Aleppo all the way down to day-to-day utilities and perceptions on the streets. Although each of these data sets is rich in information, the conversation today revolved around some key questions that speculate on the function of the data. The high-level questions, of course, are: How do we use the data? How do we understand it? And to whom is it most valuable?
At this level of conflict it becomes pointless to think in terms of who’s right and who’s wrong, or in other words, who’s the good guy and who’s the bad guy. There is no standard lens for legitimacy. The best we can do is think about the data objectively and map it in a way that tells a story. The inherent problem with this pursuit, however, is that by aggregating data into a navigable system you are building a tool that exposes relationships, and in the wrong hands those relationships can have severely negative impacts. This makes speculating on the value of the data problematic: a tool has to be built to understand the raw data, but once something that powerful has been built, who do you show it to?
It’s rare in academia that we are faced with ethical questions like these, or at least that they are a direct consequence of our creations. Access to this type of information is relatively new, and the rule book is still being written on how to manage the unintended consequences of creative curiosity. I feel blessed, however, to be in a room full of people who are asking these types of questions.
We ended today with a framework for processing this data, at least for our own purposes. Data doesn’t necessarily tell us what we don’t know; rather, it helps us hold more confidently what we do know and then allows us to see the gaps in our thinking. It became an important criterion to check the validity of the data we’re picking apart: Why is it here in the first place? What was its intended use? Is it from a credible source? By creating a methodology of checking premises, we bring an ethical ideology into every step of the process. It’s this foundation that then allows us to confidently begin asking the right questions about how the data can be used.
Looking forward to tomorrow.
You can read more about the event here.