While anonymity can support positive practices of sharing, it can also foster negative or even illegal practices. If a CIS is established with a high level of anonymity, it must therefore also manage the possibility that de-linking persons from data disrupts accountability. This means that a CIS needs to directly confront the tension between supporting anonymity and providing the means to hold people accountable/liable for choices and actions that are not in the public’s best interest and/or are against the law.
How can a CIS be set up in a way that balances accountability with anonymity?
How can the system support its users in taking responsibility for their actions and use of the system?
If a system flags up irregularities, what irregularities warrant flagging? Why?
What legal and ethical justification is there for what is logged about a user?
How are users made aware of what they are being held accountable for?
To what extent could or should the system offer the possibility of contextualizing logs?
One way of encouraging and upholding accountability/liability in a CIS is to make the system and its users auditable. This may be done through the collection of user profiles and trace histories in a log (e.g. a record of a user’s data inputs, additions, alterations, etc.). Methods such as tracking file access histories can help service providers and users to reduce issues such as abuse of the system, insecure programming interfaces, malicious insiders, data loss or leakage, and unknown risk profiles. However, the collection of trace histories also raises the potential for users to be tracked/surveilled and for such data to be mined and sold.
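As a minimal sketch of what such a trace history might look like in practice, the following shows an append-only audit log keyed to pseudonymous user identifiers (all names, identifiers, and field choices here are illustrative assumptions, not a description of any actual CIS). Pseudonymous identifiers are one way of partially reconciling the anonymity/accountability tension discussed above: actions remain attributable for audit purposes without directly naming a person.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class AuditEntry:
    """One record in a user's trace history (fields are illustrative)."""
    timestamp: str      # when the action occurred (UTC, ISO 8601)
    user: str           # pseudonymous user identifier, not a real name
    action: str         # e.g. "input", "addition", "alteration", "access"
    resource: str       # what was acted upon

@dataclass
class AuditLog:
    """Append-only log of user actions for later auditing."""
    entries: List[AuditEntry] = field(default_factory=list)

    def record(self, user: str, action: str, resource: str) -> AuditEntry:
        entry = AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            user=user, action=action, resource=resource)
        self.entries.append(entry)  # append only: entries are never edited
        return entry

    def trace_history(self, user: str) -> List[AuditEntry]:
        """All actions recorded for one (pseudonymous) user."""
        return [e for e in self.entries if e.user == user]

log = AuditLog()
log.record("user-7f3a", "addition", "incident-report/42")
log.record("user-7f3a", "alteration", "incident-report/42")
log.record("user-09bc", "access", "incident-report/42")
print(len(log.trace_history("user-7f3a")))  # → 2
```

Note that even this simple design embodies the trade-off the section describes: the same trace history that enables an audit also enables tracking, so access to the log itself would need governing.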
Logging user data raises various other issues, too. If people know that their actions are being logged, they may alter their behaviour to manage what is recorded; such caution can hinder the system’s ability to enable effective disaster risk management. Consideration should therefore be given to what expectations it is reasonable to place on emergency responders within the social contract societies have with them.
In a context where the system helps make people accountable/liable for their online practices, it also becomes important to make sure people are aware of which practices they could be held accountable/liable for. In other words, people need to develop a literacy of online norms, rules, and laws, and be aware of how these may shift in different cultural and jurisdictional contexts. For example, there are different understandings of privacy within the EU; depending on which context one is embedded in, the sharing of private information online may be seen very differently, with different ramifications for the person who has shared it.
Often faced with overwhelming complexity and adversity, emergency responders can come under immense pressure. While post-disaster reviews often praise exemplary efforts, they normally focus on mistakes to identify lessons and opportunities for improvement, frequently using digital logs of activities and communications to do so.
For example, the expert inquiry into the response to the 22/7 Norway attacks - where a lone terrorist first set off a bomb in the centre of Oslo and less than two hours later opened fire on the participants of a summer camp on the island of Utøya - was similarly based on the use of digital logs as evidence. Interestingly, though, it went beyond asking what was known to the responders by also considering what they could have known. The expert report found that just ten minutes after the explosion in the government quarter in Oslo, a call was received in the emergency call centre. The caller blurted out a lot of information, including a car registration number. The information was written down on a post-it note and taken into the control room, marked as important. However, it was not noticed until 20 minutes later in what was a very busy control room. And even when it was found, it was not sent out over the radio, and neighbouring counties were not notified. On the basis of such evidence, the experts argued that ‘the technical systems both for notification and for sharing information were very poor’ and the report concluded:
The authorities’ ability to protect the people on Utøya Island failed. A more rapid police operation was a realistic possibility. The perpetrator could have been stopped earlier on 22 July. (Gjørv, 2012, English version: 11)
While there were no legal liability proceedings for the individuals involved, the report’s findings have had a significant negative impact on the reputation of the police and emergency services, and worries about individual liabilities arising from the logging of data practices within digital environments are an emerging concern amongst practitioners.
Büscher, M., Liegl, M., Perng, S., and Wood, L. (2014). How to Follow the Information? A Study of Informational Mobilities in Crises. Sociologica, 1. [DOI]
Cartlidge, E. (2012). Aftershocks in the Courtroom. Science, 338(6104): 184–8. [Link]
Ellebrecht, N. and Kaufmann, S. (2014). Boosting Efficiency Through the Use of IT?: Reconfiguring the Management of Mass Casualty Incidents in Germany. International Journal of Information Systems for Crisis Response and Management (IJISCRAM), 6(4): 1-18. [DOI] [Link]
Gjørv, A. B. (Ed.) (2012). Rapport fra 22. juli-kommisjonen [Report from the 22 July Commission]. Oslo. [Link]
Jones, T. (2012). Short Cuts. London Review of Books. [Link]
Related Key Terms