How HazAdapt Incorporates Humane Technology

Posted By:

Ginny Katz, MPH

September 1, 2022

HazAdapt is a Humanity-Friendly (HF) Information and Engagement Platform (IEP) shared between the public and emergency authorities.

Humanity-Friendly (HF) is an umbrella term for the standards of ethics in design and implementation that this technology adheres to. A technology achieves HF status when it incorporates three main tenets:
1) Humane technology standards of design and operation
2) Inclusive design and equitable resources
3) A community-centered approach and benefit

What is Humane Technology?

Humane Technology is an emerging value-centric standard for technology, based on the standards set by The Center for Humane Technology, that operates for the common good by implementing the following:

  1. Build with the understanding that technology is never neutral; the assumptions tech developers bring into their work must be addressed so that biases are not programmed into the technology.
  2. Build technology that is sensitive to human nature and doesn’t exploit our innate physiological or social vulnerabilities.
  3. Narrow the gap between the powerful and the marginalized, instead of increasing it, by providing empowering information that supports physical and social hazard coping capacity.
  4. Reduce greed and hatred instead of perpetuating them, which helps to build shared reality instead of division into fragmenting realities.
  5. Account for and minimize the externalities the technology generates in the world.

Humane technology is an ethics system that examines a technology's impact on its users and how the business behind it makes money. It matters because of cases like Meta's Facebook and Instagram, which exploit users with ads and can amplify hate and misinformation, causing negative mental health impacts and real-world life-and-death consequences.

Many emergency management and public safety entities use social media as free tools to supplement the shortcomings of mass notification and to connect with the public. Facebook in particular has been used as another channel for local emergency management to share information and engage in back-and-forth exchange with their communities (Homeland Security, 2018; Tran, Valecha, Rad, & Rao, 2021). While Facebook does expand the functional capacity of public and authority engagement (Ross, Potthoff, Majchrzak, et al., 2018), its algorithms were not designed for the ethical needs of this sphere. Built to boost posts with high engagement, Facebook feeds would amplify posts containing hate speech and misinformation because of the high engagement those posts incurred (Homeland Security, 2018; Roberts, Misra, & Tang, 2021; Tran, Valecha, Rad, & Rao, 2021).

Adding Facebook to a situation of civil unrest parallels pouring gasoline on a flame. Facebook whistleblowers have testified that its algorithms were designed to profit at the expense of the user, and that when Facebook had to choose between safety and profit, it consistently chose profit (Frances Haugen Written Testimony, 2021). By giving dehumanizing and hate-filled posts further reach and promotion, Facebook drastically increases the combustibility of an already hot situation. Facebook has been cited as playing a role in the January 6th riots (Merrill, 2022) and in deaths from misinformation and genocide fueled by viral hate speech in Myanmar and Ethiopia (Tran, Valecha, Rad, & Rao, 2021).
Additionally, its high ad load and engagement-boosting features have been cited as discriminatory (Ali et al., 2019), addiction-forming (Chakraborty, 2016), and polarizing (Besse et al., 2016), and have strong potential to detract from user resilience and cause more harm.


Humane Technology Standard Applied

1. Build with the understanding that technology is never neutral, therefore assumptions that tech developers bring into their work must be addressed in order to avoid biases being programmed into a technology. 

We created a process of regular feedback and evaluation to surface designer and developer bias. Design feedback systems such as GenderMag (Burnett et al., 2018), along with consulting third-party experts and people with lived experience, are helpful tools for discovering biases and rooting them out during the design process.

2. Build technology that is sensitive to human nature and doesn’t exploit our innate physiological or social vulnerabilities.

This means choosing algorithm success metrics that are not based on time spent, scrolls, attention, or maximizing consumption and interactions. It means not creating monetization plans or algorithms that boost content for its engagement value, which can drive polarization. And it means not creating content, content displays, or gamification that could be excessive or habit-forming, triggering dopamine release in ways that create the possibility of addiction through overuse.

This also includes following Privacy by Design principles: all personally identifiable data is stored on the user's device, not in the HF IEP's data stores. This ensures the user is always in control of their data and significantly reduces the potential harm if a malicious actor ever breaches the system.
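As an illustration of that on-device principle, here is a minimal sketch. The field names, the `LocalProfileStore` class, and `build_sync_payload` are all hypothetical, not HazAdapt's actual code; the point is the separation between what stays on the device and what may leave it.

```python
# Hypothetical sketch of the Privacy by Design pattern described above:
# PII never leaves the device; only non-identifying fields are synced.

PII_FIELDS = {"name", "email", "race", "gender", "sexual_orientation", "ethnicity"}

class LocalProfileStore:
    """Stands in for on-device storage that only the user controls."""
    def __init__(self):
        self._data = {}

    def save(self, profile: dict) -> None:
        self._data.update(profile)

def build_sync_payload(profile: dict) -> dict:
    """Strip personally identifiable fields before anything is sent off-device."""
    return {k: v for k, v in profile.items() if k not in PII_FIELDS}

profile = {
    "name": "Ada",                       # PII: stays local
    "ethnicity": "example",              # PII: stays local
    "preferred_language": "en",          # non-identifying: may be synced
    "subscribed_hazards": ["wildfire"],  # non-identifying: may be synced
}

store = LocalProfileStore()
store.save(profile)                    # full profile kept on the device
payload = build_sync_payload(profile)  # only non-PII leaves the device
```

The essential design choice is that the server-bound payload is built by subtraction from an allowlist of local-only fields, so identifiable data cannot be sent by accident.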


3. Narrow the gap between the powerful and the marginalized instead of increasing that gap through providing empowering information that supports physical and social hazard coping capacity. 

By making information exchange consensual and accessible, an HF IEP removes many access barriers, systemic limitations, justifiable concerns, and unethical practices, empowering the vulnerable and marginalized. An HF IEP allows for personal identification and representation of self with the granularity of intersectionality, offering spectrum options in race, gender, sexual orientation, and ethnicity. This empowers users to accurately describe and represent themselves on the platform. However, this personally identifiable information is stored only on the user's device and remains within their control. An HF IEP follows differential privacy, which prevents the public good servicing entity (PGSE) from gaining access to personal information. A PGSE can access non-identifiable analytics about a dataset, such as descriptions of patterns across groups within it, while information about individuals is withheld. A PGSE cannot search for or access any user's personally identifiable data.
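To make the differential-privacy idea concrete, here is a minimal sketch of the standard Laplace mechanism (not HazAdapt's actual implementation): a PGSE-style query returns a group-level count with calibrated random noise, so the presence or absence of any one individual cannot be inferred from the answer.

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two iid Exponential(rate=epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical dataset: the PGSE sees only the noisy aggregate, never the rows.
records = ([{"needs_evac_assistance": True}] * 120
           + [{"needs_evac_assistance": False}] * 880)
noisy = dp_count(records, lambda r: r["needs_evac_assistance"], epsilon=0.5)
```

A smaller `epsilon` means more noise and stronger privacy; the aggregate remains useful for planning while any single person's record is hidden in the noise.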



4. Reduce greed and hatred instead of perpetuating them

An HF IEP does not monetize attention or boost content for its engagement value; its success metrics are decoupled from the outrage-driven engagement that rewards greed and amplifies hatred.



5. Helps to build shared reality instead of division with fragmenting realities

An HF IEP seeks to bring people together as a community rather than promote polarizing content, using inherent system mitigation mechanisms to discourage and prevent the spread of misinformation.

6. Accounts for and minimizes the externalities that it generates in the world

During design, an HF IEP first asks “How can we do no harm with this function?” before attempting to “do good.” This means an early analysis of the harms and abuses that would be possible if the function were misused by a malevolent entity, or caused in accidental ignorance of potential harms. An HF IEP also practices open accountability, allowing the system and the company providing it to be held accountable for harms proven to be caused by the technology. Humane technology concerns the many aspects of a technology and its monetization and how these impact the user; most critically, it focuses on the data collected from users.

Incorporating antiracism into the humane technology approach focuses on narrowing the gap between the powerful and the marginalized. Instead of widening that gap by handing race and other personal data to authorities, each information request must be approved by the user. HazAdapt also incorporates community-honoring information, features, and protections that boost user autonomy and lessen dependency on authorities as the sole source of information. Intentionally building a shared reality, instead of promoting division into fragmenting realities, requires designing to bring people together as a community rather than promote polarization. This can be done by monitoring content through the system's inherent mitigation mechanisms to discourage and prevent the spread of misinformation and hate. This is especially important because history has shown that people have wrongly attributed disaster causes to race and ethnicity (e.g., “the China virus” and the rise in anti-AAPI hate crimes).