Problem
Data bias can occur at multiple stages: collection, cleaning, parsing, and analysis. Acting on biased information has led to disproportionate sentencing and policing of Black and Latinx Americans, discriminatory hiring decisions, and racially charged targeted advertising. Most recently, Google and Facebook came under fire for targeting potential customers by demographics and geography in advertisements for mortgages, unemployment insurance, and credit assistance.
There are three main data and privacy problems that exacerbate informational inequities and racism: 1) Targeted Ads / Adtech, 2) Big Data and Racist Algorithms / Algorithmic Bias, and 3) Surveillance / Facial Recognition.
Key Terms
- Algorithmic Bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. This can reinforce unfair, systemic biases around race, gender, sexuality, and ethnicity.
- Data Auditing: Assessing the quality of data and whether an algorithm or process is fulfilling its intended purpose.
- Data Anonymization: Increasingly common in conversations around cybersecurity and data privacy, anonymization strips personally identifiable information from an individual's data (see the sketch after this list).
- Data Decentralization: The transfer of control and decision-making around data from a centralized entity (e.g., big tech corporations like Facebook or Google) to a distributed network.
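To make the anonymization term concrete, here is a minimal Python sketch that pseudonymizes a record by hashing direct identifiers and dropping especially sensitive fields. The field names and salt are hypothetical, and hashing alone is pseudonymization rather than full anonymization, since quasi-identifiers (zip code, birth date, etc.) can still re-identify people.

```python
import hashlib

# Hypothetical field names; a real schema and threat model would differ.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}   # replaced with salted hashes
DROPPED_FIELDS = {"street_address", "ssn"}        # removed entirely

def pseudonymize(record: dict, salt: str) -> dict:
    """Return a copy of `record` with direct identifiers hashed or dropped."""
    out = {}
    for key, value in record.items():
        if key in DROPPED_FIELDS:
            continue  # drop the field outright
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # truncated hash serves as a stable pseudonym
        else:
            out[key] = value
    return out

print(pseudonymize({"name": "Ada", "email": "ada@example.com", "zip": "94110"},
                   salt="rotate-me"))
```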
Solutions
1. Anonymization
Existing solutions and players rely heavily on data anonymization, or “blind” data practices that discard demographic data entirely. While these solutions address concerns with demographics-based bias, they also propagate a “color-blind” approach that hinders holistic and intersectional analyses.
Existing Solutions
- The facial recognition market is growing rapidly and has made our faces our identifiers. D-ID's identity protection acts as an anti-facial-recognition solution, making organizations’ photos and videos unrecognizable to facial recognition tools.
- Textio is an augmented writing platform that uses AI to match the words used in job descriptions with hiring outcomes and write gender-neutral job ads. The algorithm looks for common, well-understood patterns on which it has been trained and replaces them with carefully crafted alternatives (a simplified illustration follows this list).
- Jobecam is a job-posting platform that uses AI-powered video technology to help recruiters reduce unconscious bias during the hiring process by concealing candidates' identities, blurring their images, and altering their voices in video interviews and video resumes.
- CodeSignal offers a gamified coding platform for job-seeking programmers to reach recruiters by performing well, and anonymously, on coding challenges. First launched in 2014, the platform lets coders compete against each other or against bots and earn points for accuracy and speed. High performers who reach a “critical point” in the game are then given the option to connect with employers.
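The pattern-matching-and-replacement idea behind tools like Textio can be illustrated with a deliberately simplified, rule-based sketch. This is not Textio's actual method, which is trained on hiring-outcome data; the word list and neutral alternatives below are hypothetical examples.

```python
import re

# Hypothetical gender-coded terms and neutral alternatives; real augmented-writing
# tools learn which patterns correlate with hiring outcomes rather than using a fixed list.
REPLACEMENTS = {
    r"\bninja\b": "expert",
    r"\brockstar\b": "high performer",
    r"\bdominant\b": "leading",
    r"\bnurturing\b": "supportive",
}

def neutralize(job_ad: str) -> str:
    """Replace flagged gender-coded terms with more neutral wording."""
    for pattern, alternative in REPLACEMENTS.items():
        job_ad = re.sub(pattern, alternative, job_ad, flags=re.IGNORECASE)
    return job_ad

print(neutralize("We need a coding ninja with a dominant, rockstar attitude."))
```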
What needs to be done?
<aside>
⚙ We're in need of some innovation in this section. Help by suggesting an edit!
</aside>
2. Privacy-Protected Data Sharing and Data Decentralization
Government agencies, healthcare providers, financial institutions, and technology companies hold vast amounts of individuals’ data, at times leading to misuse and manipulation. Increased user privacy and data decentralization could reduce inequities exacerbated by data bias.