Team 1 posted a reading about automatic facial recognition (AFR) and Team 5 posted one on Amazon's biased recruiting ‘AI’, both of which reveal important ethical issues in computing, hence the name of our class. AFR is starting to be used by law enforcement globally, but it raises serious privacy concerns along with questions about accuracy, bias, and trustworthiness. For example, demographic bias can be exacerbated by algorithms that misidentify or unfairly target non-white groups, which makes existing inequalities worse. Another ethical concern is surveillance and misuse by governments or private entities, which can lead to the infringement of individual rights, especially when people are monitored without their consent.
Similarly, Amazon's AI recruiting tool developed unintended biases against female candidates because it was trained on resumes that came predominantly from men. This shows how AI can perpetuate existing societal biases if left unchecked. The ethical framework of fairness and equality applies directly here, emphasizing the need for unbiased and transparent systems. I think that while these technologies can offer efficiency and accuracy, strict regulatory oversight is needed to prevent misuse and discrimination. I am pretty uneasy about their widespread adoption because history shows that technology can amplify societal biases when it is not carefully monitored. It is very important to prioritize ethics from the development stage onward to ensure technology serves all people fairly.
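To make the Amazon example a little more concrete, here is a minimal toy sketch, entirely my own hypothetical data and not Amazon's actual system or code, of how a model trained on historically biased hiring decisions can learn to penalize a feature that merely proxies for gender (the feature name and bias strength are made up for illustration):

```python
# Toy illustration: a classifier fit to skewed historical hiring labels
# reproduces the bias baked into those labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                # genuine qualification signal
womens_college = rng.random(n) < 0.2      # hypothetical proxy feature for gender

# Historical labels: past recruiters hired on skill but docked the proxy group.
hired = (skill - 1.0 * womens_college + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, womens_college.astype(float)])
model = LogisticRegression().fit(X, hired)

print("learned coefficients [skill, womens_college]:", model.coef_[0])
# The second coefficient comes out negative: the model penalizes the proxy
# feature even though it says nothing about ability, simply because the
# training data reflected biased past decisions.
```

The point of the sketch is that nothing in the algorithm is "malicious"; the model is just faithfully reproducing the pattern in its training data, which is exactly why unchecked training on biased historical records leads to biased outcomes.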