Some thoughts on ethical AI

With the current advancements in machine learning, humanity has taken significant strides. In recent years, we have solved problems that persisted for more than half a century. The progress on the technical side of AI is amazing. Now, however, our attention is shifting to the ethical questions, which are not only difficult to answer but also challenging to articulate.

Take, for example, the current issues with Gemini. Mistakes happen; such is the nature of experimentation, trying new paths, and simply pushing the limits. What bothers me is the mindset—the alluring notion that we can solve human problems like discrimination with half the budget in a single fiscal quarter. There are no algorithmic shortcuts for fixing centuries of injustice. This road will be long but rewarding; it demands understanding and authentic empathy. Choosing the most important battles is essential. Let us think deeply about the data and how to gradually improve it, and about what actually matters and will lead to real improvement in the lives of everyone affected by discrimination. We shouldn’t rush or prioritize short-term gains to appease investors.

Furthermore, ethical AI is complex and has a huge non-technical component. With all due respect to the excellent developers at Google, ethical considerations aren’t solely within their expertise. Developers alone cannot solve these issues in a quarter; there are no keyboard shortcuts or known algorithms. Consider the baseline: even human-level performance on these questions is honestly poor, so how can we hope AI will do better? It’s a difficult path that must be navigated together, by both AI and humans. No planes fly to that destination.