Some thoughts on ethical AI

With the current advancements in machine learning, humanity has taken significant strides. In recent years, we have solved problems that have persisted for more than half a century. The progress made in the technical aspects of AI is amazing. However, we are currently trying to shift our attention to the ethical questions, which are not only difficult to answer but also challenging to articulate.

Take, for example, the current issues with Gemini. Mistakes happen; such is the nature of experimentation, of trying new paths and pushing the limits. What bothers me is the mindset: the alluring notion that we can solve human problems like discrimination with half the budget in a single fiscal quarter. There are no algorithmic shortcuts for fixing centuries of injustice. This time the road will be long but rewarding, built on understanding and authentic empathy. Choosing the most important battles is essential. Let us think deeply about the data and how to gradually improve it, and about what actually matters and will lead to a real improvement in the lives of the people affected by discrimination. We shouldn't rush or prioritize short-term gains to appease investors.

Furthermore, ethical AI is complex and has a huge non-technical component. With all due respect to the excellent developers at Google, ethical considerations aren’t solely within their expertise. Devs alone cannot solve these issues in a quarter. There are no keyboard shortcuts or known algorithms. Consider the baseline: even human-level performance is honestly bad; how can we hope AI will do better? It’s a difficult path that must be navigated together, by both AI and humans. No planes fly to that destination.

“Good or bad, hard to say”

One very unproductive, even harmful, behavior in software development is the tendency to “protect” our code from criticism.

Sure, we give it a lot of thought, and sure, it’s a snapshot of our way of thinking at the current moment, but… let’s face the facts, shall we? Given that there is no way to know what the future brings and how the code will have to change to accommodate new requirements, our code is probably not the best it can be. I still have nightmares involving snippets of code I wrote a while ago.

So, what can we do? 

First, let’s throw it to the critics and try to learn from them. If the code is really that good, then it should be easy to understand, do the job, consider all the edge cases, be prepared for changing requirements, be time- and memory-efficient, integrate seamlessly into the application, and be consistent with the code base. Does it really do all of that? Unlikely. So we start with the mindset that “there is space for improvement” and stay open to criticism. Listening and learning. Some of the obvious benefits: we see problems before the customer runs into them, and we build better relationships with our peer developers, able to break out of “defender’s mode” and openly accept help or a challenge.

Second, we can use our code as the best measure of technical growth. Looking at code from last year and thinking it is good means we’ve lost a year learning nothing (as if we didn’t have enough bad news). The best proof that we are on the right track, getting better each day, is being ashamed of the work from yesterday. That’s the delta. Knowing it enables us to calculate speed and direction. It is pointless to compare ourselves with other people, since everyone has a different start, different prerequisites, a different story. But comparing today’s me with yesterday’s me is easy and meaningful.

The Shoulders of Giants

We don’t grow by centimeters; we grow by the number of books we’ve read.

And seldom is anyone rewarded for having an opinion. Most often, we are rewarded for doubting.

The books that I’m most grateful for in the last year were:

  • Practical Wisdom by Barry Schwartz
  • The Paradox of Choice by Barry Schwartz
  • Algorithms To Live By by Brian Christian and Tom Griffiths

And to wrap up the post with a quote from a favourite book, think about this:

Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.    

                    — Douglas Adams, Last Chance to See