
Pierre-Adrien Hanania

Accessibility note: if you have trouble reading this post, I advise you to enable the Read Aloud plugin in Chrome or read it using the Reader View in Firefox.

Name: Pierre-Adrien Hanania

Nationality: French

Country of residence: Germany

On the web: LinkedIn, Twitter

Short bio: Pierre-Adrien Hanania is Capgemini’s Chief of Staff to the Public Sector and Global Offer Leader for Data & AI in Public Services. He focuses on how the right and intelligent use of data can help organisations deliver augmented public services to citizens. Based in Berlin and covering activities across more than 18 countries, Pierre-Adrien also drives the partnership with the United Nations’ ITU, with which Capgemini has contributed to the AI4Good Summit since 2020. Prior to Capgemini, Pierre-Adrien worked for various European think-tanks, and he graduated in European Affairs from Sciences Po Paris. Pierre-Adrien was born in Paris and grew up between Paris and Berlin.


You shifted from political analyst to business and tech consultant. What prompted this decision? Do the two fields interact in your daily job today?

When I finished my studies in European Affairs at the very political science-ish Sciences Po in Paris, I didn’t actually plan on joining the private sector; rather, I was looking towards political institutions and think-tanks. I went through nearly 40 discussions with potential places where I could start my career, and only two of these were from the private sector – one of them being Capgemini.

My discussion with the – then – Public Sector Head of Capgemini Germany made clear to me that technology and digital are today at a crossroads, nurtured by the technological and business dimensions, but also by the societal and political ones. Add to that an intertwined local and global play, and you get the recipe that made me go for that shift – for which I’m really thankful today. In the past years, I have actually continued to be a political scientist within a big firm (300,000 employees!), but with a start-up flair, as I started an initiative that later became the formalized Global Public Sector, bringing together 20 countries connected with each other through networks of Public Sector Heads, account executives, and technology leads, all tackling challenges such as climate change, inclusive citizen services, and cyberthreats – on a global level.


What does a day in your job at Capgemini look like?

In a nutshell: three working languages, (too) many calls, and a generalist mission of hopping from one topic to another, while still diving deep into each context and activating the knowledge and expertise we have within Capgemini – from smart farming for a Ministry of Agriculture in the Nordics to AI ethics for a digitization agency, Document Automation for an employment agency in Asia, and AI4Good for the United Nations’ ITU. Next to supporting these requests from client organizations, I currently have a focus on shaping the Data & AI narrative and its offers, and I tackle structural initiatives within our Global Public Sector on levels such as international organizations, talent, and sustainability.

I’m blessed with a team of approximately 15 people, with – I think – around seven nationalities represented, which makes my generalist approach viable and exciting.


Tell us more about Capgemini's participation in the AI4Good program of the United Nations.

It started in Amsterdam in 2019; I met Kseniia Fontaine from the UN’s ITU, who suggested we get more involved in the UN’s AI4Good Summit, which tackles the potential of AI to accelerate the achievement of the Sustainable Development Goals.

So we did! Over the past two years, we have participated with an annual webinar – addressing topics such as how AI can better fight disinformation, early drop-out at schools, forest damage, and bed unavailability in hospitals – and we brought together Public Sector CIOs, start-ups, citizens, and technology firms to discuss how Sustainable Development can benefit from a digital social contract, committing us to act on its achievement together and with the help of data.


Smart and sustainable cities, mobility, ethical AI. Which topics are you most interested in?

What I’m most thrilled about is whenever technology offers a potential to address a topic in a pluridisciplinary way. Take Ethical AI, for example: for too long, we have thought of the topic in business and technology terms, not acknowledging how many doors it opens on rethinking our ways of doing and thinking things – on accountability, representativeness, control, and purpose. A biased AI, in my opinion, does not merely mirror technology’s problems – rather, it truly questions the humans behind it and how they shape technology the right way.

Another topic that fascinates me is the role of the smart citizen. Most of us live digitally – and consume data in the seamless flow of our daily lives. With apps embracing the power of Data & AI, for example, we see a shift in what I call the “emergence of the smart citizen” – those who notify of an incident in a smart city, those who provide feedback on a certain service, or those who use a home-care app allowing doctors to have privacy-preserving insights into their health. All of that is information sovereignly created, produced, and shared – emerging from citizens who embrace the power of data and insights to safely provide them to relevant actors, who can then leverage them to better provide essential services such as public safety in a city, emergency care, or environmental wellbeing.


Pierre-Adrien Hanania on why collaborative data ecosystems for the public sector are relevant today and what Capgemini’s approach is.

In what contexts, sectors, or business settings do you think it would be useful to implement ethical AI?

AI mostly makes sense wherever there is a case for leveraging the mathematical advantage of new technologies such as Deep Learning, where available data is just lying there waiting to be turned into insights, and where humans get an augmentation of their work – not a replacement! The augmentation can be very different from one context to another: getting help with automatically processed documents, hence gaining time for the more critical tasks; getting an extended workforce to answer the most trivial questions with conversational AI, such as on emergency hotlines; and finally, getting the system to make sense of large datasets in a decision-making context, with AI as an enabler to better understand what happens (a pandemic, a crime, …), what will happen next, and what can be done to proactively tackle the situation rather than reacting.

All these use cases need to go hand-in-hand with ethical guidelines and implementation! One task I’ve been truly excited about in the past years is how we can move from talking about ethics in AI to building ethics into AI – on explainability, fairness, transparency, and many more issues. We need the great EU guidelines for ethical AI to be embedded and operationalised within the reality of projects. This will happen, on the one hand, with tools, and, on the other, with upskilling both the data scientists and the organisations applying AI.


I don’t know if you get this question a lot, but I can’t help but ask you: are AI’s decisions less biased than human ones?

I studied political science and sociology before going for a major in European Affairs, so you could get me to talk on the topic over a five-course dinner, and I would probably end up with five opinions! In a nutshell, I think that saying AI is just as biased as the humans behind it holds some truth, although it’s more complicated than that.

Yes, I think that in principle, AI is not the starting point for bias – it’s only a new medium that reproduces pre-existing biases, whether conscious or unconscious. And the range of biases can be hugely different from one case to another – sometimes it just makes the AI unusable because its conclusions don’t hold; sometimes it gets more critical if decision-making power is involved, based on non-transparent black-box systems. And that’s the whole problem of a part of the predictive policing field. Now, one issue with AI is that, building on its advantages (the 24/7 availability of chatbots, the amount of information leveraged within seconds in big data), it can very quickly scale bias, hence creating a duplication of bias-impacted services; imagine a racist conversational AI able to lead millions of conversations at the same time!


What are the most common digital transformation challenges faced by small-medium enterprises in Europe?

That's a very good question, and probably one that decides the fate of innovation within Europe. I have been invited to several workshops with start-ups and SMEs – both in Germany (at Factory Berlin) and Paris (at Station F) – and my takeaways are:

First, SMEs need help in finding the right balance between compliance and agility. We simply cannot expect them to handle the effort behind GDPR like a big enterprise would be able to. It certainly doesn’t mean that they are exempt from data privacy rules – but we can help them overcome the bureaucratic hurdle.

Second, SMEs want to participate. This means other actors need to trust them and provide room and help for growing ideas – in ecosystems, in scaled real-life projects, in innovation labs… and they are most welcome! Fields like GovTech show how essential they are. In that regard, I’m quite proud of what my firm Capgemini provides as a playground. We have some flagship successes in integrating SMEs, especially in Data & AI, like the Future4Care health platform, our involvement at the GovTech Campus in Germany, or our support to the annual conference FranceIsAI at Station F.

Pierre-Adrien Hanania at FranceIsAI

I was reading about technological social responsibility (TSR), i.e. aligning short- and medium-term business goals with longer-term societal ones. In your opinion, how easy is it to implement?

We probably don’t know yet exactly how easy it will be to implement it end-to-end, but one thing we know is that it is easy to start the effort. I work in an environment that thinks purpose-oriented across its business. Inclusion, green sustainability, health, and education are just a glimpse of the values we want to infuse into our projects by default. I come from a political science background, so your question is exactly where I see my role: at the crossroads of business, society, and tech.

I have had extremely interesting conversations in the last two years, most recently at the TeensInAI Summit, learning how purpose and tech can be married to best serve the sustainable development goals. The right next step for industries must be to think about their part and to align their business ambitions with a set of desired outcomes that foster progress within society.


What are your views on gender equality in your industry? Are there equal opportunities?

First of all, my answer would differ quite a bit depending on the location I talk about. Let me narrow the scope down to Western Europe, where I’m based, and especially Germany and France, where I am from. Are there equal opportunities? All in all, and along the structured processes (recruitment, training suggestions, promotions, etc.), yes.

Obviously, we’re not there yet. First, because the game starts very early on, and as someone who studied sociology as a minor, I can’t help but go back to the theories behind differential socialisation, which today still urges boys rather than girls to go into math and engineering. I can definitely see the outcome of this in my daily life. For instance, I feel it whenever I organize a big event or report where, to apply gender equality, I struggle to find senior female leaders in IT, especially in the cloud business. From my personal perspective, and from my 10 pre-career internships up to my role today, I feel like progress is on its way. There is still a lot to be done, and often it must come from outside the job too: men sharing the load of family tasks at home, societal pressure from within families to offer more role models to young females, etc.

What can be done? A lot! Obviously, I could talk politics and society for hours, and there are a lot of structural initiatives we need to push (e.g. empowerment sessions for STEM paths at school and university). On a personal level, I apply the following principles myself whenever I can:

- I refuse all-male panels, including the ones where the only female is the moderator – or I accept the invitation only if I can bring along a female teammate.

- In reports or large-scale initiatives, I don’t accept any author landscape that does not include at least 33% female authors.

- On a more abstract note, I try to empower young colleagues as much as possible to believe in and follow their paths – although this is simply something to be done anyway – supporting young talent especially in exploring their strengths while pushing their soft skills (presentation, communication, etc.).


What are you currently working on and what are your future plans?

In my role as Data & AI Global Offer Leader for public services, I have a few current quests I follow. First, I’m focused on creating our portfolio around the topic of Collaborative Data Ecosystems and how public organisations can gain insights by sharing their data. This applies across the scope of the Public Sector, whether for symptom assessment by hospitals, fraud detection by tax agencies, or earth observation by smart cities.

Second, I will continue to tackle the field of good (as in ethical) AI that meets AI for Good (as in fostering sustainability and progress), and I’m very much looking forward to some events that are coming up – starting with the MyData Conference in Helsinki and this year’s AI4Good Summit of the United Nations.

 
Images c/o Pierre-Adrien Hanania

Do you have any suggestions about who I should interview next? Send them my way!

