AI is on the roadmap, where do I start?

Marc Lecoultre, Co-founder of wazzabi.ch, speaking at the Innovation Leaders 2018 event on Machine Learning.

Have you ever heard from your board or management “We need to do something with AI, we need to show the market that we are ahead of the competition”? I hear this very often when I visit my clients for the first time. The next sentence is most often “but we don’t know where to start”.

Through this article, I would like to share my experience of implementing AI-based services for businesses. I have practiced AI, or to be precise Machine Learning (ML), for over 15 years. I have worked on dozens of projects across companies and industries, and I cannot say that any one of them is better prepared than another to implement ML.

You may think that you are reading yet another article on AI and that you are not going to learn anything. Before you turn the page, let me reassure you: this is not an article on AI like those you have probably already read. You will not find generic facts about AI, nor a description of case studies you have already seen. I am going to talk about AI for ALL, AI for YOU, without math or theory. I will present a real working example that demystifies the whole subject.

I would like to convince you that AI is concrete, and not only for global or high-tech data-driven companies. Too often, my clients get lost in the overabundance of information; in my opinion, they are not getting the right information.

Have you ever tried to find business information on AI, or to search for companies that could help you in Switzerland? It is not an easy task. It is hard to find talent that understands what it takes to implement AI-based services. It is not like a traditional IT project, where suppliers are well identified.

You hear about AI everywhere and every day: in newspapers, on the radio, on television; there are dozens of conferences on AI every month. You have probably read many articles about use cases implemented by others, most of which come from the United States and are far from your daily concerns. You may even be frustrated: everyone tells you that AI is the Holy Grail, and you feel like a little kid looking at an amazing toy, convinced you can never have it.

When I visit customers, they are often a little nervous at the beginning of the meeting. They feel uncomfortable; they don't really know what we are going to tell them. They do not master the subject the way they might when meeting a vendor who will set up a new IT infrastructure. By the end of the meeting, we see people smile, as if they were relieved. This is what motivates me every day to do what I do at MLlab.ai: democratize AI and make it accessible to all.

But then, what can AI really do for you? For enterprise applications, there are primarily two domains where AI can have an impact: improving decision-making and driving operational improvement. I could make an endless list of use cases you have heard about. To illustrate the point that you do not get the right information about the potential of AI, consider this case: reducing the churn rate by using social media data. At first it seems promising to use social media data to learn more about your customers and infer their behavior. If you are a well-known B2C company selling your product around the world, you may gain additional insight from this type of data. If you are the manager of a political campaign, you could also get useful information…

But let's be serious: have you ever searched Twitter or Facebook for the number of posts referencing your brand or product? You are probably marketing an excellent product or service, but your customers do not talk about it on social media. We ran the test for one of our clients, a health insurance company. We collected all posts about Swiss health insurance and found fewer than a thousand documents; most of them commented on news stories, and almost none mentioned specific insurance companies. It was useless…

This is not “AI for All” but rather “AI for them”! Remember, you have been asked to do something with AI for your business. Now that you have read the articles, watched the videos and searched the Internet, you are probably sitting at your desk, and I can hear you say: “it's not for us”.

But yes, it’s for you. You probably will not use social media data, but your data, in most cases, is sufficient. You probably have several years of historical, structured and unstructured data. Based on it, we can build useful AI services like the one I’m going to show you now.

The case I want to share with you is the optimization of a back office that we did for a client. Suppose you receive customer requests of varying complexity each day, such as insurance claims, loan applications, tax returns, customs declarations, or subscriptions to a service offered by your company. We encoded the complexity from 0 to 4, 0 being the least complex and 4 the most complex.

In your team you have people with different skills and capacities: some read Italian and German perfectly but not English, others do not work on Wednesdays, and so on. How do you divide the workload among your team members so that processing time stays short and quality stays high?

How would you do that? With a traditional approach, you would probably design a rule-based system that infers the complexity of each new request from its attributes. You can clearly see the disadvantages of this approach. You will end up with a static system unable to take changes into account, whether in the behavior of your customers or in the skills of your team. The system may become impossible to maintain because of the large number of rules needed to describe the complexity of the actual process.
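To make the maintenance problem concrete, here is a minimal sketch of what such a rule-based classifier tends to look like; the field names and thresholds are hypothetical, invented purely for illustration:

```python
# A deliberately simplified rule-based complexity classifier.
# Field names and thresholds are hypothetical, for illustration only.
def request_complexity(request: dict) -> int:
    if request["language"] not in ("de", "fr", "it", "en"):
        return 4                      # unsupported language: most complex
    if request["amount"] > 100_000:
        return 3
    if request["amount"] > 20_000 and request["has_attachments"]:
        return 2
    if request["customer_tenure_years"] < 1:
        return 1
    return 0                          # default: least complex
    # ...in a real back office, hundreds more rules would follow here,
    # and none of them adapt when customers or team skills change.

print(request_complexity({"language": "fr", "amount": 50_000,
                          "has_attachments": True,
                          "customer_tenure_years": 3}))   # -> 2
```

Every new customer behavior or team change means editing this cascade of rules by hand, which is exactly what the learning approach below avoids.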

We developed a system that learns the complexity of a task and then assigns it to the right person or group of people. Using ML algorithms, we can create a self-adapting system that you can test from Excel. Yes, I just said Excel. I told you, this article aims to demystify AI; what is more common than an Excel sheet?

How does it work? For this case, we used Microsoft Azure Machine Learning Studio to develop the application. It is one of the best-integrated machine learning tools available in the cloud today: visual and intuitive. The result of this development process is a web service that you can call using the Azure Machine Learning add-in for Excel. All you need to do is add the web service by providing its URL and API key. When you save the Excel workbook, your web services are saved with it. You can share the workbook with other users and allow them to use the web service.

In this Excel sheet, there are two specific regions: one containing the data of one or more client requests, and the other showing the predicted complexity. For the request, you find fields like the customer identifier, age, annual income, number of children, … In the prediction zone, there are five columns, one for each complexity level, showing the probability that the request has that complexity. The last column holds the final prediction: the complexity with the highest probability.

When you change the values of a request, Excel sends the data to the web service and receives its complexity prediction in return. You can play and test different scenarios to validate the predictive model.

I will now present how we did it and give some details on the different steps needed to implement such a service. What follows contains no deep technical detail; it is a high-level overview of the process needed to make this example work.

We start the process by creating what we call an experiment. As part of this experiment, we train a model to learn to assign a complexity to unseen demands, i.e. new customer requests. As with all supervised ML problems, we need historical data from which the system can learn. In this case, we use a set of 20,000 requests with their associated complexity, the label.

After loading the data, the second step is called feature engineering. It is the process of using domain knowledge of the data to create new features that make machine learning algorithms work better. For example, you can extract date and time information from a timestamp and create features such as weekend, day or night, day of week, year, …
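As a small illustration, here is what that timestamp example could look like in Python with pandas; the column name received_at is a hypothetical stand-in for whatever field your requests carry:

```python
import pandas as pd

# Derive calendar features from a raw timestamp column.
df = pd.DataFrame({"received_at": pd.to_datetime([
    "2018-03-03 22:15", "2018-03-05 09:30", "2018-03-07 14:05"])})

df["day_of_week"] = df["received_at"].dt.dayofweek          # 0 = Monday
df["is_weekend"]  = df["day_of_week"] >= 5
df["is_night"]    = ~df["received_at"].dt.hour.between(6, 21)
df["year"]        = df["received_at"].dt.year
print(df)
```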

Then you need to divide your dataset into at least two subsets, called the training set and the test set. For simplicity we will not use a third set, the validation set, in this example, although for your own implementation I recommend using one. The training set is used to train the model, and the test set to measure its accuracy on data it has never seen. This tells us whether the model is able to generalize.
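In code, the split is a single call; the sketch below uses synthetic data as a stand-in for the 20,000 labeled requests, since the client data is obviously not public:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the historical requests: 5 complexity classes.
X, y = make_classification(n_samples=20_000, n_features=10,
                           n_informative=6, n_classes=5, random_state=0)

# Hold out 30% as the test set; stratify keeps the class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
```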

In our example, we decided to train two algorithms: a neural network and a decision tree. Both models are scored on the test set and then compared with each other. We used overall accuracy to decide which model to put into production. In this case, the decision tree did better, with an overall accuracy of 0.94 versus 0.88 for the neural network.
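Our experiment was built in Azure Machine Learning Studio, but the same comparison can be sketched with scikit-learn stand-ins; on synthetic data, the printed accuracies will of course differ from the 0.94 and 0.88 above:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=20_000, n_features=10,
                           n_informative=6, n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {"neural network": MLPClassifier(hidden_layer_sizes=(50,),
                                          max_iter=500, random_state=0),
          "decision tree": DecisionTreeClassifier(random_state=0)}

for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)   # one column per complexity level
    preds = proba.argmax(axis=1)          # final prediction: highest probability
    print(name, "overall accuracy:", round(accuracy_score(y_test, preds), 2))
```

The predict_proba call also mirrors the five probability columns shown in the Excel sheet, the final prediction being the column with the highest probability.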

And we are done: to deploy the web service, we just click a button, and it becomes available from Excel or any other application via a simple HTTP POST request. As you can see, this is not rocket science. But be very careful here: it requires the right skills, methodology and management organization to succeed.
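For readers who want to call the service from outside Excel, a minimal sketch looks like this; the URL, API key, field names and request body below are placeholders, since the exact schema depends on your deployed service:

```python
import requests

URL = "https://example.azureml.net/.../execute"   # placeholder endpoint
API_KEY = "YOUR-API-KEY"                          # placeholder key

# Illustrative request body; check your service's docs for the real schema.
payload = {"Inputs": {"input1": [
    {"customer_id": "C-1042", "age": 42,
     "annual_income": 85_000, "children": 2}]}}

resp = requests.post(URL, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"})
print(resp.json())   # probabilities per complexity plus the final prediction
```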

What are the ingredients necessary for an AI project to succeed? First, you need affordable computing power to run the training phase of your model; some algorithms are very demanding in terms of computation. You can find many specialized cloud infrastructures and start right away. You do not need to invest in an on-site infrastructure that costs a lot of money to buy, install and maintain.

Then you need sophisticated algorithms to build your model. The good news is that the open source community is big in AI. Many very good algorithms are available in libraries for languages like R and Python, two programming languages commonly used for machine learning.

The previous two components are required but not sufficient to implement an AI project successfully. The third ingredient is data. Data is at the heart of your project and is a very important success factor. If you plan to implement a specific AI service such as the one presented, you will need clean and reliable data from your organization.

I would like to clarify something here that is not always understood. Requiring data does not mean collecting as much data as possible. An AI project does not start with your data but with your strategy: what does your company want to achieve with AI? What data already exists matters less.

Nor does this mean deleting data that could be used in the near future. I have seen companies delete their data: to save a few megabytes of disk space, one company decided to keep only a summary of each order after one year and delete the order lines. Data storage costs almost nothing today, so please keep the data you already collect. Even the logs of your website and the application logs that are usually deleted every n days are full of information.

You have to find the right balance: when you are dealing with data, you need a good data strategy and data governance to determine what data to keep or delete. Data governance refers to the overall management and caretaking of data, from its creation to its deletion, covering usability, integrity and security.

Apart from the technical and organizational aspects, an important part of good data governance relies on building a data culture within your organization. At every layer, data should be treated as a valuable asset and cared for accordingly.

You do not have to wait for the entire data strategy to be in place before starting your AI projects. It can be built in parallel and benefit from your first experiences.

Last but not least, you need talent: people who know how to use this technology, manipulate data and lead an AI implementation project. The problem is that the required skills are scarce, and most of them are absorbed by the largest companies in the world. You will need to find these skills outside your company and partner with suppliers specialized in AI. For these kinds of projects, experience helps a lot. It is not just about technical knowledge: we rely a great deal on intuition, and we often know which algorithm will or will not work without always being able to explain why.

Now is the right time to get on board and use AI intelligently for the benefit of your processes, products and services. I encourage you to discover the potential of AI for your own business and concretely assess the relevance of this approach.

We created an AI Starter Kit. It is a flat-rate offering that allows your company to discover the potential of AI for solving one of your problems. At the end of it, you will have an AI-based service implemented, like the one I presented in this article.

In conclusion, I would like you to remember that, yes, AI can be complex and requires specific skills, experience and adapted management to be implemented and deployed successfully. But I would also like you to feel that “AI for all” can become “AI for you”.

Machine learning: from skepticism to application in fraud prevention

The following blog is a transcription of the presentation given by Jérôme Kehrli, CTO of NetGuardians, during the Innovation Leaders 2018 evening. You can also see the full video below.

Jérôme Kehrli on machine learning applied to fraud prevention

Hello, everybody! My name is Jérôme Kehrli, and tonight I want to tell you about the way we use machine learning for fraud prevention in banking institutions at NetGuardians. I do not pretend to give you a complete, global and absolute overview of the topic. On the contrary, I want to present NetGuardians' perspective on the problems we encounter at our customers' sites and the way we solve them using artificial intelligence.

I have been in this wonderful business of software engineering and data analytics for 18 years, and it is still a big passion. So maybe, before I start, a few words about NetGuardians.

NetGuardians is a Swiss software vendor founded in 2008. After a pretty long incubation period, it really started to develop in 2012. It was founded by two former students of the engineering school in Yverdon, which is probably why it is still based in Yverdon. We develop a big data analytics platform that we deploy in banking institutions for one big key concern: fraud prevention on a large scale. By fraud prevention, I mean internal fraud as well as external fraud. Internal fraud is when banking employees steal money from their employer. External fraud covers e-banking fraud, credit card fraud and all the other attacks cybercriminals can imagine.

Today we have around 60 employees and 50 customers; we have doubled our sales turnover every year for the past three years and acquired a dozen new customers each year. We hope to continue doing that this year. I myself have been the CTO for 3.5 years now.

 

So, let's get started! A little bit of history first.

Before 2000, banking fraud detection relies mostly on manual controls: internal control, internal audits, or audits performed by external auditors, with all the usual issues of this approach. Internal control and auditors work by sampling, which means a lot of fraud slips through the cracks. Of course, there are a few security checks implemented here and there in the operational information system, or some business intelligence reports targeted at detecting fraud. But all in all, it is not considered a very big deal.

And you know, in the early 2000s, we are before the subprime crisis, before the Southern European debt crisis; margins are comfortable, and people trust banking institutions. All in all, bankers are rather happy people. Banking fraud exists, of course, but it is a deal, not such a big deal.

In the late 2000s, the cost of fraud, the maturity of the attackers and the complexity of the attacks increased significantly. Banking institutions reacted by massively deploying specific analytics systems aimed at detecting and preventing fraud. At the time, all these systems were rule engines, and most of them came from the Anti-Money Laundering world, not even specifically developed for fraud prevention.

NetGuardians was founded at this time, and NetGuardians' NG|screener platform was a sort of gigantic rule engine. By rule, I mean the kind of example shown at the bottom of the slide. Of course, a few papers had been published in the early 2000s about how artificial intelligence and machine learning could be interesting for detecting fraud cases, but bankers and engineers did not take that seriously. They did not want to consider an approach whose results were seen as fuzzy and blurry to interpret. Let's just say that artificial intelligence at that time generated a lot of scepticism in banking institutions.

But the reality of fraud in financial institutions has evolved dramatically.

Let me give you two examples. The first one is the Bangladesh bank heist, a pretty funny story. In February 2016, a group of attackers successfully compromised the banking information system of the Central Bank of Bangladesh. They attacked the specific gateway used by the bank to reach the SWIFT network and used it to issue a set of financial transactions on the SWIFT network, transactions aimed at withdrawing money from the Bangladesh central bank's VOSTRO account at the US Federal Reserve Bank. They successfully sent 81 million dollars to the Philippines, where the money was laundered through Philippine casinos. After the fact, all the officials of the institutions involved, the Central Bank of Bangladesh, the US Federal Reserve, even the Prime Minister of the Philippines, were convinced that they would recover this money and find the cybercriminals. Two years later, we know that the money will never be recovered. We know that the attackers are untraceable and will never be found. 81 million dollars, for a team believed to be 15 to 20 people.

But of course, this is Bangladesh, right? And here we are in Europe; even better, here we are in Switzerland. So let's say the array of massive security holes in the Bangladeshi information system does not concern us, right? Let me give you another example: the Retefe worm.

The Retefe worm is a worm that has been developed over four years by a group of cybercriminals, specifically targeting the e-banking applications of small to medium-size Austrian and Swiss financial institutions. It is four years old, and for four years the cybercriminals have kept evolving and maintaining it to counter the antivirus software and the specific security measures banking institutions put in place. It is four years old, and even today it successfully steals money from between 10 and 19 banking sessions every day. And this is today, in Switzerland.

This is the reality to which banking institutions are confronted nowadays.

Some numbers. The big one: 81 million dollars from the Bangladesh case. We estimate that in 2017 the total cost of fraud on a large scale was 3,000 billion dollars worldwide. Cybersecurity Ventures goes even further and estimates that by 2021 the total cost of cybercrime will reach 6,000 billion dollars. And of course, a lot of it is internal fraud, right? Here in Switzerland, the maturity of the banking business, but also the security put in place within banking information systems, makes internal fraud more marginal. But external fraud is a cruel reality; think of the Retefe worm.

The consequence is that the traditional systems deployed in banking institutions to prevent fraud, rule engines, are beaten. Let me explain why with an example. Take a customer such as myself, who uses his bank account to pay his mortgage at the end of the month, his telephone bill, his taxes, and so on. If suddenly a transaction is input on the system attempting to withdraw 20,000 CHF from my account and send it to Nigeria, it is an anomaly. It should be blocked by the system. This should not happen.

Now let's take another example: a customer who is responsible for acquisitions at a big corporation. He travels all around the world and pays providers from the corporate account in large amounts. In this case, it is, on the contrary, a small transaction sending money to a counterparty in Switzerland that should be blocked. That is the anomaly, right?

If you want to represent all the different situations of all the banking customers with rules, you end up defining hundreds of thousands of rules in your system, which is impossible. So only the most common rules get defined, and as a consequence, a lot of fraud slips through the cracks. And to catch the bigger frauds, you have to set the limits in these rules low enough to catch them, resulting in a huge number of alerts generated by the system and requiring an army of analysts at the banking institution to analyse them. All of that has huge financial impacts on the banks. Fraud is dead money, and the analysts should be able to focus on tasks with more added value. And I do not need to explain the reputational consequences the Bangladesh case had for the Bangladesh Central Bank.
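A toy sketch makes the dilemma visible; the threshold and amounts are invented for illustration:

```python
# One static rule cannot fit every customer profile.
RULE_LIMIT_CHF = 10_000   # "alert on any foreign transfer above this amount"

def rule_alert(amount_chf: float, foreign: bool) -> bool:
    return foreign and amount_chf > RULE_LIMIT_CHF

# Retail customer: the 20,000 CHF transfer to Nigeria is caught. Good.
print(rule_alert(20_000, foreign=True))    # True

# Corporate buyer who routinely pays 500,000 CHF abroad: his suspicious
# small domestic transfer slips through, while his legitimate big foreign
# payments all raise alerts. The rule is wrong in both directions.
print(rule_alert(3_000, foreign=False))    # False (missed anomaly)
print(rule_alert(500_000, foreign=True))   # True  (false alert)
```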

Something else is required to protect financial institutions.

Artificial intelligence and machine learning provide the solution. In 2016 we started to deploy our first machine learning approaches within our system at NetGuardians. Our initial idea was to use the machine to analyse a very deep history of data, a very deep history of transactions, to learn the habits and behaviour of individuals, customers or employees, and to build dynamic profiles capturing these habits and behaviours. Then each and every transaction input on the system, be it an internal transaction, a credit card transaction, a banking transaction, and so on, is compared with the profile of the customer or the user, and the machine computes a global risk score and takes a decision based on it: either let the transaction pass through, or block it and register it for further investigation by an analyst.
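A minimal sketch of this profiling idea, on invented numbers and far simpler than the production algorithms, could look like this:

```python
import numpy as np

# A customer's past transaction amounts in CHF form a simple profile.
history = np.array([1_800, 950, 2_100, 120, 1_750, 990, 1_650])

def risk_score(amount: float, past: np.ndarray) -> float:
    """Distance of the new amount from the customer's habits."""
    mu, sigma = past.mean(), past.std() + 1e-9
    return abs(amount - mu) / sigma

for amount in (1_500, 20_000):
    score = risk_score(amount, history)
    decision = "block and investigate" if score > 3 else "let through"
    print(f"{amount:>7} CHF -> risk score {score:5.1f} -> {decision}")
```

A real profile covers far more than amounts (counterparties, countries, timing, channels), but the decision logic follows this pattern: score against learned habits, then act on the score.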

With this technique, using artificial intelligence and profiling to detect suspicious transactions, we have been able to significantly improve the situation of our customers. It was a game-changing paradigm. We could drastically reduce the number of frauds passing through while cutting the number of cases to be investigated by analysts to a third. Not only could we reduce the number of cases to be investigated, we could also reduce the time required to investigate one case by 80%. And finally, the number of reconfirmations requested from the bank's customers could be reduced to a quarter of what it was before. This has obvious benefits for the financial institutions: operational efficiency, financial gains, reputation and so on.

And then we figured out we still had an issue. Let me give you an example. Suppose tomorrow I buy a new car, an Audi. That is a 60,000 CHF transaction leaving my account for a new beneficiary, Amag Audi Switzerland, that I have never used before. The machine will qualify it as an anomaly, since I have never used such a counterparty before and have never had such a big transaction on my account.

So, my money will be blocked, and I would be annoyed.

So we figured that sometimes it is necessary to broaden the view of the artificial intelligence. The idea we had in 2017 was the following: use clustering techniques to group together individuals with the same transaction behaviours, building so-called peer groups, and of course keeping these peer groups up to date in real time with big data technologies. Looking at the Audi example, the machine, considering my profile alone, will of course think it is an anomaly. But if the machine looks at the 200 or 300 customers who behave the same way I do, it will find out that people buy a new Audi every day. It is not an anomaly. The machine will conclude that it is a legitimate transaction and release it.
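Here is a toy sketch of that peer-group idea on synthetic data; it illustrates the concept, not the actual NG|screener implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Behaviour features per customer: [median payment in CHF, payments/month].
retail    = rng.normal((2_000, 8),   (400, 2),    (300, 2))
corporate = rng.normal((80_000, 40), (15_000, 5), (300, 2))
features  = np.vstack([retail, corporate])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

me = np.array([[2_100, 9]])          # my profile: an ordinary retail customer
peers = features[groups.labels_ == groups.predict(me)[0]]
print(f"my peer group contains {len(peers)} similar customers")

# Against my own history alone, a 60,000 CHF car purchase is an anomaly;
# across a few hundred peers, such one-off purchases happen every day,
# so the transaction can be released instead of blocked.
```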

The big impact of this new approach has been to help us significantly reduce the number of cases to be investigated. These are what we call false positives: wrong alerts that still need to be generated if you want to catch the true alerts, the true positives.

And then we figured: profiling and clustering work well, but can we do something better? Because we still have a problem here: we work after the fact. By analysing financial transactions, we need the transaction to be input on the system before we can qualify it.

So our idea was: can we analyse each and every little trace of interaction between the individuals, customers and employees, and the banking information system, and qualify them as legitimate or potentially fraudulent even before a transaction is input on the system?
This required significantly different analysis techniques. Let me give you an example, with an e-banking application.

Imagine the situation of a legitimate user such as myself logging into the e-banking platform, checking the account balance, inputting a first payment, validating it, a second payment, validating it, a third payment, validating it. Then I check my pending orders to make sure I have not forgotten anything, and if everything is fine, I quit the application.

Now imagine that a worm successfully hijacks the e-banking session. The worm will behave completely differently. It will likely go very fast from login to payment input to payment validation and log out. And here I am only focusing on transitions; but think about what happens if we also consider the user's thinking time in front of the screen. What if we consider the speed at which the user types on the keyboard?

And our idea has been to let the machine build a probabilistic model of the “path-to-action”: using probabilistic learning, discover the usual path leading to a specific action or interaction of the banking employee or customer.
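One simple way to picture such a path-to-action model is a first-order Markov chain over session events; the sketch below is illustrative, not the production model:

```python
from collections import Counter, defaultdict

# Pretend we observed many legitimate e-banking sessions.
sessions = [
    ["login", "balance", "payment", "validate", "payment", "validate",
     "pending_orders", "logout"],
    ["login", "balance", "payment", "validate", "logout"],
    ["login", "balance", "pending_orders", "logout"],
] * 100

# Count observed transitions between consecutive events.
counts = defaultdict(Counter)
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

def path_probability(path):
    """Probability of an event path under the learned transition model."""
    n_events = len(counts) + 1
    p = 1.0
    for a, b in zip(path, path[1:]):
        total = sum(counts[a].values())
        p *= (counts[a][b] + 1) / (total + n_events)   # Laplace smoothing
    return p

# A hijacked session rushing straight from login to payment looks far
# less probable than the paths legitimate users usually take.
print(path_probability(["login", "balance", "payment", "validate"]))
print(path_probability(["login", "payment", "validate", "logout"]))
```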

With this new approach, we are able to detect fraud before it happens. We are able, for instance, to qualify an e-banking session as potentially fraudulent before a transaction is even input into the system. And this way we protect the privacy and the data of the banking institutions' customers.

My conclusion on all of this would be as follows. Today, the reality at our customers' sites is this: artificial intelligence monitors each and every little trace of interaction between the employees and customers and the banking information system, in addition to financial transactions, to secure banks and their customers.

This is what AI does for us.

A little note that I cannot restrain myself from making is about science fiction versus reality. Today, science fiction is way ahead of reality when it comes to artificial intelligence. And you know, maybe because of Elon Musk, or Hawking, or Hollywood, the public imagines AI as a supercomputer that will want to rule the world.

So, let’s clarify something.

If we define weak artificial intelligence as an intelligence able to optimize a mathematical function, find a solution to a problem, or answer a question within a very strict and given context, then we can call strong artificial intelligence an intelligence able to contextualise, an intelligence able to show sensitivity or emotions.

While the progress in the world of weak artificial intelligence is today very impressive, amazing and tremendous, let's agree that strong artificial intelligence really is science fiction. We do not have today the slightest trace of proof that one day we will be able to build a strong artificial intelligence. This does not mean that artificial intelligence is not interesting, au contraire: the applications are amazing.

 

Let me give you one perspective on that. If you consider the game of chess, for instance, artificial intelligence has been beating chess masters for quite a long time now. But today it is what we call the Centaurs, sometimes average or amateur chess players augmented with artificial intelligence, half-human, half-machine, who win all the freestyle games on the internet. Today this technology gives the best results not when it replaces the human decision process but when it supports it. This is called augmented intelligence. And augmented intelligence is precisely what we do at NetGuardians, by providing bankers with the means to detect and prevent fraud more efficiently.

Thank you very much for listening.

Introduction to Machine Learning and Artificial Intelligence

The following blog is a transcription of the presentation given by Prof. Chris Tucci during the Innovation Leaders 2018 evening. You can also see the full video below.

Prof. Chris Tucci introducing machine learning and artificial intelligence.

Thank you very much, Tilo, and welcome to all of you. I am pleased that you are here tonight. We are going to talk a little bit about machine learning. Let me make a few introductory remarks and cover some definitions, so that the speakers who follow don't have to go over them again and again. Then we can start building on that and finding out how people are using machine learning in business applications.

Let's start with artificial intelligence. If you do a Google image search for artificial intelligence, you find incredibly terrifying images, and I think that after tonight you'll probably be less terrified. The general public appears to be terrified about this, and I think we will demystify it a little bit tonight.

So the goal of artificial intelligence is to develop machines that behave as if they were intelligent. That is the definition John McCarthy developed in the 1950s, and we still often think of it as a very visionary statement about AI. However, what most people and most companies working on AI are doing is something much narrower. That is what we call machine learning, and within it, a subset called supervised machine learning: training on past data, on inputs and outputs. I am going to explain that in a little more detail in a second.
So machine learning here is concerned with methods of learning from data and making predictions on data. Machine learning is about category labelling.
There is this crazy idea that AI will wipe out everybody's job. I went to a great talk the other day, given by Cassy Costigan from Google, and she said: substitute “category labelling” whenever you hear the words machine learning or AI, and test how that sounds: “So category labelling is gonna wipe out everybody's job.” An interesting thought process, right?

Now imagine you have a stream of data. It doesn't have to be a firehose of data; it could be occasional data, whatever it is that comes in. All of this machine learning is based on data that arrives every once in a while that you need to analyse and make some kind of decision about. So imagine you have photos your customers have taken, or credit card transactions for your company, or judicial cases for crimes that need sentencing. Next, we analyse the data to classify these various inputs, for example fraudulent credit card transactions versus legitimate ones. So a transaction comes in that someone just made, and you have to decide relatively quickly: is it okay or not?

So there are certain ways you might imagine making decisions on this. But there is a bunch of things that lead up to that decision, certain characteristics of the data that might lead you to say that one is good, or no, that one is bad. It could also be about more complex decisions, for example the length of the sentence that fits a crime, or tagging friends in photos. So what you want to do is go through some past data and manually tag it with the decision you are trying to make, or the output you want. You could go through a bunch of card transactions and say yes, that one was fraudulent; no, that one was good; and so on. That is what we call making your training dataset.
In some sense, tagging photos is a great example. In the photo app on your laptop, you can see how the app tries to make suggestions about people you know who appear in the photos. For example, it doesn't know who this is, and it is waiting for me to suggest who that is. Then, based on an analysis of that person's facial features, the next time I take a photo of the same person, it will tag them by default.

There are lots of different methods of categorising these kinds of decisions. On this little graphic here, you see some red X's and green O's, and what you are trying to do is distinguish between the red X's and the green O's. All the buzzwords you hear are typically just different ways of trying to draw the line, or lines, for the classification. Can you do it with a straight line, or do you need to partition the space into several different areas?
By partitioning it in different ways, you narrow down the set; of course, you are never going to make it perfect, but you try to get as many right as you possibly can. The idea is that the different supervised machine learning methods you hear about are all simply different ways of trying to draw the lines between these various sets, including for future data you don't know about yet. So usually you split your initial data in two: let's say 30 percent of your data you use to figure out how you want to draw your lines. Then you bring back the remaining data and see how well your system predicts whatever it is you are trying to predict. At that point, if it looks good, you continue. But you need to reassess every once in a while that the system is still accurate.
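As a toy illustration of “drawing the line” in two different ways, here is a sketch on synthetic points standing in for the red X's and green O's; following the split described above, only 30 percent of the data is used to draw the lines:

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Two interleaved classes: a straight line vs. partitioned areas.
X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
X_fit, X_check, y_fit, y_check = train_test_split(
    X, y, test_size=0.7, random_state=0)   # fit on 30%, check on the rest

for model in (LogisticRegression(),                  # one straight line
              DecisionTreeClassifier(max_depth=4)):  # several partitions
    model.fit(X_fit, y_fit)
    print(type(model).__name__, round(model.score(X_check, y_check), 2))
```

On this kind of curved data, the partitioning model usually draws better lines than the single straight one, which is exactly the choice all those buzzwords are about.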
Once you have your categorizations and you think your predictions are working, you can start with the scaling part: handling larger volumes of predictions. You probably still need human intervention, because you have to check in once in a while to see how well you are doing.
A great example of this is spam detection. It's not as if you make your decisions, figure out what is spam versus not spam, and it works forever, right? Bear in mind that the people who create spam are trying to subvert the system as you go, so you are always going to have to keep checking on your system and recalibrating it every once in a while.

Before seeing some exciting business applications of machine learning, let me give you an idea of how it is being used in business at a higher level. Currently, the global market is estimated at around 650 million Swiss francs. Predictions about how big it will become range from the most conservative estimate, thirty-nine billion within the next five to eight years, to the most outrageous claims of up to a trillion dollars or francs. The common point is that the market will grow rapidly, even at the conservative end. Looking at AI in business, I would say the biggest gain so far has been in pattern recognition: for example, voice and speech recognition like Siri or Alexa. We joke that when you go into the house of someone you don't know very well, you should always say “Alexa, order me some pizza”, just to see if they have something listening to you all the time. Another example is face recognition for tagging your friends, or computer vision in factories, for parts oriented one way or another. Good progress is also being made in diagnostics and problem solving: inventory control for retailers, financial trading bots, fraud detection for credit cards, which I have mentioned already, money laundering detection, medical diagnostics, and reviewing contracts such as loans and loan documents. In the near future, I would say you will also see it used in pharmaceutical drug development, augmented reality and virtual reality.

Most people are saying, “Oh! It's going to wipe out all the businesses.” I don't agree with this 100%, because if you think about all of these applications, most of them are process innovations. In other words, these are innovations that will make current products and services more profitable by being more efficient, through cost-cutting and re-engineering business processes. Credit card fraud detection is a great example. It is not going to wipe out the credit card. In fact, it is going to make credit card companies more profitable, because they will not waste so much money paying merchants for fraudulent transactions. The same goes for computer vision, and the same for manufacturing diagnostics.

Just a couple of points here about AI business models. In other words, what do you do with all these data? How does that affect your product and service offering? Companies are moving a bit more toward servitization: bundling services, paying for uptime, IP servitization and monetizing data.
Let me close with one quick example, a very interesting product I heard of recently. It is a pill that you swallow, and it then gives off data about the state of your insides. The makers analyze the data and can compare healthy and unhealthy people. They can do lots of things with this, but the question is: how does that change their business model? Should they be selling the pill to insurance companies? Should they be selling uptime for patients?

Let me thank you and welcome you once again. I hope you have a great night.