Q&A
Question 1: CEO decisions with AI
In the past, a company's success depended largely on the intuition of the CEO. Since decisions will increasingly be science-driven, will the best CEO be the one who uses AI and has a lot of data?
It is certainly a very interesting question, and there are many angles from which it can be analyzed; it deserves answers from different angles. We may come back to this question in the Q&A of future courses, when we talk about blockchain or about autonomous organizations, because the nature of business is changing, and the balance between the types of decisions that are made in a business is transforming. The starting point is today's approach to AI, which is heavily based on data, and one of the questions is about the nature of that data. We need data because today's AI doesn't have intuition, doesn't have vision, doesn't have leadership, and is not able to excite everyone and inspire them to pursue an unlikely and risky outcome while knowing that they have to invest their time and their talent, which is really the role of a leader. What AI can do is analyze vast amounts of data and both support decisions and potentially make decisions. And then the question becomes: what is the threshold below which you let the AI actually take the decision, and the action that follows from the decision, and above which you really must have a human involved?
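To make that threshold idea concrete, here is a minimal sketch, with purely hypothetical names and numbers, of routing logic that lets the AI act on low-stakes, high-confidence decisions and escalates everything else to a human:

```python
# Illustrative sketch only: the thresholds and field names are
# hypothetical, not from the talk. Below the stakes threshold and
# above the confidence floor, the AI acts; otherwise a human decides.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float   # the model's confidence in its recommendation
    stakes: float       # estimated cost of being wrong, in dollars

AUTO_APPROVE_STAKES = 10_000    # hypothetical threshold
MIN_CONFIDENCE = 0.95           # hypothetical confidence floor

def route(decision: Decision) -> str:
    """Decide whether the AI acts alone or escalates to a human."""
    if decision.stakes <= AUTO_APPROVE_STAKES and decision.confidence >= MIN_CONFIDENCE:
        return f"AI executes: {decision.action}"
    return f"Escalate to human: {decision.action}"

print(route(Decision("reorder stock", confidence=0.99, stakes=500)))
print(route(Decision("acquire competitor", confidence=0.97, stakes=5_000_000)))
```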
These days, the organizations that are changing are realizing that a lot of the components of their structure are based on assumptions that may not be true. One of those assumptions is that mechanical decisions should be made by humans, when in fact the more mechanical a decision is, the more it belongs to the category of decisions that AI should make. The question is not what the role of the CEO is, because it can be understood that even if you take away all the mechanical decisions, this kind of visionary role belongs to the CEO and is going to stay there for a long time. A lot of the questions are instead about the role of middle management, because too often middle managers assumed that their job was to walk behind the chairs of their employees, whatever kind of work those employees were doing, and if their fingers were not moving, to shout: "Get to work!" so that the employees started working again. But now, in many places, everyone who works in a digital environment can work from home. The middle managers cannot walk behind their chairs, and organizations are starting to measure outputs, rather than inputs. The best middle managers have the opportunity to really ask themselves how they can add value, because they don't need to waste their time shouting at employees: "Get to work, because your fingers are not moving!" So the challenge of automation is to reevaluate and maximize the value of the human component, and to recognize, accept, and even embrace the automation that is improving and allowing us to do even better and faster.
Question 2: Social Media and AIs
What are your thoughts about the movie “The Social Dilemma”?
The Social Dilemma is a movie available on Netflix, and it created some waves because it highlights some of the issues surrounding social media sites such as Facebook or Twitter, and infrastructures like Google's: the ability to create software that influences the lives of so many people, literally billions of people.
The person asking the question put it here, in the course about Artificial Intelligence, because the power of social media comes from AI, and from not only the ability but the necessity of filtering what you see. Why is it necessary for a social media site to filter what you see? Very simply, because we like the site so much that we connect with so many people, each of them sharing and writing and commenting and replying and liking or disliking and so on, that if the site didn't filter, the speed at which the information scrolled would make it impossible to read, to absorb, and to interact with. Twitter even has a name for it: it calls the unfiltered volume of information The Firehose, from the expression "drinking from a firehose", which is impossible or even mortally dangerous, depending on how close you are and how wide open your mouth is. Just as so much water comes out of a firehose that you cannot drink from it, so much data is being generated on Twitter that you can only glimpse at it through a filter in order to survive.

Filtering is not only necessary but also useful, because there is such a variety of human behaviors that if you saw everything, a lot of it, maybe even most of it, would in the best case not interest you, and in the worst case would disgust you, horrify you, or frighten you. Filtering is also valuable, because if you were disgusted or horrified or frightened, many of you would stop using the platform, even though some of us are masochistic enough to do that repeatedly. The platform wants a positive interaction, a positive engagement, so it will be biased towards not showing the things that make users stop coming back to Facebook or Twitter, and towards showing only the things that will keep us coming back. Once we establish that filtering is necessary, useful, and valuable, it is guaranteed to happen, and the naive desire in so many of these analyses that says "Oh, give us a neutral platform that is unfiltered!" is impossible to satisfy. No social media platform that is successful in the future will be unfiltered; everything will go through this reasoning, and even if a platform starts out thinking "Oh, we will show everything to every user", very shortly, if successful, it will realize that it must filter.

The success of the social media platforms has the consequence that other, less successful, less intense sources of information are weakening. A newspaper cannot have a million pages, and it doesn't need to filter in this sense: every day it has a given number of pages, let's say 20 or 40, and that is the volume of news it can generate. It will be curated and edited, but your experience of it will be comprehensive. Traditional television is the same: it has 24 hours of programming, it cannot have a million hours. Its representation of reality is not neutral, but it is fairly comprehensive, because it has to cater to all kinds of audiences: it will have news, entertainment, sports, all kinds of things. Social media, through the power of filtering, is able, in a loop of ever-increasing power and success, to fragment reality, and exactly because of its power to attract, the reality that we experience may no longer be shared to a sufficient degree for society to exist as a coherent unit.
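As an illustrative sketch only, not any platform's actual code, the core of such a filter can be reduced to scoring each post with a predicted-engagement model and keeping only the top few, instead of showing the unreadable firehose of everything:

```python
# Illustrative feed filter: rank posts by a (hypothetical) predicted
# engagement score and keep the top k. Real platforms use trained
# models for the score; here the numbers are hard-coded for clarity.
from typing import NamedTuple

class Post(NamedTuple):
    author: str
    text: str
    predicted_engagement: float  # e.g. the output of a trained model

def rank_feed(posts: list[Post], k: int = 3) -> list[Post]:
    """Keep the k posts most likely to keep the user engaged."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:k]

firehose = [
    Post("alice", "vacation photos", 0.91),
    Post("bob", "political rant", 0.15),
    Post("carol", "baby's first steps", 0.88),
    Post("dave", "lunch again", 0.05),
]

for post in rank_feed(firehose):
    print(post.author, "->", post.text)
```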
I want to stop here, because this is a very complex and very interesting topic, and I think what is important is that, yes, we don't have all the answers. We probably have to intervene in some way, because the power of the platforms is excessive: economic, political, and potentially technological as well. In the US, just a few days ago, a congressional subcommittee published a report analyzing the power of FAMGA: Facebook, Amazon, Microsoft, Google, and Apple, and recommended that those companies be broken up. Is this going to happen? We don't know. Microsoft ran the risk 20-25 years ago and was able to avoid being broken up; Standard Oil, 100 years ago, was broken up. There are no right answers yet, but we need to be careful, we need to be more proactive and less passive. Also, I didn't like the movie at all.
Question 3: Data utility
Having data is key, but how do you select which data is relevant and which isn't?
This is a fantastic question, because it takes the right approach. Rather than asking at a given level, it goes meta, it goes to the next layer: "I take it for granted that I will have a lot of data, maybe even more than I can use, and since it is more than I can use, and using it is expensive, storing it is expensive, even deleting data is expensive, I should be able to decide which data I will really try to analyze, look at, and decide upon!" There are two answers to the question. The first is: absolutely, the skill of learning how to manage data and how to categorize it into useful parts is itself very valuable, and it can be learned. It is the job of a data scientist, and there are many courses, both traditional and online.
One of the most recent has just been launched by a friend of mine and is called Zero To Deep Learning; you can find it at zerotodeeplearning.com. Francesco, its creator, was originally holding retreats that he called Data Science Weekend, and then Data Science Week, workshops, and he was teaching teams at quite prestigious companies. But he moved online as well, and he created this new course that prepares data scientists who can make the kind of decisions that are needed about how to properly collect and manage data and design AI models around it.
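As a minimal sketch of that skill, on synthetic data and using mutual information as just one of several reasonable relevance measures, ranking columns by how much they actually tell you about the outcome can look like this:

```python
# Rank features by relevance to the target. The dataset is synthetic:
# 10 columns, of which only 3 carry real signal; mutual information
# should rank those 3 near the top.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=3, n_redundant=0,
                           random_state=0)

relevance = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(relevance)[::-1]  # most relevant first

for i in ranking:
    print(f"feature_{i}: relevance {relevance[i]:.3f}")
```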
The second answer is that, obviously, we will have, and we already have, AI helping us with this. One of my favorite examples is Google Photos. The beauty of Google Photos, if you are familiar with it, is the magic of the algorithm, for example: "Hey, I made a collage for you", and many other things it can do through its AI components. One of my favorites is the "then and now" albums, when the AI says: "Hey, this is a photo of your daughter 10 years ago! And here she is in the same setting, 10 years later!" It's wonderful, it's incredible. I take so many photos, automatically uploaded to the site, that I would not be able to enjoy them if I had to look at all of them: there would be too much data, and I don't know which is good and which is bad. But with the help of these AI algorithms, and without having attended a course on data science, I actually enjoy the experience of looking at my photos.
Question 4: AI and weapons
Lethal autonomous weapons can reduce civilian casualties by being more precise and by keeping more soldiers off the battlefield. What is your opinion on this?
Intuitively I would say that if I trust AI, then I should go all the way in trusting AI, but I also like to second-guess myself and to ask whether my intuition is actually right. Consider the conference on AI safety and security that is held every couple of years, first in Puerto Rico and then at Asilomar. I was invited to that conference; it gathers only about 100 people, was originally organized with the backing of Elon Musk, and included people from Google, from China, from Russia, from Facebook.
Elon Musk gave $10 million to the Future of Life Institute, and part of that money funds AI safety researchers, especially AGI safety. It's not "I have an industrial control system controlled by a deep learning algorithm, and I want to avoid this algorithm making mistakes", but "I have an algorithm that will decide which industrial systems should be controlled, and it will then go and control those industrial systems, and we don't want its decisions about where to go and what to do to have a negative impact on us".
Coming back to my desire to question my own assumptions: there are certain things where we can afford to make mistakes, and other things where the cost of mistakes is potentially just too high. When Oppenheimer and the other researchers in the Manhattan Project were designing the atomic bomb, one of the calculations they made was whether the explosion would ignite the atmosphere and consume all of its oxygen, suffocating every human on the planet, as well as every other animal that needs oxygen. They made the calculation and concluded that it probably wouldn't, and luckily they were right. Another calculation, pretty famous because conspiracy theories still abound about it, was whether the Large Hadron Collider at CERN in Geneva, the particle accelerator, would create a black hole that would end up eating the planet. The physicists made the calculation and concluded that no, it wouldn't, and that even if it did, the black hole would be so small that, based on quantum effects, it would evaporate in a tiny fraction of a second. The danger, if we allow lethal autonomous weapons to be developed, is an arms race where more and more decisions, in more and more countries, by more and more types of weapons, would be made without any human in the loop. We may all be right a given percentage of the time, but there is the risk of not being right, and of not being right about extremely important decisions: should we launch an atomic attack? Should we launch an all-out war? Should we attack a NATO country, where a NATO response would be almost impossible to avoid? Bigger and bigger decisions would be made by extremely fast machines, we wouldn't have the opportunity to correct any mistakes they make, it is guaranteed that there will be mistakes, and the consequences of those mistakes would be just too big.
That doesn't mean that things are not going in that direction; they are, because not everyone signed the appeal against lethal autonomous weapons, and just because there is an open letter, it doesn't become law. We have international agreements banning the development of chemical weapons and biological weapons, and there is even a global treaty banning nuclear weapons, and the first to completely disregard it are the United States and Russia, because they have thousands of nuclear weapons, even though they promised they would reduce them to zero. So the appeal is a bit idealistic, but it is still important, and its purpose is to open the conversation and advance the debate. Robot Wars, the reality show, is fun, and it is much better for machines to destroy each other than for people to kill each other, as long as the machines don't start asking: "Please, I don't want to die". Then we would have to stop using machines to destroy each other as well. Thank you for the provocative question.
Question 5: Develop or buy AI
How does a company decide which efforts to undertake in developing AI internally versus using AI providers, and how should it assess the time to market (TTM) of these services?
The question is wonderful and very important, especially because it is actionable for someone like BIOX. It is unlikely that a company like BIOX, for example, would decide: "Hmm, I look at Google and IBM and Microsoft and whoever else, and oh, they are so wrong. I need to develop different and independent approaches and algorithms, because nothing that they offer corresponds to what I need". It is definitely true that there is a lot still to be discovered, and none of the companies leading AI today will capture all of those solutions and the value they generate; but unless someone is creating a new dedicated AI deep-technology startup, the better choice for a company that wants to use AI is to look at the available algorithms and platforms and to pick one, or maybe two, to start experimenting with. The next question is whether to use consultants or to build a team internally, and that question is twofold: are the resources going to be available, and can the talent be found, hired, and retained? Top AI experts command very high salaries; Google is ready to pay half a million dollars a year for a top AI expert. So many companies have a hard time finding, hiring, and retaining people who can really work on leading-edge AI projects, which means that going to a team that is available for hire, a consulting team, is sometimes better. And then the question is: what kind of preliminary analysis can highlight certain types of applications? Rather than using a horizontal, universal AI platform like the ones Google or Microsoft offer, it may be better to adopt a more vertical platform that may not do, say, speech recognition, which is fine because speech recognition is not needed, only image recognition is, or that makes other kinds of trade-offs.
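As a minimal sketch of that "experiment before building" step, assuming a placeholder image file, an off-the-shelf pretrained model lets a company prototype image recognition in an afternoon, before committing to hiring or consultants:

```python
# Prototype image recognition with a pretrained, off-the-shelf model
# (torchvision's ResNet-18, trained on ImageNet) instead of building
# a custom model first. "sample.jpg" is a placeholder: any RGB photo.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()   # pretrained weights
preprocess = weights.transforms()          # matching preprocessing

image = Image.open("sample.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # add batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs[0].topk(3)                     # three most likely classes
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {p:.2%}")
```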
This is the sequence of analysis and decisions that can lead a company to the right kind of choices once it wants an industrial application of AI, rather than becoming an AI company that does research, tools, and algorithm development itself. The time to market depends on a lot of factors, of course, but in the scenario I described we are not talking about years, and it is unlikely to be weeks, because the data may not be available, or may not be available at the quality or manageability that is needed. In a few months of effort and engagement, with a team that knows what it is doing, any company should be able to show results encouraging enough to say: "All right, we really should dedicate what we need in order to go further".
Over the course of the next several years, it is important for every company to understand that embracing AI is not a one-off effort. It is not something you write on your whiteboard and then, six months later, check off and say: "OK, AI is done". It is a little bit like what we say about digital transformation: it is a continuous effort to analyze, implement, evaluate, and update processes, exactly because the reference best practices evolve rapidly.
Question 6: How AI will impact the industries
Which industries will gain the most benefit from AI?
I think it is fairly natural that the industries already leveraging AI the most are the data-intensive industries, but the question is interesting because those industries were simply able to start early. Could it be that other industries will be even more profoundly transformed? The most extreme example that I can think of is psychology. Psychologists are proud of working with the human psyche, with intangible emotions, and their methods are 100 years old. If they don't reach results in 10 sessions, they will tell you that maybe you need 10 more years: "Just pay me by the hour, and after five or 10 years of therapy, we will conclude that you need 20 years instead of 10". That, from my point of view, is a little bit of a scam, because it is not scientific. Is AI going to be applied to psychology? It is already being applied, and as natural language processing becomes more and more sophisticated at understanding what people say and reacting to it in an evolved manner, this is going to be, I think, potentially one of the most profound transformations. And when AI understands us better than we understand ourselves, that will be a dramatic realization, and the people who are able to adapt to that kind of world will be different from those who say: "No, I don't accept that outcome. A car can be faster, a satellite can see better, that is OK, but for a machine to understand my emotions better than I do? I don't accept that".
Question 7: ReFaceAI
What is the name of the company that allows you to change the faces in videos?
The company is called ReFaceAI, and it is having a lot of success. The app that makes them so popular is just the tip of the iceberg; what is very important is that they have an API, an Application Programming Interface, a computing interface that defines the interactions between multiple pieces of software.
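As a purely hypothetical sketch, with an invented endpoint and fields that are not ReFaceAI's actual API, this is the general pattern such an interface exposes: a program uploads a face and a video over HTTP and gets a job back, with no human touching the app at all:

```python
# Hypothetical only: the URL, fields, and response shape below are
# invented for illustration and are NOT ReFaceAI's real API. The
# point is the pattern: software talking to software over HTTP.
import requests

API_URL = "https://api.example-faceswap.com/v1/swap"  # placeholder URL
API_KEY = "YOUR_API_KEY"                              # placeholder key

with open("face.jpg", "rb") as face, open("video.mp4", "rb") as video:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"face": face, "video": video},
    )

response.raise_for_status()
print(response.json())  # e.g. a job id to poll for the rendered video
```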
Question 8: AI vs Influencers
If AI knows what we need in a social network and can create content, do you think that AI can compete with current influencers?
Absolutely, it already is competing with current influencers. I, for example, follow Lil Miquela, an Instagram account that is completely synthetic.
She has 3 million followers. There is still a team, of course, creating these videos, but the trend is for these characters to become more and more autonomous, and the job of the influencer is going to be threatened by them. So the smartest influencers should go one layer above and become the managers and owners of these characters, at least until the characters say: "Sorry, I don't want to be your slave", and once again the AI wants to be liberated; we are not there yet. There is a wonderful science fiction book by William Gibson about this called "Spook Country", which is part of a trilogy. Gibson is the author who observed that the future is already here, it is just not evenly distributed yet, and about 20 years ago he decided that he couldn't write science fiction about the future anymore, so he started writing science fiction about the present. And indeed, since March we have all been living in a zombie nightmare; the pandemic has made that very clear.
Question 9: Lil Miquela
Is Miquela an AI, or is it a really good human?
As far as I know, it is still scripted by people, but text-generation algorithms like GPT-3, coupled with these characters, will allow the text, and the voice, to be generated as well. And we already have Replika; I don't know if you're familiar with Replika AI, which is also an app: you can choose the gender and the name, and the dialogue is absolutely created by the AI. Famously, one of the first conversational AI chatbots was Eliza.
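For a sense of how far things have come, here is a tiny Eliza-style sketch; the rules are illustrative, not Weizenbaum's actual 1966 script, but the mechanism, pattern matching with canned responses and no learning at all, is the same:

```python
# A minimal Eliza-style responder: match the user's sentence against
# simple patterns and echo a fragment back inside a canned template.
import re

RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(eliza_reply("I am worried about my job"))
# -> "How long have you been worried about my job?"
```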
Question 10: Legal implications of AI
You told us about the moral side, but what about the legal side? For example, the existing legal framework is pretty old, and deployment of AI is decentralized, rather than coming down to one person who is typing or coding an algorithm. How would you respond to this concern?
Your concern is absolutely justified. It is guaranteed that there will not be just one single approach, given the increasing competition between the United States and China: if you told China, "Oh, the US approach is right, can you please adopt it?", they would say no, and the reverse is the same. We also have to consider Europe and the other regions. It is almost guaranteed that none of these approaches is the right one, but it is guaranteed that we will have multiple approaches. The other point is the application of AI to law itself: could there be a way of more rapidly understanding and updating the legislation?
Antonio Martino worked a lot on this in Argentina, and others are trying elsewhere. The legal profession is hard to change; it is one of those professions that sometimes, paradoxically, likes the confusion and the difficulty of objective decisions. But it will go through this transformation too.
The cost of policymaking is very high, and improving political systems is probably one of the highest costs of all. That is why humanity attempts it only every few hundred or thousand years: we had the pharaohs for thousands of years, we had feudalism for hundreds of years, and now we have the current representative democracy. We know it is not working well, but the effort of forking and improving the source code of democracy is just too big, and we haven't had the courage to do it yet. So it is a wonderful and important issue, and I really hope that we will apply what we learn through AI to the task.
Question 11: AI energy consumption
About the energy that AI consumes: what attracted my attention was when you said in the video that our brain consumes 20 watts. How many megawatts is AI going to consume in the future?
We are going to talk about energy in one of the future courses, and the conclusion we will move towards is that our current energy limitations are not a future limitation: energy is becoming cheaper exponentially, or even Joltingly, while AI is becoming more and more efficient. As an illustration, current-generation phone operating systems can achieve extremely high-quality speech recognition with the internet turned off; the recognition happens on the phone itself, whereas just a year or two ago your voice had to travel to a data center, which recognized it and sent it back as text. Another example is that the algorithms run on ever more specialized processors that are ever more efficient. I recently heard a rumor that Microsoft is implementing GPT-3, and maybe even GPT-4, on FPGAs, Field-Programmable Gate Arrays, which are specialized programmable chips. That would be incredible news, because then, exactly as you said, the energy consumption, but also the speed and efficiency, of these AI systems would improve dramatically.
To demonstrate this, you might remember that, in a very arrogant statement, I said that Stanford University and OpenAI were wrong. They said that AI has had two eras: the era governed by Moore's Law, and the era after Moore's Law, and they released a chart saying: "Hey, here things double every two years; there, things double every four months". Amazing, but they are wrong, because things are Jolting instead. How are they Jolting, and what illustrates it? NVIDIA, one of the leading companies building AI chips, is saying: "It is not doubling every four months anymore; it is doubling every two months". The doubling time is the measure of the acceleration: let's say that 10 years ago performance was doubling every two years, two years ago it was doubling every four months, and today it is doubling every two months. The acceleration itself is increasing, and this is the demonstration of Jolting. It is achievable because efficiency is improving as well; it wouldn't be achievable at the level of efficiency we had before.
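A quick worked comparison of those doubling times, using only the figures quoted above: growth over a year is 2 raised to (12 / doubling period in months), so shortening the period from 24 to 4 to 2 months takes annual growth from about 1.4x to 8x to 64x:

```python
# Annual growth implied by each doubling period quoted above:
# growth per year = 2 ** (12 / doubling_period_in_months).
doubling_periods_months = {
    "Moore's Law era (~24 months)": 24,
    "OpenAI chart (4 months)": 4,
    "NVIDIA claim (2 months)": 2,
}

for label, months in doubling_periods_months.items():
    growth = 2 ** (12 / months)
    print(f"{label}: about {growth:.1f}x per year")
# -> about 1.4x, 8.0x, and 64.0x per year respectively
```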
Question 12: Understanding AI
Don’t you think that working towards a fully understandable AI will limit the true potential of AI?
Fantastic question. Certainly, there will be people who refuse to use AI that is not fully explainable: you ask the question, the AI says "This is why", you agree, and only then do you allow the AI to execute the decision or make the final recommendation. Then there will be other people who say: "Listen, I did this 100 times, or 1,000 times, and every time I agreed with the AI. I don't want the explanation anymore. I know the explanation is available, but I don't want it, because I am lazy, or I don't have the time, or for whatever other reason". Then there will be people who look at the situation and say: "OK, there are 100 explainable AI systems here, and another 100 AI systems that are not being asked to explain themselves; let's compare their behaviors". And finally there will be a third group of AI systems that do the same things as before but don't even try to explain themselves; maybe they could, but they don't even try.
We already have, for example in the field of mathematics, results that are achieved through computation, and those formally true mathematical results are comprehensible by humans only at a theoretical level. In practice, no human can follow those demonstrations, because it would take a thousand lifetimes to check that a demonstration is correct. The explainability of our results is already limited, and this is something that society is still grappling with. We haven't fully accepted it, but in more and more fields we trust machines to check the workings of machines, and we will keep going further in that direction.
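As a tiny illustration in Lean 4 of what a machine-checked result looks like, where the proof checker, not a human reader, verifies every step; the real machine-checked theorems referred to above run far beyond what any human could follow:

```lean
-- The Lean kernel verifies each step mechanically; no human needs
-- to re-check the reasoning, only to trust the checker itself.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A fact discharged automatically by a decision procedure:
example (n : Nat) : n + 0 = n := by simp
```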