AI For Everyone, Week 1, by Andrew Ng (English course transcript)

1 Introduction

Welcome to AI for Everyone. AI is changing the way we work and live, and this non-technical course will teach you how to navigate the rise of AI. Whether you want to know what's behind the buzzwords, or whether you want to perhaps use AI yourself, either in a personal context or in a corporation or other organization, this course will teach you how. If you want to understand how AI is affecting society, and how you can navigate that, you'll also learn that from this course. In this first week, we'll start by cutting through the hype and giving you a realistic view of what AI really is. Let's get started.

You've probably seen news articles about how much value AI is creating. According to a study by the McKinsey Global Institute, AI is estimated to create an additional 13 trillion US dollars of value annually by the year 2030. Even though AI is already creating tremendous amounts of value in the software industry, a lot of the value to be created in the future lies outside the software industry.

In sectors such as retail, travel, transportation, automotive, materials, manufacturing, and so on, I have a hard time thinking of an industry that I don't think AI will have a huge impact on in the next several years. My friends and I used to challenge each other to name an industry where we don't think AI will have a huge impact. My best example was the hairdressing industry, because we don't know how to use AI robotics to automate hairdressing.

But I once said this on stage, and one of my friends, who is a robotics professor, was in the audience that day. She actually stood up, looked me in the eye, and said, "You know Andrew, most people's hairstyles, I couldn't get a robot to cut their hair that way." But then she looked at me and said, "Your hairstyle, Andrew, that a robot can do." There is a lot of excitement but also a lot of unnecessary hype about AI. One of the reasons for this is that AI actually refers to two separate ideas.

Almost all the progress we are seeing in AI today is artificial narrow intelligence. These are AIs that do one thing, such as a smart speaker, or a self-driving car, or AI to do web search, or AI applications in farming or in a factory. These types of AI are one-trick ponies, but when you find the appropriate trick, this can be incredibly valuable. Unfortunately, AI also refers to a second concept, AGI, or artificial general intelligence.

That is the goal of building AI that can do anything a human can do, or maybe even be superintelligent and do even more things than any human can. I'm seeing tons of progress in ANI, artificial narrow intelligence, and almost no progress toward AGI, artificial general intelligence. Both of these are worthy goals, and unfortunately the rapid progress in ANI, which is incredibly valuable, has caused people to conclude that there's a lot of progress in AI, which is true. But that has also caused people to falsely think that there might be a lot of progress in AGI as well, which is leading to some irrational fears about evil, clever robots coming to take over humanity anytime now. I think AGI is an exciting goal for researchers to work on, but it will take many more technological breakthroughs before we get there, and it may be decades or hundreds of years or even thousands of years away. Given how far away AGI is, I think there is no need to unduly worry about it.

In this week, you will learn what ANI can do and how to apply it to your problems. Later in this course, you'll also see some case studies of how ANI, these one-trick ponies, can be used to build really valuable applications such as smart speakers and self-driving cars.

In this week, you will also learn what AI is. You may have heard of machine learning, and the next video will teach you what machine learning is. You'll also learn what data is, what types of data are valuable, and what types of data are not valuable. You'll learn what it is that makes a company an AI company, or an AI-first company, so that perhaps you can start thinking about whether there are ways to improve your company's or other organization's ability to use AI. And importantly, you'll also learn this week what machine learning can and cannot do. In our society, newspapers as well as research papers tend to talk only about the success stories of machine learning and AI, and we hardly ever see any failure stories, because they just aren't as interesting to report on. But for you to have a realistic view of what AI and machine learning can or cannot do, I think it is important that you see examples of both, so that you can make more accurate judgments about what you should and perhaps should not try to use these technologies for. Finally, a lot of the recent rise of machine learning has been driven by the rise of deep learning, sometimes also called neural networks.

In the final two optional videos of this week, you can also see an intuitive explanation of deep learning, so that you will better understand what these algorithms can do, particularly for a set of narrow ANI tasks. So, that's what you'll learn this week, and by the end of this week, you'll have a sense of AI technologies and what they can and cannot do. In the second week, you'll learn how these AI technologies can be used to build valuable projects. You'll learn what it feels like to build an AI project, as well as what you should do to make sure you select projects that are technically feasible as well as valuable to you or your business or other organization. After learning what it takes to build AI projects, in the third week you'll learn how to build AI in your company. In particular, if you want to take a few steps toward making your company good at AI, you'll see the AI Transformation Playbook and learn how to build AI teams and also build complex AI products.

Finally, AI is having a huge impact on society. In the fourth and final week, you'll learn how AI systems can be biased and how to diminish or eliminate such biases. You'll also learn how AI is affecting developing economies and how AI is affecting jobs, and you'll be better able to navigate this rise of AI for yourself and for your organization. By the end of this four-week course, you'll be more knowledgeable and better qualified than even the CEOs of most large companies in terms of your understanding of AI technology, as well as your ability to help yourself or your company or other organization navigate the rise of AI. I hope that after this course, you'll be in a position to provide leadership to others as well, as they navigate these issues. Now, one of the major technologies driving the recent rise of AI is machine learning.

But what is Machine Learning? Let's take a look in the next video.

2 Machine Learning

The rise of AI has been largely driven by one tool in AI called machine learning.

In this video, you'll learn what machine learning is, so that by the end, you'll hopefully start thinking about how machine learning might be applied to your company or to your industry. The most commonly used type of machine learning is a type of AI that learns A to B, or input to output, mappings.

This is called supervised learning. Let's see some examples.

If the input A is an email, and the output B is whether the email is spam or not (zero or one), then this is the core piece of AI used to build a spam filter.

Or if the input is an audio clip, and the AI's job is to output the text transcript, then this is speech recognition. More examples: if you want to input English and have it output a different language, Chinese, Spanish, something else, then this is machine translation.

Or the most lucrative form of supervised learning, of this type of machine learning, may be online advertising, where all the large online ad platforms have a piece of AI that inputs some information about an ad, and some information about you, and tries to figure out, will you click on this ad or not? By showing you the ads you're most likely to click on, this turns out to be very lucrative. Maybe not the most inspiring application, but certainly one having a huge economic impact today.

Or if you want to build a self-driving car, one of the key pieces of AI is an AI that takes as input an image, and some information from its radar or from other sensors, and outputs the position of the other cars, so your self-driving car can avoid them. Or in manufacturing.

I've actually done a lot of work in manufacturing where you take as input a picture of something you've just manufactured, such as a picture of a cell phone coming off the assembly line. This is a picture of a phone, not a picture taken by a phone, and you want to output, is there a scratch, or is there a dent, or some other defects on this thing you've just manufactured? And this is visual inspection which is helping manufacturers to reduce or prevent defects in the things that they're making.
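To make this input-to-output idea concrete, here is a minimal sketch of a tiny spam filter in code. This is just an illustration, assuming the scikit-learn library is available; the four example emails and their labels are made up.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Input A: the text of each email. Output B: 1 for spam, 0 for not spam.
emails = [
    "win a free prize now",
    "meeting rescheduled to 3pm",
    "cheap meds, limited time offer",
    "lunch tomorrow?",
]
labels = [1, 0, 1, 0]

# Learn the A -> B mapping from the labeled examples.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Predicted label for a new, unseen email (1 would mean spam).
print(model.predict(["claim your free prize"]))
```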

This type of AI is called supervised learning, and it just learns input to output, or A to B, mappings. On one hand, input to output, A to B, seems quite limiting.

But when you find the right application scenario, this can be incredibly valuable. Now, the idea of supervised learning has been around for many decades.

But it's really taken off in the last few years. Why is this? Well, my friends asked me, "Hey Andrew, why is supervised learning taking off now?" There's a picture I draw for them. I want to show you this picture now, and you may be able to draw this picture for others that ask you the same question as well.

Let's say on the horizontal axis you plot the amount of data you have for a task. So, for speech recognition, this might be the amount of audio data and transcripts you have.

In a lot of industries, the amount of data you have access to has really grown over the last couple of decades, thanks to the rise of the Internet and the rise of computers.

A lot of what used to be say pieces of paper, are now instead recorded on a digital computer. So, we've just been getting more and more data.

Now, let's say on the vertical axis you plot the performance of an AI system. It turns out that if you use a traditional AI system, then the performance would grow like this: as you feed in more data, its performance gets a bit better.

But beyond a certain point, it does not get that much better. So it's as if your speech recognition system did not get that much more accurate, or your online advertising system didn't get that much better at showing the most relevant ads, even as you feed it more data.

AI has really taken off recently due to the rise of neural networks and deep learning. I'll define these terms more precisely in a later video, so don't worry too much about what they mean for now.

But with modern AI, with neural networks and deep learning, what we saw was that if you train a small neural network, then the performance looks like this, where as you feed it more data, performance keeps getting better for much longer. If you train an even slightly larger neural network, say a medium-sized neural net, then the performance may look like that.

If you train a very large neural network, then the performance just keeps on getting better and better. For applications like speech recognition, online advertising, and building self-driving cars, where having a high-performance, highly accurate system is important, this enables these AI systems to get much better, and makes, say, speech recognition products much more acceptable to users and much more valuable to companies.

Now, a couple of implications of this figure. If you want the best possible levels of performance, for your performance to be up here, to hit this level of performance, then you need two things. One is, it really helps to have a lot of data.

So that's why sometimes you hear about big data. Having more data almost always helps.

The second thing is, you want to be able to train a very large neural network. So, the rise of fast computers, including Moore's law, but also the rise of specialized processors such as graphics processing units or GPUs, which you'll hear more about in a later video, has enabled many companies, not just the giant tech companies but many, many other companies, to train large neural nets on a large enough amount of data in order to get very good performance and drive business value.

The most important idea in AI has been machine learning, basically supervised learning, which means A to B, or input to output, mappings. What enables it to work really well is data.

In the next video, let's take a look at what data is, what data you might already have, and how to think about feeding this data into AI systems. Let's go on to the next video.

3 What is Data

You may have heard that data is really important for building AI systems.

But, what is data really? Let's take a look. Let's look at an example of a table of data which we also call a dataset.

If you're trying to figure out how to price houses that you're trying to buy or sell, you might collect a dataset like this, and this can be just a spreadsheet, like a Microsoft Excel spreadsheet of data, where one column is the size of the house, say in square feet or square meters, and the second column is the price of the house. So, if you're trying to build an AI system or machine learning system to help you set prices for houses, or figure out if a house is priced appropriately, you might decide that the size of the house is A and the price of the house is B, and have an AI system learn this input to output, or A to B, mapping.

Now, rather than just pricing a house based on their size, you might say, "Well, let's also collect data on the number of bedrooms of this house." In that case, A can be both of these first two columns, and B can be just the price of the house. So, given that table of data, given the dataset, it's actually up to you, up to your business use case to decide what is A and what is B.

Data is often unique to your business, and this is an example of a dataset that a real estate agency might have when they try to help price houses. It's up to you to decide what is A and what is B, and how to choose these definitions of A and B to make it valuable for your business.

As another example, if you have a certain budget and you want to decide what size of house you can afford, then you might decide that the input A is how much someone can spend, and B is the size of the house in square feet, and that would be a totally different choice of A and B that tells you, given a certain budget, what size of house you should maybe be looking at.
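Here is what those two choices of A and B might look like in code, as a minimal sketch assuming the pandas library; the numbers in this tiny table are made up for illustration:

```python
import pandas as pd

houses = pd.DataFrame({
    "size_sqft": [1000, 2000, 3000, 4000],
    "bedrooms":  [2, 3, 4, 5],
    "price":     [200_000, 380_000, 540_000, 700_000],
})

# Choice 1: price a house from its size and number of bedrooms.
A = houses[["size_sqft", "bedrooms"]]
B = houses["price"]

# Choice 2: given a budget, estimate what size of house to look at.
A_alt = houses[["price"]]
B_alt = houses["size_sqft"]
```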

Here's another example of a dataset. Let's say that you want to build an AI system to recognize cats in pictures. I'm not sure why you might want to do that, but maybe for a fun mobile app where you want to tag all the pictures of cats.

So, you might collect a dataset where the input A is a set of different images, and the output B is a set of labels that say, "this first picture is a cat, that's not a cat, that's a cat, that's not a cat," and have an AI that inputs a picture A and outputs B, whether it's a cat or not, so you can tag all the cat pictures in your photo feed or your mobile app.

In the Machine Learning tradition, there are actually a lot of cats in Machine Learning. I think some of this started when I was leading the Google Brain team and we published the results of the somewhat infamous Google cat, where an AI system learned to detect cats from watching YouTube videos.

But since then, there's been a tradition of using cats as a running example when talking about Machine Learning with apologies to all the dog lovers out there. I love dogs too. So, data is important.

But how do you get data? How do you acquire data? Well, one way to get data is manual labeling. For example, you might collect a set of pictures like these over here, and then you might either yourself, or have someone else, go through these pictures and label each of them.

So, the first one is a cat, second one is not a cat, third one is a cat, fourth one is not a cat. By manually labeling each of these images, you now have a dataset for building a cat detector.

To do that, you actually need more than four pictures. You might need hundreds of thousands of pictures but manual labeling is a tried and true way of getting a dataset where you have both A and B.

Another way to get a dataset is from observing user behaviors or other types of behaviors. So, for example, let's say you run a website that sells things online.

So, an e-commerce or electronic commerce website, where you offer things to users at different prices, and you can just observe whether they buy your product or not. So, just through the act of users either buying or not buying your product, you may be able to collect a dataset like this, where you can store the user ID, the time the user visited your website, the price at which you offered the product to the user, as well as whether or not they purchased it.

So, just by using your website, users can generate this data for you. This was an example of observing user behaviors.

We can also observe behaviors of other things such as machines. If you run a large machine in a factory and you want to predict if a machine is about to fail or have a fault, then just by observing the behavior of a machine, you can then record a dataset like this.

There's a machine ID, there's the temperature of the machine, there's the pressure within the machine, and then whether the machine failed or not. If your application is preventative maintenance, say you want to figure out if a machine is about to fail, then you could, for example, choose this as the input A and choose that as the output B, to try to figure out if a machine is about to fail, in which case you might do preventative maintenance on the machine.

The third and very common way of acquiring data is to download it from a website or to get it from a partner. Thanks to the open internet, there are just so many datasets that you can download for free, ranging from computer vision or image datasets, to self-driving car datasets, to speech recognition datasets, to medical imaging datasets, and many, many more.

So, if your application needs a type of data that you can just download off the web, keeping in mind licensing and copyright, then that could be a great way to get started on the application. Finally, if you're working with a partner, say you're working with a factory, then they may already have collected a big dataset of machines, temperatures, and pressures, and whether the machines failed or not, that they could give to you.

Data is important, but it's also a little bit over-hyped and sometimes misused. Let me describe to you two of the most common misuses or bad ways of thinking about data.

When I speak with CEOs of large companies, a few of them have even said to me, "Hey Andrew, give me three years to build up my IT team, we're collecting so much data. Then after three years, I'll have this perfect dataset, and then we'll do AI then." It turns out that's a really bad strategy.

Instead, what I recommend to every company, is once you've started collecting some data, go ahead and start showing it or feeding it to an AI team. Because often, the AI team can give feedback to your IT team on what types of data to collect and what types of IT infrastructure to keep on building.

For example, maybe an AI team can look at your factory data and say, "Hey, you know what? If you can collect data from this big manufacturing machine not just once every ten minutes, but instead once every minute, then we could do a much better job building a preventative maintenance system for you." So, there's often this interplay, this back and forth, between IT and AI teams, and my advice is usually to try to get feedback from an AI team earlier, because it can help you guide the development of your IT infrastructure. Here's a second misuse of data.

Unfortunately, I've seen some CEOs read about the importance of data in the news, and then say, "Hey, I have so much data. Surely, an AI team can make it valuable." Unfortunately, this doesn't always work out.

More data is usually better than less data, but I wouldn't take it for granted that just because you have many terabytes or gigabytes of data, that an AI team can actually make that valuable.

So, my advice is: don't just throw data at an AI team and assume it will be valuable.

In fact, in one extreme case, I saw one company go and acquire a whole string of other companies in medicine, on the thesis, on the hypothesis that their data would be very valuable. Now, a couple years later, as far as I know the engineers have not yet figured out how to take all this data and actually create value out of it.

So, sometimes it works and sometimes it doesn't. But don't over-invest in just acquiring data for the sake of data, unless you're also getting an AI team to take a look at it, because they can help guide you to think through what data is actually the most valuable.

Finally, data is messy.

You may have heard the phrase garbage in garbage out, and if you have bad data, then the AI will learn inaccurate things. Here are some examples of data problems.

Let's say you have this dataset of the sizes of houses, the number of bedrooms, and the price. You can have incorrect labels, or just incorrect data.

For example, this house is probably not going to sell for just one dollar. Or, data can also have missing values, such as the whole bunch of unknown values we have here.
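As a minimal sketch of the kind of cleanup this calls for, assuming the pandas library and made-up values that mirror the examples above:

```python
import numpy as np
import pandas as pd

houses = pd.DataFrame({
    "size_sqft": [1000, 2000, 3000, 4000],
    "bedrooms":  [2, np.nan, 4, np.nan],          # missing values
    "price":     [200_000, 1, 540_000, 700_000],  # a one-dollar "price" is almost certainly an error
})

# Drop rows whose label (the price) is clearly incorrect ...
houses = houses[houses["price"] > 1_000]

# ... and fill in missing bedroom counts with a simple default.
houses["bedrooms"] = houses["bedrooms"].fillna(houses["bedrooms"].median())
```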

So, your AI team will need to figure out how to clean up the data, or how to deal with these incorrect labels and missing values. There are also multiple types of data.

For example, sometimes you hear about images, audio, and text. These are types of data that humans find very easy to interpret. There's a term for this.

This is called unstructured data, and there are certain types of AI techniques that work with images to recognize cats, or with audio to recognize speech, or with text to understand whether an email is spam. Then, there are also datasets like the one on the right.

This is an example of structured data. That basically means data that lives in a giant spreadsheet, and the techniques for dealing with unstructured data are a little bit different from the techniques for dealing with structured data.

But AI techniques can work very well for both of these types of data, unstructured data and structured data. In this video, you learned what data is, and you also saw how not to misuse data, for example by over-investing in an IT infrastructure in the hope that it will be useful for AI in the future, without actually checking that it really will be useful for the AI applications you want to build.

Finally, you saw that data is messy. But a good AI team will be able to help you deal with all of these problems.

Now, AI has a complicated terminology, with people throwing around terms like AI, machine learning, and data science. What I want to do in the next video is share with you what these terms actually mean, so that you'll be able to confidently and accurately talk about these concepts with others.

Let's go on to the next video.

4 The terminology of AI

You might have heard terminology from AI, such as machine learning or data science or neural networks or deep learning.

What do these terms mean? In this video, you'll learn the terminology of the most important concepts of AI, so that you can speak with others about them and start thinking about how these things could apply in your business.

Let's get started. Let's say you have a housing dataset like this, with the size of the house, the number of bedrooms, the number of bathrooms, and whether the house is newly renovated, as well as the price.

If you want to build a mobile app to help people price houses, then this would be the input A, and this would be the output B. This would be a machine learning system, and in particular, it would be one of those machine learning systems that learns inputs to outputs, or A to B mappings.

So, machine learning often results in a running AI system. It's a piece of software that, any time of day or night, can automatically take as input A these properties of a house and output B, the price.

So, if you have an AI system running, serving hundreds of thousands or millions of users, that's usually a machine learning system. In contrast, here's something else you might want to do, which is to have a team analyze your dataset in order to gain insights.

So, a team might come up with a conclusion like, "Hey, did you know that if you have two houses of a similar size, with a similar square footage, a house with three bedrooms costs a lot more than a house with two bedrooms, even if the square footage is the same?" Or, "Did you know that newly renovated homes have a 15% premium?" And this can help you make decisions such as, given a similar square footage, do you want to build a two-bedroom or a three-bedroom house in order to maximize value? Or, is it worth an investment to renovate a home in the hope that the renovation increases the price you can sell it for? So, these would be examples of data science projects, where the output of a data science project is a set of insights that can help you make business decisions, such as what type of house to build or whether to invest in renovation.
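As a minimal sketch of this kind of data science analysis, assuming the pandas library; the numbers are made up, and the point is that the output is an insight for a report rather than a piece of software that runs:

```python
import pandas as pd

homes = pd.DataFrame({
    "bedrooms":  [2, 3, 2, 3, 2, 3],
    "renovated": [0, 0, 1, 1, 0, 1],
    "price":     [300_000, 360_000, 340_000, 410_000, 310_000, 420_000],
})

# "At similar sizes, how much more do three-bedroom homes sell for?"
print(homes.groupby("bedrooms")["price"].mean())

# "What premium do newly renovated homes command?"
print(homes.groupby("renovated")["price"].mean())
```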

The boundaries between these two terms, machine learning and data science, are actually a little bit fuzzy, and these terms are not used consistently even in industry today.

What I'm giving here are maybe the most commonly used definitions of these terms, but you will not find universal adherence to these definitions. To formalize these two notions a bit more: machine learning is the field of study that gives computers the ability to learn without being explicitly programmed.

This is a definition by Arthur Samuel many decades ago. Arthur Samuel was one of the pioneers of machine learning, who was famous for building a checkers playing program.

It could play checkers even better than he himself, the inventor, could play the game. So, a machine learning project often results in a piece of software that runs, that outputs B given A.

In contrast, data science is the science of extracting knowledge and insights from data. So, the output of a data science project is often a slide deck, a PowerPoint presentation, that summarizes conclusions for executives to take business actions, or that summarizes conclusions for a product team to decide how to improve a website.

Let me give an example of machine learning versus data science in the online advertising industry. Today, the large ad platforms all have a piece of AI that quickly tells them which ad you are most likely to click on.

So, that's a machine learning system. This turns out to be an incredibly lucrative AI system that inputs information about you and about the ad, and outputs whether you will click on it or not.

These systems run 24/7. These are machine learning systems that drive a lot of revenue for these companies, and each is a piece of software that runs.

In contrast, I have also done data science projects in the online advertising industry. If analyzing the data tells you, for example, that the travel industry is not buying a lot of ads, but that if you send more salespeople to sell ads to travel companies you could convince them to use more advertising, then that would be an example of a data science project, and the data science conclusion results in the executives deciding to ask a sales team to spend more time reaching out to the travel industry.

So, even in one company, you may have different machine learning and data science projects, both of which can be incredibly valuable. You may have also heard of deep learning.

So, what is deep learning? Let's say you want to predict housing prices, you want to price houses. So, you will have an input that tells you the size of the house, number of bedrooms, number of bathrooms and whether it's newly renovated.

One of the most effective ways to price houses, given this input A, would be to feed it to this thing here in order to have it output the price. This big thing in the middle is called a neural network, and sometimes we also call it an artificial neural network.

That's to distinguish it from the neural network that is in your brain. So, the human brain is made up of neurons.

So, when we say artificial neural network, that's just to emphasize that this is not the biological brain, but a piece of software. What a neural network, or an artificial neural network, does is take this input A, which is all of these four things, and then output B, which is the estimated price of the house.

Now, in a later optional video this week, I'll show you more about what this artificial neural network really is. But all of human cognition is made up of neurons in your brain passing electrical impulses, passing little messages to each other.

When we draw a picture of an artificial neural network, there's a very loose analogy to the brain. These little circles are called artificial neurons, or just neurons for short.

They also pass numbers to each other. This big artificial neural network is just a big mathematical equation that tells it, given the inputs A, how to compute the price B.

In case it seems like there are a lot of details here, don't worry about it. We'll talk more about these details later.

But the key takeaways are that a neural network is a very effective technique for learning A to B or input-output mappings. Today, the terms neural network and deep learning are used almost interchangeably, they mean essentially the same thing.

Many decades ago, this type of software was called a neural network. But in recent years, we found that deep learning was just a much better-sounding brand, and so, for better or worse, that is the term that has taken off recently.

So, what do neural networks or artificial neural networks have to do with the brain? It turns out, almost nothing. Neural networks were originally inspired by the brain, but the details of how they work are almost completely unrelated to how biological brains work.

So, I'm very cautious today about making any analogies between artificial neural networks and the biological brain, even though there was some loose inspiration there.

So, AI has many different tools.

In this video, you learned about what are machine learning and data science, and also what is deep learning, and what's a neural network. You might also hear in the media other buzzwords like unsupervised learning, reinforcement learning, graphical models, planning, knowledge graph, and so on.

You don't need to know what all of these other terms mean; these are just other tools for getting computers to act intelligently. I'll try to give you a sense of what some of these terms mean in later videos as well.

But the most important tools that I hope you know about are machine learning and data science, as well as deep learning and neural networks, which are a very powerful way to do machine learning and, sometimes, data science. If we were to draw a Venn diagram showing how all these concepts fit together, this is what it may look like.

AI is this huge set of tools for making computers behave intelligently. Of AI, the biggest subset is probably the tools of machine learning, but AI does have other tools than machine learning, such as some of the buzzwords listed at the bottom.

The part of machine learning that's most important these days is neural networks or deep learning, which is a very powerful set of tools for carrying out supervised learning or A to B mappings as well as some other things. But there are also other machine learning tools that are not just deep learning tools.

So, how does data science fit into this picture? There is inconsistency in how the terminology is used. Some people will tell you data science is a subset of AI.

Some people will tell you AI is a subset of data science. So, it depends on who you ask.

But I would say that data science is maybe a cross-cutting subset of all of these tools, one that uses many tools from AI, machine learning, and deep learning, but has some other separate tools as well, and that solves a set of very important problems in driving business insights. In this video, you saw what machine learning is, what data science is, and what deep learning and neural networks are.

I hope this gives you a sense of the most common and important terminology used in AI, and that you can start thinking about how these things might apply to your company. Now, what does it mean for a company to be good at AI? Let's talk about that in the next video.

5 What makes an AI company

What makes a company good at AI? Perhaps even more importantly, what will it take for your company to become great at using AI? I previously led the Google Brain team and Baidu's AI group, which respectively helped Google and Baidu become great AI companies.

So, what can you do for your company? These are lessons I learned from watching the rise of the Internet that I think will be relevant to how all of us navigate the rise of AI. Let's take a look.

A lesson we learned from the rise of the Internet was this: take your favorite shopping mall. My wife and I sometimes shop at the Stanford Shopping Center. If you build a website for the shopping mall, and maybe sell things on the website, that by itself does not turn the shopping mall into an internet company.

In fact, a few years ago I was speaking with the CEO of a large retail company who said to me, "Hey Andrew, I have a website, I sell things on the website. Amazon has a website and sells things on its website. It's the same thing." But of course it wasn't; a shopping mall with a website isn't the same thing as a first-class internet company.

So, what is it that defines an internet company, if it isn't just whether or not you sell things on a website? I think an internet company is a company that does the things that the internet lets you do really well. For example, internet companies engage in pervasive A/B testing.

Meaning, we routinely put up two different versions of a website and see which one works better, because we can. So, we learn much faster.

Whereas in a traditional shopping mall, it's very difficult to have two shopping malls in two parallel universes, and you can maybe only change things around every quarter or every six months. Internet companies also tend to have very short iteration times.

You can ship a new product every week, or maybe even every day, because you can, whereas a shopping mall can be redesigned and re-architected maybe only every several months. Internet companies also tend to push decision-making down from the CEO to the engineers and to other specialized roles, such as the product managers.

This is in contrast to a traditional shopping mall, where maybe the CEO just makes all the key decisions, and then everyone does what the CEO says.

It turns out that traditional model doesn't work in the internet era, because only the engineers and other specialized roles like product managers know enough about the technology and the product and the users to make great decisions. So, these are some of the things that internet companies do in order to make sure they do the things that the internet lets you do really well.

This is a lesson we learned from the internet era. How about the AI era? I think that today, you can take any company and have it use a few neural networks or a few deep learning algorithms.

That by itself does not turn the company into an AI company. Instead, what makes a great AI company, sometimes an AI-first company, is this: are you doing the things that AI lets you do really well? For example, AI companies are very good at strategic data acquisition.

This is why many of the large consumer tech companies may have free products that do not monetize; they allow them to acquire data that they can monetize elsewhere. I have served on strategy teams where we would deliberately launch products that do not make any money, just for the sake of data acquisition.

Thinking through how to acquire data is a key part of what great AI companies do. AI companies also tend to have unified data warehouses.

If you have 50 different databases, or 50 different data warehouses, under the control of 50 different vice presidents, then it will be nearly impossible for an engineer to get the data into one place so that they can connect the dots and spot the patterns. So, many great AI companies have preemptively invested in bringing the data together into a single data warehouse, to increase the odds that the teams can connect the dots.

Subject of course to privacy guarantees and also to data regulations such as GDPR in Europe. AI companies are very good at spotting automation opportunities.

We're very good at saying, "Oh, let's insert a supervised learning algorithm, an A to B mapping, here, so that we don't have to have people do these tasks; instead, we can automate them." AI companies also have many new roles, such as the MLE or Machine Learning Engineer, and new ways of dividing up tasks among the different members of a team.

So, for a company to become good at AI means architecting the company to do the things that AI makes it possible to do really well. Now, for a company to become good at AI does require a process.

In fact, 10 years ago, Google and Baidu, as well as companies like Facebook and Microsoft that I was not a part of, were not great AI companies the way they are today. So, how can a company become good at AI? It turns out that becoming good at AI is not a mysterious, magical process.

Instead there is a systematic process through which many companies, almost any big company can become good at AI. This is the five-step AI transformation playbook that I recommend to companies that want to become effective at using AI.

I'll give a brief overview of the playbook here and then go into detail in a later week. Step one is to execute pilot projects to gain momentum.

So, just do a few small projects to get a better sense of what AI can or cannot do, and to get a better sense of what doing an AI project feels like. This you can do in-house, or you can also do it with an outsourced team.

But eventually, you then need to do step two, which is building an in-house AI team and providing broad AI training, not just to the engineers but also to the managers, division leaders, and executives, in how they think about AI. After doing this, or as you're doing this, you'll have a better sense of what AI is, and it is then important for many companies to develop an AI strategy.

Finally, align internal and external communications so that all your stakeholders, from employees to customers to investors, are aligned with how your company is navigating the rise of AI. AI has created tremendous value in the software industry and will continue to do so.

It will also create tremendous value outside the software industry. If you can help your company become good at AI, I hope you can play a leading role in creating a lot of this value.

In this video, you saw what it is that makes a company a good AI company, and also, briefly, the AI Transformation Playbook, which I'll go into in much greater detail in a later week, as a roadmap for helping companies become great at AI. If you're interested, there is also an AI Transformation Playbook published online that goes into these five steps in greater detail, and you'll see more of this in the later weeks as well.

Now, one of the challenges of doing AI projects, such as the pilot projects in step one, is understanding what AI can and cannot do. In the next video, I want to show you some examples of what AI can and cannot do, to help you better select AI projects that may be effective for your company.

Let's go on to the next video.

6 What machine learning can and cannot do

In this video and the next video, I hope to help you develop intuition about what AI can and cannot do.

In practice, before I commit to a specific AI project, I'll usually have either myself or my engineers do technical diligence on the project to make sure that it is feasible. This means looking at the data, looking at the input A and the output B, and just thinking through whether this is something AI can really do.

What I've seen, unfortunately, is that some CEOs can have an inflated expectation of AI and can ask engineers to do things that today's AI just cannot do. One of the challenges is that the media, as well as the academic literature, tends to report only on positive results or success stories using AI, and when people see a string of success stories and no failure stories, they sometimes think AI can do everything.

Unfortunately, that's just not true. So, what I want to do in this and in the next video is to show you a few examples of what today's AI technology can do, but also what it cannot do, and I hope that this will help you hone your intuition about what might be more or less promising projects to select for your company.

Previously, you saw this list of AI applications from spam filtering to speech recognition, to machine translation, and so on. One imperfect rule of thumb you can use to decide what supervised learning may or may not be able to do is that, pretty much anything you could do with a second of thought, we can probably now or soon automate using supervised learning, using this input-output mapping.

So for example, in order to determine the position of other cars, that's something that you can do with less than a second. In order to tell if a phone is scratched, you can look at it and you can tell in less than a second.

In order to understand or at least transcribe what was said, it doesn't take that many seconds of thought. While this is an imperfect rule of thumb, it maybe gives you a way to quickly think of some examples of tasks that AI systems can do.

Whereas in contrast, something that AI today cannot do would be to analyze a market and write a 50-page report. A human cannot write a 50-page market analysis report in a second, and, at least as far as I know, no team in the world today knows how to get an AI system to do market research and write an extended market report either.

I've found that one of the best ways to hone your intuition is to look at concrete examples. So, let's take a look at a specific example relating to customer support automation.

Let's say you run a website that sells things, so an e-commerce company, and you have a customer support division that gets an email like this: "The toy arrived two days late, so I wasn't able to give it to my niece for her birthday. Can I return it?" If what you want is an AI system that looks at this and decides that this is a refund request, so let me route it to my refund department, then I would say you have a good chance of building an AI system to do that.

The AI system would take as input the customer text, what the customer emails you, and it would output: is this a refund request, or is this a shipping problem, or is it some other request, in order to route this email to the most appropriate part of your customer support center. So, the input A is the text and the output B is one of these three outcomes: is it a refund request, a shipping query, or a different request?
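Here is a minimal sketch of such a routing system, assuming the scikit-learn library; a real system would need thousands of labeled emails, and the three examples below are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Input A: the customer's email text. Output B: which team should handle it.
emails = [
    "The toy arrived two days late, can I return it?",
    "My package still has not shipped, where is it?",
    "How do I change the email address on my account?",
]
routes = ["refund", "shipping", "other"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(emails, routes)

# Route a new incoming email to one of the three departments.
print(router.predict(["The item was late, I would like my money back"]))
```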

So, this is something that AI today can do. Here's something that AI today cannot do, which is if you want the AI to input an email and automatically generate a response like, "Oh, sorry to hear that.

I hope your niece had a good birthday. Yes, we can help with..." and so on. For an AI to output a complicated piece of text like this is very difficult by today's standards of AI, and in fact, to even empathize about the birthday of your niece, that is very difficult to do for every single possible type of email you might receive.

Now, what would happen if you were to use a machine learning tool like a deep learning algorithm to try to do this anyway? Let's say you tried to get an AI system to input the user's email, and output a two to three paragraph, empathetic and appropriate response.

Let's say that you have a modest-sized dataset, like 1,000 examples of user emails and appropriate responses. It turns out that if you run an AI system on this type of data, on a small dataset like 1,000 examples, this may be the performance you get: if a user emails, "My box was damaged," it'll say, "Thank you for your email," and for "Where do I write a review?", "Thank you for your email," and for "What's the return policy?", "Thank you for your email." The problem with building this type of AI is that with just 1,000 examples, there's just not enough data for an AI system to learn how to write two to three paragraph, appropriate and empathetic responses.

So, you may end up just generating the same very simple response like, "Thank you for your email," no matter what the customer is sending you. Another thing that could go wrong, another way for an AI system to fail is if it generates gibberish such as: "When is my box arriving," and it says, "Thank, yes, now your," gibberish.

This is a hard enough problem that even with 10,000 or 100,000 email examples, I don't know if that would be enough data for an AI system to do this well. The rules for what AI can and cannot do are not hard and fast, and I usually end up having to ask engineering teams to sometimes spend a few weeks doing deep technical diligence to decide for myself if a project is feasible.

But to hone your intuitions to help you quickly filter feasible or not feasible projects, here are a couple of other rules of thumb about what makes a machine learning problem easier or more likely to be feasible. One, learning a simple concept is more likely to be feasible.

Well, what does a simple concept mean? There's no formal definition of that, but if it is something that takes you less than a second of mental thought, or a very small number of seconds of mental thought, to come to a conclusion, then that leans toward it being a simple concept. So, looking out the window of a self-driving car to spot the other cars would be a relatively simple concept.

Whereas how to write an empathetic response to a complicated user complaint, that would be less of a simple concept. Second, a machine learning problem is more likely to be feasible if you have lots of data available.

Here, data means both the input A and the output B that you want the AI system to have in your A to B, input to output, mapping. So for example, in the customer support application, the input A would be examples of emails from customers, and B could be a label for each of these customer emails as to whether it's a refund request, a shipping query, or some other problem, one of three outcomes.

Then if you have thousands of emails with both A and B, then the odds of you building a machine learning system to do that would be pretty good. AI is the new electricity and it's transforming every industry, but it's also not magic and it can't do everything under the sun.

I hope that this video started to help you hone your intuitions about what it can and cannot do, and increase the odds of your selecting feasible and valuable projects for maybe your teams to try working on. In order to help you continue developing your intuition, I would like to show you more examples of what AI can and cannot do.

Let's go into the next video.

7 More examples of what AI can or cannot do

One of the challenges of becoming good at recognizing what AI can and cannot do is that it does take seeing a few examples of concrete successes and failures of AI. If you work on an average of say, one new AI project a year, then to see three examples would take you three years of work experience and that's just a long time.

What I hope to do, both in the previous video and in this video is to quickly show you a few examples of AI successes and failures, or what it can and cannot do so that in a much shorter time, you can see multiple concrete examples to help hone your intuition and select valuable projects. So, let's take a look at a few more examples.

Let's say you're building a self-driving car. Here's something that AI can do pretty well, which is to take a picture of what's in front of your car, maybe just using a camera, maybe using other sensors as well such as radar or lidar, and then to figure out what the positions of the other cars are, or where the other cars are.

So, this would be an AI where the input A, is a picture of what's in front of your car, or maybe both a picture as well as radar and other sensor readings. The output B is, where are the other cars? Today, the self-driving car industry has figured out how to collect enough data and has pretty good algorithms for doing this reasonably well.

So, that's what the AI today can do. Here's an example of something that today's AI cannot do, or at least would be very difficult using today's AI, which is to input a picture and output the intention of whatever the human is trying to gesture at your car.

So, here's a construction worker holding out a hand to ask your car to stop. Here's a hitchhiker trying to wave a car over.

Here is a bicyclist raising their left hand to indicate that they want to turn left. So, if you were to try to build a system to learn the A to B mapping, where the input A is a short video of a human gesturing at your car, and the output B is what's the intention, or what does this person want, that today is very difficult to do.

Part of the problem is that the number of ways people gesture at you is very, very large.

Imagine all the hand gestures someone could conceivably use to ask you to slow down, or go, or stop. The number of ways that people could gesture at you is just very, very large. So, it's difficult to collect enough data, from enough thousands or tens of thousands of different people gesturing at you in all of these different ways, to capture the richness of human gestures.

So, learning from a video what this person wants is actually a somewhat complicated concept. In fact, even people sometimes have a hard time figuring out what someone waving at your car wants.

Then second, because this is a safety-critical application, you would want an AI that is extremely accurate in terms of figuring out, does a construction worker want you to stop, or does he or she want you to go? And that makes it harder for an AI system as well. So, today, if you collect just, say, 10,000 pictures of other cars, many teams would be able to build an AI system that at least has a basic capability for detecting other cars.

In contrast, even if you collect pictures or videos of 10,000 people, it's quite hard to track down 10,000 people waving at your car. And even with that dataset, I think it's quite hard today to build an AI system that recognizes human intentions from their gestures at the very high level of accuracy needed in order to drive safely around these people.

So, that's why today, many self-driving car teams have some components for detecting other cars, and they do rely on that technology to drive safely. But very few self-driving car teams are trying to count on the AI system to recognize a huge diversity of human gestures and counting just on that to drive safely around people.

Let's look at one more example. Say you want to build an AI system to look at X-ray images and diagnose pneumonia.

So, all of these are chest X-rays. So, the input A could be the X-ray image and the output B can be the diagnosis.

Does this patient have pneumonia or not? So, that's something that AI can do. Something that AI cannot do would be to diagnose pneumonia from 10 images of a medical textbook chapter explaining pneumonia.

A human can look at a small set of images, maybe just a few dozen images, read a few paragraphs from a medical textbook, and start to get a sense. But we actually don't know, given a medical textbook, what is A and what is B, or how to really pose this as an AI problem, and we don't know how to write a piece of software to solve it, if all you have is just 10 images and a few paragraphs of text that explain what pneumonia in a chest X-ray looks like.

Whereas a young medical doctor might learn quite well by reading a medical textbook and looking at maybe just dozens of images, an AI system isn't really able to do that today.

To summarize, here are some of the strengths and weaknesses of machine learning. Machine learning tends to work well when you're trying to learn a simple concept, such as something that you could do with less than a second of mental thought, and when there's lots of data available.

Machine learning tends to work poorly when you're trying to learn a complex concept from small amounts of data. A second underappreciated weakness of AI is that it tends to do poorly when it's asked to perform on new types of data that's different than the data it has seen in your data set.

Let me explain with an example. Say you built a supervised learning system that uses A to B to learn to diagnose pneumonia from images like these.

These are pretty high-quality chest X-ray images. But now, let's say you take this AI system and apply it at a different hospital or a different medical center, where maybe the X-ray technician somehow, strangely, had the patients always lie at an angle, or where sometimes there are these defects.

I'm not sure if you can see them, but there are these other objects lying on top of the patients in the image.

If the AI system has learned from data like that on your left, maybe taken from a high-quality medical center, and you take this AI system and apply it to a different medical center that generates images like those on the right, then its performance will be quite poor. A good AI team would be able to ameliorate, or reduce, some of these problems, but doing this is not that easy.

This is one of the ways in which AI is actually much weaker than humans. If a human has learned from images like those on the left, they're much more likely to be able to adapt to images like those on the right, as they can figure out that the patient is just lying at an angle.

But an AI system can be much less robust than human doctors in generalizing, or figuring out what to do with, new types of data like these. I hope these examples are helping you hone your intuitions about what AI can and cannot do.

In case the boundary between what it can or cannot do still seems fuzzy to you, don't worry. It is completely normal, completely okay.

In fact even today, I still can't look at a project and immediately tell is something that's feasible or not. I often still need weeks or small numbers of weeks of technical diligence before forming strong conviction about whether something is feasible or not.

But I hope that these examples can at least help you start imagining some things in your company that might be feasible and might be worth exploring more. The next two videos after this are optional and are a non-technical description of what are neural networks and what is deep learning.

Please feel free to watch those. Then next week, we'll go much more deeply into the process of what building an AI project would look like.

I look forward to seeing you next week.

8 Non-technical Part I

The terms deep learning and neural network are used almost interchangeably in AI.

And even though they're great for machine learning, there's also been a bit of hype and bit of mystique about them. This video will demystify deep learning, so that you have a sense of what deep learning and neural networks really are.

Let's use an example from demand prediction. Let's say you run a website that sells t-shirts.

And you want to know, based on how you price the t-shirts, how many units you expect to sell, how many t-shirts you expect to sell. You might then create a dataset like this, where the higher the price of the t-shirt, the lower the demand.

So you might fit a straight line to this data, showing that as the price goes up, the demand goes down. Now demand can never go below zero, so maybe you say that the demand will flatten out at zero, and beyond a certain point you expect pretty much no one to buy any t-shirts.

It turns out this blue line is maybe the simplest possible neural network. You have as input the price, A, and you want it to output the estimated demand, B.

So the way you would draw this as a neural network is that the price will be input to this little round thing there, and this little round thing outputs the estimated demand. In the terminology of AI, this little round thing here is called a neuron, or sometimes it's called an artificial neuron, and all it does is compute this blue curve that I've drawn here on the left.
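As a minimal sketch, that single neuron could be written as a tiny function like this; the weight and bias values are made up for illustration:

```python
def demand_neuron(price, w=-2.0, b=100.0):
    # A straight line in price, clipped at zero so demand never goes negative.
    return max(0.0, w * price + b)

print(demand_neuron(10))   # low price -> high estimated demand (80.0 here)
print(demand_neuron(60))   # high price -> estimated demand flattens out at 0.0
```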

This is maybe the simplest possible neural network, with a single artificial neuron that just inputs the price and outputs the estimated demand. If you think of this orange circle, this artificial neuron, as a little Lego brick, then all a neural network is, is you take a lot of these Lego bricks and stack them on top of each other until you get a big tower, a big network of these neurons.

Let's look at a more complex example. Suppose that instead of knowing only the price of the t-shirts, you also have the shipping costs that the customers will have to pay to get the t-shirts.

Maybe you spend more or less on marketing in a given week, and you can also make the t-shirt out of a thick, heavy, expensive cotton or a much cheaper, more lightweight material. These are some of the factors that you think will affect the demand for your t-shirts.

Let's see what a more complex neural network might look like. You know that your consumers care a lot about affordability.

So let's say you have one neuron, and let me draw this one in blue, whose job it is to estimate the affordability of the t-shirts. And so affordability is mainly a function of the price of the shirts and of the shipping cost.

A second thing affecting demand for your t-shirts, though, is awareness. How much are consumers aware that you're selling this t-shirt? The main thing that affects awareness is going to be your marketing.

So let me draw here a second artificial neuron that inputs your marketing budget, how much you spend on marketing, and outputs how aware consumers are of your t-shirt. Finally, the perceived quality of your product will also affect demand, and perceived quality would be affected by marketing.

The marketing tries to convince people this is a high quality t-shirt, and sometimes the price of something also affects perceived quality. So I'm going to draw here a third artificial neuron that inputs price, marketing and material, and tries to estimate the perceived quality of your t-shirts.

Finally, now that the earlier neurons, these three blue neurons, have figured out the affordability, the consumer awareness, and the perceived quality, you can then have one more neuron over here that takes as input these three factors and outputs the estimated demand. So this is a neural network, and its job is to learn to map from these four inputs, that's the input A, to the output B, to demand.

So it learns this input-output, or A-to-B, mapping. This is a fairly small neural network with just four artificial neurons.
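To make the picture concrete, here is a tiny sketch (again not from the course; every weight below is invented purely for illustration) of how those four neurons could be written as simple functions feeding into one another:

```python
def relu(x):
    # Clip at zero, like the flattened demand curve earlier.
    return max(0.0, x)

# The three "blue" neurons; all coefficients are made up for illustration.
def affordability(price, shipping):
    return relu(10.0 - 0.5 * (price + shipping))

def awareness(marketing):
    return relu(0.2 * marketing)

def perceived_quality(price, marketing, material):
    return relu(0.1 * price + 0.05 * marketing + 2.0 * material)

def estimated_demand(price, shipping, marketing, material):
    a = affordability(price, shipping)
    w = awareness(marketing)
    q = perceived_quality(price, marketing, material)
    # The final neuron combines the three intermediate values into the demand estimate.
    return relu(3.0 * a + 2.0 * w + 1.5 * q)

print(estimated_demand(price=12.0, shipping=3.0, marketing=50.0, material=1.0))
```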

In practice, neural networks used today are much larger, with easily thousands, tens of thousands, or even far more neurons. Now, there's just one final detail of this description that I want to clean up, which is that in the way I've described the neural network, it was as if you had to figure out that the key factors are affordability, awareness and perceived quality.

One of the wonderful things about using neural networks is that to train a neural network, in other words, to build a machine learning system using a neural network, all you have to do is give it the input A and the output B. And it figures out all of the things in the middle by itself.

So to build a neural network, what you would do is feed it lots of data for the input A, and have a neural network that just looks like this, with a few blue neurons feeding into a yellow output neuron. And then you have to give it data with the demand B as well.

And it's the software's job to figure out what these blue neurons should be computing, so that it can completely automatically learn the most accurate possible function mapping from the input A to the output B. And it turns out that if you give this enough data and train a neural network that is big enough, it can do an incredibly good job mapping from inputs A to outputs B.
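One minimal sketch of this "just give it A and B" idea, using scikit-learn's `MLPRegressor` on a handful of made-up rows (far too little data to learn anything meaningful; the point is only the shape of the workflow). The small hidden layer plays the role of the blue neurons, and nowhere do you tell it to compute affordability or awareness:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Made-up training data: each row is [price, shipping, marketing, material],
# and y is the demand observed for that setting.
X = np.array([
    [10.0, 2.0, 50.0, 1.0],
    [25.0, 5.0, 20.0, 0.0],
    [15.0, 3.0, 80.0, 1.0],
    [30.0, 6.0, 10.0, 0.0],
])
y = np.array([120.0, 30.0, 150.0, 10.0])

# A tiny network with one hidden layer of three neurons (the "blue" neurons).
model = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
model.fit(X, y)                                   # give it inputs A and outputs B ...
print(model.predict([[12.0, 2.5, 60.0, 1.0]]))    # ... and it learns an A-to-B mapping
```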

So that's what a neural network is: a group of artificial neurons, each of which computes a relatively simple function. But when you stack enough of them together like Lego bricks, they can compute incredibly complicated functions that give you very accurate mappings from the input A to the output B.

Now, in this video you saw an example of neural networks applied to demand prediction. Let's go on to the next video to see a more complex example of neural networks applied to face recognition.

9 Non-technical Part II

In the last video, you saw how a neural network can be applied to demand prediction, but how can a neural network look at a picture and figure out what's in the picture? Or listen to an audio clip and understand what is said in it? Let's take a look at a more complex example of applying a neural network to face recognition.

Say you want to build a system to recognize people from pictures. How can a piece of software look at this picture and figure out the identity of the person in it? Let's zoom in to a little square like that to better understand how a computer sees pictures. Where you and I see a human eye, a computer instead sees this grid of pixel brightness values that tells it, for each of the pixels in the image, how bright that pixel is.

If it were a black and white or grayscale image, then each pixel would correspond to a single number telling you how bright that pixel is. If it's a color image, then each pixel will actually have three numbers, corresponding to how bright the red, green, and blue elements of that pixel are.
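As a concrete, made-up example (not from the course), here is how a tiny image might be stored as numbers in Python with NumPy: one brightness value per pixel for grayscale, and three values per pixel for color.

```python
import numpy as np

# A made-up 2 x 2 grayscale image: one brightness number (0-255) per pixel.
gray = np.array([[  0, 128],
                 [200, 255]], dtype=np.uint8)

# A made-up 2 x 2 color image: three numbers (red, green, blue) per pixel.
color = np.array([[[  0,   0,   0], [128, 128, 128]],
                  [[255,   0,   0], [255, 255, 255]]], dtype=np.uint8)

print(gray.shape)   # (2, 2)    -> 4 numbers in total
print(color.shape)  # (2, 2, 3) -> 12 numbers in total
```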

So the neural network's job is to take as input a lot of numbers like these and tell you the name of the person in the picture.

In the last video, you saw how a neural network can take as input four numbers corresponding to the price, shipping costs, amount of marketing, and cloth material of a t-shirt and output demand.

In this example, the neural network just has to input a lot more numbers corresponding to all of the pixel brightness values of this picture. If the resolution of this picture is 1000 pixels by 1000 pixels, then that's a million pixels.
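A quick check of that arithmetic in Python (the 1000 x 1000 image size is the one assumed in this example):

```python
import numpy as np

pixels = 1000 * 1000                                      # one million pixels
color_image = np.zeros((1000, 1000, 3), dtype=np.uint8)   # a made-up color image

print(pixels)                        # 1000000 numbers if it's a grayscale image
print(color_image.reshape(-1).size)  # 3000000 numbers if it's a color image
```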

So, if it were a black and white or grayscale image, this neural network would take as input a million numbers corresponding to the brightness of all one million pixels in this image, or if it were a color image, it would take as input three million numbers corresponding to the red, green, and blue values of each of these one million pixels. Similar to before, you will have many, many of these artificial neurons computing various values, and it's not your job to figure out what these neurons should compute.

The neural network will figure it out by itself. Typically, when you give it an image, the neurons in the earlier parts of the neural network will learn to detect edges in pictures, and then a bit later learn to detect parts of objects.

So they learn to detect eyes and noses and the shape of cheeks and the shape of mouths, and then the later neurons, further to the right, will learn to detect different shapes of faces, and it will finally put all this together to output the identity of the person in the image. Again, part of the magic of neural networks is that you don't really need to worry about what it is doing in the middle.
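Concretely, "not worrying about the middle" might look something like this minimal, purely illustrative sketch: give a small network made-up images A with made-up identities B and let it fit whatever it likes in between. Scikit-learn's `MLPClassifier` stands in here for convenience; a real face recognizer would be far larger and typically convolutional, and these random pixels carry no real signal.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Made-up data: 20 tiny 10 x 10 grayscale "face" images and a made-up identity for each.
images = rng.integers(0, 256, size=(20, 10, 10))
A = images.reshape(20, -1)              # each picture becomes one row of 100 pixel values
B = np.array(["alice", "bob"] * 10)     # the "correct identity" label for each picture

# A couple of hidden layers; what each layer computes is learned, not hand-designed.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(A, B)
print(model.predict(A[:2]))             # predicted identities for the first two pictures
```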

All you need to do is give it a lot of data of pictures like this, A, as well as the correct identity, B, and the learning algorithm will figure out by itself what each of these neurons in the middle should be computing.

Congratulations on finishing all the videos for this week.

You now know how machine learning and data science work. I look forward to seeing you in next week's videos as well, where you'll learn how to build your own machine learning or data science project.

See you next week.

Conclusion

This course is really good and very helpful for understanding AI!
