#18 Special Episode – Top 10 Get Your AI On! Moments – 2020 Season

This is the last episode of 2020 and it’s a mix of moments collected from my guest speakers throughout the year, in no particular order: Markus Lampinen, Andreea Georgescu, Sebastien Provencher, Paul Ortchanian, Titus Capilnean, Cristian Olarasu, Terence Tse, Rudradeb Mitra, Andraz Bole, Marina Pavlovic Rivas. Enjoy and see you with a new season next year!


Markus Lampinen: In the typical data world, you start from the assumption of one massive database of millions of data points across lots of different people. So what if you turn that the other way around, and start with holistic and rich data points on an individual level? And how can you essentially model based on that? And you're absolutely right to tear into this problem because it's not trivial. And that means that, for something that's as compute- and data-intensive as machine learning, you have to kind of break it down. So one of the things we… Like, you know, at the end of the day, I try to think about it from a big picture and then also from a small picture. And I think from the small-picture point of view, you can do a lot with very basic modeling – very, very basic, 80/20-rule type of data – and actually create quite a lot of good indicators for merchants. So this is one of the things that's really important for us: if somebody uses this system, they have to get value right away, not in 90 days or 180 days. It has to be right away – when the user lands on the site, you know a little bit more about the user, and then you can provide more value for them. And for the user, it has to be one click. It can't be any more difficult than that. So you can start off with very basic things, like you take 10 data points – if you're able to validate and cross-check 10 data points, like age bracket, income bracket, demographic, where they live, some of the interesting groups and events and so on and so forth that they engage with – I mean, you already know quite a lot.
And if this is data that you can trust – for example, it's aggregate data over the past year: this person has been at 17 events that fall in this category, this person has spent X amount of dollars in this category, yadda, yadda, not exposing any raw data, not exposing anything sensitive, but just this type of behavioral data – that's already quite valuable. But you're absolutely right that training a predictive and statistically significant model does require various different steps. For us, a lot of the things that we start with are just very primitive access to holistic data. And then from there, I think you can do a lot of different, more sophisticated things. But this is also just thinking back to what we did in, for example, the FinTech company I ran, where we worked with a lot of anti-money-laundering and compliance systems. And that's something where, if you have to work with anti-fraud or anti-money-laundering and so on, then at that stage, you work in rules and thresholds – you work with statistical significance, you work with pattern recognition and all these different things. So, I think that's absolutely important. But at the same time, just thinking about where the merchants are today, and how we can help them take one step in the right direction, and then one step beyond there, and one step beyond there – the first thing is essentially that if we can give them access to better data so that they can provide better value, that's already a game-changer. And then from there, they can optimize and they can train, and so on. But this is where the open collaboration comes in as well: for example, Kimmo, our CTO, has run three data companies and studied neural networks for his Ph.D. in the late '90s.
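The rules-and-thresholds approach Markus describes for anti-fraud and anti-money-laundering work can be sketched very simply. The rule names and limits below are entirely hypothetical, purely for illustration:

```python
# A minimal rules-and-thresholds sketch of the kind of anti-money-laundering
# screening described above. All rule names and limits here are invented.

def flag_transaction(tx: dict) -> list[str]:
    """Return the list of rules a transaction trips, if any."""
    flags = []
    if tx["amount"] > 10_000:                  # large single transfer
        flags.append("large-amount")
    if tx.get("country") in {"XX", "YY"}:      # placeholder high-risk jurisdictions
        flags.append("high-risk-jurisdiction")
    if tx.get("daily_count", 0) > 20:          # unusual transaction velocity
        flags.append("high-velocity")
    return flags

tx = {"amount": 12_500, "country": "XX", "daily_count": 3}
print(flag_transaction(tx))  # ['large-amount', 'high-risk-jurisdiction']
```

Real compliance systems layer statistical and pattern-recognition checks on top of rules like these, but the rule layer itself is this mechanical.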
So these are all things that we're also grappling with internally: if you start with the individual, and you have lots of different individuals – millions of individuals – how do you do that, instead of essentially starting from one database with millions of data points? I think Google has also been very visionary in this sector. I mean, of course, they have a lot of insight into the trends, and they have a lot of different projects in the works. They also have lots of open-source libraries, and so on and so forth. But that's actually a really, really cool and very specific point about federated learning. That's something that we're also looking into. And I think that there are a lot of these types of initiatives that, as we get into the new era of things, will just become more and more valuable – because, for example, third-party cookies being deprecated is not a small thing; that's gonna be quite a big shift.
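Federated learning, which Markus touches on, is one way to train across millions of individuals without pooling their raw data into one database: each client computes an update on its own data locally, and only model weights are shared and averaged. A toy sketch of one federated-averaging (FedAvg) round – the linear-regression setup and all numbers are illustrative, not anyone's actual system:

```python
import numpy as np

# Toy federated averaging: each client takes a gradient step on its own
# private data; only the updated weights leave the device, never the data.

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w_global, clients):
    """Average locally updated weights, weighted by each client's data size."""
    updates = [(local_step(w_global, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(5)]
w = np.zeros(3)
for _ in range(100):
    w = fedavg_round(w, clients)  # the "global" model improves each round
```

Production systems (e.g., TensorFlow Federated) add client sampling, secure aggregation, and differential privacy on top of this basic loop.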

Ciprian Borodescu: So, I did my homework – well, sort of, but I did read up on the newsvendor problem. Essentially, what I want to understand is whether the uncertain demand also covers times such as the ones we're going through right now, or whether this newsvendor model is only appropriate for slight variations in demand, not outliers, let's say.

Andreea Georgescu: So that's actually a very good observation about all of these classical operations research models. You know, in the old days, when there wasn't really a lot of data available, most of these models that people worked with started with, "Oh, the demand is this." And it's usually just a random variable, a distribution that is given to you. Now, obviously, in real life, no one gives you that demand distribution for free. So actually, a big part of how you would use these models in practice is figuring out what the inputs are to these models. And estimating demand is definitely one of the first things that people need to deal with – or estimating, from real data, whatever other inputs you're using in your models. So, with regards to your question, I think such a model will be useful these days as well, if you're using the right demand – which you would have to estimate for these times that you call outliers, as opposed to just using the same demand you would use in normal times. I actually think the bigger difference in these times compared to usual times is the trade-offs. I think in these times, as I was alluding to before, what changes is that you really don't want to run out of stock. And this is actually common to retailers in general: the cost of not meeting demand usually outweighs the cost of ordering too much by a lot. But in these times, I imagine it's even more unbalanced. Like, you really don't want to lose demand on some items, the essential items; and the other items, you probably just don't even carry anymore. So this trade-off between how much you want to satisfy demand and the cost of ordering too much, in this model in particular, is really summarized by the service level you want to guarantee. So if you want to meet 95% of the demand, then you order a given amount; if you want to satisfy 99% of the demand, you order more.
And in practice, this service level is really set by managers based on intuition about the business, but in a more rigorous formulation it's given by the cost of not meeting demand versus the cost of ordering too much.
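In the standard newsvendor formulation, the trade-off Andreea describes has a closed form: order the critical-fractile quantile of the demand distribution, where the fractile is the underage cost divided by underage plus overage cost. A sketch assuming normally distributed demand, with made-up numbers:

```python
from statistics import NormalDist

# Newsvendor critical fractile: order Q* = F^-1(c_u / (c_u + c_o)), where
# c_u is the per-unit cost of not meeting demand (underage) and c_o is the
# per-unit cost of ordering too much (overage).

def newsvendor_order(mean, std, underage_cost, overage_cost):
    service_level = underage_cost / (underage_cost + overage_cost)
    return NormalDist(mean, std).inv_cdf(service_level)

# Illustrative numbers: demand ~ N(100, 20); losing a sale costs 9, an excess unit costs 1.
q = newsvendor_order(100, 20, underage_cost=9, overage_cost=1)  # ~125.6 units
```

With a 9:1 underage-to-overage cost ratio, the model orders at the 90% service level, well above mean demand – matching the point that retailers would rather over-order than run out, and in a crisis the ratio (and hence the order) only goes up.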

Ciprian Borodescu: We already discussed the stage of the company when hiring a product manager makes sense. But when does product operations become a bottleneck?

Sebastien Provencher: So I think the first thing is to define what product operations is. Product operations is a support team – you don't necessarily manage product managers or product owners, but you come in with two big mandates. The first one is making sure that the product team and the product management team are well equipped to do their job. Do they have the right tools? Do they have the right processes in place? Do they have everything they need to be successful, to do their work in a frictionless way? So that's the first thing – it's very tooling-oriented and process-oriented. The second portion is more of a coaching and mentoring aspect, where you share best practices. So at Element AI, I created the product management guild so that we could meet every week and share learnings. And so, it's kind of a support role to a Head of Product or a VP of Product who manages these people, and you come in to help the VP of Product and the product management team in a horizontal fashion. Typically, product operations will only happen once you start to have a very complex product organization. So, you have a product portfolio – let's say maybe more than three or five products – and a VP of Product who is really looking at the roadmap and making sure everything's orchestrated, and that person might not have as much time to worry about processes and tooling, and also coaching and mentoring – so, obviously, there's coaching and mentoring happening on the executive side as well – but it also helps to create a horizontal layer of help there.

Ciprian Borodescu: If you take this next question out of context, it might sound really funny. What is SOAP? Why should we know about it? And where can we get more of it? And yes, SOAP is not the thing you use to wash your hands with. It’s actually a framework that Paul created. Tell us more about it, Paul.

Paul Ortchanian: I mean, it's a tongue-in-cheek joke. Our company's called Bain Public, which in French translates to 'public bath'. We always feel that companies need to apply hygiene to their roadmaps. So, you know, we often joke that 'it's time to clean your product roadmap'. We were playing with the idea of cleaning and we said, "Well, if you want to give your product the care it deserves, then you need to use soap." And so, we created this methodology, a 12-step methodology. It's nothing clever – it's basically an aggregation of a number of things that we do as product managers, all in one, that allows companies to go from 'we have no product roadmap' to 'we have a product roadmap'. And we called this framework SOAP because it's all about roadmap prioritization and product strategy. So the S stands for strategy, right? We used to be able to tell you what SOAP stands for, but I've completely forgotten at this point, because I think we just started using it as what it is, really. It's a way to start fresh – a way to basically use soap to remove the dirt and build the right product.

Ciprian Borodescu: What do you think – or maybe you’ve experienced it yourself – is the most surprising thing for a marketer when coming in contact with an AI project or product for the very first time?

Titus Capilnean: It's not that complicated. I think that's the most surprising find, because if you sit down and talk to someone who understands AI really well, they will be able to explain it to you in a way that makes sense for a non-technical person. They will tell you in simple analogies what AI does and how it works, and how that is very similar, to a certain extent, to how you as a human being make decisions, how your brain works, and how you process information – looking at new information through the lens of your previous experience and trying to categorize it, in most cases, or build new things, in other cases, depending on what kind of AI you're talking about. But it's not too dissimilar from other parts of life that we're very familiar with, which we do every single day: when we show up to work, or we talk to people, or we choose to go to a party or not. This is actually an example that one of our senior AI people talks about all the time. The decisions that come into play when you're deciding whether to go to a friend's party or not are actually a combination of sub-decisions and your previous experiences: the group of people that will be there, whether you're feeling well that day, whether that last interaction with your friend was a good one, whether you're looking to do something there specifically, whether the train ride or car ride is too far, or whether you have other plans that are likely to yield better rewards competing with that. So, that was a very interesting example, and I think that's the kind of example that helps non-AI people understand what AI does. It's not a black box, it's not magic.

Ciprian Borodescu: And I know that you've been involved with a lot of product teams over the years. What are some of the most important roles you believe such a team should consist of? Especially if it's an AI company at its core. And maybe, since you've been a founder and an entrepreneur, talk a bit about these themes relative to the stage of the company – from startup to scale-up to maybe an enterprise the size of Nike.

Cristian Olarasu: So, okay. Let's first talk about the different things. I think it's useful to talk about the maturity of the technology and product organizations in those companies. Let's say at one end you'll have traditional companies and at the other end, you'll have the Amazons of the world. And in between, you'll have things like Walmart or Target. Now, depending on where you are on this scale, you're going to approach problems related to, let's call it, data, automation, and AI research differently. Because most probably, it won't be core to your activity – or maybe it will. So the chances of the approach being very different across the three buckets are pretty high. Now, assuming that the AI department is not really set up yet, in my experience it's a very different approach from traditional product management on how to build those products. Because think about it: instead of going after features and wireframes and customer discussions, you're building things where a big part of the scope is collecting data and getting more input for your system. And at the same time, internally, inside the organization, it's very hard. Everything around AI is expensive – from hiring, to educating everyone, to getting expertise on board, to iterating, and teaching the organization that there's no big launch moment; it's an optimization problem, and you have to work for it and you have to change your mindset as an organization. So that is built gradually, in my experience. The best way to build it is to start small, but with something that has a pretty large business impact and that's also feasible. And even for that, though, you have to bring a lot of people together inside the organization: you need data science, you need data engineering – because most of the work is in data engineering before people can mingle with that data.
You need your researchers; in some cases, you need design involved; you need product teams involved; you need the domain experts that will educate you on the business feasibility; you will need the technical experts that will educate you on the technical feasibility; and so on and so forth. And then you'll have to pitch it to executives or senior leaders, and you kind of have to link it to the AI strategy if the company has one – or you have to build the strategy at the same time, so your project is successful. I think it varies a lot. But to summarize: in most organizations, it requires a lot of executive-leadership education. It's almost impossible to start with a Big Bang, so you have to figure out a way to start small, but pick something that can either be scaled or has a pretty big business value – otherwise, no one will listen to you or your project or your strategy. And the last one: the mental model of product management kind of flips a bit, and that, again, needs iteration and learning.

Ciprian Borodescu: For those that want to understand the end game for AI technology, I strongly encourage them to read The AI Republic – it's on Amazon. And honestly, there are many things that resonate with me, from the AI-versus-IA analogy, to big data versus right data, or the MAP – minimum algorithmic performance. I think these are important notions and concepts to be understood by any leader out there that wants to incorporate AI into their organization. What are three actionable takeaways you'd want entrepreneurs or executives or managers to remember after reading your book and immediately be able to apply to their startups or organizations?

Terence Tse: Okay, okay. I think the first thing that one needs to understand is this: the most important part of AI is not the model itself. You know, in a couple of articles that we wrote, we always used this analogy: a good AI model is like a performance car engine. You can have a Lamborghini or Ferrari engine, but you need the rest of the Ferrari, the rest of the Lamborghini, in order to go from point A to point B. So the key is actually not to focus on the AI model itself. As a matter of fact, lots of companies actually buy AI models from other vendors, because there are so many AI developers. Again, it all depends on what you're trying to achieve. If you're trying to go and beat DeepMind, that's a completely different story than trying to use AI to deal with customers' questions online, right? So, you know, it's a very, very different thing. So, think not about the AI model itself, but the rest. The key to success is to get the rest correct. You can do it yourself, or you can work closely with vendors to make sure that the so-called production environment is actually up to scratch. So, think about that, because no matter how many PoCs and pilots you do, if you have not got anyone to help you implement, you won't be able to handle it. The second actionable takeaway, I believe, is: do not assume that you first need to collect enough data in order to actually run AI projects. Because again, what really matters is what exactly you are trying to achieve.

Ciprian Borodescu: The use case, yeah.

Terence Tse: So, let's say, if all you're doing is getting a model to make sense of, read, understand, and extract the data from a driving license, that's easy, okay? Because on a driving license, the data is always in exactly the same place. Whereas if you're asking a machine to make sense of bank statements coming from different banks, which all have different formats, that's a slightly different story. What we're doing these days is, in the case where you do not have enough data, you can already start to synthesize it. You know, we create it for you.

Ciprian Borodescu: Correct.

Terence Tse: It is not terribly difficult, because all you need to do is have a certain set of data to start with, and then what we'll do is make some changes to the data that you have got – the current data you have got – and then we use it to train the model to be more accurate or better able to read.
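The synthesis Terence describes – starting from a small real dataset and generating perturbed copies of it – is essentially data augmentation. A minimal sketch, assuming numeric feature vectors (a real pipeline would use domain-appropriate transforms and libraries, e.g. torchvision for images):

```python
import numpy as np

# Minimal data-augmentation sketch: take a small set of real samples and
# generate perturbed copies to enlarge the training set.

def augment(samples, copies=5, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for x in samples:
        out.append(x)  # keep the original
        for _ in range(copies):
            jittered = x + rng.normal(0, noise, size=x.shape)  # small random perturbation
            out.append(np.flip(jittered))                      # plus a simple transform
    return out

real = [np.array([0.1, 0.5, 0.9]), np.array([0.2, 0.4, 0.8])]
synthetic = augment(real)
print(len(synthetic))  # 12: each real sample plus 5 perturbed copies
```

As Ciprian notes next, the result is not perfect – the synthetic samples only ever reflect variations of the data you started with – but it is a workable starting point for training.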

Ciprian Borodescu: And, of course, it’s not perfect, but at least it’s a starting point, right?

Terence Tse: You've come to a very, very important point. A very, very important point. Do not even think for a moment that it will be perfect. If someone is telling you that you can get 100% out of it, that person is lying to you. It's not gonna be 100%. And you have to understand that you will never be able to get to 100%. So my third takeaway is this: think about what percentage is actually acceptable. For a lot of things, you don't need 100%. If you are just checking documents, you don't need 100%. If you are using AI to diagnose cancer, you probably will want to look at 100%; or if you're using AI to decide whether someone should go to jail or not, you're probably looking for 100% accuracy. But in many cases, you don't really need 100% accuracy.

Ciprian Borodescu: So in your book, Creating Value with Artificial Intelligence, you’re talking about a very good framework of identifying use cases and problems that can be solved through intelligent use of data. In fact, it’s a set of features that make the collaboration between humans and machines work. Can you dive a bit deeper into it and give a few examples?

Rudradeb Mitra: Absolutely. So, the framework I'm talking about is that, again, you start with the why – you start with the problem, right? Why do you want to build a solution? A lot of companies, a lot of startups say, "We want to use AI." I mean, that's the wrong place to start. So first, you have to start with, "Okay, is there a problem that is worth solving using machine learning and our technology?" Or technology in general, but especially AI and machine learning, let's say. And "How do I identify the right problem?" I say that, first of all, you identify problems where there's a lot of human error, or where the error in the existing ways of solving them is quite high, for various possible reasons. And then see if there are patterns: how is the problem currently solved? Once you see that there are potential repeat patterns and there are high errors, then go and look for the data to solve those problems. Because a lot of times, again, companies make this mistake that they start with the data first; they say, "Okay, we have this data. What can we do with this data?" And I always say that's the wrong approach, because then you are limiting yourself to only the data that you have, rather than starting from a problem that might require other data that you may not have. If you look only at the data you have, you will ignore the data out there, which may be publicly available and which you can get anyway. So, I think the framework for me is very simple. Start with the problem, identify if there are patterns, and then look for the data that you need. And then, of course, once you select the right problem and know what data to collect, or what data you have, go and overcome the challenge with the data.

Andraz Bole: So I have this notion – and I've had it for a while – that every single company should have a game designer. I mean, also a designer in terms of art, but a game designer in terms of mechanics – designing mechanics for games. I've come to the realization that game designers are some of the most cognitively flexible, productive, and creative people I've met, because making mechanics for games is, in some ways, a very general, basic way of building an organism and how that organism functions. And when you look at it from this perspective, every company is an organism, whether it's on the human level, or on a technology level, or on the product level. There are always parts that interact, like we said before with the tree and the forest. And I find game designers always have very insightful comments on various different parts of whatever it is you're doing, whether it's an autonomous vehicle company, or a marketing company, or a machine learning company that builds recommender engines, like we said before. So that's my first part of the answer. Every company that deals with AI, or any sort of digital technology, or… I'm going to go out on a limb and say every company should have a game designer, even if you have a chain of stores. So, one was the game designer. The second is somebody who understands the technology at a foundational level – let's call it an expert engineer. And then, I don't know if this translates to other… Like, I have somewhat good insight into games, because that's where I kind of grew up in the industry. I don't know if the role of a producer is a thing across various sectors of industry. But basically, you know, in the games industry, there's this joke that the job description of a producer is 'buys lunch'.
And it's a nice ridicule of a certain social skill set that I think is absolutely crucial in an organization: somebody who sees, realizes, and also, in a way, manages all the subtle little team dynamics between people, right? Because one of the issues that we have – sometimes; not all the time – is that if you're dealing with very specific engineering stuff, more often than not, you're going to have people who don't know how to deal with social anxiety, who don't know how to deal with social pressures, problems, etc. And if you're somebody who, for the vast majority of your life, was in your room coding on problems and now, all of a sudden, you're part of an organization and you don't know how to solve simple little problems – like somebody accidentally taking your coffee mug – then I'm going to keep having to put out fires because of you, and you're useless to me, right? Because no matter how genius you are as an engineer, it's going to take more effort to keep everything going than the results you'd produce are worth. So, that's why I think the third role is whatever you call the producer in other industries – and manager is not the proper word for this, because usually what manager means is somebody who misplaces trust and uses, you know, all sorts of leverage to make people do whatever is at hand. So, the point is how you identify the nuances and dynamics in the team so that conflicts and fires don't even come up. Right? So, in a way, it's like a teacher in an elementary school class: if you don't react to certain things fast enough, in a proper manner, all of a sudden you will have a fire in your classroom that you're not going to be able to put out without resorting to authoritarian measures – which always, in my opinion, make you look impotent and lose a certain amount of prestige, right?
Every time you have to raise your voice and go all monkey-like, you're just losing prestige and a certain level of respect. So, I think that's the third crucial part – besides, of course, all the usual suspects: somebody who can communicate the ideas, somebody who can operationalize the ideas, Scrum masters.

Ciprian Borodescu: And I know that you’re collaborating with both academia and health experts. And I wanted to ask you, what are some of the do’s and don’ts of conducting this kind of research, based on your experience? Is there a difference between doing research in academia versus a deep tech startup where speed is of the essence? Do you feel there’s a difference in approaches? Or maybe speed?

Marina Pavlovic Rivas: Yes, I do believe there is a difference in both. Speed definitely is a huge difference. Another difference is the direction and the process: working toward milestones and having a specific goal. Sometimes, in fundamental research, you do research but you're not sure where it will lead you. But when you're doing it for a business, to address a specific need, you have a specific objective, so you cannot just go with the flow – you have objectives that you need to meet in a specific timeframe. So this is the main difference. However, we believe that having partnerships with academia is a great thing, because there's all this knowledge – there are people who have been working for many years or even decades on some topics – so it would be a shame not to leverage this existing knowledge. So, for us, the very important point when we meet with potential partners is to really filter for that attitude aspect: how do they feel about reaching milestones and respecting a specific timeline? Is it something that seems to excite them, or is it something that they're really not comfortable with? When we do that, we find partners who are excited and who feel they are part of something concrete – part of a journey that will lead to market adoption in a pretty short period of time. Some of them are really excited about that, and it can lead to great partnerships.