Using Technology to Automate Business Processes
Michael Slattery, a senior managing consultant and lead of technical operations for BRG’s Global Applied Technologies (GAT) products, discusses how to use technology to increase the accuracy of results and decrease the time to provide results to customers.
TRANSCRIPT
MJ 00:00 Hi, everyone. This is Michael Jelen from the Global Applied Technology podcast. The GAT team, as we call ourselves, is a globally distributed team of software engineers, data scientists, graphic designers, and industry experts who serve clients through our products built atop the BRG DRIVE(TM) analytics platform. We're helping some of the world's largest and most innovative clients and governments transform raw data into actionable insights, drive efficiency through automation, and empower collaboration to improve business decisions. You can learn more about us, our products, and our team on our website. And if you have any questions or comments, please email us at gat@thinkbrg.com.
Today, I'll be speaking with Michael Slattery, and since we have so many Michaels on the team, you'll hear me refer to him as Slater. As GAT's lead of technical operations, Slater manages the maintenance and deployment of GAT products. And as a solutions architect for Digital Workforce, he develops new workers to be incorporated into the Digital Workforce suite. He strives to use automation and data to increase the accuracy of results and decrease the time to results for each customer. Please enjoy this conversation about Digital Workforce automation with Slater. Hi, Slater.
MS 01:06 Hey, Jelen. How's it going?
MJ 01:07 Going very well. How about you?
MS 01:09 Pretty good.
MJ 01:11 Awesome. Well, thank you so much for making time to chat with me today. I'm super excited to talk about automating business processes and using technology to do that. But before we jump into it, I'd love if you could just take a moment and introduce yourself to everybody on the line.
MS 01:25 Yeah. Sure. Mike Slattery, senior managing consultant at BRG. Basically, I started out at Georgetown University, did my master's degree there in applied mathematics, worked briefly in business consulting at IBM before actually meeting you, Jelen, at FTI doing data analytics consulting. And obviously, we're here today to talk about improving business processes with data analytics.
MJ 01:50 Awesome. And I know you worked on a lot of fun stuff. We worked on a lot of fun stuff together. Was there anything that sparked your interest in data analytics or in this topic in general?
MS 02:01 Predominantly, I'm a numbers guy, whether it's video games or sports or what have you. I always enjoy looking at the numbers. But using numbers to, one, get to the end goal is great; and, two, reusing that same process again and again so there's less to do by hand is also great. That's sort of my mantra and how I go about my day-to-day.
MJ 02:26 One term that we're going to be probably using a lot today is Digital Worker. Can you define what a Digital Worker is when we say that?
MS 02:34 Yeah. A Digital Worker is really just a workflow process that leverages different sorts of technology to do tasks, pretty much to increase speed, accuracy, and scalability. Normal things that a human would do at a computer, anything technical or math related, things like that, now become a scalable worker that could run in the cloud, on local premises, or wherever you might be. Get it done, get it done efficiently, and have that availability of scale at a lower cost.
MJ 03:06 Awesome. Sounds good. Well, I think that transitions us pretty well into what we're going to be talking about today, which is automation. So, could you maybe give a little bit of background about how we came upon working on this? What is the business problem that we were introduced to, and how did we go about trying to solve that?
MS 03:24 Yeah. Sure. I guess with this particular example, or at least where we kind of started on the diving board before getting into the pool, is going to be predominantly toward the financial services sector. But again, for anybody listening, you can cast the net as wide as you want, and we do have other examples of cases where we have worked in different industries.
This initial engagement was with a mid-tier loan vendor in the US that was facing concerns, and a potential investigation, about how it distributed loans based off of public assistance, so anything from food stamps to alimony. We needed to confirm that they had, one, followed the government's business logic, but also ensure that they had actually done the due diligence for those loans. So do they have notes saying the borrower is employed? Do they have notes saying that this has been confirmed?
So pretty much this was a look-back engagement where they really didn't have the budget or the resources to look through the number of loans in the initial scope. It was about 2.5 million pages for 7,000 loans, but the actual engagement was 70,000 loans in terms of what they had to go through and report. And obviously, that's a lot of pages to go through each day, applying that business logic to come up with a ticked box of, "Did we do our job right?" So that was the initial scope of that engagement: checking to see, "Did they issue all these loans properly, with all the paperwork signed?" and what have you. Obviously, not being a top-tier loan vendor, that could become quite expensive very quickly, especially when the initial sample alone was something like 2.5 million pages for 7,000 loans. So obviously, you can extrapolate from there.
MJ 05:09 2.5 million pages, is that what that number is?
MS 05:13 Yeah. 2.5 million pages of loan documents for 7,000 loans. And as I said, I believe the total population that they wanted to investigate was 70,000 loans. So at ten times that, things get crazy. It's a lot of pages.
MJ 05:26 Okay. And so just to back up a step, are these mortgage lenders? So basically, what goes into the application to get a mortgage for your house?
MS 05:37 Yeah. Yeah. Exactly.
MJ 05:38 Cool. So what kind of documents are included in the loan packet?
MS 05:43 It could be anything from your employment status to bank records to where you've lived; maybe it's a second mortgage. We see some very weird stuff in there too, like a cat's paw that had been photocopied. Weird things make it into these loan packets that have to be reviewed by a human, and it was quite a variety.
MJ 06:02 Got it. So business problem, we have lots and lots of documents that are related to mortgage lending. There are a wide array of documents, everything from valuable things to junk pictures of cat paws, letters from grandma, all that sort of stuff. And so, what do we need to do in order to verify that these loans were properly or potentially improperly sourced? What are the tests that have to be applied?
MS 06:26 First is, which pages matter? What are the pages that you care about? Obviously, things like employment status: are you employed? In the particular case we're talking about, public assistance is the focus. So is this person on food stamps? Does this person have any sort of COVID assistance, child support, alimony, those sorts of things? But also, was there a letter, a note, some sort of verification showing that somebody did their job to confirm these things? And that could have been by telephone, email, a lot of different ways you would communicate with somebody's employer or their bank or what have you to verify that sort of information.
MJ 07:03 Okay. Got it. So to take that specific example, one of the things in giving someone a mortgage is that you want to verify that they're employed. And so it sounds like that would require at least a couple of different documents. One would be some sort of income document that, I guess, would be related to that employment. Another would be someone confirming, by either calling or emailing the company, that they do in fact work there and are employed. And it sounds to me like those documents can come in a lot of different formats. So I guess from a—
MS 07:35 Can I interject there real quick?
MJ 07:37 Yeah. Absolutely.
MS 07:37 Because yes, you're going along the right train of thought. But what happens if that person is retired? Or what happens if that person is self-employed? You can't call them, because they're obviously going to say they're employed by themselves. Or if they're retired, maybe they want to put it against their 401(k) or whatever retirement structure they have going on. So it isn't always that simple. And a lot of what we have to do, and this is training the machine to do something that you would teach a human to do, is build this business logic tree of, "Well, if this, then that. If not, then this," using a lot of industry standards they understand to say, "Okay, this is the format, and we're going to use this," and then we can layer that on top.
MJ 08:23 Got it. Okay. So the first piece of that decision tree might be, is the person employed? Maybe they're not. Maybe they're retired. Maybe they're on compensation from the government. Then if they are employed, we need to go a step further and start looking for verification of that employment. Okay, that comes in different formats. How do we look for that? And the decision tree essentially keeps growing until we get the answer and verify that this is, in fact, up to the standards required for offering the loan. Is that sort of right?
MS 08:50 Yeah.
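[To make that decision-tree idea concrete, here is a minimal Python sketch of how such business logic might be encoded. The employment statuses, document-type labels, and pass/flag rules below are invented for illustration; they are not the actual criteria used on the engagement.]

```python
# Minimal sketch of a business-logic decision tree for employment verification.
# Statuses, document types, and rules are hypothetical illustrations only.

def verify_employment(loan: dict) -> str:
    """Return a review outcome for a single loan packet (represented as a dict)."""
    status = loan.get("employment_status")      # e.g., "employed", "self_employed", "retired"
    docs = set(loan.get("document_types", []))  # page-level classifications found in the packet

    if status == "employed":
        # Need both an income document and a note that employment was verified.
        if {"pay_stub", "verification_of_employment"} <= docs:
            return "pass"
        return "flag_for_review: missing income or employment verification"

    if status == "self_employed":
        # Can't simply call the employer, so look for tax returns instead.
        return "pass" if "tax_return" in docs else "flag_for_review: missing tax returns"

    if status == "retired":
        # Income may come from a 401(k) or pension; look for retirement statements.
        return "pass" if "retirement_statement" in docs else "flag_for_review: missing retirement income docs"

    if status == "public_assistance":
        # Food stamps, alimony, child support, etc. need their own documentation.
        return "pass" if "assistance_award_letter" in docs else "flag_for_review: missing assistance documentation"

    return "flag_for_review: unknown employment status"


# Example usage with a toy loan packet:
print(verify_employment({
    "employment_status": "employed",
    "document_types": ["pay_stub", "verification_of_employment", "bank_statement"],
}))  # -> "pass"
```

[In practice, rules like these would be taken directly from the lender's and regulator's requirements rather than hard-coded ad hoc.]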
MJ 08:50 So it sounds like this is probably something that lots of firms go through in this industry. Is there an opportunity to make that a little bit more general or add additional tests to make this more applicable to different companies?
MS 09:04 Yeah. And actually, that's a lot of the work that we're doing right now, which I think is even more interesting. Right? I think the example that I gave earlier is a company under duress now has to figure out with their budgetary constraints, "How do I talk to the government and report properly and make sure I'm not doing anything that's improper?" And that gets quite expensive very quickly, and that's what consultants do a lot of times.
But in this sense, a lot of the stuff we're doing right now is more forward-looking, in the sense that, yes, we can do this for a look-back if you are in a pinch and you need these answers really quickly. But we can also just apply this day-to-day. So when those 50 loans come in for the week, we can apply the same process at a much smaller scale, not 2.5 million pages but probably more like 500 pages. Put that business layer on top of, "Do you really need to take a second look at this? Do you trust the person that was reviewing this to begin with?" Or maybe you don't even need anybody to review it, because you're totally fine with the 5 percent or 10 percent you do review, and then go from there. And all of this is going to be logged, so you're able to go back, look at these pages, and say, "Okay, maybe I want to take a smaller sample of what's been digitally reviewed. I want to take another look." So all this stuff is trackable, readable, and, at the end of the day, scalable. Going forward, it seems like the way to go, in my personal opinion.
MJ 10:34 Yeah. I mean, why would you do a small sample for a QC when you could run everything through the algorithm and essentially apply all those Digital Workers to test each and every single page, ensuring you don't miss anything? So I absolutely agree with you. It does certainly seem like the way forward.
MS 10:51 It's crazy when you think about it, because back in the day when we first started working, you had physical papers. Right? Now, everything's PDFs, and you have to scroll through them. That just seems very error-prone to me for a human, whereas with physical papers, maybe you missed a page. But the rote monotony of just scrolling through PDFs seems much more error-prone than physical papers.
MJ 11:19 Yeah. And I think we actually saw that in our results as we were testing this product alongside humans. The product, unsurprisingly, outperforms humans. We've gotten to that point right now.
MS 11:29 Yeah. In my very first job, I had to do an audit of receipts. And I'm not going to lie, doing that eighty hours a week to get a project similar to what we're talking about over the line, your mind goes numb, and you just go through the motions. And I would much rather trust the machine to go through the motions than myself at this point.
MJ 11:47 Absolutely. 100 percent. Cool. So of course, this can work in the mortgage industry. We're expanding out the number of tests, and we're running this essentially as a QC process. That's very cool… And so I guess, as we went down that path and we knew that that was the problem that had to be solved, ultimately, what sort of technology did we use to be able to achieve that?
MS 12:10 We used an assortment of technologies. It's actually pretty incredible what's out there right now, because back when we were both working together at FTI, I did kind of an OCR engagement, and the technology was really not there; some of the basic packages required a significant amount of manual review. But now there's a ton available. For this example, we're actually leveraging both AWS Textract and Azure's Form Recognizer. Both have strengths, and both have weaknesses that aren't fully fleshed out. But using them together, and playing to what each is strong at, has assisted in a lot of the different tasks that we've built. And I do want to describe what I mean when I say tasks: these are tasks that a human would do. So a task, in this case, would be, "I want to flag a document." One tool might be better at it than the other. We're able to use confidence scores to figure out which one is doing the better job, compare the two, and move forward.
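[A rough sketch of that idea of comparing per-field confidence scores from two extraction services and keeping the stronger result. The extract_with_provider_a/b functions are stand-ins invented here; real code would wrap calls to AWS Textract and Azure Form Recognizer and map their responses into this (value, confidence) shape.]

```python
# Sketch of choosing between two OCR/extraction providers by confidence score.
# The provider functions are placeholders; real code would call the services
# and translate their responses into (value, confidence) pairs per field.

def extract_with_provider_a(page_bytes: bytes) -> dict:
    # Placeholder: imagine this wraps the first provider's API call.
    return {"borrower_name": ("Jane Doe", 0.97), "employer": ("Acme Corp", 0.62)}

def extract_with_provider_b(page_bytes: bytes) -> dict:
    # Placeholder: imagine this wraps the second provider's API call.
    return {"borrower_name": ("Jane Doe", 0.91), "employer": ("Acme Corporation", 0.88)}

def best_of_both(page_bytes: bytes, min_confidence: float = 0.80) -> dict:
    """For each field, keep whichever provider reported higher confidence;
    anything below min_confidence is routed to a human reviewer."""
    a, b = extract_with_provider_a(page_bytes), extract_with_provider_b(page_bytes)
    merged = {}
    for field in set(a) | set(b):
        value, conf = max([a.get(field, (None, 0.0)), b.get(field, (None, 0.0))],
                          key=lambda pair: pair[1])
        merged[field] = value if conf >= min_confidence else "NEEDS_HUMAN_REVIEW"
    return merged

print(best_of_both(b"...page image bytes..."))
```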
MJ 13:13 Well, so I guess at the highest level, to go back to the initial problem here: which documents are important? We're going to apply some computer vision logic to look through all these documents and flag which ones we think are important based on the text and the information that's on there. And then it sounds like the next step would be to extract the actual text, the information that appears on those pages, and then start to apply some business logic around it. Is that right?
MS 13:41 Yeah. That's correct. And the way you extract data from a page could be very different depending on who it's from. For example, I know we both work in the Middle East a lot; they read right to left versus left to right. There are situations where you need to parse out a table and other situations where you want loose key-value pairings. So back to my earlier point: Azure versus AWS versus Google versus the other general tools out there, we use them differently in terms of the packages they provide. And that's what we're leveraging: this one makes the most sense for this situation, so we set up a task to do that, and then it runs through the workflow that is that business process, or Digital Worker, if you will.
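[One way to picture that "pick the right tool per task" routing is a simple dispatch table, sketched below. The task names, handlers, and example results are hypothetical; in practice the routing decisions come from testing each provider against real documents.]

```python
# Illustrative dispatch of extraction tasks to the tool assumed to handle each best.
# Handlers return placeholder results; real ones would wrap the chosen provider.

def extract_table(page):
    return {"handler": "table extractor", "rows": []}                # placeholder result

def extract_key_values(page):
    return {"handler": "key-value extractor", "fields": {}}          # placeholder result

def extract_rtl_text(page):
    return {"handler": "right-to-left text extractor", "text": ""}   # placeholder result

TASK_ROUTER = {
    "tabular_data": extract_table,           # e.g., bank statements laid out as tables
    "key_value_pairs": extract_key_values,   # e.g., loose "Field: value" style forms
    "right_to_left_text": extract_rtl_text,  # e.g., Arabic-language documents
}

def run_task(task_name: str, page) -> dict:
    handler = TASK_ROUTER.get(task_name)
    if handler is None:
        raise ValueError(f"No task registered for {task_name!r}")
    return handler(page)

print(run_task("key_value_pairs", page=b"...page bytes..."))
```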
MJ 14:27 Cool. Awesome. And it sounds like in the end, ultimately, we were able to apply that logic. Can you talk a little bit about the outcome of building these Digital Workers for that specific engagement?
MS 14:38 Yeah. Sure. So for this, 70,000 loans had to be reviewed, and for the initial setup, we did the 7,000. But again, there was no way, with the team and the budget that was there, that they were going to be able to review all of those documents. So the consensus was to build this around the Digital Workers themselves. We would scale up the process and target what real people would be looking at, so the reviewers would know, with a certain degree of confidence, which documents had to be looked at in order to make the right decisions. So in the smaller sample case, those 2.5 million pages, we were able to reduce the effort way down. We didn't need a team of fifty people working through three weeks' time; it was, I think, seven people working for a week. And if you then scale that out to the 70,000 loans, that 2.5 million pages grows accordingly, but it's much easier for us to target the pages that have to be reviewed, apply some business logic with some confidence, and then it's up to the decision maker to say, "Should we review? Should we not?" Conversely, in a lot of these situations, they might look at 5 to 10 percent of the loans each month. In this case, we're putting that extra layer of protection over everything and then targeting what they want to look at.
MJ 15:59 Cool. Yeah. So I think we're able to test 100 percent, rather than try to sample and do a smaller percentage, essentially. What are some of the other areas that we've been working with to use computer vision and other sorts of automation tools to build these Digital Workers?
MS 16:14 I guess to start, one of the first Digital Workers our team built was an automated process for a very large country's healthcare data. We're now able to feed in their data monthly, run it through a bunch of transformations, standardization and normalization of that data, and provide it to clients so they can report on their own individual healthcare data with a benchmark against their country. I think that was the initial conception of, "How do we go ahead and automate a lot of the stuff that our team does?"
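[As a toy illustration of that monthly transform-and-standardize step, here is a small pandas sketch. The column names, cleaning rules, and normalization choice are invented; the real pipeline is specific to the country's healthcare data format.]

```python
# Toy sketch of a monthly standardization/normalization pass using pandas.
# Column names and cleaning rules are invented for illustration.

import pandas as pd

def standardize_monthly_extract(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)

    # Standardize column names and obvious formatting differences between monthly files.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["provider_name"] = df["provider_name"].str.strip().str.title()
    df["claim_date"] = pd.to_datetime(df["claim_date"], errors="coerce")

    # Normalize cost to a 0-1 scale so a client can be benchmarked against country-wide figures.
    cost_min, cost_max = df["claim_cost"].min(), df["claim_cost"].max()
    df["claim_cost_normalized"] = (df["claim_cost"] - cost_min) / (cost_max - cost_min)

    # Drop rows whose dates could not be parsed rather than letting them skew reporting.
    return df.dropna(subset=["claim_date"])

# Example usage (assuming a hypothetical monthly extract file):
# monthly = standardize_monthly_extract("claims_2023_01.csv")
```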
But from there, I think one of the more relevant things to this conversation was a worker-tracking analytics project, although we're not actually tracking workers. It was more along the lines of, "Do they need a chair?" In a certain state, that's regulated based on whether you're standing for a certain amount of time, whether there's space for a chair, and all these different things. So you can imagine, in this case, we just got a bunch of surveillance photos of people doing their jobs and where they're moving around. How do we best figure out, "Do they need a chair?" It's a very abstract thing to think about. But when you take a step back and say, "Well, this human is in a box. This workstation is a box," and use pixelated color images to track how that individual moves between those two boxes, that can give you a loose picture of, "Maybe this is something we need to investigate," or, "Maybe they don't need chairs because they're running all over the place all day." I'm not going to say what the verdict was on our suggestion, but you can understand the concept.
MJ 17:57 Sure. So using surveillance video, essentially, we're trying to identify the locations in a given office or in a given work location that one person is spending the majority of their time. Is that right? And so, if they are at their workstation, like a computer or something like that, for, I don't know—I'm making this up—80 percent of the time, then that seems like a good opportunity to offer that person a chair to sit down. But if they need to constantly move around and reach for things or pick stuff up, then that would be slightly different. That chair could actually get in the way. Is that the kind of analysis that we're talking about?
MS 18:32 Yeah. But also, is the workstation even big enough to have a chair? There are a lot of questions you can ask from it and from the work that we do. We're kind of data agnostic; we'll give you the answer. What the recommendation is, is based off that same business logic we talked about with mortgages and loans. We just give you the tools to reach the right answer.
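[To make the "person in a box, workstation in a box" idea concrete, here is a sketch that estimates how often a tracked person's bounding box overlaps the workstation's bounding box. The coordinates and sample frames are invented; real inputs would come from an object tracker running over the surveillance footage.]

```python
# Sketch: estimate the share of sampled frames in which a tracked person's bounding
# box overlaps their workstation's bounding box. Boxes are (x1, y1, x2, y2) in pixels;
# the sample data below is invented for illustration.

def boxes_overlap(a, b) -> bool:
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def share_of_time_at_station(person_boxes, station_box) -> float:
    """person_boxes: one bounding box per sampled video frame."""
    at_station = sum(boxes_overlap(p, station_box) for p in person_boxes)
    return at_station / len(person_boxes)

# Toy example: the person is at the station in 2 of 3 sampled frames.
station = (100, 100, 300, 300)
frames = [(120, 150, 180, 260), (400, 120, 460, 250), (110, 140, 170, 250)]
print(f"{share_of_time_at_station(frames, station):.0%} of sampled frames at the workstation")
```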
MJ 18:52 Got it. Got it. Yeah. It's an interesting problem that can be solved now in an automated fashion rather than standing there and looking and watching and observing what the person's doing.
MS 19:02 Yeah. I mean, could you imagine standing there with a clipboard, walking from station to station? You have the cameras anyway. You might as well use them.
MJ 19:08 Yeah. Exactly. Exactly. I know that there's a lot of information on the internet that we're trying to incorporate into a number of our different projects. I mean, getting information from lots of different sources is another opportunity for automation. Do you want to talk a little bit about some of the tactics that you take to pull that information down?
MS 19:25 Yeah. Sure. So this is a very unique case, and we've actually done web scraping in the past, back when we were in Saudi Arabia; we've done it in the Middle East a few times. But this case is about investigating how a certain vendor uses client searches to promote certain products. I'm not going to get into the nitty-gritty about the who and the what, but the whole thing is focused on one specific country, one specific area. How do we best take that data to see, "Are they suggesting certain products over others, or is there any sort of favoritism there?" So when tackling a web-scraping problem, it gets complex very fast. At its core, you're looking at a page and you need to scrape the HTML, and the variables you want to see can be embedded in a number of different ways. It can be very easy; it can be very hard. Each problem is very much different. But in this case, we were getting blocked. You could not just hit the pages and go from there.
So we had to go through a process of mimicking somebody signing in, then mimicking clicks on pages, to go around and extract the different pages that we wanted. So this one was very complex. And say you want to review thirty different types of products, and from those thirty you want to compare each with thirty others; the number of pages, clicks, and pieces of information you would need a human to work through gets very large already. Whereas in this case, I pretty much have a virtual machine just sitting up there that goes through, captures that data, and pulls it down with screenshots.
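[Below is a rough Selenium-style sketch of that sign-in, click-around, capture-screenshots flow. The URL, element IDs, credentials, and product list are placeholders, and the real worker's handling of the site's specific navigation and blocking behavior is not reproduced here.]

```python
# Rough sketch of a scraping worker that signs in, navigates like a person,
# and captures both the HTML and a screenshot of each product page.
# URL, element locators, credentials, and product IDs are placeholders.

import time, random
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example-retailer.test/login")    # placeholder URL

# Mimic a person signing in.
driver.find_element(By.ID, "username").send_keys("reviewer@example.com")
driver.find_element(By.ID, "password").send_keys("not-a-real-password")
driver.find_element(By.ID, "submit").click()

product_ids = ["sku-001", "sku-002", "sku-003"]       # placeholder product list
for sku in product_ids:
    time.sleep(random.uniform(2, 6))                  # human-like pause between actions
    driver.get(f"https://example-retailer.test/products/{sku}")
    with open(f"{sku}.html", "w", encoding="utf-8") as f:
        f.write(driver.page_source)                   # raw HTML for later parsing
    driver.save_screenshot(f"{sku}.png")              # screenshot kept as evidence

driver.quit()
```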
MJ 21:09 Cool. Yeah. That's awesome. So it sounds like in the mortgage industry, we've already isolated situations where we're able to add these tests and essentially provide a layer of QC using these Digital Workers. When it comes to other industries, though, there's a lot of information, especially online, that could be very beneficial to us if we're able to extract and pull it down. I know you've been working on something in that space recently. Can you tell us a little bit about that project as well?
MS 21:35 Yeah. Sure. So in this specific project, we needed to grab large amounts of data from pages on the web, and we were dealing with an e-commerce company that sells a variety of products. The concern there was, are they showing favoritism, maybe based on certain delivery fees? Maybe it's certain products they're promoting versus others. Is it harder to get to one product's page versus another? In a lot of these web-scraping initiatives, it really depends on the level of security of the page itself, and whether they even care. But in this particular one, it did take a little bit of effort to train a Digital Worker to pull up a page, log in from a certain region (in this case, the UK and Europe), and from there click around on the page the way a person would. If somebody is just hitting a function all the time, a Google, Amazon, Walmart, whatever, will pick that up and flag you as a bot. So in a lot of these cases, you have to use a Digital Worker that's also smart, one that mimics a lot of what a normal human would do to get this information, but is then also smart enough to routinely take that information and put it in the right place.
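[Complementing the sketch above, this is one hedged way to picture the "don't look like a bot" pacing: randomized visit order and jittered think-time so requests never arrive on a fixed schedule. The timing values and the visit_fn hook are illustrative only.]

```python
# Sketch of human-like pacing for a scraping worker: shuffle the order of visits
# and randomize the delay between them so the pattern doesn't look programmatic.
# Timing values are illustrative, not tuned for any real site.

import random, time

def human_like_visit(pages, visit_fn):
    random.shuffle(pages)                      # don't crawl in a predictable order
    for page in pages:
        time.sleep(random.uniform(3.0, 12.0))  # jittered think-time between pages
        visit_fn(page)
        if random.random() < 0.2:              # occasionally take a longer break
            time.sleep(random.uniform(30, 90))
```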
MJ 22:56 Got it. So you're creating a bot that is as close as possible to a human in order to prevent yourself from being blocked by one of these major sites.
MS 23:06 Exactly.
MJ 23:07 Yeah. Very, very challenging. But then once you extract the information, you can process it and use it for whatever the purpose of that engagement is. So that's pretty cool.
MS 23:17 Yup. Exactly.
MJ 23:18 So now that we've got an example from the mortgage industry where we're reading a bunch of paper, and we're in other industries taking a look at surveillance video, and we have the capability of reading information from the web in a programmatic fashion, all these different Digital Workers performing these individual jobs, what are some of the future Digital Workers you could see us creating, some other areas that would be interesting to expand into?
MS 23:43 So as I mentioned before, I used to work for IBM; I was a UAT project manager there. And UAT ties into the most recent example we just talked about with web scraping: if you're able to simulate the clicks a person would make, you're able to do UAT. And just for those that don't know, UAT, user acceptance testing, is what any big tech arm of a company has to perform before rolling anything out, and that's not just a Google thing. It could be your online banking system or anything else you could imagine that's tech; it usually goes through UAT. So that's a very simple one and something I did in the past, where you design these tests, you then physically had to have people run them and come back with, "Things are broken," or, "Things are great," or, "You need to investigate this more."
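[As a hedged example of what scripting one UAT check might look like, here is a short Selenium-based test that logs in and asserts that the expected page appears. The application URL, element locators, credentials, and expected text are placeholders, not any real client system.]

```python
# Sketch of a single automated UAT check: log in and confirm the dashboard loads
# with the expected heading. A test runner such as pytest would discover and run it.
# URL, locators, credentials, and expected text are placeholders.

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_shows_dashboard():
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.test/login")
        driver.find_element(By.ID, "username").send_keys("uat_user")
        driver.find_element(By.ID, "password").send_keys("uat_password")
        driver.find_element(By.ID, "submit").click()

        heading = driver.find_element(By.TAG_NAME, "h1").text
        assert "Welcome" in heading, f"Unexpected dashboard heading: {heading!r}"
    finally:
        driver.quit()
```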
MJ 24:33 So there's a super wide array of things that technology is already capable of, things a Digital Worker would be able to do at least as well as a human worker in a lot of spaces. And one interesting one that has been getting better and better over time is the ability for computers to read language, so natural language processing. And I know we've done some work around sentiment analysis, where the computer can actually read a document and determine, "Is it positive about this certain topic? Is it negative? Is it angry? What would a human feel if they were reading this document?" Do you want to talk a little bit about that world and how we've been using it?
MS 25:15 Yeah. Sure. Natural language processing has come on leaps and bounds in a lot of cases. We've done one project in particular doing Twitter analysis of how a certain airline fares on Twitter versus another and making comparisons. We've also done it in two countries, looking at how people are responding on Twitter or Facebook, or what have you, to certain legal changes that have come out from the government, and using that text to construct, "What are people's opinions about things?" because opinions are important.
Going off that: it doesn't need to be typed out. It could be voice that gets transcribed. Microsoft Teams already does that, and there are several packages that can take what I'm saying right now and put it in a nice little document for somebody to read later. Once you start to use those tools, you can do some very interesting analysis or investigations into what would make sense going forward and what could be helpful to the end user, or end users in this case. If it's a meeting with a transcript, nobody has to take notes anymore; now you can go back to what people said. And when you put a layer of sentiment analysis on top of it, you can say, "Well, user A really didn't vibe with point one that was made on that topic," not to be super Big Brother.
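[For a small taste of that sentiment layer, here is a sketch using NLTK's VADER analyzer on a few invented snippets. The text and the score thresholds are made up, and VADER is just one of many tools that could be used.]

```python
# Sketch: score a handful of invented snippets with NLTK's VADER sentiment analyzer.
# Requires: pip install nltk, plus a one-time download of the VADER lexicon.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

snippets = [
    "The new boarding process was quick and the crew were fantastic.",
    "Third delay this month and nobody at the desk could explain why.",
    "I think the proposal in point one needs more work before we commit.",
]

for text in snippets:
    scores = sia.polarity_scores(text)       # returns neg/neu/pos plus a compound score
    label = ("positive" if scores["compound"] > 0.05
             else "negative" if scores["compound"] < -0.05
             else "neutral")
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```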
MJ 26:45 Yeah. It seems like all of the data that we've been capturing as businesses over the past ten years, a lot of it has been unusable. Right? If every chat that is going on inside of a company is constantly being tracked, that's fine. And of course, we can go back to it if something goes wrong in a legal and compliance setting. But it sounds like now we have the ability to constantly monitor and ingest that information and use it as just an additional data source that can inform the business or improve processes. But yeah, it's really amazing how far a lot of the technology to use this data has come.
MS 27:21 I mean, back in the day, it was physical papers, reading documents, us writing reports manually, handing them over, discussing the data we found, pulling examples, doing all these things. Now, imagine you're not able to go into the office all the time. You're able to have a transcription of a meeting that's being recorded, with someone sharing their screen. It could be a document; it could be a visualization. It doesn't really matter which. And from what we've already described about scanning documents, we're able to get information from it, whether that's sentiment or certain pages that need to be flagged based on business rules.
MJ 28:00 Yeah. Absolutely. It is pretty impressive. I think we're getting better at being able to use all of the information that's available to us to paint a fuller picture of what's going on in the business.
MS 28:12 Yeah. My motto is work smarter, not more. So, at the end of the day, whatever we can do to make everybody efficient, and more accurate to that point as well, and not be doing the same rote tasks over and over, I think just makes everybody's lives better.
MJ 28:28 Perfect. Well, I think that's a great way to wrap everything up here. Work smarter, not more. Leverage technology and Digital Workers to automate as much as you possibly can in the processes so that you can focus on the most high-value, important things to do. All right. Well, thanks so much. I really appreciate it, Slater. Thanks for taking the time to chat today. Always love talking to you about using technology to automate things.
MS 28:51 Awesome. Thanks, Jelen.
MJ 28:53 The views and opinions expressed in this podcast are those of the participants and do not necessarily reflect the opinions, position, or policy of Berkeley Research Group or its other employees and affiliates.