Dreaming of a Data Privacy Resource

Amy Worley, a managing director in BRG’s data privacy practice, discusses the creation of BRG’s DREAM software and how the evolution of privacy has led to the need for a data privacy resource.


TRANSCRIPT

MJ 00:00              Welcome to Force Multiplier, the official podcast of BRG's Global Applied Technology team. The GAT team, as we call ourselves, is a globally distributed team of software engineers, data scientists, graphic designers, and industry experts who serve clients through our BRG DRIVE™ analytics platform. We're helping some of the world's largest and most innovative companies and governments transform data into actionable insights.

I'm Michael Jelen. And in these conversations, we speak with people both internal and external to BRG to discuss how technology, and specifically software, acts as a force multiplier to extend the impact of people across any kind of professional organization.

Today, I'll be speaking with Amy Worley, BRG's head of data privacy and the creator of BRG's DREAM product. DREAM stands for Data Risk Expert Assessment Model and is our best attempt to convert Amy's brilliance into a software product. In a nutshell, DREAM takes in hundreds of data points about a company, identifies the areas of biggest risk, estimates [their] dollar value, and suggests potential actions to reduce that risk. I always have a blast talking with Amy, who is a compulsively curious, adventurous former lawyer who believes in math, interconnectedness, and the limits language imposes on our ability to understand. Please join me today as I speak with Amy about DREAM. Hi, Amy. How are you doing?

AW 01:19             Good. How are you?

MJ 01:20              Great. Thanks so much for making the time to talk to me today. I'm super excited to dive in. But before we do, I'd love if you could just take a couple of minutes and introduce yourself to everyone.

AW 01:30             Sure. So, I am Amy Worley. I'm a managing director here at BRG. And I have been with the firm almost three years now, coming out of industry. My specialty is information compliance, mostly privacy but really the full spectrum of information compliance. And I always tell people up front that I am a recovering lawyer. So, I was in private practice for about sixteen years before going in-house, ultimately becoming the chief global privacy officer for a multinational pharma and then joining BRG, where I lead the privacy compliance group.

MJ 02:10              Great. That's awesome. And how did you get into that space? As [you're] a recovering lawyer, I'm sure there's an interesting path to get there, and I know you're quite technical as well. But how did you end up choosing that?

AW 02:23             It was completely by accident. [laughter] So I started out as a civil rights lawyer coming out of law school, doing some Fourth Amendment search-and-seizure work, and just had a real interest in and passion for this idea of what people choose to keep confidential as sort of tied to basic free speech rights and democracy. I'm one of these people who believes that privacy is a fundamental right or freedom. And so, my law practice sort of happened to coincide with the internet becoming something that everybody was engaging with on a day-to-day basis. So, I really grew up in two worlds. As a kid, my first computer was an Apple II. And I grew up in a pretty offline world—I mean, yes, the internet existed, but regular folks didn't have it—and was licensed to practice in 2000. And it just so happened that my career and the internet kind of grew up together. And my interest in Fourth Amendment search and seizure and privacy naturally converged into this privacy and technology practice that I have now.

And I'm technical, but I'm largely self-taught on the tech side. My undergraduate background was really in logic and literature. And I spend a lot of time sort of translating to nontechnical folks—executives, lawyers, chief compliance officers, GCs [general counsels], CEOs—sort of explaining the tech world to them, and then doing the opposite: taking the compliance framework and the laws and translating that to engineers, developers, security folks. "If the rule is I have to be careful with what I do with personal data, what does that look like in my code base? What data elements can I use? How do I actually make this regulation meaningful?" And so, I think everybody sort of forgets about Gen X. [laughter] But one interesting thing about the folks who are at my stage of life is that we did sort of bridge these two worlds. And in some ways, that makes us pretty good translators.

MJ 04:48              Yeah. And it's such a fascinating time right now, as we see in the public domain tons of talk about privacy for the first time. It felt like, ten years ago, as the web was rising, people were willing to give away their data to virtually anyone in exchange for convenience and great products. And only right now are we realizing that that may have been a mistake, and we may want to rethink how we're doing that. So, I don't know what your thoughts are on many people saying, "Privacy is dead," but it does seem like a narrative that's quite widespread right now.

AW 05:19             Yes, absolutely. So, when I tell people what I do, I get one of two answers. I either get, "Wow, you must be busy," which is true, [laughter] or, "Privacy is dead." And I will say, people have been saying privacy is dead since we invented the telegraph. But I do think the way that you introduce this topic is exactly right. And I think that the difference is people's understanding of what they were giving away. I know that early adopters of social media were so excited—and I was in this group—to reconnect with college friends. And there wasn't a widespread understanding of data collection and data use. And I tell people all the time, if the product's free, you're the product. But after things like Edward Snowden and Cambridge Analytica and other news events like that, people became aware that their data is collected. And they have some concerns about that.

And so, when people say privacy is dead, I always say there's a difference between absolute confidentiality and privacy. So, the simplest example of this is a secret. We all have secrets. I may choose to tell something confidential about myself to you and three friends. And my expectation is that it's not absolutely confidential anymore, what I've shared, but I've chosen based on my relationship and the information that I have about the people that I've shared this with—I've chosen to make a limited disclosure. And that's really what privacy is about. It's about choosing what information about yourself to share with whom and what happens to it. What am I sharing—again, not absolute confidentiality—and then, what do I know about what's going to happen to that data after it's out in the world?

MJ 07:22              And it seems like right now is a critical time in the evolution of privacy, as we see people caring about this more and more and trying to understand what the correct behavior is, not just with our own data but with all of our customers' data. And of course, when it comes to a large company and their considerations in this area, choosing wrong could be an extremely expensive mistake. And so, when you're in the boardroom and you're trying to translate things between the engineers and the lawyers and the board, I can only imagine that there are so many different topics being discussed. And it seems like it might be a little bit difficult to make a decision based on something empirical at this point. I know that's a challenge that you're working on. Could you maybe set the stage a little bit about what data points you might use or how people are making these decisions right now? It still feels like it's very subjective in many cases.

AW 08:12             Yeah, it is. So, we—and when I say we, I mean compliance professionals—are still largely using the same risk-modeling tools that we used in 1965. We'll say risk is impact times likelihood. And there are lots of different ways of going about that math, but they're often very subjective. And when I came out of the law firm and went into a business and really saw this impact-times-likelihood analysis being done, I was like, "Gosh, this is so very subjective. What do you mean, likelihood?" And I felt like there was a lot of guessing going on. And it's very challenging, then, when you're asking to build out a program—especially in this post-GDPR world, where programs are very expensive and very far-reaching. Executives will ask tough questions. "Why should I be spending that much money on that program?" Or "What is the actual risk avoidance? What's the benefit to this?"

And I found that those questions were tough to answer in the old way of doing the analysis. And I just felt like—and sort of the reason that we're talking—there is a better way to do this, to have conversations with boards and executives about company behaviors around privacy, around people's data and how that really drives the bottom line. And so that's where DREAM came from. It's trying to get out of pulling a case study, pulling a few regulatory decisions, putting them in a spreadsheet, kind of manually taking a shot at it. Let's use algorithms, machine learning. Let's do the math on the data that's out there and actually get to a much more defensible and empirical risk analysis, which makes it so much easier to tell a story to a board.
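
To make the contrast concrete, here is a minimal sketch of the classic impact-times-likelihood calculation Amy describes. It is purely illustrative (not BRG's DREAM model), and it shows exactly where the subjectivity lives: both inputs are hand-rated.

```python
# Classic 1965-style risk scoring: risk = impact x likelihood,
# with both inputs as subjective 1-5 ratings. Illustrative only.

def classic_risk_score(impact: int, likelihood: int) -> int:
    """Traditional risk matrix score: 1 (negligible) to 25 (critical)."""
    assert 1 <= impact <= 5 and 1 <= likelihood <= 5
    return impact * likelihood

# Two assessors rating the same scenario can disagree wildly,
# which is the "lot of guessing" Amy is pointing at:
print(classic_risk_score(impact=4, likelihood=2))  # assessor A -> 8
print(classic_risk_score(impact=4, likelihood=4))  # assessor B -> 16
```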

MJ 10:17              Yeah. And I know our entire team has always been speaking about data-driven decision-making as one of the core tenets of successful execution these days. It seems like a lot of these data sources may be there, but people aren't recognizing them. Do you want to talk a little bit about the methodology that you might undertake and how you might pull some of these different data sources and use them to come up [with] or quantify some of these different attributes that you're talking about?

AW 10:44             Yeah. So, in the old way, you would go and look at settlement decisions or case decisions, read them, and try to extrapolate from that. Now, there are three websites available that provide a lot of information on regulatory actions. For example, there are some good tables on US Department of Health and Human Services HIPAA fines, and you can grab that information. The GDPR fines are out there and available. And you no longer have to go and pull each case or each decision. You can grab the information out of tables and look at it at scale. And the same thing is true in terms of the one thing that we know, maybe not perfectly, measures how people feel about corporate behavior, which is stock price. You can look at a company that's had a privacy event and look at their stock price before and their stock price after. And that can give you a good data point to see what the potential impact is if we, as we say on my team, do something creepy with people's personal data that causes them to have an unpleasant reaction.
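
As a rough illustration of that before-and-after comparison, here is a minimal sketch using the open-source yfinance library. The ticker and event date are placeholders, not a real case, and a real analysis would also control for overall market movement over the same window.

```python
# Minimal before/after stock-price comparison around a privacy event.
import yfinance as yf

TICKER = "XYZ"             # placeholder: company that had a privacy event
EVENT_DATE = "2023-06-01"  # placeholder: date the event became public

close = yf.Ticker(TICKER).history(start="2023-05-01", end="2023-07-01")["Close"]
before = close.loc[:EVENT_DATE].mean()
after = close.loc[EVENT_DATE:].mean()
print(f"Avg close before: {before:.2f}  after: {after:.2f}  "
      f"move: {(after - before) / before:+.1%}")
```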

MJ 12:13              Yeah, interesting. And I guess if we have a lot of data and we're able to build models and start to quantify these things, do you see that as something that replaces the current subjective decision-making that occurs? Or do you think that there is an opportunity to steer that in a certain direction? How would you use your expertise in conjunction with some of these datasets to build a decision space that executives would love to understand a little bit better?

AW 12:40             Yeah. So, I'm going to go improv and say this is the problem. So, I think all of this objective information is super important and is game-changing and is where compliance is going. And the machines really are better than humans at making some decisions. However, and—yes, and—the people who will be deciding, whether it's customers who might affect share price, employees, or regulators who might be issuing fines, they're humans who have feelings and human responses. And so, what I think the right answer is, is a mix. It's a mix of: use the objective empirical data and enrich it with humans knowing about humans. And that's really where you get the best answer of what your actual risk profile is.

I mean, I think of it in terms of IQ and EQ. The empirical data that we can scrape off the web is the IQ. And it's absolutely necessary. But so is the EQ, because the person—I was on the phone yesterday with one of the European regulators. And we talk about them as capital T, capital R, “The Regulators.” [laughter] And he has feelings and experiences and beliefs. So, the model has to consider all of that.

MJ 14:12              Yeah, I love that. We're effectively humans in a human world making decisions, but we could leverage technology to be a force multiplier for us.

I do love the example of the history of chess, which has obviously been a human-to-human game for as long as we can remember—up until the moment, in 1997, when Garry Kasparov lost to IBM's Deep Blue. At that point, the whole world thought the game was over and there was no more need for a human to play against the machine, because machines would always win, and that's just the way it would be.

What happened next was pretty surprising to me. All of a sudden, these centaur teams—a human and a computer—would be competing against another human and a computer. And at that point, the human-computer combo was far more powerful than either a human or a computer individually.

Because if we are playing in a human world, we're still applying game theory. We're still trying to outsmart each other. And in many situations, that's sort of what compliance is. If we have a set of tools and technology that can enable us as professionals or experts in this area to just reach more data and get more decision points that we can build on, we're able to sort of shape the future in our way, the way we want, but with the power that a computer provides. So, this seems like a perfect example of trying to do that in the business world.

AW 15:33             Totally. So chess is not dead, and privacy is not dead.

MJ 15:37              Cool. Yeah. And so, I guess that transitions us pretty well into the way that we are trying to build this new product. You'd mentioned a moment ago, DREAM, which is Data Risk Expert Assessment Model. And we were lucky enough to build this together while we were in beautiful Whistler, Canada. But in trying to build and scope out this product, how does it work? What are some of the attributes that you wanted to build into there, right from the beginning?

AW 16:02             Yeah. So, I'm very excited about DREAM. I will not use the pun that it is a dream come true, but I did. [laughter] So, one of the things that I talk about when I'm teaching, doing CLEs [continuing legal education courses]—and I haven't been able to do that as much since COVID—is that really the sexy part of compliance is that it's like poker, right? So, you are always playing this hand. These are the regulations. These are the legal expectations. And what is the risk or likelihood that something could really go south for us? Because we don't live in a perfect world with infinite resources. I have never worked in or for a business with infinite resources. So, businesses have to decide where to allocate money to reduce risk. And what DREAM does is, it takes the client's information about itself, the expert information of BRG's privacy experts, and then takes empirical market data and regulatory enforcement data and really tries to up our game. It lets us make more informed decisions about the best card to play or whether to call. And ultimately, the goal is to get to monetized risk modeling that is based on all of those factors: client, expert, market, regulatory.
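
To sketch the general shape of that combination—with the caveat that every field and weight below is invented for illustration, and this is not DREAM's actual algorithm—a toy model might blend the four inputs like this:

```python
# Toy sketch: blend client, expert, market, and regulatory inputs
# into a dollar-denominated risk estimate. All numbers are invented.

def monetized_risk(client, expert, market, regulatory):
    """Blend four risk signals into a rough expected-loss figure."""
    expected_fine = regulatory["avg_fine"] * client["exposure_factor"]
    expected_market_loss = market["typical_price_drop"] * client["market_cap"]
    raw = expected_fine + expected_market_loss
    # Expert judgment scales the empirical estimate (the "EQ" layer).
    return raw * expert["likelihood_adjustment"]

estimate = monetized_risk(
    client={"exposure_factor": 0.6, "market_cap": 2_000_000_000},
    expert={"likelihood_adjustment": 0.8},
    market={"typical_price_drop": 0.03},
    regulatory={"avg_fine": 25_000_000},
)
print(f"Indicative risk exposure: ${estimate:,.0f}")  # $60,000,000
```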

MJ 17:35              Yeah. And in our early customer interviews, as we are building out the product and enhancing features, one thing that really stuck out was the response of clients when we would first capture some information, scrape some information from publicly available sources, and then all of a sudden surface what we believe, or what the model believes, is the highest-risk area, what that actually means in dollar terms, and what the suggested actions would be to mitigate it. It's almost like a diagnostic tool to see where your health is, how it compares against peers, and what things you can do to improve it. So, it really seemed like quite a powerful thing to our clients.

AW 18:14             Absolutely. And often you hear people say, especially in companies that are less mature on the compliance side, like, "Let's go after the low-hanging fruit." But what they don't know is which low-hanging fruit, or sometimes even which fruit tree. And so, what we do with DREAM is, once we've determined what we think the risk profile is, we offer mitigation strategies. And you can choose them. Say, "Well, if I delete some personal data that we have, or if I do some data minimization practices, or I limit how much genetic data I'm collecting"—you can toggle these different mitigating strategies and see how that affects your risk score. So, it takes away this platitude of, "Hey, let's go after the low-hanging fruit," and instead says, "Let's go after the orange on the third branch."
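
A toy version of those toggles (again purely illustrative, with invented reduction factors rather than DREAM's actual numbers) might look like this:

```python
# Toy sketch of mitigation toggles: each strategy cuts a baseline risk
# figure by an invented factor, showing which fruit is worth picking.

BASELINE_RISK = 60_000_000  # dollars, from some upstream risk model

MITIGATIONS = {  # invented reduction factors per strategy
    "delete_stale_personal_data": 0.15,
    "data_minimization": 0.10,
    "limit_genetic_data_collection": 0.20,
}

def residual_risk(enabled):
    """Apply each enabled mitigation's reduction to the baseline."""
    risk = BASELINE_RISK
    for name in enabled:
        risk *= 1 - MITIGATIONS[name]
    return risk

for toggle in MITIGATIONS:
    print(f"{toggle}: ${residual_risk([toggle]):,.0f}")
print(f"all three: ${residual_risk(MITIGATIONS):,.0f}")
```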

MJ 19:12              Yeah. And it often might be that you didn't even recognize that an orange tree was there, as you were saying. One thing that I thought was quite interesting was, the typical risk that people associate with noncompliance is whatever fine may be levied by the regulator. But as you're saying, most people these days vote with their feet. And if a company is behaving irresponsibly, they will move away from that company and start to purchase things from competitors, which can have a massive impact on its market share. And then, of course, that translates into bottom line for the organization.

So, it turns out when speaking to people on the board about what the risk is, it's not necessarily the fine that's the biggest risk. It might be a huge drop in market share, which ultimately translates into a lower market cap for the entire organization. And so being able to show that to someone in terms of pointing them toward the orange tree and not the apple tree is often extremely powerful and really the place where the highest risk is located.

AW 20:10             Absolutely. And as far as I know—and I can't say that we've researched this exhaustively—but as far as I know, this is the only tool that I've seen that looks at the market's behavior and not just the regulators' behavior. Millennials, Gen Z'ers, they care about ethical business practices. And you see things like the creation of chief trust officers outside of financial services. Companies are trying to put their best foot forward and be trustworthy. And these younger consumers are aware of data practices. They care about privacy. Again, they may not care about absolute confidentiality, but they want to know what's happening to their data. And so, we can see market impacts when privacy events occur. And that matters in the boardroom.

MJ 20:59              Yeah. And it seems like some of the bigger industries are starting to adopt this more rapidly than others. I think we can see this in financial services and banking, as well as, of course, life sciences, which is an area you've worked in quite extensively. I don't see this going away at any time soon. The volume of data is increasing. I think the behavior of consumers is also going to continue to grow in a way that encourages responsible business practices. How do you see this going from a regulatory perspective? And what are regulators thinking about in this space?

AW 21:32             Yeah. So, it's interesting. I think we're at a watershed moment here. The regulators are now looking at companies across the board, not just Big Tech, although that is certainly a large focus. They're saying, "You have collected all of this data about your customers or about your business partners. And you're using that for profit. You should also use it to detect signals of noncompliance." So, we've already seen it in the financial services sector, where you have to be monitoring certain datasets to look for fraudulent or noncompliant, money-laundering-type behavior. In life sciences, you see the OCR saying, "We want you to look at your data for compliance problems with respect to selling pharmaceuticals or medical devices."

This is going to become the regulatory norm. And I think it's fair to say if you have the data and you know how to use it, because you are using it in your Salesforce app and in your SAP app, and you're using it to drive growth in your business, you also need to look at it to make informed decisions and spot compliance problems.

And to flip that on its head, to move from the problem—way back when I was still lawyering, I tried some cases. And the nice thing about using a tool like DREAM is, it also lets you tell such a good story of how we did use our data for good and not evil, right? We took it. And we looked at it. And we very intentionally tried to reduce the risk to the individuals. And here's how. Here's our total story of how we did that.

MJ 23:12              Yeah. And I think it provides you with the mechanism for evaluating and measuring any of the mitigation actions that you would take. So, say you see off-label promotion, for instance—a certain area of the world is engaging in certain practices—and you decide to put a training into place. I think under the current system, very frequently, people say, "I did the training. It's fine." But no one's really measuring whether or not that fixed the root problem. And by the way, there are ten different possible trainings you could do. Why not A/B test a few of them and see which are the most effective in changing the underlying behavior of the people who are engaging in noncompliant activities?

And so, by putting all of that into one place, not only are we using math to identify the problems and suggest possible solutions, but when evaluating the implementation of those solutions, we can actually measure how effective each of them is and continue to do more of what's working and less of what's not working. So, it really is quite an end-to-end solution in order to, as you said, show regulators that you are using data for good and supporting the compliant practices that they're trying to promote.
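
For the A/B test itself, a minimal sketch might compare post-training noncompliance rates with a two-proportion z-test, here using the statsmodels library; the counts are invented for illustration.

```python
# Compare post-training noncompliance rates for two trainings (A vs. B)
# with a two-proportion z-test. Counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

violations = [12, 4]    # noncompliant events observed after training A vs. B
monitored = [200, 200]  # employees monitored in each group

stat, p_value = proportions_ztest(count=violations, nobs=monitored)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# A small p-value is evidence that training B genuinely changed behavior,
# so resources shift toward it and away from what isn't working.
```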

AW 24:21             Absolutely, absolutely. And I'm in conversations with regulators all the time. And I can tell you that a real difference maker in how that conversation goes is what the regulator believes to be true about how integrated your compliance program is into your culture and how effective your monitoring of that program is. And DREAM basically does that for you. You can just say, "Here it is. Here's exactly what we did. These are the mitigation measures that we took. This is what we believe the result from that was. And oh, by the way, we tried this one thing, and it didn't change our numbers at all. So, we moved our resources over here." And that just says, "We're really trying to get this right."

MJ 25:14              Yeah. And that combination of bringing together the best of both worlds, the human expertise as well as the technology, really does seem like it makes compliance pretty sexy. So, as we've covered tons of topics today, this has been such a pleasure chatting with you. I feel like we've left people with so many things. Is there any final thought or way that you would want to wrap this up for our listeners today?

AW 25:35             Yeah, I would say compliance is poker, and using data effectively—I feel like DREAM is counting cards. [laughter]

MJ 25:44              I love that. Well, thank you so much, Amy. It's been such a pleasure to work on DREAM with you and build it together. And I'm so excited for what the future holds.

AW 25:53             Likewise. It really, hopefully, is a dream come true. [laughter] So thank you.

MJ 25:59              Perfect. Well, have a great rest of your day. Thank you again. Chat soon.