July 12, 2022


Don’t wait for an emergency; secure your database correctly right out of the gate. Think of everything outside of your database as the wild west.

What can you do to create the most controlled environment possible for all of your most sensitive data?

I invited Robert Buda, President of Buda Consulting, Inc., and an expert in database technology, onto the show to help us learn the value of database security and what you can do today to improve your security measures.

Join us as we discuss:

  • Why database security is undervalued
  • Critical risks to be aware of regarding your database
  • Avoiding a sense of false security with the cloud
  • Ensuring your database is as secure as possible

To hear this episode, and many more like it, you can subscribe to The Virtual CISO Podcast here.

If you don’t use Apple Podcasts, you can find all our episodes here.

Listening on a desktop & can’t see the links? Just search for The Virtual CISO Podcast in your favorite podcast player 

Speaker 1 (00:06):

You’re listening to the Virtual CISO podcast, a frank discussion providing the best information security advice and insights for security, IT and business leaders. If you’re looking for no BS answers to your biggest security questions, or simply want to stay informed and proactive, welcome to the show.

John Verry (00:26):

Hey there, and welcome to yet another episode of the Virtual CISO podcast. With you as always your host, John Verry, and with me today, Mr. Robert Buda. Hey, Bob.

Robert Buda (00:37):

Hey John, how you doing today?

John Verry (00:38):

Good man. So I always like to start easy, especially with slow people like you.

Robert Buda (00:44):

That’s a great introduction.

John Verry (00:47):

I’m just trying to set expectations for the audience pretty low, because-

Robert Buda (00:50):

Well done.

John Verry (00:51):

My expectations are low; theirs should be as well. So can you tell us a little bit about who you are and what it is that you do every day?

Robert Buda (00:59):

Sure, so my whole career has really been around database technology. I started out a long time ago, working some stints in government and private industry as a DBA, taking care of large databases in the New Jersey state government and in a number of industries. And then in ’97 I opened up my own company, and I’ve been really running my own database management company ever since. I now lead a team of database experts in many disciplines, Oracle, SQL Server, MySQL, [inaudible 00:01:31] Postgres, all the major database technologies. And a little point of pride for me, and something that shows my age, is I was actually certified as an Oracle DBA in the first certification exam they ever gave. So that gives you a little bit of a sense of my longevity in the field.

John Verry (01:46):

Yeah. I’m not sure if that’s something you want to brag about or something you want to keep quiet. I think at one point it was brag about, but I think now you might want to think that through a little bit before. All right, if anyone hasn’t picked it up already, Bob and I are long time friends and there’ll probably be a lot of chop busting because there usually is whenever we get together. So before we get down to business, we always ask what’s your drink of choice?

Robert Buda (02:07):

Well, John, of course we’ve been longtime friends. You probably know the answer to this already, but until not too long ago, my drink of choice was always a nice peaty scotch called Lagavulin, especially when I have my occasional cigar, which has become less frequent recently. But, largely your fault, I’ve been switching my tastes and I’ve really kind of gone over to the bourbon side, and I really enjoy some of the nice bourbons that you have on your shelf behind you.

John Verry (02:35):

Yeah. Unfortunately I have noticed that it’s like, you know when you had your kid, when your kids are a certain age, you put little wax marks on the back of bottles. Bob, I just want you to know I’ve been putting wax marks on the back of bottles, so I know which ones you’re hitting, so please stop. Some of them are quite unusual. So actually Bob’s team shares some office space with Pivot Point Security. So that’s why he has access into my office.

Robert Buda (02:58):

And John’s not lying about me stealing some of his bourbon from time to time.

John Verry (03:04):

Yeah, that’s okay. I’ve stolen bourbon off your desk and beer out of the refrigerator that’s yours as well. So we’re even at the end of the day. All right. And I’m very happy that we converted you away from a good… There’s a book, Ken something-or-other wrote it, a famous golf commentator, and it was called A Good Walk Spoiled. And to me, and you’ve heard me say this, scotch is a good whiskey spoiled by peat.

Robert Buda (03:32):

Well, I still enjoy a nice peaty scotch from time to time, but it’s rare.

John Verry (03:36):

All right. So I’m excited to have you on the podcast and it’s crazy that we’ve waited this long to do this, because the topic of database security is one that frankly confuses me. So let’s think of it this way. You go into any bank in America and the biggest lock is logically on the vault. And that makes a hell of a lot of sense, right? The single most critical asset that would have the biggest impact if it was compromised is being protected in the most significant way. Now, I think it would be a fair analogy for many organizations to say that their most critical asset, like money is to a bank, would be information assets, and that the vast majority of information assets are stored in databases of some sort. So if we agree with that, why do you think it is that database security tends to be, in my opinion, exceptionally undervalued? And I guess I should have asked, do you share that opinion? And if so, why do you think so?

Robert Buda (04:33):

Yeah, I definitely share that opinion and I’ve thought about it for a long time. And I think that there are a couple of reasons. One, I think, is that we, everyone, including customers and clients, think of the database a little bit too much like a safe. When we buy a safe, or when a bank buys a safe, that safe is a box and it’s got a front door, and we put our valuables in it and we close that front door and we spin the lock, and there’s a perception that other than that front door, there’s no way into that safe. We take it for granted that the manufacturer of the safe has sealed all of the seams. There’s no back door. So there’s only one attack surface. It’s that front door.

And I think when we think of databases, if we think of them as a safe, we have that same impression, but it’s really very different than that. There are a lot of back doors in a database. So if all we do is secure the front door of the database, we’ve left ourselves way open. So that’s one problem: we look at it too simplistically. Another problem is that I think companies underestimate the risk of insider threats, both intentional and unintentional, so victims of phishing and things of that nature. And we assume most of those threats won’t make it to the database because we’ve already secured the perimeter. So in other words, we are relying too much on that perimeter security. We’re thinking all the threats are outside that wall. So I think that’s number two. And then finally we overestimate how well the perimeter’s been secured in the first place. So those three reasons, I think, are why we put stuff in that safe and we think, okay, we’re done.

John Verry (06:17):

Yeah. So generically, right, when somebody says to me, “We have a risk which is undertreated,” to me, fundamentally, there are two reasons that would happen: either we don’t understand the risk, it’s not well understood, or we’ve deemed that risk to be acceptable. Right. And of course you can’t eliminate all risk, so you’ve always deemed some level of risk acceptable, right. We can’t afford to treat all risk. So it sounds to me, from your perspective, that you think the risk is just exceptionally poorly understood, and it’s not a matter of the fact that you’ve communicated the risk and an average executive or technical leadership person understands the risk and says, “Yeah, I can live with that.”

Robert Buda (07:06):

Well, again, I think it’s a couple of things. I think there’s a combination, right? So in some cases they just don’t know the risk, because it’s hard to know the risk until you do an assessment. So it’s a chicken-and-egg problem. It’s hard to say to them, “Hey, you’ve got all these vulnerabilities.” And they say, “What vulnerabilities?” And we say, “Well, we can’t tell you that until we do the assessment.” And then the assessment comes out with 50 vulnerabilities, but there’s that chicken-and-egg problem. So I think that’s one problem. Another problem is that some companies, even when they know there’s risk, mitigating those risks can be phenomenally expensive and very, very time consuming. One case that comes to mind is a bank that we did some work for a while back that had over 10,000 SQL databases across their portfolio, and doing a full security assessment on 10,000 databases is just a monumental task.

John Verry (08:00):

Yeah. I think we were involved in that project, weren’t we?

Robert Buda (08:02):

That’s correct. And so I think that’s part of it is there is some acceptance of risk simply because it’s sometimes impossible to completely mitigate it.

John Verry (08:11):

Yeah. Let’s not use the word impossible. Let’s say exceptionally difficult.

Robert Buda (08:14):

Impractical.

John Verry (08:15):

Or exceptionally difficult and expensive. Right. Because at some point, if that risk is a risk which is going to… And if you think about a bank that’s got 10,000 databases, obviously it’s a massive bank, right. So if you have a risk that’s going to put that bank out of business, I don’t care if it’s impractical or challenging, it’s a risk that they need to treat. It’s the old analogy I always use: if you have a $500 horse, it doesn’t make sense to protect it with a $50,000 corral, and vice versa. Right. So I do think you’ve got to weigh what the impact is if this risk is realized versus what the cost is, operationally and in hard cost, to mitigate it, and kind of balance that out. Right.

Robert Buda (08:52):

I agree. But I think we’re going to have to do a better job of employing technology to fix that problem because 10,000 databases or on the order of that is really a monumental job. So I think we’re going to need to use AI in that process. We’re going to need to use a lot more automation in that process. And I don’t think there’s a lot of tools out there yet to tackle that, but it’s going to be needed.

John Verry (09:20):

Yeah. Or let’s go the other direction. Right. So you’re referring to the horses being out of the corral. I think the other side of this would be to put a strong control environment in place that prevents the horses from getting… I mean, the reality is having 10,000 databases probably wasn’t a good idea, like a large percentage of those are spun up by developers or DBAs and their test databases and their play data and suddenly you’ve got all these… If we put better controls around the process of creating a database, right, and making sure that it goes through an approval process and it’s a worthwhile database, and we know when it’s going to be retired and all those kind of things, I think that’s the other side of this, right. There’s the solve the problem, it’s contain the swamp versus clear the swamp. I think automation is a clear the swamp thing. I think contain the swamp is probably more process oriented. Wouldn’t you agree?

Robert Buda (10:08):

Totally agree. And so there are two pieces to that. One is, as you say, put controls around the creation of new databases, so things like policy management as we create new databases, but also consolidation. There are many, many databases that can be consolidated, with security managed within the database. And then there’s just less attack surface because you have fewer databases. So I think a combination of consolidation and containing [inaudible 00:10:36] with policies is a good thing.

John Verry (10:37):

Yeah, I would agree. Okay. So we agree that we’ve got a challenge and we agree that it’s risk centric, right, that we feel like folks don’t really understand database risk. So let’s talk about database risk. Give me some ideas on what I would call significant risks, and risk needs to be framed in terms of impact, right? So let’s talk about critical risks that people should be aware of and what the associated impacts would be if those risks were realized.

Robert Buda (11:15):

So the key risks, when I think about what are the, let’s say, top four or top five: first of all, insider threats. So things like non-rigorous management of user and login profiles. That’s low-hanging fruit, that’s pretty easy to solve, and we can grab that. Privileged users, obsolete accounts, default passwords, things like that, again, low-hanging fruit. And they’re very, very often neglected. And the impact of that is fairly obvious: those things are leaving the door open. If you leave the door to that safe open, people get in and they can play. Another threat that is not as well recognized, but is ubiquitous, is non-masked data in QA and dev type environments. We see that all over the place. And we see the production environments well secured and the dev and QA environments not well secured, and yet they have the same data. So that’s, I think, an under-recognized problem.

John Verry (12:19):

And real quick, the impact there would be that data would be subject to the same breach notification. So if you’ve got personally identifiable information, or if you’ve got credit card PANs in there, anything of that nature, technically even that data being outside of your prod environment probably subjects you to breach notification, because many of the people that have access to the dev and QA environments don’t have a reason to have access to said data.

Robert Buda (12:43):

Well, breach notification, and also that’s kind of from a regulatory perspective, but even from a company risk perspective, if you’re storing product formulations and that product formulation data gets to a competitor, that’s real business risk. So even if you don’t have to notify anybody about it, it’s real business risk. And the QA and dev environments sometimes get put onto developers’ local machines, so I think that’s something that’s kind of lying under the covers, and that again is easily fixable. There are plenty of tools out there for masking or obfuscating data.
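
For readers who want to picture what masking before a dev or QA refresh can look like, here is a minimal Python sketch; the column names and hashing rule are purely illustrative, and the commercial masking tools Bob mentions do considerably more (format-preserving values, referential integrity, and so on).

```python
import hashlib

# Hypothetical column names; real masking tools are driven by the actual schema.
SENSITIVE_COLUMNS = {"ssn", "credit_card_number", "email"}

def mask_value(column, value):
    """Replace a sensitive value with a deterministic, non-reversible token."""
    if column not in SENSITIVE_COLUMNS or value is None:
        return value
    digest = hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:12]
    return f"MASKED-{digest}"

def mask_row(row):
    """Mask every sensitive column in a row pulled from production."""
    return {column: mask_value(column, value) for column, value in row.items()}

# A production row before it is allowed anywhere near dev or QA.
print(mask_row({"name": "Jane Doe", "ssn": "123-45-6789", "email": "jane@example.com"}))
```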

John Verry (13:27):

You’re using a lot of big words today, Bob. Did you study the dictionary before? I mean.

Robert Buda (13:31):

I had that on my notepad to make sure I used it today.

John Verry (13:34):

Obfuscation, ubiquitous. I mean, I need a thesaurus to keep up with you today.

Robert Buda (13:40):

I’m trying, I’m trying. Well, I’m not going to say anything else intelligent. So I have to sprinkle in a few smart words. I mean.

John Verry (13:47):

So to your point about that story, right, and to talk about a realized risk, right? And this is public record, right? We were working with the city of New York, and their human resources application and GL applications all run in the financial information systems agency. We were doing work in there on their critical system, a PeopleSoft system, I believe it was. And the developer dumped data onto a laptop. Okay. And it was the records of 440,000 active and 160,000 inactive employees of the city. And he went to lunch at a Korean restaurant and left the laptop in the Korean restaurant. And 23 million dollars later, I think the price was, they were done with the incident. So there are 23 million good reasons to listen to Bob when he says, don’t allow prod data to leak into your dev and QA environments.

Robert Buda (14:41):

And the reason that it happens is because there’s always an emergency. We have to refresh dev. We have to refresh QA, because we’re behind schedule. Let’s just take a snapshot and we’ll worry about cleaning it up later. And then it never gets cleaned up. So that’s, I think, another piece. And then database sprawl, we’ve already talked about; I think that’s a well-known problem that’s getting worse and worse, because it’s getting easier and easier to spin up databases.

John Verry (15:10):

So when you say database sprawl, well, that’s the example of the 10,000 databases at the bank when they probably needed a quarter or a third of that number or something like that.

Robert Buda (15:18):

And that was on-premises stuff. It’s exponentially worse in the cloud. And then another, and this is one I don’t hear discussed much, and maybe there are controls around this, I am not a DevOps expert. However, it seems to me that what I call pipeline leakage has to be a very, very significant risk in the DevOps CI/CD type world, or data engineering world, I should say. We’re taking data out of this very well protected database. We’re creating XML or CSV or JSON files that have all of this data, putting it somewhere else. But now there are these temporary files or holding areas or spreadsheets just all over the place. I think that’s probably a huge problem. It’s not something inside the database that we can control in the database. But I think it’s something we really need to have a strong focus on the cleaning up of, not the securing of that, but the cleaning up of it.

John Verry (16:12):

But I like what you said earlier, and you used a phrase that’s become very popular in security, and I’d never really thought about applying it to a database, but I liked it: application surface management, right. Attack surface management, excuse me. And what you’re really saying is a lot of the risks that are associated with the database are not the database itself, right. There is a finite number there, but data exists in a database for one reason: for applications, people, systems to access that data. And you have to think about the security of your databases as being the database itself, plus all of those surfaces that are being exposed. Right?

Robert Buda (16:53):

Sure. Absolutely. And so that means we need to work together. If we’re securing the database, we also need to work together with the teams that are securing the network, the servers, the storage, and so forth.

John Verry (17:05):

But even then, right. I mean, even beyond that, right, you’ve got all of the ETL tools, all of the business intelligence tools, all the reporting tools, right. They might have either a direct database interface, right, they’re communicating directly or through an ODBC-style connection or something of that nature, or some API. Right. And then you’ve got the applications, right? So if you’re looking at PeopleSoft, if you’re looking at SAP, if you’re looking at Oracle, I mean an Oracle-built application, right, you’ve got this app that’s got a set of credentials and authorizations for different types of users that’s also reaching into the database. And actually we’ve had challenges very often when the application architecture is that the app uses the database as a single user; it makes it impossible, or very difficult, to see if somebody’s doing something they shouldn’t do. Right. Because on one side, I’ve got John Verry doing something, connecting into the application. But on the back side, John Verry’s and Bob Buda’s access, one malicious, one not, are both just the Oracle app or PeopleSoft app talking to the database. Right?

Robert Buda (18:10):

Yeah. So there are really two levels of visibility, if you will. One is the administrative access, and we have total visibility into that. We can audit for that. We can do all kinds of things. We can secure that at the database level. But for the users, for many, many applications, especially web applications, it’s very, very common to have one or two application owner accounts that connect, and the application does all of the security around that. And for those applications, it’s imperative that the applications have an auditing mechanism built in, and there’s no way to make those application auditing mechanisms as robust and bulletproof as a database auditing mechanism. But they’re still very, very valuable.

And it’s important that they’re built in, and many good apps have that, but from a database professional perspective, we can’t control that. So we see all of the transactions that take place for that application. We see it as one user. And so if we run an audit log, if we take a look at the audit logs, we’ll see that database user, let’s call it payroll, made these changes to the table. We have no idea what user did that. So that is definitely a gap. And you need to marry that up with the audit logs that come from the application itself.
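
A rough sketch of the marrying-up Bob describes, assuming both the database audit log and the application audit log can be exported with timestamps; the file names, field names, and correlation window below are hypothetical.

```python
import csv
from datetime import datetime, timedelta

def load_events(path, time_field):
    """Load an exported audit log and parse its timestamp column."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["_ts"] = datetime.fromisoformat(row[time_field])
    return rows

# Database audit log: every change is attributed to the application account ("payroll").
db_events = load_events("db_audit.csv", "event_time")
# Application audit log: records which human user drove each action.
app_events = load_events("app_audit.csv", "logged_at")

WINDOW = timedelta(seconds=2)  # how close in time entries must be to correlate

for db_event in db_events:
    nearby = [a for a in app_events if abs(a["_ts"] - db_event["_ts"]) <= WINDOW]
    users = {a["app_user"] for a in nearby} or {"<unattributed>"}
    print(db_event["statement"], "->", ", ".join(sorted(users)))
```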

John Verry (19:23):

And then you also have the issue, too, if you think about it: a database in a weird way is kind of like a firewall, right? You can configure it to have certain parameters, but in order to be useful, it has to allow access by definition. So if the application has a flaw in the application security logic, right, that means your database can be wide open. Right. I mean, and again, very famous case, and it was another public entity, where they rolled out a major new application. And literally if you put select * in the search field, it returned other people’s personnel records, right. I mean.

Robert Buda (20:02):

SQL injection.

John Verry (20:04):

Yeah. But it wasn’t even SQL injection. I mean, it was just poorly… But if you think about it, your database, right, you’re the guy who’s a database guru and responsible for securing the database. You did everything you could. But what happened was that the application’s not secure. So the application is an attack surface that you… I guess what you need to do is compensate, or be aware of the fact that it’s belt and suspenders. If the application is not well architected, we need to protect our database independent of that. Correct?
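
For anyone less familiar with the class of application flaw being alluded to, a minimal Python sketch shows the difference between pasting user input straight into a query and binding it as a parameter; the table and data are invented for illustration.

```python
import sqlite3  # stand-in for any database driver that supports parameter binding

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personnel (id INTEGER, name TEXT, ssn TEXT)")
conn.execute("INSERT INTO personnel VALUES (1, 'Jane Doe', '123-45-6789')")

search = "x' OR '1'='1"  # hostile "search" input

# Vulnerable: the input is pasted straight into the SQL text,
# so the WHERE clause collapses and every row comes back.
unsafe = conn.execute(
    f"SELECT name FROM personnel WHERE name = '{search}'"
).fetchall()

# Parameterized: the driver treats the input strictly as a value.
safe = conn.execute(
    "SELECT name FROM personnel WHERE name = ?", (search,)
).fetchall()

print("unsafe returned", len(unsafe), "rows; parameterized returned", len(safe))
```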

Robert Buda (20:32):

For sure. And not only the application, right, but everything else the application connects to. So, you mentioned ETL before, right? So if you have an ETL script-

John Verry (20:43):

And by the way, explain what ETL is in case other people don’t know what it is.

Robert Buda (20:46):

Sure. It’s an acronym for extract, transform, and load. And there are variations of that now, as everything is of course changing all the time, but ETL is a well-known acronym for it. And what it means is take data out of your database, transform it, and load it somewhere else: extract it out of the database, transform it, and load it somewhere else. So now you’ve taken data out that might have been protected in your database and it’s somewhere else. And where is that somewhere else? It could be another database. It could be a data lake. It could be a data warehouse. It could be a temporary staging database. That’s a big problem, staging databases; we have those all over the place. It could be an operational data store, an ODS, which is kind of a holding area before things get into a data warehouse. So if we’re really trying to secure the entire environment, it’s really important not just to think about what we’d call the database of record or the system of record, but everywhere else that data might get via the interfaces that come into the database or that pull data out.
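
To make the pipeline-leakage point concrete, here is a deliberately tiny ETL sketch in Python; once the load step runs, the sensitive rows live in a flat file that none of the database’s controls apply to. The table, columns, and file name are invented.

```python
import csv
import sqlite3

# Toy source database standing in for the protected system of record.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE customers (id INTEGER, name TEXT, ssn TEXT)")
source.execute("INSERT INTO customers VALUES (1, 'Jane Doe', '123-45-6789')")

# Extract
rows = source.execute("SELECT id, name, ssn FROM customers").fetchall()

# Transform (trivial here: uppercase the name)
rows = [(rid, name.upper(), ssn) for rid, name, ssn in rows]

# Load into a staging file -- the data now sits outside every database control.
with open("staging_customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "ssn"])
    writer.writerows(rows)
```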

John Verry (21:43):

And then the last thing, of course, is authorized use and control of the data outside the database. So somebody pulls data into Excel to do statistical analysis and that data’s sensitive. And now that data’s sitting in an Excel spreadsheet on someone’s desk. Maybe they’re emailing it to somebody. So like you said, you have this giant lock on the database. You think the front door’s locked, but the guy that was allowed to open the safe and go in and get the money walked out, put it on his desk in the middle of the bank where it’s open to the public, and went to lunch. Right.

Robert Buda (22:16):

Right. And so there are some tools out there that try to mitigate some of these things, things like the data loss prevention tools, which will scan email systems, looking for potential sensitive data in case somebody did what you just said, in case somebody extracted a spreadsheet and mailed it, but it’s certainly a very significant security risk.

John Verry (22:37):

So I think what we figured out is that databases are definitely not well understood from a risk perspective, that protecting the database directly is probably on the easier side of the scale, and that protecting the database across the entire attack surface, of which we really didn’t cover all of it, is the harder part. Right. Because we really didn’t talk about physical access. We didn’t really talk about network access, and we didn’t even talk about configuration management and vulnerability management of the database. Right. When was the last time your database was patched, right?

Robert Buda (23:08):

Yeah. So I’m going to take issue with the way you started off that sentence. You said-

John Verry (23:15):

Is your name on the podcast, or is my name on the podcast?

Robert Buda (23:19):

I don’t care. I’m going to take you through anyway. You said something along the lines of securing the database itself might be one of the easier aspects of this. I actually don’t think that’s the case because there are so many moving parts, even within the database and some of those are getting easier as we get into the cloud. So some of securing just the database piece is getting easier. Some of it’s not. And we can talk through that a little bit if you like, but I would not underestimate the difficulty of actually securing the database itself.

John Verry (23:54):

Fair. I probably was sloppy in my language. So I’ll accept your criticism. I was probably sloppy with my language. It’s a surprisingly small portion of the attack surface, right, would’ve been probably a better way to say it, because there are so many ways… because by its nature, the database needs to give access to its contents in a very broad way. And if we’re not controlling all of that, right. And there are so many attack surfaces that people aren’t aware of.

So you mentioned the cloud, right, which is definitely something I wanted to touch base with you on, because as you are more than well aware, something crazy like 70% of all workloads have been migrated to the cloud over the last six or eight years, some crazy numbers that you see. And we’re increasingly seeing people using, I’m going to call them, different approaches to putting databases in the cloud, right. Maybe a cloud native database, maybe they’re ripping and pushing to the cloud in some cases, right, so they’re just pushing on-premises stuff up into their own EC2 instance. Sometimes they’re using newer technologies. You mentioned Snowflake, you mentioned data lakes, things of that nature. So does moving to the cloud significantly change the risk posture for databases?

Robert Buda (25:12):

So it depends on the risk. Some risks I would say shift. So as I mentioned a minute ago, the infrastructure risk, things like not securing backups properly, right, that diminishes when you get into the cloud, but that’s kind of compensated for by database role, so there are shifting risks there. Some risks are worse in the cloud. Some risks are not as bad in the cloud. Some risks are the same. The business or user related risks are the same; it doesn’t matter. And this kind of comes back to what we were talking about before, with that attack surface and the fact that anything you can pull out now becomes vulnerable. That makes it really, really important to secure access by user. And that’s where you marry the database security with the overall security, by minimizing and really controlling what user accounts can pull data out in the first place.

So you have all these ETL things, the extract, transform, and load, you have all these other things. By controlling the access that those products have, or that those processes have, we gain a greater level of security. That doesn’t change whether you’re in the cloud or whether you’re on premises. But one area where I think we have increased risk is that it becomes more difficult to know where your sensitive data is when we’re up in the cloud. Not really just in the cloud, but newer technologies, things like, not so much newer anymore, things like NoSQL databases, schema-less databases. I think that poses a challenge to knowing where your sensitive data is, because the schemas can change all the time, and at one point you might have sensitive data, at another point you might not.

And even though it was never a hundred percent perfect to look at schema names, table names, column names to know where your PII data is, or any of your sensitive data, it got you a large percentage of the way there by looking at the schema. With unstructured data, you can’t do that. So it’s not really a cloud thing, but it is newer technologies. That’s where our risk, I think, increases a bit, maybe quite a bit.
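
A minimal sketch of the schema-based discovery Bob describes, shown in Python against a toy SQLite database; with Oracle, SQL Server, or Postgres you would query the data dictionary or information_schema views instead, and the keyword list is only illustrative.

```python
import sqlite3  # stand-in; other engines expose schema via information_schema.columns

SENSITIVE_KEYWORDS = ("ssn", "social", "dob", "birth", "salary", "card", "passport")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, full_name TEXT, ssn TEXT, salary REAL)")

findings = []
# SQLite keeps table names in sqlite_master and column names in PRAGMA table_info.
for (table,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    for _, column, *_ in conn.execute(f"PRAGMA table_info({table})"):
        if any(keyword in column.lower() for keyword in SENSITIVE_KEYWORDS):
            findings.append((table, column))

print(findings)  # e.g. [('employees', 'ssn'), ('employees', 'salary')]
```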

John Verry (27:26):

So let me ask a question. I would think, and perhaps this is ignorant because I haven’t really thought about it that deeply before, but I would think that for an average SMB or SME, a platform-as-a-service-style database, on par, is probably a little easier for them to deal with. Because when you think about that shared responsibility matrix, there are certain things, SaaS or PaaS, where you own all the users, you own all the control of the data when it comes into your environment, you’re responsible to understand risk. But the nice thing about PaaS, I would assume, right, is that I don’t have to worry about the underlying operating systems. I don’t have to worry about the vulnerability and configuration management of the database itself. I don’t have to make sure that I’ve got the database controls configured, for baseline transparent data encryption and things of that nature, set up right.

So I would assume, and I know this is a generalization, but generalizations exist for a reason, that running Oracle or Postgres on my own servers, inside of my own environment, versus some type of platform-as-a-service implementation, that on par for most companies, the platform would probably be a little simpler for them to think about, because the attack surface that they’re responsible for is a smaller percentage.

Robert Buda (28:41):

I think that’s true, but I think that there are caveats to that. So one is that I think that the perception… Remember we talked about the safe analogy in the beginning, when I said we have a perception that the database vendor has locked down all the back doors and we only have to worry about one door when it’s a safe, right. So I think there’s a risk that when we have more of the services in the cloud, that impression of the database vendor or host managing all that for us, and us not having to care, increases. And I think that’s risky, because for example, things like having logging configured, that’s not generally configured by default. Having auditing turned on is not generally configured by default. So if we take the approach that, okay, well, now that I’m moving to the cloud, I really don’t have to worry about a lot of the security stuff, I think that could actually make us more vulnerable.

John Verry (29:34):

Right. But that’s the criticality of the shared responsibility model and understanding what is your responsibility and what isn’t, right. No matter what you’re using from a cloud perspective, you’re going to be hosed if you don’t understand what your responsibilities are in a shared responsibility model.

Robert Buda (29:50):

Agreed. The matrices that I’ve seen go all the way from us being responsible for everything to all the way down to just being responsible for data. And I think when we see that on that matrix and we see data, I think that makes us think, well, there’s really not a lot for us to think about. And I think that there’s more for us to think about than that.

John Verry (30:11):

Yeah. By the way, that’s a bad matrix, right, because there’s no possible way that you only have obligation for data, right. I mean, there’s no model, right. The model where you own the least of the stack is SaaS, and the biggest thing you own with the SaaS stack is user access. Right. And that includes, like you said, logging. And so if someone’s putting out a shared responsibility matrix that shows you’re just responsible for data, they should be… I’m trying to think of a nice way to say it. I won’t say it, because there isn’t a nice way to say it.

Robert Buda (30:41):

There is one out there. I know I’ve seen it and it alarmed me because it just makes you feel like you don’t have as much responsibility.

John Verry (30:49):

Oh, I don’t have to worry about it. Yeah. Yeah. And there’s nothing… Yeah. And I like what you said there, because worse than having bad security is having a false sense of security, right, which is what you’re referring to. So I think you touched on this before, and maybe you didn’t, or if you have anything else to add. From your perspective, right, you’ve been doing this a long time, like you said, and not only are you a database maintenance guy, but you’re a database developer, right. Most of your career… I remember the last 10 years have been more maintenance and operational, but before that you did a ton of programming. Right. And that was all done in more the old-school waterfall, conventional development methodologies. So do agile and CI/CD make the problem of database security, the challenges of database security, better or worse?

Robert Buda (31:41):

Well, so I think if you go all in and really do CI/CD in a robust manner, I think it could make it better. But I actually think that the way many people do CI/CD in a partial manner could make it a bigger problem. So I guess what I’m trying to say there is if you really configure robust regression testing into your CI/CD pipeline… I don’t know exactly the right terminology there because I don’t have a lot of background in CI/CD, but I know the concepts. And I know that kind of the utopia there is that every time you make a change, it’s pushed out and it’s completely regression tested, and it can be rolled back if something fails.

I don’t think that ideal is always achieved, but if it is, and if we do build vulnerability testing into that regression testing, then I think we could achieve greater database security. But I think what is most likely the case is that the regression testing built into that is less robust than individuals doing the testing from a security perspective. And so I think we could end up with bigger holes, or holes that we miss.
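
As one hedged illustration of what building vulnerability testing into a pipeline could mean, a pipeline stage might run a database scan and fail the build on critical findings. The scanner command and report format below are entirely hypothetical; this is the shape of the idea, not any particular product’s workflow.

```python
import json
import subprocess
import sys

# Hypothetical scanner invocation; substitute whatever database assessment tool you license.
result = subprocess.run(
    ["db-scan", "--target", "staging", "--output", "scan.json"],
    check=False,
)
if result.returncode != 0:
    sys.exit("database scan failed to run")

with open("scan.json") as report:
    findings = json.load(report)

critical = [item for item in findings if item.get("severity") == "critical"]
if critical:
    for item in critical:
        print("CRITICAL:", item.get("title", "unnamed finding"))
    sys.exit(1)  # a non-zero exit fails this stage of the pipeline

print("no critical database findings; pipeline continues")
```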

John Verry (32:55):

Or that same false sense of security. Right.

Robert Buda (32:58):

Or that false sense of security. Right, right, right. But that is an uninformed opinion. It’s just my opinion on it.

John Verry (33:06):

No, I think that was actually a nuanced opinion and not uninformed. And I think it matches up with what we see with things like infrastructure as code. If you have a really robust implementation of infrastructure as code, you’re putting out some environments that, when we review them, they’re great. But if people are stretched for time, if people don’t have all of the knowledge that they need, if they’re not staying current on the newest switches and capabilities that Amazon or Microsoft introduced… One of the challenges that you have with the cloud is you get this cloud drift, right. So what happens is they add a new feature, and you could not change your code at all, the code that generates all your infrastructure, and your security posture can go down, because what happens is they implement a new feature and they don’t want to turn on that feature by default, you know what I mean? And now what happens is your code needs to adjust to stay at the place where it was.

Robert Buda (33:59):

And if you do that, and if you’re using infrastructure as code, that’s great because it makes it easy to roll all that out. But if you don’t do that, big problems.

John Verry (34:08):

Yeah. No, that was actually a good answer. So we certainly have painted a challenging picture. Is there… So if somebody’s listening and they’re like, oh crap, I’m not going to be able to sleep tonight. Is there a simple one, two, three step, some logical process that you talk through with folks if someone wants to have a better sense of what that attack surface is and ensure their database is at least reasonably secure?

Robert Buda (34:33):

Yeah. So, firstly, to secure the database, especially on-premises, right, there are a lot of layers to that. There’s the physical security. There’s the network security. My team lives at the database level. So we generally rely on our client’s security team to do the physical security, and the network team, or teams like yours, to do the network security. And we take over at the database layer, but we assume that those other teams didn’t do a very good job and that we need to lock the database down as tight as we can, because there are likely going to be people who get through that perimeter security. So that’s kind of the way we try to think about it: we don’t try to secure the perimeter. We work with the other teams, and it should be secured, but we don’t assume it’s secured, because we know there’s always a way in through the perimeter.

And so what we can do is knock down the vast majority of the threats with a few steps. And this is not a hundred percent. We can keep going to get as close to a hundred percent as we can; there’s no hundred percent. But if you do a handful of things, you can knock out so many of the threats. So, you mentioned this before, and it’s a really, really critical one: up-to-date patching. All of the database vendors issue security patches on a somewhat regular basis. Oracle’s is every quarter. For SQL Server, it’s every six months, plus on demand, and they all do it on demand when there’s an actual crisis. So step number one is make sure that we have a cadence, that we’re getting these patches in as soon as they are released and as soon as they can be tested.

So that’s absolutely number one. Then there are the things I mentioned before: removal of default accounts and simple passwords, implementing robust password policies, securing file system folders that may be accessible by the database system or that may contain database files, backup files, control files, things of that nature. So that’s another big one. Minimizing or locking down privileged user and administrator accounts, very simple to do, and very, very critical. Implementing two-factor authentication for direct database accounts, a much bigger factor for administrator accounts and for the kind of applications where every user has their own database account, but critical there. Eliminating things like SQL authentication on SQL Server accounts and using Windows authentication, assuming that you’ve got a good LDAP or some authentication mechanism in place on the network. And then drilling deeper in, and not a lot of places go this far, but this is really where you can start locking down that last 10 or 20%: the user-related issues, cleaning up obsolete user accounts, doing user rights reviews.

This goes across both the kind of databases where you have specific database users and application users. You can do user rights reviews even on an application that uses a single user. Of course, you need to be able to get into the application to do that. So to me, that takes you into the 90s if you go that far and actually do user rights reviews, because that’s where you start to get into trouble. That’s where you get into the place where a bad guy gets a valid user, and that valid user has too many rights, or that valid user is no longer a valid user. They’ve left the company. They’re still there. They still have an account. So to me, those are kind of the top things that knock off the vast majority of vulnerabilities.

And there are scanning tools that will identify many of these things and make it really easy to know you’re not missing them, things like Trustwave’s AppDetective. Imperva’s got tools. So there are tools to do this, and that’s what we use when we do one of these things on client sites. We don’t go in there and try to do everything manually, as long as we can license it for the client. And we use tools like that to lock it down. So I hope that answers your question. That’s what we go to.
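
As one small example of the low-hanging-fruit checks in that list, here is a hedged Python sketch against SQL Server (via pyodbc) that lists SQL-authenticated logins and members of the sysadmin role; the connection string is a placeholder, and the licensed scanning tools Bob mentions cover far more ground.

```python
import pyodbc

# Placeholder connection string; point it at the instance you are assessing.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=dbhost;Trusted_Connection=yes"
)
cursor = conn.cursor()

# SQL-authenticated logins (candidates for elimination in favor of Windows/AD auth).
cursor.execute("SELECT name, is_disabled FROM sys.sql_logins ORDER BY name")
for name, is_disabled in cursor.fetchall():
    print(f"SQL login: {name} (disabled={bool(is_disabled)})")

# Members of the sysadmin fixed server role (the privileged accounts to minimize).
cursor.execute("""
    SELECT member.name
    FROM sys.server_role_members rm
    JOIN sys.server_principals role_p ON rm.role_principal_id = role_p.principal_id
    JOIN sys.server_principals member ON rm.member_principal_id = member.principal_id
    WHERE role_p.name = 'sysadmin'
""")
for (member_name,) in cursor.fetchall():
    print("sysadmin member:", member_name)
```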

John Verry (38:40):

Yeah. Just to kind of double down on a couple things you said. The user account reviews, we have an offering for that, an entitlement review. And I’m always amazed. We used to do them a lot with PeopleSoft: you dump a list into Excel of what the expected entitlements are, and then you would pull a list out of PeopleSoft, and then we would write some code that would compare the two. You just could not believe how often people are being given much broader permissions than we actually think they should have. So I double down on that. And then the one thing you didn’t mention, and I’m sure you were just mentioning a lot of things, not everything, but I think is really also important, is making sure that the logging and auditing are in place, because you really need that. If something’s wrong, right, that becomes your belt and suspenders. Hey, in a perfect world, we’re not giving people overly broad access.

In a perfect world, we’re blocking inappropriate levels of access. We’re blocking certain types of activities from occurring. But if we’re logging and we’re reviewing those logs, or if we’re alerting when particular activity occurs, right, I think we’re going to have a higher chance of catching it. One last question, and I meant to ask this before, but this has also been a challenge, right: we talked about applications and application logic being wrong and that giving access into the database that we don’t expect. Has coding changed? Right? So now that we’ve gone more API-centric with many applications, are there still a lot of stored procedures being used inside of databases? And does that create another distinct challenge because of that segregation of application logic and access logic?
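
A stripped-down sketch of the comparison John describes, assuming the expected entitlements and the entitlements pulled from the application have each been exported to CSV; the file and column names are hypothetical.

```python
import csv

def load_entitlements(path):
    """Return a set of (user, permission) pairs from an exported CSV."""
    with open(path, newline="") as f:
        return {(row["user"], row["permission"]) for row in csv.DictReader(f)}

expected = load_entitlements("expected_entitlements.csv")  # what roles say users should have
actual = load_entitlements("actual_entitlements.csv")      # what the application reports

for user, permission in sorted(actual - expected):
    print(f"EXCESS : {user} has {permission} but should not")
for user, permission in sorted(expected - actual):
    print(f"MISSING: {user} should have {permission} but does not")
```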

Robert Buda (40:20):

Well, so for sure, a lot of stored procedures are still being used. I’ve never seen stored procedures as a unique security risk, in fact, and maybe this is simply because I’m a database guy and I’m biased toward using the database for all that it can do. But I feel better when I see more logic in the database because it’s more controllable. There are access privileges that you can grant to a specific stored procedure. So you can actually lock down who can run a process. I’ve always, and again, this is just maybe me, but I’ve always kind of seen everything outside the database as the wild west and everything inside the database as a controlled environment. And so I’d rather keep database logic, especially database logic that might present a vulnerability, I’d rather see that in the controlled environment.

John Verry (41:10):

So what you’re saying is that you have more granular control if it’s in a stored procedure than if it was in application code.

Robert Buda (41:18):

Right. So for example, I could grant John Verry the privilege to run a procedure called execute payroll or payroll process. And if John Verry leaves the company, I can revoke that. And so if for some reason you manage to get in somewhere else and get back in and you try to run that process, it won’t let you do it. So it just gives you a little bit more control. Not everyone uses that. And in fact, and we haven’t really talked much about this, but robust database systems like Oracle have very sophisticated security mechanisms that are really underused by many, many companies. There’s label security. There are data masking tools. There’s all kinds of things that are there and available and are underused. So this is just an aside.
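
A minimal sketch of the grant-and-revoke control Bob is describing, shown with SQL Server-style syntax driven from Python; the procedure name, account, and connection string are placeholders, and Oracle’s GRANT EXECUTE works much the same way.

```python
import pyodbc

# Placeholder connection string; point it at the database that owns the procedure.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=dbhost;DATABASE=payroll;Trusted_Connection=yes"
)
cursor = conn.cursor()

# Let one named account run exactly one procedure, and nothing else.
cursor.execute("GRANT EXECUTE ON dbo.payroll_process TO [CORP\\jverry]")

# When that person leaves the company, pull the privilege back.
cursor.execute("REVOKE EXECUTE ON dbo.payroll_process FROM [CORP\\jverry]")

conn.commit()
```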

John Verry (42:05):

All right, we’ll have to have you back on to talk about… Once people clean up their databases, we’ll have you back on, we can talk about some of these more advanced tools, but I think we need to get them to do the basics. And that was hopefully the purpose of this meeting, this podcast. All right. So I think we beat it up pretty good, sir. What do you think?

Robert Buda (42:23):

I had a good time.

John Verry (42:24):

All right. Well, wait, wait, you’re not done yet. We’ve got the most difficult question here. We’re going to see if you did your homework. Give me a fictional character, or a real-world person if you prefer, who you think would make an amazing or horrible CISO, or DBA if you prefer, and why?

Robert Buda (42:37):

Okay. So despite the fact that I was a terrible student in school, I actually did my homework. So I did not do a fictional character, but I actually have an amazing CISO candidate that I think would really be a great CISO. And it’s Charlie Munger.

John Verry (42:53):

Is it because he’s the only gentleman doing information security who’s older than you at this point in time?

Robert Buda (43:00):

He actually doesn’t do information security.

John Verry (43:03):

He is older than you though. He’s older than anybody listening to this podcast.

Robert Buda (43:09):

And I just have to add, John, that even though I might look a little older than you, we’re within days of the same age.

John Verry (43:15):

Don’t tell people that. No, we’re not. No, we’re not. If you remember right, we figured out that’s not true. You’re at least a year older than me we figured out.

Robert Buda (43:23):

All right. Well, so here’s why I think Charlie would be-

John Verry (43:26):

So I’m still a young buck.

Robert Buda (43:28):

You are.

John Verry (43:31):

So why is Charlie… Sorry, I got you off track. Why is Charlie Munger going to be a great CISO?

Robert Buda (43:36):

Well, because he’s thought a great deal about what causes us to have blind spots and what causes us to make bad assumptions. He’s even got a really great speech where he goes into 25 tendencies that we have that cause us to make bad assumptions and have blind spots. So I think that would be a phenomenal way of thinking for a CISO because you have to not have blind spots. In particular, he’s got one that he calls thinking backward. I don’t know if you’ve ever studied Charlie Munger.

John Verry (44:08):

No, I got to be honest with you. First of all, it’s a great answer. You did do your homework and second off, will you please send me the link because I think I’m one of those people that has a lot of blind spots. No, I think optimistic people and people who get excited about something, I think they jump to a logical end without thinking through all the steps in the middle. And I think that characterizes me and I think it’s one of the things that would make me a less good CISO. So I actually like the idea of what you’re talking about. So if you would share that, I’d appreciate it.

Robert Buda (44:34):

I’ll be happy to. And for those who don’t know who Charlie Munger is, he’s Warren Buffett’s business partner, and he’s done a great deal of work on what I would really call psychology. But it’s all about how he goes about his thought process of picking companies and things of that nature for investments. And he’s got this idea called thinking backward, and he calls it inversion. And the idea is to look at things backward from the way most people would look at them. So where most of us might think, and I’m applying this now to security, most of us might think, what are the ways that I can keep a hacker out? He would think, what are the ways I can let a hacker in?

John Verry (45:12):

Or what is the way hackers would get in, right?

Robert Buda (45:16):

Yeah. You basically take the problem and turn it upside down. And finally he emphasizes the use of checklists and mental models, which again, I think would be very useful for a CISO to do so that’s my rather conventional, I guess, answer. I’ve heard some more interesting fictional characters. I think somebody mentioned Shrek once.

John Verry (45:34):

Oh, there’s been some nutty ones, but I think my favorite one might have been Eeyore.

Robert Buda (45:38):

Oh, maybe that’s what I’m thinking of.

John Verry (45:40):

Yeah, I was just like… Yeah, but there’s been some great ones. I mean, of course you’ve had Jack Bauer and you’ve had Michael Scott a couple times which is of course the case. All right. If somebody wanted to get in contact with Buda Consulting, what’s the best way to do that?

Robert Buda (45:55):

Probably the best thing to do is connect with me on LinkedIn. It’s Robert Buda, B-U-D-A, on LinkedIn, or my email is [email protected]. You’re welcome to reach me that way too. I’d be happy to connect with anybody and have a chat if you’re interested.

John Verry (46:08):

Mr. Buda, you brought it today. Thank you. It was fun to do this.

Robert Buda (46:12):

John, thank you very much. I enjoyed it. I appreciate the invitation and I will see you for a bourbon one of these days soon.

John Verry (46:17):

Hopefully real soon.

Robert Buda (46:18):

All right, sounds good.

Speaker 1 (46:22):

You’ve been listening to the Virtual CISO podcast. As you probably figured out, we really enjoy information security. So if there’s a question we haven’t yet answered or you need some help, you can reach us at [email protected]. And to ensure you never miss an episode, subscribe to the show in your favorite podcast player. Until next time, let’s be careful out there.