Connected and autonomous vehicles can be susceptible to cyber attacks due to the tightly interconnected nature of their communications systems. Securing this environment requires effective mitigation of multiple threat vectors. It is a complex problem, but there is much that can be done, and the work is extremely important. Without secured data systems, truly autonomous vehicles simply cannot exist.
Typically, once a manufacturer has produced a product, many of the engineering disciplines within that company complete their work and end their involvement, moving on to the next project in the pipeline. They are part of the effort that leads up to and results in the manufacture of the product, but they go no further with it.
However, the need for robust, up-to-date automotive cybersecurity continues far beyond the sale, extending through the entire life of the product. OEMs must maintain ownership of their product's security for that whole lifespan, regardless of who owns the vehicle or how long it remains in use. Whatever life expectancy the OEM sets for the product, they must continue to support it, monitoring and addressing vulnerabilities as they arise.
OEMs must maintain a complete and intimate knowledge of potential threat vectors and the areas and technologies they may impact. For example, if an OEM product contains several open source components, the OEM needs to be tied into that environment: the people, the labs, the forums, and whatever else supports that open source software. Staying connected to that community lets the OEM watch for vulnerabilities and maintain an effective overwatch posture.
At LHP, we're looking at not just open source, but anything related to Common Vulnerabilities and Exposures (CVEs). Often, vulnerabilities in open source software get reported on the software's own site but are never reported to the greater community as CVEs. So, we must constantly monitor these situations.
There are different ways to maintain an appropriate level of awareness. It can be done by hand, and LHP does provide that service: every Friday, our team gets together and reviews the week's CVEs to see if any of them impact our customers. But the best way to perform this work is through software. Several companies offer this type of Software Composition Analysis (SCA) tool. SCA tools automate the analysis of the different pieces of software you have, open source or not, and collect all the relevant CVEs.
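To make that weekly review concrete, here is a minimal sketch of pulling the past week's CVEs for a component you ship. It assumes the public NVD CVE API 2.0 endpoint and its documented JSON field names (`vulnerabilities`, `cve`, `descriptions`); the `openssl` keyword is just an illustrative placeholder, and the field layout should be verified against the current NVD documentation.

```python
# Minimal sketch of a weekly CVE review feed, assuming the public
# NVD CVE API 2.0 (https://services.nvd.nist.gov/rest/json/cves/2.0).
# Field names reflect that API's published JSON layout; verify them
# against the current NVD documentation before relying on this.
from datetime import datetime, timedelta, timezone

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_recent_cves(keyword: str, days: int = 7) -> list[dict]:
    """Return CVEs published in the last `days` days that mention `keyword`."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    params = {
        "keywordSearch": keyword,
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

if __name__ == "__main__":
    # e.g., review this week's CVEs touching a component you ship
    for item in fetch_recent_cves("openssl"):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:100])
```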
Many of these SCA tools provide some proprietary method of collecting information. They have different ways to go out and gather the open source information, monitor it, and then present it in a dashboard that summarizes the vulnerability list based on the software in your current Software Bill of Materials (SBOM). Then, you need to come up with a plan to mitigate these issues, prioritizing the critical ones so they can be scheduled for quick resolution.
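A toy sketch of that SBOM-driven prioritization step might look like the following. The component names, versions, and CVE records here are illustrative placeholders, not real data, and the 9.0 CVSS threshold for "critical" is one common convention, not a mandate.

```python
# Hypothetical sketch of SBOM-driven triage: match a (toy) component
# list against a collected vulnerability feed and sort by CVSS score
# so the critical items land at the top of the mitigation plan.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    component: str
    affected_versions: set[str]
    cvss_score: float  # 0.0-10.0; 9.0+ is commonly treated as critical

# Toy SBOM: component name -> version currently shipped
sbom = {"libexample": "1.2.3", "samplenet": "4.0.1"}

# Toy feed, as an SCA tool might aggregate it (placeholder records)
feed = [
    Vulnerability("CVE-2024-0001", "libexample", {"1.2.3", "1.2.4"}, 9.8),
    Vulnerability("CVE-2024-0002", "samplenet", {"3.9.0"}, 5.3),
]

def triage(sbom: dict[str, str], feed: list[Vulnerability]) -> list[Vulnerability]:
    """Keep only vulnerabilities that hit a shipped version, worst first."""
    hits = [v for v in feed if sbom.get(v.component) in v.affected_versions]
    return sorted(hits, key=lambda v: v.cvss_score, reverse=True)

for v in triage(sbom, feed):
    urgency = "out-of-band fix" if v.cvss_score >= 9.0 else "next release"
    print(f"{v.cve_id} ({v.component}): CVSS {v.cvss_score} -> {urgency}")
```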
For example, a critical vulnerability demands that we push the fix out sooner than the regular weekly or quarterly release schedule, because waiting could impact customers. We want to be good partners, not just doing the things that we want to do, but working with our customers and their clients. Protecting them is our utmost priority.
Before production, an audit must be performed to make sure that all the elements are ready and to answer the question: did we cover everything? Penetration testing and validation must also be performed and successfully completed before production begins.
SCA tools can help security engineers and non-specialists alike; one does not have to be a security engineer to use them. LHP can make the software easy enough to run that a person can scan their current software stack with it. If an issue is discovered, they can direct it to a security engineer who can perform a detailed evaluation and determine the best course of action.
How do you leverage your technical people and security people to achieve a balance between both sides of the relationship? You don't want to overwhelm your security folks. We see it a lot: security people who are trying to do New Product Development (NPD) work, writing security requirements and going through threat modeling, also have to take on Current Product Support (CPS) in many cases. It can be overwhelming, because it boils down to the question of how many products one person can be asked to support. Do they have to support everything that has been developed and assigned to them since they started working there?
These are some of the problems that I saw when I was working at one of the Big 3 U.S. auto manufacturers. People just quit because they were working on 10-year-old products. It was great that they had all the knowledge, but the work kept piling up, and there was no time to do the new work because they were too busy with CPS.
So, there has to be a good way to transfer work responsibilities from one person to the next, moving not just the skills but the effort from NPD to CPS. I think this is where certain tooling really helps. You're not just trying to make the job faster and easier; security costs a lot of money and takes a lot of time, so let's put tools in place to help. There are some out there; you just have to select which ones are best for your situation.
How do engineers and cybersecurity personnel maintain and hand off stewardship of legacy products when someone leaves a team? There are industry standards for production support and for NPD, for sure. But for transferring a product from NPD to CPS as it moves from design and production into its post-sale lifecycle? I can't think of a standard for that. It is something that organizations typically figure out for themselves: how do we manage this handoff in the best possible way? Right now, I think that's a big miss. There is room for improvement in most organizations, and the handoff is currently not being managed as effectively as it needs to be.
Instead, we in the industry are seeing that when people get overwhelmed, they quit and go to the next job. They stay there for several years, get overwhelmed again, quit, and move on once more. It is a vicious cycle. Purging their queue by quitting their present job to start with a clean slate in the next one seems to be a common way of managing that personal workload right now. But it is so wasteful. We must figure out a better way to manage a person's security workload as they stay in the organization. How do we move products effectively from one person to the next and still maintain a high security posture for the life of the product? Solving this issue is critical to retaining top security talent.
Processes and procedures must be in place, and there is guidance out there for that. But there are also things like bug bounties that can help us find additional problems with our products. Bug bounties are a great way to find issues we might otherwise overlook.
If somebody finds vulnerabilities in your product, you must provide a mechanism to report those bugs so they can be addressed. I've read stories where people found bugs in products but had no way to tell the organization, "Hey, I found these." A lot of people like to get credit for finding bugs, but they cannot take that credit until they notify the organization. Having a proper internal process to investigate external bug reports, and an easy method for these individuals to report bugs, strengthens not just the products but also the company's commitment to cybersecurity.
If you provide an efficient and effective mechanism for people to report bugs, they will use it. They want to get credit for something that they spent hours finding, and rightfully so. They report it to us, we acknowledge it, turn our attention inward, figure it out, and fix it. Then, the person who found the bug can publish the accomplishment and get their name out there. The process works to the benefit of both sides.
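One widely adopted way to publish such a reporting mechanism is a security.txt file (RFC 9116), served at /.well-known/security.txt on the organization's website. The sketch below is illustrative only; every address and URL is a placeholder, and the field set shown is a subset of what the RFC allows.

```
# Illustrative security.txt (RFC 9116), served at /.well-known/security.txt
# All addresses and URLs below are placeholders.
Contact: mailto:security@example.com
Contact: https://example.com/security/report
Expires: 2026-12-31T23:59:59.000Z
Encryption: https://example.com/pgp-key.txt
Acknowledgments: https://example.com/security/hall-of-fame
Policy: https://example.com/security/disclosure-policy
Preferred-Languages: en
```

The Acknowledgments field points to a public credit page, which speaks directly to that desire for recognition, and the Policy field tells researchers up front what the organization considers in-scope and how disclosures will be handled.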
White hat security researchers, or hackers, whatever you want to call them, write about their work and have a great connection with the community, finding these bugs and building the knowledge and experience that empower others to do the same. Sometimes you can pay them a bug bounty fee, but it's not always the money that motivates people. It's the notoriety, the recognition, all those adrenaline-based motivations. People like solving these challenges, especially Ph.D. candidates. Say a person is working on their Ph.D. They want to find something on a particular product to showcase their abilities and get those accomplishments published. It adds legitimacy to their name.
Within the organization, once that bug bounty is created, you've got to integrate a process to handle the reports: okay, the report came in, I have a list of people who are responsible for this, and the process starts moving so we can get it solved. I think some organizations may not have that process in place. They'll launch a program without even thinking about how bugs get reported back to the organization. So, we must think about the most effective way to leverage the researcher network, and then implement that plan.
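As a minimal sketch of what that intake process can look like, the example below routes an incoming report to a responsible owner and attaches a first-response deadline based on reported severity. The team names, areas, and deadlines are hypothetical placeholders, not a prescribed process.

```python
# Hypothetical sketch of the intake side of a bug-bounty program:
# route an incoming report to the responsible owner and attach a
# response deadline based on reported severity. All names, areas,
# and SLA values are illustrative placeholders.
from dataclasses import dataclass

# Who owns what: affected area -> responsible team (placeholders)
OWNERS = {
    "telematics": "telematics-security@example.com",
    "infotainment": "ivi-team@example.com",
    "gateway": "network-security@example.com",
}

# How fast we promise a first response, in days, by reported severity
RESPONSE_SLA_DAYS = {"critical": 1, "high": 3, "medium": 7, "low": 14}

@dataclass
class BugReport:
    reporter: str
    area: str
    severity: str
    summary: str

def route(report: BugReport) -> tuple[str, int]:
    """Return (owner, days-to-first-response) for an incoming report."""
    owner = OWNERS.get(report.area, "product-security@example.com")
    sla = RESPONSE_SLA_DAYS.get(report.severity, 7)
    return owner, sla

report = BugReport("researcher@uni.example", "telematics", "critical",
                   "Remote code execution via OTA update parser")
owner, sla = route(report)
print(f"Assign to {owner}; first response due within {sla} day(s).")
```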
A White Hat is a hacker who tests computer systems for possible vulnerabilities and then reports them to the proper people so the issues can be fixed, rather than exploiting them for nefarious reasons. They are part of a fraternity known as "penetration testers" or "ethical hackers": experts who are hired to hack into computer systems or find other ways to gain access. They will spend days or weeks searching business and governmental systems to seek out weaknesses and vulnerabilities, and then come up with recommendations for fortifying those systems against such threats.
How do the White Hats pick up the knowledge to do all of this? What are some of the common vectors for getting into this line of work?
Learning how to hack ethically requires a significant time investment and a lot of practice. There are several paths in, and one of the most common is simply starting at a young age. There are kids, 13 and 14 years old, who are already hacking systems. They typically start with learning the basics of operating systems and then move into other areas. There are all kinds of online courses that guide learners in organizing their minds to think the right way about how to break something, and there is a lot of information on the internet, including videos and websites that can help. It is really all about diving in and doing the work: investing the time, spending hours and hours hacking and breaking into things, and trying to understand how it all works.
To be effective in this realm, one must have a real passion for breaking things, taking them apart, and seeing how they work. It is kind of like an engineering mindset but in reverse. The best ethical hackers possess a curiosity that can't be quenched until they solve that puzzle.
White Hats don't do it because it's a job; they do it because they are passionate about it. Hacking and breaking things can be frustrating, so commitment, perseverance, and stamina are paramount. When they finally break into something, it is a significant adrenaline rush, and they get excited because they solved the mystery and discovered something new before anyone else. It is a legitimate sense of accomplishment.
To clarify, in ethical hacking, you do not go out and break somebody's website. Instead, you set up a virtual environment that replicates the target's online presence, perhaps with a vulnerable operating system on it. Then, you practice your skill sets in that virtual environment: learn the OS, learn all the different pieces, and learn how things work at a deep, intimate level for both the OS and the machine itself. You always practice on the virtual machine, not the real thing.
There is a lot of knowledge that a White Hat must obtain, and it really comes by doing. It's like learning to play the piano: it takes many hours to become a Mozart, and Mozart invested a ridiculous amount of time practicing. If you adopt the same approach to breaking things, you can amass a tremendous skill set over time. Think about penetration testing (pentesting): those folks are dedicated to that craft and do nothing else but pentest. Even if an organization is focused on defining security requirements and process work, it also needs a dedicated group of people who keep learning how to break things.