Summary
The security of PoCs and MVPs is often an afterthought, leaving them vulnerable to threats. Threat Model Mentor automates the STRIDE methodology using GPT-based models and makes threat modeling accessible to development teams.
In today’s fast-paced software development environment, where proof of concepts (PoCs) and minimum viable products (MVPs) often drive innovation, security can sometimes be an afterthought. Many smaller projects are subject to the constraints of limited time, budget, and security expertise, yet they face the same threats as large-scale systems. Traditional security processes, such as STRIDE threat modeling, are often overlooked in favor of meeting tight deadlines, potentially leaving PoCs vulnerable to serious risks.
Threat Model Mentor, a custom solution powered by large language models (LLMs), was built to address this critical gap. By automating the STRIDE methodology using GPT-based models, Threat Model Mentor makes it possible for developers and project managers—regardless of their security background—to conduct comprehensive threat modeling, ensuring security is baked into projects from the very beginning. This blog will delve into the challenges of traditional STRIDE modeling, the solution provided by Threat Model Mentor, its impact on PoCs and small-scale projects, and the future potential of modularizing this approach for scalable use in various environments.
The blog will also detail how Threat Model Mentor was used specifically for ServiceNow Assistant, a project focused on automating the analysis of HR support tickets and enhancing the knowledge base using the GPT Assistant’s Vector Store.
STRIDE Threat Modeling at Pure Storage
Case Study: ServiceNow Assistant
ServiceNow Assistant is a cloud-based application designed to automate the analysis of HR support tickets and enhance the organization’s knowledge base. The system ingests user support tickets, processes them, and determines whether an existing knowledge base article could resolve the ticket or whether a new article needs to be drafted. ServiceNow Assistant uses the GPT Assistant’s Vector Store to store all knowledge articles and automate content generation. The system assigns each support ticket one of four outcomes (a minimal sketch of this triage logic follows the list):
- Article exists: An existing article could have fully resolved the support ticket.
- Update an article: An existing article partially addresses the ticket, so an update is suggested.
- Draft a new article: No relevant article exists, so a new one is drafted from the solution provided.
- Human intervention required: The ticket is flagged because manual assistance is needed for resolution.
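To make the four outcomes concrete, here is a minimal, hypothetical sketch of the triage decision. The class names, thresholds, and fields are illustrative assumptions, not the actual ServiceNow Assistant implementation.

```python
from dataclasses import dataclass
from enum import Enum


class TicketOutcome(Enum):
    """The four triage outcomes described above."""
    ARTICLE_EXISTS = "article_exists"          # an existing article fully resolves the ticket
    UPDATE_ARTICLE = "update_article"          # an existing article partially applies; suggest an update
    DRAFT_NEW_ARTICLE = "draft_new_article"    # no relevant article; draft a new one
    HUMAN_INTERVENTION = "human_intervention"  # flag for manual resolution


@dataclass
class KBMatch:
    """Result of searching the knowledge base for a ticket (illustrative)."""
    article_id: str | None
    similarity: float  # 0.0 (no match) .. 1.0 (exact match)


def triage_ticket(match: KBMatch, has_resolution_notes: bool) -> TicketOutcome:
    """Map a knowledge-base match onto one of the four outcomes.

    The thresholds below are placeholders; a real system would tune them
    (or let the LLM make the call) based on evaluation data.
    """
    if match.article_id and match.similarity >= 0.90:
        return TicketOutcome.ARTICLE_EXISTS
    if match.article_id and match.similarity >= 0.60:
        return TicketOutcome.UPDATE_ARTICLE
    if has_resolution_notes:
        return TicketOutcome.DRAFT_NEW_ARTICLE
    return TicketOutcome.HUMAN_INTERVENTION
```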
Figure 1: High Level Diagram (HLD) for ServiceNow Assistant
Application of STRIDE Using Threat Model Mentor for ServiceNow Assistant
Given the complexity and sensitivity of the workflow, a robust security assessment was critical for ensuring that both internal and external threats were addressed from the outset.
1. Applying STRIDE
Step 1: Initiating the STRIDE Session
The session began by initiating an automated dialogue with Threat Model Mentor, which first asked for a high-level description of the system architecture. This description allowed the system to form a preliminary understanding of the major components, data flows, and interactions.
Example input:
GPT responded by asking targeted questions to extract more granular details about the architecture, external dependencies, and trust boundaries.
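To illustrate how such an automated dialogue can be initiated programmatically, here is a minimal sketch using the OpenAI Python SDK. The system prompt, model name, and architecture summary are illustrative assumptions, not the actual Threat Model Mentor configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt; the real Threat Model Mentor prompt is not shown here.
MENTOR_PROMPT = (
    "You are a STRIDE threat-modeling mentor. Ask targeted questions about the "
    "system's components, data flows, external dependencies, and trust boundaries "
    "before enumerating threats."
)

architecture_summary = (
    "ServiceNow Assistant ingests HR support tickets, matches them against a "
    "vector store of knowledge articles, and drafts or updates articles with GPT."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any capable chat model works
    messages=[
        {"role": "system", "content": MENTOR_PROMPT},
        {"role": "user", "content": architecture_summary},
    ],
)
print(response.choices[0].message.content)  # the mentor's follow-up questions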
Step 2: Sharing Detailed System Information
After the initial setup, Threat Model Mentor prompted the team to provide more detailed information about the architecture:
- Internal components:
  - Orchestrator: Manages workflows across the entire system.
  - Ticket Analyzer Tool: Processes support tickets, extracts relevant content, and prepares it for analysis.
  - KB Matching Tool: Searches the knowledge base using a vector search and determines whether an article exists or needs updating.
  - Article Generation Tool: Automatically drafts or updates knowledge articles based on ticket analysis.
  - Vector Store: Hosts all knowledge articles in vector form, allowing efficient retrieval using GPT Assistants.
- External components:
  - OpenAI GPT API: Used for processing ticket summaries and generating solutions.
  - LangChain Framework: Facilitates communication between internal components and external AI models.
  - Google API: Used to log results in Google Sheets and send reports via email to stakeholders.
- Authentication and authorization (a credential-handling sketch follows this list):
  - Basic authentication for ServiceNow API interactions.
  - API keys for external integrations with the OpenAI and Google APIs.
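Because the authentication details above (basic auth for ServiceNow, API keys for OpenAI and Google) are exactly the kind of material a STRIDE session scrutinizes, here is a minimal sketch of loading them from environment variables rather than hard-coding them. The variable names and instance URL are illustrative assumptions.

```python
import os

import requests  # used here only to illustrate a basic-auth call to the ServiceNow API


def require_env(name: str) -> str:
    """Fail fast if a required credential is missing rather than hard-coding it."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Illustrative variable names; not the project's actual configuration keys.
SERVICENOW_USER = require_env("SERVICENOW_USER")
SERVICENOW_PASSWORD = require_env("SERVICENOW_PASSWORD")
OPENAI_API_KEY = require_env("OPENAI_API_KEY")
GOOGLE_API_KEY = require_env("GOOGLE_API_KEY")

# Basic authentication for a ServiceNow table query (hypothetical instance URL).
resp = requests.get(
    "https://example.service-now.com/api/now/table/incident",
    auth=(SERVICENOW_USER, SERVICENOW_PASSWORD),
    params={"sysparm_limit": 1},
    timeout=30,
)
resp.raise_for_status()
```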
Here is the exact list of inputs that are required by GPT to carry out an effective STRIDE session:
- High Level Design (HLD)
- Low Level Design (LLD)
- Application Workflow
- Source Code
- Configuration Files
- List of all the Third-Party Libraries used in the Application
- Authentication and Authorization Information
- Data Protection Steps, if applied
- Logging and Monitoring, if applied
- Security Hardening Measures, if applied
- Known Vulnerabilities
- Sensitive Operations, if any
- Infrastructure Details
The detailed description enabled GPT to understand the system holistically and pinpoint areas where vulnerabilities could emerge, especially in terms of interactions between trusted internal components and less-trusted external services.
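One way to assemble the checklist above into a single session context is to read each artifact from disk and concatenate it into the prompt. The file names below are assumptions for illustration, not the tool's actual ingestion logic.

```python
from pathlib import Path

# Assumed artifact locations; adjust to wherever the design docs and code live.
ARTIFACTS = {
    "High Level Design": "docs/hld.md",
    "Low Level Design": "docs/lld.md",
    "Application Workflow": "docs/workflow.md",
    "Configuration Files": "config/settings.yaml",
    "Third-Party Libraries": "requirements.txt",
}


def build_session_context(artifacts: dict[str, str]) -> str:
    """Concatenate the available inputs into one labeled context block."""
    sections = []
    for label, path in artifacts.items():
        file = Path(path)
        if file.exists():
            sections.append(f"## {label}\n{file.read_text()}")
        else:
            sections.append(f"## {label}\n(not provided)")
    return "\n\n".join(sections)


context = build_session_context(ARTIFACTS)
# `context` can then be sent as a user message in the STRIDE session shown earlier.
```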
Step 3: Generating a Trust Diagram
With the system details provided, Threat Model Mentor generated a trust diagram to visually represent the architecture, highlighting both internal components and external services. The trust diagram showed the various data flows, dependencies, and interactions between components. The prompt for this step also included content from Pure Storage's internal documentation outlining the policy for creating trust diagrams.
Example input:
(Attachment: Guidelines for Threat Modeling & Trust Diagrams)
Using Mermaid markup language (a lightweight diagramming tool), the following diagram was created:
Figure 2: AI-generated trust diagram for ServiceNow Assistant using Threat Model Mentor
This trust diagram highlighted the critical points of interaction, particularly where internal components exchanged data with external services like OpenAI and Google APIs, both of which are located outside the primary trust boundary.
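The generated diagram itself is shown in Figure 2 rather than reproduced as text. As a rough sketch of what such Mermaid output might contain, the snippet below (written as a Python string for consistency with the other examples) groups the internal components inside a trust boundary and draws the external API interactions across it; the node names and groupings are assumptions based on the component list in Step 2, not the tool's exact output.

```python
# A rough sketch of a Mermaid trust diagram for this architecture;
# node names and boundary grouping are illustrative, not the tool's exact output.
TRUST_DIAGRAM = """
flowchart LR
    subgraph Internal["Trust boundary: ServiceNow Assistant"]
        ORC[Orchestrator] --> TA[Ticket Analyzer Tool]
        ORC --> KB[KB Matching Tool]
        ORC --> GEN[Article Generation Tool]
        KB --> VS[(Vector Store)]
        GEN --> VS
    end
    TA -->|ticket summaries| OAI[OpenAI GPT API]
    GEN -->|draft prompts| OAI
    ORC -->|results and reports| GGL[Google Sheets / Gmail API]
"""

with open("trust_diagram.mmd", "w") as f:
    f.write(TRUST_DIAGRAM)  # render with the Mermaid CLI or any Mermaid viewer
```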
2. STRIDE Analysis
The final output was returned in a tabular format capturing the following details (a minimal schema sketch follows the list):
- STRIDE Category: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, or Elevation of Privilege
- Mitigation Step: Recommended mitigations for each identified threat
- Severity: Severity of the threat on a scale of 1 to 5
- Location in Code: The part of the code where the vulnerability was spotted
- Code Fix Example: Suggested code for fixing the vulnerability
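To show how findings in that format could be captured programmatically (for example, to track remediation in a working document like the one in Figure 4), here is a minimal, hypothetical schema. The field names mirror the columns above; the free-text threat description is an extra field added here for readability.

```python
from dataclasses import dataclass
from enum import Enum


class StrideCategory(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information Disclosure"
    DENIAL_OF_SERVICE = "Denial of Service"
    ELEVATION_OF_PRIVILEGE = "Elevation of Privilege"


@dataclass
class ThreatFinding:
    """One row of the tabular STRIDE output (illustrative field names)."""
    category: StrideCategory
    threat: str            # brief description of the threat (added in this sketch)
    mitigation: str        # recommended mitigation step
    severity: int          # 1 (low) .. 5 (critical)
    code_location: str     # file/function where the issue was spotted
    code_fix_example: str  # suggested remediation snippet

    def __post_init__(self) -> None:
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be between 1 and 5")
```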
Figure 3: Threats identified using Threat Model Mentor, along with mitigation suggestions
Figure 4: Sample working document of implementing threat modeling on ServiceNow Assistant
Conclusion
As the demand for secure development practices continues to grow, Threat Model Mentor is leading the charge in making threat modeling accessible to all development teams—regardless of size or expertise. By using GPT-based models to automate and simplify the STRIDE methodology, developers can ensure that even PoCs and MVPs are built with security in mind from day one.
As demonstrated in the ServiceNow Assistant project, the impact of this approach is significant: Even teams without security expertise can conduct robust threat modeling, ensuring that critical vulnerabilities are identified and mitigated early in the development lifecycle.
As we look to the future, the potential for further modularization and industry-specific implementations offers exciting opportunities for scaling this approach across a wide range of applications. By combining automation with deep security insights, Threat Model Mentor is not just a tool for today, but a foundational piece of the security landscape of tomorrow.
Empowering developers, securing projects, and enabling innovation—this is the promise of LLM-driven threat modeling.