
AI Chat — Complete Guide

AI Chat is the conversational interface in AI Labs that lets you ask questions about your vulnerability data in plain English. It is backed by real database queries against your ThreatWeaver tenant data — answers are grounded in your actual findings, not generated from the LLM's training data.

This guide covers supported query types, how to write effective questions, example conversations, and what AI Chat cannot do.


How It Works

When you submit a message to AI Chat, three things happen in sequence:

Step 1: Parameter Extraction (local, no LLM call)

The AI parameter extractor (v4.0) parses your message and extracts structured filter conditions. It recognizes severity levels, time windows, asset attributes, CVE IDs, IP addresses, compliance frameworks, sort directions, and dozens of other filter types. This step runs entirely on the backend — no external LLM call is needed.

For example, the query "show me critical vulnerabilities on internet-facing Windows servers discovered in the last 30 days" extracts:

```json
{
  "severities": ["critical"],
  "isPublic": true,
  "osCategory": "windows_server",
  "timeWindow": { "days": 30, "label": "last 30 days" }
}
```
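Conceptually, this extraction step behaves like rule-based pattern matching over the message text. The sketch below is a minimal illustration of that idea, assuming a tiny, hypothetical filter vocabulary — the real extractor recognizes dozens of filter types:

```python
import re

# Hypothetical subset of the severity vocabulary the extractor recognizes.
SEVERITIES = {"critical", "high", "medium", "low", "informational"}

def extract_filters(message: str) -> dict:
    """Tiny sketch of rule-based parameter extraction (illustrative only)."""
    text = message.lower()
    filters = {}

    # Severity keywords anywhere in the message.
    severities = [s for s in SEVERITIES if s in text]
    if severities:
        filters["severities"] = severities

    # Network-exposure phrases map to a boolean flag.
    if "internet-facing" in text or "public-facing" in text:
        filters["isPublic"] = True

    # Relative time windows like "last 30 days".
    m = re.search(r"last (\d+) days", text)
    if m:
        days = int(m.group(1))
        filters["timeWindow"] = {"days": days, "label": f"last {days} days"}

    return filters

print(extract_filters(
    "show me critical vulnerabilities on internet-facing servers from the last 30 days"
))
```

Because this step is deterministic string analysis rather than generation, it runs locally and never needs an external LLM call.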

Step 2: Database Query (real data)

The structured filters are passed to the AI Query Engine, which builds and executes a PostgreSQL query against your tenant's vulnerability and asset data. The results are real numbers from your database — not estimates.
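As a sketch of how structured filters could translate into a parameterized PostgreSQL query — the table and column names below are assumptions for illustration, not ThreatWeaver's actual schema:

```python
def build_query(filters: dict) -> tuple[str, list]:
    """Translate extracted filters into a parameterized SQL query.

    Schema names (findings, assets, severity, ...) are illustrative
    assumptions. Parameter placeholders keep the query injection-safe.
    """
    clauses = ["f.tenant_id = %s"]
    params = ["<tenant-id>"]  # placeholder; supplied per tenant at runtime

    if "severities" in filters:
        clauses.append("f.severity = ANY(%s)")
        params.append(filters["severities"])

    if filters.get("isPublic"):
        clauses.append("a.is_internet_facing = TRUE")

    if "timeWindow" in filters:
        clauses.append("f.discovered_at >= NOW() - make_interval(days => %s)")
        params.append(filters["timeWindow"]["days"])

    sql = (
        "SELECT COUNT(*) FROM findings f "
        "JOIN assets a ON a.id = f.asset_id "
        "WHERE " + " AND ".join(clauses)
    )
    return sql, params

sql, params = build_query({
    "severities": ["critical"],
    "isPublic": True,
    "timeWindow": {"days": 30, "label": "last 30 days"},
})
```

The key property is that every number in the response traces back to a query like this, executed against live tenant data.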

Step 3: LLM Synthesis (one LLM call per response)

The query results are packaged with the original question and sent to the configured LLM provider. The LLM synthesizes the data into a natural language response. For count queries ("how many critical vulnerabilities do we have?") this is a sentence. For analytical queries ("summarize our AppSec scan results") this is a paragraph or more.

The response always includes a reference to the underlying data — you can click View Source Data to see the raw query results.
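Step 3 can be pictured as a single prompt assembly: the real query results and the original question go into one LLM call. The message structure below is an assumption for illustration (the actual provider call and prompt wording are internal); only the principle matters — the LLM sees the data, it does not invent it:

```python
import json

def build_synthesis_prompt(question: str, query_results: dict) -> list[dict]:
    """Package real query results with the user's question for one LLM call."""
    system = (
        "You are a security data assistant. Answer ONLY from the query "
        "results provided. If the data cannot answer the question, say so."
    )
    user = (
        f"Question: {question}\n\n"
        "Query results (from the tenant database):\n"
        f"{json.dumps(query_results, indent=2)}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_synthesis_prompt(
    "How many critical vulnerabilities do we have?",
    {"count": 143, "internet_facing": 12},
)
```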


Supported Query Types

1. Count and Inventory Queries

Ask how many vulnerabilities, assets, or findings exist matching certain criteria.

Examples:

  • "How many critical vulnerabilities do we have right now?"
  • "How many open findings are on production assets?"
  • "Count the vulnerabilities discovered in the last 7 days"
  • "How many assets have at least one critical finding?"
  • "How many vulnerabilities are in the CISA KEV list?"
  • "How many unpatched high-severity findings are older than 60 days?"

What you get: A specific number with a brief narrative context. For example: "You currently have 143 critical-severity vulnerabilities in open state. 12 of them are on assets tagged as internet-facing."

2. Asset-Focused Queries

Ask which assets are most exposed, or get a breakdown of vulnerabilities on a specific asset or group.

Examples:

  • "Which assets have the most critical vulnerabilities?"
  • "Show me the top 10 most vulnerable servers"
  • "What vulnerabilities are on the payments-api host?"
  • "List all findings on assets in the production environment"
  • "Which internet-facing assets have critical severity findings?"
  • "Show me vulnerabilities on Windows servers older than 90 days"
  • "Which assets have no agent installed?"

What you get: A ranked list of assets with their finding counts, or a summary of findings on a specific asset.

3. Vulnerability-Focused Queries

Ask about specific vulnerability types, CVEs, or plugin families.

Examples:

  • "Are there any Log4Shell (CVE-2021-44228) findings in our environment?"
  • "Show me all SQL injection findings from the AppSec scanner"
  • "What SSRF vulnerabilities do we have?"
  • "List all findings in the SSL/TLS plugin family"
  • "Show me vulnerabilities with an EPSS score above 0.7"
  • "Which vulnerabilities have public exploit code available?"
  • "Find all findings related to RCE"
  • "Show me outdated software findings on Linux servers"

What you get: A list of matching vulnerabilities with asset context, or a count with the top examples.

4. Trend and Velocity Queries

Ask how your security posture has changed over time.

Examples:

  • "How many vulnerabilities did we fix last month?"
  • "Is our remediation velocity improving or declining?"
  • "How many new criticals appeared this week vs. last week?"
  • "What is our average time to remediate critical findings?"
  • "How has our WeaverScore changed over the last 90 days?"
  • "Are we adding findings faster than we are closing them?"
  • "Compare this quarter's findings to last quarter"

What you get: A trend narrative with directional language ("you closed 47 findings last month vs. 31 the month before — a 51% improvement") and the key data points.

5. Compliance and Framework Queries

Ask about exposure relative to specific compliance frameworks.

Examples:

  • "Which open findings are relevant to our PCI-DSS audit?"
  • "How many SOC 2 control gaps do we have?"
  • "Show me all OWASP Top 10 findings from the last scan"
  • "What is our ISO 27001 compliance exposure?"
  • "Are there any GDPR-relevant data exposure findings?"
  • "Which findings affect our PCI in-scope assets?"

What you get: A count and summary of findings mapped to the requested framework, organized by control category where applicable.

6. AppSec and Scan-Specific Queries

Ask about findings from the AppSec scanner, specific assessments, or scan history.

Examples:

  • "Summarize the AppSec scan results from last week"
  • "What did the last crAPI scan find?"
  • "How many false positives were removed from the last scan?"
  • "Which AppSec findings have the highest confidence scores?"
  • "Show me all BOLA findings from the AppSec scanner"
  • "What is the true positive rate for the last assessment?"
  • "List the unresolved AppSec findings from the last 3 scans"

What you get: A scan summary or finding list with confidence scores, finding types, and remediation status.

7. Remediation and Workflow Queries

Ask about the status of remediation work and ticket assignments.

Examples:

  • "Which critical findings have no remediation ticket?"
  • "Show me findings assigned to the infrastructure team"
  • "What are the oldest unresolved findings in the system?"
  • "Which findings are past their SLA deadline?"
  • "How many exceptions have been approved this quarter?"
  • "Which findings are in the Fix Plan work queue?"

What you get: A list of findings matching the workflow criteria, with status and assignment information.

8. Prioritization Queries

Ask what to work on first based on risk signals.

Examples:

  • "What should we fix first this week?"
  • "Give me the top 5 most urgent vulnerabilities"
  • "Which findings have both high CVSS and high EPSS scores?"
  • "What are the highest priority findings on customer-facing systems?"
  • "Show me findings on critical assets with available exploits"
  • "What is the most impactful thing we can fix today?"

What you get: A prioritized list using WeaverScore logic — combining CVSS, EPSS, asset criticality, and exploit availability — with a brief explanation of why each item is ranked where it is.
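The kind of ranking described above can be sketched as a weighted combination of risk signals. The weights and formula below are illustrative assumptions, not WeaverScore's actual definition:

```python
def priority_score(cvss: float, epss: float, asset_criticality: float,
                   exploit_available: bool) -> float:
    """Illustrative prioritization score (NOT the real WeaverScore formula).

    cvss is on 0-10; epss and asset_criticality are on 0-1.
    """
    score = (cvss / 10) * 0.4 + epss * 0.3 + asset_criticality * 0.2
    if exploit_available:
        score += 0.1  # bonus when public exploit code exists
    return round(score, 3)

# An internet-facing SQL injection: CVSS 9.1, EPSS 0.83,
# on a maximally critical asset, with an exploit available.
print(priority_score(9.1, 0.83, 1.0, exploit_available=True))
```

Whatever the exact weighting, the point is that CVSS alone does not drive the ranking — exploitability and asset context shift items up or down.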


Writing Effective Questions

AI Chat understands a wide range of natural language phrasing, but some patterns produce better results than others.

Be Specific About Time Windows

Instead of "recent findings," say "findings from the last 30 days" or "findings discovered this week." The parameter extractor understands:

  • "last 7 days", "last month", "this quarter", "last 90 days"
  • "older than 60 days", "more than 3 months old"
  • "discovered before January", "found after the last scan"

Name the Asset Attribute You Mean

Instead of "important systems," say "production systems" or "internet-facing servers" or "PCI-scope assets." The extractor understands:

  • Environment tags: "production", "staging", "development"
  • Network exposure: "internet-facing", "public-facing", "external"
  • OS type: "Windows servers", "Linux", "macOS workstations"
  • Asset criticality: "critical assets", "high-priority systems"

Use Standard Severity Terms

"Critical", "high", "medium", "low", and "informational" are recognized. So are synonyms: "severe", "urgent", "moderate", "minor". You can also negate them: "not critical", "exclude low severity", "anything above medium."

Combine Filters Naturally

You do not need special syntax. Natural conjunctions work:

  • "critical findings on Windows servers in production discovered in the last 30 days"
  • "high and critical vulnerabilities with exploit code that are older than 90 days"
  • "AppSec findings on payment endpoints that are not yet ticketed"
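For instance, the second query above ("high and critical vulnerabilities with exploit code that are older than 90 days") might extract filters along these lines — field names beyond the earlier example are illustrative assumptions:

```json
{
  "severities": ["critical", "high"],
  "hasExploit": true,
  "olderThanDays": 90
}
```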

Ask Follow-Up Questions

AI Chat maintains conversation context within a session. After asking "show me critical vulnerabilities on production servers", you can follow up with:

  • "Which of those have public exploits?"
  • "How long have they been open?"
  • "Generate a fix plan for the top one"

The conversation thread is preserved until you start a new chat or clear the history.
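Follow-ups work because each turn is resolved against the filters accumulated earlier in the session. A minimal sketch of that merging, with assumed behavior (the real context handling is internal to AI Chat):

```python
def resolve_follow_up(session_filters: dict, new_filters: dict) -> dict:
    """Merge a follow-up turn's filters onto the session context.

    Illustrative assumption: new filters refine or override earlier
    ones, while filters the follow-up does not mention persist.
    """
    merged = dict(session_filters)
    merged.update(new_filters)
    return merged

# Turn 1: "show me critical vulnerabilities on production servers"
session = {"severities": ["critical"], "environment": "production"}
# Turn 2: "which of those have public exploits?"
session = resolve_follow_up(session, {"hasExploit": True})
print(session)
# {'severities': ['critical'], 'environment': 'production', 'hasExploit': True}
```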


Example Conversations

Example 1: Preparing for a Security Review

Analyst: "I'm preparing for our quarterly security review. Give me a high-level summary of our current posture."

AI Chat: "Here is a snapshot of your current security posture:

You have 1,247 open vulnerabilities across 412 assets. Of these, 89 are critical-severity and 334 are high-severity. Your WeaverScore is currently 62/100, down 4 points from last quarter primarily due to 23 new critical findings identified in the most recent Tenable scan.

Three areas need executive attention: (1) 14 critical findings on internet-facing systems have been open for more than 30 days; (2) 7 findings are in the CISA Known Exploited Vulnerabilities list; (3) the payments-cluster has 4 critical findings that are relevant to your upcoming PCI-DSS audit.

On the positive side, your team closed 87 findings last month, a 34% improvement over the previous month. Remediation velocity is trending in the right direction."


Analyst: "Tell me more about those 14 critical findings on internet-facing systems."

AI Chat: "The 14 critical findings on internet-facing assets break down as follows:

  • 6 findings are related to outdated TLS configurations (TLS 1.0/1.1 still enabled) across 3 web servers
  • 4 findings are Apache HTTP Server vulnerabilities (CVE-2024-38473 and CVE-2024-38476) on 2 assets
  • 2 findings are missing security headers (HSTS, X-Frame-Options) on the customer portal
  • 2 findings are SQL injection vulnerabilities identified by the AppSec scanner on the api-gateway host

The oldest of these has been open for 67 days (the Apache findings). All 14 have remediation tickets in the system; 9 are assigned and in progress, 5 are unassigned."


Example 2: AppSec Scan Follow-Up

Analyst: "What did the AppSec scan find last week?"

AI Chat: "Last week's AppSec assessment (Assessment ID: cfefcdcb) completed on April 5. Here is the summary:

21 findings were identified before deduplication. After removing duplicates and applying heuristic filters, 12 customer-visible findings remain. The scan used white-box configuration against the crAPI target.

By category: 3 broken object-level authorization (BOLA), 2 authentication weaknesses, 2 SQL injection, 1 SSRF, 1 broken function-level authorization (BFLA), 2 security misconfiguration, 1 mass assignment.

Confidence distribution: 8 findings scored above 0.85 confidence, 3 scored 0.65–0.85, 1 scored below 0.65. The low-confidence finding (the BFLA) was flagged for manual review.

Compared to the previous crAPI scan (R21), the key new finding is the BOLA on the vehicle endpoints — this was not detected in R21."


Analyst: "Which of those should we prioritize for remediation?"

AI Chat: "Based on WeaverScore ranking, here are the top 3 for immediate attention:

  1. SQL injection on /api/v1/products/search — CVSS 9.1, EPSS 0.83, internet-facing endpoint. An attacker with no authentication could dump the database. Estimated fix effort: Medium (2–3 days). Fix: parameterized queries in the product search handler.

  2. SSRF on /api/v1/videos/convert — CVSS 8.6, EPSS 0.71, can reach internal services. Estimated fix effort: Small (4–8 hours). Fix: URL validation and allowlist for the video converter.

  3. BOLA on /api/v1/vehicle/{vehicleId} — CVSS 8.2, EPSS 0.55. Low-privileged users can access other users' vehicle records. Estimated fix effort: Medium (1–2 days). Fix: ownership check before returning vehicle data.

Would you like me to generate fix plans or tickets for any of these?"


Example 3: Compliance Gap Analysis

Analyst: "We have a PCI-DSS audit in 6 weeks. What do we need to worry about?"

AI Chat: "For your PCI-DSS audit in 6 weeks, here is what currently requires attention in ThreatWeaver:

Requirement 6.3.3 (Critical patches within 1 month): You have 7 critical-severity vulnerabilities on PCI in-scope assets that have been open for more than 30 days. These are the most audit-critical items — PCI-DSS 4.0 requires critical patches within one month of release for in-scope systems.

Requirement 6.4 (Web application protection): The 2 AppSec-identified SQL injection findings are on customer-facing web applications that would likely be considered in-scope for PCI-DSS 6.4.2 (application-layer protection). These need to be remediated or have WAF coverage documented.

Requirement 2.2.1 (Hardening standards): 6 findings relate to enabled default services and insecure configurations (TLS 1.0, SSHv1, default admin credentials) on in-scope systems.

Requirement 8 (Authentication controls): No open findings in ThreatWeaver directly map to Req 8, but the authentication weakness finding from the AppSec scan should be reviewed.

Total: 15 findings are PCI-DSS relevant. I recommend generating a compliance gap report from the Exposure Management module for the formal audit documentation."


Example 4: Team-Level Remediation Planning

Analyst: "What is the infrastructure team responsible for fixing this sprint?"

AI Chat: "The infrastructure team currently has 23 findings assigned in ThreatWeaver:

  • 4 critical, 11 high, 6 medium, 2 low severity
  • 8 findings are overdue (past SLA deadline)
  • 5 have fix plans generated, 18 have no fix plan yet

The 8 overdue items by severity:

  1. CVE-2024-1234 — OpenSSL on load-balancer-01 (critical, 43 days overdue)
  2. CVE-2024-5678 — Apache on web-02 (critical, 38 days overdue) [... and 6 more]

Would you like me to generate fix plans for the findings that do not have one yet?"


What AI Chat Cannot Do

Understanding the limitations prevents frustration and ensures you use the tool effectively.

Cannot Access External Data

AI Chat queries your ThreatWeaver database only. It cannot:

  • Look up current CVSS or EPSS scores from NVD in real time (scores reflect what was ingested at scan time)
  • Check whether a CVE patch has been released today if it was not in the database at ingestion time
  • Browse vendor advisories or NVD to find remediation steps
  • Access your Jira instance to check ticket status unless Jira integration is configured

Cannot Perform Actions

AI Chat is read-only. It cannot:

  • File tickets in Jira or ServiceNow
  • Update finding status or assign findings
  • Trigger new scans
  • Approve or reject exceptions

Use the appropriate module for these actions. AI Chat can point you to the right place: ask "how do I assign this finding?" and it will respond with navigation guidance such as "Go to Exposure Management → Finding Detail → Assign."

Cannot Answer Questions Outside Your Data Scope

AI Chat grounds its answers in your database. It cannot:

  • Tell you the industry average remediation time (it does not have benchmark data)
  • Compare your posture to peer organizations
  • Answer questions about vulnerabilities in systems that ThreatWeaver has not scanned
  • Predict future risk without historical trend data to draw from

If you ask a question it cannot answer with your data, AI Chat will tell you this explicitly rather than fabricating an answer.

Cannot Remember Across Sessions

Conversation context is preserved within a single session but is cleared when you start a new chat or log out. AI Chat does not remember that you asked about the PCI audit last week — you need to re-establish context in each session.

Cannot Guarantee Synthesis Accuracy

For count queries ("how many critical vulnerabilities?"), the number is exact — it comes directly from the database query. For analytical synthesis ("is our remediation velocity improving?"), the LLM is interpreting the data and there is some risk of mischaracterization. Always click View Source Data to verify the underlying numbers when making decisions based on AI Chat analysis.

Cannot Replace a Security Expert

AI Chat accelerates research and summarization. It does not replace a security analyst's judgment on complex risk decisions, architectural questions, or novel threat scenarios. Use it to find the data faster, not to make security decisions for you.


Tips for Power Users

Use Count Mode for Quick Checks

Prefix with "how many" or "count" to get a fast number without a full narrative: "How many critical findings are open?" returns "You have 89 critical findings currently open" rather than a full summary paragraph.

Use Sort Modifiers

Add "newest first", "oldest first", "highest CVSS", "lowest EPSS" to control how results are ranked in the response.
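These modifiers surface as sort parameters in the extracted filters; hypothetically, "show me the top 10 findings, highest CVSS first" might extract something like the following (field names are illustrative assumptions):

```json
{
  "sortBy": "cvss",
  "sortDirection": "desc",
  "limit": 10
}
```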

Ask for the Top N

"Show me the top 5 most vulnerable assets" returns the five highest-scoring assets by WeaverScore. "Top 10 oldest critical findings" returns the ten longest-open criticals.

Chain to Other Features

From the AI Chat response, you can directly initiate other AI Labs features:

  • Click Generate Fix Plan on a finding mentioned in the response
  • Click Create Ticket to open the Ticket Writer pre-filled with the finding
  • Click Start Exception to open the Exception Handler for a specific finding

Export Conversations

Use Export Conversation to save the chat as a markdown file or PDF. This is useful for weekly status reports — run your standard queries at the start of each week and export the session as documentation.

Configure a Default Context

In AI Chat Settings, you can configure a default context statement that is prepended to every conversation: "Our environment is a financial services company. PCI in-scope systems are tagged 'pci-scope'. The infrastructure team owns network gear; the DevOps team owns application servers." This improves response relevance without needing to re-state context every time.


Frequently Asked Questions

Q: Are the numbers in AI Chat responses exact? Count queries return exact database counts. Trend analyses and summaries are synthesized by the LLM and should be verified against the source data.

Q: Can other users see my AI Chat conversations? No. Conversation threads are scoped to the individual user. Tenant admins cannot see individual chat history.

Q: Does AI Chat query data from all modules? Yes. AI Chat has access to Exposure Management findings, AppSec scanner findings, asset inventory, VFP work packages, and scan history for your tenant.

Q: How long are conversations stored? Chat history is stored in the AiConversation entity in the database. The default retention period is 90 days. Admins can configure or clear chat history in AI Settings.

Q: What happens if the LLM gives me a wrong number? Click View Source Data to see the raw database query and results. If the underlying data is wrong (missing findings, incorrect severity), investigate in the relevant module. If the LLM misinterpreted the data, re-ask with a more specific question or use the direct filters in Exposure Management.

Q: Can I ask questions in languages other than English? AI Chat is optimized for English. Questions in other languages may be understood but response quality is degraded. The parameter extractor is English-only — queries in other languages may not extract filters correctly.