Fortinet NSE 7 - Security Operations 7.6 Architect Online Practice
Last updated: February 14, 2026
You can use these online practice questions to gauge how well you know the Fortinet NSE7_SOC_AR-7.6 exam material before deciding whether to register for the exam.
If you want to pass the exam 100% and save 35% of your preparation time, choose the NSE7_SOC_AR-7.6 dumps (latest real exam questions), which currently include 90 exam questions and answers.
Answer:
Explanation:
Understanding FortiAnalyzer Roles:
FortiAnalyzer can operate in two primary modes: collector mode and analyzer mode.
Collector Mode: Gathers logs from various devices and forwards them to another FortiAnalyzer operating in analyzer mode for detailed analysis.
Analyzer Mode: Provides detailed log analysis, reporting, and incident management.
Steps to Configure FortiAnalyzer as a Collector Device:
A. Enable Log Compression:
While enabling log compression can help save storage space, it is not a mandatory step specifically required for configuring FortiAnalyzer in collector mode.
Not selected as it is optional and not directly related to the collector configuration process.
B. Configure Log Forwarding to a FortiAnalyzer in Analyzer Mode:
Essential for ensuring that logs collected by the collector FortiAnalyzer are sent to the analyzer FortiAnalyzer for detailed processing.
Selected as it is a critical step in configuring a FortiAnalyzer as a collector device.
Step 1: Access the FortiAnalyzer interface and navigate to log forwarding settings.
Step 2: Configure log forwarding by specifying the IP address and necessary credentials of the FortiAnalyzer in analyzer mode.
Fortinet Documentation on Log Forwarding (FortiAnalyzer Log Forwarding)
C. Configure the Data Policy to Focus on Archiving:
Data policy configuration relates to how logs are stored and managed within FortiAnalyzer; focusing on archiving is not specifically required for a collector device setup.
Not selected as it is not a necessary step for configuring the collector mode.
D. Configure Fabric Authorization on the Connecting Interface:
Necessary to ensure secure and authenticated communication between FortiAnalyzer devices within the Security Fabric.
Selected as it is essential for secure integration and communication.
Step 1: Access the FortiAnalyzer interface and navigate to the Fabric authorization settings.
Step 2: Enable Fabric authorization on the interface used for connecting to other Fortinet devices and FortiAnalyzers.
Reference: Fortinet Documentation on Fabric Authorization (FortiAnalyzer Fabric Authorization)
Implementation Summary:
Configure log forwarding to ensure logs collected are sent to the analyzer.
Enable Fabric authorization to ensure secure communication and integration within the Security Fabric.
Conclusion:
Configuring log forwarding and Fabric authorization are key steps in setting up a FortiAnalyzer as a collector device to ensure proper log collection and forwarding for analysis.
References:
Fortinet Documentation on FortiAnalyzer Roles and Configurations (FortiAnalyzer Administration Guide)
By configuring log forwarding to a FortiAnalyzer in analyzer mode and enabling Fabric authorization on the connecting interface, you can ensure proper setup of FortiAnalyzer as a collector device.
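As an illustrative sketch of the log-forwarding step: the `config system log-forward` command tree exists on FortiAnalyzer, but treat the specific option names and values below as assumptions to verify against your version's CLI reference.

```
config system log-forward
    edit 1
        set mode aggregation           # collector forwards logs (including archives) to the analyzer
        set server-name "faz-analyzer"
        set server-ip 10.0.1.20        # FortiAnalyzer operating in analyzer mode (illustrative IP)
        set agg-password *****         # must match the aggregation password set on the analyzer
    next
end
```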
Answer:
Explanation:
Understanding Automation Processes in FortiAnalyzer:
FortiAnalyzer can automate responses to detected security events, such as running commands on FortiGate devices.
Analyzing the Customer Requirement:
The customer wants to run a CLI command on FortiGate to block predefined URLs when a botnet C&C server IP is detected.
This requires an automated response triggered by a specific event.
Evaluating the Options:
Option A: Playbooks orchestrate complex workflows but are not typically used for direct event-triggered automation processes.
Option B: Data selectors filter logs based on criteria but do not initiate automation processes.
Option C: Event handlers can be configured to detect specific events (such as detecting a botnet C&C server IP) and trigger automation stitches to execute predefined actions.
Option D: Connectors facilitate communication between FortiAnalyzer and other systems but are not the primary mechanism for initiating automation based on log events.
Conclusion:
To start the automation process when a botnet C&C server IP is detected, you must use an Event Handler in FortiAnalyzer.
References:
Fortinet Documentation on Event Handlers and Automation Stitches in FortiAnalyzer.
Best Practices for Configuring Automated Responses in FortiAnalyzer.
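For context, the action the event handler ultimately drives on the FortiGate side is typically an automation stitch that runs a CLI script. A hedged sketch follows; object and field names are illustrative and the exact syntax varies by FortiOS version.

```
config system automation-action
    edit "block-c2-urls"
        set action-type cli-script
        set script "..."   # CLI commands that add the predefined URLs to a blocking urlfilter
    next
end
config system automation-stitch
    edit "botnet-cc-response"
        set trigger "botnet-cc-detected"   # illustrative trigger name
        config actions
            edit 1
                set action "block-c2-urls"
            next
        end
    next
end
```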

Answer:
Explanation:
From the FortiSOAR 7.6 / FortiSIEM 7.3 Exact Extract study guide:
In FortiSOAR 7.6, the War Room is a collaborative space designed for high-priority incident investigation. The Evidences tab within the Investigate view (as shown in the exhibit) is specifically designed to highlight critical findings discovered during the investigation process.
Evidence Tagging: To populate the Action Logs Marked As Evidence section, an analyst must specifically tag a relevant log entry, a playbook output, or a comment within the collaboration workspace with the system-defined keyword "Evidence".
Automatic Categorization: Once the tag is applied, FortiSOAR automatically parses these entries and displays them in this centralized view. This allows team members and stakeholders to quickly view substantiated facts and proof gathered during the "Root Cause Analysis" phase without sifting through all raw action logs.
Manual vs. Action Logs: The exhibit shows two distinct areas: "Manually Upload Evidences" (where files like the CSLAB document shown can be dragged and dropped) and "Action Logs Marked As Evidence." The latter is reserved exclusively for system-generated logs or comments that have been promoted to evidence status via tagging.
Why other options are incorrect:
By linking an indicator to the war room (B): Linking indicators associates technical artifacts (like IPs or hashes) with the record, but it does not automatically classify them as evidence within the War Room action log view.
By creating an evidence collection task and attaching a file (C): While this is a valid step in an investigation, attaching a file to a task typically places it in the "Attachments" or "Manually Upload Evidences" area, rather than the "Action Logs" section specifically.
By executing a playbook with the Save Execution Logs option enabled (D): Saving execution logs ensures a trail of what the playbook did, but it does not mark the output as "Evidence" unless the specific logic or a manual analyst action applies the "Evidence" tag to the resulting log entry.
Answer:
Explanation:
Understanding the Playbook Requirements:
The SOC analyst needs to design a playbook that filters for high severity events.
The playbook must also attach the event information to an existing incident.
Analyzing the Provided Exhibit:
The exhibit shows the available actions for a local connector within the playbook.
Actions listed include:
Update Asset and Identity
Get Events
Get Endpoint Vulnerabilities
Create Incident
Update Incident
Attach Data to Incident
Run Report
Get EPEU from Incident
Evaluating the Options:
Get Events: This action retrieves events but does not attach them to an incident.
Update Incident: This action updates an existing incident but is not specifically for attaching event data.
Update Asset and Identity: This action updates asset and identity information, not relevant for attaching event data to an incident.
Attach Data to Incident: This action is explicitly designed to attach additional data, such as event information, to an existing incident.
Conclusion:
The correct action to use in the playbook for filtering high severity events and attaching the event information to an incident is Attach Data to Incident.
References:
Fortinet Documentation on Playbook Actions and Connectors.
Best Practices for Incident Management and Playbook Design in SOC Operations.

Answer:
Explanation:
Understanding the Playbook Configuration:
The "Malicious File Detect" playbook is designed to create an incident when a malicious file detection event is triggered.
The playbook includes tasks such as Attach_Data_To_Incident, Create Incident, and Get Events.
Analyzing the Playbook Execution:
The exhibit shows that the Create Incident task has failed, and the Attach_Data_To_Incident task has also failed.
The Get Events task succeeded, indicating that it was able to retrieve event data.
Reviewing Raw Logs:
The raw logs indicate an error related to parsing input in the incident_operator.py file.
The error traceback suggests that the task was expecting a specific input format (likely a name or number) but received an incorrect data format.
Identifying the Source of the Failure:
The Create Incident task failure is the root cause since it did not proceed correctly due to incorrect input format.
The Attach_Data_To_Incident task subsequently failed because it depends on the successful creation of an incident.
Conclusion:
The primary reason for the playbook execution failure is that the Create Incident task received an incorrect data format, which was not a name or number as expected.
References:
Fortinet Documentation on Playbook and Task Configuration.
Error handling and debugging practices in playbook execution.
Answer:
Explanation:
From the FortiSOAR 7.6 / FortiSIEM 7.3 Exact Extract study guide:
In FortiAnalyzer 7.6 and related SOC versions, incidents serve as centralized containers for tracking and analyzing security events. There are two primary automated and manual methods to initiate an incident:
Using a custom event handler (A): In FortiAnalyzer, event handlers are used to generate events from raw logs. A critical feature in recent versions is the Automatically Create Incident setting within a custom event handler. When enabled, the system automatically elevates a triggered event into a new incident record, allowing analysts to bypass the manual review of every individual event before an incident is raised.
By running a playbook (D): Playbooks provide a powerful way to automate the incident lifecycle. A playbook can be configured with an Event Trigger, meaning it executes as soon as an event matches specific criteria. One of the core actions available within these playbooks is the Create Incident action, which can automatically populate incident details, severity, and category based on the triggering event's data. This ensures high-fidelity events are consistently captured for investigation.
Why other options are incorrect:
Using a connector action (B): While connectors allow FortiAnalyzer to communicate with external systems (like ITSM or Security Fabric devices), the act of "creating an incident" inside FortiAnalyzer is a function of the internal event engine or playbook automation, not a standalone connector action used for external integration.
Manually, on the Event Monitor page (C): While you can view, filter, and acknowledge events on the Event Monitor page, the process of manually raising an incident typically occurs from the Incidents module or by right-clicking an event to "Raise Incident" in the Log View or FortiView, rather than being a core function defined as occurring "on the Event Monitor page" in the same architectural sense as handlers and playbooks.

Answer:
Explanation:
1. Collector 2. Worker 3. Supervisor 4. Agent
The FortiSIEM 7.3 architecture is built upon a distributed multi-tenant model consisting of several distinct functional roles to ensure scalability and performance:
Supervisor: This is the primary management node in a FortiSIEM cluster. It hosts the Graphical User Interface (GUI), the Configuration Management Database (CMDB), and manages the overall system configurations, reporting, and dashboarding.
Worker: These nodes are responsible for the heavy lifting of data processing. They execute real-time event correlation against the rules engine, perform historical search queries, and handle the analytics workload to ensure the Supervisor node is not overwhelmed.
Collector: Collectors are typically deployed at remote sites or different network segments to offload log collection from the central cluster. They receive logs via Syslog, SNMP, or WMI, compress the data, and securely forward it to the Workers or Supervisor. They also perform performance monitoring of local devices.
Agent: These are lightweight software components installed directly on endpoints (Windows /Linux). Their primary role is to collect local endpoint logs, monitor file integrity (system changes), and track user activity that cannot be captured via traditional network-based logging.
Answer:
Explanation:
Understanding the Problem:
One FortiGate device is generating a significantly higher volume of logs compared to other devices, causing the ADOM to exceed its storage quota.
This can lead to performance issues and difficulties in managing logs effectively within FortiAnalyzer.
Possible Solutions:
The goal is to manage the volume of logs and ensure that the ADOM does not exceed its quota, while still maintaining effective log analysis and monitoring.
Solution A: Increase the Storage Space Quota for the First FortiGate Device:
While increasing the storage space quota might provide a temporary relief, it does not address the root cause of the issue, which is the excessive log volume.
This solution might not be sustainable in the long term as log volume could continue to grow.
Not selected as it does not provide a long-term, efficient solution.
Solution B: Create a Separate ADOM for the First FortiGate Device and Configure a Different Set of Storage Policies:
Creating a separate ADOM allows for tailored storage policies and management specifically for the high-log-volume device.
This can help in distributing the storage load and applying more stringent or customized retention and storage policies.
Selected as it effectively manages the storage and organization of logs.
Solution C: Reconfigure the First FortiGate Device to Reduce the Number of Logs it Forwards to FortiAnalyzer:
By adjusting the logging settings on the FortiGate device, you can reduce the volume of logs forwarded to FortiAnalyzer.
This can include disabling unnecessary logging, reducing the logging level, or filtering out less critical logs.
Selected as it directly addresses the issue of excessive log volume.
Solution D: Configure Data Selectors to Filter the Data Sent by the First FortiGate Device:
Data selectors can be used to filter the logs sent to FortiAnalyzer, ensuring only relevant logs are forwarded.
This can help in reducing the volume of logs but might require detailed configuration and regular updates to ensure critical logs are not missed.
Not selected as it might not be as effective as reconfiguring logging settings directly on the FortiGate device.
Implementation Steps:
For Solution B:
Step 1: Access FortiAnalyzer and navigate to the ADOM management section.
Step 2: Create a new ADOM for the high-log-volume FortiGate device.
Step 3: Register the FortiGate device to this new ADOM.
Step 4: Configure specific storage policies for the new ADOM to manage log retention and storage.
For Solution C:
Step 1: Access the FortiGate device’s configuration interface.
Step 2: Navigate to the logging settings.
Step 3: Adjust the logging level and disable unnecessary logs.
Step 4: Save the configuration and monitor the log volume sent to FortiAnalyzer.
Fortinet Documentation on FortiAnalyzer ADOMs and log management (FortiAnalyzer Administration Guide)
Fortinet Knowledge Base on configuring log settings on FortiGate (FortiGate Logging Guide)
By creating a separate ADOM for the high-log-volume FortiGate device and reconfiguring its logging settings, you can effectively manage the log volume and ensure the ADOM does not exceed its quota.
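A hedged sketch of solution C on the FortiGate CLI: the `config log fortianalyzer filter` tree exists in FortiOS, but treat the exact option names below as assumptions to confirm for your version.

```
config log fortianalyzer filter
    set severity warning          # drop information- and notice-level logs
    set forward-traffic disable   # stop forwarding allowed forward-traffic logs
end
```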

Answer:
Explanation:
From the FortiSOAR 7.6 / FortiSIEM 7.3 Exact Extract study guide:
In FortiSIEM 7.3, a key innovation is the integration of FortiAI, which provides generative AI capabilities to assist SOC analysts during the triage and response process.
Generative AI Summary: When an incident occurs, FortiAI can automatically analyze the underlying logs, correlation logic, and MITRE ATT&CK techniques (such as "Exfiltration Over Alternative Protocol" shown in the exhibit) to generate a human-readable summary.
Structured Output: The output displayed in the exhibit, specifically the categorized Investigation Actions (identifying affected systems, analyzing traffic) and Remediation Actions (immediate containment, patching, user training), is the typical result of a FortiAI summary request.
Analyst Efficiency: This feature is designed to reduce the "mean time to respond" (MTTR) by providing analysts with immediate, actionable steps without requiring them to manually piece together the recommended response plan from static documentation or disparate log views.
Why other options are incorrect:
Exporting an incident (A): Exporting an incident typically results in a raw data file (CSV/JSON/PDF) containing the log data and metadata, rather than an AI-generated strategic plan for investigation and remediation.
Running an incident report (B): Standard incident reports provide statistical and historical data about incidents over time. They do not dynamically generate specific, numbered investigation steps tailored to the unique context of a single live incident.
Context tab (D): The Context tab in FortiSIEM is primarily used to view the CMDB information of the involved assets (e.g., host details, owner, location) and related historical events. While it provides the data needed for an investigation, it does not provide the list of actions to take.

Answer:
Explanation:
Understanding the Event Handler Configuration:
The event handler is set up to detect specific security incidents, such as spearphishing, based on logs forwarded from other Fortinet products like FortiSandbox.
An event handler includes rules that define the conditions under which an event should be triggered.
Analyzing the Current Configuration:
The current event handler is named "Spearphishing handler" with a rule titled "Spearphishing Rule 1".
The log viewer shows that logs are being forwarded by FortiSandbox but no events are generated by FortiAnalyzer.
Key Components of Event Handling:
Log Type: Determines which type of logs will trigger the event handler.
Data Selector: Specifies the criteria that logs must meet to trigger an event.
Automation Stitch: Optional actions that can be triggered when an event occurs.
Notifications: Defines how alerts are communicated when an event is detected.
Issue Identification:
Since FortiSandbox logs are correctly forwarded but no event is generated, the issue likely lies in the data selector configuration or log type matching.
The data selector must be configured to include logs forwarded by FortiSandbox.
Solution:
B. Configure a FortiSandbox data selector and add it to the event handler:
By configuring a data selector specifically for FortiSandbox logs and adding it to the event handler, FortiAnalyzer can accurately identify and trigger events based on the forwarded logs.
Steps to Implement the Solution:
Step 1: Go to the Event Handler settings in FortiAnalyzer.
Step 2: Add a new data selector that includes criteria matching the logs forwarded by FortiSandbox (e.g., log subtype, malware detection details).
Step 3: Link this data selector to the existing spearphishing event handler.
Step 4: Save the configuration and test to ensure events are now being generated.
Conclusion:
The correct configuration of a FortiSandbox data selector within the event handler ensures that FortiAnalyzer can generate events based on relevant logs.
Fortinet Documentation on Event Handlers and Data Selectors (FortiAnalyzer Event Handlers)
Fortinet Knowledge Base for Configuring Data Selectors (FortiAnalyzer Data Selectors)
By configuring a FortiSandbox data selector and adding it to the event handler, FortiAnalyzer will be able to accurately generate events based on the appropriate logs.
Answer:
Explanation:
Understanding FortiAnalyzer Data Policy and Disk Utilization:
FortiAnalyzer uses data policies to manage log storage, retention, and disk utilization.
The Data Policy section indicates how long logs are kept for analytics and archive purposes.
The Disk Utilization section specifies the allocated disk space and the proportions used for analytics and archive, as well as when alerts should be triggered based on disk usage.
Analyzing the Provided Exhibit:
Keep Logs for Analytics: 60 Days
Keep Logs for Archive: 120 Days
Disk Allocation: 300 GB (with a maximum of 441 GB available)
Analytics-to-Archive Ratio: 30% : 70%
Alert and Delete When Usage Reaches: 90%
Potential Problems Identification:
Disk Space Allocation: The allocated disk space is 300 GB out of a possible 441 GB, which might be insufficient if the log volume is high, but it is not the primary concern based on the given data.
Analytics-to-Archive Ratio: The ratio of 30% for analytics and 70% for archive is unconventional. Typically, a higher percentage is allocated for analytics since real-time or recent data analysis is often prioritized. A common configuration might be a 70% analytics and 30% archive ratio. The misconfigured ratio can lead to insufficient space for analytics, causing issues with real-time monitoring and analysis.
Retention Periods: While the retention periods could be seen as lengthy, they are not necessarily indicative of a problem without knowing the specific log volume and compliance requirements. The length of these periods can vary based on organizational needs and legal requirements.
Conclusion:
Based on the analysis, the primary issue observed is the analytics-to-archive ratio being misconfigured. This misconfiguration can significantly impact the effectiveness of the FortiAnalyzer in real-time log analysis, potentially leading to delayed threat detection and response.
References:
Fortinet Documentation on FortiAnalyzer Data Policies and Disk Management.
Best Practices for FortiAnalyzer Log Management and Disk Utilization.
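To make the analytics-to-archive arithmetic concrete, here is plain integer arithmetic on the exhibit's values (this is just illustration, not a FortiAnalyzer API):

```python
total_gb = 300               # allocated disk quota from the exhibit
analytics_pct = 30           # configured analytics share
archive_pct = 70             # configured archive share
alert_pct = 90               # alert-and-delete trigger threshold

analytics_gb = total_gb * analytics_pct // 100   # space for indexed, searchable logs
archive_gb = total_gb * archive_pct // 100       # space for compressed raw logs
alert_gb = total_gb * alert_pct // 100           # usage level that triggers alert/delete

print(analytics_gb, archive_gb, alert_gb)        # 90 210 270
```

Only 90 GB of the 300 GB quota serves real-time analytics; flipping the ratio to 70% : 30% would allocate 210 GB to analytics instead, which is why the configured ratio is the observable problem.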
Answer:
Explanation:
Understanding the Playbook and its Components:
The exhibit shows a playbook in which an event trigger starts actions upon detecting a malicious file.
The initial tasks in the playbook include CREATE_INCIDENT and GET_EVENTS.
Analysis of Current Tasks:
EVENT_TRIGGER STARTER: This initiates the playbook when a specified event (malicious file detection) occurs.
CREATE_INCIDENT: This task likely creates a new incident in the incident management system for tracking and response.
GET_EVENTS: This task retrieves the event details related to the detected malicious file.
Objective of the Next Task:
The next logical step after creating an incident and retrieving event details is to update the incident with the event data, ensuring all relevant information is attached to the incident record.
This helps SOC analysts by consolidating all pertinent details within the incident record, facilitating efficient tracking and response.
Evaluating the Options:
Option A: Update Asset and Identity is not directly relevant to attaching event data to the incident.
Option B: Attach Data to Incident sounds plausible but typically, updating an incident involves more comprehensive changes including status updates, adding comments, and other data modifications.
Option C: Run Report is irrelevant in this context as the goal is to update the incident with event data.
Option D: Update Incident is the most suitable action for incorporating event data into the existing incident record.
Conclusion:
The next task in the playbook should be to update the incident with the event data to ensure the incident reflects all necessary information for further investigation and response.
References:
Fortinet Documentation on Playbook Creation and Incident Management.
Best Practices for Automating Incident Response in SOC Operations.
Answer:
Explanation:
From the FortiSOAR 7.6 / FortiSIEM 7.3 Exact Extract study guide:
In a modern Security Operations Center (SOC) environment powered by FortiSIEM 7.3 and FortiSOAR 7.6, the efficiency of the incident response lifecycle depends on two primary pillars of analysis:
Accurate detection of threats (A): The primary goal of a SOC is to identify genuine malicious activity. Using FortiSIEM's correlation rules and machine learning (UEBA), the system must be tuned to detect patterns that signify real risk. Accuracy ensures that the SOC is not blinded by noise and can focus on critical security events that impact the organization's posture.
Rapid identification of false positives (C): "Alert Fatigue" is one of the greatest challenges in a SOC. Analysts must be able to quickly distinguish between legitimate anomalies (false positives) and actual threats. FortiSOAR assists in this by using automated playbooks to perform initial triage and "pre-processing", such as checking IP reputations or verifying user activity, to automatically close or demote alerts that do not represent a true threat, thereby freeing up analysts for high-priority investigations.
Why other options are incorrect:
Immediate escalation for all alerts (B): This is a poor SOC practice. Escalating every alert without triage leads to analyst burnout and overloads senior responders with low-value tasks. The goal of a tiered SOC (Tier 1, Tier 2, Tier 3) is to filter alerts so only significant incidents are escalated.
Periodic system downtime (D): SOC systems (SIEM/SOAR) are considered "Mission Critical" and must operate on a 24/7/365 basis. Maintenance should be performed using High Availability (HA) configurations or during "low-flow" windows without causing a complete stop in monitoring, as attackers often leverage downtime to strike.
Answer:
Explanation:
From the FortiSOAR 7.6 / FortiSIEM 7.3 Exact Extract study guide:
In accordance with the MITRE ATT&CK mapping utilized by FortiSIEM 7.3 and FortiSOAR 7.6, the described behaviors correspond to the following techniques:
Non-Standard Port (T1571): This technique involves adversaries communicating using a protocol and port pairing that are typically not associated. The incident report identifies HTTPS (TLS) traffic running on TCP 8443 rather than the standard port 443. FortiSIEM specifically includes built-in correlation rules, such as "Suspicious Typical Malware Back Connect Ports," designed to detect these protocol-port mismatches.
Exfiltration Over Alternative Protocol (T1048): This technique describes adversaries stealing data by exfiltrating it over a different protocol than the primary command and control (C2) channel. In this scenario, while the C2 channel is established via HTTPS on port 8443, the adversary is transferring staged files using DNS queries with oversized TXT payloads. DNS is a common "alternative protocol" used to bypass standard data transfer monitoring and egress filtering.
Analysis of Incorrect Options:
Exploitation of Remote Services (B): This technique falls under the Initial Access or Lateral Movement tactics, focusing on gaining entry into a system via vulnerabilities in network services like SMB or RDP. It does not apply to the maintenance of an established C2 channel or the exfiltration of data.
Hide Artifacts (D): This is a Defense Evasion technique where an adversary attempts to conceal their presence by removing traces such as log files or registry keys. While the attacker is "imitating normal traffic," the specific acts of using a non-standard port and DNS exfiltration are primary behavioral signatures defined by their own more specific techniques.
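As an illustration only (this is not FortiSIEM's actual rule logic, and the record field names are invented), a simple detector for the two behaviors could flag protocol-port mismatches and oversized DNS TXT payloads:

```python
# Illustrative detector; field names are hypothetical, not the FortiSIEM schema.
EXPECTED_PORT = {"https": 443, "dns": 53}
TXT_SINGLE_STRING_MAX = 255  # DNS TXT records are built from <=255-byte character strings

def map_techniques(record):
    """Return MITRE ATT&CK technique IDs suggested by one flow record."""
    findings = []
    proto, dport = record.get("proto"), record.get("dport")
    if proto in EXPECTED_PORT and dport != EXPECTED_PORT[proto]:
        findings.append("T1571")  # Non-Standard Port
    if proto == "dns" and record.get("txt_len", 0) > TXT_SINGLE_STRING_MAX:
        findings.append("T1048")  # Exfiltration Over Alternative Protocol
    return findings

print(map_techniques({"proto": "https", "dport": 8443}))              # ['T1571']
print(map_techniques({"proto": "dns", "dport": 53, "txt_len": 900}))  # ['T1048']
```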
Answer:
Explanation:
From the FortiSOAR 7.6 / FortiSIEM 7.3 Exact Extract study guide:
The built-in Jinja editor in FortiSOAR 7.6 is a powerful utility designed to help playbook developers write and test complex data manipulation logic without having to execute the entire playbook. Its primary capabilities include:
Renders output (A): The editor provides a "Preview" or "Evaluation" pane. By combining a Jinja expression with a sample JSON input (manually entered or loaded), the editor dynamically calculates and displays the resulting output. This allows for immediate verification of data transformation logic.
Checks validity (B): The editor includes built-in linting and syntax validation. It alerts the developer to errors such as unclosed brackets, incorrect filter usage, or invalid syntax, ensuring that only valid Jinja code is saved into the playbook step.
Loads environment JSON (D): One of the most significant features for troubleshooting is the ability to load the environment JSON from a recent execution. This populates the editor's variable context (vars) with the actual data from a specific playbook run, allowing the developer to test expressions against real-world data that recently passed through the system.
Why other options are incorrect:
Creates new records in bulk (C): While Jinja expressions are used to format the data that goes into a record, the actual creation of records is handled by the "Create Record" step or specific Connectors, not by the Jinja editor utility itself.
Defines conditions to trigger a playbook step (E): Jinja is the language used to write conditions within a "Decision" step or "Step Utilities," but the Jinja Editor is a tool for evaluating and testing those expressions. The definition of the condition logic and the triggering behavior is a function of the Playbook Engine and Step configuration, not the editor's standalone capabilities.