
Salesforce Plat-Arch-204 Exam

Salesforce Certified Platform Integration Architect Online Practice

Last updated: February 14, 2026

You can use these online practice questions to gauge how well you know the Salesforce Plat-Arch-204 exam material before deciding whether to register for the exam.

If you want to pass the exam on the first attempt and save about 35% of your preparation time, choose the Plat-Arch-204 dump (latest real exam questions), which currently contains 129 up-to-date questions and answers.


Question No : 1


A new Salesforce program requires data updates between internal systems and Salesforce.
Which relevant detail should an integration architect seek to solve for integration architecture needs?

Answer:
Explanation:
In the "Discovery" phase of integration architecture, the architect must translate abstract business needs into technical requirements. The most critical variables that define the Integration Pattern are Timing and Volume.
An architect cannot choose between the REST API, Streaming API, Bulk API, or Outbound Messaging without knowing:
Latency Requirements: Does the business need the update in 200 milliseconds (Synchronous), 2 minutes (Near Real-Time), or 24 hours (Batch)?
Frequency: Is the data updated every time a user clicks a button, or once at the end of the day?
Volume: Are we moving 10 records at a time or 10 million?
Option A focuses on UI/UX and licensing, which are project management concerns.
Option B focuses on resource allocation and governance. While important for the project, they do not inform the technical design of the data flow.
By specifically seeking out Timing aspects (Synchronous vs. Asynchronous) and Update Frequency, the architect can apply the Salesforce Integration Decision Matrix. For instance, a "Real-time" requirement for small volumes leads to a Request-Reply pattern via Apex Callouts. A "Nightly" requirement for large volumes leads to a Batch Data Synchronization pattern via the Bulk API.
Identifying these "Non-Functional Requirements" (NFRs) early is the only way to ensure the architecture is scalable and stays within platform governor limits.
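The timing-and-volume reasoning above can be sketched as a small decision function. This is an illustrative sketch only: the thresholds and pattern names are assumptions loosely based on the Salesforce Integration Patterns guidance, not official limits.

```python
# Hypothetical sketch: how latency needs and record volume jointly narrow
# the candidate integration pattern. Thresholds are illustrative assumptions.

def choose_pattern(latency: str, daily_volume: int) -> str:
    """Pick a candidate pattern from the latency requirement and volume."""
    if latency == "real-time":
        if daily_volume <= 10_000:
            return "Request-Reply (REST callout)"
        return "Fire-and-Forget (Platform Events)"
    if latency == "near-real-time":
        return "Fire-and-Forget (Platform Events / CDC)"
    # A nightly batch window with large volumes favors the Bulk API via ETL.
    return "Batch Data Synchronization (Bulk API)"

print(choose_pattern("real-time", 10))
print(choose_pattern("batch", 10_000_000))
```

Capturing the decision as data like this also makes the Non-Functional Requirements explicit and reviewable during discovery.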

Question No : 2


An enterprise customer is implementing Salesforce for Case Management. Based on the landscape (Email, Order Management, Data Warehouse, Case Management), what should the integration architect evaluate?

Answer:
Explanation:
The evaluation of an integration landscape is a process of rationalization. The goal is to identify which legacy systems Salesforce will replace (System Retirement) and which systems it must coexist with (Integration).
In this scenario, Salesforce is being implemented for Case Management. Salesforce Service Cloud is the industry leader for this specific function. Therefore, the legacy Case Management System should be retired. Any architecture that suggests "integrating" Salesforce with the legacy Case Management system (Options A and B) is creating a redundant and complex "dual-master" scenario that increases technical debt.
To provide a successful support experience, Salesforce needs to be the central "Engagement Layer," which requires integration with the remaining ecosystem:
Email Management System: To support "Email-to-Case" and ensure all customer communications are captured within the Salesforce Case record.
Order Management System (OMS): Support agents often need to verify purchase history or shipping status to resolve a case. A "Data Virtualization" or "Request-Reply" integration with the OMS is vital.
Data Warehouse: For long-term historical reporting and cross-functional analytics, Salesforce must push case data to the enterprise Data Warehouse.
By evaluating the integration with the Data Warehouse, Order Management, and Email Management systems, the architect ensures that Salesforce is enriched with the context it needs to resolve cases while simultaneously retiring the redundant legacy support system.

Question No : 3


The URL for a business-critical external service providing exchange rates changed without notice.
Which solutions should be implemented to minimize potential downtime for users in this situation?

Answer:
Explanation:
To minimize downtime when an external endpoint changes, an Integration Architect must ensure that the URL is not "hardcoded" within Apex code or configuration. The standard Salesforce mechanism for abstracting and managing external endpoints is Named Credentials.
Named Credentials specify the URL of a callout endpoint and its required authentication parameters in one definition. If the URL changes, an administrator simply updates the "URL" field in the Named Credential setup. This change takes effect immediately across all Apex callouts, Flows, and External Services that reference it, without requiring a code deployment or a sandbox-to-production migration.
Along with Named Credentials, Remote Site Settings (or the newer Trusted URL configuration) are required. Salesforce blocks all outbound calls to URLs that are not explicitly whitelisted.
By having both in place, the remediation process is:
Update the URL in the Named Credential.
Update (or add) the new URL in the Remote Site Settings.
This approach follows the "Separation of Concerns" principle.
Option B (ESB) could technically handle this, but it adds an extra layer of failure and complexity for a simple URL change.
Option C (Content Security Policies) is used to control which resources (like scripts or images) a browser is allowed to load in the UI; it does not govern server-side Apex callouts. Therefore, the combination of Named Credentials and Remote Site whitelisting is the most efficient and standard way to provide architectural agility and minimize downtime.
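The endpoint-abstraction idea behind Named Credentials can be illustrated with a small sketch (in Python, not Apex): callers reference a logical name, never a raw URL, so a provider URL change is a single registry update. The names and URLs below are invented for the example.

```python
# Illustrative analogy to Named Credentials: a single registry maps a logical
# endpoint name to its URL, so no caller ever hardcodes the address.

ENDPOINTS = {"ExchangeRates": "https://old.rates.example.com/api"}

def callout_url(name: str, path: str) -> str:
    """Resolve a 'callout:Name/path'-style reference against the registry."""
    return ENDPOINTS[name].rstrip("/") + "/" + path.lstrip("/")

# The provider changes its URL without notice: one registry update,
# zero changes to any of the callers.
ENDPOINTS["ExchangeRates"] = "https://new.rates.example.com/v2"
print(callout_url("ExchangeRates", "latest/USD"))
```

In Salesforce the registry role is played by the Named Credential record, and the update is a Setup change rather than a deployment.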

Question No : 4


Salesforce is the system of record for Leads, Contacts, Accounts, and Cases. Customer data also exists in an ERP, ticketing system, and data lake, each with unique identifiers. Middleware is used to update systems bidirectionally.
Which solution should be recommended to handle this?

Answer:
Explanation:
In a complex landscape where multiple systems contain overlapping customer data, each with its own primary key, the core architectural challenge is Identity Management. To ensure that an update in Salesforce (the System of Record) correctly updates "Customer A" in the ERP and "Customer A" in the Data Lake, a Master Data Management (MDM) strategy is required.
An MDM solution creates a Cross-Reference (X-Ref) Table or a "Golden Record" that maps the unique identifiers from all systems. In the Salesforce record, the architect should implement External ID fields for each corresponding system (e.g., ERP_ID__c, Ticket_System_ID__c).
Why this is the superior recommendation:
Bidirectional Integrity: When the middleware receives an update from the ERP, it uses the ERP_ID__c to perform an "upsert" in Salesforce, ensuring no duplicates are created.
Traceability: It allows for easy auditing of data lineage across the enterprise.
Decoupling: Salesforce doesn't need to know the internal logic of the ERP; it simply holds the reference key.
Option B (CDC) is a delivery mechanism, not an identity management strategy; it tells you that something changed, but not which record in the ERP it corresponds to without the ID mapping.
Option C (Local caching in middleware) is an "anti-pattern" because it makes the middleware stateful; if the middleware cache is lost or out of sync, the entire integration breaks. By designing an MDM-based mapping solution directly within the data model, the architect ensures a robust, scalable, and transparent identity framework for the entire enterprise.
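The cross-reference upsert behavior described above can be sketched in a few lines. This is a simplified illustration: the field name `ERP_ID__c` follows the example in the text, while the record shapes and ID values are invented.

```python
# Hypothetical sketch of an MDM cross-reference: middleware matches inbound
# ERP updates to existing Salesforce rows via an External ID field, so
# updates never create duplicates.

salesforce_accounts = [
    {"Id": "001A", "Name": "Acme", "ERP_ID__c": "ERP-7"},
]

def upsert_by_external_id(records, erp_id, fields):
    """Update the row whose ERP_ID__c matches; otherwise insert a new row."""
    for rec in records:
        if rec["ERP_ID__c"] == erp_id:
            rec.update(fields)
            return "updated"
    records.append({"Id": f"001X{len(records)}", "ERP_ID__c": erp_id, **fields})
    return "inserted"

print(upsert_by_external_id(salesforce_accounts, "ERP-7", {"Name": "Acme Corp"}))
print(upsert_by_external_id(salesforce_accounts, "ERP-9", {"Name": "Globex"}))
```

In Salesforce itself, marking `ERP_ID__c` as an External ID lets the platform perform this match natively via the upsert API call.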

Question No : 5


A company needs to send data from Salesforce to a homegrown system behind a corporate firewall.
The data is pushed one way, doesn't need to be real-time, and averages 2 million records per day.
What should an integration architect consider?

Answer:
Explanation:
With a volume of 2 million records per day, this integration exceeds the practical limits of standard near-real-time patterns like Outbound Messaging or synchronous Apex Callouts. Sending 2 million individual REST requests would likely exhaust the daily API limit and could cause significant performance degradation in Salesforce due to transaction overhead.
An Integration Architect must recommend an Asynchronous Batch Data Synchronization pattern, typically facilitated by a third-party ETL/Middleware tool (e.g., MuleSoft, Informatica, or Boomi). Staging the records off-platform is essential for several reasons:
Throttling: The homegrown system behind a firewall may not be able to handle a massive, sudden burst of 2 million records. A middleware tool can ingest the data from Salesforce and "drip-feed" it into the target system at an acceptable rate.
Error Handling and Retries: Middleware provides sophisticated persistence and "Dead Letter Queues" to ensure that if the homegrown system goes offline, no data is lost.
API Efficiency: The middleware can use the Salesforce Bulk API 2.0 to extract the data in large chunks, which is significantly more efficient than individual REST calls and consumes far fewer API limits.
Option A is a valid concern but is a symptom of the wrong choice of tool (REST).
Option B describes an inbound integration to Salesforce, whereas the requirement is outbound. By utilizing a third-party tool to stage and manage the 2 million record flow, the architect ensures that the integration is scalable, respects the corporate firewall constraints (via a secure agent or VPN), and maintains the performance of the Salesforce production environment.
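The stage-and-throttle idea can be sketched as follows. This is an illustrative sketch only: the batch size mirrors common Bulk API chunking, and a real middleware tool would also persist each batch and pause between deliveries to respect the target's capacity.

```python
# Sketch of the staging pattern: extract in Bulk-API-sized chunks, then
# drip-feed the firewalled target one batch at a time.

def chunk(records, size):
    """Split records into fixed-size batches."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def drip_feed(records, deliver, batch_size=10_000):
    """Hand each batch to deliver(); middleware would also persist and retry."""
    total = 0
    for batch in chunk(records, batch_size):
        total += deliver(batch)
    return total

# Each deliver() call stands in for a throttled push to the on-premise system.
sent = drip_feed(list(range(25_000)), lambda batch: len(batch))
print(sent)
```

The key design point is that the sender's extraction rate is decoupled from the receiver's ingestion rate, which is exactly what direct point-to-point callouts cannot provide.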

Question No : 6


When a user clicks “Check Preferences” as part of a Lightning flow, preferences from an externally hosted RESTful service are to be checked in real time. The service has OpenAPI 2.0 definitions.
Which integration pattern and mechanism should be selected?

Answer:
Explanation:
This scenario describes a classic Request and Reply pattern where a user action in the UI requires an immediate, synchronous response from an external system to determine the next step in a business process (the Flow).
The requirement specifies that an OpenAPI 2.0 (Swagger) definition is available. For an Integration Architect, this is a prime use case for External Services. External Services allow you to import an OpenAPI schema and automatically generate "Invocable Actions" that can be used directly in Flow Builder without writing a single line of Apex code.
Why this is the best fit:
Low Code: It fulfills the requirement purely through declarative configuration, which reduces maintenance and development costs.
Real-Time: It performs a synchronous HTTP callout and waits for the Boolean/String values to be returned to the Flow variables.
Type Safety: Because it uses the OpenAPI definition, Salesforce understands the data types (Boolean/String) natively.
Option A (Data Virtualization) is more suitable for viewing and searching large external datasets as if they were records; it is over-engineered for a simple "check status" function.
Option C (Remote Call-In) is the inverse of the requirement; it refers to an external system calling into Salesforce. By using Enhanced External Services, the architect provides a scalable, declarative solution that perfectly aligns with modern Salesforce development best practices for real-time external system interaction.
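The type-safety benefit of importing an OpenAPI definition can be illustrated with a small conformance check. The schema fragment and field names below (`optedIn`, `channel`) are invented for the sketch; External Services performs this kind of validation internally from the imported spec.

```python
# Illustrative sketch: checking a response payload against a minimal
# OpenAPI 2.0 (Swagger) response definition, the way External Services
# enforces the declared Boolean/String types.

spec = {  # invented fragment of a Swagger 2.0 response definition
    "properties": {
        "optedIn": {"type": "boolean"},
        "channel": {"type": "string"},
    }
}

PY_TYPES = {"boolean": bool, "string": str}

def conforms(payload: dict, definition: dict) -> bool:
    """True if every declared property is present with the declared type."""
    return all(
        isinstance(payload.get(name), PY_TYPES[prop["type"]])
        for name, prop in definition["properties"].items()
    )

print(conforms({"optedIn": True, "channel": "email"}, spec))   # conforms
print(conforms({"optedIn": "yes", "channel": "email"}, spec))  # wrong type
```

Because the contract is machine-readable, the Flow receives typed variables instead of raw JSON, which is the core advantage over hand-written Apex parsing.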

Question No : 7


Northern Trail Outfitters is creating a distributable Salesforce package. The package needs to call into a Custom Apex REST endpoint in the central org. The security team wants to ensure a specific integration account is used in the central org that they will authorize after installation.
Which item should an architect recommend?

Answer:
Explanation:
When building a distributable package (likely a Managed Package) that must securely communicate back to a central "Hub" org, the architect must use a framework that supports OAuth 2.0 flows. Storing plain-text or even encrypted passwords (Option B) is a security violation and is brittle across different environments.
The architecturally sound solution is to leverage the Authentication Provider and Named Credentials framework. In the central org, a Connected App is created to act as the OAuth endpoint. In the package, an Authentication Provider is configured using the Consumer Key and Consumer Secret from that Connected App. This setup allows the administrator in the "Subscriber" org (the org where the package is installed) to initiate an OAuth flow.
When the security team "authorizes" the integration after installation, they are essentially completing the OAuth handshake. This grants the subscriber org an Access Token and a Refresh Token associated with the specific integration user in the central org. This mechanism ensures:
Credential Security: No passwords are ever stored in the code or metadata.
Centralized Control: The security team in the central org can revoke the Refresh Token at any time to kill the integration.
Scalability: The same package can be distributed to hundreds of orgs, each with its own unique, secure connection to the central Hub.
By using an Authentication Provider combined with a Named Credential, the Apex code in the package can simply call the endpoint by its developer name, and Salesforce handles the entire authentication header injection automatically, ensuring a robust and secure cross-org integration.

Question No : 8


Universal Containers (UC) is currently managing a custom monolithic web service that runs on an on-premise server. This monolithic web service is responsible for Point-to-Point (P2P) integrations between Salesforce and a legacy billing application, a cloud-based ERP, and a data lake. UC has found that the tight interdependencies are causing failures.
What should an integration architect recommend to decouple the systems and improve performance?

Answer:
Explanation:
The primary architectural flaw in UC's current landscape is the reliance on a monolithic P2P integration layer. In such designs, any failure in one integration thread or a surge in volume for one system can monopolize resources (CPU, memory, threads), causing the entire service, and thus all other integrations, to fail. This lack of isolation leads to the "tight interdependencies" described.
To effectively decouple these systems, the architect should recommend a Microservices Architecture. By breaking the monolithic service into smaller, independent, and modular components, each integration (Billing, ERP, Data Lake) becomes its own isolated service. This approach provides several key architectural benefits:
Isolation of Failure: If the connection to the legacy billing application fails or times out, it no longer impacts the ERP or Data Lake integrations.
Independent Scalability: If the Data Lake integration requires high throughput, that specific microservice can be scaled horizontally without wasting resources on the others.
Technology Agility: Each microservice can be updated or patched independently, allowing for faster maintenance cycles.
Furthermore, moving a "monolithic" service to the cloud (Option B) is simply a "lift and shift" that preserves the underlying fragility. While the Bulk API (Option A) is excellent for high-volume data loading, it does not solve the fundamental problem of system interdependency and orchestration failure. Transitioning to a modular, service-oriented design allows UC to implement modern integration patterns, such as asynchronous queuing between the microservices, which significantly improves the overall resilience and performance of the Salesforce-to-back-office landscape.

Question No : 9


Northern Trail Outfitters (NTO) wants to improve the quality of callouts from Salesforce to its REST APIs by adhering to RAML (RESTful API Modeling Language) specifications. The RAML specs serve as interface contracts.
Which design specification should the integration architect include to ensure that Apex REST API Clients’ unit tests confirm adherence to the RAML specs?

Answer:
Explanation:
In Salesforce, you cannot perform real HTTP callouts during unit tests. To test integration logic, developers must use the HttpCalloutMock interface to simulate the API's response. To ensure that the Apex code adheres to the RAML contract, the architect should require that the test mock implementation strictly follows the RAML specifications.
By requiring the Apex REST API Clients to implement the HttpCalloutMock (or more specifically, creating a mock class that implements it), the developer creates a controlled testing environment. The mock class should be coded to return a payload that matches the RAML-defined structure (fields, data types, and status codes). When the test runs, the Apex client receives this "contract-compliant" response. The unit test then uses assertions to verify that the Apex code correctly parses and handles this specific data structure.
Option B is technically imprecise; you don't "call" the mock from the client, you provide the mock to the test runtime using Test.setMock().
Option C describes the general process of testing but does not address the "design specification" needed to ensure contract adherence. By mandating a mock implementation that mirrors the RAML contract, the architect ensures that if the API contract changes in the RAML file, the unit tests will fail if the Apex code is not updated to match, thereby maintaining high integration quality and preventing runtime errors.
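The mock-injection pattern described above (the Python analogue of `HttpCalloutMock` plus `Test.setMock()`) can be sketched as follows. The client, paths, and the `rate` field are invented for the example; the point is that the mock returns exactly what the contract promises.

```python
# Sketch of contract-based testing: the test injects a mock transport that
# returns a contract-compliant payload, so the client's parsing logic is
# exercised against the agreed shape without any real HTTP call.

class RatesClient:
    def __init__(self, http):
        # The transport is injected, mirroring how Test.setMock() swaps in
        # an HttpCalloutMock implementation in Apex unit tests.
        self.http = http

    def get_rate(self, currency: str) -> float:
        body = self.http.get(f"/rates/{currency}")  # hypothetical path
        return float(body["rate"])                  # field mandated by the contract

class ContractMock:
    """Returns exactly what the (hypothetical) RAML spec promises."""
    def get(self, path):
        return {"rate": 1.0825}

client = RatesClient(ContractMock())
print(client.get_rate("EUR"))
```

If the contract later renames `rate`, updating the mock to match makes this test fail until the client is also updated, which is precisely the adherence guarantee the design specification is after.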

Question No : 10


A customer’s enterprise architect has identified requirements around caching, queuing, error handling, alerts, retries, event handling, etc. The company has asked the integration architect to help fulfill such aspects with its Salesforce program.
Which recommendation should the integration architect make?

Answer:
Explanation:
Salesforce is a highly capable CRM platform, but it is not a dedicated messaging or orchestration engine. When requirements include complex message queuing, process choreography, and guaranteed quality of service (QoS), the Integration Architect must recommend a middleware solution (ESB or iPaaS).
"True message queuing" involves holding messages in a persistent state until the target system is ready to receive them, handling sophisticated retry logic (such as exponential backoff), and providing dead-letter queues for failed messages. While Salesforce has basic asynchronous tools like Outbound Messaging or Platform Events, they lack the granular control over queuing and orchestration that enterprise middleware provides.
Option A is incorrect because performing heavy transformation and protocol translation (like XML to JSON or SOAP to REST) within Salesforce consumes excessive Apex CPU time and is better handled by middleware designed for that purpose.
Option B is conceptually backward; usually, architects move away from synchronous Request-Reply toward asynchronous Fire-and-Forget to improve scalability. By recommending a middleware solution to handle these infrastructure-level concerns, the architect ensures that Salesforce remains performant for its users while the middleware manages the technical complexities of reliably connecting the enterprise.
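Two of the middleware capabilities named above, exponential-backoff retries and dead-letter queues, can be sketched minimally. The delays are computed but not slept so the sketch runs instantly; real middleware would also persist messages durably.

```python
# Minimal sketch of middleware-style quality of service: retry with
# exponential backoff, then park undeliverable messages in a dead-letter
# queue instead of losing them.

def deliver(message, send, max_attempts=3, base_delay=1.0, dead_letters=None):
    """Try send(message) up to max_attempts times; dead-letter on failure."""
    for attempt in range(max_attempts):
        try:
            return send(message)
        except ConnectionError:
            delay = base_delay * 2 ** attempt  # 1s, 2s, 4s... (not slept here)
    if dead_letters is not None:
        dead_letters.append(message)  # parked for inspection and replay
    return None

dlq = []
attempts = {"n": 0}

def flaky(msg):
    """Fails twice, then succeeds, simulating a briefly offline target."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("target offline")
    return "ack"

print(deliver("order-42", flaky, dead_letters=dlq))
```

Salesforce Platform Events and Outbound Messaging offer limited retry behavior, but not this level of configurable backoff and dead-letter handling, which is the gap the middleware recommendation closes.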

Question No : 11


A customer is migrating from an old legacy system to Salesforce. As part of the modernization effort, the customer would like to integrate all existing systems that currently work with its legacy application with Salesforce.
Which constraint/pain-point should an integration architect consider when choosing the integration pattern/mechanism?

Answer:
Explanation:
When migrating from a legacy landscape to a modern platform like Salesforce, the most immediate technical hurdle is the diversity of system types and communication protocols used by the existing systems.
In a legacy environment, integrations are often not standardized. An architect may encounter systems that communicate via modern REST/SOAP APIs, but they will also likely find older systems that rely on Flat File exchanges (FTP/SFTP), Email-based triggers, or direct Database connections. These "System Types" are a fundamental constraint because they dictate the choice of integration middleware. For example, Salesforce cannot natively poll a file system or read an on-premise database; therefore, an architect must identify these constraints to justify the need for an ETL or ESB tool that can bridge these legacy protocols with Salesforce’s API-centric architecture.
While reporting (Option B) and multi-currency (Option C) are important functional requirements for the Salesforce implementation, they do not dictate the integration pattern (e.g., Request-Reply vs. Batch) as much as the technical interface of the source/target systems does. By evaluating the APIs, file systems, and email capabilities of the legacy landscape first, the architect ensures that the chosen integration mechanism (whether it be the Streaming API, Bulk API, or middleware orchestration) is technically capable of actually communicating with the legacy systems.

Question No : 12


Northern Trail Outfitters has had an increase in requests from other business units to integrate opportunity information with other systems from Salesforce. The developers have started writing asynchronous @future callouts directly into the target systems. The CIO is concerned about the viability of this approach and scaling for future growth.
What should be done to mitigate the CIO’s concerns?

Answer:
Explanation:
The CIO's concern regarding "viability" and "scaling" is rooted in the risks associated with tightly coupled, point-to-point integrations. Using @future methods for direct callouts creates a "spaghetti" architecture where Salesforce must manage the specific endpoints, authentication, and error logic for every external system.
The architect should recommend implementing an Enterprise Service Bus (ESB). An ESB acts as a centralized middleware layer that provides mediation, routing, and orchestration. By moving the integration logic to an ESB, Salesforce only needs to send a single message to the bus. The ESB then takes responsibility for delivering that data to multiple business units and external systems. This decouples Salesforce from the downstream systems; if a target system changes its API or is replaced, only the ESB configuration needs to be updated, not the Salesforce Apex code.
While External Services (Option A) provide a low-code way to call APIs, they still represent point-to-point connections and do not solve the broader orchestration and scaling challenges. ETL tools (Option C) are designed for bulk data movement and would not satisfy the need for the near real-time updates that the existing callout logic likely supports. An ESB provides the "quality of service" features (such as guaranteed delivery, retries, and protocol transformation) that are necessary for a growing enterprise to maintain a stable and scalable integration landscape.

Question No : 13


Northern Trail Outfitters (NTO) is planning to create a native employee-facing mobile app with the look and feel of Salesforce Lightning Experience. The mobile app needs to integrate with NTO’s Salesforce org.
Which Salesforce API should be used to implement this integration?

Answer:
Explanation:
When building custom mobile or web applications that aim to replicate the look and feel of Salesforce Lightning Experience, the User Interface (UI) API is the architecturally recommended choice.
The UI API is specifically designed to provide the metadata and data needed to build high-fidelity user interfaces. Unlike the standard REST API (Option B), which returns raw record data, the UI API returns both data and metadata in a single response. This includes information about page layouts, field-level security, picklist values, and localized labels. By using the UI API, the mobile app can dynamically render fields according to the user's permissions and the organization's layout configurations, ensuring that the custom app stays in sync with changes made in Salesforce Setup without requiring code updates in the mobile app.
Connect REST API (Option A) is primarily used for Chatter, Communities (Experience Cloud), and CMS content, and while it is useful for those specific social features, it does not provide the layout and record-level metadata required for a full CRM interface. The UI API is the same underlying technology that powers the Salesforce mobile app and Lightning Experience itself. Therefore, utilizing this API allows NTO's developers to build a native app that perfectly mimics the Lightning Experience while reducing the amount of custom logic needed to handle complex Salesforce UI requirements.

Question No : 14


Northern Trail Outfitters (NTO) uses Salesforce to track leads, opportunities, and order details that convert leads to customers. However, orders are managed by an external (remote) system. Sales reps want to view and update real-time order information in Salesforce. NTO wants the data to only persist in the external system.
Which type of integration should an architect recommend to meet this business requirement?

Answer:
Explanation:
The requirement to view and update data in real-time while ensuring the data only persists in the external system is the definition of a Data Virtualization pattern. In this architectural model, Salesforce does not store a local copy of the data (which would be Data Synchronization), but instead acts as a window into the external system of record.
An Integration Architect implements Data Virtualization primarily through Salesforce Connect. This tool allows the external system's order table to be represented as an External Object in Salesforce. Because the data is retrieved on-demand via a web service call (typically using the OData protocol), it is always "real-time." Furthermore, since Salesforce Connect supports writeable external objects, sales reps can update the order information directly from the Salesforce UI, and those changes are sent back to the external system immediately without being saved to the Salesforce database.
This approach is superior to Data Synchronization (Option A) in this specific use case because it eliminates the need for data storage costs and the complexity of keeping two databases in sync. It is also distinct from Process Orchestration (Option C), which focuses on the sequencing of tasks across multiple systems rather than the real-time presentation of external data. By utilizing Data Virtualization, NTO achieves a seamless user experience where external orders look and feel like native Salesforce records while strictly adhering to the "no persistence" constraint.

Question No : 15


A Salesforce customer is planning to roll out Salesforce for all of their sales and service staff. Senior management has requested that monitoring be in place for Operations to notify any degradation in Salesforce performance.
How should an Integration consultant implement monitoring?

Answer:
Explanation:
Effective operational monitoring focuses on the end-user experience and business outcomes rather than just raw technical metrics. An Integration consultant should identify critical business processes (e.g., "Lead Conversion" or "Order Processing") and establish benchmarks to detect performance degradation.
Monitoring purely technical limits (Option A) or individual API events (Option C) provides "noise" without context. For example, if API usage is high but the system is responding quickly, there is no degradation. However, if a critical process that normally takes 2 seconds starts taking 10 seconds, that is a clear indicator of a performance issue that impacts the business.
The consultant should use tools like Salesforce Event Monitoring or external APM (Application Performance Management) tools to track the execution time of these key transactions. By setting alerts when performance deviates from established benchmarks, Operations can be proactively notified before users begin to lose productivity or abandon the system. This holistic approach ensures that monitoring is aligned with business value and provides actionable insights for troubleshooting bottlenecks in code, automation, or integrations.
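The benchmark-based alerting described above can be sketched with a simple percentile check. The degradation factor and sample values are illustrative assumptions; a real setup would feed Event Monitoring or APM transaction timings into the same logic.

```python
# Sketch of benchmark-based alerting: flag a key business transaction when
# its recent 95th-percentile duration drifts well above the baseline.

def percentile(samples, p):
    """Nearest-rank style percentile over a small sample set."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(p / 100 * (len(ordered) - 1))))
    return ordered[idx]

def degraded(baseline_p95, recent_samples, factor=2.0):
    """True if the recent p95 exceeds the benchmark by the given factor."""
    return percentile(recent_samples, 95) > factor * baseline_p95

# Baseline: "Lead Conversion" normally completes in about 2 seconds (p95).
print(degraded(2.0, [1.8, 2.1, 2.0, 1.9]))    # within benchmark
print(degraded(2.0, [9.5, 10.2, 9.8, 11.0]))  # clear degradation
```

Alerting on deviation from a business-process benchmark, rather than on raw limit consumption, is what makes the notification actionable for Operations.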
