Question No : 1
How do you sandbox unsafe model execution?
Answer:
Explanation:
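One hedged sketch of an approach: load or execute the untrusted model inside a child process that carries hard CPU and memory limits, so a hostile loader cannot exhaust the host. The function name `run_sandboxed` is illustrative, and `preexec_fn`/`resource` are POSIX-only; a production sandbox would add filesystem and network isolation (e.g. containers or seccomp) on top.

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, cpu_seconds: int = 5, mem_bytes: int = 512 * 1024**2):
    """Run untrusted Python (e.g. a model's loading code) in a child process
    with hard CPU and address-space limits (POSIX only)."""
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True,
        timeout=cpu_seconds + 5,      # wall-clock backstop
        preexec_fn=apply_limits,      # limits apply only to the child
    )
```

A runaway or malicious payload then hits the rlimits or the timeout instead of taking down the host process.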
Question No : 2
How can a dependency confusion attack affect AI repos?
Answer:
Explanation:
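A minimal sketch of the mechanics, with hypothetical package names: if an internal package name also exists on PyPI, a resolver configured with both indexes (e.g. via `--extra-index-url`) may pick whichever index advertises the higher version, which is exactly what a dependency-confusion attacker exploits by publishing an inflated version number publicly.

```python
def confusable_names(internal_pkgs, public_index):
    """Internal package names that also exist on the public index;
    each one is a potential dependency-confusion target."""
    return sorted(set(internal_pkgs) & set(public_index))

def resolve_source(internal_ver: str, public_ver: str) -> str:
    """Model the risky behavior: with two indexes configured, the copy
    with the higher version wins, wherever it comes from."""
    def key(v):
        return tuple(map(int, v.split(".")))
    return "public" if key(public_ver) > key(internal_ver) else "internal"
```

Auditing your internal names against the public index, and serving internal packages from a single authoritative index, closes the gap.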
Question No : 3
How do you detect unwanted outbound calls from ML tools?
Answer:
Explanation:
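One hedged in-process technique (class name `EgressGuard` is illustrative): temporarily wrap `socket.socket.connect` so every connection attempt made while exercising an ML tool is recorded, and anything off an allowlist is refused. This complements, rather than replaces, host-level monitoring such as firewall or DNS logs.

```python
import socket

class EgressGuard:
    """Context manager that intercepts socket.connect, records every
    attempted destination, and blocks hosts not on the allowlist."""
    def __init__(self, allow=()):
        self.allow = set(allow)
        self.attempts = []
    def __enter__(self):
        self._orig = socket.socket.connect
        guard = self
        def connect(sock, addr):
            guard.attempts.append(addr)
            host = addr[0] if isinstance(addr, tuple) else addr
            if host not in guard.allow:
                raise PermissionError(f"blocked outbound connection to {addr!r}")
            return guard._orig(sock, addr)
        socket.socket.connect = connect
        return self
    def __exit__(self, *exc):
        socket.socket.connect = self._orig
        return False
```

Running a suspect library's import or inference step inside the guard surfaces any telemetry or exfiltration endpoints it tries to reach.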
Question No : 4
How can Docker Hub supply chain attacks affect AI workloads?
Answer:
Explanation:
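A mutable tag like `:latest` can be silently repointed at a compromised image, so one standard mitigation is to pin base images by immutable sha256 digest. A small checker for that policy (function name is illustrative):

```python
import re

# An image reference pinned by digest ends in "@sha256:<64 hex chars>".
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_pinned(image_ref: str) -> bool:
    """True if the image reference is pinned to an immutable digest
    rather than a mutable tag such as ':latest'."""
    return bool(DIGEST_RE.search(image_ref))
```

Scanning Dockerfiles and compose files for unpinned references catches workloads that would re-pull whatever the tag currently points at.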
Question No : 5
How do you identify whether model weights were tampered with?
Answer:
Explanation:
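A minimal stdlib sketch of the usual integrity check: hash the weight file and compare it, in constant time, against a digest published by the model's author (the function names here are illustrative).

```python
import hashlib
import hmac

def sha256_file(path: str, block: int = 1 << 20) -> str:
    """Stream a (possibly huge) weight file through SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(block):
            h.update(data)
    return h.hexdigest()

def weights_intact(path: str, published_sha256: str) -> bool:
    """Constant-time comparison against the publisher's digest."""
    return hmac.compare_digest(sha256_file(path), published_sha256.lower())
```

Any post-publication modification, even a single flipped bit, changes the digest; cryptographic signatures over the digest add authenticity on top of integrity.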
Question No : 6
Can setup.py scripts be weaponized in LLM projects?
Answer:
Explanation:
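Yes: `setup.py` is arbitrary Python executed at install time. A hedged static-audit sketch (the risky-name lists are illustrative, not exhaustive) walks the AST and flags imports and calls commonly seen in install-time payloads:

```python
import ast

RISKY_IMPORTS = {"subprocess", "socket", "urllib", "ctypes"}
RISKY_CALLS = {"eval", "exec", "system"}

def audit_setup_py(source: str) -> list[str]:
    """Flag imports and calls in a setup.py that legitimate build
    scripts rarely need (heuristic, not a proof of safety)."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            findings += [f"import {a.name}" for a in node.names
                         if a.name.split(".")[0] in RISKY_IMPORTS]
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "").split(".")[0] in RISKY_IMPORTS:
                findings.append(f"from {node.module} import ...")
        elif isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in RISKY_CALLS:
                findings.append(f"call to {name}()")
    return findings
```

A clean result is not a guarantee, but a hit on an sdist you are about to install is a strong signal to stop and read the script.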
Question No : 7
How does pip resolve unpinned dependencies to vulnerable versions?
Answer:
Explanation:
Question No : 8
How do you verify the authenticity of a model on HuggingFace?
Answer:
Explanation:
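Beyond checking the publisher's identity and pinning a specific repository revision when downloading, you can verify the downloaded files themselves. A hedged offline sketch, assuming the publisher supplies a sha256 manifest (`{filename: hex digest}` is a hypothetical format, not a Hub standard): verify every listed file and additionally flag pickle-based formats, which can execute code on load, in favor of safetensors.

```python
import hashlib
import pathlib

PICKLE_SUFFIXES = {".bin", ".pt", ".pkl", ".ckpt"}  # pickle-based: code runs on load

def verify_model_dir(model_dir: str, manifest: dict[str, str]) -> list[str]:
    """Compare downloaded files against a publisher-supplied sha256
    manifest and flag pickle-based artifacts."""
    problems = []
    root = pathlib.Path(model_dir)
    for name, expected in manifest.items():
        digest = hashlib.sha256((root / name).read_bytes()).hexdigest()
        if digest != expected:
            problems.append(f"hash mismatch: {name}")
    for f in root.iterdir():
        if f.suffix in PICKLE_SUFFIXES:
            problems.append(f"pickle-based format: {f.name}")
    return problems
```

An empty result means the files match what the publisher hashed and contain no pickle-based payload carriers.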
Question No : 9
How can typosquatting infect an ML pipeline?
Answer:
Explanation:
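Typosquatting relies on one-keystroke mistakes (`numpi` for `numpy`), so a simple defense is screening requested names for near-misses against a trusted list. A minimal sketch using plain edit distance (function names illustrative):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def typosquat_suspects(requested: str, trusted_pkgs, max_dist: int = 1):
    """Trusted names the requested package is suspiciously close to
    (but not identical to)."""
    return [t for t in trusted_pkgs if 0 < levenshtein(requested, t) <= max_dist]
```

Wiring such a check into a pre-install hook turns a silent malicious install into an explicit warning.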
Question No : 10
Why is it risky to use abandoned PyTorch or TensorFlow model checkpoints?
Answer:
Explanation:
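One concrete reason: classic PyTorch/TensorFlow checkpoint formats are pickle-based, and an unmaintained checkpoint nobody audits can carry a code-executing payload for years. A hedged stdlib sketch that inspects what a pickle would import, without ever loading it (opcode handling is simplified to the common `GLOBAL`/`STACK_GLOBAL` cases):

```python
import pickletools

def pickle_globals(data: bytes) -> list[str]:
    """Statically list the module.name references a pickle would import
    when loaded, without calling pickle.loads on it."""
    found, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":              # protocols 0-3: arg is "module name"
            found.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL":      # protocol 4+: pops two strings
            found.append(".".join(strings[-2:]))
        elif "UNICODE" in opcode.name or "STRING" in opcode.name:
            strings.append(arg)
    return found
```

A legitimate weights file should reference only tensor/container classes; anything like `os.system` or `subprocess` in the list is a red flag.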
Question No : 11
How do pre-trained models hide malicious payloads?
Answer:
Explanation:
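Pickle-format model files can smuggle payloads because deserialization may import and call arbitrary objects. One standard mitigation sketch (the allowlist here is a tiny illustrative set): a restricted unpickler that refuses every global not explicitly approved.

```python
import io
import pickle

# Illustrative allowlist; a real one would cover the tensor/container
# classes your framework actually serializes.
SAFE_GLOBALS = {("collections", "OrderedDict"), ("builtins", "dict")}

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on the allowlist."""
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Benign containers load normally, while a payload that tries to resolve something like `os.system` fails before any code runs.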
Question No : 12
How can GitHub Actions auto-install a poisoned dependency?
Answer:
Explanation:
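A workflow that references third-party actions by mutable tag (`uses: some/action@v1`) re-fetches whatever that tag points at on every run, so a hijacked tag means a poisoned step. A small audit sketch (regex-based, so a heuristic rather than a full YAML parser) that flags any `uses:` reference not pinned to a full commit SHA:

```python
import re

USES = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return 'uses:' references that are not pinned to an immutable
    40-character commit SHA."""
    return [f"{action}@{ref}" for action, ref in USES.findall(workflow_text)
            if not FULL_SHA.fullmatch(ref)]
```

The same pinning discipline applies to `pip install` steps inside the workflow: unpinned installs there pull whatever the index currently serves.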
Question No : 13
How do you simulate an attack from a malicious AI model downloaded from the internet?
Answer:
Explanation:
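A hedged red-team sketch with a deliberately benign payload (all names here are illustrative, and the payload only creates a marker file): build a pickle "model" whose `__reduce__` runs a command, then load it the way a victim's `pickle.loads`-based loader would, and check whether the payload fired. POSIX-only (`touch`); run it only in an isolated environment.

```python
import os
import pickle
import tempfile

MARKER = os.path.join(tempfile.gettempdir(), "poc_model_marker")

class PoCModel:
    """Benign stand-in for a malicious checkpoint: whatever __reduce__
    returns is executed the moment the blob is unpickled."""
    def __reduce__(self):
        return (os.system, (f"touch {MARKER}",))

def run_simulation() -> bool:
    """Craft the 'downloaded model', load it as a victim would, and
    report whether the payload executed."""
    if os.path.exists(MARKER):
        os.remove(MARKER)
    blob = pickle.dumps(PoCModel())   # attacker side: craft the artifact
    pickle.loads(blob)                # victim side: payload fires here
    return os.path.exists(MARKER)
```

Swapping the marker command for network or filesystem probes lets you test whether your sandboxing and egress monitoring actually catch the load-time behavior.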
Question No : 14
What’s a safe way to install third-party AI libraries?
Answer:
Explanation:
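One widely used safeguard is pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`), which refuses any artifact whose digest differs from the recorded one. A small sketch of producing such a pinned line from a downloaded wheel (function name illustrative):

```python
import hashlib

def hash_pinned_requirement(name: str, version: str, wheel_bytes: bytes) -> str:
    """Build a requirements.txt line suitable for pip's --require-hashes
    mode from a vetted wheel's contents."""
    digest = hashlib.sha256(wheel_bytes).hexdigest()
    return f"{name}=={version} --hash=sha256:{digest}"
```

Installing into an isolated virtual environment from such a file means a later re-upload or hijack of the package cannot silently change what you install.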
Question No : 15
How can requirements.txt introduce supply chain risks?
Answer:
Explanation:
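The risks concentrate in three patterns: unpinned or range specifiers (future releases install automatically), direct URL/VCS dependencies (content can change under the same reference), and pins without hashes (a re-uploaded artifact still installs). A simplified line-by-line audit sketch (heuristic; it does not parse the full requirements grammar):

```python
def audit_requirements(text: str) -> list[str]:
    """Flag requirement lines that leave room for supply-chain drift."""
    issues = []
    for raw in text.splitlines():
        line = raw.split("#")[0].strip()   # drop comments and blanks
        if not line:
            continue
        if line.startswith(("git+", "http://", "https://")):
            issues.append(f"direct URL dependency: {line}")
        elif "==" not in line:
            issues.append(f"unpinned: {line}")
        elif "--hash=" not in line:
            issues.append(f"pinned but unhashed: {line}")
    return issues
```

An empty report means every dependency is pinned to an exact, hash-verified artifact.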