UPSC (2026)Prelims Practice Questions: AI Governance & Cybersecurity
Answer: (b) Operate autonomously to identify, analyze, and respond to threats.
Explanation: The article mentions IBM's ATOM (Autonomous Threat Operations Machine) as an example of agentic AI that can "identify a threat, analyse it, assign a risk score, and take action, all within minutes." This highlights autonomous operation in cybersecurity, not just content generation or human replacement.
2. Consider the following statements regarding the evolving nature of cybersecurity:
It is shifting from a compliance-based approach to one of continuous vigilance.
Traditional Security Information and Event Management (SIEM) systems are evolving from being reactive to proactive.
Post-mortem analysis after a security breach is now considered more critical than pre-emptive action.
Answer: (a) 1 and 2 only
Explanation: The article states that cybersecurity is no longer about annual compliance but demands "continuous vigilance," making Statement 1 correct. It also says SIEM systems are evolving from "reactive to proactive," making Statement 2 correct. Statement 3 is incorrect because the article explicitly says, "Nobody is interested in post-mortems anymore. What you can do before an incident happens is more critical."
Answer: (b) Training AI models exclusively on internal, controlled data.
Explanation: The example from Mahindra Logistics explains that to manage risk and ensure accuracy, they "trained our models only on internal data. That way, the models stay within our ecosystem." This is the practical application of the containment strategy discussed.
Answer: (c) Identity and Access Management
Explanation: The article states, "identity and access management have moved to the forefront," and specifically uses the acronym "IAM" in the context of securing models exposed externally.
Answer: (c) Data poisoning and prompt injection
Explanation: Dhiraj Kumar from New India Assurance Co. Ltd. explicitly warns of "new vectors such as data poisoning and prompt injection." These are specific techniques to corrupt AI models or manipulate their outputs.
Answer: (c) Explainability and documenting how a model arrives at its conclusions.
Explanation: The article directly states, "In regulated sectors, explainability is the new compliance... It’s not enough to say the model works; we must be able to show how it works." AI audit trails are the digital records that provide this documentation.
Answer: (c) To enforce accountability and ethical oversight of AI use.
Explanation: Geeta Gurnani from IBM mentions that "ethics boards and responsible AI offices are taking shape, and every user or producer must be accountable." This underscores their role in governance and ethical oversight, not in development speed or funding.
Answer: (b) Make it an integral part of the organizational DNA, with shared ownership across teams.
Explanation: Hitesh Talreja's question, "It can’t be one person or one team. It has to enter the organisation’s DNA," and Siddharth Sureka's point that "Ownership must run across the organisation" clearly indicate that a decentralized, cultural approach is considered essential.
Answer: (c) Human behavior.
Explanation: The article explicitly states, "Many noted that the weakest link in most security frameworks remains human behaviour, not code."
Answer: (b) Governance and cybersecurity are foundational, not optional, for trustworthy AI adoption.
Explanation: The concluding paragraph encapsulates this core message: "AI is no longer experimental; it’s infrastructural. That makes governance and cybersecurity not just best practice — but the very foundation of trust in the age of intelligent systems."
No comments:
Post a Comment