Saturday, October 25, 2025

UPSC (2026) Prelims Practice Questions: AI Governance & Cybersecurity

1. The term "Agentic AI," as discussed in the context of cybersecurity, refers to AI systems that can:
(a) Only generate text and images based on user prompts.
(b) Operate autonomously to identify, analyze, and respond to threats.
(c) Replace human workers in all operational roles.
(d) Function without any underlying data models.

Answer: (b) Operate autonomously to identify, analyze, and respond to threats.

Explanation: The article mentions IBM's ATOM (Autonomous Threat Operations Machine) as an example of agentic AI that can "identify a threat, analyse it, assign a risk score, and take action, all within minutes." This highlights autonomous operation in cybersecurity, not just content generation or human replacement.

2. Consider the following statements regarding the evolving nature of cybersecurity:

  1. It is shifting from a compliance-based approach to one of continuous vigilance.

  2. Traditional Security Information and Event Management (SIEM) systems are evolving from being reactive to proactive.

  3. Post-mortem analysis after a security breach is now considered more critical than pre-emptive action.

Which of the statements given above is/are correct?
(a) 1 and 2 only
(b) 2 and 3 only
(c) 1 and 3 only
(d) 1, 2 and 3

Answer: (a) 1 and 2 only

Explanation: The article states that cybersecurity is no longer about annual compliance but demands "continuous vigilance," making Statement 1 correct. It also says SIEM systems are evolving from "reactive to proactive," making Statement 2 correct. Statement 3 is incorrect because the article explicitly says, "Nobody is interested in post-mortems anymore. What you can do before an incident happens is more critical."

3. The strategy of "Containment" in AI deployment, as illustrated in the article, primarily involves:
(a) Isolating AI systems from the internet permanently.
(b) Training AI models exclusively on internal, controlled data.
(c) Slowing down the adoption of AI to ensure security.
(d) Using only open-source AI models to ensure transparency.

Answer: (b) Training AI models exclusively on internal, controlled data.

Explanation: The example from Mahindra Logistics explains that to manage risk and ensure accuracy, they "trained our models only on internal data. That way, the models stay within our ecosystem." This is the practical application of the containment strategy discussed.

4. In the context of AI security, "IAM" is critical. What does IAM stand for?
(a) Intelligent Asset Management
(b) Integrated Application Module
(c) Identity and Access Management
(d) Indian Accounting Manual

Answer: (c) Identity and Access Management

Explanation: The article states, "identity and access management have moved to the forefront," and specifically uses the acronym "IAM" in the context of securing models exposed externally.

5. According to the article, which of the following is identified as a new and emerging threat vector in AI systems?
(a) Hardware failure
(b) Power outages
(c) Data poisoning and prompt injection
(d) User interface design flaws

Answer: (c) Data poisoning and prompt injection

Explanation: Dhiraj Kumar from New India Assurance Co. Ltd. explicitly warns of "new vectors such as data poisoning and prompt injection." These are specific techniques to corrupt AI models or manipulate their outputs.

6. The concept of "AI audit trails" is gaining importance primarily because it addresses the need for:
(a) Reducing the computational cost of AI models.
(b) Increasing the speed of AI-driven decisions.
(c) Explainability and documenting how a model arrives at its conclusions.
(d) Making AI models accessible to non-technical staff.

Answer: (c) Explainability and documenting how a model arrives at its conclusions.

Explanation: The article directly states, "In regulated sectors, explainability is the new compliance... It’s not enough to say the model works; we must be able to show how it works." AI audit trails are the digital records that provide this documentation.

7. What is the primary purpose of establishing a "Responsible AI Office" or an ethics board within an organization, as discussed in the article?
(a) To accelerate the development of new AI products.
(b) To secure more funding for AI projects.
(c) To enforce accountability and ethical oversight of AI use.
(d) To manage the company's social media accounts.

Answer: (c) To enforce accountability and ethical oversight of AI use.

Explanation: Geeta Gurnani from IBM mentions that "ethics boards and responsible AI offices are taking shape, and every user or producer must be accountable." This underscores their role in governance and ethical oversight, not in development speed or funding.

8. The article suggests that the most effective approach to AI governance is to:
(a) Limit AI use to a small, central team of experts.
(b) Make it an integral part of the organizational DNA, with shared ownership across teams.
(c) Outsource all AI-related security to third-party vendors.
(d) Pause all AI development until international regulations are finalized.

Answer: (b) Make it an integral part of the organizational DNA, with shared ownership across teams.

Explanation: Hitesh Talreja's observation, "It can’t be one person or one team. It has to enter the organisation’s DNA," and Siddharth Sureka's point that "Ownership must run across the organisation" clearly indicate that a decentralized, culture-wide approach is considered essential.

9. According to the discussion, what is often the "weakest link" in most AI security frameworks?
(a) The underlying algorithms of the AI models.
(b) The physical infrastructure hosting the AI.
(c) Human behavior.
(d) The speed of internet connectivity.

Answer: (c) Human behavior.

Explanation: The article explicitly states, "Many noted that the weakest link in most security frameworks remains human behaviour, not code."

10. The core message of the "AI@Work" roundtable discussion can best be summarized as:
(a) AI's potential for revenue growth outweighs all associated risks.
(b) Governance and cybersecurity are foundational, not optional, for trustworthy AI adoption.
(c) India is not yet ready to adopt AI at an enterprise level.
(d) The primary goal of AI should be to achieve full automation and remove human oversight.

Answer: (b) Governance and cybersecurity are foundational, not optional, for trustworthy AI adoption.

Explanation: The concluding paragraph encapsulates this core message: "AI is no longer experimental; it’s infrastructural. That makes governance and cybersecurity not just best practice — but the very foundation of trust in the age of intelligent systems."
