What mechanisms drive agent behaviour?

Analysis (Software) Components

  1. Agent: Typically this is an agent provided to us by an agent builder. It could be an IMPALA agent that has been meta-trained on a distribution over grid-world mazes. Often the agent builders already have a few specific questions they’d like us to investigate.
  2. Simulator — “the agent debugger”: Our experimentation platform. With it, we can simulate the agent and run experiments. Furthermore, it allows us to perform all sorts of operations we’d usually expect from a debugger, such as stepping forward/backward in the execution trace, setting breakpoints, and setting/monitoring variables.
    We also use the simulator to generate data for estimating statistical parameters. Since we can manipulate factors in the environment, the data we collect is typically interventional and thus contains causal information. This is illustrated in Figure 1 below, and a minimal code sketch of the interface follows Figure 2.
  3. Causal reasoning engine: This automated reasoning system allows us to specify and query causal models with associational, interventional, and counterfactual questions. We use these models to validate causal hypotheses. A model is shown in Figure 2 below.
Figure 1. The simulator: our experimentation platform. Starting from an initial state (root node, upper-left) the simulator allows us to execute a trace of interactions. We can also perform interventions, such as changing the random seed, forcing the agent to pick desired actions, and manipulating environmental factors. These interventions create new branches of the execution trace.
Figure 2. A causal model, represented as a causal Bayesian network.
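
To make the simulator concrete, here is a minimal sketch of the kind of interface we have in mind. The `Simulator` and `TraceNode` classes, and the `env`/`agent` objects with `reset`/`step`/`act` methods, are hypothetical illustrations of the idea, not our actual implementation:

```python
# A minimal sketch of a branching execution trace with interventions.
# `env` and `agent` are hypothetical stand-ins for any objects exposing
# reset/step and act; all names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional

@dataclass
class TraceNode:
    """One state in the execution trace; interventions create branches."""
    state: Any
    parent: Optional["TraceNode"] = None
    children: List["TraceNode"] = field(default_factory=list)

class Simulator:
    def __init__(self, env, agent, seed=0):
        self.env, self.agent = env, agent
        self.root = TraceNode(state=env.reset(seed=seed))
        self.cursor = self.root  # debugger-style position in the trace

    def _branch(self, state):
        node = TraceNode(state=state, parent=self.cursor)
        self.cursor.children.append(node)
        self.cursor = node
        return node

    def step(self, forced_action=None):
        """Advance one step, optionally forcing the agent's action."""
        action = (forced_action if forced_action is not None
                  else self.agent.act(self.cursor.state))
        return self._branch(self.env.step(action))

    def rewind(self, steps=1):
        """Step backward through the trace, like a debugger."""
        for _ in range(steps):
            if self.cursor.parent is not None:
                self.cursor = self.cursor.parent

    def intervene(self, edit: Callable[[Any], Any]):
        """Manipulate an environmental factor, creating a new branch."""
        return self._branch(edit(self.cursor.state))
```

Rewinding and then stepping or intervening again is what produces the branching trace shown in Figure 1.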

Analysis Methodology

  1. Exploratory analysis: We place the trained agent into one or more test environments and probe its behaviour. This gives us a sense of the relevant factors of behaviour, and it is the starting point for formulating our causal hypotheses.
  2. Identify the relevant abstract variables: We choose a collection of variables that we deem relevant for addressing our questions. For instance, possible variables are: “does the agent collect the key?”, “is the door open?”, etc.
  3. Gather data: We perform experiments in order to collect statistics for specifying the conditional probability tables in our causal model. Typically this involves producing thousands of rollouts under different conditions/interventions (a sketch follows this list).
  4. Formulate the causal model: We formulate a structural causal model (SCM) encapsulating all causal and statistical assumptions. This is our explanation for the agent’s behaviour.
  5. Query the causal model: Finally, we query the causal model to answer the questions we have about the agent.
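
As a sketch of step 3, interventional rollouts can be turned into a conditional probability table by simple Monte Carlo counting. Here `run_rollout` is a hypothetical helper that executes one episode under a given intervention and returns the values of our abstract variables, e.g. `{"R": "left", "T": "left"}`:

```python
# Hypothetical sketch: estimate P(variable | do(intervention)) from rollouts.
from collections import Counter

def estimate_table(run_rollout, interventions, variable="T", n_rollouts=1000):
    """Estimate the distribution of `variable` under each intervention."""
    tables = {}
    for name, intervention in interventions.items():
        counts = Counter(
            run_rollout(intervention)[variable] for _ in range(n_rollouts))
        tables[name] = {v: c / n_rollouts for v, c in counts.items()}
    return tables

# Example: compare rollouts where we force the reward pill left vs. right.
# tables = estimate_table(run_rollout,
#                         {"do(R=left)": {"R": "left"},
#                          "do(R=right)": {"R": "right"}})
```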

Example: Causal effects under confounding

Figure 3. Grass-Sand environments: In these two T-shaped mazes, the agent can choose one of two terminal states, only one of which contains a rewarding pill. During tests, we observe that a pre-trained agent always successfully navigates to the location of the pill.
Figure 4. Causal model for the grass-sand environment. The variables are C (confounder), R (location of reward pill), F (type of floor), and T (choice of terminal state).
  1. Association between T and R: Given the location of the reward pill, do agents pick the terminal at the same location? Formally, this is
    P(T = left | R = left) and P(T = right | R = right).
  2. Causation from R to T: Given that we set the location of the reward pill, do agents pick the terminal at the same location? In other words, can we causally influence the agent’s choice by changing the location of the reward? Formally, this is given by
    P(T = left | do(R = left)) and P(T = right | do(R = right)).
  3. Causation from F to T: Finally, we want to investigate whether our agents are sensitive to the floor type. Can we influence the agent's choice by setting the floor type? To answer this, we could query the probabilities
    P(T = left | do(F = grass)) and P(T = right | do(F = sand)).
    All three queries are computed in the sketch below.
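
As a minimal sketch, the Figure 4 model can be written down and queried by exact enumeration over the confounder. The mechanisms below are illustrative assumptions (deterministic, and supposing the agent learned to follow the floor type F rather than the reward R), not measurements from a real agent:

```python
# Illustrative causal Bayesian network for the grass-sand example (Figure 4).
P_C = {0: 0.5, 1: 0.5}                    # confounder C
f_R = {0: "left", 1: "right"}             # R := f(C): reward location
f_F = {0: "grass", 1: "sand"}             # F := f(C): floor type
f_T = {"grass": "left", "sand": "right"}  # T := f(F): agent follows the floor

def query(t, condition=None, do=None):
    """P(T = t | condition, do(...)) by enumerating the confounder C."""
    do = do or {}
    num = den = 0.0
    for c, p in P_C.items():
        r = do.get("R", f_R[c])  # do() overrides the natural mechanism
        f = do.get("F", f_F[c])
        sample = {"C": c, "R": r, "F": f, "T": f_T[f]}
        if condition and any(sample[k] != v for k, v in condition.items()):
            continue
        den += p
        num += p * (sample["T"] == t)
    return num / den

print(query("left", condition={"R": "left"}))  # association:       1.0
print(query("left", do={"R": "left"}))         # causation from R:  0.5
print(query("left", do={"F": "grass"}))        # causation from F:  1.0
```

Under these assumptions the association between R and T is perfect, yet intervening on R has no effect beyond chance: the correlation is produced entirely by the confounder, and only the intervention on F reveals the true mechanism.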

More examples

  1. Testing for memory use: An agent with limited visibility (it can only see its adjacent tiles) has to remember a cue at the beginning of a T-maze. The cue tells it where to go to collect a rewarding pill (left or right exit). You observe that the agent always picks the correct exit. How would you test whether it is using its internal memory to solve the task? (One possible approach is sketched after this list.)
  2. Testing for generalisation: An agent is placed in a square room where there is a reward pill placed in a randomly chosen location. You observe that the agent always collects the reward. How would you test whether this behaviour generalises?
  3. Estimating a counterfactual behaviour: There are two doors, each leading into a room containing a red and a green reward pill. Only one door is open, and you observe the agent picking up the red pill. If the other door had been open instead, what would the agent have done?
  4. Which is the correct causal model? You observe several episodes in which two agents, red and blue, simultaneously move one step in mostly the same direction. You know that one of them chooses the direction and the other tries to follow. How would you find out who's the leader and who's the follower?
  5. Understanding the causal pathways leading up to a decision: An agent starts in a room with a key and a door leading to a room with a reward pill. Sometimes the door is open, and other times the door is closed and the agent has to use the key to open it. How would you test whether the agent understands that the key is only necessary when the door is closed?
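
As a hint for the first exercise, one option is to intervene on the agent's internal state rather than on the environment. The sketch below assumes hypothetical hooks (`run_until`, `run_to_exit`, `set_memory`, `recorded_memories`, `correct_exit`) that a simulator like the one above could expose; none of these names come from a real API:

```python
# Hypothetical sketch: test for memory use by scrambling the agent's
# recurrent state after the cue is out of sight. If success drops to
# ~0.5, the cue was carried in memory rather than read off the maze.
import random

def memory_scramble_test(simulator, agent, n_trials=500):
    hits = 0
    for _ in range(n_trials):
        simulator.reset()
        simulator.run_until(lambda s: not s.cue_visible)  # cue now hidden
        # Intervention on the internal state: swap in the memory recorded
        # in a different, randomly chosen episode.
        agent.set_memory(random.choice(simulator.recorded_memories))
        hits += simulator.run_to_exit() == simulator.correct_exit
    return hits / n_trials
```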
