
Aggregate Subjects

reductive vs aggregate

What is shared intention?

Functional characterisation:

shared intention serves to (a) coordinate activities, (b) coordinate planning and (c) structure bargaining

Constraint:

Inferential integration... and normative integration (e.g. agglomeration)

Substantial account:

We have a shared intention that we J if

‘1. (a) I intend that we J and (b) you intend that we J

‘2. I intend that we J in accordance with and because of 1a, 1b, and meshing subplans of 1a and 1b; you intend [likewise] …

‘3. 1 and 2 are common knowledge between us’

(Bratman 1993: View 4)

All of the intentions have individual subjects

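To make the structural point vivid, here is a minimal sketch (my own illustration, not Bratman’s formalism; the class and parameter names are invented) of the account as a pattern of attitudes each of which has an individual subject:

```python
from dataclasses import dataclass

@dataclass
class Intention:
    subject: str   # always an individual, e.g. "me" or "you"
    content: str   # e.g. "that we J"

def bratman_shared_intention(mine: Intention, yours: Intention,
                             subplans_mesh: bool,
                             common_knowledge: bool) -> bool:
    """Rough check of clauses 1-3: each of us individually intends that we J,
    our subplans mesh (clause 2, heavily simplified), and all of this is
    common knowledge (clause 3)."""
    return (mine.content == "that we J"
            and yours.content == "that we J"
            and mine.subject != yours.subject
            and subplans_mesh
            and common_knowledge)

# Every attitude in the construction has an individual as its subject;
# nothing here is an intention whose subject is the pair as such.
```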

reductive vs aggregate

On accounts like Bratman’s or Gilbert’s, ‘it makes some sense to say that the result is a kind of shared action: the individual people are, after all, acting intentionally throughout.

However, in a deeper sense, the activity is not shared: the group itself is not engaged in action whose aim the group finds worthwhile, and so the actions at issue here are merely those of individuals.

Thus, these accounts ... fail to make sense of a ... part of the landscape of social phenomena’

(Helm, 2008, pp. 20--1)


Start with Helm’s challenge (because I can answer it at the end).
How to make sense of this idea?

How?

aggregate subject

I think Helm wants what I will call an ‘aggregate subject’. (He uses the term ‘plural robust agent’, but this is because he ignores a distinction between aggregate and plural subjects which will be important later.)
Meet an aggregate animal, the Portuguese man o' war (Physalia physalis), which is composed of polyps.
Here you can say that ‘the group [of polyps] itself’ is engaged in action which is not just a matter of the polyps all acting.
To illustrate, consider how it eats. Wikipedia: ‘Contractile cells in each tentacle drag the prey into range of the digestive polyps, the gastrozooids, which surround and digest the food by secreting enzymes that break down proteins, carbohydrates, and fats, while the gonozooids are responsible for reproduction.’
This jellyfish-like animal is a crude model for the sort of aggregate agent Helm (and others) suggest we need.
But how can such a thing exist? Humans do not mechanically attach themselves in the way that the polyps making up that jellyfish-like animal do.
So how are aggregate agents possible?

‘[...] a distinctive mode of practical reasoning, team reasoning, in which agency is attributed to groups.’

(Gold & Sugden, 2007)
So these researchers are aiming to build a kind of aggregate subject.
They think, in a nutshell, that aggregate subjects are not only a consequence of self-reflection, but can also arise through (a special mode of) reasoning about what to do.

Gold and Sugden (2006)

Need a bit of a recap (or even first exposure ...)
(To do: repeat the full characterisation from the previous lecture? Not everyone got it!)

‘somebody team reasons if she works out the best possible feasible combination of actions for all the members of her team, then does her part in it.’

(Bacharach, 2006, p. 121)
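As a rough illustration of the procedure (my own sketch, not Bacharach’s formal framework; the payoff function and action names below are invented), team reasoning can be read as: search the feasible combinations of actions for the whole team, pick the one the team ranks best, then do your own part of it.

```python
from itertools import product

def team_reason(my_role, actions_by_role, team_payoff):
    """Work out the best feasible combination of actions for all members
    of the team (as ranked by team_payoff), then return my part in it."""
    roles = sorted(actions_by_role)
    best_profile = max(
        (dict(zip(roles, combo))
         for combo in product(*(actions_by_role[r] for r in roles))),
        key=team_payoff,
    )
    return best_profile[my_role]

# Invented two-person coordination example: lifting a table together.
actions = {"me": ["lift", "wait"], "you": ["lift", "wait"]}
team_payoff = lambda p: 2 if (p["me"] == "lift" and p["you"] == "lift") else 0
print(team_reason("me", actions, team_payoff))   # -> "lift"
```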

What counts as best possible?

‘An individual who engages in team-directed reasoning appraises alternative arrays of actions by members of the team in relation to [...] team-directed preferences.’

(Sugden, 2000)
Preferences of a team work just like preferences of an individual. (The team really is considered as an aggregate agent.)

‘At the level of the team, team preference is a ranking of outcomes which is revealed in the team's decisions.’

(Sugden, 2000)

We are talking about an aggregate as an agent: the team decides.
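To see why it matters that the ranking belongs to the team, consider the Hi-Lo game (a textbook example from the team-reasoning literature, not quoted in this section; the code is my own sketch): individual best-response reasoning leaves two equilibria, whereas a team preference over outcomes straightforwardly selects (Hi, Hi).

```python
# Hi-Lo: both players get 2 if both choose "Hi", 1 if both choose "Lo",
# and 0 if they mismatch. Team preference is a ranking of outcomes
# (joint action profiles), here by the team's payoff.
payoffs = {
    ("Hi", "Hi"): 2,
    ("Lo", "Lo"): 1,
    ("Hi", "Lo"): 0,
    ("Lo", "Hi"): 0,
}

team_ranking = sorted(payoffs, key=payoffs.get, reverse=True)
print(team_ranking[0])   # ('Hi', 'Hi') -- the outcome the team prefers

# By contrast, individual best-response reasoning cannot decide between
# (Hi, Hi) and (Lo, Lo): both are Nash equilibria of this game.
```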

Why suppose that team reasoning explains how there could be aggregate subjects?

If we attribute preferences to Puppe and then act them out with her, she literally has preferences from a decision-theoretic point of view. (As long as she decides; you might say that it is not really her decision ...)
Well, maybe she is only a pretend agent. But her decisions have a significant impact on a family’s life, and they can be understood in terms of decision theory.
  • we take* ourselves to be components of an aggregate agent
  • through team reasoning, we ensure that the aggregate agent’s choices maximise the aggregate agent’s expected utility
  • the aggregate agent has preferences (literally)
Team reasoning gets us aggregate subjects, I think. After all, we can explicitly identify as members of a team, explicitly agree team preferences, and explicitly reason about how to maximise expected utility for the team.
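A minimal decision-theoretic sketch of the bullet points above (all numbers, plan names and outcomes are invented for illustration): the aggregate agent’s choice is just expected-utility maximisation, with the utilities given by the team’s preferences.

```python
# The aggregate agent ("us") chooses between plans under uncertainty by
# maximising its expected utility, computed from the team's preferences.
plans = {
    "cook together": {"good dinner": 0.8, "bad dinner": 0.2},
    "order in":      {"good dinner": 0.6, "bad dinner": 0.4},
}
team_utility = {"good dinner": 10, "bad dinner": 2}

def expected_utility(plan):
    return sum(prob * team_utility[outcome]
               for outcome, prob in plans[plan].items())

best_plan = max(plans, key=expected_utility)
print(best_plan, expected_utility(best_plan))   # cook together 8.4
```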

game theory is already agnostic about agents ...

individual adult humans (suspects under arrest)

bower birds (maraud/guard nests)

business organisations (product pricing)

countries (international environmental policy)

(Dixit, Skeath, & Reiley, 2014, chapter 10)

... so aggregates with preferences that maximise their expected utility are already in view.
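To underline the agnosticism, here is a small sketch (the payoffs are a textbook prisoners’ dilemma; the player labels are arbitrary and invented): nothing in the formalism cares whether a ‘player’ is a suspect, a firm, or a country.

```python
def best_response(opponent_action, my_payoffs):
    """Return the action that maximises my payoff, given the opponent's action."""
    return max(my_payoffs, key=lambda a: my_payoffs[a][opponent_action])

# Payoffs to one player in a prisoners' dilemma: payoffs[mine][theirs].
pd_payoffs = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

# Whatever kind of thing the player is, the analysis is the same:
for player in ["suspect A", "Acme Ltd", "country X"]:
    assert best_response("cooperate", pd_payoffs) == "defect"
    assert best_response("defect", pd_payoffs) == "defect"
    print(player, "-> defect is the dominant strategy")
```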

How?

aggregate subject

IMPORTANT: we have not explained how aggregate subjects could have intentions; we have only the bare idea that there are aggregate subjects which have preferences and make decisions.
Recap by explaining to each other how there can be aggregate subjects. If that is too easy: how could there be aggregate subjects *of intention*?
These are the questions you would want to answer if you were going to pursue team reasoning.

1. What is team reasoning?

2. [background] aggregate subjects

3. How might team reasoning be used in constructing a theory of shared intention?