FORESTS, FALL 2018
QUESTION SET THREE
DUE 8 November
QUESTIONS MOSTLY ABOUT NATURAL SELECTION (and one to review competition/niche theory):
1. This figure shows the
distribution of two species of cat-tails – Typha latifolia
and Typha angustifolia – over a range of depths of
water. Negative depth means out of the water. (Cat-tails are the
dominant plant in the wetlands around the Dickinson Pond; both of
these species occur on campus). The upper graph shows situations
where both species occur together (in sympatry); the lower graph
shows distributions in situations where only one of the two species
occurs (allopatric). The vertical axis is a measure of abundance
(don’t worry about different units; it's the relative abundance
of the two species that's of interest here).
Interpret the patterns observed in terms of fundamental and
realized niches for the two species, indicating the implied
competitive relationships. If the observed differences between the
two graphs are a result of interspecific competition, you might
hypothesize that competition is for either light or mineral
nutrients (since these are perennial wetlands, it’s
presumably not about water!). Cat-tails are rooted in the sediments,
and presumably obtain mineral nutrients through their roots. Briefly,
lay out an experiment to attempt to test these hypotheses.
Explain your methods, and what you would expect if the relevant
hypothesis is correct.
This is essentially the same scenario as the trout question on the previous set, but with plants; the range of depths inhabited in allopatry (bottom) would define the fundamental niche (at least on the niche-space axis defined by water depth). Fundamental niches are largely overlapping, but it looks like T. angustifolia has a somewhat broader distribution in allopatry, or fundamental niche (you could say it's more of a habitat generalist). (NOTE: some of you focused on depths of maximum abundance, which is okay, but the niche is really about the RANGE of conditions/habitat inhabited.)
In sympatry, there appears to be an asymmetrical competitive partitioning of the overlap area in niche space: the generalist T. angustifolia seems to be the inferior competitor over much of the area of overlap of the fundamental niches (i.e., it is excluded from shallower water in sympatry), but T. latifolia doesn't show much change in its distributional range in sympatry; the realized niche changes little for T. latifolia, but is substantially reduced for T. angustifolia (which is now confined to deeper water). This is a common pattern; more 'specialized' species are typically better competitors within their range of tolerance than are more generalist species. The usual assumptions apply; we don't know a) whether the wetlands involved in the two graphs are otherwise generally similar, or b) whether other plant species (and competitors) might be influencing distributions differently in the two situations.
To test a hypothesis concerning competition between two species you always have two approaches: you can remove/constrain one species and see if the other expands in number/distribution, OR you can change the availability of the putatively limiting resource and see if there's at least some (perhaps temporary) increase in the species thought to be limited by competition (although both would likely grow until the resource becomes limiting again OR some other resource becomes limiting).
Here, you'd need to do experiments that would influence the availability of light OR mineral nutrients separately from the other (fertilizing might be easy; changing how much light is available to the putatively inferior competitor would be tricky, but you could do it...).
ANSWER 3 OF THE FOLLOWING 4: (In all of these use evolutionary/selective arguments carefully, making sure you put things in terms of how individuals with different traits are likely to differ in reproductive success as a result of different selective 'regimes').
2.
Insects are the primary herbivores affecting forest plants (and
most plants that aren't in grazing/grassland ecosystems), and a wide
range of chemical defenses has evolved in plants. They come in
two types:
'Qualitative defenses' are outright poisons -- insecticides (many of
our agricultural insecticides are modeled after these). They tend
to be effective in small amounts, and they're often quite small
molecules (e.g., cyanide).
'Quantitative defenses' are indigestible, often bitter, compounds that
dilute the food value of the plant tissue and often make it hard to
digest (tannins are an example); these chemicals are typically large
molecules, and they have to be present in high concentrations to be
effective.
Discuss the trade-offs -- selective advantages and risks or
disadvantages -- involved for the plant in each of these defense
'strategies'. Make sure you put your arguments in appropriate
selective terminology. Also consider how the
evolutionary/selective response of insect herbivores to these
two types of defenses might differ. (Here is a CLUE: large,
long-lived plants like trees tend to employ quantitative
defenses, while smaller or short-lived plants are more likely to employ
qualitative toxins. See if you can explain why this makes sense
in light of your consideration of trade-offs.)
Toxins (qualitative defenses) are cheap and effective until herbivores evolve tolerance/resistance to them (which is inevitable; see the examples in the evolution class notes for pesticides and antibiotics...); quantitative defenses are inherently expensive (big molecules that have to be present in high concentrations), but essentially unbeatable (herbivores can't simply 'overcome' lower nutritional quality/concentration).
Weedy, small, short-lived plants must reproduce early since they don't live long. Allocating large amounts of resources to quantitative defense would be a big trade-off against the great selective importance of getting offspring out quickly and in quantity; toxins are better in that regard. The BIG trade-off is that insect herbivores can evolve tolerance to toxins, but, by that time, descendants are already elsewhere and the local population of the 'weed' is gone -- AND the short generation time of short-lived plants means they have some chance of keeping up in the 'arms race' through selection for variants on the toxin. ALSO, small plants may simply be harder to find for specialized herbivores.
Long-lived, large plant species, on the other hand, are easy to find, and their generation time is so long that herbivorous insects can 'outrun' them in the evolutionary arms race (many generations of insect per generation of maple tree...) -- so toxins wouldn't work very well. However, these plants, by living long and getting big, can defer reproduction, accumulate resources over long periods, and allocate a lot to survival issues like chemical defenses; thus, quantitative defenses are 'affordable', and less likely to be overcome even by fast-evolving insects. SO, quantitatively defended oak trees are likely to have higher fitness than qualitatively defended oak trees; the reverse is likely true of dandelions.
3. Insect populations
exposed to regular applications of insecticides typically show
evolution of
genetic resistance quite quickly; 5-10 years of intensive use is
about all a new insecticide is good for. This is a simple
(directional) selection scenario; strong toxins impose strong
selection if
there's any heritable (genetic) variation in tolerance.
Individual insects who are even slightly more tolerant of the toxin
will have higher fitness -- reproductive contribution to
subsequent generations -- when the toxin is a major cause of
mortality.
a) If the insecticide is removed from the environment, it is usually the case that insecticide-resistant genotypes in the insect population have lower fitness than the normal or 'wild-type' genotypes.
Offer a hypothesis explaining this phenomenon. Predict what
would happen, in this case, if the insecticide were applied
only in episodes separated by a number of years.
b) It's also frequently the case that resistance does NOT evolve when
several different types of insecticide (that is, ones that
work by different means) are used in combination. (NOTE that
resistance is frequently a single-gene trait -- i.e., conferred by a
single mutation to a gene related to whatever physiological pathway
the insecticide poisons). Propose a reason for this
phenomenon.
(A
SIDE NOTE: this is precisely parallel to
what occurs when pathogens are treated with antibiotics or
antivirals; the second scenario corresponds to modern treatment of
HIV infection with 'cocktails' of multiple antiviral drugs.)
a)
REMEMBER that lower fitness means lower reproductive success relative to other individuals. So this
means resistant insects, when no insecticide is present, have lower
reproductive success than 'wild-type' insects. This implies there
must be some "cost" to insecticide resistance -- a trade-off (think about
things like sickle-cell trait -- an exact parallel; there's also an
example regarding mosquito resistance to insecticides in the first
evolution notes). SO, the genetic trait for resistance will be
'selected against' -- will decrease in frequency -- as long as no
insecticide is present, and would eventually be lost from the population. Thus the 'once every several years'
scenario presents a reversal of 'selective pressure' every few years;
mutations or genetic traits that make insects resistant will tend
to decline between applications, so, when insecticide IS applied it
will still be effective (or, rather, be effective AGAIN)...
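The reversal-of-selection logic above can be sketched numerically. Here is a minimal one-locus haploid selection model (all fitness values and rates are invented for illustration, not field data) showing the resistance allele declining whenever the insecticide is absent (because resistance carries a cost) and rebounding when it is applied:

```python
def next_freq(p, w_res, w_wild):
    """One generation of selection on resistance-allele frequency p
    in a simple haploid model: p' = p*w_res / mean fitness."""
    mean_w = p * w_res + (1 - p) * w_wild
    return p * w_res / mean_w

p = 0.5                      # resistance common after heavy past use
history = [p]
for gen in range(1, 41):
    if gen % 10 < 2:         # insecticide applied 2 generations in 10
        w_res, w_wild = 1.0, 0.2   # resistant insects survive far better
    else:                    # no insecticide: resistance is costly
        w_res, w_wild = 0.9, 1.0
    p = next_freq(p, w_res, w_wild)
    history.append(p)

# Frequency falls during insecticide-free stretches, rebounds during use.
print([round(x, 3) for x in history[:12]])
```

The qualitative pattern (sawtooth decline and rebound) is what matters, not the particular numbers; with a real cost of resistance and long enough gaps, the allele can become rare enough that each new application is effective again.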
b) Essential
point here is that, if the insecticides work in different ways,
resistance to one is unlikely to make for resistance to the others --
so to resist ALL of them simultaneously, you'd have to, by chance, in
the same insect or insect 'lineage' get a series of resistant mutations
arising for
each insecticide separately either all at once or in close enough
sequence that the first resistance gene isn't lost due to action of a
different insecticide before the next resistance mutation
occurs. Since mutations are random, that's tremendously
unlikely.
Consequently, each time a mutation conferring resistance to one insecticide
pops up, the insect carrying it will almost certainly be killed by one
of the OTHER insecticides; the mutation confers little or no fitness
advantage.
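The improbability argument is just multiplication of independent small probabilities. A back-of-the-envelope sketch (the per-locus rate here is an invented placeholder, not a measured value):

```python
# Independent resistance mutations must co-occur in the same lineage,
# and independent probabilities multiply.
per_locus_rate = 1e-6        # assumed chance an insect carries a
                             # resistance mutation for ONE insecticide
combined = {n: per_locus_rate ** n for n in (1, 2, 3)}
for n, prob in combined.items():
    print(f"{n} insecticide(s): ~{prob:.0e} chance per insect")
```

So even with an enormous insect population, simultaneous resistance to three mechanistically different insecticides is effectively unreachable by chance alone.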
(A
COUPLE OF GENERAL POINTS about selective arguments: be VERY CLEAR that
selection doesn't happen 'in order' to do something, or because it
helps the species survive. It happens AS A CONSEQUENCE of heritable differences present in the population. Adaptation is a
consequence of selection, but selection doesn't happen 'in order to'
produce adaptation. And selection is driven by differences among
individuals in reproductive success; any effect on the species' survival
as a whole is coincidental. SO selective arguments must be in terms
of
differences in reproductive success (fitness) due to differences in
heritable traits/genetics -- and new differences in genetics happen by
either mutation (random) or by gene flow from other populations...)
4. Leaves of deciduous
trees start ‘shutting down’ (senescing) in the fall, recovering
materials from their foliage and then shedding it, in response to
a
combination of dropping temperatures and shortening day
length (the precise 'triggers' vary). Losing leaves is generally
seen as an adaptation to
reduce loss of water from the plant during the winter when
below-freezing temperatures make it impossible for trees to acquire
water from frozen soils or transport it through frozen tissues.
ASSUME that the 'triggers' for leaf senescence are genetically
controlled
(heritable).
a) Hypothesize about selective trade-offs involved
in the timing of leaf senescence; what would be the primary selective
costs and/or benefits of holding leaves longer? of dropping them sooner?
b) Day-length is often an important part of the triggering process;
why would this be a particularly selectively advantageous 'cue' for the
plant if the primary adaptive value of losing leaves is related to cold
temperatures (i.e., why not respond simply to cold temperatures)?
Offer at least one hypothesis suggesting a 'fitness' value for responding to an 'indirect' cue like day-length.
c) Some species (like beech and sugar maple) have
very broad latitudinal ranges, including areas with very different
seasonal timing (beech in its southernmost range may see only a month
or two of 'winter' with freezing temperatures possible while in its
northernmost range, freezes are possible for more like 7 months).
What would you predict about the genetic/heritable triggers for leaf senescence across such a range
(what would happen to a beech tree from Georgia transplanted to
Vermont)? (I suggest you talk about stabilizing and directional
selection dynamics in your answers.)
The important thing here was to give thought to how the balance between benefits and costs or risks would change if different cues were used. IF temperate trees depended directly and only on temperature to cue leaf loss, rather than using day-length, then they might drop leaves early in the fall due to an early cold snap and consequently miss some good photosynthesizing weather later on (thus reducing fitness compared to trees that kept leaves through that 'minor' bit of cold weather) -- OR, if the weather stayed unusually warm until late in fall, trees might experience a really hard freeze before they started physiologically 'shutting down' leaves, and have leaves killed while they're still full of good stuff that could be recovered and stored. The specific patterns of temperature change through the fall are highly variable and unpredictable in detail.
Day length, on the other hand, is a highly predictable 'cue' -- it changes in ways that are absolutely predictably related to the overall trend of the seasons; using it as a trigger could mean you'd be less likely to experience the costs of responding directly to temperature in years when temperature change is not as 'orderly' as the average seasonal trends. You could also think in terms of how decreasing day length means less photosynthetic gain to balance against the costs and risks of maintenance and defense. Ultimately, selection should favor the optimal 'balancing' of these risks in a 'stabilizing selection' scenario: start leaf senescence at the point where the 'risk-cost' of leaf damage if you kept leaves functioning longer exactly balances the likely fitness gain from keeping them functioning longer (which, of course, gets smaller as days get shorter anyhow)...
You could experiment, for example, by inducing trees to drop leaves at different times by controlling their light environment (controlling day-length experienced) and see whether trees that dropped their leaves 'abnormally' early or late had slower or faster growth rates (growth rates being an indicator of total photosynthetic gain for a growing season). For the last part, you might predict that selection would have favored different timings in different areas so that trees from Georgia moved to Vermont would likely respond to changing day-length in a 'sub-optimal' way -- maybe holding their leaves into really cold weather when they could be badly damaged...
5. Wildebeest in the Serengeti of East Africa have a very restricted calving season: all females give birth within a 3-week period. This is a pretty common phenomenon among mammals and birds that breed in dense populations. It has been hypothesized that this is an 'adaptive' mechanism to reduce loss of calves to predators by "saturating" the predator populations briefly (this is similar to the notion that masting in trees saturates seed predators so that some seeds survive...). In other words, having calves at the same time as all the other individuals in the herd increases relative reproductive success (fitness), and any heritable tendency to do this would be selected for. What kind of observations and data could you collect to test this selective hypothesis? (Be clear how these data would allow you to assess predictions of the predator-saturation hypothesis and differences in fitness within the wildebeest population.)
This is essentially the same hypothesis we talked about early in the term as an explanation for oak masting. The main point here is that to TEST the hypothesis in this scenario, you have to focus on the predictions it makes about survival of calves under different conditions. Because this hypothesis is grounded in selective arguments, you need to focus on differences in likely survival of calves (fitness contribution) among females giving birth at different times. There should be a higher survival rate with respect to predation (less chance of an individual calf being eaten) when there are more calves around. (NOTE that what's important is not the NUMBER of calves surviving at each time, but the PROPORTION, or chance that an individual will survive.)
One prediction, then, is that the risk of being eaten should decrease during the peak of the calving season. If per capita RATES of mortality due to predation (risk of being eaten per individual) DON'T decrease as the number of calves goes up, the hypothesis would have to be reconsidered (you might also expect rates of mortality to go up near the end of the calving season as numbers of newborns tail off). You might also look at TOTAL numbers of calves being eaten per day (or whatever time unit) over the course of the calving season; the hypothesis predicts that this number should tend to level off at some point (as the predator population is 'saturated') -- but this is a bit trickier. You could also compare (if you could find them) smaller populations, where the saturation would be less effective -- calf survival SHOULD be lower there. A more manipulative experiment might involve (if you could figure out a way to do it) inducing births earlier or later and seeing if those calves experience a greater likelihood of predation, as the hypothesis would predict -- or just looking at individuals born before the peak.
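The per-capita logic of the saturation prediction can be made concrete with a toy model (all numbers invented): if predators can kill only a fixed number of calves per day, an individual calf's risk falls as the number of calves present rises -- this declining per-capita risk is exactly what the field data would need to show.

```python
MAX_KILLS_PER_DAY = 50       # assumed total kill capacity of the
                             # local predator population

def per_capita_risk(n_calves):
    """Chance that any one calf is killed on a given day, assuming
    predators are saturated at MAX_KILLS_PER_DAY."""
    if n_calves == 0:
        return 0.0
    return min(1.0, MAX_KILLS_PER_DAY / n_calves)

for n in (25, 100, 1000):
    print(n, "calves -> per-capita risk", per_capita_risk(n))
```

Note that TOTAL kills stay flat (at 50/day) once the predators are saturated, even as the proportion killed drops -- which is why the answer above stresses proportions rather than raw numbers.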
IN GENERAL, you need to make sure your argument compares individual reproductive success (that's what makes it a selective hypothesis), AND you need to be clear on how your study would assess a necessary prediction of the saturation hypothesis.