FORESTS 2019:  QUESTIONS TWO: DUE THURSDAY 24 OCTOBER

MAKE SURE you answer all portions of the questions posed (address all of the bold-faced bits)

1. These graphs show how two related species of fish -- brown trout (open bars) and arctic char (filled bars) -- use different types of prey (as percentages of their total diet) when the two species are living together in the same streams (sympatric, bottom) or each in separate streams/by itself (allopatric, top).  The prey types are: A = small fish, B = small crustaceans, C = large crustaceans, D = insect larvae, E = terrestrial insects, F = everything else.  Interpret these data to generate hypotheses about the fundamental and realized niches of these species as they relate to food, and about the competitive relationships between them.  These data are collected from stomach contents of wild fish in unmanipulated streams.  Propose an additional experiment to test some aspect of the hypotheses you come up with.

[Figure: diet composition (prey types A-F, as percentages of total diet) for brown trout (open bars) and arctic char (filled bars), in allopatry (top) and sympatry (bottom)]

This pattern might be the result of competitive interactions involving niche partitioning -- that is, some displacement of each species from parts of its fundamental niche (here, the 'food niche' only) when the two are in sympatry.  When they're not in the same habitat (allopatry), the two species exhibit similar patterns of food use, though with some slight proportional differences in the types of food consumed or preferred.  Thus, the patterns in the top graph (assuming no other competitors are present) amount to an expression of the fundamental niche as far as food goes.  In sympatry, competition seems to shift consumption by CHAR quite a bit; they move almost entirely to small crustaceans (not much preferred by either species in allopatry), while trout become pretty focused on insect larvae (a more modest shift in preference, but a significant narrowing of diet).  In both cases, the realized niche appears to be reduced relative to the fundamental niche -- both species are affected by competition -- but the shifts are asymmetrical in that char, in sympatry, are largely restricted to a food group that isn't favored by either species in allopatry; in other words, you might interpret this evidence to suggest that char are the weaker competitor.

All of this should be regarded as HYPOTHESIS, as there are other things that might be going on.  Since these are natural streams, we don't know, for example, whether a) there are other species of competing fish in some streams or b) the abundance of these different food groups varies among the streams regardless of which fish are there (the observed patterns could simply be the result of differences in prey availability among streams!).

Possible experiments/tests (among many) might examine these assumptions/constraints by posing appropriate hypotheses.  You could add food to streams with both competitors (expectation: IF the observed pattern is about competition between char and trout, there should be at least a short-term 'return' to favored food groups, at least until fish populations grow).  You could add char or trout to streams where the other was previously in allopatry and see if both shift to food-use patterns like the bottom graph (as the competition hypothesis would predict).  You could remove one species or the other or both from some stretches of stream (like some of the experiments in the competition notes).  You could add or remove specific prey types and see if the patterns change as predicted.  Some of you suggested that the INTERspecific competition might be expressed as more spatially restricted foraging (to different portions of the stream) by one or both species; that could be examined by appropriate experiments or observations (but describe them in at least a basic way).  You could do similar experiments in controlled environments in the lab, or in experimental streams set up for the purpose and similar in all other ways, to control for factors like differences between natural streams.  You might also predict that in the sympatric situation, all else equal, both species should have lower-density populations if the sympatric pattern is, in fact, due to competition.  And so on.  In any case, you should be clear about what hypothesis you're testing, and consider the expected results if your hypothesis is correct or not...
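(Not part of the required answer, but if you want to put a number on 'how much the diets overlap', ecologists often use an overlap index computed from diet proportions -- Pianka's index is one standard choice.  The little Python sketch below uses MADE-UP proportions for prey types A-F, not values read off the graphs; the point is just the arithmetic: overlap should come out high in allopatry and much lower in sympatry if the partitioning interpretation is right.)

    # Pianka's niche overlap index computed from diet proportions (prey types A-F).
    # All proportions below are HYPOTHETICAL placeholders, not data from the graphs.
    from math import sqrt

    def pianka_overlap(p, q):
        """Overlap between two species' resource-use proportions (0 = none, 1 = identical)."""
        num = sum(pi * qi for pi, qi in zip(p, q))
        den = sqrt(sum(pi * pi for pi in p) * sum(qi * qi for qi in q))
        return num / den

    # Allopatry: both species spread their feeding over similar prey types (hypothetical).
    trout_allo = [0.10, 0.10, 0.20, 0.35, 0.20, 0.05]
    char_allo  = [0.05, 0.15, 0.20, 0.30, 0.25, 0.05]
    # Sympatry: char shift to small crustaceans, trout narrow onto insect larvae (hypothetical).
    trout_sym  = [0.05, 0.05, 0.10, 0.65, 0.10, 0.05]
    char_sym   = [0.02, 0.80, 0.08, 0.05, 0.03, 0.02]

    print("overlap in allopatry:", round(pianka_overlap(trout_allo, char_allo), 2))  # high (~0.98)
    print("overlap in sympatry: ", round(pianka_overlap(trout_sym, char_sym), 2))    # much lower (~0.16)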

2.  According to the 'competitive exclusion principle', species that are competing for the same limiting resource can't coexist indefinitely; one or the other will eventually prove to be the superior competitor for the particular resource in question.  The superior competitor's population will grow until the 'left-over' resources are inadequate to support a viable population of the other species. Coexisting 'guilds' of species tend to show differences in resource use (like the graded beak sizes in Darwin's finches), a pattern usually attributed to competitive interactions preventing coexistence of species that are 'too similar'.  
    Imagine a guild of several species coexisting in a particular habitat; each is best at using some portion of the range of available resources, and, together, they fully consume the available resources (for example, you might imagine several species of seed-eating ants, each with a slightly different mandible size, such that the 'food dimension' of their niches is somewhat different).
    NOW, imagine adding another species of the same guild (i.e., a potential competitor) to the same habitat.  What are the possible outcomes of this introduction?  Under what circumstances might such an introduced species coexist with the initial species, thus increasing the total diversity of the guild in that habitat (you are allowed to imagine changes in habitat...)?  Use niche terminology as appropriate.
 
Remember that the 'competitive exclusion principle' says that species can't coexist if they are limited by the same limiting resource.  Because the introduced species is of the same 'guild', its resource use is almost certainly going to overlap with that of the native species (like the fish in #1) -- i.e., their fundamental niches will overlap in 'resource space'.  BUT, there are several possible outcomes.  IF the introduced species is similar enough to the existing members of the guild in resource use AND the populations of these competing species are all limited by resources, then we would expect one or the other to go extinct.  BUT if each has parts of its fundamental niche where it is the superior competitor, then they might partition resources so that they can coexist (again, like the fish in #1), increasing the local diversity of the guild; this would be possible only if the 'realized niche' where each species is the best competitor is 'big enough' (includes enough resources) to sustain a viable population.  ANOTHER possibility for coexistence would be if the populations in question are not limited by resource availability (maybe by predation, or by density-independent factors), in which case competitive exclusion would not occur.
SO:
    1) The introduced species goes extinct because it is out-competed by the native species (one or more), and diversity is unchanged;
    2) The introduced species proves the superior competitor and one or more native species are excluded (driven to extinction), so guild diversity is either unchanged or declines;
    3) The introduced species is able to gain control, within its realized niche, of some resource that allows it to persist without causing competitive exclusion of the existing species (this could be a previously unused resource, OR it could cause SOME non-critical reduction in the realized niches of the existing species), resulting in increased diversity within the guild;
    OR 4) IF the population sizes of all potential competitors with the introduced species are limited by something other than resource availability, all might coexist (again, increased diversity).
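(Again not required, but here is a minimal numerical sketch of the classic Lotka-Volterra two-species competition model, with entirely invented parameters, that makes the exclusion-vs.-coexistence distinction concrete: stable coexistence requires that each species limit itself more strongly than it limits its competitor; otherwise one excludes the other.)

    # Lotka-Volterra competition, integrated with a crude Euler step (parameters invented).
    # dN1/dt = r1*N1*(K1 - N1 - a12*N2)/K1 ;  dN2/dt = r2*N2*(K2 - N2 - a21*N1)/K2
    # With K1 = K2, coexistence requires a12 < 1 AND a21 < 1 (each species limits itself
    # more than it limits the other); otherwise the superior competitor excludes the other.

    def simulate(a12, a21, r1=0.5, r2=0.5, K1=100.0, K2=100.0,
                 N1=10.0, N2=10.0, dt=0.01, steps=200_000):
        for _ in range(steps):
            dN1 = r1 * N1 * (K1 - N1 - a12 * N2) / K1
            dN2 = r2 * N2 * (K2 - N2 - a21 * N1) / K2
            N1 = max(N1 + dN1 * dt, 0.0)
            N2 = max(N2 + dN2 * dt, 0.0)
        return round(N1, 1), round(N2, 1)

    print("partitioned niches (a12 = a21 = 0.5):", simulate(0.5, 0.5))      # both persist: coexistence
    print("strong asymmetry (a12 = 0.6, a21 = 1.4):", simulate(0.6, 1.4))   # species 2 excluded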

ANSWER 2 OF THE FOLLOWING 3: (In all of these use evolutionary/selective arguments carefully, making sure you put things in terms of how individuals with different traits are likely to differ in reproductive success (fitness) as a result of different selective factors or 'regimes'). 

3.  Insects are the primary herbivores affecting forest plants (and most plants that aren't in grazing/grassland ecosystems), and a wide range of chemical defenses has evolved in plants.  These defenses come in two types:
    'Qualitative defenses' are outright poisons -- insecticides (many of our agricultural insecticides are modeled after these).  They tend to be effective in small amounts, and they're often quite small molecules (e.g., cyanide), thus energetically 'cheap' to produce.   
    'Quantitative defenses' are indigestible, often bitter, compounds that dilute the food value of the plant tissue and often make it hard to digest (tannins are an example); these chemicals are typically large molecules, and they have to be present in high concentrations to be effective, thus energetically expensive to produce in effective quantities.
    Discuss the trade-offs -- selective advantages and risks or disadvantages -- involved for the plant in each of these defense 'strategies'.  Make sure you put your arguments in appropriate selective terminology.  Also consider how the evolutionary/selective response of insect herbivores to these two types of defenses might differ.  (Here is a CLUE: large, long-lived plants like trees tend to employ quantitative defenses, while smaller or short-lived plants are more likely to employ qualitative toxins.  See if you can explain why this makes sense in light of your consideration of trade-offs.)

Trade-offs: Toxins (qualitative defenses) are cheap and effective until herbivores evolve tolerance/resistance to them (which is essentially inevitable; see the examples in the evolution class notes for pesticides and antibiotics...); quantitative defenses are inherently expensive (big molecules that have to be present in high concentrations), but essentially unbeatable (herbivores can't simply 'overcome' lower nutritional quality/concentration).

Weedy, small, short-lived plants must reproduce early since they don't live long.  Allocating large amounts of resources to quantitative defense would be a big trade-off against the great selective importance of getting offspring out quickly; toxins are better in that regard.  The BIG trade-off is that insect herbivores can evolve tolerance to toxins, but, by that time, the plant's descendants are already elsewhere -- AND the short generation time of short-lived plants means they have some chance of keeping up in the 'arms race' through selection for variants on the toxin.  ALSO, small plants may simply be harder for specialized herbivores (which have evolved resistance to the toxin) to find.  Plants that are long-lived and get big, on the other hand, are easy to find, and their generation time is so long that herbivorous insects can 'outrun' them in any evolutionary arms race (many generations of insect per generation of maple tree...) -- so toxins probably wouldn't work very well.  However, these plants, by living long and getting big, can defer reproduction, accumulate resources over long periods, and allocate a lot to survival issues like chemical defenses; thus, quantitative defenses are 'affordable', and less likely to be overcome even by fast-evolving insects.  SO, quantitatively defended oak trees are likely to have higher fitness than qualitatively defended oak trees; the reverse is likely true of dandelions.

4. Insect populations exposed to regular applications of insecticides typically show evolution of genetic resistance quite quickly; 5-10 years of intensive use is about all a new insecticide is good for.  This is a simple (directional) selection scenario; strong toxins impose strong selection if there's any heritable (genetic) variation in tolerance/resistance.  Individual insects that are even slightly more tolerant of the toxin will have higher fitness -- reproductive contribution to subsequent generations -- when the toxin is a major cause of mortality.
   a) If the insecticide is removed from the environment, it is usually the case that insecticide-resistant genotypes in the insect population have lower fitness than the normal or 'wild-type'.  Offer a hypothesis explaining this phenomenon.  Predict what would happen, in this case, if the insecticide were applied only in brief episodes separated by a number of years.
   b) It's also frequently the case that resistance does NOT evolve when several different types of insecticide (that is, ones that work by different means) are used in combination. (NOTE that resistance is frequently a single-gene trait -- i.e., conferred by a single mutation to a gene related to whatever physiological pathway the insecticide poisons).  Propose a reason for this phenomenon.    
    (A SIDE NOTE: this is precisely parallel to what occurs when pathogens are treated with antibiotics or antivirals; the second scenario corresponds to modern treatment of HIV infection with 'cocktails' of multiple antiviral drugs)

a) REMEMBER that lower fitness means lower reproductive success relative to other individuals in the same population.  Any argument about natural selection has to be in these terms.  So this means resistant insects, when no insecticide is present, have lower reproductive success than 'wild-type' insects.  This implies there must be some "cost" to insecticide resistance -- a trade-off (think about things like sickle-cell trait -- an exact parallel; there's also an example regarding mosquito resistance to insecticides in the first evolution notes).  SO, the genetic trait for resistance will be 'selected against' -- will decrease in frequency -- as long as no insecticide is present, and would eventually be lost from the population.  Thus the 'once every several years' scenario presents a reversal of 'selective pressure' every few years; mutations or genetic traits that make insects resistant will tend to decline between applications, so, when the insecticide IS applied, it will still be effective (or, rather, be effective AGAIN)...
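(If it helps to see this dynamic in numbers: below is a toy single-locus model -- haploid for simplicity, with invented fitness values -- of the frequency of a resistance allele.  It rises quickly while the insecticide is applied and then declines again once the insecticide is withdrawn, precisely because of the assumed 'cost of resistance'.)

    # Toy model of a resistance allele at one locus (haploid; fitness values invented).
    def next_freq(p, w_R, w_S):
        """One generation of selection on the frequency p of the resistant type R."""
        return p * w_R / (p * w_R + (1 - p) * w_S)

    p = 0.001                                  # resistance starts rare
    for gen in range(1, 41):
        spraying = gen <= 6                    # insecticide applied for the first 6 generations only
        # while spraying: susceptibles mostly killed; no spray: resistance carries a 10% cost
        w_R, w_S = (1.0, 0.3) if spraying else (0.9, 1.0)
        p = next_freq(p, w_R, w_S)
        if gen in (6, 10, 20, 40):
            print(f"generation {gen:2d}: freq(R) = {p:.3f}")
    # climbs to ~0.58 by generation 6, then falls back toward loss (~0.04 by generation 40)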
b) The essential point here is that, if the insecticides work in different ways, resistance to one is unlikely to confer resistance to the others -- so to resist ALL of them simultaneously, you'd have to get, by chance, in the same insect or insect 'lineage', a series of resistance mutations arising for each insecticide separately, either all at once or in close enough sequence that the first resistance gene isn't lost to the action of a different insecticide before the next resistance mutation occurs.  Since mutations are random, that's tremendously unlikely.  Consequently, each time a mutation conferring resistance to one insecticide pops up, the insect carrying it will almost certainly be killed by one of the OTHER insecticides, so the mutation confers little or no fitness advantage.
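(A quick back-of-the-envelope way to see 'tremendously unlikely': with a per-locus resistance mutation rate of roughly 1 in 10^8 per individual per generation and a population of a million insects -- both numbers just ballpark assumptions -- simultaneous resistance to two or three independently acting insecticides essentially never arises by mutation alone.)

    # Back-of-envelope arithmetic (all numbers are rough assumptions, not measurements).
    mu = 1e-8   # assumed per-locus resistance mutation rate, per individual per generation
    N  = 1e6    # assumed insect population size per generation
    for k in (1, 2, 3):
        expected = N * mu**k   # expected new individuals arising resistant to k insecticides at once
        print(f"{k} insecticide(s): ~{expected:.0e} new multi-resistant individuals per generation")
    # 1 insecticide  -> ~1e-02 (about one every hundred generations; selection can then amplify it)
    # 3 insecticides -> ~1e-18 (effectively never)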
(A COUPLE OF GENERAL POINTS about selective arguments: be VERY CLEAR that selection doesn't happen 'in order' to do something, or because it helps the species survive.  It happens AS A CONSEQUENCE of heritable differences present in a population.  Adaptation is a consequence of selection, but selection doesn't happen 'in order to' produce adaptation.  And selection is driven by differences among individuals in reproductive success; any effect on the species' survival as a whole is coincidental.  SO selective arguments must be in terms of differences in the reproductive success (fitness) of individuals due to differences in heritable traits/genetics -- and new differences in genetics arise either by mutation (random) or by gene flow from other populations...)

5. Leaves of deciduous trees start ‘shutting down’ (senescing) in the fall, recovering materials from their foliage and then shedding it, in response to a combination of dropping temperatures and shortening day length (the precise 'triggers' vary).  Losing leaves is generally seen as an adaptation to reduce loss of water from the plant (leaves are the main place where water is lost) during the winter, when below-freezing temperatures make it impossible for trees to acquire water from frozen soils or transport it through frozen tissues.  Another way of thinking about this: leaves that can resist water loss during freezing weather are costly...  NOW, assume that the environmental 'triggers' for leaf senescence are genetically controlled (heritable).
    a) Hypothesize about selective trade-offs involved in the timing of leaf senescence; what would be the primary selective costs and/or benefits of holding leaves longer? of dropping them sooner?
    b) Day-length is often an important part of the triggering process; why would this be a particularly selectively advantageous 'cue' for the plant if the primary adaptive value of losing leaves is related to cold temperatures (i.e., why not respond simply to cold temperatures)?  Offer at least one hypothesis suggesting a 'fitness' value for responding to an 'indirect' cue like day-length.
    c) Some species (like beech and sugar maple) have very broad latitudinal ranges, including areas with very different seasonal timing (beech in its southernmost range may see only a month or two of 'winter' with freezing temperatures possible, while in its northernmost range, freezes are possible for more like 7 months).  What would you predict about the genetic/heritable triggers for leaf senescence across such a range (what would happen to a beech tree from Georgia transplanted to Vermont)?  (I suggest you talk about stabilizing and directional selection dynamics in your answers.)

The important thing here was to give some thought to how the balance between benefits and costs or risks would change if different cues were used.  IF temperate trees depended directly and only on temperature to cue leaf loss (vs. using day-length, for example), then they might drop leaves early in the fall due to an early cold snap and consequently miss some good photosynthesizing weather later on (thus reducing fitness compared to trees that kept their leaves through that 'minor' bit of cold weather) -- OR, if the weather stayed unusually warm until late in the fall, trees might experience a really hard freeze before they started physiologically 'shutting down' their leaves and have leaves killed while they're still full of good stuff that could be recovered and stored.  The specific patterns of temperature change through the fall are highly variable and unpredictable in detail.  Day length, on the other hand, is a highly predictable 'cue' -- it changes in ways that are absolutely predictably related to the overall trend of the seasons; using it as a trigger could mean you'd be less likely to pay the costs of responding directly to temperature in years when temperature change is not as 'orderly' as the average seasonal trend.  You could also think in terms of how decreasing day length means less photosynthetic gain to balance against the costs and risks of maintenance and defense.

Ultimately, selection should favor the optimal 'balancing' of these risks in a 'stabilizing selection' scenario within any particular habitat/region: start leaf senescence at the point where the 'risk-cost' of leaf damage, if you kept the leaves functioning longer, exactly balances the likely fitness gain from keeping them functioning longer (which, of course, gets smaller as days get shorter anyhow)...  You could experiment, for example, by inducing trees to drop leaves at different times by controlling their light environment (controlling the day-length experienced) and see whether trees that dropped their leaves 'abnormally' early or late had slower or faster growth rates (growth rates being an indicator of total photosynthetic gain for a growing season).

For the last part, you might predict that selection would have favored different timings in different areas (recall that we saw different timing of leaf drop in some of the same species as we went up the mountains to Branch Pond), so that trees from Georgia moved to Vermont would likely respond to changing day-length in a 'sub-optimal' way -- maybe holding their leaves into really cold weather, when they could be badly damaged (because, in Georgia, it doesn't get really cold until later in the winter, when day lengths are quite short)...
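(One more optional illustration of the 'balance point' idea from the stabilizing-selection argument above: the sketch below, with entirely invented numbers, adds up the extra photosynthetic gain from holding leaves another day against the rising risk of losing un-recovered nutrients to a hard freeze, and finds the drop date where the net benefit peaks -- an intermediate date, neither 'as early as possible' nor 'as late as possible'.)

    # Crude sketch of the leaf-drop 'balance point' (all numbers invented for illustration).
    def freeze_prob_by(day):
        # assumed probability that a killing freeze has occurred by 'day' (days after Sept 1)
        return min(1.0, max(0.0, (day - 30) / 60))

    def daily_gain(day):
        # assumed photosynthetic gain per day, shrinking as days get shorter and colder
        return max(0.0, 1.0 - day / 90)

    FREEZE_COST = 25.0   # assumed fitness cost of leaves killed before nutrients are recovered

    def net_benefit(drop_day):
        gain = sum(daily_gain(t) for t in range(drop_day))
        return gain - freeze_prob_by(drop_day) * FREEZE_COST

    best = max(range(0, 91), key=net_benefit)
    print("best drop day (days after Sept 1):", best)   # an intermediate date (~day 53 here)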