
Monday, February 8, 2016

New ways to address an old idea: rethinking the regional species pool

Like many concepts in ecology (metacommunity, community), the idea of a regional species pool is useful, makes conceptual sense, and is incredibly difficult to apply to real data. Originally, the idea of a species pool came from the theory of island biogeography (MacArthur and Wilson, 1967), where it referred to all the species that could disperse to an island. Today, the regional species pool appears frequently across null models, empirical and theoretical studies of community assembly, and metacommunity theory.

Understanding how particular processes shape community membership—whether environmental filtering, competition, or dispersal limitation—depends on knowing the identity of all the species that could potentially have assembled there. The species pool, as defined by the researcher, provides the frame of reference against which to consider a community's composition. Most null models of community assembly rely on correctly identifying this set of species and, worse, tend to be very sensitive to bias in how the regional pool is defined. If you include all species physically present in a region in your species pool, environmental filtering may appear to be particularly important simply because many of those species can't actually survive in your community (the Narcissus effect). Given the importance of null models to community ecology, defining the species pool appropriately is an ongoing concern.

There are many decisions to be made when asking 'which species could potentially be members of a community?' You could include all species that can physically arrive at a site (so only dispersal or geographic distance limits membership), or only those species that can both arrive and establish (both dispersal and environmental conditions limit membership). Further, the availability of data is key: if you use observational data to determine the environmental limitations, you may also indirectly incorporate the outcome of biotic interactions. If some species are rare and have low observation likelihoods, they will be under-represented. Abundances may be useful but inaccurate, depending on how they are measured. Finally, it is common to define species as either present or absent in a species pool; this binary approach may conceal ecologically important information.
The 'filtering' heuristic for understanding community membership. Species groups 1-3 could each be defined as a regional species pool, depending on the definition applied.
A number of recent papers provide alternative approaches to constructing species pools, meant to avoid these pitfalls. Researchers can define multiple contrasting species pools, each representing an ecological process (or perhaps several processes) of interest. Each pool can be modified further to reflect the strength of a particular process in constraining membership. The regional pool is then seen not as a single entity but as a number of possible configurations, whose utility lies in their comparison.

Lessard et al. (2016) illustrate how to produce this kind of process-based species pool under various constraints (figure below; see also the code sketch after the list). Their three-step approach is to:
  1. Define all possible members of the regional pool. This is done by identifying every assemblage in the region that contains at least one species also found in the focal community (creating a 'dispersion field') (figure below, section A). This delineates a large region and identifies all species within it.
  2. Calculate the probability of resampling a species from the focal community elsewhere in the dispersion field, in the context of the process of interest. For example, the probability of observing a species in both the focal community and another community might be determined by the geographic or environmental distance between those sites. Every site in the dispersion field now has a probability (or, really, a distance) associated with it, representing its similarity to the focal site.
  3. Finally, apply constraints to the calculated probabilities. You might choose to consider only the species within communities that are at least 50% similar to the focal community, for example. Such constraints reflect the assumed strength or importance of filtering by the process of interest.
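To make the three steps concrete, here is a minimal sketch in Python of how such a pool might be assembled. It assumes a sites-by-species presence/absence matrix `comm` (a NumPy array) and a matrix `env` of environmental variables per site; the helper names, and the use of environmental distance in step 2, are illustrative choices rather than code from Lessard et al.

```python
import numpy as np

def dispersion_field(comm, focal):
    """Step 1: all sites sharing at least one species with the focal site."""
    focal_species = comm[focal] > 0
    shares = (comm[:, focal_species] > 0).any(axis=1)
    return np.flatnonzero(shares)

def resampling_probabilities(env, focal, sites):
    """Step 2: similarity of each dispersion-field site to the focal site,
    here based on environmental distance (geographic distance would work
    the same way)."""
    d = np.linalg.norm(env[sites] - env[focal], axis=1)
    return 1.0 - d / d.max() if d.max() > 0 else np.ones(len(sites))

def constrained_pool(comm, sites, probs, threshold=0.5):
    """Step 3: keep only species from sites whose similarity exceeds a
    threshold, reflecting the assumed strength of the filtering process."""
    kept = sites[probs >= threshold]
    return np.flatnonzero((comm[kept] > 0).any(axis=0))
```

Varying `threshold` then yields a family of candidate pools, whose null-model results can be compared against one another.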
Another recent paper (Karger et al., 2016) takes an approach with a number of commonalities to the Lessard et al. method. However, rather than resampling to produce potential pools of species (with species being defined as present or absent), they advocate a probabilistic approach to species pools. They suggest that species pools should be thought of as a set of probabilities of membership, which may be more reflective of ecological reality. In some ways, this is simply a formalization of Lessard et al.'s probabilistic resampling, but instead of applying constraints, the researcher acknowledges that membership probabilities vary among species. “Hence, a species pool can simply be defined as a function of probabilities of a species’ occurrence in the focal unit given the unit’s environmental and biotic conditions, geographical location and the time frame of interest”.
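In code, the difference from the resampling approach above is simply that no cut-off is applied: every species keeps a membership probability. A minimal sketch, where combining environmental suitability with an exponential dispersal kernel is my own illustrative choice, not a formula from Karger et al.:

```python
import numpy as np

def membership_probabilities(suitability, distance_to_focal, decay=0.01):
    """Probabilistic species pool: P(species occurs in the focal unit),
    taken here as environmental suitability discounted by an exponential
    dispersal kernel. Returns one probability per species rather than a
    binary in/out assignment."""
    return np.asarray(suitability) * np.exp(-decay * np.asarray(distance_to_focal))
```

Downstream null models can then sample species in proportion to these probabilities instead of drawing uniformly from a binary pool.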

Both comparative and probabilistic approaches to defining species pools are logical advances, and offer ways of dealing with this untidy concept. If the topic is of interest, a few other papers, albeit slightly less recent, are definitely worth reading: Pigot and Etienne 2015; Lessard et al. 2012; Carsten et al. 2013.
From Lessard et al., 2016. The three steps to build a species pool.

Monday, January 18, 2016

Have humans altered community interactions?

A recent Nature paper argues that there is evidence for human impacts on communities starting at least six thousand years ago, impacts which altered the interactions that structure communities. “Holocene shifts in the assembly of plant and animal communities implicate human impacts” from Lyons et al. (2016, Nature) analyses data spanning modern communities through to 300-million-year-old fossils to measure how the co-occurrence structure of communities has changed. The analyses look at the co-occurrence of pairs of species and identify those pairs that co-occur significantly more often ('aggregated') or less often ('segregated') than a null expectation. Once the authors identified the species pairs with non-random co-occurrences, they calculated the proportion of these that were aggregated (the y-axis of their Figure 1). The authors suggest that modern assemblages have proportionally fewer aggregated species pairs than those of the ancient past, perhaps reflecting an increase in negative interactions or distinct habitat preferences.
Main figure from Lyons et al. 2016.
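For intuition, here is a toy Python version of that style of analysis: classify each species pair against a null distribution of co-occurrence counts, then compute the proportion of significant pairs that are aggregated. The null used here (shuffling each species' occurrences across sites, which preserves species prevalence but not site richness) is deliberately simpler than the fixed-fixed algorithm used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def proportion_aggregated(mat, n_null=999, alpha=0.05):
    """mat: binary species x sites matrix. Returns the proportion of
    significantly non-random species pairs that are aggregated
    (the quantity on the y-axis of the paper's Figure 1)."""
    obs = mat @ mat.T                          # shared-site counts per pair
    ge = np.zeros(obs.shape)                   # times null >= observed
    le = np.zeros(obs.shape)                   # times null <= observed
    for _ in range(n_null):
        shuffled = np.array([rng.permutation(row) for row in mat])
        null = shuffled @ shuffled.T
        ge += null >= obs
        le += null <= obs
    iu = np.triu_indices_from(obs, k=1)        # each pair counted once
    aggregated = (ge[iu] / n_null) < alpha     # observed above most nulls
    segregated = (le[iu] / n_null) < alpha     # observed below most nulls
    n_sig = aggregated.sum() + segregated.sum()
    return aggregated.sum() / n_sig if n_sig else np.nan
```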
The interpretation offered by the paper is that “[o]ur results suggest that assemblage co-occurrence patterns remained relatively consistent for 300 Myr but have changed over the Holocene as the impact of humans has dramatically increased” and "...that the rules governing the assembly of communities have recently been changed by human activity".

There are many important and timely issues related to this – changing processes in natural systems, lasting human effects, the need to use all available data from across scales, the value of cross-disciplinary collaboration. But, in my view, the paper ignores a number of the assumptions and considerations that are essential to community ecology. There are several statistical issues that others have pointed out (e.g. temporal autocorrelation, the use of loess regression, null model questions), but a few in particular are things I was warned about in graduate courses, such as the peril of using proportions as response data (Jackson 1997) and the collapsing of huge amounts of data into an analysis of a summary of that data ("the proportion of significant pairwise associations that are aggregated"). Beyond the potential issues with calculating correct error terms, this makes interpretation much more difficult for the reader.

Most importantly, in my view, the Nature paper commits the sin of ignoring the essential role of scale in community ecology. A good amount of time and writing has been spent reconciling issues of spatial and temporal scale in ecology. These concepts are essential even to the definition of a 'community'. And yet, scale is barely an afterthought in these analyses. (Sorry, perhaps that's a bit over-dramatic....) Fossils—undeniably an incomplete and biased sample of an assemblage—can't be resolved to more than a very broad spatial and temporal scale. For example, a 2-million-year-old fossil and a 2.1-million-year-old fossil may or may not have interacted, habitats may have varied between those times, and the populations of two such species may well have differed greatly over a few thousand years. Compare this to modern data, which represent species occurring at exactly the same time and in relatively small areas. The difference in scale is huge, and so these data are not directly comparable.

Furthermore, because we know that scale matters, we might predict that co-occurrences should increase at larger spatial grains (you include more habitat, so more species with the same broad requirements will be routinely found in a large area). But the authors reported finding no significant relationship between dataset scale and the degree of aggregation observed (their Figure 2, not replicated here): this might suggest the methodology or analyses need further consideration. Co-occurrence data are also, unfortunately, a fairly weak source of inference for questions about community assembly in the absence of other data. So while the questions remain fascinating to me – is community assembly changing fundamentally over time? is that a frequent occurrence or driven by humans? what did paleo-communities look like? – I think that the appropriate data and analyses to answer these questions are not so easy to find and apply.


#######################
Response from Brian McGill:
The comment I was trying to post was:

Interesting perspective Caroline! As a coauthor, I of course am bound to disagree. I'll keep it short, but 3 thoughts:

1) The authors definitely agonized over potential confounding effects. Indeed, they spent over a year on it. In my experience paleoecologists default to assuming everything is an artefact in their data until they can convince themselves otherwise, much more than neo-ecologists do.
2) They did analyze the effects of scale (both space and time) and found it didn't seem to have much effect at all on the variable of interest (% aggregations). You interpret this as "this might suggest the methodology or analyses need further consideration". But to me, I hardly think we know enough about scaling of species interactions to reject empirical data when it contradicts our very limited theoretical knowledge (speculation might be a better word) of how species interactions scale.
3) To me (and I think most of the coauthors) by far the most convincing point is that the pattern (a transition around 8,000 years ago, plus or minus, after 300,000,000 years of constancy) occurs WITHIN the two datasets that span it (pollen of North America and mammal bones of North America both span about 20,000 years ago to modern times), and they have consistent taphonomies, sampling methods, etc., and yet both show the transition.

I agree that better data without these issues is difficult (impossible?) to find. The question is what you do with that. Do you not answer certain questions? Or do you do the best you can and put it out for public assessment? Obviously I side with the latter.

Thanks for the provoking commentary.

Cheers

Brian

Friday, July 17, 2015

The first null model war in ecology didn't prevent the second one*

The most exciting advances in science often involve scientific conflict and debate. These can be friendly and cordial exchanges, or they can be acrimonious and personal. Scientists often wed themselves to their ideas and can be quite reluctant to admit that their precious idea was wrong. Students in ecology often learn about some of these classic debates (Clements v. Gleason; Diamond v. Simberloff and Connor), but other debates often fade from our collective memory. Scientific debates are important things to study: they tell us about how scientists function and how they communicate, but more importantly, by studying them we are less likely to repeat them! Take for example the debate over species-per-genus ratios, which happened twice: first in the 1920s, then again in the 1940s. The second debate happened in ignorance of the first, with the same solution being offered!

To understand the importance of testing species-genus ratios we can start with a prediction from Darwin:

As species of the same genus have usually, though by no means invariably, some similarity in habits and constitution, and always in structure, the struggle will generally be more severe between species of the same genus, when they come into competition with each other, than between species of distinct genera (Darwin 1859)

To test this hypothesis, the Swiss botanist Paul Jaccard (1901) created a ‘generic coefficient’ to describe biogeographical patterns and to measure the effects of competition on diversity. The generic coefficient was a form of the species-genus ratio (S/G), calculated as G/S x 100, and he interpreted a low S/G ratio (or high coefficient) to mean that competition between close relatives was strong, while a high S/G ratio (low coefficient) meant that a high diversity of ‘ecological conditions’ supported closely related species in slightly different habitats (Jaccard 1922). At the same time as Jaccard was working on his generic coefficient, the Finnish botanist Alvar Palmgren compiled S/G patterns across the Åland Islands and inferred that the low S/G values on distant islands reflected random chance (Palmgren 1921). Over several years, Jaccard and Palmgren had a heated exchange in the literature (across different journals and languages!) about interpreting S/G ratios (e.g., Jaccard 1922, Palmgren 1925). Palmgren’s contention was that the S/G ratios he observed were related to the number of species occurring on the islands – an argument which later work vindicated. A few years after their exchange, another Swiss scientist, Arthur Maillefer, showed that Jaccard’s interpretation was not supported by statistical inference (Maillefer 1928, 1929). Maillefer created what is likely one of the first null models in ecology (Jarvinen 1982). He calculated the expected relationship between Jaccard’s generic coefficient and species richness from ‘chance’ communities that were randomly assembled (Fig. 1 – curve II). Maillefer rightly concluded that, since the number of genera increases at a slower rate than richness, the ratio between the two couldn’t be independent of richness.
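Maillefer's 'chance' curve is easy to reproduce today. A minimal sketch, assuming the regional flora is represented as one genus label per species (the function names and defaults are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def generic_coefficient(genera):
    """Jaccard's generic coefficient: (genera / species) x 100."""
    return 100.0 * len(set(genera)) / len(genera)

def null_coefficient(flora, richness, n_draws=1000):
    """Expected coefficient for a random assemblage of a given richness
    drawn from the regional flora (Maillefer's curve II)."""
    flora = np.asarray(flora)
    draws = [generic_coefficient(rng.choice(flora, size=richness, replace=False))
             for _ in range(n_draws)]
    return float(np.mean(draws))

# Plotting null_coefficient(flora, s) for s = 2 .. len(flora) traces the
# expected decline of the coefficient with richness: genera accumulate more
# slowly than species, so the ratio is never independent of richness.
```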

Jaccard’s generic coefficients plotted by Maillefer, showing the relationship between the coefficient (calculated as Genera/Species x 100) and species richness (Maillefer 1929). The four curves depict different scenarios. Curve I shows the maximum values possible and curve IV the minimum. Curve III shows coefficients calculated from samples of a real flora, which stay near a mean value. Curve II represents the first null model in ecology, where species are randomly sampled (‘hasard’ translates as chance or luck) and the coefficient is calculated from the random assemblages.

This example is especially poignant because it foreshadowed another debate 20 years later – not just in terms of using a null expectation, but in showing that S/G ratios cannot be understood without comparison to the appropriate null. Elton (1946) examined an impressive set of studies to show that small assemblages tended to have low S/G ratios, which he thought indicated competitive interactions. Mirroring the earlier debate, Williams (1947) showed that S/G ratios were not independent of richness and that inferences about competition can only be supported if observed S/G values differ from expected null values. However, the error of inferring competition from S/G ratios without comparing them to null expectations continued into the 1960s (Grant 1966, Moreau 1966), until Dan Simberloff (1970) showed, unambiguously, that, independent of any ecological mechanism, lower S/G ratios are expected on islands with fewer species. Because he compared observed values to null expectations, Simberloff was able to show that assemblages actually tended to have higher S/G ratios than one would expect by chance (Simberloff 1970). So not only was competition not supported, but the available evidence indicated that perhaps there were more closely related species on islands than expected, which Simberloff took to mean that close relatives prefer the same environments (Simberloff 1970).


Darwin, C. 1859. The origin of the species by means of natural selection. Murray, London.
Elton, C. S. 1946. Competition and the Structure of Ecological Communities. Journal of Animal Ecology 15:54-68.
Grant, P. R. 1966. Ecological Compatibility of Bird Species on Islands. The American Naturalist 100:451-462.
Jaccard, P. 1901. Étude comparative de la distribution florale dans une portion des Alpes et du Jura. Bulletin de la Société Vaudoise des Sciences Naturelles 37:547-579.
Jaccard, P. 1922. La chorologie sélective et sa signification pour la sociologie végétale. Mémoires de la Société Vaudoise des Sciences Naturelles 2:81-107.
Jarvinen, O. 1982. Species-To-Genus Ratios in Biogeography: A Historical Note. Journal of Biogeography 9:363-370.
Maillefer, A. 1928. Les courbes de Willis: Répartition des espèces dans les genres de différente étendue. Bulletin de la Société Vaudoise des Sciences Naturelles 56:617-631.
Maillefer, A. 1929. Le coefficient générique de P. Jaccard et sa signification. Mémoires de la Société Vaudoise des Sciences Naturelles 3:9-183.
Moreau, R. E. 1966. The bird faunas of Africa and its islands. Academic Press, New York, NY.
Palmgren, A. 1921. Die Entfernung als pflanzengeographischer faktor. Series Acta Societatis pro Fauna et Flora Fennica 49:1-113.
Palmgren, A. 1925. Die Artenzahl als pflanzengeographischer Charakter sowie der Zufall und die säkulare Landhebung als pflanzengeographischer Faktoren. Ein pflanzengeographische Entwurf, basiert auf Material aus dem åländischen Schärenarchipel. Acta Botanica Fennica 1:1-143.
Simberloff, D. S. 1970. Taxonomic Diversity of Island Biotas. Evolution 24:23-47.
Williams, C. B. 1947. The Generic Relations of Species in Small Ecological Communities. Journal of Animal Ecology 16:11-18.


*This text has been modified from a forthcoming book on ecophylogenetics authored by Cadotte and Davies and published by Princeton University Press

Monday, April 21, 2014

Null models matter, but what should they look like?

Neutral Biogeography and the Evolution of Climatic Niches. Florian C. Boucher, Wilfried Thuiller, T. Jonathan Davies, and Sébastien Lavergne. The American Naturalist, Vol. 183, No. 5 (May 2014), pp. 573-584

Null models have become a fundamental part of community ecology. For the most part, this is an improvement over our null-model-free days: patterns are now interpreted with reference to patterns that might arise through chance and in the absence of the ecological processes of interest. Null models today are ubiquitous in tests of phylogenetic signal, patterns of species co-occurrence, and models of species distribution-climate relationships. But even though null models are a success in that they are widespread and commonly used, there are problems: in particular, there is a disconnect between how null models are chosen and interpreted and what information they actually provide. Unfortunately, simple and easily applied null models tend to be favoured, but they are often interpreted as though they were complicated, mechanism-explicit models.

The new paper “Neutral Biogeography and the Evolution of Climatic Niches” from Boucher et al. provides a good example of this problem. The premise of the paper is straightforward: studies of phylogenetic niche conservatism tend to rely on simple null models, and as a result may misinterpret what their data show because of the type of null model used. The study of phylogenetic niche conservatism and niche evolution is becoming increasingly popular, particularly studies of how species' climatic niches evolve and how climate niches relate to patterns of diversity. In a time of changing climates, there are also important applications looking at how species respond to climatic shifts. Studies of changes in climate niches through evolutionary time usually rely on a definition of the climate niche based on empirical data, more specifically, the mean position of a given species along a continuous abiotic gradient. Because this is not directly tied to physiological measurements, climate niche data may also capture the effects of dispersal limitation or biotic interactions. Hence the need for null models. However, the null models used in these studies are primarily designed to flag changes in climate niche that result from random drift or from selection in a varying environment. These null models use Brownian motion (a "random walk") to answer questions about whether niches are more or less similar than expected by chance, or whether a particular model of niche evolution fits the data better than a model of Brownian motion.
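The Brownian motion null is simple to simulate directly. A sketch (not the authors' code; encoding the tree via parent pointers is my own choice):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_bm(parents, lengths, sigma2=1.0, root_state=0.0):
    """Brownian-motion evolution of a climatic niche on a phylogeny.
    parents[i] is the parent of node i (-1 for the root) and lengths[i]
    the branch length above node i; nodes must be ordered so that parents
    precede children. Each branch adds a Normal(0, sigma2 * length) step."""
    states = np.empty(len(parents))
    for i, (p, t) in enumerate(zip(parents, lengths)):
        states[i] = root_state if p < 0 else \
            states[p] + rng.normal(0.0, np.sqrt(sigma2 * t))
    return states

# Null distribution of any tip-level statistic under Brownian motion:
# null_stats = [my_statistic(simulate_bm(parents, lengths)[tip_ids])
#               for _ in range(999)]
```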

The authors suggest that this reliance on Brownian motion is problematic, since such simple null models cannot distinguish between patterns of climate niches that arise simply through speciation and migration, with no selection on climate niches, and those that are the result of true niche evolution. If this is true, conclusions about niche evolution may be suspect, since they depend on the null model used. The authors used a neutral, spatially explicit model (an "alternative neutral biogeographic model") that simulates dynamics driven only by speciation and migration, with species being neutral in their dynamics. This provides an alternative model of the patterns that may arise in climate niches among species, despite the absence of direct selection on the trait. The paper then asks whether climatic niches exhibit phylogenetic signal when they arise via neutral spatial dynamics; whether gradualism is a reasonable neutral expectation for the evolution of climatic niches on geological timescales; and whether constraints on climatic niche diversification can arise simply through bounded geographic space. Simulations of the neutral biogeographic model used a gridded “continent” with variable climate conditions: each cell has a carrying capacity, and species move via migration and split into two species either by point mutation, or by vicariance (a geographic barrier appears, leading to divergence of two populations). Not surprisingly, their results show that even in the absence of any selection on species’ climate niches, patterns can result that differ greatly from those of a simple Brownian motion-based null model. That is, the simple null model (Brownian motion) often concluded that results from the more complex neutral model differed from the random/null expectation. This isn't a problem per se. The problem is that, at present, anything that differs from the Brownian motion null tends to be interpreted as a signal of niche evolution (or conservatism). Obviously that is not correct.
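For a feel for what such a neutral model involves, here is a toy, spatially explicit version in the same spirit. This is a drastic simplification of Boucher et al.'s simulations (no vicariance, uniform carrying capacity), and the grid size, k, m, and nu are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)

def neutral_grid(n_rows=20, n_cols=20, k=10, steps=200_000, m=0.1, nu=1e-4):
    """A grid of cells, each holding k ecologically neutral individuals.
    Each step, one individual dies and is replaced by a new species (point
    speciation, prob nu), a migrant from a neighbouring cell (prob m), or a
    local birth. No selection acts on climate niches, yet species become
    spatially clustered, so their realized niches (e.g. the mean climate of
    occupied cells) still show structure."""
    grid = np.zeros((n_rows, n_cols, k), dtype=int)   # one ancestral species
    neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    next_id = 1
    for _ in range(steps):
        r, c, i = rng.integers(n_rows), rng.integers(n_cols), rng.integers(k)
        u = rng.random()
        if u < nu:                                    # point speciation
            grid[r, c, i] = next_id
            next_id += 1
        elif u < nu + m:                              # migration
            dr, dc = neighbours[rng.integers(4)]
            grid[r, c, i] = rng.choice(grid[(r + dr) % n_rows, (c + dc) % n_cols])
        else:                                         # local replacement
            grid[r, c, i] = rng.choice(grid[r, c])
    return grid
```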

This paper is focused on the issue of choosing null models for studies of climate niche evolution, but it fits into a current of thought about the problems with how ecologists are using null models. It is one thing to know that you need and want to use a null model, but it is much more difficult to construct an appropriate null model, and interpret the output correctly. Null models (such as the Brownian motion null model) are often so simplistic that they are straw man arguments – if ecology isn't the result of only randomness, your null model is pretty likely to be a poor fit to the data. On the other hand, the more specific and complex the null model is, the easier it is to throw the baby out with the bathwater. Given how much data is interpreted in the light of null models, it seems that choosing and interpreting null models needs to be more of a priority.

Monday, June 17, 2013

Another round in Diamond vs. Simberloff: revisiting the checkerboard pattern debate

Edward F. Connor, Michael D. Collins, and Daniel Simberloff. 2013. "The Chequered History of Checkerboard Distributions." Ecology. http://dx.doi.org/10.1890/12-1471.1.

One of the most vociferous recent debates in community ecology started in the 1970s between Jared Diamond and Dan Simberloff (and colleagues) regarding whether 'checkerboard patterns' of bird distributions provided evidence for interspecific competition. This was an early and particularly heated example of the pattern versus process debate that continues in various forms today. Diamond (1975) proposed that the distribution of birds in the Bismark Archipelago, and particularly the fact that some pairs of bird species did not co-occur on the same islands (producing a checkerboard pattern), was evidence that competition between species limited their distributions. The issue with using this checkerboard pattern as evidence of competition, which Connor and Simberloff (1979) subsequently pointed out, was that a null model was necessary to determine whether it was actually different from random patterns of apparent non-independence between species pairs. Further, other mechanisms (different habitat requirements, speciation, dispersal limitations) could also produce non-independence between species pairs. The original debate may have died down, but the methodology for null models of communities suggested by Connor and Simberloff has greatly influenced modern ecological methods, and continues to be debated and modified to this day.

The original null model of bird distributions in the Bismark Archipelago involved a binary community matrix (rows represent islands, columns represent species) with 0s and 1s representing species presences and absences; all the 1s in a row thus represent the species present on that island. The original null model approach involved randomly shuffling the 0s and 1s while maintaining island richness (row sums) and species range sizes (column sums). The authors of a new paper in Ecology admit that the original null models didn’t accurately capture what Diamond meant by a "checkerboard pattern". This is interesting in part because two of the authors (E.F. Connor and Dan Simberloff) led the debate against Diamond and introduced the binary matrix approach for generating null expectations. So there is a little bit of a ‘mea culpa’ here. The authors note that the earlier null models captured patterns of non-overlap between species' distributions, but didn’t differentiate non-overlap between species with overlapping ranges from non-overlap between species that simply occurred on sets of geographically distant islands (referred to here as 'regional allopatry'). The original binary matrix approach didn’t consider the spatial proximity of species ranges.
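The standard fixed-fixed randomization achieves this by repeatedly flipping 2x2 'checkerboard' submatrices, which leaves every row and column sum unchanged. A minimal sketch (the swap count is illustrative, and production analyses use more carefully mixed algorithms):

```python
import numpy as np

rng = np.random.default_rng(4)

def fixed_fixed_null(matrix, n_swaps=10_000):
    """Randomize a binary island x species matrix while preserving island
    richness (row sums) and species range sizes (column sums), by flipping
    2x2 submatrices of the form [[1,0],[0,1]] <-> [[0,1],[1,0]]."""
    m = matrix.copy()
    done = attempts = 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        r = rng.choice(m.shape[0], size=2, replace=False)
        c = rng.choice(m.shape[1], size=2, replace=False)
        sub = m[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] \
                and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = sub[::-1]               # flip the checkerboard
            done += 1
    return m
```

The 2013 re-analysis adds a further restriction on top of this scheme: species are shuffled only among the islands of the groups where they actually occur.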

With this in mind, the authors re-analyzed checkerboard patterns in the Bismark Archipelago in a way that controls for regional allopatry. True checkerboarding was defined as: “a congeneric or within-guild pair with exclusive distribution, co-occurrence in at least one island group, and geographic ranges that overlap more or significantly more than expected under an hypothesis of pairwise independence”. This definition appears closer to Jared Diamond's original meaning, and so a null model that captures it is probably a better test of the original hypothesis. The authors looked at the overlap of convex hulls defining species’ ranges and, when randomizing the binary matrix, added the further restriction that species could occur only within the island groups where they were actually found (instead of being randomly shuffled across any island, as before).

Even with these clarified and more precise null models, the results remain consistent: true checkerboarding appears to occur rarely, relative to chance. Of course, this doesn't mean that competition is not important, but “Rather, in echoing what we said many years ago, one can only conclude that, if they do compete, competition does not strongly affect their patterns of distribution among islands.” More generally, the endurance of this particular debate says a lot about the longstanding tension in ecology between the value and wealth of information captured by ecological patterns and the limitations and caveats that come with such data. There is also a subtle message about the limitations of null models: they are often treated as a magic wand for dealing with observed patterns, but null models are limited by our own understanding (or ignorance) of the processes at play and our interpretation of their meaning.

Tuesday, February 14, 2012

A good null model is hard to find



Ecologists have always found the question of how communities assemble to be of great interest. However, studies of community assembly are often thwarted by the large temporal and spatial scales over which processes occur, making experimental tests of assembly theory difficult. As a result, researchers are often forced to rely on observational data and make inferences about the mechanisms at play from patterns alone. While historical assembly research focused on inferring evidence of competition or environmental filtering from patterns of species co-occurrence, more recent research often looks at patterns of phylogenetic or trait similarity in a community to answer these questions. 

Not surprisingly, when methods rely heavily on observational data they are open to criticism: one of the most important outcomes of the early community assembly literature was the recognition that patterns that appeared to support a hypothesis about competition or environmental filtering could in fact arise by random chance. This ultimately led to the widespread incorporation of null models, which are meant to simulate the patterns that might be observed by random chance (or through other processes not under study), against which the observed data can be compared. Patterns of functional and phylogenetic information in communities can likewise be compared against null expectations, to check that apparent phylogenetic or functional over- or under-dispersion could not have arisen by chance alone. However, while null models are an important tool in assembly research, they are sometimes treated as the foolproof solution to all of its problems.

In a new paper, Francesco de Bello states frankly that “whilst reading null-model methods applied in the literature (indeed including my work), one may have the impression of reading a book of magic spells”. While null models are increasingly sophisticated, allowing researchers to determine which processes to control for and which to leave out, de Bello suggests that the decision to include or omit particular factors from a null model can be unclear, making it difficult to interpret results or compare them across studies. Further, results from null models may not mean what researchers expect them to mean.

Using the example of functional diversity (FD; the variation in trait values among species in a community), de Bello illustrates how null models may have different meanings than expected. Ideally, a null model for FD should produce random values of FD against which the observed values can be compared. The difference between observed and random results can be interpreted using the standardized effect size (SES, the standardized difference between the observed and randomly generated FD values): SES values > 0 indicate that traits are more divergent than expected by chance, suggesting competition structures communities; SES values < 0 indicate that traits are more convergent than expected by chance, suggesting environmental conditions structure communities; and SES ~ 0 indicates that trait values aren’t different from random. However, de Bello shows that the SES is driven by the observed FD values, because the ‘random’ FD values depend on the pool of observations sampled. The values the null model produces are ultimately dependent on the observed values, even though inferences are made by comparing the null and observed values as though they were independent. For example, consider building a null model of community structure for plant communities found along two vegetation belts. If the null model is constructed using all the plant communities, regardless of the habitat they are found in, the resulting null FD value will be higher, since dissimilar species from different vegetation belts are randomly combined into null communities. If null models are constructed separately for the two vegetation belts, the null FD values are lower, since the species within each belt are more similar. The magnitude of the difference between the null model and the observed values, and hence the biological conclusions one would draw, therefore depends on which null model was constructed.
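De Bello's point is easy to demonstrate numerically. A sketch, with mean pairwise trait distance standing in for whichever FD index is used (the function names and single-trait setup are mine):

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_pairwise_distance(traits):
    """Stand-in FD index: mean absolute pairwise trait distance."""
    traits = np.asarray(traits, dtype=float)
    d = np.abs(traits[:, None] - traits[None, :])
    return d[np.triu_indices(len(traits), k=1)].mean()

def ses_fd(community_traits, pool_traits, n_null=999):
    """SES = (observed FD - mean null FD) / sd(null FD), with null
    communities of matching richness drawn at random from the chosen pool."""
    obs = mean_pairwise_distance(community_traits)
    s = len(community_traits)
    null = np.array([mean_pairwise_distance(
                         rng.choice(pool_traits, size=s, replace=False))
                     for _ in range(n_null)])
    return (obs - null.mean()) / null.std()

# The same community can change sign depending on the pool used, which is
# exactly de Bello's concern:
# ses_fd(comm, belt_a_traits)
# ses_fd(comm, np.concatenate([belt_a_traits, belt_b_traits]))
```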

From de Bello 2012, illustrating how combining species pools (right) can lead to entirely different conclusions about whether communities are convergent or divergent in terms of traits than when the pools are considered separately (left, centre).
De Bello’s findings make important points about the limitations of null models, particularly for functional diversity, but likely for other types of response variables too. The type of null model he explores is relatively simplistic (reshuffling of species among sites), and the suggestion that the species pool affects the null model is not unique (Shipley & Weiher, 1995). However, even sophisticated and complex null models need to be biologically relevant and interpretable, and null models are still frequently used incorrectly. Although only mentioned briefly, de Bello also notes another problem with studies of community assembly: popular indices like FD, PD, and others may not always be able to distinguish correctly between different assembly mechanisms (Mouchet et al. 2010; Mayfield & Levine 2010), something that null models do not control for.