This course will cover the basics of parsimony analysis and character optimisation, tree searches, diagnosing and summarising results efficiently, and measuring group supports.
The course will be informal, with extensive hands-on exercises that will help students become familiar with the main aspects of phylogenetic analysis using TNT. Each unit consists of a lecture (one to two hours, depending on the topic), followed by exercises illustrating the points just covered. Switches between “lecture” and “hands-on” mode will be dynamic, depending on how students progress with the exercises.
This course will make extensive use of TNT. There will also be a demonstration and some practice with GB->TNT, a program that creates TNT matrices from GenBank data (GB->TNT in turn requires an alignment program, ideally Mafft or Muscle, and possibly BioEdit to inspect alignments).
A graduate or postgraduate degree in any biosciences discipline (palaeontologists are welcome regardless of degree), plus basic knowledge of statistics, cladistics, and personal computers. All participants must bring their own laptop; Windows is strongly recommended (the software also runs on Macintosh, though without a graphical interface).
Monday, October 2nd, 2017. Intro and Basics.
Parsimony and phylogenetic systematics. Character optimisation and mapping. Most parsimonious reconstructions and specific changes. Input/output in TNT. Dataset formats. Using GB->TNT to create matrices. Instruction files. Options for graphic output (SVG, metafiles). Creation of “batch” files. Editing trees. Handling tree files. Groups of trees, characters and taxa.
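As a rough illustration of what character optimisation computes, here is a toy Fitch pass in Python (not TNT itself): it counts the minimum number of state changes for one character on a fixed rooted binary tree. The tree, taxa, and states are invented for the example.

```python
# Toy sketch: Fitch optimisation of a single character on a fixed rooted
# binary tree, counting the minimum number of state changes.
def fitch(tree, states):
    """tree: nested tuples of taxon names; states: taxon name -> state."""
    changes = 0

    def down(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: its observed state set
            return {states[node]}
        left, right = (down(child) for child in node)
        common = left & right
        if common:                           # intersection: no extra step
            return common
        changes += 1                         # union: one extra step required
        return left | right

    down(tree)
    return changes

# Hypothetical five-taxon tree and binary character.
tree = ((("A", "B"), "C"), ("D", "E"))
states = {"A": 0, "B": 0, "C": 1, "D": 1, "E": 1}
print(fitch(tree, states))  # → 1 step for this character on this tree
```

This is only the down-pass that yields the step count; reconstructing the actual most parsimonious state assignments (as TNT does when mapping characters) also requires an up-pass.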
Tuesday, October 3rd, 2017. Tree calculation.
Tree searches. Exact solutions, Wagner trees, branch-swapping. Local and global optima. Use of multiple addition sequences. Improving search strategies. Factors that affect the efficiency of tree searches. Constraints and “timeouts”.
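The rationale for multiple addition sequences can be caricatured with a toy hill-climb in Python (a stand-in for branch-swapping, not an actual tree search): a greedy climber on a score surface with several peaks gets trapped in local optima, so starting from many independent points and keeping the best result improves the chance of finding the global optimum. The landscape is made up.

```python
# Toy illustration of local vs. global optima: a greedy hill-climb on a
# one-dimensional, made-up score landscape, repeated from every start point.
score = [1, 3, 2, 2, 5, 4, 1, 6, 2]   # 6 (at index 7) is the global optimum

def hill_climb(start):
    pos = start
    while True:
        neighbours = [p for p in (pos - 1, pos + 1) if 0 <= p < len(score)]
        best = max(neighbours, key=lambda p: score[p])
        if score[best] <= score[pos]:
            return score[pos]          # stuck at a local optimum
        pos = best

# Analogue of multiple addition sequences: many independent starts.
results = [hill_climb(start) for start in range(len(score))]
print(max(results))                    # → 6: only some starts reach it
```

A single start (e.g. `hill_climb(0)`) stalls on a local peak of 3, which is why replicated searches from different starting trees are routine.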
Tuesday, October 3rd and Wednesday, October 4th, 2017. Ambiguity and consensus; summarising results.
Zero-length branches and collapsing rules. Types of consensus and their use; improving consensus trees; supertrees. Pruned consensus. Comparison of tree-topologies; SPR distances.
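The idea behind a strict consensus can be sketched in a few lines of Python (a toy, not TNT's implementation): collect the clades of each input tree and keep only those present in all of them. Trees here are nested tuples and clades are sets of taxon names; both trees are invented.

```python
# Toy sketch of a strict consensus via shared clades: a group is retained
# only if it appears in every input tree.
def clades(tree):
    """Return the set of clades (frozensets of taxon names) in a nested-tuple tree."""
    found = set()

    def walk(node):
        if isinstance(node, str):
            return frozenset([node])
        taxa = frozenset().union(*(walk(child) for child in node))
        found.add(taxa)
        return taxa

    walk(tree)
    return found

# Two hypothetical trees that agree only on the group (A, B).
t1 = ((("A", "B"), "C"), "D")
t2 = ((("A", "B"), "D"), "C")
shared = clades(t1) & clades(t2)
print(sorted(sorted(c) for c in shared))  # → [['A', 'B'], ['A', 'B', 'C', 'D']]
```

Groups found in only some trees (such as (A, B, C) here) collapse, which is why consensus trees lose resolution and why pruning unstable taxa can recover it.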
Wednesday, October 4th, 2017. Character weighting.
Successive and implied weighting. Auto-weighted optimisation. Refining character weighting with blocks; taking into account missing entries. User-defined weighting functions.
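Implied weighting scores characters with Goloboff's concave fitting function f = k / (k + es), where es is the number of extra (homoplastic) steps a character requires and k is the concavity constant (TNT's default is k = 3). A minimal Python sketch, with made-up step counts:

```python
# Sketch of the implied-weighting fitting function f = k / (k + es):
# characters with more homoplasy (larger es) contribute less.
def implied_fit(extra_steps, k=3.0):
    """Fit of one character given its extra steps and the concavity k."""
    return k / (k + extra_steps)

extra = [0, 1, 4]                       # hypothetical extra steps per character
fits = [implied_fit(es) for es in extra]
print([round(f, 3) for f in fits])      # → [1.0, 0.75, 0.429]
```

Searches under implied weighting maximise the summed fit over characters rather than minimising raw tree length, so homoplastic characters are down-weighted automatically during the search.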
Thursday, October 5th, 2017. Group supports.
Concept of group support. Bremer supports; how to calculate them; search of suboptimal trees. Problems with Bremer supports; absolute and relative Bremer support. Partitioned Bremer support and individual Bremer supports. Measures based on resampling; effect of search strategies and collapsing rules. Problems with resampling methods.
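The mechanics of resampling-based supports can be sketched in Python (a toy, not TNT): each pseudoreplicate samples the matrix's characters (columns) with replacement, the analysis is repeated on each pseudoreplicate, and a group's support is its frequency across replicates. The matrix below is invented, and the tree-search step is omitted.

```python
# Toy sketch of bootstrap resampling: build a pseudoreplicate matrix by
# sampling characters (columns) with replacement. The search that would be
# run on each pseudoreplicate is omitted.
import random

matrix = [[0, 1, 1, 0],   # taxon A, four hypothetical characters
          [0, 1, 0, 0],   # taxon B
          [1, 0, 1, 1]]   # taxon C

def bootstrap_replicate(matrix, rng):
    ncols = len(matrix[0])
    cols = [rng.randrange(ncols) for _ in range(ncols)]    # with replacement
    return [[row[c] for c in cols] for row in matrix]

rng = random.Random(1)
rep = bootstrap_replicate(matrix, rng)
print(len(rep), len(rep[0]))    # same dimensions as the original matrix
```

Because the resampled matrices are analysed with whatever search strategy and collapsing rules are in force, those settings directly affect the resulting frequencies, which is one of the issues covered in this unit.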
Thursday, October 5th and Friday, October 6th, 2017. Tree search in large and difficult data sets.
Special search algorithms. Sectorial searches. Ratchet and drifting. Tree fusing. Combining different algorithms. Driven searches and stabilisation of consensus.
Friday, October 6th, 2017. Scripting.
Automation of decisions to go beyond simple commands. Flow control. Decisions. Expressions, user variables, and internal variables. Design of simple scripts.