**ISI - INTERNATIONAL STATISTICAL INSTITUTE **

Short Book Reviews

Reviews 2002

Title STATISTICAL TECHNIQUES IN BIOASSAY, 2nd revised and enlarged edition. Author Z. Govindarajulu. Publisher Basel: Karger, 2001, pp. xvii + 234, SwFr. 98.00/DM127.00/ US$85.25. Contents:

1. Introduction

2. Preliminaries

3. Algebraic dose-response relationships

4. The logit approach

5. Other methods of estimating the parameters

6. The angular response curve and other topics

7. Estimation of points on the quantal response function

8. Sequential up and down methods

9. Estimation of 'Safe Doses'

10. Bayesian bioassay

11. Radioimmunoassays

12. Sequential estimation of the mean logistic response function

Readership: Experimental scientists, statisticians

This is a revised and enlarged version of the 1988 edition [Short Book Reviews, Vol. 9, p.3].

The scope is wide, though there is no detailed coverage of robust methods, nor of overdispersion, and little material on times to response, especially with regard to the material of Chapters 8-11, and historical work. It is claimed that readers only need a basic course in inference, but there is much algebraic detail. There are only five figures, with the first not appearing until page 119. It is curious that the first mention of GLIM is not until page 95, and there appears to be no explicit mention of generalized linear models; the sole reference to Gibbs sampling is an aside on the last page of Chapter 10. More guidance, more selection, and less detail would have resulted in a book that was less encyclopedic, but probably more practically useful.

Reviewer: B.J.T. Morgan, University of Kent, Canterbury, U.K.

Title EXPERIMENTAL DESIGN WITH APPLICATIONS IN MANAGEMENT, ENGINEERING, AND THE SCIENCES. Author P.D. Berger and R.E. Maurer. Publisher Belmont, California: Wadsworth/Thomson Learning, 2002, pp. xvi + 480. Contents:

PART I: Primary Focus on Factors Under Study

PART II: Primary Focus on the Number of Levels of a Factor

PART III: Response-Surface Methods, Other Topics, and the Literature of Experimental Design

Readership: Students of quantitative methods in business

This is a largely traditional treatment of experimental design with an interesting overlay of scenarios from the authors' experience in business. The presentation is more algebraic than geometric or graphical; y-bar-double-dot notation appears seven pages prior to the first table of illustrative data, and seventeen pages before these data are first compared via box plots. The dearth of pictures is surprising for a modern textbook. I am also concerned that analysis of variance and significance testing are given such emphasis as core methods of interpretation. Confidence intervals for means and effects are treated very briefly.

The end-of-chapter exercises are an important part of the book. Some of the methodologies are illustrated with software output from JMP and SPSS, and data sets accompanying the text are available to download at the publisher's web site.

One can sense from reading the text that the authors are convincing expositors in their classrooms. They offer many good insights in their written narratives, but an impatient reader may miss these. The book will be more effective as a course text (accompanied by lectures) than as a reference for self-tutoring. Most students will need a strong instructor to bring perspective to the algebraic forest.

Reviewer: C.A. Fung, Brookfield, U.S.A.

Title APPLIED STOCHASTIC MODELLING. Author B.J.T. Morgan. Publisher London: Arnold, 2000, pp. xxi + 297, £19.99. Contents:

1. Introduction and examples

2. Basic model fitting

3. Function optimization

4. Basic likelihood tools

5. General principles

6. Simulation techniques

7. Bayesian methods and Markov chain Monte Carlo

8. General families of models

Readership: Final year undergraduates, first year mathematics and statistics postgraduate students, scientific researchers using modern statistical methods

This volume provides both the methodology and the underlying theory (without formal proofs) for applying stochastic models to a very broad range of problems. It is driven by real data and problems. The prerequisite is a typical second-year level course on probability and statistics at a British university. The approach is modern and very computer-oriented; MATLAB is the chosen computer package for analyzing the illustrative data. General topics such as choice of model, parameter transformation and over-parameterization are discussed. Amongst the various methodologies described are deterministic and stochastic search algorithms, profile likelihoods, the EM algorithm and some of its generalizations, bootstrapping, the Gibbs sampler and Metropolis-Hastings algorithms, GLM, GLMM, and GAM models. Appendix A contains some reference material on probability and statistics. Appendix B describes various aspects of MATLAB. Appendix C summarizes the basic ideas on kernel density estimation. There are lots of exercises plus unusually detailed solutions and comments for selected exercises. The list of references is comprehensive and up-to-date; the subject index is thorough. The book's sets of data and the relevant MATLAB programs are available on its website (www.arnoldpublishers.com/support/stochastic). I enjoyed reading this book. It should appeal to a wide audience.

Reviewer: C.D. Kemp, University of St Andrews, St Andrews, U.K.

Title LUNDBERG APPROXIMATIONS FOR COMPOUND DISTRIBUTIONS WITH INSURANCE APPLICATIONS. Author G.E. Willmot and X.S. Lin. Publisher New York: Springer-Verlag, 2000, pp. x + 250. Contents:

1. Introduction

2. Reliability background

3. Mixed Poisson distributions

4. Compound distributions

5. Bounds based on reliability classifications

6. Parametric bounds

7. Compound geometric and related distributions

8. Tijms approximations

9. Defective renewal equations

10. The severity of ruin

11. Renewal risk processes

Readership: Probabilists, ruin and risk theorists, actuaries, insurers

The monograph studies in careful detail Lundberg-type inequalities for the tail of a compound distribution, based on assumptions both on the mixing sequence and on the mixed distribution. These bounds are mainly of three types: exponentially light tails, heavy Pareto-type tails, or intermediate medium-heavy tails; they are derived, inter alia, when the mixing distribution is based on an underlying Poisson, a mixed-Poisson or a renewal counting process. Two-sided inequalities are also obtained using properties of life distributions that are familiar in reliability theory; this area therefore plays a structuring role in the entire monograph. Other approaches are based on mathematical induction or on martingale theory.

Apart from direct applicability to estimation of compound distributions for the total claim amount in an insurance portfolio, the monograph has a number of special features and illustrations. For example, ruin probabilities are treated both in discrete and continuous time. Further risk quantities like the time of ruin, the severity of ruin and the duration of negative surplus under ruin are discussed in detail.

Reviewer: J.L. Teugels, Katholieke Universiteit Leuven, Heverlee, Belgium

Title STOCHASTIC PROCESSES: AN INTRODUCTION. Author P.W. Jones and P. Smith. Publisher London: Arnold/Oxford University Press, 2001, pp. xii + 259, £19.99. Contents:

1. Some background on probability

2. Some gambling problems

3. Random walks

4. Markov chains

5. Poisson processes

6. Birth and death processes

7. Queues

8. Reliability and renewal

9. Branching and other random processes

10. Computer simulations and projects

Readership: Second or third year undergraduates in mathematics, statistics or combined studies with a mathematical content

This is a straightforward introduction for mathematics students to the basic mathematics of stochastic processes. It is the course that has been given for as long as I can remember by most university mathematics departments throughout the English-speaking world. It is well done as far as it goes, and the exercises are well chosen. A concession to modernity is a short section on stopping rules and another giving computer simulation exercises and projects.

This book grew out of a course given again and again over the past twenty years. This alas has not prevented glaring howlers. In an example on the normal distribution moment generating function, the expected area E(X₁X₂)/2 of a right triangle with independent identically distributed normal perpendicular sides is calculated as E(X²)/2, the coefficient of t² in the expansion of the moment generating function. Assiduous editing would also have avoided the discrete uniform distribution on the n values r to r + n - 1 having mean (n + 1)/2, an error repeated in the appendix. I could also have done without the oft-repeated recalculation of the means of the exponential (sometimes called negative exponential), geometric, Poisson and uniform distributions.
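For the record, the correct mean of that discrete uniform distribution follows from summing an arithmetic progression:

```latex
% Mean of the discrete uniform distribution on the n values r, r+1, ..., r+n-1:
\mathrm{E}(X) \;=\; \frac{1}{n}\sum_{k=0}^{n-1}(r+k)
              \;=\; r + \frac{n-1}{2},
% which reduces to (n+1)/2 only in the special case r = 1.
```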

Reviewer: R. Coleman, Imperial College of Science, Technology and Medicine, London, U.K.

Title MONTE CARLO STRATEGIES IN SCIENTIFIC COMPUTING. Author J.S. Liu. Publisher New York: Springer-Verlag, 2001, pp. xvi + 343, US$69.95/DM160.00. Contents:

1. Introduction and examples

2. Basic principles: Rejection, weighting and others

3. Theory of sequential Monte Carlo

4. Sequential Monte Carlo in action

5. Metropolis algorithm and beyond

6. The Gibbs sampler

7. Cluster algorithms for the Ising model

8. General conditional sampling

9. Molecular dynamics and hybrid Monte Carlo

10. Multilevel sampling and optimization methods

11. Population-based Monte Carlo methods

12. Markov chains and their convergence

13. Selected theoretical topics

Readership: Researchers using Monte Carlo methods

This book begins with a brief discussion of standard Monte Carlo methods such as rejection and importance sampling and graduates very quickly to the more recent advances in the subject. The methodology is applied and illustrated through Bayesian missing data problems, molecular simulation, bioinformatics, dynamic system analysis, and self-avoiding random walks. The last half of the book concentrates on Markov chain based Monte Carlo strategies (the Gibbs sampler, MCMC) and modifications designed to improve their efficiency. Coupling methods, perfect sampling, and some theoretical topics related to MCMC are discussed briefly at the end of the book. This is a worthwhile reference to recent advances in sequential Monte Carlo, primarily Bayesian and Markov chain methods. To those with an interest in these topics, it is worth a read.

Reviewer: D.L. McLeish, University of Waterloo, Waterloo, Canada

Title COMPUTER INTRUSION DETECTION AND MONITORING: A STATISTICAL VIEWPOINT. Author D.J.Marchette. Publisher New York: Springer-Verlag, 2001, pp. xvii + 331, US$64.95/DM145.00. Contents:

PART I: Networking Basics

PART II: Intrusion Detection

PART III: Viruses and Other Creatures

Readership: Statisticians who wish to become involved in the data analytic aspects of computer security and computer scientists who wish to expand their toolbox of techniques for detecting intruders

This book is about one of those areas that provide rich opportunities for statisticians, where statistics plays an essential role, and where statisticians can have a substantial impact. It is also an area at the interface with computer science. History shows that statisticians are often slow to grasp such opportunities (neural networks, expert systems and data mining spring to mind as three other such areas). The tools for computer intrusion detection are essentially statistical, though considerable familiarity with the application area is needed to be able to apply them; the areas of genomics and proteomics are similar in this regard. This book effectively provides the necessary background material for this intensely jargon-strewn area. The book includes many real examples - along with repeated pleas for the reader not to try the attacks, viruses, worms, etc. described in the book. The statistical tools used in this book include graphical methods, supervised classification and assessment methods such as ROC analysis, cluster analysis, probability models, outlier detection, mixture models, kernel estimators, functional data analysis, neural networks, hidden Markov models, and epidemiological models. The area is characterized by large sets of data, models which often need to run in real time, and methods which may require adaptive updating of estimation algorithms.

The book provides an excellent introduction to the area. I recommend it to any computer- (and Unix-) literate statistician who wishes to make an impact in an area which will continue to be of great importance.

Reviewer: D.J. Hand, Imperial College of Science, Technology and Medicine, London, U.K.

Title ELEMENTS OF DISCLOSURE CONTROL. Author L. Willenborg and T. de Waal. Publisher New York: Springer-Verlag, 2001, pp. xv + 261, US$59.00. Contents:

1. Overview of the area

2. Disclosure risks for microdata

3. Data analytic impact of SDC techniques on microdata

4. Applications of non-perturbative SDC techniques for microdata

5. Applications of perturbative SDC techniques for microdata

6. Disclosure risk for tabular data

7. Information loss in tabular data

8. Applications of non-perturbative techniques for tabular data

9. Applications of perturbative techniques for tabular data

Readership: Official, social and medical statisticians and others involved in releasing personal or business data for statistical use

Statistical disclosure control (SDC) is the discipline concerned with modifying data in such a way that either statistical summaries or microdata may be released without permitting identities of individuals to be deduced - that is, so that confidentiality is preserved.

This book describes the theoretical and methodological issues of SDC, and does not discuss any of the specialized software now available. After an introductory chapter, the next four chapters deal with microdata and the final four with tabular data. The book discusses criteria for determining when data are safe, and distinguishes between 'non-perturbative' methods (recoding, subsampling, table redesign, etc.) and 'perturbative' methods. There are subtleties: modifying data runs the risk of introducing inconsistencies, and these, in themselves, may permit someone to deduce what kind of modification has been made.

The area is still a rich one for research, with important work remaining to be done. Some such issues are pointed out by the authors. A statistician on the verge of a research career could do worse than consider this area, which has opportunities for theoretical work of practical importance, and which bridges a range of application domains. This book provides a good starting point.

Reviewer: D.J. Hand, Imperial College of Science, Technology and Medicine, London, U.K.

Title STATISTICIANS OF THE CENTURIES. Author C.C. Heyde and E. Seneta (Eds.). P. Crépel, S.E. Fienberg and J. Gani (Associate Eds.). Publisher New York: Springer-Verlag, 2001, pp. xii + 500, US$45.95/DM100.00. Contents:

Alphabetical listing of statisticians

16th century (1 entry)

17th century (7 entries)

18th century (10 entries)

19th century (32 entries)

20th century (54 entries)

Readership: Very general

This book arose out of an International Statistical Institute initiative. Its objective is stated to be to demonstrate the achievement of statistics to a broad audience and to commemorate the work of celebrated statisticians. This demanding objective has been outstandingly well met. The choice of individuals for inclusion, taken from those born before 31 December 1900, has been wisely wide-ranging and includes individuals like Keynes and Hurst, who would be thought of as an economist and a civil engineer respectively, but whose work had an important and influential statistical component. Most entries have a photograph, a very short abstract and typically a four or five page essay, although a few are longer, followed by a short bibliography. The essays are lucid, informative, sympathetic to their subject (there are no hatchet jobs) and most contain a pleasing mixture of technical and personal detail. The editorial team and the contributors are to be congratulated on an important and enjoyable contribution to the general literature on our subject.

Reviewer: D.R. Cox, Nuffield College, Oxford, U.K.

Title WEIGHING THE ODDS. A COURSE IN PROBABILITY AND STATISTICS. Author D. Williams. Publisher Cambridge University Press, 2001, pp. xvii + 547, £70.00/US$100.00 Cloth; £24.95/US$37.95 Paper. Contents:

1. Introduction

2. Events and probabilities

3. Random variables, means and variances

4. Conditioning and independence

5. Generating functions; and the central limit theorem

6. Confidence intervals for one-parameter models

7. Conditional pdfs and multi-parameter Bayesian statistics

8. Linear models, ANOVA, etc.

9. Some further probability

10. Quantum probability and quantum computing

APPENDIX A: Some Prerequisites and Addenda

APPENDIX B: Discussion of Some Selected Exercises

APPENDIX C: Tables

APPENDIX D: A Small Sample of the Literature

Readership: Students of the subject who need a coherent framework to develop elementary methods to extend and broaden their knowledge; mathematicians who seek a serious formal introduction to the subject with real-world motivation

This is an excellent mathematical introduction to its subject and I am somewhat envious that I was myself trained on a rather more classical diet! The exercises and motivating examples here are extremely thought provoking and the author is refreshingly honest in addressing controversial issues and points of difficulty. The range is very large - from the 'law of averages' and 'car and goats' to gambling, martingales and a whole chapter on quantum computing, for which previous knowledge of quantum theory is not assumed.

The developing role of computer packages in applications to real data is recognized, but the author warns against the dangers of over-elaboration in modelling that can result from being seduced by modern computational efficiency. The style of the book is very attractive and the book has been very well produced.

Reviewer: F.H. Berkshire, Imperial College of Science, Technology and Medicine, London, U.K.

Title CHAOTIC ELECTIONS! Author D.G. Saari. Publisher Providence, Rhode Island: American Mathematical Society, 2001, pp. xiii + 159. Contents:

1. A mess of an election

2. Voter preferences, or the procedure?

3. Chaotic election outcomes

4. How to be strategic

5. What do the voters want?

6. Other procedures; Other assumptions

Readership: Everyone who does, or should, vote (about anything)

It has often been said that 'the voters are always right' and likewise that 'the trouble with democracy is that you do not know who is going to win.' The difficulty is that there are paradoxes and problems associated with voting and other aggregation procedures, which have been addressed by mathematicians since the eighteenth century. One surprising and disturbing feature is that such procedures can be sensitive to small changes in individual votes or procedures, a sensitivity associated in dynamics with the concept of chaos.

The author is well known for research in problems of Newtonian dynamics (the 'gravitational n-body problem') and his interest in voting systems began as a 'hobby'. However, this evidently turned into an obsession and also gave him a promotional device to attract mathematicians into the social sciences.

This book has a particular purpose: to explain even to a general reader, with only a modest mathematical background, what can go wrong in elections/gradings, and why. There are examples galore and naturally the year 2000 US Presidential election exerts a brutal, but very entertaining, fascination. It is disturbing that, for all the interplay among the counts, courts, recounts, chads and defective ballots, the real problem is with the voting system itself.

They may always be 'right', but mathematics identifies what the phrase 'what the voters really want' might mean and how to set up a voting method that produces the right outcome. This book is great and important reading for all - not only for the citizens of Florida.

Reviewer: F.H. Berkshire, Imperial College of Science, Technology and Medicine, London, U.K.

Title ENCYCLOPEDIA OF ENVIRONMETRICS. Author A.H. El-Shaarawi and W.W. Piegorsch (Editors-in-Chief). Publisher Chichester, U.K.: Wiley, 2002, pp. xxxix + 2502 (Four-volume set), £1,195.00. Subject Areas (editors):

Chemometrics (C. Spigelman)

Ecological Statistics (J. Verhoef)

Environmental Health (L. Ryan)

Environmental Policy and Regulation (L. Cox)

Extremes and Environmental Risk (J. Teugels)

Natural Resources and Agriculture (T. G. Gregoire)

Hydrological and Physical Processes (P. Chatwin and P. Sullivan)

Spatial/Temporal Modeling and Analysis (P. Guttorp)

Statistical and Numerical Computing (G. Hørst)

Stochastic Modeling and Environmental Change (D. Brillinger)

Statistical Theory and Methods (J. Zidek)

Readership: Any student or researcher interested in quantitative methods for the analysis and evaluation of environmental systems

The success of Wiley's Encyclopedia of Statistical Science, published during the 1980s, has led to several spin-offs, including the six-volume Encyclopedia of Biostatistics in 1998, and some recent one-volume reference works, all sharing a common format. The present Encyclopedia has 530 entries, each briefly surveying a topic and providing a list of key literature references and cross-references to related articles. The average article length is just over four pages; most are shorter, while there are long articles on Atmospheric Dispersion (45 pages spread over four entries), Risk (50 pages in six entries), Extreme Value Analysis (21 pages), Graphical Displays (27 pages) and Environmental Sampling (23 pages).

Very roughly three-quarters of the material might be regarded as statistics. Much of this is generic, and since environmental applications embrace such a wide range of statistical methods, the Encyclopedia can serve as a general reference work on statistical methods. For example, the article on Stochastic Processes gives a very general entry to the subject, whereas environmental applications are covered elsewhere, such as in the entries on Point Processes. While there is no general article on Regression, there are entries for Linear Models, Generalized Linear Models, Logistic Regression, Non-linear Regression, Non-parametric Regression, Random Effects, Mixed Effects, Regression Diagnostics, and many related topics. Many of the generic methods are illustrated using environmental sets of data.

The remaining articles cover non-stochastic mathematics (e.g. Fractal Dimension), environmental science (e.g. lakes, defoliation, fisheries, forestry, meteorology), economics and policy issues and some key (mainly US) legislation, professional/scientific societies, journals, agencies (affiliated with the United Nations, the European Union, or the US Government), and software (S-PLUS and SAS, but not the widely-used freeware R and BUGS).

Although the Encyclopedia is international in scope, it is centred in North America: eleven of the fourteen editors are based there, as are about two thirds of the contributors, which reflects the North American dominance of the field.

This is an excellent resource for its targeted audience. Unfortunately its price will prevent it being on the personal bookshelf of most environmetrics researchers, but it is a must for the libraries of institutions housing environmetrics students and researchers. It will also be useful for other statistics users in those institutions.

Reviewer: D.J. Balding, Imperial College of Science, Technology and Medicine, London, U.K.

Title THE HONORS CLASS: HILBERT'S PROBLEMS AND THEIR SOLVERS. Author B.H. Yandell. Publisher Natick, Massachusetts: A.K. Peters, 2002, pp. ix + 486, US$39.00/£28.00/Euro46.00. Contents:

1. Introduction: The Origin of the Coordinates

2. The Foundation Problems: 1, 2, 10

3. The Foundations of Specific Areas: 3, 4, 5, 6

4. Number Theory: 7, 8, 9, 11, 12

5. Algebra and Geometry: A Miscellany: 14, 15, 16, 17, 18

6. The Analysis Problems: 13, 19, 20, 21, 22, 23

7. We Come to Our Census

Readership: Everyone

In 1900, at the Paris International Congress of Mathematicians, David Hilbert delivered a lecture modestly titled "Mathematical Problems". This historic lecture set the stage and focus for a century of mathematics. In his talk, Hilbert listed ten outstanding problems and later, when the Proceedings of the Congress came out, expanded his list to twenty-three. These problems have since become known as Hilbert's problems. Not all have been solved, the most notorious outstanding one being the Riemann hypothesis, Hilbert's eighth problem. Only a few have been completely resolved, and some of these not in the way Hilbert had envisioned: Cantor's continuum hypothesis, for example. Solving a single one of Hilbert's problems puts one in "The Honours Class", whence the title of this book.

This book tells the story of these problems in the century since Hilbert's epoch-making lecture. Yandell relates the colourful stories behind these problems and gives insightful descriptions of the personalities behind the solutions. The book is partitioned into five sections, classified according to subject. Each section offers a gripping narrative of the mathematicians involved. Truly, the book does much to show that mathematics and mathematicians are living organisms.

Reviewer: R. Murty, Queen's University, Kingston, Canada

Title STATISTICS OF THE GALAXY DISTRIBUTION. Author V.J. Martinez and E. Saar. Publisher Boca Raton, Florida: Chapman and Hall/CRC Press, 2002, pp. 432, US$79.95/£49.99. Contents:

1. The clumpy universe

2. The standard model of the universe

3. Cosmological point processes

4. Fractal properties of the galaxy distribution

5. Statistical and geometrical models

6. Formation of structure

7. Random fields in cosmology

8. Fourier analysis of clustering

9. Cosmography

10. Structure statistics

APPENDIX A: Co-ordinate transformation

APPENDIX B: Some basic concepts in statistics

Readership: Practising cosmologists, graduate students in cosmology, statisticians with interest in spatial statistics applied to cosmology

Cosmology is an important application of spatial statistics. This book combines the description of methods in spatial statistics with background material on cosmological physics, thus illustrating the application of spatial statistics in cosmology. With extensive references to current literature and websites, as well as a large collection of illustrations of statistical methods, physical models and available data, the book provides a compact overview of current statistical practice in cosmology. It will be a good reference point as the availability of cosmological data increases rapidly.

Reviewer: E. Thönnes, University of Warwick, Coventry, U.K.

Title STATISTICAL METHODS IN MEDICAL RESEARCH, 4th edition. Author P. Armitage, G. Berry and J.N.S. Matthews. Publisher Oxford: Blackwell Science, 2002, pp. xi + 817, £55.00. Contents:

1. The scope of statistics

2. Describing data

3. Probability

4. Analysing means and proportions

5. Analysing variances, counts and other measures

6. Bayesian methods

7. Regression and correlation

8. Comparison of several groups

9. Experimental design

10. Analysing non-normal data

11. Modelling continuous data

12. Further regression models for a continuous response

13. Multivariate methods

14. Modelling categorical data

15. Empirical methods for categorical data

16. Further Bayesian methods

17. Survival analysis

18. Clinical trials

19. Statistical methods in epidemiology

20. Laboratory assays

Readership: Medical research workers, statisticians

This is the fourth edition of one of the standard references in medical statistics [Short Book Reviews, Vol. 16, p. 1]. The three previous editions have contributed substantially to the formal and informal education of many individuals involved in medical research and, thus, to a wide variety of specific research projects. However, with the proliferation of more recently published books on medical statistics, Statistical Methods in Medical Research is perhaps regarded by some as a classic more than a current standard resource. If so, that should be the case no longer!

This new edition is a remarkably up to date survey of the statistical methodology used in medical research. It is a thoughtful and substantive revision of the third edition. It retains the wise perspective of earlier editions and extends it to more recent methodological developments. Chapter 12 is a particular 'tour de force'. While it might be possible to find minor criticisms of some sections, this is a volume which could usefully, and perhaps should, be read from cover to cover by anyone embarking on the study of medical statistics. For those already working in the area, it should at least be on their bookshelves.

Reviewer: V.T. Farewell, Medical Research Council, Cambridge, U.K.

Title CAUSAL ANALYSIS IN BIOMEDICINE AND EPIDEMIOLOGY. Author M. Aickin. Publisher New York: Dekker, 2002, pp. ix + 224, US$125.00. Contents:

1. Orientation

2. What is causation?

3. Naïve minimal sufficient cause

4. Events and probabilities

5. Unitary algebra

6. Nontrivial implication

7. Tiny examples

8. The one-factor model

9. Graphical elements

10. Causation

11. Structural equations

12. The two-factor model

13. Down's syndrome example

14. Marginalization

15. Stratification

16. Obesity example

17. Attribution

18. Indirect cause: Probabilities

19. Indirect cause: Structures

20. Reversal

21. Gestational diabetes example

22. More reversal

23. Double reversal

24. Complex indirect cause

25. Dual causation: Probabilities

26. Dual causation: Structures

27. Paradoxical causation

28. Interventions

29. Causal covariance

30. Unitary rates

31. Functional causation

32. The causation operator

33. Causal modelling

34. Dependence

35. DAG theory

36. Epilogue

37. Further reading

Readership: Readers interested in joining the growing band of thinkers turning their attention to the meaning of cause

The study of causation is experiencing something of a boom. This is in large part a consequence of recent developments in computer technology, such as expert systems, which have focused attention on the difficulties of identifying causes, initially for purely practical ends. It is perhaps not a coincidence that the examples in the book are from the fields of biomedicine and epidemiology, fields which were at the forefront of the development of expert systems in the 1980s. Aickin argues that a better understanding of causal terms is important if biomedical research is to advance properly. Indeed, he goes so far as to raise the spectre, suggested by the cosmologist John Barrow, that the number of scientific questions raised will exceed society's ability to pursue answers, so that less and less useful work will be done, unless causal questions are properly articulated.

In this book, he adopts a constructivist approach, beginning with an undefined notion of sufficient cause and developing ideas from what he believes are natural properties of this concept, using the work of Mackie on causal equivalence and Rothman on minimal sufficient causes. To a large extent, this direction of work is very different from the directed acyclic graph approach which has attracted much interest in recent years, especially with the work of J. Pearl.

This book represents a valuable addition to the literature on causality, and is likely to be of interest to anyone concerned with this topic.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name D.J. Hand

Title MATHEMATICS OF GENOME ANALYSIS. Author J.K. Percus. Publisher Cambridge University Press, 2002, pp. x + 139, £40.00/US$59.95 Cloth; £14.95/US$19.95 Paper. Contents:

1. Decomposing DNA

2. Recomposing DNA

3. Sequence statistics

4. Sequence comparison

5. Spatial structure and dynamics of DNA

Readership: Advanced undergraduates and postgraduates in mathematics and statistics, and their teachers

The human genome project has thrown out many fascinating mathematical and statistical challenges, and this short book surveys many of them. One frustrating aspect of the subject is that, as technology develops, problems cease to be of scientific interest. Chapters 1 and 2, dealing with methods for constructing maps of overlapping cloned fragments of DNA, are now largely of historical interest scientifically, but retain their interest as exemplars of mathematical/statistical problem solving. Chapters 3 and 4, dealing respectively with the statistical analysis of individual sequences and the comparisons of distinct sequences, are of continuing importance. Almost all of the "mathematics" is statistical. The author's style suggests a background in applied mathematics rather than statistics. This results in some use of non-standard notation and terminology, a lack of integration of the problems with standard statistical theory (for example time series analysis), and an implicit error of transposing the conditional at the start of Chapter 4 (a question about whether a sequence is random is answered via the probability of observing the sequence if it is random, without a discussion of the relationship between these different questions). However, these are not fundamental objections, and the author's competence in the mathematics and experience of the field cannot be challenged. I thought the brief forays into biological background were well-judged and appropriate. Since the problems are almost all easy to state and interesting, this book would be an excellent pedagogical resource, providing problems for advanced undergraduate and postgraduate students. Eight assignments are set, and many more are latent in the text.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name D.J. Balding

Title INTRODUCTION TO DISTANCE SAMPLING: estimating abundance of biological populations. Author S.T. Buckland, D.R. Anderson, K.P. Burnham, J.L. Laake, D.L. Borchers and L. Thomas. Publisher Oxford University Press, 2001, pp. xv + 423, £23.50. Contents:

1. Introductory concepts

2. Assumptions and modelling philosophy

3. Statistical theory

4. Line transects

5. Point transects

6. Related methods

7. Study design and field methods

8. Illustrative examples

9. Common and scientific names of plants and animals

Readership: Statisticians and quantitative biologists interested in wildlife monitoring

Distance sampling is a useful and widely used method for estimating the density and the abundance of mobile organisms via their distances from a chosen line or point. Since the publication in 1993 of "Distance Sampling: Estimating Abundance of Biological Populations" by Buckland, Anderson, Burnham and Laake [Short Book Reviews, Vol.14, p. 46], much practical and theoretical research has taken place. This new book together with "Advanced Distance Sampling", in preparation by the same six authors, replaces the earlier volume.
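The core line-transect calculation can be sketched as follows (my illustrative fragment, not code from the book; a half-normal detection function is assumed, which is one of the standard choices the authors discuss):

```python
# Toy sketch of the basic line-transect estimator: density D = n / (2 * L * mu),
# where n is the number of detections, L the total line length surveyed, and mu
# the effective strip half-width implied by a fitted detection function.
# A half-normal detection function g(x) = exp(-x^2 / (2 * sigma^2)) is assumed,
# with sigma estimated by maximum likelihood from the perpendicular distances.
import math

def halfnormal_sigma(distances):
    # Under half-normal detection, the MLE is sigma^2 = sum(x^2) / n.
    return math.sqrt(sum(x * x for x in distances) / len(distances))

def line_transect_density(distances, total_line_length):
    sigma = halfnormal_sigma(distances)
    mu = sigma * math.sqrt(math.pi / 2)  # integral of g(x) over [0, infinity)
    return len(distances) / (2 * total_line_length * mu)
```

Real analyses fit richer detection functions, select among them, and attach variances; this fragment only conveys the shape of the estimator described in the review.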

The structure of the book is basically unaltered and the same clear style of writing, tables, and diagrams is maintained. Chapter 1 is greatly revised. In Chapters 3 and 6 some of the more mathematical material, for example on hazard-rate modelling, has been moved to the planned advanced volume. A new section on the efficient generation of distance data is added to Chapter 3. Chapters 4,5 and 7 are intended primarily for biologists; here and elsewhere, useful exercise sections are added for the benefit of graduate students. Chapter 7 is rewritten in order to include new research on the practicalities of survey design and protocol.

The bibliography has doubled in size. It now contains some 600 items relating to references in the text and in the wider literature, making it a valuable resource.

This book is the definitive work on distance sampling. Together with the projected advanced monograph by the same authors, it will provide essential study material for all involved in the estimation of animal abundance.

Reviewer: Institute University of St Andrews Place St. Andrews, U.K. Name A.W. Kemp

Title CALCULATED BETS. Author S. Skiena. Publisher Cambridge University Press, 2001, pp. xv + 232. Contents:

1. The making of a gambler

2. What is jai alai?

3. Monte Carlo on the tundra

4. The impact of the Internet

5. Is this bum any good?

6. Modelling the payoffs

7. Engineering the system

8. Putting my money where my mouth is

9. How should you bet?

10. Projects to ponder

Readership: Gamblers, teachers of mathematics, statisticians, the 'general reader'

This book is published within the OUTLOOKS series – a collaboration between the Mathematical Association of America and Cambridge University Press. The series' aims are to explore the interplay between mathematics and other disciplines. The idea is to provide 'a provocative and novel view for mathematicians, and for others an advertisement for the mathematical outlook'.

If you are interested in mathematics and in gambling, then this is certainly one for you! This is a book about trying to predict the future, and is in the best tradition of Edward Thorp's 'Beat the Dealer' for blackjack and Thomas Bass's 'The Eudaemonic Pie' for roulette.

The main thrust here is in professional gambling, not only on the outcomes of individual sports contests, but also on the multitude of opportunities within those contests to bet on the individual performances of players. The key is to examine how past performance and current form can be used to gain a profitable edge.

The author's obsession is with the Basque sport of jai alai, and the mathematical analysis of gambling on it is very well presented here. However, there is also a lot about other sports, for example baseball, and, of course, about the general application of these ideas to predicting the outcomes of elections and the performance of financial instruments and of stocks and shares. This is all done at a level accessible to the general reader.

With the generally increasing deregulation of the gambling industry and its imminent proliferation on the Internet, it is evident that there is a widening market for an anecdotal mathematical book like this – certainly, to begin with, a large one. However, winning systems – potential and/or actual – exert an enduring fascination.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name F.H. Berkshire

Title STATISTICAL THINKING FOR MANAGERS. Author J.A. John, D. Whitaker and D.G. Johnson. Publisher New York: Chapman and Hall/CRC Press, 2001, pp. xi + 337, US$44.95/£24.99. Contents:

1. Variation

2. Problem solving

3. Looking at data

4. Modelling data

5. Attribute data

6. Sampling

7. Estimation

8. Regression analysis

9. Multiple regression

10. Forecasting

11. Statistical process control

12. Control charts

13. Improvement strategies

14. Postscript

APPENDIX: Introduction to Excel and Statistical Tables

Readership: Managers and business studies students

The book is designed as an introductory text in business statistics. It is a realistic book that challenges the way students look at business problems and issues. The importance of statistics is paramount, and the text equips the reader with the skills and techniques required to make informed decisions. The authors demonstrate the techniques with a wealth of practical examples drawn from a variety of real-life applications. Each chapter is littered with questions for the student to complete. More detailed exercises are at the end of each chapter. One of the innovative features of this book is the inclusion of a number of 'hands on' exercises and experiments. No solutions as such are given, but supplementary information can be found on the book's website at http://www.crcpress.com. The book is extremely interesting and would be useful to aid managers in their decision-making process.

Reviewer: Institute South Bank University Place London, U.K. Name S. Starkings

Title OBSERVATIONAL STUDIES,2nd edition. Author P.R. Rosenbaum. Publisher New York: Springer-Verlag, 2002, pp. xiv + 375, US$79.95/Euro 93.00. Contents:

1. Observational studies

2. Randomized experiments

3. Overt bias in observational studies

4. Sensitivity to hidden bias

5. Models for treatment effects

6. Known effects

7. Multiple reference groups in case-reference studies

8. Multiple control groups

9. Coherence and focused hypotheses

10. Constructing matched sets and strata

11. Planning an observational study

12. Some strategic issues

Readership: Statisticians

This second edition of a book first published in 1995 is about fifty per cent larger than the original [Short Book Reviews, Vol. 16, p.1]. The flavour is the same. The book provides a thoughtful discussion, at a mathematical and conceptual level, of what can and cannot be learned from observational studies. Emphasis is placed on the role of matching, propensity scores, permutation methodology and various types of sensitivity analysis. The book uses many real and interesting examples, but is emphatically not a "how-to" manual or a compendium of techniques – the term logistic regression does not appear in the index, and censoring is introduced in the context of a discussion of partial order relationships. There is no discussion of software. Some facility with abstract algebra is needed to follow the detailed arguments. The book will be suitable for a seminar course for talented students with previous knowledge of the subject area.

Reviewer: Institute University of Rochester Place Rochester, U.S.A. Name D. Oakes

Title GRAPHICAL MODELS: METHODS FOR DATA ANALYSIS AND MINING. Author C. Borgelt and R. Kruse. Publisher Chichester, U.K.: Wiley, 2002, pp. vii + 358, £55.00. Contents:

1. Introduction

2. Imprecision and uncertainty

3. Decomposition

4. Graphical representation

5. Computing projections

6. Naïve classifiers

7. Learning global structure

8. Learning local structure

9. Inductive causation

10. Applications

Readership: Researchers and practitioners who use graphical models, graduate students of applied statistics, computer science, and engineering

The authors distinguish between uncertainty (modelled by probability) and imprecision (modelled by relational algebra), and introduce possibility theory as a tool for modelling both uncertainty and imprecision. They do not restrict the structures described in the book to Bayesian and Markov models, but also describe relational graphical models and possibilistic graphical models. Possibility theory does not have the consensus of interpretation that the other two calculi have, and the authors describe their particular approach in more detail. This necessity to define terms which do not yet have a (fairly) universal acceptance means that the book is not simply a digested presentation of theory which has already been established elsewhere, but also attempts to present novel perspectives. Some readers will find the uncertainty, ambiguity, or lack of consensus about the concepts unsettling, and I suspect that many statisticians may not find this book as appealing as one of the several others on graphical models which have appeared in recent years despite (or perhaps because of) its greater breadth.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name D.J. Hand

Title SCAN STATISTICS. Author J. Glaz, J. Naus and S. Wallenstein. Publisher New York: Springer-Verlag, 2001, pp. xv + 370. Contents:

PART I: Methods and Applications

1. Introduction

2. Retrospective scanning of events over time

3. Prospective scanning of events over time

4. Success scans in a sequence of events

5. Higher-dimensional scans

6. Scan statistics in DNA and protein sequence analysis

PART II: Scan Distribution Theory and its Developments

7. Approaches used for derivations and approximations

8. Scanning N uniformly distributed points: Exact results

9. Scanning N uniformly distributed points: Bounds

10. Approximations for the conditional case

11. Scanning points in a Poisson process

12. The generalized birthday problem

13. Scan statistics for a sequence of discrete IID variates

14. Power

15. Testing for clustering

16. Two-dimensional scan statistics

17. Numbers of clusters: Ordered spacing

18. Extensions of the scan statistic

Readership: Cluster data practitioners, mathematical statisticians and probabilists

Scan statistics "arise naturally in the scanning of time and place, looking for clusters of events". Clusters might consist of illnesses, accidents, burglaries and so on which raise suspicions of non-random causes. Are the suspicions justified? The 96 pages of Part I are aimed at researchers in applied fields. The other three-quarters of the book is a research monograph for theorists. This is an excellent book for a graduate class or self-study. The writing is extremely clear, the development is sound, and the production is of the highest quality. Of the roughly 600 references given, the authors have contributed about seven per cent, a good sign!
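As a minimal illustration of the idea (my sketch, not the authors' notation): the simplest one-dimensional scan statistic is the maximum number of events captured by any window of fixed width w slid along the time axis.

```python
# Minimal sketch of a one-dimensional scan statistic (illustrative only):
# the maximum number of events falling in any window of width w. Since the
# maximizing window can always be taken to start at an event, it suffices
# to anchor the window at each event in turn.
def scan_statistic(times, w):
    times = sorted(times)
    best = 0
    for i, t in enumerate(times):
        j = i
        while j < len(times) and times[j] <= t + w:
            j += 1
        best = max(best, j - i)  # number of events in [t, t + w]
    return best

events = [0.5, 1.1, 1.2, 1.3, 4.0, 4.1, 9.5]
print(scan_statistic(events, 1.0))  # 4: the window [0.5, 1.5] captures four events
```

Whether an observed maximum of this size is suspicious under a random model is exactly the distributional question Part II of the book addresses.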

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name N.R. Draper

Title MATHEMATICAL STATISTICS: basic ideas and selected topics. Volume I, 2nd edition. Author P.J. Bickel and K.A. Doksum. Publisher Upper Saddle River, New Jersey: Prentice Hall, 2001, pp. xviii + 556. Contents:

1. Statistical models, goals, and performance criteria

2. Methods of estimation

3. Measures of performance

4. Testing and confidence regions

5. Asymptotic approximations

6. Inference in the multiparameter case

APPENDIX A: A Review of Basic Probability Theory

APPENDIX B: Additional Topics in Probability and Analysis

APPENDIX C: Tables

Readership: Beginning graduate students in statistics and science and engineering graduate students whose research will involve statistics intrinsically rather than as an aid in drawing conclusions

Almost twenty-five years have passed since the appearance of the first edition of this classic. Scores of students have studied statistics using it. Many of us no doubt still cherish "the blue book". As the authors rightly claim, statistical theory has undergone some major changes over this period. This is to a large extent due to increased data availability and computational power. As a consequence, the update of the first edition resulted in almost doubling the size. Volume II is promised for 2003. As can be seen from the table of contents, the present Volume I presents the basics every beginning graduate student in statistics has to master. Volume II will bring students up to speed with respect to current research, including such topics as resampling and MCMC. The authors have managed to keep the splendid balance between methodology and applications; all methods introduced are carried through to the point where the student should be able to apply the techniques to real data. Numerous exercises are included; as in the first edition, these exercises form the backbone for any serious study. Needless to say, we are confronted with a really new book, one which every serious statistician (researcher or graduate student alike) is strongly advised to buy.

Reviewer: Institute ETH-Zürich Place Zürich, Switzerland Name P.A.L. Embrechts

Title CHAOS: A STATISTICAL PERSPECTIVE. Author K.-S. Chan and H. Tong. Publisher New York: Springer-Verlag, 2001, pp. xv + 300. Contents:

1. Introduction and case studies

2. Deterministic chaos

3. Chaos and stochastic systems

4. Statistical analysis I

5. Statistical analysis II

6. Nonlinear least-square prediction

7. Miscellaneous topics

APPENDIX A: Deterministic chaos

APPENDIX B: Supplements to Chapter 3

APPENDIX C: Data sets and software

Readership: Statisticians, statistically inclined scientists

The main theme of this book is the interface of deterministic chaos and statistics, in both the mathematical world and real data analysis. This book was written with the intention of taking readers to the forefront of current research on statistical aspects of chaos. Among statisticians there has lately been a growing interest in stochastic dynamical system models for time series whose irregular behaviours had traditionally been modelled by random processes. Chapters 2 and 3 provide a reasonably self-contained and informal account of deterministic chaos and the relevant dynamical system theory. Chapters 4 through 6 emphasize statistical analysis of chaos, its initial-value sensitivity and other characteristic properties. Numerous examples and several case studies illustrate the practical scope of the presented techniques and methods. The authors have done an excellent job, providing an overview of known results with detailed references to the literature, as well as pointing out some open problems. In general, the book serves to "encourage more statisticians to join in with the fun of chaos".

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name H. Zhang

Title STATISTICAL METHODS FOR THE ANALYSIS OF REPEATED MEASUREMENTS. Author C.S. Davis. Publisher New York: Springer-Verlag, 2002, pp. xxiv + 415, US$79.95/Euro89.95. Contents:

1. Introduction

2. Univariate methods

3. Normal theory methods: Unstructured multivariate approach

4. Normal theory methods: Multivariate analysis of variance

5. Normal theory methods: Repeated measures ANOVA

6. Normal theory methods: Linear mixed models

7. Weighted least squares analysis of repeated categorical outcomes

8. Randomization model methods for one-sample repeated measurements

9. Methods based on extensions of generalized linear models

10. Nonparametric methods

Readership: Statisticians, graduate students in statistics, research scientists

In Chapter 1, the author lists twenty-four books focussing on methodology for repeated measures, all but five written since 1990. So what distinguishes this one from the rest? The claims in the preface are that it is more comprehensive than many, is targeted at a lower mathematical level, is focussed more on applications, is enriched by extensive use of real sets of data, and contains numerous homework exercises.

Even a cursory reading shows these claims to be largely justified, although the less mathematical reader may find parts of some chapters (for example Chapters 4, 7 and 9) rather heavy going. However, each major topic is introduced logically; background theory is clearly elucidated, and at least one example is carefully worked in detail. The use of eighty real sets of data, given in full, is a most attractive feature. Attention is concentrated on those techniques that are most readily available in software. Descriptions are generally clear, and few misprints were noted. This should prove to be a very useful text for teacher, student and practitioner alike.

Reviewer: Institute University of Exeter Place Exeter, U.K. Name W.J. Krzanowski

Title TOPICS IN OPTIMAL DESIGN. Author E.P. Liski, N.P. Mandal, K.R. Shah and B.K. Sinha. Publisher New York: Springer-Verlag, 2002, pp. xi + 164, US$49.95/ Euro 49.95. Contents:

1. Scope of the monograph

2. Optimal regression designs in symmetric domains

3. Optimal regression designs in asymmetric domains

4. Optimal designs for covariates' models with structured intercept parameter

5. Stochastic distance optimality

6. Designs in the presence of trends

7. Additional selected topics

Readership: Statisticians, graduate students interested in optimal design

This monograph presents a short overview of the classical theory of optimal regression designs and covers recent developments in certain areas of the authors' research, mostly within the continuous design paradigm. The authors make extensive use of Loewner order domination and the de la Garza phenomenon. The application of these two concepts is first illustrated for polynomial regression in symmetric design regions, and then generalized to polynomial models in asymmetric regions and to regression with non-homogeneous variance. Among other topics are designs for random coefficient regression models, models with trend effects, and new developments for the distance optimality criterion.

Reviewer: Institute GlaxoSmithKline Place Collegeville, U.S.A. Name S. Leonov

Title CONTEMPORARY STATISTICAL MODELS FOR THE PLANT AND SOIL SCIENCES. Author O. Schabenberger and F.J. Pierce. Publisher Boca Raton, Florida: CRC Press, 2002, pp. xxii + 738 + CD Rom, US$99.95/£66.99. Contents:

1. Statistical models

2. Data structures

3. Linear algebra tools

4. The classical linear model: Least squares and alternatives

5. Nonlinear models

6. Generalized linear models

7. Linear mixed models for clustered data

8. Nonlinear models for clustered data

9. Statistical models for spatial data

Readership: Model-fitters, especially in the plant and soil sciences

Schabenberger is a mathematical statistician; Pierce directs an agricultural centre. Together they have compiled a magnificent compendium of methods for fitting models, with emphasis on one applied area. The writing is first class, and the techniques also have many wider applications. So far, so good. In the preface one reads "This text is an attempt to squeeze between two covers many statistical methods…. Any one of …chapters (4 to 9) could have been expanded to the size of the entire text…" True; and so the book is an excellent reference. Further on, one reads "This text is both a reference and textbook…" and the suggestion that the book can be used as a text in both statistics and life sciences courses. True; however, "We did not include exercises…" The stated reason for this is that doing so would limit the range of courses that the book could be used for! Does having no exercises at all paradoxically make the book more versatile, then? For calculations, the authors rely on SAS supplemented by the S+SpatialStats module, using the CD-Rom in the book.

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name N.R. Draper

Title NONPARAMETRIC ANALYSIS OF LONGITUDINAL DATA IN FACTORIAL EXPERIMENTS. Author E. Brunner, S. Domhof, and F. Langer. Publisher Chichester, U.K.: Wiley, 2002, pp. xvii + 261, £70.50. Contents:

1. Introduction

2. Models

3. Effects and hypotheses

4. Estimators for relative effects

5. Test statistics

6. Software

7. Experiments for one group of subjects

8. Experiments for several groups of subjects

9. Dependent replications

10. Multifactorial experiments

11. Numerous time points

Readership: Statisticians or statistically literate applications experts with repeated measures problems

Longitudinal data have been the focus of a striking efflorescence of research activity over the last couple of decades. So much so, in fact, that the researcher concerned with applications may not have the time to understand all of the methods and choose the one best suited to the problem. Moreover, there may also be doubts about the appropriateness of the models in the context of the data. This book attempts to ease these issues by adopting nonparametric procedures, in particular in the context of factorial designs.

The book includes real examples from a wide range of application domains. It would be interesting to see comparative analyses of these examples, using standard approaches. Macros for the various analyses are described, and an internet address is given from which they may be downloaded. There are exercises at the end of each chapter, though most of these appear to be of the 'apply the methods described here to the data described here' form.

Perhaps it is in the nature of nonparametric methods, but my overall impression was of a field still under development. Indeed, the authors' comments in Chapter 6 on the software available (or rather, not available) for such methods in major packages supports this.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name D.J. Hand

Title COMBINATORIAL DATA ANALYSIS: OPTIMIZATION BY DYNAMIC PROGRAMMING. Author L. Hubert, P. Arabie and J. Meulman. Publisher Philadelphia: SIAM, 2001, pp. xi + 163. Contents:

1. Introduction

2. General dynamic programming paradigm

3. Cluster analysis

4. Object sequencing and seriation

5. Heuristic application of the GDPP

6. Extensions and generalizations

Readership: Combinatorialists

The core of this highly specialized text is given in Chapters 3 and 4. Given a proximity measure between n objects, Chapter 3 discusses the choices of heterogeneity measures and optimization criteria when partitioning the objects. Chapter 4 discusses merit measures for optimally sequencing n objects along a continuum. There is no mention of any practical examples of partitioning and sequencing objects. The authors propose computing the optimal solution to these problems by dynamic programming and illustrate the algorithm with the linear assignment problem. To suggest that the assignment problem be solved by an algorithm that ignores its mathematical structure is irresponsible. There are excellent algorithms for the assignment problem that will solve large problems very quickly. The authors should have demonstrated the dynamic programming algorithm on a small clustering problem.

Reviewer: Institute London School of Economics Place London, U.K. Name S. Powell

Title MULTIVARIATE STATISTICAL PROCESS CONTROL WITH INDUSTRIAL APPLICATIONS. Author R.L. Mason and J.C. Young. Publisher Philadelphia: Society for Industrial and Applied Mathematics, 2002, pp. xiii + 263 + CD. Contents:

1. Introduction to the T² statistic

2. Basic concepts about the T² statistic

3. Checking assumptions for using a T² statistic

4. Construction of historical data set

5. Charting the T² in phase I

6. Charting the T² in phase II

7. Interpretation of T² signals for two variables

8. Interpretation of T² signals for the general case

9. Improving the sensitivity of the T² statistic

10. Autocorrelation in T² control charts

11. The T² statistic and batch processes

Readership: Senior students and practitioners with a statistical background in process industries

The book is aimed at senior students and practitioners with a statistical background in process industries. The authors provide a step-by-step approach to developing and applying a control chart based on Hotelling's T² statistic to processes with a large number of variable characteristics measured over time. There are many examples with excellent context woven into the theory. A running example, written as a short story, is used to demonstrate the value of the approach. An accompanying CD provides a ninety-day demonstration version of the software used in the text's examples.
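The statistic underlying such a chart can be sketched as follows (an illustrative fragment, not the text's software; the variable names are mine):

```python
# Sketch of Hotelling's T^2 for one p-variate observation x against a
# historical mean vector and covariance matrix (illustrative only):
# T^2 = (x - mean)' S^{-1} (x - mean). Observations whose T^2 exceeds a
# control limit signal a possible shift in the multivariate process.
import numpy as np

def t_squared(x, mean, cov):
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(0)
hist = rng.normal(size=(200, 3))                  # phase I (historical) data
mean, cov = hist.mean(axis=0), np.cov(hist, rowvar=False)
new_obs = np.array([3.0, 0.0, 0.0])
print(t_squared(new_obs, mean, cov))              # a large value flags this point
```

In practice the control limits differ between phase I (charting the historical data) and phase II (monitoring new observations), which is precisely the distinction the book's Chapters 5 and 6 develop.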

There is little comparison to other monitoring methods such as those based on principal components or PLS. I expect that many students with no experience of process industries will find the examples difficult and that practitioners, unfamiliar with multivariate analysis, will struggle with the theory. The data for the examples are not available on the CD. There are no exercises.

Reviewer: Institute University of Waterloo Place Waterloo, Canada Name R.J. MacKay

Title REGRESSION MODELLING STRATEGIES. With Applications to Linear Models, Logistic Regression, and Survival Analysis. Author F.E. Harrell. Publisher New York: Springer-Verlag, 2001, pp. xxiii + 568, US$79.95/DM179.00. Contents:

1. Introduction

2. General aspects of fitting regression models

3. Missing data

4. Multivariable modelling strategies

5. Re-sampling, validating, describing, and simplifying the model

6. S-plus software

7. Case study in least squares fitting and interpretations of a linear model

8. Case study in imputation and data reduction

9. Overview of maximum likelihood estimation

10. Binary logistic regression

11. Logistic model case study 1: Predicting cause of death

12. Logistic model case study 2: Survival of Titanic passengers

13. Ordinal logistic regression

14. Case study in ordinal regression, data reduction, and penalization

15. Models using nonparametric transformations of X and Y

16. Introduction to survival analysis

17. Parametric survival models

18. Case study in parametric survival modelling and model approximation

19. Cox proportional hazards regression model

20. Case study in Cox regression

Readership: Model-builders of many kinds

This is a book that leaves one breathless. It demands a lot, but gives plenty in return. Prospective readers need a statistics course and should be "well versed in ordinary multiple regression and intermediate algebra". They also need to be thoroughly familiar with S-PLUS or R; instructions for learning about these and about a library of special S-PLUS functions are given in the preface and appendix. The book has many sets of programming instructions and printouts, all delivered in a staccato fashion. Sets of data are large (for example, a random sample of 10% of 10,000 hospitalized adults, page 51; 1309 Titanic passengers, page 300). Many different types of models and methods are discussed. There are many printouts and diagrams. Computer-oriented readers will like this book immediately. Others may grow to like it. It is an essential reference for the library.

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name N.R. Draper

Title A CONTINGENCY TABLE APPROACH TO NONPARAMETRIC TESTING. Author J.C.W. Rayner and D.J. Best. Publisher Boca Raton: Chapman and Hall/CRC, 2001, pp. viii + 256, US$89.95/£49.99. Contents:

1. Introduction

2. Modelling ties

3. Tests on one-way layout data: Extensions to the median and Kruskal-Wallis tests

4. Tests based on a product multinomial model: Yates' test and its extensions

5. Further tests based on a product multinomial model: Order in the sign test and ordinal categorical data with a factorial response

6. Tests on complete randomized blocks: Extensions to the Friedman and Cochran tests

7. Further tests on randomized blocks: Extensions to Durbin's test

8. Extensions to a nonparametric correlation test: Spearman's test

9. One and S-sample smooth tests of goodness of fit

10. Conclusion

Readership: Experimental scientists, statisticians

This book shows how many standard nonparametric tests, such as the Wilcoxon, Kruskal-Wallis and Spearman tests, can be obtained from appropriate partitions of a Pearson X² test statistic. This unification is accomplished by presenting the data in contingency tables. The approach is given in Sprent (1993, Section 9.3), for instance, but it is generalized here. The treatment of ties and the construction of almost exact Monte Carlo p-values are both readily dealt with. The book is up-to-date and in many cases draws on the research of the authors and their co-workers, as in, for example, Chapters 8 and 9. The book considers historical sets of data, but the work is also very nicely illustrated by many examples from the area of sensory evaluation, drawn from the extensive consulting experience of Best. Various new results are also presented, as in Section 4.6, where the McCullagh (1980) method for ordered contingency tables is found to be outperformed by alternatives. Overall, the material is presented without excessive theory. This is a useful and informative text, which deserves to be widely read.

McCullagh, P. (1980), Regression models for ordinal data. Journal of the Royal Statistical Society, Series B, 42, 109-142.

Sprent, P. (1993), Applied Nonparametric Statistical Methods, 2nd edition. London: Chapman and Hall. [Short Book Reviews, Vol. 13, p.42].

Reviewer: Institute University of Kent Place Canterbury, U.K. Name B.J.T. Morgan

Title SMOOTHING SPLINE ANOVA MODELS. Author C. Gu. Publisher New York: Springer-Verlag, 2002, pp. xiii + 289, US$79.95 Contents:

1. Introduction

2. Model construction

3. Regression with Gaussian-type responses

4. More splines

5. Regression with exponential families

6. Probability density estimation

7. Hazard rate estimation

8. Asymptotic convergence

Readership: Graduate students with a solid background in mathematics and research statisticians

This is currently the only book written exclusively on the method of smoothing spline ANOVA, a newly-developed and broadly-applicable approach to nonparametric function estimation problems. The author is one of the main contributors to the development of this field, and the book is largely a summary of his own work and that of his co-researchers. The three main topics discussed are regression (in a generalized sense), density estimation and hazard rate estimation. Understandably, the focus is on fitting functions rather than on inference and prediction. The exposition is for the most part heavily mathematical. There are a few illustrative examples and some discussion of using an R package developed by the author to carry out part of the computation in the book.
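
For orientation, the estimate in the Gaussian regression setting solves a penalized least-squares problem; the following is the standard formulation from the smoothing spline literature (notation assumed here, not quoted from the review):

```latex
% Penalized least squares for smoothing spline estimation: the roughness
% penalty J decomposes along the terms of a functional ANOVA expansion.
\min_{\eta}\; \frac{1}{n}\sum_{i=1}^{n}\bigl(Y_i - \eta(x_i)\bigr)^2 + \lambda\, J(\eta),
\qquad
\eta = \eta_{\emptyset} + \sum_{j}\eta_{j}(x_{j}) + \sum_{j<k}\eta_{jk}(x_{j},x_{k}) + \cdots
```

The smoothing parameter λ trades goodness of fit against roughness, and setting selected interaction terms to zero yields the additive and low-order models the method is known for.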

Reviewer: Institute Pennsylvania State University Place University Park, U.S.A. Name Z. Luo

Title ANALYZING MEDICAL DATA USING S-PLUS. Author B. Everitt and S. Rabe-Hesketh. Publisher New York: Springer-Verlag, 2001, pp. xii + 485, US$79.95/DM171.00. Contents:

Prologue

1. An introduction to S-PLUS

2. Describing data

3. Basic inference

4. Scatterplots, simple regression and smoothing

5. Analysis of variance and covariance

6. The analysis of longitudinal data

7. More graphics

8. Multiple linear regression

9. Generalized linear models I: Logistic regression

10. Generalized linear models II: Poisson regression

11. Linear mixed models I

12. Linear mixed models II

13. Generalized additive models

14. Nonlinear models

15. Regression trees

16. Survival analysis I

17. Survival analysis II: Cox's regression

18. Principal components and factor analysis

19. Cluster analysis

20. Discriminant function analysis

Readership: Medical researchers who analyze statistical data, medical statisticians, students of statistics and medical statistics

This book presents a survey of modern methods used in, but not confined to, medical statistics, each accompanied by a brief summary of the mathematical components, instruction in the basics of the S-PLUS computer language, up-to-date references, examples using sets of data from medical research, and S-PLUS code for the examples. The sets of data and S-PLUS code are available for downloading from a website referenced in the book (some but not all of the S-PLUS code runs in R, the freeware counterpart of S). Given the number of methods covered, it is not surprising that discussion of interpreting output, limitations of methods and comparison of alternative analyses is largely absent; effective use, therefore, requires some statistical background. The book will be a handy reference on my consulting shelf.

Reviewer: Institute Queen's University Place Kingston, Canada Name J.T. Smith

Title NONLINEAR MODELS IN MEDICAL STATISTICS. Author J.K. Lindsey. Publisher Oxford University Press, 2001, pp. xii + 280, £35.00. Contents:

1. Basic concepts

2. Practical aspects

3. Families of nonlinear regression functions

4. Epidemiology

5. Clinical trials

6. Quality of life

7. Pharmacokinetics

8. Pharmacodynamics

9. Assays and formulations

10. Molecular genetics

Readership: Graduate students in biometry, biostatistics, medicine and statistics

This book provides a practical text on nonlinear modelling with the emphasis on applications in medicine. In contrast to most publications on the subject, the emphasis is on models involving non-normal response distributions. Knowledge of advanced mathematics and statistics is assumed but a comprehensive list of reference texts is cited in the bibliography. Examples in the text are analyzed using computer code written in the freely available software R. The larger sets of data presented and analyzed in the examples are not reproduced in the text but are available through the World Wide Web. At the end of each chapter there is a further reading list together with a limited number of exercises; these might be incorporated into a short taught course on the subject. Tables of data for the exercises are provided in one of the three appendices. The text is well-written and the mix of applications should appeal to a wide readership.

Reviewer: Institute CEFAS Lowestoft Laboratory Place Lowestoft, U.K. Name C.M. O'Brien

Title EMPIRICAL LIKELIHOOD. Author A.B. Owen. Publisher Boca Raton, Florida: Chapman and Hall/CRC, 2001, pp. xv + 304, £39.99/US$74.95. Contents:

1. Introduction

2. Empirical likelihood

3. EL for random vectors

4. Regression and modeling

5. Empirical likelihood and smoothing

6. Biased and incomplete samples

7. Bands for distributions

8. Dependent data

9. Hybrids and connections

10. Challenges for EL

11. Some proofs

12. Algorithms

13. Higher order asymptotics

Readership: Statisticians, graduate students in statistics, practitioners with a reasonable degree of statistical sophistication

Empirical likelihood is a nonparametric technique that captures many of the advantages of conventional parametric likelihood methods without the disadvantage of their distributional assumptions. The basic idea is to use a likelihood based on a multinomial distribution with support on the observed sample points. Although there were precursors in sample survey theory and survival analysis, for example, the basic approach was developed by the author himself in a series of fundamental papers in the late 1980s and early 1990s. Since then many other people have extended the ideas and theory. This is the first book devoted to the topic and gives an excellent overview of the diversity of applications that can be treated and the potential of the empirical likelihood approach in applied statistics.
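
The construction sketched above can be written compactly; this is the standard formulation from the empirical likelihood literature (notation assumed here, not quoted from the book). For observations X₁,…,Xₙ, multinomial weights pᵢ on the sample points define the profile empirical likelihood ratio for a mean μ:

```latex
% Profile empirical likelihood ratio for the mean, maximizing the
% multinomial likelihood over weights supported on the sample points.
R(\mu) = \max\Bigl\{\, \prod_{i=1}^{n} n p_i \;:\;
  \sum_{i=1}^{n} p_i X_i = \mu,\ p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1 \Bigr\}
```

At the true mean, −2 log R(μ) has a limiting chi-squared distribution, giving a nonparametric analogue of Wilks' theorem and hence likelihood-ratio-style confidence regions without distributional assumptions.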

The emphasis is on applications and data analysis rather than theory, with most of the technical material collected in the final chapters, although the early chapters still require a reasonable acquaintance with statistical theory. The book starts with an example to illustrate the power of the empirical likelihood approach in a situation that would be difficult to handle with conventional methods. It then moves on to the core material in Chapters 2, 3 and 4, which cover inferences for means (both univariate and multivariate), estimating equations, regression, analysis of variance and generalized linear models. As the author says in the preface, these four chapters, along with some supplementary material from later chapters, could form the basis of a good graduate course. A variety of more specialized topics are treated in Chapters 5 to 8. Connections with other methods, especially the bootstrap, are discussed in Chapter 9 and Chapter 10 contains a sketch of some situations that pose difficulties for the empirical likelihood approach and of areas for future development. Proofs, higher-order asymptotic properties and some computational questions are discussed in the final three chapters.

Empirical likelihood is a powerful, but less familiar, alternative to the bootstrap in many situations. Unlike the bootstrap, however, there are few good general descriptions of empirical likelihood in existence and no widely available software that would enable practitioners to implement the method easily. This timely and well-written book remedies the first of these problems and, it is hoped, will soon lead to more accessible software being produced.

Reviewer: Institute University of Auckland Place Auckland, New Zealand Name A.J. Scott

Title PROBABILITY AND RANDOM PROCESSES, 3rd edition. Author G. Grimmett and D. Stirzaker. Publisher Oxford University Press, 2001, pp. xii + 596, £60.00 Cloth; £29.95 Paper. Contents:

1. Events and their probabilities

2. Random variables and their distributions

3. Discrete random variables

4. Continuous random variables

5. Generating functions and their applications

6. Markov chains

7. Convergence of random variables

8. Random processes

9. Stationary processes

10. Renewals

11. Queues

12. Martingales

13. Diffusion processes

Readership: Advanced undergraduates, postgraduates and researchers in applied probability and statistics

This is a new edition of a textbook that, since its first edition, has been my first port of call for clarification and recall of definitions and proofs in probability and stochastic processes [Short Book Reviews, Vol. 13, p.4]. This new edition includes new sections on sampling, MCMC, coupling, geometrical probability, spatial Poisson processes, renewal-reward, queueing networks, stochastic calculus, and option pricing in the Black-Scholes model, as well as more than four hundred new exercises. We are given no more than brief introductions to these new topics, but they will prove valuable to readers already in possession of the previous edition, showing as they do how these advanced topics may be presented at an intermediate level. I recommend this book whole-heartedly.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name R. Coleman

Title ONE THOUSAND EXERCISES IN PROBABILITY. Author G. Grimmett and D. Stirzaker. Publisher Oxford University Press, 2001, pp. x + 438, £27.50 Paper. Contents:

1. Events and their probabilities

2. Random variables and their distributions

3. Discrete random variables

4. Continuous random variables

5. Generating functions and their applications

6. Markov chains

7. Convergence of random variables

8. Random processes

9. Stationary processes

10. Renewals

11. Queues

12. Martingales

13. Diffusion processes

Readership: Advanced undergraduates, postgraduates and researchers in applied probability and statistics

This is an enlargement of the authors' earlier book Probability and Random Processes: Problems and Solutions, which gave all the exercises and solutions for the second edition of their Probability and Random Processes; this volume does the same for the third edition. The first one hundred and thirty-three pages give the exercises and problems, and the remaining three hundred pages their solutions. Clearly there is no room for more than directions and answers, so we need fear no fallout from the book getting into the hands of students. Indeed, converting the "easily seen that" into model answers will offer sufficient challenge. The authors themselves acknowledge having found some of their problems rather tricky.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name R. Coleman

Title PRINCIPLES OF DATA MINING. Author D. Hand, H. Mannila and P. Smyth. Publisher Cambridge Massachusetts: MIT Press, 2001, pp. xxxii + 546, US$50.00/£34.95. Contents:

1. Introduction

2. Measurement and data

3. Visualizing and exploring data

4. Data analysis and uncertainty

5. A systematic overview of data mining algorithms

6. Models and patterns

7. Score functions for data mining algorithms

8. Search and optimization methods

9. Descriptive modeling

10. Predictive modeling for classification

11. Predictive modeling for regression

12. Data organization and databases

13. Finding patterns and rules

14. Retrieval by content

APPENDIX: Random Variables

Readership: Applied statisticians, computer scientists, upper-year undergraduate students, first or second-year graduate students

Data mining is the science of extracting useful information from large sets of data or databases. It encompasses several disciplines: statistics, machine learning, pattern recognition, databases and artificial intelligence. This very good book is one of the first interdisciplinary approaches to the area, with the authors' areas of expertise spanning statistics, databases and computer science. The book takes a modular approach, focusing separately on model structure, score functions, optimization and data management. After considering these areas, specific data mining tasks are discussed. These include clustering, regression and classification, and the less (statistically) familiar topics of finding patterns and retrieval by content. From the statistical perspective, much of the material is familiar, amounting to exploratory and adaptive modeling techniques for very large sets of data. One of the exciting things about data mining (and this book) is its interdisciplinary nature.

Reviewer: Institute University of Waterloo Place Waterloo, Canada Name H.A. Chipman

Title PERMUTATION METHODS: A DISTANCE FUNCTION APPROACH. Author P.W. Mielke, Jr. and K.J. Berry. Publisher New York: Springer-Verlag, 2001, pp. xv + 352, US$79.95/DM160.00. Contents:

1. Introduction

2. Description of MRPP

3. Further MRPP applications

4. Description of MRBP

5. Regression analysis, prediction, and agreement

6. Goodness-of-fit tests

7. Contingency tables

8. Multisample homogeneity tests

Readership: Final year undergraduate and post-graduate students in statistics

Permutation tests were initiated more than sixty years ago, but only recently have many commercially available software packages provided for their routine use. These tests relax the parametric structure requirement of a test statistic and are less affected by an extreme measurement on a single object. This book presents univariate and multivariate permutation methods that provide both exact probability values and approximate probability values based on re-sampling and moment techniques. Multiresponse permutation procedures (MRPP) and multivariate randomized block permutation procedures (MRBP) are presented in detail, together with a number of novel applications from the fields of pattern detection and atmospheric science. The book also contains more familiar applications from behavioural research and psychometrics. Collectively, the authors have an impressive list of more than seventy published papers cited in their text. A listing of the FORTRAN77 programs used in the book is presented in the Appendix, and the programs are available on the World Wide Web. The inclusion of an author index would have greatly added to the utility of the text as a course companion.

Reviewer: Institute CEFAS Lowestoft Laboratory Place Lowestoft, U.K. Name C.M.O'Brien

Title A GUIDE TO FIRST-PASSAGE PROCESSES. Author S. Redner. Publisher Cambridge University Press, 2001, pp. ix + 312, £55.00/US$80.00. Contents:

1. First-passage fundamentals

2. First-passage in an interval

3. Semi-infinite system

4. Illustrations of first-passage in simple geometries

5. Fractal and nonfractal networks

6. Systems with spherical symmetry

7. Wedge domains

8. Applications to simple reactions

Readership: Graduate students and researchers in physics, chemistry, theoretical biology, electrical engineering, chemical engineering, operations research and finance

The author begins the book with the fundamental background on first-passage times, including an outline of the relationship between the occupation and first-passage-time probabilities of a random walk. Connections to electrostatics and to current flows in resistor networks are also discussed. The theory is developed for finite and semi-infinite intervals, fractal and non-fractal networks, spherical geometries and wedges. Several applications to physics are presented. The subjective choice of content makes this book easy to read for specialists in the field. It is no doubt valuable and accessible for graduate students in applied mathematics, physics, chemistry, biology or finance.
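
The relationship between occupation and first-passage probabilities alluded to above is the classical renewal identity for random walks; the following is the standard statement (notation assumed here, not quoted from the book). With Pₙ the probability that the walker is at its starting site at step n and Fₙ the probability of first return at step n:

```latex
% Renewal relation between the generating functions of the occupation
% probabilities P_n and the first-return probabilities F_n.
P(z) = \sum_{n \ge 0} P_n z^n, \qquad F(z) = \sum_{n \ge 1} F_n z^n,
\qquad
P(z) = \frac{1}{1 - F(z)}
\;\Longleftrightarrow\;
F(z) = 1 - \frac{1}{P(z)}
```

In particular, the walk is recurrent, F(1) = 1, exactly when the occupation series P(1) diverges, which is how recurrence and transience are usually read off from this relation.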

Reviewer: Institute ETH-Zürich Place Zürich, Switzerland Name L. Alili

Title A USER'S GUIDE TO MEASURE THEORETIC PROBABILITY. Author D. Pollard. Publisher Cambridge University Press, pp. xxiii + 351, £60.00/US$90.00 Cloth; £20.90/US$30.00 Paper. Contents:

1. Motivation

2. A modicum of measure theory

3. Densities and derivatives

4. Product spaces and independence

5. Conditioning

6. Martingale et al.

7. Convergence in distribution

8. Fourier transforms

9. Brownian motion

10. Representations and couplings

11. Exponential tails and the law of the iterated logarithm

12. Multivariate normal distributions

Readership: Probabilists, mathematicians, statisticians, teachers, students

This is a most remarkable book in that it succeeds in explaining the toughest probabilistic concepts without burying the reader under a pile of unwanted measure theory. Even a quick look at the section titles shows that all the important concepts from current-day probability are there. But there is more: ideas and techniques from functional analysis are sprinkled throughout the manuscript, hundreds of exercises of varying degrees of difficulty are included, and each chapter ends with miscellaneous notes that guide the reader to other aspects not covered in the book but hinted at in the extensive bibliography. A refreshing book that can be strongly recommended to students as well as to teachers who would like to learn rigorous probability theory without being forced to become professional probabilists first.

Reviewer: Institute Katholieke Universiteit Leuven Place Heverlee, Belgium Name J.L. Teugels

Title LIMIT DISTRIBUTIONS FOR SUMS OF INDEPENDENT RANDOM VECTORS: heavy tails in theory and practice. Author M. Meerschaert and H.-P. Scheffler. Publisher New York: Wiley, 2001, pp. xiii + 484, £67.95. Contents:

PART I: Introduction

1. Random vectors

2. Linear operators

3. Infinitely divisible distributions and triangular arrays

PART II: Multivariate Regular Variation

4. Regular variation for linear operators

5. Regular variation for real-valued functions

6. Regular variation for Borel measures

PART III: Multivariate Limit Theorems

7. The limit distributions

8. Central limit theorems

9. Related limit theorems

PART IV: Applications

10. Applications to statistics

11. Self-similar stochastic processes

Readership: Researchers in probability and statistics

Central limit theory is fundamental to many results and methods in statistics and applied probability. This includes the classical central limit theorem with normal limit distributions, but also, because of the rapidly increasing interest in the modelling of heavy-tailed phenomena, limit theory for the convergence of appropriately normalized sums of independent random vectors to non-normal limits. This book treats the general central limit theory in detail through extensive use of the concept of multivariate regular variation, in particular regular variation for linear operators. The presentation is self-contained in the sense that all the tools required for developing the theory, such as convergence of measures, infinite divisibility and multivariate regular variation, are presented in detail. I would stress that the approach to multivariate regular variation presented is no doubt useful for the limit theorems treated, but may be less convenient for important applications in other fields such as extreme value theory. Researchers in probability and statistics will find this a carefully written and accessible reference for limit results for sums of independent random vectors, a topic of considerable importance in a wide variety of applied problems, as seen from the applications addressed in the last chapters of the book.

Reviewer: Institute ETH-Zürich Place Zürich, Switzerland Name F. Lindskog

Title WEAKLY DEPENDENT STOCHASTIC SEQUENCES AND THEIR APPLICATIONS. Vol. XII: Random Sums, Extremes and Sequential Analysis. Author K.I. Yoshihara. Publisher Tokyo: Sanseido, 2001, pp. vii + 393. Contents:

1. Foundations

2. Random sums

3. Large deviation principles

4. Excursion random measures

5. Exceedance and first passage lines

6. Extremal properties of some statistics

7. GARCH processes

8. Sequential analysis

9. Change-point problems

Readership: Theoretical researchers in statistics

This is volume XII in a series of books on weakly dependent stochastic sequences, all written by the same author over the past ten years. Like the previous volumes in the collection, this one is written in a very mathematical style, using the strict format of definitions, conditions, lemmas, theorems and proofs. The theorems rely on the most recent journal literature in the field. As the table of contents shows, the author deals with miscellaneous topics. Most interesting are the extreme value theory for stationary sequences under suitable mixing conditions, the extremal behaviour of autoregressive processes, and the asymptotics for change-point estimators under weak dependence. The book will be useful for theoretical researchers in these topics.

Reviewer: Institute Limburgs Universitair Centrum Place Diepenbeek, Belgium Name N.D.C. Veraverbeke

Title DISCRETE-EVENT SIMULATION: MODELLING, PROGRAMMING AND ANALYSIS. Author G.S. Fishman. Publisher New York: Springer-Verlag, 2001, pp. xix + 537, US$69.95. Contents:

1. Simulation in perspective

2. Modelling concepts

3. Data collection and averages

4. Programming and execution

5. Search, space, and time

6. Output analysis

7. Making sense of output and increasing efficiency

8. Sampling from probability distributions

9. Pseudo-random number generation

10. Preparing the input

Readership: Researchers, graduate students, and advanced undergraduate students with an interest in the simulation of stochastic systems

This is an excellent and well-written text on discrete-event simulation with a focus on applications in Operations Research. There is substantial attention to programming (largely in SIMSCRIPT II.5), output analysis, pseudo-random number generation and modelling, and these sections are quite thorough. Methods are provided for generating pseudo-random numbers (including combining such streams) and for generating random numbers from most standard statistical distributions. I might quibble with the author over the choice of topics: for example, little or no attention is paid to variance-reduction techniques, validation and verification of models, regenerative simulation, sensitivity analysis or Infinitesimal Perturbation Analysis. The author has clearly sacrificed some potential coverage for a text that is accessible, coherent and readable on those topics of major interest to "simulationists" in the management and engineering sciences.

Reviewer: Institute University of Waterloo Place Waterloo, Canada Name D.L. McLeish

Title RISK MANAGEMENT: VALUE AT RISK AND BEYOND. Author M.A.H. Dempster (Ed.). Publisher Cambridge University Press, 2002, pp. xiv + 274, £45.00/US$65.00. Contents:

1. Quantifying the risks of trading (Picoult)

2. Value at risk analysis of a leveraged swap (Srivastava)

3. Stress testing in a value at risk framework (Kupiec)

4. Dynamic portfolio replication using stochastic programming (Dempster and Thompson)

5. Credit and interest rate risk (Kiesel, Perraudin and Taylor)

6. Coherent measures of risk (Artzner, Delbaen, Eber and Heath)

7. Correlation and dependence in risk management: Properties and pitfalls (Embrechts, McNeil and Straumann)

8. Measuring risk with extreme value theory (R.L. Smith)

9. Extremes in operational risk management (Medova and Kyriacou)

Readership: All those interested in financial mathematics

"Risk Management" is the term used by financial institutions to describe procedures for monitoring and controlling the exposure of their positions to movements in market prices and to other factors such as counterparty default and 'rogue trading'. The "value at risk" is generally defined as the lower fifth percentile of the return distribution. This multi-author work gives a good summary of current techniques and concerns. Not surprisingly, extreme value theory (Chapter 8) and various forms of statistical dependence (Chapter 7) play a big role. Chapter 6 is a reprint of a classic paper setting out an axiomatic framework for risk assessment, while Chapter 2 describes in excruciating detail how Procter and Gamble threw away $100 million. The book is thought-provoking and informative.
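
In the notation usual in the risk-management literature (a textbook definition, not quoted from this volume), the value at risk at confidence level α of a position with loss L over the holding period is the corresponding quantile of the loss distribution:

```latex
% Value at risk: the smallest loss threshold that is exceeded
% with probability at most 1 - alpha.
\mathrm{VaR}_{\alpha}(L) = \inf\{\, x \in \mathbb{R} \,:\, P(L > x) \le 1 - \alpha \,\}
```

The review's "bottom 5% of the return distribution" corresponds to α = 0.95 when losses are measured as negative returns; the axiomatic framework reprinted in Chapter 6 examines which such risk measures are coherent.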

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name M.H.A. Davis

Title THE LAPLACE DISTRIBUTION AND GENERALIZATIONS. A Revisit with Applications to Communications, Economics, Engineering, and Finance. Author S. Kotz, T.J. Kozubowski and K. Podgórski. Publisher Boston: Birkhäuser, 2001, pp. xviii + 349. Contents:

PART I: Univariate Distributions

1. Historical background

2. Classical symmetric Laplace distribution

3. Asymmetric Laplace distributions

4. Related distributions

PART II: Multivariate Distributions

5. Symmetric multivariate Laplace distribution

6. Asymmetric multivariate Laplace distribution

PART III: Applications

7. Engineering sciences

8. Financial data

9. Inventory management and quality control

10. Astronomy and the biological and environmental sciences

APPENDIX: Bessel Functions

Readership: Researchers in communication, economics, engineering, and finance

After a brief introduction on the history of the Laplace distribution, in which a comparison with the normal distribution is made, the authors give a detailed study of the one-dimensional theory. Several analytic properties are presented, including transforms, sums, mixtures, infinite divisibility and characterizations. Several of these properties are extended to the multivariate case. A brief discussion of the Laplace process, a special case of the Lévy process, is given. The examples show for what kinds of problems these models can be used. Throughout the text, several historical remarks and exercises are given. This book will be useful for those who are interested in tractable models that allow one to go beyond the normal distribution.
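
For reference, the classical symmetric Laplace density that Part I of the book studies has the standard form (notation assumed here, not quoted from the review), with location θ and scale s:

```latex
% Classical symmetric (double exponential) Laplace density.
f(x) = \frac{1}{2s}\,\exp\!\left(-\frac{|x-\theta|}{s}\right), \qquad x \in \mathbb{R}
```

Compared with the normal density, it has a sharp peak at θ and heavier, exponential tails, which is precisely what makes it attractive for the financial and engineering data treated in Part III.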

Reviewer: Institute ETH-Zürich Place Zürich, Switzerland Name P.A.L. Embrechts

Title BROWNIAN MOTION. Fluctuation, Dynamics and Applications. Author R.M. Mazo. Publisher Oxford: Clarendon Press, 2002, pp. xii + 289. Contents:

1. Historical background

2. Probability theory

3. Stochastic processes

4. Einstein-Smoluchowski theory

5. Stochastic differential equations and integrals

6. Functional integrals

7. Some important special cases

8. The Smoluchowski equation

9. Random walk

10. Statistical mechanics

11. Stochastic equations from a statistical mechanical viewpoint

12. Two exactly treatable models

13. Brownian motion and noise

14. Diffusion phenomena

15. Rotational diffusion

16. Polymer solutions

17. Interacting Brownian particles

18. Dynamics, fractals and chaos

Readership: Physicists and mathematicians interested in the physics background of Brownian motion

Having borrowed its name from the botanist Robert Brown (1773-1858), Brownian motion found its way into mainstream physics through the work of Einstein and Smoluchowski. Perrin (1909) gave experimental verification of the ES theory, and J.J. Thomson's work on the electron finally sealed the atomic theory of matter, and hence Brownian motion as a fundamental process underlying microscopic movement. After a brief discussion of this historical basis, the author explains how Brownian motion technology was further developed within physics. The emphasis is very much on physical relevance rather than mathematical depth: all constants appearing have a physical meaning, and the historical development of the subject is traced throughout the text. Though the author is aware of the huge mathematical literature, he stays firmly within the realm of physics; recent probabilistic standard texts like Karatzas-Shreve or Revuz-Yor are not referred to. I do, however, find that this book makes a nice and even refreshing complement to the overwhelming emphasis put on the application of Brownian motion to finance in most of the books on the subject written for mathematicians. No doubt physicists will find the text useful. I am also convinced that mathematicians with a keen interest in the subject will want to go back to its historical roots and learn how Brownian motion has fared within fields of application motivated by physics.

Reviewer: Institute ETH-Zürich Place Zürich, Switzerland Name P.A.L. Embrechts

Title PROBABILITY AND FINANCE: IT'S ONLY A GAME. Author G. Shafer and V. Vovk. Publisher New York: Wiley, 2001, pp. xvi + 414, £64.50. Contents:

1. Probability and finance as a game

PART I: Probability Without Measure

2. The historical context

3. The bounded strong law of large numbers

4. Kolmogorov's strong law of large numbers

5. The law of the iterated logarithm

6. The weak laws

7. Lindeberg's theorem

8. The generality of probability games

PART II: Finance Without Probability

9. Game-theoretic probability in finance

10. Games for pricing options in discrete time

11. Games for pricing options in continuous time

12. The generality of game-theoretic pricing

13. Games for American options

14. Games for diffusion processes

15. The game-theoretic efficient-market hypothesis

Readership: Mathematicians, statisticians and philosophers interested in the foundations of probability, and anyone interested in a new approach to pricing derivatives

The first half of this book develops probability theory from a novel game-theoretic perspective, rather than the standard measure-theoretic one. The authors take pains to point out that this theoretical development is not yet complete, and that rich possibilities for further work remain. Relationships between the game-theoretic perspective and other approaches to probability are described, with the development of the various interpretations of probability and alternative foundations set in historical context. The second half applies these new ideas to the pricing of financial derivatives.

This is a creative, entertaining and imaginative book. It will make intriguing reading for any statistician who wants something a little out of the ordinary, and for anyone who is attracted by the challenge of the authors' assertion that the game-theoretic approach 'goes deeper into probability's conceptual roots than the established measure-theoretic framework, [it] is better adapted to many practical problems, and [it] clarifies the close relationship between probability theory and finance theory'.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name D.J. Hand

Title COMPARISON METHODS FOR STOCHASTIC MODELS AND RISKS. Author A. Müller and D. Stoyan. Publisher Chichester, U.K.: Wiley, 2002, pp. xii + 330, £55.00. Contents:

1. Univariate stochastic orders

2. Theory of integral stochastic orders

3. Multivariate stochastic orders

4. Stochastic models, comparison and monotonicity

5. Monotonicity and comparability of stochastic processes

6. Monotonicity properties and bounds for queueing systems

7. Applications to various stochastic models

8. Comparing risks

Readership: Academic researchers in mathematics, probability, statistics, physics, economics

The book gives a fairly formal exposition of the theory, much of it in the style of Theorem-Proof-Remarks. However, there is also a wealth of discussion and some important applications are thoroughly explored. The material is presented at the research level: references to the literature abound and, for example, there are no collections of student exercises.

In Chapter 1 there are many kinds of criteria for, and developments of, the basic idea that one random variable can tend to take larger values than another. Chapter 2 is focussed upon a particular class, called integral stochastic orderings. In Chapter 3 the treatment is extended to multivariate distributions, and in Chapter 4 methods are presented for relating ordering properties of systems to those of their components. The theory is applied to stochastic processes, Markov processes in particular, in Chapter 5, and to queueing systems in Chapter 6. Chapter 7 contains a variety of applications, including renewal processes, reliability, scheduling, random sets and point processes, and the Ising model. Finally, some financial applications, portfolio optimization and actuarial risk, are covered in Chapter 8.

Reviewer: Institute Imperial College of Science Technology and Medicine Place London, U.K. Name M.J. Crowder

Title FINANCIAL ENGINEERING AND COMPUTATION. Author Y.D. Lyuu. Publisher Cambridge University Press, 2002, pp. xix + 627, £45.00/US$70.00. Contents:

1. Introduction

2. Analysis of algorithms

3. Basic financial mathematics

4. Bond price volatility

5. Term structure of interest rates

6. Fundamental statistical concepts

7. Option basics

8. Arbitrage in option pricing

9. Option pricing models

10. Sensitivity analysis of options

11. Extensions of options theory

12. Forwards, futures, futures options, swaps

13. Stochastic processes and Brownian motion

14. Continuous-time financial mathematics

15. Continuous-time derivative pricing

16. Hedging

17. Trees

18. Numerical methods

19. Matrix computation

20. Time series analysis

21. Interest rate derivative securities

22. Term structure fitting

23. Introduction to term structure modelling

24. Foundations of term structure modelling

25. Equilibrium term structure models

26. No-arbitrage term structure models

27. Fixed-income securities

28. Introduction to mortgage-backed securities

29. Analysis of mortgage-backed securities

30. Collateralized mortgage obligations

31. Modern portfolio theory

Readership: Those in academia and the finance industry

The book is aimed at engineering and science students wishing to pursue quantitative finance, so no finance background is assumed. It will also be useful as a reference for workers in the finance industry. The flavour is interdisciplinary, drawing together financial mathematics, econometrics and computation. The material is on the whole presented in small packets (the average chapter length is 15 pages), which encourages self-study and reference. Many examples and exercises are interspersed throughout the text, and detailed answers to many of the latter are given at the end. The computational aspects are, to quote the author, Web-centric: the Java-based software can be accessed through a given web site. A guide to additional reading is appended to each chapter, referring to a huge bibliography (897 items), together with some miscellaneous notes. The index is likewise extremely thorough.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name M.J. Crowder

Title ANALYSIS OF FINANCIAL TIME SERIES. Author R.S. Tsay. Publisher Chichester, U.K.: Wiley, 2002, pp. xii + 448, £66.95/105.90 Euro. Contents:

1. Financial time series and their characteristics

2. Linear time series analysis and its application

3. Conditional heteroscedastic models

4. Nonlinear models and their applications

5. High-frequency data analysis and market microstructure

6. Continuous-time models and their applications

7. Extreme values, quantile estimation, and value at risk

8. Multivariate time series analysis and its applications

9. Multivariate volatility models and their applications

10. Markov chain Monte Carlo methods with applications

Readership: Academia (mathematics, statistics, economics undergraduates, MBA students), finance industry (quantitative analysts, statisticians).

A major strength of the book is that it presents both the theory and the practice for the various methods. Another feature is the coverage of recent advances in the methodology. The technical level is (approximately) first year undergraduate mathematics, together with a working knowledge of statistics. There are exercises and references at the end of each chapter. Programming code is listed for applying the methods, mainly using the RATS package, and real-data examples are tracked. Some chapters have appendices covering essential basic material, for example review of probability distributions, vectors and matrices, and the Black-Scholes formula. This will be a very useful book for learning and reference.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name M.J. Crowder

Title INTEREST RATE MODELS. Theory and Practice. Author D. Brigo and F. Mercurio. Publisher Berlin: Springer-Verlag, 2001, pp. xxxv + 518, US$64.95. Contents:

PART I. Models: Theory and Implementation

1. Definitions and notation

2. No-arbitrage pricing and numeraire change

3. One-factor short-rate models

4. Two-factor short-rate models

5. The Heath-Jarrow-Morton (HJM) framework

6. The LIBOR and swap market models (LFM and LSM)

7. Cases of calibration of the LIBOR market model

8. Monte Carlo tests for LFM analytical approximations

9. Other interest-rate models

PART II. Pricing Derivatives in Practice

10. Pricing derivatives on a single interest-rate curve

11. Pricing derivatives on two interest-rate curves

12. Pricing equity derivatives under stochastic rate

PART III. Appendices

13. Crash Introduction to Stochastic Differential Equations

14. A Useful Calculation

15. Approximating Diffusions with Trees

16. Talking to the Traders

Readership: Those interested in quantitative finance

The text is without doubt my favourite on the subject of interest rate modelling. It perfectly combines mathematical depth, historical perspective and practical relevance. The fact that the authors combine a strong mathematical (finance) background with expert practical knowledge (they both work in a bank) contributes hugely to its character. I also admire the style of writing: at the same time concise and pedagogically fresh. The authors' applied background allows for numerous comments on why certain models have (or have not) made it in practice. The theory is interwoven with detailed numerical examples. A final Appendix "discussion" with a trader yields insight into current and future developments in the field. For those who have a sufficiently strong mathematical background, this book is a must.

Reviewer: Institute ETH-Zürich Place Zürich, Switzerland Name P.A.L. Embrechts

Title AN INTRODUCTION TO STATISTICAL MODELING OF EXTREME VALUES. Author S. Coles. Publisher London: Springer-Verlag, 2001, pp. xiv + 208, US$79.95/ Euro 83.00. Contents:

1. Introduction

2. Basics of statistical modeling

3. Classical extreme value theory and models

4. Threshold models

5. Extremes of non-stationary sequences

6. A point process characterization of extremes

7. Multivariate extremes

8. Further topics

Readership: Statisticians, risk management professionals, economists, engineers

This book is an accessible introduction to univariate and multivariate extreme value methods. Topics covered include classical block maxima models, threshold exceedance models, modeling and testing issues with extremes of dependent sequences, extremes of non-stationary sequences, and an introduction to multivariate extremes. It contains numerous examples and applications with sets of data from various fields including engineering, oceanography and finance. It also contains a list of web sites where software routines for the extreme value methods are available.

Reviewer: Institute University of Windsor Place Windsor, Canada Name R. Gençay

Title CREDIT RISK: MODELING, VALUATION AND HEDGING. Author T.R. Bielecki and M. Rutkowski. Publisher New York: Springer-Verlag, 2002, pp. xviii + 500, US$64.95. Contents:

1. Introduction to credit risk

2. Corporate debt

3. First-passage time models

4. Hazard function of a random time

5. Hazard process of a random time

6. Martingale hazard process

7. Case of several random times

8. Intensity-based valuation of defaultable claims

9. Conditionally independent defaults

10. Dependent defaults

11. Markov chains

12. Markovian models of credit migrations

13. Heath-Jarrow-Morton type models

14. Defaultable market rates

15. Modeling of market rates

Readership: Researchers, graduate students and practitioners with a knowledge of stochastic calculus and arbitrage pricing theory

The intention of this monograph is to provide a comprehensive summary of recent advances in credit risk research including the value-of-the-firm and the intensity-based approaches. The emphasis is on the models, expressed through stochastic differential equations, leading to assessment of risk. The book is at an advanced mathematical level and covers a great deal of ground, discussing the results in hundreds of papers on the subject. In spite of considerable coverage and a mathematical approach, it is not difficult to read. My only criticism is generalizable to a great deal of finance literature; for all the high-level discussion of competing models, there is often a conspicuous lack of fitting these models to data and comparing their goodness-of-fit. The purpose of this book, to provide access to the explosive new research in credit risk and the accompanying mathematical models, is well-served.

Reviewer: Institute University of Waterloo Place Waterloo, Canada Name D.L. McLeish

Title LINEAR PROGRAMMING: Foundations and Extensions. Author R.J. Vanderbei. Publisher Kluwer Academic Publishers, 2001, pp. xviii + 450, Euro88.00/US$80.00/£56.00. Contents:

PART I. Basic Theory. The Simplex Method and Duality

1. Introduction

2. The simplex method

3. Degeneracy

4. Efficiency of the simplex method

5. Duality theory

6. The simplex method in matrix notation

7. Sensitivity and parametric analysis

8. Implementation issues

9. Problems in general form

10. Convex analysis

11. Game theory

12. Regression

PART II: Network–Type Problems

13. Network flow problems

14. Applications

15. Structural optimization

PART III: Interior–Point Methods

16. The central path

17. A Path-Following method

18. The KKT system

19. Implementation issues

20. The affine–scaling method

21. The homogeneous self–dual method

PART IV: Extensions

22. Integer programming

23. Quadratic programming

24. Convex programming

Readership: Mathematical programmers, operational researchers

This elegant text is eminently suitable for use in teaching linear programming to third year undergraduates or postgraduates. The carefully written material is presented as a reader-friendly blend of mathematics and numerical examples. The author clearly prefers to solve problems by computer, rather than by hand, and his chapters on implementation issues are welcome. All the algorithms in the book have been coded, and the source code, in C, is available from the author's website. An important contribution of this book is the section on interior point methods; the author has research experience in this area and uses his expository skills to describe the algorithms and to discuss the relationships between these methods. This book is highly recommended; it is a must for modern linear programmers.

Reviewer: Institute London School of Economics Place London, U.K. Name S. Powell

Title LINEAR AND INTEGER PROGRAMMING. Theory and Practice, 2nd edition. Author G. Sierksma. Publisher New York: Dekker, 2002, pp. xiv + 633, US$175.00. Contents:

1. Linear programming: Basic concepts

2. Dantzig's simplex method

3. Duality and optimality

4. Sensitivity analysis

5. Karmarkar's interior path method

6. Integer linear programming

7. Linear network models

8. Computational complexity issues

9. Model building, case studies, and advanced techniques

Readership: Mathematical programmers, operational researchers

This text is one of many suitable for a third year undergraduate or postgraduate course that describes linear and integer programming theory. It is satisfying that an interior point method is now so well understood that it appears as a chapter alongside chapters on the classical simplex and network flow algorithms. The computer program INTPM enables the user to solve small problems (fewer than 75 variables and 40 equality constraints) using an interior point method. The author thanks the editorial staff of Marcel Dekker for their professional expertise; regrettably this did not extend to typographic issues. The text is too widely spaced; indeed, on many pages it appears as if there are three spaces either side of each word. An English-speaking proofreader would have been helpful.

Reviewer: Institute London School of Economics Place London, U.K. Name S. Powell

Title OPTIMIZATION HEURISTICS IN ECONOMETRICS. Author P. Winker. Publisher Chichester U.K.: Wiley, 2001, pp. xiii + 333, £55.00. Contents:

1. Introduction

PART I: Optimization in Statistics and Econometrics

2. Optimization in economics

3. Optimization in statistics and econometrics

4. The heuristic optimization paradigm

PART II: Heuristic Optimization: Threshold Accepting

5. Optimization methods

6. The global optimization heuristic threshold accepting

7. Relative performance of threshold accepting

8. Tuning of threshold accepting

9. A practical guide to the implementation of threshold accepting

PART III: Application in Statistics and Econometrics

10. Introduction

11. Experimental design

12. Identification of multivariate lag structures

13. Optimal aggregation

14. Censored quantile regression

15. Continuous global optimization

PART IV: Conclusion and Outlook

16. Conclusion

17. Outlook for further research

Readership: Economists, econometricians, operational researchers

In the last decade the methods of heuristic optimization, for example simulated annealing, tabu search, neural networks and genetic algorithms, have been satisfactorily applied by operational researchers to their combinatorial optimization problems. This text aims to demonstrate to economists and econometricians that these methods are accessible, powerful and useful for their models. The author is keen to convince economists that there are optimization tools, other than the classical methods, that can be applied to difficult or intractable problems. Many of the algorithms have been computationally implemented, and the description of this practical experience is invaluable to readers with a problem to solve. There is an 'art' in successfully using these methods and, because of its heuristic nature, it is an area little discussed in the literature. A text that comprehensively addresses this 'art' is to be welcomed.

Reviewer: Institute London School of Economics Place London, U.K. Name S. Powell

Title THE ELEMENTS OF STATISTICAL LEARNING. Author T. Hastie, R. Tibshirani and J. Friedman. Publisher New York: Springer-Verlag, 2001, pp. xvi + 533, US$74.95/DM159.90. Contents:

1. Introduction

2. Overview of supervised learning

3. Linear methods for regression

4. Linear methods for classification

5. Basis expansions and regularization

6. Kernel methods

7. Model assessment and selection

8. Model inference and averaging

9. Additive models, trees, and related methods

10. Boosting and additive trees

11. Neural networks

12. Support vector machines and flexible discriminants

13. Prototype methods and nearest neighbors

14. Unsupervised learning

Readership: Students of statistics, data analysis, data mining and machine learning, and anyone who wishes an accessible, clear and authoritative introduction to these areas

This book describes modern tools for data analysis. With the exception of the last chapter, it is concerned with 'supervised' methods - those in which a sample of cases is available, including values of an outcome variable, on which one can build a model allowing one to predict the value of the outcome variable for new cases. The authors are amongst the leaders in this area, having developed many of the modern tools. Such methods have seen extraordinary development in recent decades, primarily because of progress in computer technology, but also because of the huge range of applications. Furthermore, the practical development of these modeling and inferential tools has resulted in a deeper theoretical understanding of the modeling process. The (72-page) last chapter of the book describes unsupervised methods, which model a distribution without being given values of an outcome variable.

The book includes many special cases and examples, which give insights into the ideas and methods. It explains very clearly the relationships between the methods, and covers both standard statistical staples, such as linear and logistic regression, as well as modern tools. It is not overburdened with unnecessary mathematics but uses only what is necessary for the practical application of the methods.

The index is fairly brief, and does not do justice to a book covering such a range of ideas. There are exercises at the end of each chapter, although these are primarily of a theoretical rather than a data-analytic nature. The book would make an ideal course text, but would need to be supplemented by practical details of how to use software tools (such as S-plus or R) to implement the methods.

The book has been beautifully produced. It was a pleasure to read. I strongly recommend it.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name D.J. Hand

Title CAUSALITY: MODELS, REASONING AND INFERENCE. Author J. Pearl. Publisher Cambridge University Press, 2001, pp. xvi + 384, £25.00/US$39.95. Contents:

1. Introduction to probabilities, graphs and causal models

2. A theory of inferred causation

3. Causal diagrams and the identification of causal effects

4. Actions, plans and direct effects

5. Causality and structural models in social science and economics

6. Simpson's paradox, confounding and collapsibility

7. The logic of structure based counterfactuals

8. Imperfect experiments: bounding effects and counterfactuals

9. Probability of causation: interpretation and identification

10. The actual cause

Epilogue: The art and science of cause and effect

Readership: Artificial intelligence researchers, cognitive scientists, social scientists, statisticians

Causality is a complex and contested notion. This book does two things: it provides a brief historical overview and offers intelligent, pragmatic help for researchers. It will particularly help clarify the thoughts of researchers who are struggling to make sense of messy, noisy environments and need tools and reassurance. As Pearl writes in the context of visual perception (p.60), "How safe are our predictions…? …Not absolutely safe, but good enough to tell a tree from a house…." That is not a purist view, but it is an intensely practical one, and necessary as a way of making a first cut description. Pearl's chapter on counterfactuals (Chapter 7) is perhaps the most important for those not expert in the topic, providing a clear account of the underlying logic. Indeed, concepts and reasoning are explained with great care throughout. The Epilogue, which is also a summary and can usefully be read first, is highly entertaining, reflecting the author's generally confident and outgoing approach.

Reviewer: Institute University College London Place London, U.K. Name B. Farbey

Title THEORIE DES SONDAGES: Echantillonnage et estimation en population finie. Author Y. Tillé. Publisher Paris: Dunod, 2001, pp. xii + 284. Contents:

1. Introduction

2. A history of ideas in survey sampling theory

3. The foundations of survey sampling theory

4. Simple designs

5. Unequal-probability designs

6. Splitting-based sampling and variance estimation

7. Stratification

8. Balanced designs

9. Cluster, multi-stage and two-phase designs

10. Estimation with auxiliary information and simple designs

11. Estimation with auxiliary information and complex designs

12. Variance estimation by linearization

13. Treatment of non-response

Readership: Graduate students in mathematics and applied sciences

This book originated as lecture notes for students, but many new research results have been added. It remains well suited for use in teaching, above all thanks to the exercises (with solutions). The subject of the work is of great current interest: survey sampling theory, methods of parameter estimation and the treatment of non-response. The level is fairly mathematical, but thanks to an excellent bibliography many technical details have been kept out of the text.

Reviewer: Institute Limburgs Universitair Centrum Place Diepenbeek, Belgium Name N.D.C. Veraverbeke

Title THE BAYESIAN CHOICE. From Decision-Theoretic Foundations to Computational Implementation. Author C.P. Robert. Publisher New York: Springer-Verlag, 2001, pp. xxiii + 604, US$79.95/DM183.00. Contents:

1. Introduction

2. Decision-theoretic foundations

3. From prior information to prior distributions

4. Bayesian point estimation

5. Tests and confidence regions

6. Bayesian calculations

7. Model choice

8. Admissibility and complete classes

9. Invariance, Haar measures, and equivariant estimators

10. Hierarchical and empirical Bayes extensions

11. A defense of the Bayesian choice

Readership: Undergraduate and postgraduate students of Bayesian statistics

The first edition [Short Book Reviews, Vol. 15, p.27] of this book was a translation, by the author, of his book in French, and this second expanded edition advances from an introductory level to cover recent work in the Bayesian area. The text reads fluently and beautifully throughout, with light, good-humoured touches that warm the reader without being intrusive. There are many examples and exercises, some of which draw out the essence of the work of other authors. Each chapter ends with a "Notes" section containing further brief descriptions of research papers. A reference section lists about eight hundred and sixty references. Each chapter begins with a quotation from The Wheel of Time, a sequence of books by Robert Jordan. Only a few displays and equations have numbers attached. This is an extremely fine, exceptional text of the highest quality.

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name N.R. Draper

Title SUBJECTIVE PROBABILITY MODELS FOR LIFETIMES. Author F. Spizzichino. Publisher Boca Raton, Florida: Chapman and Hall/CRC, 2001, pp. xx + 248, £39.99/US$59.95. Contents:

1. Exchangeability and subjective probability

2. Exchangeable lifetimes

3. Some concepts of dependence and aging

4. Bayesian models of aging

5. Bayesian decisions, orderings, and majorization

Readership: Academia (Statistics and Operational Research, Biostatistics, Epidemiology), Industry (Reliability Research)

The book reflects a fairly committed type of Bayesian approach, for example, 'any unknown quantity is treated as a random variable' (Preface). The level is postgraduate in that it assumes a background in calculus of several variables, probability at an intermediate level, basic elements of stochastic processes, etc. (Preface). The style is fairly mathematical, punctuated throughout by Definition, Remark, Proposition, Example, Lemma and Theorem. That said, there is also much clear and helpful explanation of concepts. The material is mainly theoretical - one will not find practical analyses of data sets here. Each chapter ends with Exercises and Bibliography, which will be particularly useful for research students.

Reviewer: Institute Imperial College of Science, Technology and Medicine Place London, U.K. Name M. Crowder

Title BAYESIAN SURVIVAL ANALYSIS. Author J.G. Ibrahim, M.H. Chen and D. Sinha. Publisher New York: Springer-Verlag, 2001, pp. xiv + 479, US$79.95/DM183.00. Contents:

1. Introduction

2. Parametric models

3. Semiparametric models

4. Frailty models

5. Cure rate models

6. Model comparison

7. Joint models for longitudinal and survival data

8. Missing covariate data

9. Design and monitoring of randomized clinical trials

10. Other topics

Readership: Graduate students in biostatistics and statistics

The analysis of time-to-event data arises naturally in many fields of study. This book focuses exclusively on medicine and public health but the methods presented can be applied in a number of other areas, including biology, economics and engineering.

Although several previously published texts address survival analysis from a frequentist perspective, this book examines solely Bayesian approaches to survival analysis. Recent advances in computing and practical methods for prior elicitation have now made Bayesian survival analysis of complex models feasible. This book provides a comprehensive and modern treatment of the subject. In addition, the authors demonstrate the use of the statistical package BUGS for several of the models and methodologies discussed in the book. The authors provide a collection of theoretical and applied problems in the exercises at the end of each chapter. Whilst BUGS is a very useful software package, there are a number of ways in which it might be extended, and the authors discuss some of these towards the end of the book.

Reviewer: Institute CEFAS Lowestoft Laboratory Place Lowestoft, U.K. Name C.M. O'Brien

Title CLASSICAL COMPETING RISKS. Author M.J. Crowder. Publisher Boca Raton: Chapman and Hall/CRC, 2001, pp. iii + 186. Contents:

1. Continuous failure times and their causes

2. Parametric likelihood inference

3. Latent failure times: Probability distributions

4. Likelihood functions for univariate survival data

5. Discrete failure times in competing risks

6. Hazard-based methods for continuous failure times

7. Latent failure times: Identifiability crises

8. Martingale counting processes in survival data

APPENDIX A: Numerical Maximisation of Likelihood Functions

APPENDIX B: Bayesian Computation

Readership: Statisticians, engineers, scientists

This short book gives an excellent self-contained treatment of competing risks and perforce of survival analysis. The coverage is quite comprehensive, with parametric, nonparametric, and semi-parametric methods discussed and illustrated on a variety of sets of data from the literature. The chapter on identifiability issues collects results which are not much discussed in other books on survival analysis. Finally, the book is fun to read, with occasional outbreaks of breezy style and references to sages such as Sherlock Holmes and Peter Sellers.

Reviewer: Institute University of Waterloo Place Waterloo, Canada Name J.F. Lawless

Title REGRESSION BASICS. Author L.H. Kahane. Publisher Thousand Oaks, California: Sage, 2001, pp. xi + 202, £21.00. Contents:

1. An introduction to the linear regression model

2. The least-squares estimation method: Fitting lines to data

3. Model performance and evaluation

4. Multiple regression analysis

5. Non-linear, dummy, interaction and time variables

6. Some common problems in regression analysis

7. Where to go from here

Readership: Regression beginners in econometrics

The Sage publishing philosophy appears to be one of taking a subject (like regression analysis, for example) and splitting it up into many small pieces. This is one such small piece; the book "comes to an end" on page 138 and the Appendices follow. It is written by an economist, as a simple introductory manual and "is intended to be a companion to a more comprehensive textbook", which seeks to demonstrate the value of regression in research. It refers to twenty-six other publications in a reference list: ten papers, nine other Sage books, and seven more books, mostly econometrics oriented. There are twenty exercises. The narrow vision of this book is both its main virtue and its major drawback.

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name N.R. Draper

Title ADVANCED LINEAR MODELING. Multivariate, Time Series, and Spatial Data; Nonparametric Regression and Response Surface Maximization. Author R. Christensen. Publisher New York: Springer-Verlag, 2001, pp. xii + 398, US$79.95/DM182.00. Contents:

1. Multivariate linear models

2. Discrimination and allocation

3. Principal components and factor analysis

4. Frequency analysis of time series

5. Time domain analysis

6. Linear models for spatial data: Kriging

7. Nonparametric regression

8. Response surface maximization

Readership: Mathematical statisticians

This book is the second edition of Linear Models for Multivariate, Time Series and Spatial Data (1991) [Short Book Reviews, Vol. 11, p.47]. The main changes are the addition of Chapter 7 on nonparametric regression (orthogonal series approximations, splines, regression trees, …) and Chapter 8 on response surface maximization. The emphasis in this work is on the linear model theory which unifies three major fields in statistics: multivariate analysis, time series and spatial data. Most chapters end with a selection of exercises, which makes the book also interesting for teaching purposes.

Reviewer: Institute Limburgs Universitair Centrum Place Diepenbeek, Belgium Name N.D.C. Veraverbeke

Title ANALYSIS OF MESSY DATA. Volume III. Analysis of Covariance. Author G.A. Milliken and D.E. Johnson. Publisher Boca Raton, Florida: Chapman and Hall/CRC, 2002, pp. xv + 605, US$69.95/£46.99. Contents:

1. Introduction to the analysis of covariance

2. One-way analysis of covariance - one covariate in a completely randomized design structure

3. Examples for Chapter 2

4. Multiple covariates in a one way treatment structure in a completely randomized design structure

5. Two-way treatment structure and analysis of covariance in a completely randomized design structure

6. Beta-hat models

7. Variable selection in the analysis of covariance model

8. Comparing models for several treatments

9. Two treatments in a randomized complete block design structure

10. More than two treatments in a blocked design structure

11. Covariate measured on the block in RCB and incomplete block design structures

12. Random effects models with covariates

13. Mixed models

14. Analysis of covariance models with heterogeneous errors

15. Analysis of covariance for split-plot and strip-plot design structures

16. Analysis of covariance for repeated measure designs

17. Analysis of covariance for nonreplicated experiments

18. Special applications of analysis of covariance

Readership: Applied statisticians, experimenters, graduate students

This is the authors' third Analysis of Messy Data volume, following on from their 1984 Volume 1 [Short Book Reviews, Vol. 5, p. 18] and their 1989 Volume 2 [Short Book Reviews, Vol. 9, p.21]. We owe them a huge debt for their twenty-five years (since they began writing) of persistence. As with the previous volumes, the authors go systematically and solidly through their material, in this volume analysis of covariance. Their displays rely mostly on the SAS system software, with a leavening of JMP tables. This should not deter users of other systems, who will easily follow and adapt what they see. A few references are given at the end of each chapter, but there is no collected bibliography. The data in most of the examples "were generated to simulate real world applications that we have encountered in our consulting experiences." Each chapter has a few exercises. The book as a whole has many sets of data. In the years to come, many consulting statisticians will say to their clients, "Why don't we see what Milliken and Johnson have to say on that?" as they pull this book from their shelves.

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name N.R. Draper

Title MEASUREMENT ERROR AND LATENT VARIABLES IN ECONOMETRICS. Author T. Wansbeek and E. Meijer. Publisher Amsterdam: Elsevier, 2000, pp. xii + 440, NLG195.00/EURO88.49/US$102.00. Contents:

1. Introduction

2. Regression and measurement error

3. Bounds on the parameters

4. Identification

5. Consistent adjusted least squares

6. Instrumental variables

7. Factor analysis and related methods

8. Structural equation models

9. Generalized method of moments

10. Model evaluation

11. Nonlinear latent variable models

Readership: Econometricians, government statisticians, graduate econometrics students and researchers in regression modelling

This text presents a unified approach to dealing with two apparently different problems that result in regressor variables being unobservable. These two kinds of 'unobservable' variables are those subject to measurement error or 'noise', possibly introduced as part of the data collection process, and latent variables, which are conceptual or idealized variables that cannot be measured directly. The book begins with a discussion of what goes wrong, in the sense of inconsistency of the estimators, when regressors in a multiple regression model are subject to error. This is recognized as an identification problem where no consistent estimator may exist. The use of additional information to enable the development of reliable consistent estimators and the construction of instruments from the available data are covered in Chapters 5 and 6. These ideas are extended to the multiple equations setting through the use of factor analysis models in Chapter 7. The important general class of structural equation models and the use of the generalized method of moments form the basis of later chapters. Polynomial models and non-linear models with ordered categorical variables are considered in the final chapter.

The approach relies heavily on the extensive use of matrix algebra as associated with fitting linear regression models and on the statistical results involved particularly with the distributions of quadratic forms. Most of the required results in algebra, calculus and statistics are brought together in two detailed appendices. Each chapter concludes with an interesting set of bibliographical notes that ties up any loose ends and identifies sources of additional material. Relatively few numerical examples are included but there is an extensive thirty-four-page list of references.

Reviewer: Institute University of Southampton Place Southampton, U.K. Name P. Prescott

Title THE ECONOMETRIC ANALYSIS OF SEASONAL TIME SERIES. Author E. Ghysels and D.R. Osborn. Publisher Cambridge University Press, 2001, pp. xxi + 228, £47.50/US$69.95 Cloth; £17.95/US$24.95 Paper. Contents:

1. Introduction to seasonal processes

2. Deterministic seasonality

3. Seasonal unit root processes

4. Seasonal adjustment programs

5. Estimation and hypothesis testing with unfiltered and filtered data

6. Periodic processes

7. Some non-linear seasonal models

Epilogue

Readership: Economists, government statisticians, advanced graduate students

The book covers recent developments in the theory and practice of seasonal adjustment. The authors describe various test procedures for unit root models and compare these procedures by Monte Carlo methods. This is a very useful book for economists and econometricians working in this area who are interested in learning all the up-to-date methods. My only criticism of this book is that the authors do not illustrate the techniques with a large number of real time series. One feels that the book is written for theoreticians rather than for practitioners.

Reviewer: Institute University of Manchester Institute of Science and Technology Place Manchester, U.K. Name T. Subba Rao

Title MULTIVARIATE ANALYSIS OF QUALITY, AN INTRODUCTION. Author H. Martens and M. Martens. Publisher Chichester: Wiley, 2001, pp. xx + 445, £85.00. Contents:

PART I: Overview

PART II: Methodology

PART III: Applications

PART IV: Appendices

Readership: Chemometricians, food scientists, sensometricians, data analysts

Traditional multivariate analysis deals predominantly with "tall thin" data matrices having more individuals than variables. Computerized measurements as found in chemometrics, food science, sensory studies and related areas, however, produce "short fat" data matrices having many more variables than individuals. Traditional methods often hit problems with such matrices, so a numerical approach termed soft modelling has been popularized in these areas. Central to this approach is the bi-linear model, with iterative partial least squares and cross-validation, which provides the tools for estimation and assessment.

This book presents a wide-ranging account of these methods. While certainly of general interest to statisticians, its main target is the research scientist working in these substantive areas. There is extensive discussion of design and experimentation issues as well as those of analysis. The authors adopt an application-driven, user-friendly and graphically oriented presentation, with most technical mathematics relegated to the appendices. It should prove a very useful text for this target readership.

Reviewer: Institute University of Exeter Place Exeter, U.K. Name W.J. Krzanowski

Title ANALYSIS OF TIME SERIES STRUCTURE. SSA AND RELATED TECHNIQUES. Author N. Golyandina, V. Nekrutkin and A. Zhigljavsky. Publisher Boca Raton, Florida: Chapman and Hall/CRC, 2001, pp. xi + 305, US$79.95/£49.99. Contents:

Introduction

PART I

1. Basic SSA

2. SSA forecasting

3. SSA detection of structural changes

PART II: SSA Theory

4. Singular value decomposition

5. Time series of finite rank

6. SVD for stationary series

Readership: Statisticians, graduate students of statistics, econometricians, multivariate analysts

"Singular-Spectrum Analysis" (SSA) offers an intriguing view of time series analysis and forecasting developed mainly by physicists and meteorologists. It is an approach that is different from the traditional Box-Jenkins and spectral (frequency domain) methods familiar to statisticians.

Part I is devoted entirely to illustrating the theory by detailed analyses of a number of time series by SSA. The examples include some well-known series such as the sunspot data. Each example illustrates some special features. The practical choices which have to be made when applying the theory are discussed. The explanations give much insight into the method.

In Part II the formal mathematical theory, which underpins the method, is laid out with admirable clarity. The authors have performed a service to the statistical community by writing this book. It is likely to become the standard reference to SSA; helpful to the applied statistician who wishes to analyse a time series and also to the theoretician who may wish to develop this interesting approach to time series analysis further.

Reviewer: Institute University of Cape Town Place Rondebosch, South Africa Name J.M. Juritz

Title MULTIVARIATE PERMUTATION TESTS: WITH APPLICATIONS IN BIOSTATISTICS. Author F. Pesarin. Publisher Chichester, U.K.: Wiley, 2001, pp. xxvi + 408, £55.00. Contents:

1. Introduction

2. Discussion of a simple testing problem

3. Theory of permutation tests for one-sample problems

4. Examples of univariate multi-sample problems

5. Theory of permutation tests for multi-sample problems

6. Nonparametric combination methodology

7. Examples of nonparametric combination

8. Permutation analysis in factorial designs

9. Permutation testing with missing data

10. The Behrens-Fisher permutation problem

11. Permutation testing for repeated measurements

12. Further applications

Readership: Professional statisticians, graduate students, researchers and practitioners facing complex testing problems

This text carefully presents a concise and mathematically rigorous treatment of permutation testing in univariate and multivariate situations. In the Introduction, the author points out that there are two approaches to the construction of permutation tests: the first is the heuristic or intuitive approach, which is often used for simple problems; the second is based on the concept of conditioning with respect to a set of sufficient statistics under the null hypothesis. The basic ideas are introduced through a simple two-sample testing problem. More involved problems are developed in subsequent chapters, each illustrated with practical examples and concluding with a set of exercises. Later chapters deal with quite complex problems, including the use of synchronized tests in factorial designs and the discussion of permutation testing in situations, such as repeated measures analyses, longitudinal studies, the analysis of panel data and response trajectories, which can be re-presented as multivariate problems.

The computationally intensive methods are carried out using conditional Monte Carlo (CMC) methods. Programs and macros for these methods, suitable for running in SAS and S-Plus, together with a demonstration copy of NPC Test 2.0 and all the data sets used in the text, are available from the Internet.

The text could be used for a mathematically orientated graduate class but it is more likely that the book will form a source of recent reference material for research workers in the area of permutation testing. Many of the references, twenty-seven pages in all, are to publications within the last two or three years.

Reviewer: Institute University of Southampton Place Southampton, U.K. Name P. Prescott

Title STATISTICAL METHODS IN BIOINFORMATICS - AN INTRODUCTION. Author W.J. Ewens and G.R. Grant, Publisher New York: Springer-Verlag, 2001, pp. xix + 475, US$79.95/DM177.00. Contents:

1. Probability theory (i): One random variable

2. Probability theory (ii): Many random variables

3. Statistics (i): An introduction to statistical inference

4. Stochastic processes (i): Poisson process and Markov chains

5. The analysis of one DNA sequence

6. The analysis of multiple DNA or protein sequences

7. Stochastic processes (ii): Random walks

8. Statistics (ii): Classical estimation and hypothesis testing

9. BLAST

10. Stochastic processes (iii): Markov chains

11. Hidden Markov chains

12. Computationally intensive methods

13. Evolutionary models

14. Phylogenetic tree estimation

APPENDIX A: Basic Notions in Biology

APPENDIX B: Mathematical Formulae

APPENDIX C: Computational Aspects of the Binomial and Generalized Geometric Distribution Function

APPENDIX D: Sum of Normalized Scores

Readership: Biostatisticians and statisticians who want to learn about bioinformatics

The book is self-contained and even includes a few pieces of calculus. The first four chapters are a solid introduction to basic probability theory, stochastic processes and statistics. The material is well seasoned with examples and problems related to genetics. Starting with Chapter 6, the authors focus their efforts on modeling DNA and protein sequences. Often they digress (Chapters 7 and 8, for instance) towards mathematical statistics with a strong emphasis on numerical techniques. The software package BLAST is the most cited item, but I have failed to satisfy my curiosity about the origin of the acronym.

The book is a very substantial and highly professional contribution to bioinformatics and applied statistics.

Reviewer: Institute GlaxoSmithKline Place Collegeville, U.S.A. Name V.V. Fedorov

Title SAMPLING AND MONITORING IN CROP PROTECTION. Author M.R. Binns, J.P. Nyrop and W. van der Werf. Publisher Wallingford, U.K., CABI Publishing, 2000, pp. xi + 284, US$90.00/£49.95. Contents:

1. Basic concepts of decision-making in pest management

2. Basic concepts of sampling for pest management

3. Classifying pest density

4. Distributions

5. Sequential sampling for classification

6. Enhancing and evaluating the usefulness of sampling plans

7. Binomial counts

8. Multiple sources of variation

9. Resampling to evaluate the properties of sampling plans

10. Sampling over time to classify or estimate a population growth curve

11. Monitoring pest populations through time

Readership: Graduates and final-year undergraduates in pest management

There are pests in your fields. You go out into those fields and sample the pests. You study your data. You decide to (a) do nothing; (b) introduce, or reintroduce, natural enemies; (c) apply a pesticide; (d) wait and take another sample soon. Or you (e) realize that, before you went out into the fields, you should have consulted this book. The correct decision is, of course (e), after which you can dispense with choice (e). To appreciate this volume, you will need some statistical knowledge, basic college mathematics, some knowledge of pests and crops, and the ability to work with computer software made available as electronic chapters on the Internet. The authors have succeeded in producing an excellent, workmanlike volume that takes account of recent literature on these topics.

Reviewer: Institute University of Wisconsin Place Madison, U.S.A. Name N.R. Draper

Title WAHRSCHEINLICHKEITSTHEORIE UND STATISTIK. Author A. Irle. Publisher Stuttgart: B.G. Teubner, 2001, pp. 378, DM62.00/ÖS453.00/SwFr. 54.50. Contents:

1. Zufallsexperimente

2. Wahrscheinlichkeitsräume

3. Umgang mit Wahrscheinlichkeiten

4. Bedingte Wahrscheinlichkeiten

5. Diskrete Wahrscheinlichkeitsmasse

6. Reelle Wahrscheinlichkeitsmasse

7. Zufallsvariablen

8. Erwartungswerte und Integrale

9. Momente und Ungleichungen

10. Stochastische Unabhängigkeit

11. Gesetze der grossen Zahlen

12. Der zentrale Grenzwertsatz

13. Die statistische Modellbildung

14. Statistisches Entscheiden

15. Zur Struktur statistischer Experimente

16. Optimale Schätzer

17. Das lineare Modell

18. Maximum-Likelihood-Schätzung

19. Optimale Tests

20. Spezielle Tests und Konfidenzbereiche

Readership: Undergraduate students in mathematics and engineering

The first twelve chapters of this book give a classical introduction to probability theory, starting from Kolmogorov's axioms and ending with the strong law of large numbers and central limit theorem. The next eight chapters provide an introduction to statistical inference with discussion of optimality of estimators and tests. An interesting feature of the book is that each chapter is split into two parts. In the first part, the main concepts, methods and examples are given. The second part gives a more in-depth study of certain aspects, usually involving more mathematical techniques. The book is well suited as a textbook for a first course at a good level. Unfortunately there are no exercises included in the text.

Reviewer: Institute Limburgs Universitair Centrum Place Diepenbeek, Belgium Name N.D.C. Veraverbeke

Title STATISTICAL INFERENCE, 2nd edition. Author G. Casella and R.L. Berger. Publisher Pacific Grove, California: Duxbury, 2001, pp. xxviii + 660. Contents:

1. Probability theory

2. Transformations and expectations

3. Common families of distributions

4. Multiple random variables

5. Properties of a random sample

6. Principles of data reduction

7. Point estimation

8. Hypothesis testing

9. Interval estimation

10. Asymptotic evaluations

11. Analysis of variance and regression

12. Regression models

Readership: Probabilists, statisticians, teachers, students

This is a most welcome second edition of an already popular book that emphasizes usefulness while keeping a distinct level of rigour in the development. Compared to the first edition there is more emphasis on computing aspects, and at the same time the more applicable techniques have been expanded. Many special features make this a fine book to have on one's desk. There are some three hundred examples sprinkled throughout. More than one hundred pages of exercises of varying degrees of difficulty are included. Each chapter ends with a section titled Miscellanea that guides the reader to relevant aspects not covered in the book but hinted at in the extensive bibliography. This is a refreshing book that can be strongly recommended to students as well as to teachers.

Reviewer: Institute Katholieke Universiteit Leuven Place Heverlee, Belgium Name J.L. Teugels

Title FUNDAMENTALS OF MODERN STATISTICAL METHODS: Substantially Improving Power and Accuracy. Author R.R. Wilcox. Publisher New York: Springer-Verlag, 2001, pp. xiii + 258, US$49.95. Contents:

1. Introduction

2. Getting started

3. The normal curve and outlier detection

4. Accuracy and inference

5. Hypothesis testing and small sample sizes

6. The bootstrap

7. A fundamental problem

8. Robust measures of location

9. Inferences about robust measures of location

10. Measures of association

11. Robust regression

12. Alternate strategies

Readership: Applied researchers interested in current day statistical methods

This book tries to bridge the gap between state-of-the-art statistical methods and the techniques in common use. The first part, covering Chapters 2 to 7, is non-mathematical. It provides a verbal and graphical explanation of why standard statistical methods can be highly misleading, while giving an intuitive understanding of the practical advantages of modern techniques. In the second part, the author describes a subset of modern techniques that are usually covered only in high-level publications; by this token, these techniques and their advantages remain hardly accessible to an applied, but non-statistically trained, researcher. Using data from actual studies, many examples illustrate the practical problems with conventional procedures and show how more modern methods can make a substantial difference in the conclusions reached in many areas of statistical research.

Reviewer: Institute Katholieke Universiteit Leuven Place Heverlee, Belgium Name J.L. Teugels

Title MODELLING AND QUANTITATIVE METHODS IN FISHERIES. Author M. Haddon. Boca Raton, Publisher Florida: Chapman and Hall/CRC, 2001, pp. xvi + 406, US$69.95/£29.99. Contents:

1. Fisheries, population dynamics, and modelling

2. Simple population models

3. Model parameter estimation

4. Computer intensive methods

5. Randomization tests

6. Statistical bootstrap methods

7. Monte Carlo modelling

8. Growth of individuals

9. Stock-recruitment relationships

10. Surplus production models

11. Age-structured models

Readership: Students of undergraduate courses in biology, marine ecology and statistics

This book has been produced from a series of short and intensive courses on modelling and quantitative methods that the author has given at fisheries laboratories and universities around Australia.

The main objective of the book is to provide a text that details the analytical methods currently being used in quantitative biology and fisheries science. I might disagree with this, as risk assessment receives only a cursory mention. In contrast, age-structured models are explained in simple terms, but the exposition suffers from a lack of recent references to the published literature.

A major aim of the author was to focus on the details of how to perform the analyses described, but this has been at the expense of an integrated development of the subject matter. The text does include Microsoft Excel workbooks relating to each example and problem discussed. Undoubtedly, this will assist the novice but may be an annoyance to the more experienced practitioner.

The book would be greatly enhanced by the addition of an author index; this would enable the inquiring student to pursue a course of independent study.

Reviewer: Institute CEFAS Lowestoft Laboratory Place Lowestoft, U.K. Name C.M. O'Brien

Title MAXIMUM PENALIZED LIKELIHOOD ESTIMATION. Volume I: Density Estimation. Author P.P.B. Eggermont and V.N. LaRiccia. Publisher New York: Springer-Verlag, 2001, pp. xvii + 510, US$84.95/DM196.00. Contents:

PART I: Parametric Estimation

PART II: Nonparametric Estimation

PART III: Convexity

Readership: Graduate students and researchers in statistics

This is the first volume in a two-part project on maximum penalized likelihood estimation. It deals with parametric and nonparametric density estimation and also with convex estimation. The theoretical chapters give a detailed account of asymptotic properties (consistency, rates of convergence, asymptotic normality, …), optimality properties, computational aspects and bandwidth choice. The mathematical level is quite high, but most of the required tools, like martingales, exponential inequalities, Fourier analysis, Banach spaces, etc., are explained in the text. An interesting feature of the book is that each part ends with an "in action" chapter in which the estimation procedures are put to work and small sample performance is discussed. The book can be used for classes and seminars, particularly because of the presence of numerous exercises and tasks.

Reviewer: Institute Limburgs Universitair Centrum Place Diepenbeek, Belgium Name N.D.C. Veraverbeke
