
Analysis of Covid-19 Mathematical and Software Models Or how NOT to set up a software project

Daniel Duffy

C++ author, trainer

I've been pontificating about good software project management for years, but this high-profile adventure exposes the project, warts and all. The chickens have come home to roost.
 
I've just read quickly through the article, but this line in the abstract caught my interest:
"(A word of advice: it is tempting to use the Euler method but don’t use it, not even for producing cute S-curves in your blogs)."

And the article doesn't seem to discuss why that is the case. So my question is: why not Euler? Euler-Maruyama seems to be employed often for the stochastic DE versions of these compartmental models.
 
That's two questions: one for ODEs, one for SDEs. I'm referring to ODEs (which doesn't let Euler-Maruyama off the hook). There are well-known issues, lots of them.
Euler is no good for stiff ODEs, for example.

Anyways, Euler is the least of their worries.
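
To make the ODE point concrete, here is a minimal sketch (my own illustration, not code from any of the models under discussion) of the standard stability argument on the stiff test equation y' = -λy: explicit Euler diverges as soon as the step size exceeds 2/λ, while implicit Euler decays for any step size.

```cpp
// Minimal sketch: explicit vs implicit Euler on the stiff test equation
//   y' = -lambda * y,  y(0) = 1,  exact solution y(t) = exp(-lambda * t).
// Explicit Euler is stable only if |1 - h*lambda| <= 1, i.e. h <= 2/lambda.
#include <cmath>
#include <cstdio>

int main()
{
    const double lambda = 50.0; // stiffness parameter (assumed for illustration)
    const double h = 0.05;      // step size, deliberately above 2/lambda = 0.04
    double yExplicit = 1.0;     // y_{n+1} = (1 - h*lambda) * y_n
    double yImplicit = 1.0;     // y_{n+1} = y_n / (1 + h*lambda)

    for (int n = 1; n <= 20; ++n)
    {
        yExplicit *= (1.0 - h * lambda); // amplification factor -1.5: oscillates and blows up
        yImplicit /= (1.0 + h * lambda); // amplification factor 1/3.5: decays monotonically
        std::printf("t=%.2f  explicit=%12.4e  implicit=%12.4e  exact=%12.4e\n",
                    n * h, yExplicit, yImplicit, std::exp(-lambda * n * h));
    }
    return 0;
}
```

With h = 0.05 the explicit iterates grow like (-1.5)^n while the true solution is essentially zero; halving the step restores stability, but only until the next, stiffer, equation comes along. That is the usual argument for implicit (or at least adaptive) methods.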
 
This report undertakes a critical examination of the by-now well-discussed open-source software, based on an epidemiological model, from the Imperial College London team led by Dr. Neil Ferguson.


The full report is here:

Blogs - Analysis of Covid-19 Mathematical and Software Models :: Datasim

The idealised software project lifecycle can be described by four activities or phases:

  • Requirements determination (discovering and documenting the right (mathematical) model).
  • Software architecture and design (“doing the things from phase 1 right”).
  • Implementing the design (by professional programmers), including testing.
  • Acceptance testing and delivery of software product.
In general, the output from a given phase is the input to the succeeding phase. For example, phase 1 produces the system requirements, mathematical models and data structures that will form the design blueprints of phase 2. This is the ideal trajectory, but real life is much messier. In the immortal words of Robbie Burns, “the best-laid schemes of mice and men go often askew, and leave us nothing but grief and pain”. Software projects are no exception, and we are all familiar with horror stories down the years.

What about the COVID-19 project? Phases 1 and 3 are confounded into one activity (“my code is my model”) and phase 2 is absent. It is high-risk, as discussed in more detail in my report, but I would like to summarise the top risk items (most of them are project show-stoppers), based on the incomplete evidence at my disposal:

R1: Personnel shortfalls.

R2: Inadequate knowledge and skills.

R3: Lack of effective project management skills.

R4: Misunderstanding of the requirements.

R5: Political games or conflicts.

R6: Lack of project champion.

R7: Insufficient resources.

R8: Project ambiguity.

R9: Project progress not monitored closely enough.

Many failed (or failing) software projects are susceptible to these risks, and even with well-run projects we must be eternally vigilant if we wish to avoid project meltdown. In this sense, for the COVID-19 software system, items R1,…,R9 are potential risks that have a positive probability of blossoming into full-blown show-stoppers.

As already mentioned, R1,…,R9 are the perceived risks, based on my analysis of the source code and internet-based chit-chat, and without being privy to vital information. It is possible that the above risk items can be resolved by better communication.

The advantage of risks is that there are so many to choose from (see [1] for a thorough overview of software risk dimensions). No doubt new risks will surface if and when the COVID-19 software evolves from its current embryonic state (it is essentially a proof-of-concept prototype, in my opinion).

To disrespectfully misquote the Bard of Stratford “some softwares are born great, some softwares achieve greatness, and some softwares have greatness thrust upon them.”

This project cannot be salvaged in its current form. I could be wrong. Maybe a miracle will happen. In the words of Niels Bohr: “Prediction is very difficult, especially if it's about the future.”



[1] T. Arnuphaptrairong, “Top Ten Lists of Software Project Risks: Evidence from the Literature Survey”, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011, Vol. I, IMECS 2011, Hong Kong.
 
Update November 2020

Covid models revisited, BBC2 "Lockdown 1.0 - Follow the Science"

Apart from hubris and lies from the politicians, the penitent scientists(*) don't come out unscathed. I listened twice to get all the nuances. Some of the original doubts and questions in this thread have been answered.


Highly educational. The maestro was out of synch with the orchestra. And the string section were playing a different tune altogether.

(*) no mention of whether next time will be better.
 
Annotations on BBC2 TV programme “Lockdown 1.0 – Follow the Science”

1. Nothing (data, precedent) to go on in January except work by the modellers. Too much reliance on the models. Modelling was the driver of the “Science”.
2. Not a single institute (bar one scientist from Bristol) with knowledge of human coronavirus.
3. No access to “fresh” data from China; scientists used Wikipedia.
4. NHS data was always a week too late to be useful for the mathematical models.
5. The quality of the UK data was far below the quality of the Ebola data from the Democratic Republic of Congo.
6. “Follow the Science” is a meaningless term.
7. And the care homes? (20,000 deaths from a population of 400,000). Answer? “We didn’t look”.
It was only in April that care homes were seen as a serious problem. Previously, it didn’t appear on the radar.
8. Learn from the Lodi (Lombardy, Italy) pandemic? No! “We are different and we have the NHS”. And this mindset persisted up to 11 March 2020. UK made exactly the same mistakes as the Italians.
9. Scientists wanted lockdown on 10 March 2020; the actual lockdown came on 23 March.
10. Care homes not supported in the mathematical models. It looks like there is no support for age as an independent variable; if so, then the SIR model is not suitable (see the minimal SIR sketch after this list).
11. The SIR model does not support time delays.
12. On 16 March 2020 the modellers predicted 250,000 deaths. Panic stations at Number 10?
13. Modelling estimates had very wide upper and lower bounds of uncertainty.
14. Gabriel Scally “this was eminently curable”.
15. No screening of flights from Wuhan.
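
For readers who have not met the model criticised in points 10 and 11, here is a minimal SIR sketch (my own illustration with assumed parameters, not the Imperial code), integrated with classical RK4 rather than Euler, per the advice earlier in the thread. Its limitations are visible at a glance: one homogeneous population, hence no age structure, no care-home compartment and no time delays.

```cpp
// Minimal SIR sketch:  S' = -beta*S*I/N,  I' = beta*S*I/N - gamma*I,  R' = gamma*I.
// All parameters are assumed for illustration; integration is classical RK4.
#include <array>
#include <cstdio>

using State = std::array<double, 3>; // (S, I, R)

// Right-hand side of the SIR ODE system
State sirRhs(const State& y, double beta, double gamma, double N)
{
    const double infection = beta * y[0] * y[1] / N; // new infections per day
    const double recovery = gamma * y[1];            // recoveries per day
    return { -infection, infection - recovery, recovery };
}

int main()
{
    const double N = 66.0e6;      // population size (illustrative, roughly the UK)
    const double beta = 0.30;     // transmission rate per day (assumed)
    const double gamma = 1.0 / 7; // recovery rate: mean infectious period of 7 days
    const double h = 0.5;         // time step in days
    State y = { N - 100.0, 100.0, 0.0 }; // seed with 100 infections

    for (int n = 0; n * h <= 365.0; ++n)
    {
        if (n % 28 == 0) // report every 14 days
            std::printf("day %5.1f  S=%.3e  I=%.3e  R=%.3e\n", n * h, y[0], y[1], y[2]);

        // One classical fourth-order Runge-Kutta step
        State k1 = sirRhs(y, beta, gamma, N), s = y;
        for (int i = 0; i < 3; ++i) s[i] = y[i] + 0.5 * h * k1[i];
        State k2 = sirRhs(s, beta, gamma, N);
        for (int i = 0; i < 3; ++i) s[i] = y[i] + 0.5 * h * k2[i];
        State k3 = sirRhs(s, beta, gamma, N);
        for (int i = 0; i < 3; ++i) s[i] = y[i] + h * k3[i];
        State k4 = sirRhs(s, beta, gamma, N);
        for (int i = 0; i < 3; ++i)
            y[i] += h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
    }
    return 0;
}
```

Adding age as an independent variable means replacing the three scalars with age-stratified vectors and a contact matrix, and a time delay (e.g. an incubation period) pushes the system into delay differential equations; neither fits the plain SIR form above.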

nolite interficere nuntium (“do not kill the messenger”)
 